How to calculate the carbon emission of your cloud services

Note: we now have a cloud computing endpoint that does a lot of the heavy lifting described in this article for you. Try it out here.

You can use Climatiq to calculate the carbon emissions of your cloud computing usage. How much power cloud computing consumes depends on many factors, such as:

  • The amount of CPU, memory and storage your services use.
  • How well utilized the machines in the data centers are.
  • How efficiently the data centers are cooled and powered, usually expressed as Power Usage Effectiveness (PUE).
  • Where the data centers are located, which determines the electricity grid mix.
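
The factors above combine in a simple way. As a rough sketch of the underlying model (the coefficients below are illustrative assumptions, not Climatiq's actual values):

```python
def cloud_co2e_kg(energy_kwh: float, pue: float, grid_kg_per_kwh: float) -> float:
    """Energy drawn by the workload, scaled up by data-center overhead (PUE)
    and multiplied by the local grid's carbon intensity."""
    return energy_kwh * pue * grid_kg_per_kwh

# 1 kWh of compute in a PUE-1.135 data center on a ~350 gCO2e/kWh grid:
cloud_co2e_kg(1.0, 1.135, 0.35)
```

Climatiq's emission factors bundle the PUE and grid-mix terms for you per provider and region, so in practice you only supply the usage side.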

This means there's no one-size-fits-all emission factor for cloud computing. Climatiq has emission factors covering the different regions of the three major cloud providers: Amazon Web Services (AWS), Google Cloud Platform (GCP) and Azure.

This guide assumes that you already know the different metrics you'll need to measure, such as CPU, memory and storage usage. If you're not sure how to get these metrics, we refer you to this document, or to the documentation of your cloud provider.

Now, let's work through an example: calculating the different aspects of a Virtual Machine (VM) running on AWS.

To calculate the energy consumption of the vCPU usage for the us-west-1 region, we use the emission factor cpu-provider_aws-region_us_west_1. This emission factor takes in an amount of vCPU time and outputs the resulting greenhouse gas emissions. vCPU time can be understood as the average load for a particular period of time, multiplied by the number of vCPUs.

Depending on how your metrics are structured, they might be per vCPU or across all vCPUs, so you might need to scale them by the number of vCPUs your instance has.
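
This scaling can be sketched as a one-liner (the function name and the 0-1 load convention are our assumptions, not part of the Climatiq API):

```python
def cpu_number(vcpus: int, avg_load_per_vcpu: float) -> float:
    """The "number" parameter for the CPU emission factor:
    average load (0-1) multiplied across all vCPUs."""
    return vcpus * avg_load_per_vcpu

# A 2-vCPU instance at 50% average load over the measurement period:
cpu_number(2, 0.5)  # -> 1.0
```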

  • Running a server with 2 vCPUs available at 50 percent load for 24 hours
  • number = 2 vCPU * 0.5 load = 1
  • time = 24
  • time_unit = h
curl --request POST \
  --url https://beta3.api.climatiq.io/estimate \
  --header 'Authorization: Bearer YOUR_API_KEY' \
  --data '{
    "emission_factor": "cpu-provider_aws-region_us_west_1",
    "parameters": {
      "number": 1,
      "time": 24,
      "time_unit": "h"
    }
  }'
# Response
{
  "co2e": 0.0199272,
  "co2e_unit": "kg",
  "co2e_calculation_method": "ar4",
  "co2e_calculation_origin": "source",
  "emission_factor": {
    "id": "cpu-provider_aws-region_us_west_1",
    "source": "CCF",
    "year": "2021",
    "region": "US-GIR",
    "category": "Cloud Computing - CPU",
    "lca_activity": "usephase",
    "data_quality_flags": []
  },
  "constituent_gases": {
    "co2e_total": 0.0199272,
    "co2e_other": null,
    "co2": null,
    "ch4": null,
    "n2o": null
  }
}

  • Running a serverless service consuming 200ms of CPU time in total
  • number = 1
  • time = 200
  • time_unit = ms
curl --request POST \
  --url https://beta3.api.climatiq.io/estimate \
  --header 'Authorization: Bearer YOUR_API_KEY' \
  --data '{
    "emission_factor": "cpu-provider_aws-region_us_west_1",
    "parameters": {
      "number": 1,
      "time": 200,
      "time_unit": "ms"
    }
  }'
# Response
{
  "co2e": 4.612777777777778e-8,
  "co2e_unit": "kg",
  "co2e_calculation_method": "ar4",
  "co2e_calculation_origin": "source",
  "emission_factor": {
    "id": "cpu-provider_aws-region_us_west_1",
    "source": "CCF",
    "year": "2021",
    "region": "US-GIR",
    "category": "Cloud Computing - CPU",
    "lca_activity": "usephase",
    "data_quality_flags": []
  },
  "constituent_gases": {
    "co2e_total": 4.612777777777778e-8,
    "co2e_other": null,
    "co2": null,
    "ch4": null,
    "n2o": null
  }
}

  • Running a physical server with 16 cores at 30 percent load for 30 days
  • number = 16 cores * 0.3 load = 4.8
  • time = 30
  • time_unit = day
curl --request POST \
  --url https://beta3.api.climatiq.io/estimate \
  --header 'Authorization: Bearer YOUR_API_KEY' \
  --data '{
    "emission_factor": "cpu-provider_aws-region_us_west_1",
    "parameters": {
      "number": 4.8,
      "time": 30,
      "time_unit": "day"
    }
  }'
# Response
{
  "co2e": 2.8695168,
  "co2e_unit": "kg",
  "co2e_calculation_method": "ar4",
  "co2e_calculation_origin": "source",
  "emission_factor": {
    "id": "cpu-provider_aws-region_us_west_1",
    "source": "CCF",
    "year": "2021",
    "region": "US-GIR",
    "category": "Cloud Computing - CPU",
    "lca_activity": "usephase",
    "data_quality_flags": []
  },
  "constituent_gases": {
    "co2e_total": 2.8695168,
    "co2e_other": null,
    "co2": null,
    "ch4": null,
    "n2o": null
  }
}

  • Measuring Kubernetes cluster CPU usage over a period of time
  • time = the period of time, in hours
  • number = average CPU usage in cores (this is already summed across vCPUs)

Now that we've looked at CPU emissions, let's take a look at how to calculate the emissions caused by using memory (RAM). We'll use the following emission factor: memory-provider_aws-region_us_west_1

As memory draws power even when it's not actively used, a good rule of thumb is to use the amount of RAM you have allocated, not the amount you actually use. You'll also need to specify how long you're using the memory for.

If we assume that we have 8GB of RAM for our instance, and we're calculating the emissions over 1 hour, the API call looks like this:

curl --request POST \
  --url https://beta3.api.climatiq.io/estimate \
  --header 'Authorization: Bearer YOUR_API_KEY' \
  --data '{
    "emission_factor": "memory-provider_aws-region_us_west_1",
    "parameters": {
      "data": 8,
      "data_unit": "GB",
      "time": 1,
      "time_unit": "h"
    }
  }'
# Response
{
  "co2e": 0.00124888,
  "co2e_unit": "kg",
  "co2e_calculation_method": "ar4",
  "co2e_calculation_origin": "source",
  "emission_factor": {
    "id": "memory-provider_aws-region_us_west_1",
    "source": "CCF",
    "year": "2021",
    "region": "US-GIR",
    "category": "Cloud Computing - Memory",
    "lca_activity": "usephase",
    "data_quality_flags": []
  },
  "constituent_gases": {
    "co2e_total": 0.00124888,
    "co2e_other": null,
    "co2": null,
    "ch4": null,
    "n2o": null
  }
}

Note that you might end up slightly overestimating the CO2e footprint of your memory this way. If you want more precise numbers, see the provider-specific instructions for calculating memory here: [AWS], [GCP], [Azure]

Storing data takes power as well, whether it lives on block storage or on a dedicated storage service like S3.

Let's look at calculating storage emissions. There are three factors you need to take into account:

  1. It matters which hardware is backing your storage: the emission factor for an HDD (storage-provider_aws-region_us_west_1-type_hdd) is not the same as the emission factor for an SSD (storage-provider_aws-region_us_west_1-type_ssd). HDDs generally emit less greenhouse gas.
  2. You should calculate your storage emissions based on how much storage you've bought. Even if you're not using the storage, having it available consumes electricity.
  3. In many services, storage is replicated across multiple machines or even data centers. If your storage is replicated, take your storage amount and multiply it by the number of replicas. You'll have to check your cloud provider and the specific services you use to figure out how many times data is replicated. If you're using AWS, here's a list of AWS replication factors.
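
The replication adjustment is simple multiplication. A minimal sketch (the function name is ours; the 3x figure is the S3 replication factor discussed below):

```python
def replicated_storage_gb(purchased_gb: float, replication_factor: int) -> float:
    """Storage amount to report to the API: what you've bought,
    multiplied by how many copies the service keeps."""
    return purchased_gb * replication_factor

# 5 GB stored in a service that keeps 3 copies -> report 15 GB:
replicated_storage_gb(5, 3)  # -> 15
```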

Here are two examples of how storage can be calculated.

In this example, we mount a 20GB SSD instance storage on our EC2 instance, and we want to calculate the emissions over 24 hours.

We'll use the SSD emission factor. Looking at the AWS replication factors, EC2 instance storage is not replicated. This means we only have to account for the 20GB we've purchased.

curl --request POST \
  --url https://beta3.api.climatiq.io/estimate \
  --header 'Authorization: Bearer YOUR_API_KEY' \
  --data '{
    "emission_factor": "storage-provider_aws-region_us_west_1-type_ssd",
    "parameters": {
      "data": 20,
      "data_unit": "GB",
      "time": 24,
      "time_unit": "h"
    }
  }'
# Response
{
  "co2e": 0.000229381,
  "co2e_unit": "kg",
  "co2e_calculation_method": "ar4",
  "co2e_calculation_origin": "source",
  "emission_factor": {
    "id": "storage-provider_aws-region_us_west_1-type_ssd",
    "source": "CCF",
    "year": "2021",
    "region": "US-GIR",
    "category": "Cloud Computing - Storage",
    "lca_activity": "usephase",
    "data_quality_flags": []
  },
  "constituent_gases": {
    "co2e_total": 0.000229381,
    "co2e_other": null,
    "co2": null,
    "ch4": null,
    "n2o": null
  }
}

Let's take a slightly harder example, where we try to calculate emissions based on the use of S3 to store files.

This is harder for several reasons:

  • Amazon has never published which sort of physical storage they use in S3. As HDDs are generally cheaper than SSDs, it's a safe guess that S3 is primarily backed by HDDs - so we'll use the HDD emission factor.
  • In S3 you don't "pre-allocate" storage. You don't buy a 20GB hard disk to use as you please - you pay only for what you actually store, and you likely share hard drives with other customers. So in this case we'll only consider the amount of storage we're actually using.
  • S3 is replicated. Your data is stored in 3 different locations, which means we need to take the amount we've stored and multiply it by 3.

So if we're storing 5GB in S3, we'll make the call with the HDD emission factor and an amount of 15GB - because the data is stored in three places, we're actually using 15GB of storage.

Let's see how much CO2e is emitted by storing this data for a week.

curl --request POST \
  --url https://beta3.api.climatiq.io/estimate \
  --header 'Authorization: Bearer YOUR_API_KEY' \
  --data '{
    "emission_factor": "storage-provider_aws-region_us_west_1-type_hdd",
    "parameters": {
      "data": 15,
      "data_unit": "GB",
      "time": 7,
      "time_unit": "day"
    }
  }'
# Response
{
  "co2e": 0.0006523019999999999,
  "co2e_unit": "kg",
  "co2e_calculation_method": "ar4",
  "co2e_calculation_origin": "source",
  "emission_factor": {
    "id": "storage-provider_aws-region_us_west_1-type_hdd",
    "source": "CCF",
    "year": "2021",
    "region": "US-GIR",
    "category": "Cloud Computing - Storage",
    "lca_activity": "usephase",
    "data_quality_flags": []
  },
  "constituent_gases": {
    "co2e_total": 0.0006523019999999999,
    "co2e_other": null,
    "co2": null,
    "ch4": null,
    "n2o": null
  }
}

We've shown you how to calculate the carbon emissions of some of your cloud computing usage.
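
In practice you'll often want the total across CPU, memory and storage. The curl calls above can be wrapped in a small helper and summed. A sketch in Python (the helper and function names are ours; only the /estimate endpoint, the parameters and the emission factor IDs come from the examples above):

```python
import json
import urllib.request

API_URL = "https://beta3.api.climatiq.io/estimate"

def build_request(emission_factor: str, parameters: dict) -> bytes:
    """JSON body for the /estimate endpoint, as in the curl examples."""
    return json.dumps({"emission_factor": emission_factor,
                       "parameters": parameters}).encode()

def estimate(api_key: str, emission_factor: str, parameters: dict) -> float:
    """Call /estimate and return the co2e figure in kg."""
    req = urllib.request.Request(
        API_URL,
        data=build_request(emission_factor, parameters),
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["co2e"]

def vm_footprint(api_key: str) -> float:
    """Use-phase footprint of the example VM over 24 hours:
    2 vCPUs at 50% load, 8 GB RAM, 20 GB of unreplicated SSD storage."""
    return (
        estimate(api_key, "cpu-provider_aws-region_us_west_1",
                 {"number": 1, "time": 24, "time_unit": "h"})
        + estimate(api_key, "memory-provider_aws-region_us_west_1",
                   {"data": 8, "data_unit": "GB", "time": 24, "time_unit": "h"})
        + estimate(api_key, "storage-provider_aws-region_us_west_1-type_ssd",
                   {"data": 20, "data_unit": "GB", "time": 24, "time_unit": "h"})
    )
```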

While Climatiq's emission factors cover many parts of cloud computing reasonably comprehensively, they don't yet cover things like network traffic to the end user, or network traffic between your servers.

If you're looking for support for more accurate and complex use cases, please get in touch.