Scope 3 emissions are where many companies' decarbonisation efforts stall. That is no surprise, as the area is notoriously hard to understand, monitor and change. We spoke to Serge Schamschula, Head of Ecosystem at Transporeon, about the challenges faced, but mostly about what solutions can be put in place and how companies are embarking on this mammoth journey.
Serge, what are the challenges when it comes to Scope 3 emissions?
Scope 3 emissions reduction presents two issues. First, there is a data-sharing issue: parties in the supply chain are still reluctant to share data, and this is not limited to emission-related data. It applies particularly when real data could be shared, because that data then leaves the supplier's control.
This lack of a data-sharing culture can be described as a lack of trust. We see examples where a shipper and its logistics service providers maintain a long-term, trustful relationship and share data with more than 90% coverage, while other shippers must live with less than 10% coverage in the same set of data.
This issue cannot be solved by "technology" such as blockchain. What can help overcome some of the hurdles is a neutral body sitting between the parties and acting as a trustee.
Second, there is a data-quality issue: with GHG emissions, the quality of the data makes a decisive difference. Only calculations and insights based on high-quality data enable the actors in the supply chain to plan, manage and reduce emissions in their network.
Where can data come from and how do we collect and analyse it?
The data for emission calculation can come from different sources: the shipper, the LSP or the carrier. Each source has a different data quality, whether it is planning data or execution data. The data can also be default averages, modelled data, or so-called primary data. Primary data is always execution data captured at the carrier, since that is where the execution takes place.
In the simplest use case, FTL (full truckload), primary data is the total energy consumed during the transport, including empty miles, together with the energy type, which determines a specific carbon-equivalent intensity (also covering the energy production if we talk about well-to-wheel). If this data is available, no other information is needed to calculate the CO2e, giving a realistic picture of the situation.
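The FTL calculation from primary data can be sketched in a few lines. This is a minimal illustration, not Transporeon's implementation; the diesel well-to-wheel factor and the consumption figure are assumed example values.

```python
# Illustrative sketch: well-to-wheel CO2e for a single FTL transport
# calculated from primary (sensor-measured) fuel consumption.
# The emission factor below is an assumed example value, not an
# official figure from any standard.

DIESEL_WTW_KG_CO2E_PER_LITRE = 3.24  # assumed illustrative WTW factor


def ftl_co2e_kg(fuel_litres: float,
                wtw_factor: float = DIESEL_WTW_KG_CO2E_PER_LITRE) -> float:
    """Total CO2e (kg) for one trip; fuel_litres already includes
    the empty-mile legs, as measured at the vehicle."""
    return fuel_litres * wtw_factor


# Example: 420 litres consumed over the whole trip (loaded + empty legs)
print(round(ftl_co2e_kg(420.0), 1))  # prints 1360.8
```

Because the consumption is measured rather than estimated, no payload, routing or vehicle assumptions are needed; only the energy type and amount enter the calculation.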
If this information isn't provided, we fall back on modelled calculation. The value of modelling is widely debated. Delivering useful results requires high data accuracy: the actual transport mode, vehicle information, fuel/energy type, payload, routing and total distance per vehicle or mode, and so on.
Finally, most data today is just an industry default average, which is the industry standard. The result is that every FTL load from, for example, Brussels to Milan shows the same emissions, regardless of the transport mode (truck, rail or intermodal), how the vehicles are powered (a Euro V diesel, a bio-gas truck or a BEV), or whether it was routed via Switzerland (350 km shorter but more expensive), via Austria (longer but cheaper) or via Central Europe, where the driver spent his weekend.
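The flattening effect of default averages can be made concrete with a small sketch. All factors, distances and the payload below are hypothetical illustration values: a single default returns one figure for the lane, while differentiated factors expose the spread between routings and power trains.

```python
# Hypothetical illustration: a single default average hides all routing
# and vehicle differences, while differentiated factors reveal them.
# Every number here is assumed for the example, not an industry value.

PAYLOAD_T = 24.0
DEFAULT_FACTOR_G_PER_TKM = 90   # assumed single industry default
DEFAULT_LANE_KM = 1000          # assumed standard lane distance


def co2e_kg(distance_km: float, factor_g_per_tkm: float,
            payload_t: float = PAYLOAD_T) -> float:
    """CO2e in kg from tonne-kilometres and an intensity in g/tkm."""
    return payload_t * distance_km * factor_g_per_tkm / 1000.0


# The default average yields one number for the whole lane...
default = co2e_kg(DEFAULT_LANE_KM, DEFAULT_FACTOR_G_PER_TKM)

# ...while routing- and vehicle-specific factors differentiate.
scenarios = {
    ("via Switzerland", "Euro V diesel"): (900, 85),   # assumed
    ("via Austria", "bio-gas truck"):     (1050, 35),  # assumed
    ("via Austria", "BEV"):               (1050, 10),  # assumed
}
for (route, vehicle), (km, factor) in scenarios.items():
    print(f"{route}, {vehicle}: {co2e_kg(km, factor):.1f} kg "
          f"(default average: {default:.1f} kg)")
```

In this sketch the default average reports the same ~2,160 kg for every load on the lane, while the differentiated view ranges from a few hundred kilograms for a BEV to nearly two tonnes for diesel, which is exactly the information a default hides.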
The above observations are equally valid for every norm adhered to, whether it’s the GLEC framework, the new ISO 14083, or any older standard.
What happens at the data source? To calculate with industry defaults, one can use planning or execution data. For modelled calculation, "better" data may come from the shipper or, more likely, from the carrier. Primary data simply comes from the same place where the industry gets its (accurate) data for Scope 1 and Scope 2 emissions: sensors. For vehicles, this data comes from the telematics / rFMS. If data is merely claimed to be primary but shared from other sources, we would call it "wanna-be primary" and categorise it as modelling input. Case studies show that data from sources other than sensors tends to be counterproductive.
Primary data coming from carriers does not have to be shared directly with their customers, as the customer does not need to see which parameters led to a high or low carbon intensity. This is where a neutral intermediary body can help, by receiving primary data from the carrier and sharing only the insight with the customer.
We can therefore consider primary data to be the only truly high-quality data type. It can be trusted and disaggregated, it makes differences between suppliers visible, and it allows for informed decisions.
Once the data is analysed, how do we know it is factually accurate and usable?
The accuracy lies in the data quality; the usability lies in the granularity of the data, where corridor, carrier and transport modality can be properly identified.
How can we act upon the results? How do we measure success and set KPIs?
One can act by implementing a decarbonisation strategy at corporate level, setting a net-zero target or climate-neutrality goal in line with the Science Based Targets initiative (SBTi). From this, field-level reduction targets, including logistics, need to be derived. Since companies can have a hard time controlling their Scope 3 emissions, the reduction targets for transport usually start at a lower level.
From a shipper's or LSP's point of view, the main KPI is CO2e/tkm (carbon dioxide equivalent intensity per tonne-kilometre), measured well-to-wheel (WTW), or, even better, including emissions caused by production. The absolute CO2e needs to be monitored too. Ultimately, both need to reach climate neutrality.
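The intensity KPI mentioned here is a simple ratio and can be sketched as follows; the shipment figures in the example are invented for illustration.

```python
# Sketch of the main intensity KPI: grams of CO2e per tonne-kilometre,
# measured well-to-wheel. The example numbers are invented.

def co2e_intensity_g_per_tkm(total_co2e_kg: float,
                             payload_t: float,
                             distance_km: float) -> float:
    """WTW intensity in g CO2e per tonne-km. The absolute figure
    (total_co2e_kg) should be tracked alongside this ratio."""
    return total_co2e_kg * 1000.0 / (payload_t * distance_km)


# Example: 1360.8 kg CO2e for 22 t moved over 900 km
print(round(co2e_intensity_g_per_tkm(1360.8, 22.0, 900.0), 1))  # prints 68.7
```

Tracking the intensity alone is not enough: a falling CO2e/tkm can coexist with rising absolute emissions if volumes grow, which is why the interview stresses observing both.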
These targets need to be pursued in parallel during strategic network planning and sourcing, tactical planning (procurement) and execution. The different measures are well described in the ALICE roadmap to net zero, following the framework established by Alan McKinnon. However, it is essential to combine short-term abatement solutions (carrier selection, means of digitalisation, etc.) with mid-term solutions (such as modal shift and collaborative purchasing) and long-term investments (fleet electrification, green energy production, etc.). ✷