
machine learning in energy – part one

This is the first of a two-part series on the intersection of machine learning and energy.

machine learning in energy – part one
Introduction, why it’s so exciting, challenges.

machine learning in energy – part two
Time series forecasting, energy disaggregation, reinforcement learning, Google data centre optimization.


Technological innovation, environmental politics and international relations all influence the development of our global energy system.  But one less visible trend may prove to be among the most important.  Machine learning is blowing past previous barriers for a wide range of problems – computer vision, language processing and decision making have all been revolutionized by it.

I see machine learning as fundamentally new.  Humans developed using only the intelligence of their own brains until we learned to communicate.  Since then, technologies such as the printing press and the internet have allowed us to access the intelligence of the entire human species.  But machine learning is something different.  We can now access the intelligence of another species – machines.

Part One of this series will introduce what machine learning is, why it’s so exciting and some of the challenges of modern machine learning. Part Two goes into detail on applications of machine learning in the energy industry such as forecasting or energy disaggregation.

What is machine learning

Machine learning gives computers the ability to learn without being explicitly programmed. Computers use this ability to learn patterns in large, high-dimensional datasets. Seeing these patterns allows computers to achieve superhuman results – literally better than what a human expert can achieve.  This ability has made machine learning the state of the art for a wide range of problems.
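
As a toy illustration (my example, not from the original post) – the model below is never given the rule y ≈ 2x, it learns the pattern from examples:

# a toy scikit-learn example - the rule is learned, not programmed
from sklearn.linear_model import LinearRegression

X = [[1], [2], [3], [4]]   # input feature
y = [2.1, 3.9, 6.2, 7.8]   # measured outputs, roughly 2 * x

model = LinearRegression()
model.fit(X, y)              # learn the pattern from data
print(model.predict([[5]]))  # ~10, with no explicit 'multiply by 2' rule coded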

To demonstrate what is different about machine learning, we can compare two landmark achievements in computing & artificial intelligence.

In 1997 IBM’s Deep Blue defeated World Chess Champion Garry Kasparov. Deep Blue ‘derived its playing strength mainly from brute force computing power’. But all of Deep Blue’s intelligence originated in the brains of a team of programmers and chess Grandmasters.

In 2016 Alphabet’s AlphaGo defeated Go legend Lee Sedol 4-1. AlphaGo also made use of a massive amount of computing power. But the key difference is that AlphaGo’s playing strength was not hand-crafted by its programmers – reinforcement learning gave AlphaGo the ability to learn from its own experience of the game.

Both of these achievements are important landmarks in computing and artificial intelligence. Yet they are also fundamentally different, because machine learning allowed AlphaGo to learn on its own.

Why now

Three broad trends have led to machine learning being the powerful force it is today.

One – Data

It’s hard to overestimate the importance of data to modern machine learning. Larger data sets tend to make machine learning models more powerful. A weaker algorithm with more data can outperform a stronger algorithm with less data.

The internet has brought about a massive increase in the growth rate of data. This volume of data is enabling machine learning models to achieve superhuman performance.

For many large technology companies, such as Alphabet or Facebook, data has become a major source of business value. A lot of this value comes from the insights that machines can learn from such large data sets.

Two – Hardware

There are two distinct trends in hardware that have been fundamental to moving modern machine learning forward.

The first is the use of graphics processing units (GPUs) and the second is the increased availability of computing power.

In the early 2000s computer scientists pioneered the use of graphics cards, originally designed for gamers, to train machine learning models. They discovered massive reductions in training times – from months to weeks or even days.

This speed up is important. Most of our understanding of machine learning is empirical. This knowledge is built up a lot faster by reducing the iteration time for training machine learning models.

The second trend is the availability of computing power. Platforms such as Amazon Web Services or Google Cloud allow on-demand access to a large amount of GPU-enabled computing power.

Access to computing power on demand allows more companies to build machine learning products. It enables companies to shift a capital expense (building data centres) into an operating expense, with all the balance sheet benefits that brings.

Three – Algorithms & tools

I debated whether to include this third trend. It’s really the first two trends (data & hardware) that have unlocked the latent power of machine learning algorithms, many of which are decades old. Yet I still think it’s worth touching on algorithms and tools.

Neural networks form the basis of many state of the art machine learning applications. Networks with multiple layers of non-linear processing units (known as deep learning) form the backbone of the most impressive applications of machine learning today. These artificial neural networks are inspired by the biological neural networks inside our brains.

Convolutional neural networks have revolutionised computer vision through a design based on the structure of our own visual cortex. Recurrent neural networks (specifically the LSTM implementation) have transformed sequence & natural language processing by allowing the network to hold state and ‘remember’.

Another key trend in machine learning algorithms is the availability of open source tools. Companies such as Alphabet and Facebook open source many of their machine learning tools (TensorFlow is a well known example).

It’s important to note that while these technology companies share their tools, they don’t share their data. This is because data is the crucial element in producing value from machine learning. World-class tools and computing power are not enough to deliver value from machine learning – you need data to make the magic happen.

Challenges

Any powerful technology has downsides and drawbacks.

By this point in the article the importance of data to modern machine learning is clear. In fact, the supervised machine learning algorithms used today are so dependent on large datasets that this dependence is itself a weakness – many techniques simply don’t work on small datasets.

Human beings are able to learn from small amounts of training data – burning yourself once on the oven is enough to learn not to touch it again. Many machine learning algorithms are not able to learn in this way.

Another problem in machine learning is interpretability. A model such as a neural network doesn’t immediately lend itself to explanation. The high dimensionality of the input and parameter space makes it hard to pin cause to effect. This can be a problem when considering a machine learning model for a real world system. It’s a challenge the financial industry is struggling with at the moment.

Related to this is the challenge of a solid theoretical understanding. Many academics and computer scientists are uncomfortable with machine learning. We can empirically test if machine learning is working, but we don’t really know why it is working.

Worker displacement from the automation of jobs is a key challenge for humanity in the 21st century. Machine learning is not required for automation, but it will magnify the impact of automation. Political innovations (such as the universal basic income) are needed to fight the inequality that could emerge from the power of machine learning.

I believe it is possible for us to deploy automation and machine learning while increasing the quality of life for all of society. The move towards a machine intelligent world will be a positive one if we share the value created.

In the context of the energy industry, the major challenge is digitalization.  The energy system is notoriously poor at managing data, so full digitalization is still needed.  By full digitalization I mean a system where everything from sensor level data to prices is accessible to employees & machines, worldwide, in near real time.

It’s not about having a local site plant control system and historian setup. The 21st-century energy company should have all data available in the cloud in real time. This will allow machine learning models deployed to the cloud to help improve the performance of our energy system. It’s easier to deploy a virtual machine in the cloud than to install & maintain a dedicated system on site.

Data is one of the most strategic assets a company can own. It’s valuable not only because of the insights it can generate today, but also the value that will be created in the future. Data is an investment that will pay off.

Part Two of this series goes into detail on specific applications of machine learning in the energy industry – forecasting, energy disaggregation and reinforcement learning.  We also take a look at one of the most famous applications of machine learning in an energy system – Google’s work in their own data centers.

Thanks for reading!


CHP Feasibility & Optimization Model v0.3

See the introductory post for this model here.  


This is v0.3 of the open source CHP feasibility and optimization model I am developing.  The model is set up with some dummy data.  This model is in beta – it is a work in progress!

If you want to get it working for your project all you need to do is change:

  • heat & power demands (Model : Column E-G)
  • prices (Model : Column BD-BF)
  • CHP engine (Input : Engine Library).

You can also optimize the operation of the CHP using a parametric optimization VBA script (Model : Column BQ).

You can download the latest version of the CHP scoping model here.

If you would like to get involved in working on this project please get in touch.

Thanks for reading!

 

Oil Reserves Growth – Energy Basics

Energy Basics is a series covering fundamental energy concepts.


As we consume non-renewable resources, the amount of that resource depletes. Makes sense right?

Yet when it comes to oil reserves we find that oil reserves actually grow over time! This is known as ‘oil reserves growth’. Why does this phenomenon occur?

First, let’s start by defining some relevant terms.

Oil reserves are the amount of oil that can be technically recovered at the current price of oil.

Oil resources are all oil that can be technically recovered at any price.

Oil in place is all the oil in a reservoir (both technically recoverable & unrecoverable oil).


Figure 1 – Proved oil reserves and Brent crude oil price (BP Statistical Review 2016)

So why do reserves grow over time? There are three reasons.

One – Geological estimates

Initial estimates of the oil resource are often low. It’s very difficult to estimate the amount of oil in a reservoir as you can’t directly measure it. Often a lot of computing power is thrown at trying to figure out how much oil is underground.

It’s also good engineering practice to stay on the low side when estimating for any project. I expect geologists intentionally do the same for geological estimates.

Two – Oil prices

Oil reserves are a direct function of the current oil price. Rising oil prices mean that more of the oil resource can be classed as an oil reserve.

Historically we have seen oil prices increase – leading to growth in oil reserves (even with the oil resource being depleted at the same time).
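
A toy sketch of this mechanism in Python (the recovery costs are made-up numbers): reserves are the slice of the resource that is economic at the current price.

# five fields with assumed recovery costs in $/bbl
resource = [30, 45, 60, 80, 110]

def reserves(price):
    # reserves = the part of the resource economic at the current price
    return [cost for cost in resource if cost <= price]

print(len(reserves(50)))    # 2 fields are economic at $50/bbl
print(len(reserves(100)))   # 4 fields at $100/bbl - reserves have 'grown'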

But increasing prices can also have secondary effects. A higher price might incentivise an oil company to invest more into an existing field – leading to an increase in oil recovery.

The reserves growth of existing fields can actually be responsible for the majority of additions to reserves.  Between 1977 and 1995 approximately 89% of the additions to US proved reserves of crude oil were due to oil reserves growth rather than the discovery of new fields.

Three – Technology

Improvements in technology have two effects. The first is to make more of the oil in place technically recoverable at any price (ie to increase the oil resource). Hydraulic fracturing (fracking) and horizontal drilling now allow access to oil that previously was technically unrecoverable.

The second is that as technology improves it also gets cheaper. This improvement in economics means that more of the oil resource can be classed as an oil reserve (even at constant or falling prices).


Thanks for reading!

The Complexity of a Zero Carbon Grid – Energy Insights

Energy Insights is a series highlighting interesting energy content from around the web.

Previous posts in this series include Automated Cars and How to save the energy system.


I’m excited to present this Energy Insights post. I’m highlighting a few interesting insights from ‘The Complexity of a Zero Carbon Grid’.

This one is special, as The Interchange podcast has only recently been publicly relaunched.

The show considers what may be necessary to get to levels of 80-100% renewables. Stephen Lacey and Shayle Kann host the show with Jesse Jenkins as the guest.

The concept of flexibility

Jenkins observes that the concept of flexibility of electrical capacity is appearing more and more in the literature. Flexibility means how quickly an asset can respond to change.

A combined cycle gas turbine plant is usually more flexible than a coal or nuclear generator. One reason for this is the ability to control plant electric output by modulating the supplementary burner gas consumption.


We will need flexibility on a second, minute, hourly or seasonal basis.

This concept of flexibility was also recently touched on by the excellent Energy Analyst blog. Patrick Avis notes that we need both flexibility (kW or kW/min) and capacity (kWh) for a high renewables scenario.

The post ‘Flexibility in Europe’s power sector’ could easily be enough material for a few Energy Insights posts. Well worth a read.

One investment cycle away

Jenkins observes that the investment decisions we make today will affect how we decarbonise in the future. Considering the lifetime of many electricity generation assets, we find that we are only a single investment cycle away from building plants that will be operating in 2050.

Most deep decarbonisation roadmaps include essentially zero carbon electricity by 2050. We need to ensure that when the next investment cycle begins we are not installing carbon intense generation as it would still be operating in 2050.

For both gas and coal plants, with typical lifetimes of 30 – 40 years, the implied cutoff date for operation to begin is between 2010 and 2020 (2050 minus the plant lifetime).

Increasing marginal challenge of renewables deployment

The inverse relationship between the level of deployment of renewables and the marginal value added is well known. Jenkins notes that this relationship also applies to the deployment of storage and demand side response.

As renewable deployment increases the challenges for both storage and demand side response also increase.

Seasonal storage technologies

1 – Power to gas

Electricity -> hydrogen -> synthetic methane.
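
For context, the usual chemistry behind this chain (my note, not from the show):

2H2O → 2H2 + O2 (electrolysis)
CO2 + 4H2 → CH4 + 2H2O (methanation – the Sabatier reaction)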

Figure 3 – Apros Power to Gas

Intermittency of the supply of excess renewable generation means that a power to gas asset wouldn’t be fully utilized.

The show didn’t cover the possibility of storing electricity to allow a constant supply to the power to gas asset.

2 – Underground thermal

Limited to demonstration scale.

The show didn’t cover the feasibility of generating electricity from the stored heat.

I would expect that the temperature of the stored heat is low.  Perhaps the temperature could be increased with renewable powered heat pumps.


Thanks for reading!

 

Energy Basics – Capacity Factor

All men & women are created equal. Unfortunately the same is not true for electricity generating capacity.
 
Capacity on its own is worthless – what counts is the electricity generated (kWh) from that capacity (kW). If the distinction between kW and kWh is not clear, this previous post will be useful.
 

Capacity factor is one way to quantify the value of capacity. It’s the actual electricity (kWh) generated as a percentage of the theoretical maximum (operation at maximum kW).

For example, to calculate the capacity factor on an annual basis:

capacity factor [%] = actual annual generation [kWh] / (capacity [kW] × 8760 [h])

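A quick sketch of the calculation in Python (the plant and its output are assumed numbers for illustration):

# annual capacity factor of a hypothetical 50 MW wind farm
capacity_mw = 50
annual_generation_mwh = 147_000     # assumed metered output for the year

maximum_possible_mwh = capacity_mw * 8760   # 8760 hours in a year
capacity_factor = annual_generation_mwh / maximum_possible_mwh
print(f"{capacity_factor:.1%}")     # ~33.6% - in line with wind in Table 1 below
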
There are many reasons why capacity will not generate as much as it could.

Three major reasons are maintenance, unavailability of fuel and economics.

 
Maintenance
 
Burning fossil fuels creates a challenging engineering environment. The core of a gas turbine is high pressure & temperature gas rapidly rotating blazing hot metal. Coal power stations generate electricity by using high pressure steam to force a steam turbine to spin incredibly fast.

These harsh conditions mean that fossil fuel plants need a lot of maintenance. Time spent in maintenance is time the capacity isn’t generating electricity.
 
Renewables plants need a lot less maintenance than a fossil fuel generator. No combustion means there is a lot less stress on equipment.
 
Availability of fuel
 
Yet while renewables come ahead in terms of maintenance, they fall behind due to a constraint that fossil fuel generation usually doesn’t suffer from – unavailability of fuel.
 
This is why renewables like wind & solar are classed as intermittent. Fuel is often not available meaning generation is often not possible.
 
Solar panels can’t generate at night. Wind turbines need wind speeds to be within a certain range – not too low, not too high – just right.
 
This means that wind & solar plants are often not able to generate at full capacity – or even to generate at all. This problem isn’t common for fossil fuel generation. Fossil fuels are almost always available through natural gas grids or on site coal storage.
 
Economics
 
The final reason for capacity to not generate is economics.
 
The relative price of energy and regulations change how fossil fuel capacity is dispatched. Today’s low natural gas price environment is the reason why coal capacity factors have been dropping.
 
Here renewables come out way ahead of fossil fuels. As the fuel is free renewables can generate electricity at much lower marginal cost than fossil fuels. Wind & solar almost always take priority over fossil fuel generation.
 
Typical capacity factors
 
The capacity factor wraps up and quantifies all of the factors discussed above.
 
Table 1 – Annual capacity factors (2014-2016 US average)

                         Coal      CCGT      Wind      Solar PV   Nuclear
Annual Capacity Factor   56.13%    53.40%    33.63%    26.30%     92.17%

 

Table 1 gives us quite a bit of insight into the relative value of different electricity generating technologies. The capacity factor for natural gas is roughly twice as high as solar PV.

 

We could conclude that 1 MW of natural gas capacity is worth around twice as much as 1 MW of solar PV.

How useful is the capacity factor?

Yet the capacity factor is not a perfect measure of how valuable capacity is. Taking the average of anything loses information – capacity factor is no different.
 
Two plants operating in quite different ways can have the same capacity factor. A plant that ran at 50% load for the entire year and a plant that generated at full capacity for half of the year will have identical capacity factors.
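
A quick sketch of these two toy plants in Python:

# two 1 MW plants with identical capacity factors but different profiles
half_load_all_year = [0.5] * 8760                   # MW output in each hour
full_load_half_year = [1.0] * 4380 + [0.0] * 4380   # MW output in each hour

for profile in (half_load_all_year, full_load_half_year):
    print(sum(profile) / (1.0 * 8760))              # both print 0.5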
 
The capacity factor loses information about the time of generation. The timing of generation & demand is a crucial element in almost every energy system.
 
Generation during a peak can be a lot more valuable to the world than generation at other times. Because of the nature of dispatchable generation it is more likely to be running during a peak.
 
This leads us to conclude that low capacity factor generation can be more valuable than higher capacity factor generation.  The limits of the metric are especially clear for solar in many countries, as a) the demand peak often occurs when the sun is down and b) all solar generation is coincident.
 
The solution to the intermittency problem of renewables is storage. Storage will allow intermittent generation to be used when it’s most valuable – not just whenever it happens to be windy or sunny.
 
Thanks for reading!

Energy Insights – How to save the energy system – André Bardow

Energy Insights is a series where we highlight interesting energy content from around the web.

The previous post in this series was about the 2017 BP Annual Energy Outlook.


These three Energy Insights come from a TED talk titled ‘How to save the energy system’ given by André Bardow.   André is a fantastic speaker and his talk is well worth a watch.

Below are three of the many interesting things André discusses. I really enjoyed writing this – I find all of this fascinating.

Why peak oil won’t save the planet

As humanity burns more oil the amount of oil left to recover should decrease. This is logical – right?

Figure 1 below shows the opposite is true. Estimated global oil reserves have actually increased over time.  The key is to understand the definition of oil reserves.

Figure 1 – Estimated global oil reserves (1980 – 2014)

Oil reserves are defined as the amount of oil that can be technically recovered at a cost that is financially feasible at the present price of oil.

Oil reserves are therefore a function of a number of variables that change over time:

  • Total physical oil in place – physical production of oil reduces the physical amount of oil in the ground.
  • Total estimated oil in place – initial estimates are low and increase over time.
  • Oil prices – historically oil prices have trended upwards (Figure 2). Oil reserves are defined as a direct function of the present oil price.
  • Technology – the oil & gas industry has benefited more than any other energy sector from improved technology.  Improved technology reduces the cost of producing oil.  This makes more oil production viable at a given price.
Figure 2 – Crude oil price (1950 – 2010)

Only the physical production of oil has had a negative effect on oil reserves.

The other three (total oil estimate, technology & oil price) have all caused oil reserve estimates to increase.

We are not going to run out of oil any time soon. The limit on oil production is not set by physical reserves but by climate change.  I find this worrying – it would be much better if humanity was forced to switch to renewables!

Wind & solar lack an inherent economy of scale

A lot of the advantages in systems come from economies of scale – energy systems are no different.  Larger plants are more energy efficient and have lower specific capital & maintenance costs.

Energy efficiency improves with size as the ratio of fixed energy losses to energy produced improves.   Figure 3 shows an example of this for spark ignition gas engines.
Figure 3 – Effect of gas engine size [kWe] on gross electric efficiency [% HHV]

This is also why part load efficiency is worse than full load efficiency.  Energy production falls but the fixed energy losses remain constant.
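
A toy numerical example of the fixed loss effect (all numbers are assumed):

# fixed losses hurt small (or part-loaded) engines more
fixed_loss = 50                         # kW of fixed energy losses
for output in (100, 1000):              # kW useful output - small vs large engine
    fuel = output / 0.4 + fixed_loss    # fuel input at an assumed 40% marginal efficiency
    print(f"{output} kW engine: {output / fuel:.1%} gross efficiency")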

Specific capital & operating costs also improve with size.  For example, a 10 MW and 100 MW plant may need the same land area at a cost of £10,000.  The specific capital cost of land for both projects is £1,000/MW versus £100/MW respectively.

Fossil fuel plants use their economy of scale to generate large amounts of electricity from a small number of prime movers.

Wind & solar plants are not able to do this. The problem is the prime movers in both wind & solar plants.

The maximum size of a wind or solar prime mover (a wind turbine or solar panel) is small compared with fossil fuel prime movers.  For example GE offer a 519 MWe gas turbine – the world’s largest wind turbine is the 8 MWe Vestas V164.

Figure 4 – The Vestas V164

A single gas turbine in CCGT mode is more than enough to generate 500 MWe.  A wind farm would need around 63 of the largest turbines to generate the same amount (500 / 8 MWe ≈ 63).

The reason for the difference is fundamental to the technologies – the energy density of fuel.  Fossil fuels offer fantastic energy densities – meaning we can do a lot with less fuel (and less equipment).  Transportation favours liquid fossil fuels for this reason.

Wind & solar radiation have low energy densities. To capture more energy we need lots more blade or panel surface area.  This physical constraint means that scaling the prime movers in wind & solar plants is difficult. The physical size increases very fast as we increase electricity generation.

This means that wind turbines & solar panels need to be very cheap at small scales. As wind & solar technologies improve there will be improvements in both the economy of scale & the maximum size of a single prime mover.

But to offer a similar economy of scale as fossil fuels is difficult due to low energy density fuel.  It’s not that wind & solar don’t benefit from any economy of scale – for example grid connection costs can be shared. It’s the fact that fossil fuels:

  • share most of these economies of scale.
  • use high energy density fuels, which gives a fundamental advantage in the form of large maximum prime mover sizes.

We need to decarbonise the supply of heat & power as rapidly as possible.  Renewables are going to be a big part of that.  The great thing is that even with this disadvantage wind & solar plants are being built around the world!

Average German capacity factors

André gives reference capacity factors for the German grid of:

  • Solar = 10 %.
  • Wind = 20 %.
  • Coal = 80 %.

This data is on an average basis.  The average capacity factor across the fleet is usually more relevant than the capacity factor of a state of the art plant.

It is always good to have some rough estimates in the back of your mind.  A large part of engineering is sanity checking the inputs and outputs of models against your own estimates, based on experience.

Thanks for reading!

CHP Scoping Model v0.2

See the introductory post for this model here.  

This is v0.2 of the CHP scoping model I am developing.  The model is set up with some dummy data.

If you want to get it working for your project all you need to do is change:

  • heat & power demands (Model : Column F-H)
  • prices (Model : Column BF-BH)
  • CHP engine (Input : Engine Library).

You can also optimize the operation of the CHP using a parametric optimization VBA script (Model : Column BW).

You can download the latest version of the CHP scoping model here.

Thanks for reading!

CHP Scoping Model v0.1

The most recent version of this model can be found here.

My motivation for producing this model is to give engineers something to dig their teeth into.

My first few months as an energy engineering graduate were spent pulling apart the standard CHP model used by my company.  A few years later I was training other technical teams how to use my own model.

I learnt a huge amount through deconstructing other people’s models and iterating through versions of my own.

So this model is mostly about education. I would love it to be used in a professional setting – but we may need a couple of releases to iron out the bugs!

In this post I present the beta version of a CHP feasibility model.  The model takes as inputs (all on a half hourly basis):

  • High grade heat demand (Model : Column F).
  • Low grade heat demand (Model : Column G).
  • Electricity demand (Model : Column H).
  • Gas, import & export electricity price (Model : Column BF-BH).

Features of the model:

  • CHP is modeled as a linear function of load. Load can be varied from 50-100 %.
  • Can model either gas turbines or gas engines. No ability to model supplementary firing.  Engine library needs work.
  • The CHP operating profile can be optimized using a parametric optimization written in VBA (a Python sketch of the idea follows this list).
    • Iteratively increases engine load from 50% – 100% (one HH period at a time),
    • Keeps the load that increased the annual saving the most (the optimum),
    • Moves to the next HH period,
    • The optimization can be started using the button at (Model : Column BV). I recommend watching it run to try to understand it.  The VBA routine is called parametric.
  • Availability is modeled using a randomly generated column of binary variables (Model : Column C).
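
For readers who prefer Python to VBA, here is a minimal sketch of the parametric idea. The saving() function below is an assumed toy model, not the spreadsheet's actual calculation.

def saving(load_pct, power_price, gas_price=20.0):
    # toy half-hourly saving [GBP] for a 1 MWe engine at load_pct [%]
    power_out = 0.5 * load_pct / 100     # MWh generated per half hour
    fuel_in = power_out / 0.35           # fuel burned at an assumed 35% efficiency
    return power_out * power_price - fuel_in * gas_price

def parametric(prices, loads=range(50, 101, 5)):
    # for each half hourly period, keep the engine load that saves the most
    return [max(loads, key=lambda load: saving(load, price)) for price in prices]

print(parametric(prices=[30, 45, 80, 120]))  # toy electricity prices [GBP/MWh]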
You can download the latest version of the CHP scoping model here.

Thanks for reading!

Monte Carlo Q-Learning to Operate a Battery

I have a vision for using machine learning for optimal control of energy systems.  If a neural network can play a video game, hopefully it can understand how to operate a power plant.

In my previous role at ENGIE I built Mixed Integer Linear Programming models to optimize CHP plants.  Linear Programming is effective in optimizing CHP plants but it has limitations.

I’ll detail these limitations in a future post – this post is about Reinforcement Learning (RL).  RL is a tool that can solve some of the limitations inherent in Linear Programming.

In this post I introduce the first stage of my own RL learning process. I’ve built a simple model to charge/discharge a battery using Monte Carlo Q-Learning. The script is available on GitHub.

I made use of two excellent blog posts to develop this; both give a good introduction to RL.

Features of the script
 

As I don’t have access to a battery system I’ve built a simple model within Python.  The battery model takes as inputs the state at time t and the action selected by the agent, and returns a reward and the new state.  The reward is the cost/value of electricity charged/discharged.

def battery(state, action):  # the technical model
    # the battery can choose to:
    #    discharge 10 MWh (action = 0)
    #    charge 10 MWh (action = 1)
    #    do nothing (action = 2)

    charge, SP = state  # current charge level [MWh] and settlement period
    prices = getprices()  # helper in the full script - returns the 48 half-hourly prices
    price = prices[SP - 1]  # the price in this settlement period

    if action == 0:  # discharging
        new_charge = max(0, charge - 10)  # can't discharge below empty
        charge_delta = charge - new_charge
        reward = charge_delta * price  # positive reward - revenue from export
    elif action == 1:  # charging
        new_charge = min(100, charge + 10)  # can't charge above capacity
        charge_delta = charge - new_charge
        reward = charge_delta * price  # negative reward - cost of import
    else:  # do nothing
        charge_delta = 0
        reward = 0

    new_charge = charge - charge_delta
    new_SP = SP + 1
    state = (new_charge, new_SP)
    return state, reward, charge_delta

The price of electricity varies throughout the day. The model is not fed this data explicitly – instead it learns through interaction with the environment.
 
One ‘episode’ is equal to one day (48 settlement periods).  The model runs through thousands of episodes and learns the value of taking a certain action in each state.
 
Learning occurs by apportioning the reward for the entire episode to every state/action pair that occurred during that episode. While this method works, more advanced methods do this in better ways.

def updateQtable(av_table, av_count, returns):
    # incremental update of our Q (action-value) table:
    # new estimate = old estimate + (1/n) * (return - old estimate)
    for key in returns:
        av_table[key] += (1 / av_count[key]) * (returns[key] - av_table[key])
    return av_table
The model uses an epsilon-greedy method for action selection.  Epsilon is decayed as the number of episodes increases.
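
Below is a minimal sketch of epsilon-greedy selection with decay. The names (av_table, state, episode) and the key structure are assumptions for illustration, not the exact script.

import random

def select_action(av_table, state, episode, decay=0.999):
    epsilon = max(0.05, decay ** episode)  # decay exploration as episodes increase
    if random.random() < epsilon:
        return random.choice([0, 1, 2])  # explore - pick a random action
    # exploit - pick the action with the highest estimated value in this state
    return max([0, 1, 2], key=lambda action: av_table.get((state, action), 0))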
Results
 
Figure 1 below shows the optimal dispatch for the battery model after training for 5,000 episodes.
Figure 1 – Electricity prices [£/MWh] and the optimal battery dispatch profile [%]
I’m happy the model is learning well. Charging occurs during periods of low electricity prices. It is also fully draining the battery at the end of the day – which is logical behavior to maximise the reward per episode.  
 

Figure 2 below shows the learning progress of the model.

Figure 2 – Model learning progress
Next steps
 
Monte Carlo Q-learning is a good first start for RL. It’s helped me to start to understand some of the key concepts.
 
Next steps will be developing more advanced Q-learning methods using neural networks.

Energy Insights – 2017 BP Energy Outlook

Energy Insights is a series where we pull out key points from energy articles around the web. This is not a full summary but a taste – if you like the ideas then please watch the presentation & read the report.  

Previous posts in this series include the IEA 2016 World Outlook and Automated Cars.

Often people jump to the conclusion that anything released by an oil major is self-serving.  Don’t be like this!  If you ignore a report like the Outlook it is only you that is missing out.

The search for truth requires humility.  You need to be honest about your own ignorance, open to learning from any source of information, and confident that you can judge the quality of that information.

Below I highlight BP’s view on passenger cars and the effect of climate policies on natural gas over the course of the Outlook (2015-2035).

Oil consumption for passenger cars

Figure 1 – Net change in car oil consumption

BP project a doubling of the global passenger car fleet due to the continued emergence of the middle class.

Luckily the increased oil consumption associated with double the number of cars is almost entirely offset by a 2.5 % annual improvement in fuel efficiency.

This fuel efficiency assumption seems quite small – but actually it is a strong break with the past.  The average for the last twenty years is only 1 %.

Even small improvements in fuel efficiency have a large effect on oil consumption due to the size of the combustion engine fleet.

The opposite is true with electric cars.  BP are projecting the number of electric cars increasing from 1.2 million to 100 million.  This is a compounded annual growth rate of around 25 %!

Unlike with fuel efficiency, this relative increase has very little effect.  Electric car deployment increasing by a factor of 100 leads to only a 6 % reduction versus 2015 oil consumption.
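
A back-of-envelope check of the two growth rates (the inputs are from the Outlook as described above; the arithmetic is mine):

fuel_per_car = 0.975 ** 20             # 2.5 %/yr efficiency gain over 20 years
ev_cagr = (100 / 1.2) ** (1 / 20) - 1  # 1.2m to 100m electric cars over 20 years

print(f"fuel use per car falls ~{1 - fuel_per_car:.0%}")  # ~40%
print(f"electric car fleet grows at ~{ev_cagr:.0%}/yr")   # ~25%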

Electric cars are a sexy topic that gets a lot of media attention – yet vehicle fuel efficiency may be more important if we care about climate change.

What we need to remember is that a large relative increase from a small base can be dwarfed by a small relative change to a large base.  It’s important to take everything back to the absolute value (in this case oil consumption) that we care about.

Risks to gas demand

Oil majors and clean energy professionals are both interested in the future of natural gas.  In the Outlook BP take a look at how climate policy could affect the growth of natural gas.

Strong climate policies pose a risk to all fossil fuels – natural gas included.  Strong climate policies lead to the reduction of all fossil fuels in favour of low carbon energy generation.

However the Outlook shows that actually both strong and weak climate policies pose risks to natural gas consumption.

Figure 2 – The effect of climate policy strength on natural gas consumption growth

Weak climate policies will favour fossil fuels but also benefit coal over natural gas.  BP expect the net effect of this would be a reduction in gas growth versus their base case.

This is quite a nice example of a Laffer curve.  The Laffer curve is traditionally used for demonstrating the relationship between tax revenue and the tax rate.  The curve shows there is an optimum somewhere in the middle.

Figure 3 – The Laffer Curve

BP are showing that natural gas consumption likely follows a Laffer curve with respect to climate policy.

I hope you found these two insights as interesting as I did.  I encourage you to check out either the presentation or the report for further interesting insights.