Oil Reserves Growth – Energy Basics

Energy Basics is a series covering fundamental energy concepts.

As we consume non-renewable resources, the amount of that resource remaining depletes. Makes sense, right?

Yet oil reserves actually grow over time! This is known as ‘oil reserves growth’. Why does this phenomenon occur?

First, let’s start by defining some relevant terms.

Oil reserves are the amount of oil that can be economically recovered at the current oil price.

Oil resources are all oil that can be technically recovered at any price.

Oil in place is all the oil in a reservoir (both technically recoverable & unrecoverable oil).


Figure 1 – Proved oil reserves and Brent crude oil price (BP Statistical Review 2016)

So why do reserves grow over time? There are three reasons.

One – Geological estimates

Initial estimates of the oil resource are often low. It’s very difficult to estimate the amount of oil in a reservoir as you can’t directly measure it. Often a lot of computing power is thrown at trying to figure out how much oil is underground.

It’s also good engineering practice to stay on the low side when estimating for any project. I expect geologists intentionally do the same for geological estimates.

Two – Oil prices

Oil reserves are a direct function of the current oil price. Increasing oil prices mean that more of the oil resource can be classed as an oil reserve.

Historically we have seen oil prices increase – leading to growth in oil reserves (even with the oil resource being depleted at the same time).

But increasing prices can also have secondary effects. A higher price might incentivise an oil company to invest more into an existing field – leading to an increase in oil recovery.

The reserves growth of existing fields can actually be responsible for the majority of additions to reserves. Between 1977 and 1995, approximately 89% of the additions to US proved reserves of crude oil were due to oil reserves growth rather than the discovery of new fields.

Three – Technology

Improvements in technology have two effects. The first is to make more of the oil in place technically recoverable at any price (ie to increase the oil resource). Hydraulic fracturing (fracking) and horizontal drilling now allow access to oil that previously was technically unrecoverable.

The second is that as technology improves it also gets cheaper. This improvement in economics means that more of the oil resource can be classed as an oil reserve (even at constant or falling prices).

Thanks for reading!

The Complexity of a Zero Carbon Grid – Energy Insights

Energy Insights is a series highlighting interesting energy content from around the web.

Previous posts in this series include Automated Cars and How to save the energy system.

I’m excited to present this Energy Insights post. I’m highlighting a few interesting insights from ‘The Complexity of a Zero Carbon Grid’ episode.

This is very special as The Interchange podcast has only recently been publicly relaunched.

The show considers what may be necessary to get to levels of 80-100% renewables. Stephen Lacey and Shayle Kann host the show with Jesse Jenkins as the guest.

The concept of flexibility

Jenkins observes that the concept of the flexibility of electrical capacity is appearing in the literature. Flexibility means how quickly an asset is able to respond to change.

A combined cycle gas turbine plant is usually more flexible than a coal or nuclear generator. One reason for this is the ability to control plant electric output by modulating the supplementary burner gas consumption.

We will need flexibility on second, minute, hourly and seasonal timescales.

This concept of flexibility was also recently touched on by the excellent Energy Analyst blog. Patrick Avis notes that we need both flexibility (kW or kW/min) and capacity (kWh) for a high renewables scenario.

The post ‘Flexibility in Europe’s power sector’ could easily be enough material for a few Energy Insights posts. Well worth a read.

One investment cycle away

Jenkins observes that the investment decisions we make today will affect how we decarbonise in the future. Considering the lifetime of many electricity generation assets, we find that we are only a single investment cycle away from building plants that will be operating in 2050.

Most deep decarbonisation roadmaps include essentially zero carbon electricity by 2050. We need to ensure that when the next investment cycle begins we are not installing carbon-intensive generation that would still be operating in 2050.

For both gas and coal, the implied cutoff date for new plants to begin operation is between 2010 and 2020.

Increasing marginal challenge of renewables deployment

The inverse relationship between the level of deployment of renewables and the marginal value added is well known. Jenkins notes that this relationship also applies to the deployment of storage and demand side response.

As renewable deployment increases the challenges for both storage and demand side response also increase.

Seasonal storage technologies

1 – Power to gas

Electricity -> hydrogen -> synthetic methane.

Figure 3 – Apros Power to Gas

Intermittency of the supply of excess renewable generation means that a power to gas asset wouldn’t be fully utilized.

The show didn’t cover the possibility of storing electricity to allow a constant supply of electricity to the power to gas asset.

2 – Underground thermal

Limited to demonstration scale.

The show didn’t cover the feasibility of generating electricity from the stored heat.

I would expect that the temperature of the stored heat is low.  Perhaps the temperature could be increased with renewable powered heat pumps.

Thanks for reading!


energy_py – reinforcement learning for energy systems

If you just want to skip to the code, the energy_py library is here.

energy_py is reinforcement learning for energy systems.  

Using reinforcement learning agents to control virtual energy environments is the first step towards using reinforcement learning to optimize real-world energy systems. This is a professional mission of mine – to use reinforcement learning to control real world energy systems.

energy_py supports this goal by providing a collection of reinforcement learning agents, energy environments and tools to run experiments.

What is reinforcement learning


Reinforcement learning is the branch of machine learning where an agent learns to interact with an environment.  Reinforcement learning can give us generalizable tools to operate our energy systems at superhuman levels of performance.

It’s quite different from supervised learning. In supervised learning we start out with a big data set of features and our target. We train a model to replicate this target from patterns in the data.

In reinforcement learning we start out with no data. The agent generates data (sequences of experience) by interacting with the environment. The agent uses its experience to learn how to interact with the environment. In reinforcement learning we not only learn patterns from data, we also generate our own data.
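That interaction loop can be sketched in a few lines of Python (a toy environment and a random policy, purely illustrative):

```python
import random

class ToyEnv:
    """A toy environment: reward is 1 when the action hits a hidden target."""
    def __init__(self, target=1):
        self.target = target

    def step(self, action):
        return 1 if action == self.target else 0

def run_episode(env, policy, steps=10):
    """The agent generates its own data by acting and observing rewards."""
    experience = []
    for _ in range(steps):
        action = policy()
        reward = env.step(action)
        experience.append((action, reward))
    return experience

env = ToyEnv(target=1)
experience = run_episode(env, policy=lambda: random.choice([0, 1]))
```

The sequences of (action, reward) tuples are the experience the agent learns from – no labelled dataset is needed up front.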

This makes reinforcement learning more democratic than supervised learning. The reliance on massive amounts of labelled training data gives companies with unique datasets an advantage. In reinforcement learning all that is needed is an environment (real or virtual) and an agent.

If you are interested in reading more about reinforcement learning, the course notes from a one-day introductory course I teach are hosted here.

Why do we need reinforcement learning in energy systems

Optimal operation of energy assets is already very challenging. Our current energy transition makes this difficult problem even harder.

The rise of intermittent generation is introducing uncertainty on both the generation and demand sides. The rise of distributed generation is increasing the number of actions available to operators.

For a wide range of problems machine learning results are both state of the art and better than human experts. We can get this level of performance using reinforcement learning in our energy systems.

Today many operators use rules or abstract models to dispatch assets. A set of rules is not able to guarantee optimal operation in many energy systems.

Optimal operating strategies can be developed from abstract models. Yet abstract models (such as linear programming) are limited to approximations of the actual plant, and require a significant amount of bespoke effort from an engineer to set up and validate. Reinforcement learners are able to learn directly from their experience of the actual plant.

With reinforcement learning we can use the ability of the same agent to generalize to a number of different environments. This means we can use a single agent to both learn how to control a battery and to dispatch flexible demand. It’s a much more scalable solution than developing site by site heuristics or building an abstract model for each site.


There are challenges to be overcome. The first and most important is safety. Safety is the number one concern in any engineering discipline.

I believe that reinforcement learning should first be applied at as high a level of the control system as possible. This limits the number of actions and lets existing lower level safety & control systems remain in place. The agent is limited to only making the high level decisions operators make today.

There is also the possibility to design the reward function to incentivize safety. A well-designed reinforcement learner could actually reduce hazards to operators. Operators also benefit from freeing up more time for maintenance.

A final challenge worth addressing is the impact such a learner could have on employment. Machine learning is not a replacement for human operators. A reinforcement learner would not need a reduction in employees to be a good investment.

The value of using a reinforcement learner is to let operations teams do their jobs better. It will free up time for their remaining responsibilities, such as maintaining the plant. The value created is a better-maintained plant and a happier workforce – in a plant that is operating with superhuman levels of economic and environmental performance.

Any machine requires downtime – a reinforcement learner is no different. There will still be time periods where the plant will operate in manual or semi-automatic modes with human guidance.

energy_py is one step on a long journey of getting reinforcement learners helping us in the energy industry. The fight against climate change is the greatest fight humanity faces. Reinforcement learning will be a key ally in fighting it. You can check out the repository on GitHub here.


The best place to take a look at the library is the example of using Q-Learning to control a battery. The example is well documented in this Jupyter Notebook and this blog post.

My reinforcement learning journey

I’m a chemical engineer by training (B.Eng, MSc) and an energy engineer by profession. I’m really excited about the potential of machine learning in the energy industry – in fact that’s what this blog is about!

My understanding of reinforcement learning has come from a variety of resources. I’d like to give credit to all of the wonderful resources I’ve used to understand reinforcement learning.

Sutton & Barto – Reinforcement Learning: An Introduction – the bible of reinforcement learning and a classic machine learning text.

Playing Blackjack with Monte Carlo Methods – I built my first reinforcement learning model to operate a battery using this post as a guide. This post is part two of an excellent three part series. Many thanks to Brandon of Δ ℚuantitative √ourney.

RL Course by David Silver – over 15 hours of lectures from Google DeepMind’s David Silver. An amazing resource from a brilliant mind and brilliant teacher.

Deep Q-Learning with Keras and gym – a great blog post that showcases code for a reinforcement learning agent to control an OpenAI Gym environment. Useful both for the gym integration and for using Keras to build a non-linear value function approximation. Many thanks to Keon Kim – check out his blog here.

Artificial Intelligence and the Future – Demis Hassabis is the co-founder and CEO of Google DeepMind.  In this talk he gives some great insight into the AlphaGo project.

Mnih et al. (2013) Playing Atari with Deep Reinforcement Learning – to give you an idea of the importance of this paper, Google purchased DeepMind after it was published. DeepMind was a company with no revenue, no customers and no product – valued by Google at $500M! This is a landmark paper in reinforcement learning.

Mnih et al. (2015) Human-level control through deep reinforcement learning – an update to the 2013 paper, published in Nature.

I would also like to thank Data Science Retreat.  I’m just finishing up the three month immersive program – energy_py is my project for the course.  Data Science Retreat has been a fantastic experience and I would highly recommend it.  The course is a great way to invest in yourself, develop professionally and meet amazing people.

Energy Basics – Capacity Factor

All men & women are created equal. Unfortunately the same is not true for electricity generating capacity.

Capacity on its own is worthless – what counts is the electricity generated (kWh) from that capacity (kW). If the distinction between kW and kWh is not clear, this previous post will be useful.

Capacity factor is one way to quantify the value of capacity. It’s the actual electricity (kWh) generated as a percentage of the theoretical maximum (operation at maximum kW).

For example, to calculate the capacity factor on an annual basis: capacity factor = annual electricity generated (kWh) / (capacity (kW) × 8,760 hours).

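A quick sketch of this calculation in Python (the plant size and generation figures are made up):

```python
def capacity_factor(actual_mwh, capacity_mw, hours=8760):
    """Annual capacity factor = actual generation / maximum possible generation."""
    return actual_mwh / (capacity_mw * hours)

# a hypothetical 50 MW plant that generated 219,000 MWh over the year
cf = capacity_factor(actual_mwh=219_000, capacity_mw=50)  # 0.5, i.e. 50%
```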
There are many reasons why capacity will not generate as much as it could.

Three major reasons are maintenance, unavailability of fuel and economics.

Maintenance

Burning fossil fuels creates a challenging engineering environment. The core of a gas turbine is high pressure & temperature gas rapidly rotating blazing hot metal. Coal power stations generate electricity using high pressure steam to force a steam turbine to spin incredibly fast.

These harsh conditions mean that fossil fuel plants need a lot of maintenance. Time spent maintaining the plant is time the capacity isn’t generating electricity.

Renewables plants need a lot less maintenance than fossil fuel generators. No combustion means there is a lot less stress on equipment.
Availability of fuel
Yet while renewables come out ahead in terms of maintenance, they fall behind due to a constraint that fossil fuel generation usually doesn’t suffer from – unavailability of fuel.

This is why renewables like wind & solar are classed as intermittent. Fuel is often not available, meaning generation is often not possible.

Solar panels can’t generate at night. Wind turbines need wind speeds to be within a certain range – not too low, not too high – just right.

This means that wind & solar plants are often not able to generate at full capacity – or even to generate at all. This problem isn’t common for fossil fuel generation. Fossil fuels are almost always available through natural gas grids or on-site coal storage.

Economics

The final reason for capacity to not generate is economics.

The relative price of energy and regulations change how fossil fuel capacity is dispatched. Today’s low natural gas price environment is the reason why coal capacity factors have been dropping.

Here renewables come out way ahead of fossil fuels. As the fuel is free, renewables can generate electricity at a much lower marginal cost than fossil fuels. Wind & solar almost always take priority over fossil fuel generation.
Typical capacity factors

The capacity factor wraps up and quantifies all of the factors discussed above.
Table 1 – Annual capacity factors (2014-2016 US average)

Coal – 56.13%
CCGT – 53.40%
Wind – 33.63%
Solar PV – 26.30%
Nuclear – 92.17%


Table 1 gives us quite a bit of insight into the relative value of different electricity generating technologies. The capacity factor for natural gas is roughly twice that of solar PV.


We could conclude that 1 MW of natural gas capacity is worth around twice as much as 1 MW of solar PV.

How useful is the capacity factor?

Yet the capacity factor is not a perfect measure of how valuable capacity is. Taking the average of anything loses information – capacity factor is no different.

Two plants operating in quite different ways can have the same capacity factor. A plant that operated at 50% load for the entire year and a plant that generated at full capacity for half of the year will have identical capacity factors.

The capacity factor also loses information about the time of generation. The timing of generation & demand is a crucial element in almost every energy system.

Generation during a peak can be a lot more valuable than generation at other times. Dispatchable generation is, by its nature, more likely to be running during a peak.

This leads us to conclude that low capacity factor generation could be more valuable than higher capacity factor generation. These timing effects matter especially for solar in many countries, as a) the peak often occurs when the sun is down and b) all solar generation is coincident.
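The two-plants point above can be checked with quick arithmetic (two hypothetical 100 MW plants):

```python
hours = 8760

# plant A: runs all year at 50% of its 100 MW capacity
plant_a_mwh = 100 * 0.5 * hours

# plant B: runs half the year at full 100 MW capacity
plant_b_mwh = 100 * 1.0 * (hours / 2)

cf_a = plant_a_mwh / (100 * hours)
cf_b = plant_b_mwh / (100 * hours)  # identical, despite very different operation
```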
The solution to the intermittency problem of renewables is storage. Storage will allow intermittent generation to be used when it’s most valuable – not just whenever it happens to be windy or sunny.
Thanks for reading!

Energy Insights – How to save the energy system – André Bardow

Energy Insights is a series where we highlight interesting energy content from around the web.

The previous post in this series was about the 2017 BP Annual Energy Outlook.

These three Energy Insights come from a TED talk titled ‘How to save the energy system’ given by André Bardow.   André is a fantastic speaker and his talk is well worth a watch.

Below are three of the many interesting things André discusses. I really enjoyed writing this – I find all of this fascinating.

Why peak oil won’t save the planet

As humanity burns more oil the amount of oil left to recover should decrease. This is logical – right?

Figure 1 below shows the opposite is true. Estimated global oil reserves have actually increased over time.  The key is to understand the definition of oil reserves.

Figure 1 – Estimated global oil reserves (1980 – 2014)

Oil reserves are defined as the amount of oil that can be technically recovered at a cost that is financially feasible at the present price of oil.

Oil reserves are therefore a function of a number of variables that change over time:

  • Total physical oil in place – physical production of oil reduces the physical amount of oil in the ground.
  • Total estimated oil in place – initial estimates are low and increase over time.
  • Oil prices – historically oil prices have trended upwards (Figure 2). Oil reserves are defined as a direct function of the present oil price.
  • Technology – the oil & gas industry has benefited more than any other energy sector from improved technology. Improved technology reduces the cost of producing oil, making more oil production viable at a given price.
Figure 2 – Crude oil price (1950 – 2010)

Only the physical production of oil has had a negative effect on oil reserves.

The other three (total oil estimate, technology & oil price) have all caused oil reserve estimates to increase.

We are not going to run out of oil any time soon. The limit on oil production is not set by physical reserves but by climate change. I find this worrying – it would be much better if humanity were forced to switch to renewables!

Wind & solar lack an inherent economy of scale

A lot of the advantages in systems come from economies of scale – energy systems are no different. Larger plants are more energy efficient and have lower specific capital & maintenance costs.

Energy efficiency improves with size as the ratio of fixed energy losses to energy produced improves. Figure 3 shows an example of this for spark ignition gas engines.

Figure 3 – Effect of gas engine size [kWe] on gross electric efficiency [% HHV]

This is also why part load efficiency is worse than full load efficiency.  Energy production reduces but fixed  energy losses remain constant.

Specific capital & operating costs also improve with size. For example, a 10 MW and a 100 MW plant may need the same land area at a cost of £10,000. The specific capital cost of land is £1,000/MW and £100/MW respectively.
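The arithmetic from that example, as a quick sketch:

```python
def specific_cost(total_cost, capacity_mw):
    """Specific cost = total cost divided by capacity."""
    return total_cost / capacity_mw

# the same £10,000 plot of land spread over 10 MW versus 100 MW
small = specific_cost(10_000, 10)   # £1,000/MW
large = specific_cost(10_000, 100)  # £100/MW
```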

Fossil fuel plants use their economy of scale to generate large amounts of electricity from a small number of prime movers.

Wind & solar plants are not able to do this. The problem is the prime movers in both wind & solar plants.

The maximum size of a wind or solar prime mover (a wind turbine or a solar panel) is small compared with fossil fuel prime movers. For example, GE offer a 519 MWe gas turbine – the world’s largest wind turbine is the 8 MWe Vestas V164.

Figure 4 – The Vestas V164

A single gas turbine in CCGT mode is more than enough to generate 500 MWe.  A wind farm needs 63 wind turbines to generate the same amount.

The reason for the difference is fundamental to the technologies – the energy density of fuel.  Fossil fuels offer fantastic energy densities – meaning we can do a lot with less fuel (and less equipment).  Transportation favours liquid fossil fuels for this reason.

Wind & solar radiation have low energy densities. To capture more energy we need lots more blade or panel surface area.  This physical constraint means that scaling the prime movers in wind & solar plants is difficult. The physical size increases very fast as we increase electricity generation.

This means that wind turbines & solar panels need to be very cheap at small scales. As wind & solar technologies improve, there will be improvements in both the economy of scale & the maximum size of a single prime mover.

But offering a similar economy of scale to fossil fuels is difficult with a low energy density fuel. It’s not that wind & solar don’t benefit from any economies of scale – grid connection costs, for example, can be shared. It’s the fact that fossil fuels:

  • share most of these economies of scale.
  • use high energy density fuels, which gives a fundamental advantage in the form of large maximum prime mover sizes.

We need to decarbonise the supply of heat & power as rapidly as possible.  Renewables are going to be a big part of that.  The great thing is that even with this disadvantage wind & solar plants are being built around the world!

Average German capacity factors

André gives reference capacity factors for the German grid of:

  • Solar = 10 %.
  • Wind = 20 %.
  • Coal = 80 %.

This data is on an average basis. The average capacity factor across the fleet is usually more relevant than the capacity factor of a state-of-the-art plant.

It is always good to have some rough estimates in the back of your mind. A large part of engineering is sense-checking the inputs and outputs of models against your own experience-based estimates.

Thanks for reading!

CHP Scoping Model v0.2

See the introductory post for this model here.  

This is v0.2 of the CHP scoping model I am developing.  The model is setup with some dummy data.

If you want to get it working for your project all you need to do is change:

  • heat & power demands (Model : Column F-H)
  • prices (Model : Column BF-BH)
  • CHP engine (Input : Engine Library).

You can also optimize the operation of the CHP using a parametric optimization VBA script (Model : Column BW).

You can download the latest version of the CHP scoping model here.

Thanks for reading!

Tuning Model Structure – Number of Layers & Number of Nodes

Imbalance Price Forecasting is a series applying machine learning to forecasting the UK Imbalance Price.  

Last post I introduced a new version of the neural network I am building.  This new version is a feedforward fully connected neural network written in Python built using Keras.

I’m now working on tuning model hyperparameters and structure. Previously I setup two experiments looking at:

  1. Activation functions
    • concluded rectified linear (relu) is superior to tanh, sigmoid & linear.
  2. Data sources
    • concluded the more data the better.

In this post I detail two more experiments:

  1. Number of layers
  2. Number of nodes per layer

Python improvements

I’ve made two improvements to my implementation of Keras.  An updated script is available on my GitHub.

I often saw during training that the model from the last epoch was not necessarily the best model. I have made use of a ModelCheckpoint that saves the weights of the best model seen during training.
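Keras provides the ModelCheckpoint callback for this; the underlying idea can be sketched in plain Python (toy loss values, not real training):

```python
class BestModelCheckpoint:
    """Keep a copy of the weights whenever validation loss improves."""
    def __init__(self):
        self.best_loss = float("inf")
        self.best_weights = None

    def update(self, loss, weights):
        if loss < self.best_loss:
            self.best_loss = loss
            self.best_weights = list(weights)  # copy, not a reference

checkpoint = BestModelCheckpoint()
# losses often rise again late in training - the best model is not the last
for loss, weights in [(0.9, [1.0]), (0.4, [2.0]), (0.7, [3.0])]:
    checkpoint.update(loss, weights)
```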

The second change I have made is to include dropout layers after the input layer and each hidden layer.  This is a better implementation of dropout!
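Dropout randomly zeroes activations during training. A pure-Python sketch of inverted dropout (illustrative only – Keras handles this internally):

```python
import random

def dropout(activations, rate=0.3):
    """Randomly zero activations; scale survivors so the expected sum is unchanged."""
    keep = 1.0 - rate
    return [a / keep if random.random() < keep else 0.0 for a in activations]

layer_output = [0.5] * 1000
dropped = dropout(layer_output, rate=0.3)
```

Roughly 30% of the activations come out as zero; the rest are scaled up by 1/0.7.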

Experiment one – number of layers

Model parameters were:

  • 15,000 epochs, trained in three batches, with 10-fold cross-validation.
  • 2016 Imbalance Price & Imbalance Volume data scraped from Elexon.
  • Feature matrix of 48 lags of Price & Volume, plus a sparse matrix of Settlement Period, Day of the week and Month.
  • Feed-forward neural network:
    • Input layer with 1000 nodes, fully connected.
    • 0-5 hidden layers with 1000 nodes, fully connected.
    • 1-6 dropout layers. One under input & each hidden layer.  30% dropout.
    • Output layer with a single node.
    • Loss function = mean squared error.
    • Optimizer = adam (default parameters).

Results of the experiments are shown below in Fig. 1 – 3.

Figure 1 – number of layers vs final training loss
Figure 2 – number of layers vs MASE

Figure 1 shows two layers with the smallest training loss.

Figure 2 shows that two layers also gives the lowest CV MASE (although with a high training MASE).
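MASE here is the mean absolute scaled error: the model’s mean absolute error scaled by that of a naive forecast. A sketch of the calculation, assuming the naive forecast is the previous value (the exact naive benchmark used in the experiments may differ):

```python
def mase(actual, predicted):
    """Mean absolute scaled error versus a naive previous-value forecast."""
    mae_model = sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)
    naive_errors = [abs(a - b) for a, b in zip(actual[1:], actual[:-1])]
    mae_naive = sum(naive_errors) / len(naive_errors)
    return mae_model / mae_naive

# a MASE below 1.0 means the model beats the naive forecast
score = mase(actual=[10, 12, 14, 16], predicted=[10, 12, 15, 16])
```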

Figure 3 – number of layers vs overfit. Absolute overfit = Test-Training. Relative = Absolute / Test.

In terms of overfitting two layers shows reasonable absolute & relative overfit.  The low relative overfit is due to a high training MASE (which minimizes the overfit for a constant CV MASE).

My conclusion from this set of experiments is to go forward with a model of two layers.  Increasing the number of layers beyond this doesn’t seem to improve performance.

It is possible that training for more epochs may improve the performance of the more complex networks which will be harder to train.  For the scope of this project I am happy to settle on two layers.

Experiment two – number of nodes

For the second set of experiments all model parameters were all as above except for:

  • 2 hidden layers with 50-1000 nodes.
  • 5 fold cross validation.
Figure 4 – number of nodes vs final training loss
Figure 5 – number of nodes vs MASE
Figure 6 – number of nodes vs overfit. Absolute overfit = Test - Training. Relative = Absolute / Test

My conclusion from looking at the number of nodes is that 500 nodes per layer is the optimum result.


Both parameters can be further optimized using the same parametric optimization.  For the scope of this work I am happy to work with the results of these experiments.

I trained a final model using the optimal parameters. A two-layer, 500-node network achieved a test MASE of 0.455 (versus the previous best of 0.477).

Table 1 – results of the final model fitted (two layers, 500 nodes per layer)

The next post in this series will look at controlling overfitting via dropout.

CHP Scoping Model v0.1

The most recent version of this model can be found here.

My motivation for producing this model is to give engineers something to dig their teeth into.

My first few months as an energy engineering graduate were spent pulling apart the standard CHP model used by my company.  A few years later I was training other technical teams how to use my own model.

I learnt a huge amount through deconstructing other people’s models and iterating through versions of my own.

So this model is mostly about education. I would love it to be used in a professional setting – but we may need a couple of releases to iron out the bugs!

In this post I present the beta version of a CHP feasibility model.  The model takes as inputs (all on a half hourly basis):

  • High grade heat demand (Model : Column F).
  • Low grade heat demand (Model : Column G).
  • Electricity demand (Model : Column H).
  • Gas, import & export electricity price (Model : Column BF-BH).

Features of the model:

  • CHP is modeled as a linear function of load. Load can be varied from 50-100 %.
  • Can model either gas turbines or gas engines. No ability to model supplementary firing.  Engine library needs work.
  • The CHP operating profile can be optimized using a parametric optimization written in VBA.
    • Iteratively increases engine load from 50% – 100% (single HH period),
    • Keeps value that increased annual saving the most (the optimum),
    • Moves to next HH period,
    • The optimization can be started using the button in (Model : Column BV). I recommend watching it run to try to understand it. The VBA routine is called parametric.
  • Availability is modeled using a randomly generated column of binary variables (Model : Column C).
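The parametric routine itself is written in VBA; the idea behind it can be sketched in Python (with a toy, hypothetical saving function):

```python
def parametric_optimise(n_periods, saving, loads=range(50, 101, 10)):
    """For each half-hourly period, try each engine load and keep the one
    that increases the saving the most (greedy, period by period)."""
    schedule = []
    for period in range(n_periods):
        best_load = max(loads, key=lambda load: saving(period, load))
        schedule.append(best_load)
    return schedule

# toy saving: the engine is profitable in periods 0-1, loses money in period 2
toy_saving = lambda period, load: load if period < 2 else -load
schedule = parametric_optimise(3, toy_saving)
```

Being greedy period by period, this is a heuristic rather than a guaranteed global optimum – which matches the scoping purpose of the model.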

You can download the latest version of the CHP scoping model here.

Thanks for reading!

Monte Carlo Q-Learning to Operate a Battery

I have a vision for using machine learning for optimal control of energy systems.  If a neural network can play a video game, hopefully it can understand how to operate a power plant.

In my previous role at ENGIE I built Mixed Integer Linear Programming models to optimize CHP plants.  Linear Programming is effective in optimizing CHP plants but it has limitations.

I’ll detail these limitations in a future post – this post is about Reinforcement Learning (RL). RL is a tool that can solve some of the limitations inherent in Linear Programming.

In this post I introduce the first stage of my own RL learning process. I’ve built a simple model to charge/discharge a battery using Monte Carlo Q-Learning. The script is available on GitHub.

I made use of two excellent blog posts to develop this. Both of these posts give a good introduction to RL.

Features of the script

As I don’t have access to a battery system, I’ve built a simple model within Python. The battery model takes as inputs the state at time t and the action selected by the agent, and returns the new state and a reward. The reward is the cost/value of the electricity charged/discharged.

def battery(state, action):  # the technical model
    # the battery can choose to:
    #    discharge 10 MWh (action = 0)
    #    charge 10 MWh (action = 1)
    #    do nothing (action = 2)

    charge = state[0]  # our current charge level [MWh]
    SP = state[1]  # the current settlement period
    prices = getprices()
    price = prices[SP - 1]  # the price in this settlement period

    if action == 0:  # discharging
        new_charge = max(0, charge - 10)
        charge_delta = charge - new_charge  # positive when discharging
        reward = charge_delta * price
    elif action == 1:  # charging
        new_charge = min(100, charge + 10)
        charge_delta = charge - new_charge  # negative when charging
        reward = charge_delta * price  # we pay to charge
    else:  # do nothing
        charge_delta = 0
        reward = 0

    new_charge = charge - charge_delta
    new_SP = SP + 1
    state = (new_charge, new_SP)
    return state, reward, charge_delta

The price of electricity varies throughout the day.
The model is not fed this data explicitly – instead it learns through interaction with the environment.
One ‘episode’ is equal to one day (48 settlement periods).  The model runs through thousands of iterations of episodes and learns the value of taking a certain action in each state.  
Learning occurs by apportioning the total reward for the entire episode to every state/action pair that occurred during that episode.  While this every-visit Monte Carlo approach works, more advanced methods (such as temporal difference learning) assign credit more precisely.

def updateQtable(av_table, av_count, returns):
    # updating our Q (aka action-value) table
    # using an incremental average of the returns seen for each state/action
    for key in returns:
        av_table[key] = av_table[key] + (1 / av_count[key]) * (returns[key] - av_table[key])
    return av_table
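The Q-table above is keyed by (state, action) pairs.  As a minimal sketch of the credit assignment step (the names here are illustrative, not taken from the script), every-visit Monte Carlo can credit each pair visited in an episode with the full episode reward:

```python
def episode_returns(history, total_reward):
    # history: list of (state, action) pairs visited during one episode
    # every visited pair is credited with the full episode reward
    return {(state, action): total_reward for state, action in history}

# a hypothetical three-step episode: states are (charge, settlement_period)
history = [((0, 1), 1), ((10, 2), 2), ((10, 3), 0)]
returns = episode_returns(history, total_reward=450.0)
```

The resulting dictionary has the same shape as the `returns` argument consumed by `updateQtable`.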
The model uses an epsilon-greedy method for action selection.  Epsilon is decayed as the number of episodes increases.
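A sketch of epsilon-greedy selection with multiplicative decay (function and variable names are my own, not from the script):

```python
import random

def select_action(q_table, state, epsilon, n_actions=3):
    # epsilon-greedy: explore with probability epsilon,
    # otherwise pick the action with the highest estimated value
    if random.random() < epsilon:
        return random.randrange(n_actions)
    return max(range(n_actions), key=lambda a: q_table.get((state, a), 0.0))

# decay epsilon multiplicatively as episodes progress
epsilon, decay = 1.0, 0.999
for episode in range(5000):
    epsilon *= decay
# epsilon falls to roughly 0.0067 after 5,000 episodes
```

Early on the agent explores almost every step; by the end of training it is acting almost entirely greedily on what it has learned.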
Figure 1 below shows the optimal dispatch for the battery model after training for 5,000 episodes.
Figure 1 – Electricity prices [£/MWh] and the optimal battery dispatch profile [%]
I’m happy the model is learning well.  Charging occurs during periods of low electricity prices, and the battery is fully discharged by the end of the day – logical behaviour for maximising the reward per episode.

Figure 2 below shows the learning progress of the model.

Figure 2 – Model learning progress
Next steps
Monte Carlo Q-learning is a good first step into RL.  It has helped me start to understand some of the key concepts.
Next steps will be developing more advanced Q-learning methods using neural networks.

Energy Insights – 2017 BP Energy Outlook

Energy Insights is a series where we pull out key points from energy articles around the web. This is not a full summary but a taste – if you like the ideas then please watch the presentation & read the report.  

Previous posts in this series include the IEA 2016 World Outlook and Automated Cars.

Often people jump to the conclusion that anything released by an oil major is self-serving.  Don’t be like this!  If you ignore a report like the Outlook it is only you that is missing out.

The search for truth requires humility.  You need to be honest about your own ignorance.  You need to be open to learning from any source of information.  You need to be confident that you can judge the quality of that information.

Below I highlight BP’s view on passenger cars and the effect of climate policies on natural gas over the course of the Outlook (2015-2035).

Oil consumption for passenger cars

Figure 1 – Net change in car oil consumption

BP project a doubling of the global passenger car fleet due to the continued emergence of the middle class.

Luckily the increased oil consumption associated with double the number of cars is almost entirely offset by a 2.5 % annual improvement in fuel efficiency.

This fuel efficiency assumption seems quite small – but actually it is a strong break with the past.  The average for the last twenty years is only 1 %.

Even small improvements in fuel efficiency have a large effect on oil consumption due to the size of the combustion engine fleet.

The opposite is true with electric cars.  BP project the number of electric cars to increase from 1.2 million to 100 million.  This is a compound annual growth rate of around 25 %!

Unlike with fuel efficiency this relative increase has very little effect.  Electric car deployment increasing by 100 times leads to only a 6 % reduction versus 2015 oil consumption.

Electric cars are a sexy topic that gets a lot of media attention – yet vehicle fuel efficiency may be more important if we care about climate change.

What we need to remember is that a large relative increase from a small base can be dwarfed by a small relative increase on a large base.  It’s important to take everything back to the absolute value (in this case oil consumption) that we care about.
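Both the 25 % figure and the power of compounding a small efficiency gain can be checked with a quick calculation, using the fleet numbers quoted above:

```python
# electric car fleet grows from 1.2 million (2015) to 100 million (2035)
years = 20
ev_cagr = (100 / 1.2) ** (1 / years) - 1  # roughly 0.25, i.e. ~25 % per year

# a 2.5 % annual fuel efficiency improvement compounds over the same period
efficiency_gain = 1.025 ** years - 1  # roughly a 64 % cumulative improvement
```

The electric car growth rate looks spectacular in relative terms, but it applies to a tiny base; the modest efficiency gain applies to the entire combustion engine fleet.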

Risks to gas demand

Oil majors and clean energy professionals are both interested in the future of natural gas.  In the Outlook BP take a look at how climate policy could affect the growth of natural gas.

Strong climate policies pose a risk to all fossil fuels – natural gas included.  Strong climate policies lead to the reduction of all fossil fuels in favour of low carbon energy generation.

However the Outlook shows that actually both strong and weak climate policies pose risks to natural gas consumption.

Figure 2 – The effect of climate policy strength on natural gas consumption growth

Weak climate policies will favour fossil fuels but also benefit coal over natural gas.  BP expect the net effect of this would be a reduction in gas growth versus their base case.

This is quite a nice example of a Laffer curve.  The Laffer curve is traditionally used for demonstrating the relationship between tax revenue and the tax rate.  The curve shows there is an optimum somewhere in the middle.

Figure 3 – The Laffer Curve

BP are showing that natural gas consumption likely follows a Laffer curve with respect to climate policy.

I hope you found these two insights as interesting as I did.  I encourage you to check out either the presentation or the report for further interesting insights.