
Uncertainty analysis using Bayesian Model Averaging: a case study of input variables to energy models and inference to associated uncertainties of energy scenarios

Abstract

Background

Energy models are used to illustrate, calculate and evaluate energy futures under given assumptions. The results of energy models are energy scenarios representing uncertain energy futures.

Methods

The discussed approach for uncertainty quantification and evaluation is based on Bayesian Model Averaging for input variables to quantitative energy models. If the premise is accepted that the energy model results cannot be less uncertain than the input to energy models, the proposed approach provides a lower bound of the associated uncertainty. The evaluation of model-based energy scenario uncertainty in terms of input variable uncertainty, departing from a probabilistic assessment, is discussed.

Results

The result is an explicit uncertainty quantification for input variables of energy models based on well-established measure and probability theory. The quantification of uncertainty helps to assess the predictive potential of energy scenarios and allows an evaluation of the possible consequences promoted by energy scenarios in a highly uncertain economic, environmental, political and social target system.

Conclusions

If societal decisions are vested in computed model results, it is meaningful to accompany these with an uncertainty assessment. Bayesian Model Averaging (BMA) for input variables of energy models could add to the currently limited tools for uncertainty assessment of model-based energy scenarios.

Background

In this paper, a method for an explicit, quantitative uncertainty assessment suitable for quantitative energy models with input variables is proposed. The method renders the uncertainty evaluation more tangible to modellers and to receivers of energy model scenarios. It can be perceived as an application of the discussion provided by Culka in [1]. The proposed quantification of uncertainty departs from a probabilistic assessment in Bayesian terms. The Bayesian Model Averaging (BMA) method is a well-established concept which has already been applied in energy economics [2, 3]. By intention, the method presented is not novel and relies on accepted concepts and theories. However, the presented definition of uncertainty derived from basic probability theory is novel, and, to my knowledge, the application has not been formulated as such for energy economic contexts. The approach could add to the currently limited tools of uncertainty quantification in energy modelling. The method as discussed in this paper does not claim to resolve the question of the reliability of model results. In particular, it is limited to the assessment of input variables and a specific kind of assumption uncertainty. It explicitly does not address the quantification of model-specific error propagation.

The objectives of this paper are to (1) raise awareness that uncertainty in energy scenarios needs to be addressed (“Background” section), (2) provide a definition of an uncertainty measure (“Methods” section) and (3) exemplify the uncertainty assessment for input variables with BMA and probabilistic uncertainty in a case study (Methods: “Case study” section). In this section, I focus on the relevance of such a method based on current practice and on criticisms of available uncertainty assessments. I briefly introduce the kinds of uncertainty the case study assesses and end the section with the premise which has to be accepted in order to formulate an uncertainty assessment for the results of energy models based on their input variables.

Energy models are representations of the energy system including different sub-systems. The target system cannot be described solely in terms of one system without ignoring decisive elements. In contrast, a system model of a physical process could be described, for example, by the laws of thermodynamics. Energy models, however, represent a target system involving physical, societal, political, environmental and other aspects as central elements, for example, energy system models such as TIMES [4] or MESSAGE [5]. Many energy models aim at a broad inclusion of target system elements, e.g. an inclusion of different energy carriers, different economic sectors, environmental aspects and extensive regional coverage. The results are large and complex models that are difficult to analyse with respect to uncertainty and internal error propagation. Error propagation estimation and analysis of individual models may even be impossible due to the complexity of mathematical formulations, ad hoc assumptions, idealisations of the target system and the lack of empirical verification of (parts of) the model. The choice of model boundaries, the level of abstraction or idealisation, the mathematical representation and the optimisation routines used are highly individual for every energy model. Early attempts at quality improvement for energy models have mainly focussed on technical modelling aspects [6]. Recent evaluation processes include, next to classical uncertainty estimation methods [7], approaches for developing adaptive policies under uncertainty [8]. Classical uncertainty estimation includes statistical analysis such as output means and variances, sampling techniques (e.g. Monte Carlo), sensitivity analysis, and the like. In the light of uncertainty which is not uniquely due to model characteristics, other techniques such as robustness analysis or the explicit integration of subjective features have been discussed recently [9]. These methods are focussed on decision support, which is especially relevant if energy model results are used to derive recommendations.

Models are used to compute scenarios of the future energy system. These may be demanded by the economy, political institutions or stakeholders. Assisting the evaluation of impacts for policy analysis is a central role of quantitative energy modelling [10, 11]. Mathematical energy models are a simplified and idealised representation of a target system. Due to these simplifications and idealisations, and due to the uncertainty regarding assumptions and stochastic processes in (parts of) the energy system being modelled, energy models and their results—energy scenarios—face uncertainty. As energy scenarios can serve for political advice and may be influential for political decisions, an uncertainty analysis for energy scenarios seems to be necessary. For the decision maker, it would provide relevant information regarding the relevance and reliability of energy scenarios.

What the term “scenario” refers to is not clearly defined in the literature. Van Notten describes more than 11 definitions and common applications for scenarios [12]. Lindgren concisely summarises paradoxes and applications of scenario techniques [13]. Both agree that scenario building is a fundamentally intuitive and creative process, involving associations, inferred causal patterns and other ideas. Scenarios are a widely used technique if future developments are to be evaluated [14]. Önkal et al. differentiate between method-based statistical forecasting and forecasting with scenarios and emphasise that the latter is a conceptual description of a plausible future which underlines reasoning and uncertainty [15]. In this paper, the term energy scenario refers to the output of quantitative energy models of any mathematical kind. In energy economics, energy scenarios are used to describe possible, plausible or probable future energy system states [16]. The limitation of the future to some defined input scenarios, also called storylines or key assumptions, implies a subjective and decisive pre-selection of the futures scrutinised with an energy model. This is a delicate process which should involve expert knowledge and rigorous attention to plausibility and reciprocal assumption impact [17]. The agreed assumptions, the model design and formulation, and the specific question to be answered by an energy model form the basis for the calculation of energy scenarios. For this paper, energy scenarios should thus be understood as the quantitative and interpretative outputs of mathematical energy models.

Typically, an energy scenario is accompanied by a statement such as “if the assumptions hold, the cost-effective [mutatis mutandis, e.g. high supply security, low-carbon] pathway to an energy system with low carbon emissions [mutatis mutandis, e.g. share of renewables] in 2030 is energy scenario X”. Attention should be paid to the disclaimer “if the assumptions hold”. This makes any energy scenario (the output of a given energy model) conditional on the input assumptions. Many model-based outlooks explicitly avoid terms like “forecast”, being well aware that energy scenarios hold uncertainty, but avoid specification, let alone quantification. For example, the ENTSO-E scenario outlook 2014–2030 speaks of four “visions”, explicating that “The four visions are based on distinctively different assumptions, thus the actual future evolution of parameters is expected to lie in-between.” [18] Implicitly, this asserts that model outcomes depend on the assumptions made. In the World Energy Outlook (WEO) of 2011, the International Energy Agency explicitly distances itself from producing forecasts, but provides “a set of internally consistent projections: none should be considered as a forecast” [19]. In spite of the uncertain nature of these projections, they are designed by the authors for real decision support. The WEO explicates in its self-presentation that “the WEO projections are used by the public and private sector as a framework on which they can base their policy-making, planning and investment decisions and to identify what needs to be done to arrive at a supportable and sustainable energy future” [20]. I will call these forms of uncertainty admittance a general disclaimer. Another example of the current standard uncertainty treatment comes from BP, which provides an explicit disclaimer yet does not specify how uncertain the results are perceived to be: “Forward-looking statements involve risks and uncertainties because they relate to events, and depend on circumstances, that will or may occur in the future. Actual outcomes may differ depending on a variety of factors […]” [21].

Evidently, if, as the IEA states, such outlooks can be used to base decisions on, an accompanying explicit uncertainty assessment should be beneficial for recipients. Given that, as explained by the BP disclaimer, basically all relevant model inputs face uncertainty to an unspecified extent, how should the recipient include that in her decision-making process?

If an uncertainty assessment should be of value for a recipient of an outlook or a study, it seemingly demands more than a general disclaimer that things could turn out to be different. Ideally, it would render uncertainty associated with outcomes tangible and understandable, cf. [1].

Uncertainty associated with energy models and their results has received little attention in the literature. Walker et al. have developed a definition framework, i.e. the uncertainty matrix, used to identify uncertainty in energy models according to location, nature and level [22]. Van der Sluijs et al. developed the Numerical Unit Spread Assessment Pedigree (NUSAP) method and applied the uncertainty evaluation to different models [23] to produce a diagnostic diagram. The decisive role of assumption value-ladenness is stressed by Kloprogge [24]. Refsgaard et al. have reviewed 14 uncertainty assessment methods, including the two methods mentioned above. They present suitable methods for uncertainty treatment at various stages of the modelling process, detailed for different levels of ambition or available resources [25]. The NUSAP approach applied different methods: an expert elicitation workshop, a meta-level analysis of similarities and differences in scenario results of six energy models, and a sensitivity analysis based on Morris [26]. Expert elicitation for uncertainty quantification (or qualification) in particular faces some challenges. The often-used Delphi method [27] suffers from inevitably present psychological aspects, for example, question design, conflict aversion, status and competence presentation of an expert, and majority opinion pressure, cf. [28], to name just a few. The subjective character of such evaluations entails some disadvantages of the method. An uncertainty assessment should yield reproducible results, irrespective of the individual expert questioned. But given a different group of experts or a different questionnaire design, an uncertainty assessment based on expert elicitation could yield significantly different results.

An uncertainty assessment should provide a clear understanding of how reliable a model result is. This understanding should not be conditional on the expertise of the recipient, nor should it depend on the person(s) assessing uncertainties. However, expert knowledge is an important part of uncertainty assessment and should be included, relativised by statistical facts and a reproducible computation method. The assessment should be applicable to different energy models with a sufficient degree of individuality for each model, yet constitute a common methodology applicable to different mathematical energy model types. The presented method accommodates all these aspects: BMA is a reproducible computation method, based on statistical facts which are independent of psychological and subjective evaluation. By means of prior probabilities, valuable expert knowledge can nonetheless be incorporated and is relativised by statistical evidence. Applying the method to input variables allows versatile employment regarding different mathematical types of energy models. The result of the method is an explicit quantitative uncertainty assessment indicating reliability in well-known terms of probability.

A typical energy model incorporates physical facts, for example, stocks of electricity generation capacity within system boundaries, storage capacities, electricity or gas grid infrastructure information, car stock, and the like, depending on the scope and aim of the energy model. These facts (at least for the base year or calibration) face uncertainty to a lesser extent than other assumptions. A more delicate issue is “facts” about the future. A typical model needs such input to compute an optimal solution or simulate consequences for a given time horizon: for example, energy prices, population growth, future efficiency standards, housing stocks, technology fixed and variable costs, etc. These assumptions are highly influential for model results. Intuitively, the further in the future the assumption applies, for example, technology investment cost in 2030, the less certain the modeller can be that such an assumption will hold, given that the target system may develop in many different ways over decades. However, quantitatively formulating assumptions for future developments is indispensable if a model-based assessment is desired. Speaking somewhat loosely and exaggerating, one might say that a model shows what is assumed to be true—in a sophisticated and complex way, beyond what thinking about the modelled question would allow the average person, or even the expert, to comprehend in terms of inter-relations and cross-impact. However, using a complex model does not implicitly mean using a good model. Complex models tend to rely on many assumptions. These assumptions might or might not be true and hence face uncertainty. Uncertainty over time of such assumptions means that if a model has a modelling horizon,Footnote 1 and model results are displayed as time series, the related uncertainty of the results must increase. This is due to the facts that model-inherent simplifications and generalisations propagate errors and that assumptions regarding the mid- or long-term future are uncertain. The interactive, reactive and non-deterministic nature of the world leads to limited predictability of states of the target system in the future. For example, a natural gas price assumption is influenced by other processes in the target system such as extraction rates, transportation infrastructure, political stability of producing countries, and the like. There might even be a time-lagged feedback loop: if the natural gas price is high enough, alternative extraction methods (aka unconventional gas) may become economic, which in turn increases supply, which in turn could lower the price.Footnote 2 Such temporally interrelated effects may be put aside for now; however, they exemplify that a complex world demands complex models. Given that natural gas extraction is a process which has to be planned and engineered, is contractually governed, and relies on exhaustible sources, the assumption for the next month may be less uncertain than an assumption for 2040, where little planning and engineering investment has yet been communicated, contracts may not yet exist, and the location and development of possible new gas fields is not foreseeable at the moment. BMA for input variables allows an assessment of these relations, and via predictive densities, consistent storylines can be developed. The approach does not analyse which specific value for an assumption is more or less likely; rather, it evaluates the uncertainty of relations in the target system represented in the energy model, whichever value is used to design a storyline.

Another modelling practice which is susceptible to uncertainty is the use of ad hoc assumptions. These may, for example, be bounds restricting the solution space of a Linear Program (LP) or elasticity-of-substitution assumptions in Computable General Equilibrium (CGE) models [29, 30]. Such assumptions are a modeller’s subjective choice and decisively influence the model results. This is not to say that these assumptions lack legitimacy; however, model-embedded ad hoc assumptions seem not to be as transparent as key assumptions which are reported as scenarios, for example, the “New Policies Scenario” of the International Energy Agency [19] in contrast to the model-embedded ad hoc assumptions [31]. Assumptions based on the modeller’s expert opinion can be beneficial in answering a research question. However, if these assumptions are not transparent, the possibility of unwarranted and uncertain model results should be communicated.

This type of uncertainty is called assumption uncertainty or, in Walker’s terms, input uncertainty and parameter uncertainty. Another location of uncertainty analysed by Walker is context uncertainty, which is addressed if Bayesian Model Averaging (BMA) for input variables is carried out. What is explicitly not addressed in this example of input variable uncertainty is model uncertainty, i.e. uncertainty held conceptually in the energy model (the mathematical formulation), and model technical uncertainty (arising from the computer implementation) [22]. This is for two reasons. First, model uncertainty is a highly individual issue and can best be investigated for a specific model; the method discussed in this paper aims to be a versatile assessment method that can be applied to any energy model using input variables. Secondly, the respective (dis-)advantages of energy model types are suspected to be known, for example, the need for linearisation of constraints in Linear Programs, or the potentially wrong default position in probabilistic non-linear models, cf. [32]. It is up to the energy modeller to choose a suitable model for a given question. Evidently, sources of uncertainty other than the input uncertainty and parameter uncertainty investigated in the present paper exist. In response to that fact, a lower bound of uncertainty is proposed. The corresponding premise that needs to be accepted can be formulated as follows.

Premise: The output of an energy model cannot be less uncertain than the input to an energy model.

The premise captures the idea that an energy model is not a tool which reduces uncertainty and could predict a future with certainty based on uncertain assumptions about the target system. The relations of energy system aspects, the bearing of interrelated systems and the actual values are uncertain. The idealisations and generalisations in quantitative models cannot reduce uncertainty but increase it, as they are known not to represent the target system isomorphically. Simplifying a highly complex reality is the whole point of using a model. However, comparing model results and reality, called empirical adequacy, is the only way to determine whether the model is oversimplifying or missing relevant aspects. Actually, model output does not only depend on input variables but also on the very formulation of a specific model and its version. The relevance of model changes (structural, system constraints and parameters) for model output is significant, cf. [33]. This indicates that a model—designed accordingly—may generate any output. But model-specific empirical inadequacy and error propagation are not the focus of the presented case study. Rather, it is an assessment which is independent of individual models, providing a tool to evaluate input value uncertainty (even if the model could represent reality). In other words, even a perfect model (w.r.t. empirical adequacy) cannot provide certain results, for it is uncertain which values the input variables will take. This uncertainty is commonly addressed only by general reference to “relevant developments”, captured in storylines and scenario formulation. The presented method provides a tool for statistical analysis of such assumptions.

If this premise is accepted, an uncertainty assessment of input variables of a specific energy model could yield an evaluation of the form: uncertainty for input variable Y is at least X%. One could argue that this is an unspecific uncertainty assessment, but actually, it also accounts for a type of uncertainty that can hardly be addressed differently, viz. epistemic uncertainty.Footnote 3 Although assessment methods for epistemic uncertainty have been proposed in other contexts [34], these methods do not seem to be applicable in energy modelling. By epistemic uncertainties in the context of energy models I mean a form of under-determinism or “not-knowability” of decisive influences on the energy system.

Unfortunately, energy models have a rather poor history of empirical adequacy (aka model fit) [35, 36]. For example, most energy models cannot account for political decisions, scientific cognition of other disciplines (relevant for the energy system) or natural hazard consequences. This implicitly means that any energy model has an embedded assumption of a “business as usual” environment for the energy system, and some influences are simply unknown and not accounted for. This fact needs to be addressed. But as anticipating or quantifying such influences seems impossible, a lower bound of uncertainty is suggested, respecting that unknown influences might increase the deviation of model results from reality by more than the assessed uncertainty and to an unspecifiable extent.

Methods

Probability assessment

Departing from the above stated premise that an energy model output cannot be less uncertain than its input and the fact that epistemic uncertainty must be respected by a method, an uncertainty assessment in probabilistic terms of input variables will be presented. Input variables are specific for each energy model, for example, the model documentation of the World Energy Model (WEM) details the relevant input variables [31].

The idea is that input variables depend on influences. The assumed dependencies in the target system are the basis for scenarios or storylines for energy models. For example, the IEA states in its WEO 2015 executive summary: “The process of adjustment in the oil market is rarely a smooth one, but, in our central scenario, the market rebalances at $80/bbl in 2020, with further increases in price thereafter. […] A more prolonged period of lower oil prices cannot be ruled out. We examine in a low oil price scenario what it would take for this to happen—and what it would mean for the entire energy sector if it did. The oil price in this scenario remains close to $50/bbl until the end of this decade, before rising gradually back to $85/bbl in 2040.” These scenarios are based on dependencies in the target system, in particular: “This trajectory is based on assumptions of lower near-term growth in the global economy; a more stable Middle East and a lasting switch in OPEC production strategy in favour of securing a higher share of the oil market (as well as a price that defends the position of oil in the global energy mix); and more resilient non-OPEC supply, notably from US tight oil.” [37]. Following this argumentation, there should be statistical evidence for the dependency of the oil price on the global economy, the Middle East and OPEC production, and non-OPEC supply (the influences). The proposed method examines this dependency in statistical terms, that is, it allows an analysis of whether the influences can explain the value of an input variable. In this sense, context uncertainty in Walker et al.’s terms can be addressed. The question arises which influences should be chosen to describe an input variable. These are in particular those which form the argumentative basis for a scenario, but the method allows for more. The BMA method allows the inclusion of many potentially influencing aspects of the target system, which are ranked according to their explanatory power based on statistical evidence. This is a distinct advantage w.r.t. classical statistical analysis, in which the analyst chooses the explanatory variables. In classical statistical analysis, the risk of choosing the wrong, too many or too few explanatory variables (influences) is present. BMA addresses this model uncertainty (note that the model meant here is the statistical model used to describe the input variable, not the energy model that subsequently uses these input variables) by evaluating and ranking many different possible models explaining the data of the dependent variable.

Equally, it is possible to assess parameter uncertainty for a specific energy model by examining the influences on parameter assumptions; for example, in some models (LPs in particular), efficiency standards of technologies are assumed over the complete modelling horizon. These assumptions are sometimes (often not) accompanied by justifications of why they are reasonable, which can be evaluated by the method. The same holds for ad hoc assumptions, for example, bounds on shares of technologies and the like.

The process as presented in the case study can be described in steps. First, the relevant input variables of an energy model which should be assessed in terms of their uncertainty are chosen.Footnote 4 Secondly, data that could potentially influence the dependent variable (i.e. the input variable for the energy model) are gathered. Using BMA, many potential influences which are suspected to impact the input variable can be included. A model describing the relationship between the dependent and the explanatory variables is chosen. This can be a multiple linear regression or another model, depending on the influences and their bearing on the dependent variable. Given that many different influences are considered, it can be difficult to formulate a mathematical relationship which is the best representation for all influences. One might, as in this case study, choose a multiple linear regression model, which represents whether, and by what order of magnitude, an influence increases or decreases the value of the dependent variable. However, other mathematical formulations are possible and may be more suitable, depending on the input variable and the influences in question. Thirdly, BMA provides the best performing model in terms of its posterior model probability (PMP). Finally, the probability that an input variable can be described with the influences of highest posterior inclusion probability (PIP) is translated into uncertainty, specifically, a lower bound of uncertainty. A minimal sketch of these steps is given below.
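
The four steps can be sketched in R. The snippet below is only a minimal illustration under assumptions that the method itself does not prescribe: the BMS package is used as the BMA implementation, and simulated numbers stand in for real (log-transformed) observations; all variable and object names are hypothetical.

```r
# Minimal sketch of the four steps (assumption: the BMS package; data simulated).
library(BMS)
set.seed(1)

# Steps 1-2: assemble the data; the chosen input variable of the energy model
# goes in the first column, the candidate influences in the remaining columns.
n <- 26; K <- 5
influences <- matrix(rnorm(n * K), n, K,
                     dimnames = list(NULL, c("USCONNG", "WTI_PRICE", "GDP", "ELC", "TRA")))
ng_price <- 0.8 * influences[, "USCONNG"] - 0.5 * influences[, "WTI_PRICE"] + rnorm(n, sd = 0.3)
bma_data <- data.frame(NG_PRICE = ng_price, influences)

# Step 3: Bayesian Model Averaging over all 2^K candidate linear regression models.
bma_fit <- bms(bma_data, mprior = "uniform", user.int = FALSE)

# Step 4: translate posterior probabilities into a lower bound of uncertainty
# (uncertainty = 1 - probability, cf. Eq. 12 below).
1 - pmp.bma(bma_fit)[1, 1]    # uncertainty of the best model (1 - PMP)
1 - coef(bma_fit)[, "PIP"]    # uncertainty of the individual influences (1 - PIP)
```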

In order to perform the last step, it is necessary to provide a definition of uncertainty which departs from probability. The following mathematical formulation is derived from basic probability theory. However, the definition I propose is novel in so far as it is used to interpret uncertainty.

Theory: a definition of uncertainty

An uncertainty assessment should enable the recipient of uncertain information to evaluate how reliable, probable or likely an event is to happen or a statement is true.

Definition: The uncertainty Ψ(A) of an event A equals the probability that A does not occur, i.e.

$$ \varPsi (A)=P\left({A}^C\right). $$

The uncertainty measure now presented (Eq. 12) is called Probabilistic Uncertainty. A set which represents all possible outcomes of a random process is called the sample space Ω. Let ℱ be a set of subsets (a collection of events) of Ω, and let A ∈ ℱ be an event. Let ℱ be a σ-algebra on Ω, satisfying the properties

$$ \varnothing,\ \Omega \in \mathrm{\mathcal{F}} $$
(1)
$$ A\in \mathrm{\mathcal{F}}\Rightarrow {A}^C\in \mathrm{\mathcal{F}} $$
(2)
$$ A_{1}, A_{2}, \dots, A_{n} \in \mathcal{F} \Rightarrow \bigcup_{i=1}^{n} A_{i} \in \mathcal{F} $$
(3)

Thus, the empty set (the impossible event) and the sample space are elements of the σ-algebra ℱ, the complement set A^C of any event A is an element of the σ-algebra (A^C is the case when A is not the case) and the σ-algebra is closed under countable unions.Footnote 5 Let P be a finitely additive measure on the measurable space (Ω, ℱ), then P: ℱ → [0,1] with P(Ω) = 1 is a probability measure. The probability space (Ω, ℱ, P) consists therefore of the sample space containing all possible events, the σ-algebra ℱ, and the probability measure P.Footnote 6 In particular, let the axioms of probability hold

$$ \forall \mathrm{A}\in \mathrm{\mathcal{F}}\ P(A)\ge 0 $$
(4)
$$ P\left(\Omega \right)=1 $$
(5)
$$ P\left(\bigcup_{i=1}^{\infty} A_{i}\right) = \sum_{i=1}^{\infty} P\left(A_{i}\right) $$
(6)

if A1, A2, … are pairwise disjoint.

The probability of any event A, B, C, … that is an element of the σ-algebra ℱ is non-negative, the probability of the whole sample space equals one, and probabilities of disjoint events are additive. This meansFootnote 7 that for any events A and B of the probability space (Ω, ℱ, P), the following holds

$$ P(A) + P\left(A^{C}\right) = 1 $$
(7)
$$ P\left(\varnothing\right) = 0 $$
(8)
$$ \text{If } A \subseteq B, \text{ then } P\left(B \setminus A\right) = P(B) - P(A) \text{ and hence } P(A) \le P(B) $$
(9)
$$ P\left(A \cup B\right) = P(A) + P(B) - P\left(A \cap B\right) $$
(10)

Let Ψ be a finitely additive measure on the measurable space (Ω, ℱ), called the probabilistic uncertainty measure, such that Ψ: ℱ → [0,1] with Ψ(Ω) = 0, Ψ(∅) = 1, and Ψ(A) = P(A^C). It follows from (7) that

$$ P(A) + P\left(A^{C}\right) = 1 $$
$$ P\left(A^{C}\right) = 1 - P(A), \text{ which is} $$
(11)
$$ \Psi(A) = 1 - P(A) $$
(12)

What this means is that the associated uncertainty of an event A equals the probability that A does not occur, hence the probability of the complementary set of events in Ω. For example, if an unfair coin is tossed that yields heads (H) in 7 out of 10 tosses on average, the uncertainty of H is 0.3. Note that the uncertainty thus defined refers to the event’s uncertainty of occurring. That is, if an event has a low probability, its associated uncertainty is high and vice versa.

This simple definition has many advantages. For the discussion here, most relevant is that it is an intuitively acceptable interpretation of uncertainty as the probability of all alternative events in the sample space. Secondly, it is easily derivable from the well-established probability theory.

The following explication for the case study is based on Zeugner [38]. In the BMA of the case study, the prior probability concerns the intuition of the uncertainty analyst regarding how probable she believes a model M γ to be before looking at the data. BMA uses a weighted average of all possible models built from the potential explanatory variables (which we called influences; both terms will be used interchangeably). The weights for the averaging are defined via the posterior model probabilities that arise from Bayes’ Theorem. The chosen model formulation is linear, with a typical input variable y to energy models as dependent variable (the natural gas price), α γ a constant for a model γ, β γ the coefficients for a model γ, ε a normal IID error term with variance σ², and X a matrix of K potential explanatory variables (influences), of which some are chosen for a model γ. As the uncertainty analyst does not know which influences are relevant for modelling the dependent variable, the sample space Ω contains 2^K models of the general form

$$ y = {\alpha}_{\gamma } + {\beta}_{\gamma }{X}_{\gamma } + \varepsilon $$
(13)

Then Bayes’ Theorem for the posterior model probabilities used as weights (Zeugner [38]) is given by

$$ p\left(M_{\gamma} \mid y, X\right) = \frac{p\left(y \mid M_{\gamma}, X\right)\, p\left(M_{\gamma}\right)}{p\left(y \mid X\right)} = \frac{p\left(y \mid M_{\gamma}, X\right)\, p\left(M_{\gamma}\right)}{\sum_{s=1}^{2^{K}} p\left(y \mid M_{s}, X\right)\, p\left(M_{s}\right)} $$
(14)

The results of the BMA are the PMPs for statistical models representing the input variables of energy models in terms of the influences bearing on them. The relevance of individual influences can be analysed in terms of their individual PIP. Also, a predictive density can be calculated which would allow a coherent and transparent assumption generating process, taking into account statistical data of relevant influences. For example, profitability of natural gas production is often an implicit assumption which can be analysed in terms of its relevance (PIP) for the natural gas price.Footnote 8 The choice of key assumptions for scenarios could thence depart explicitly from relevant influences rather than choosing the value of an input variable and implicitly assuming context facts that might not only be unknown (or not communicated) but also—in the worst case—contradictory.

Such an assessment allows analysing context uncertainty, parameter uncertainty and input variable uncertainty in Walker et al.’s terms. By Eq. 12, the uncertainty can be quantified using the calculated PMP of a given model M γ as the assumption probability, starting with (12) and substituting (14), leading to (15).

$$ \Psi(A) = 1 - P(A) $$
$$ \Psi\left(M_{\gamma}\right) = 1 - p\left(M_{\gamma} \mid y, X\right) $$
$$ \Psi\left(M_{\gamma}\right) = 1 - \mathrm{PMP}_{M_{\gamma}} $$
(15)

Similarly, the uncertainty of individual influences in the context of sample space Ω can be assessed by using PIP k instead of the PMP. With Ψ(•) defined as a measure on [0,1], uncertainty can be expressed as a percentage. This unambiguous representation has distinct advantages with respect to qualitative uncertainty assessments.

The aim of this section was introducing a notion of uncertainty that is probability based, called Probabilistic Uncertainty. The notion has the benefit of unambiguous uncertainty quantification, using the well-established mathematical framework of probability and measure theory.

Case study: uncertainty of a natural gas price assumption

The probability assessment for this example of an energy model input variable, the natural gas price, is based on a set of data with 18 variables and 26 observations.Footnote 9 Detailed information on the data used, references and units before log transformation can be found in the Appendix. However, the aim of this case study is not to derive a general statement regarding the uncertainty of a natural gas price assumption. Rather, it is an example of how the proposed approach is applied. A meaningful statement can be obtained in the context of a specific energy model, its time resolution and geographic scope, which would have to be aligned with the data used for BMA. This case study has no specific model application.

The model space Ω to choose from comprises 2^K = 2^18 = 262,144 potential models. First, an adequate model that respects the body of evidence must be chosen. To this end, BMA is applied [38–40]. The method yields posterior inclusion probabilities (PIP) for all candidate explanatory variables, i.e. influencesFootnote 10 (aka regressors). The PIPs of the regressors can then be converted into an uncertainty assessment of individual influences on the dependent variable (the input variable to an energy model) by application of formula (15). This yields insight into which influences within the best performing model in terms of posterior model probability (PMP) are more uncertain to explain the data than others. The following results are computed with R statistical software. For a discussion and introduction to BMA, see [39–42]. The results follow from the data set detailed in the Appendix, a burn-in of 50,000 with 100,000 iterations; the g prior is set to g = max(N, K²), that is, a mechanism such that posterior model probabilities (PMP) asymptotically behave either like the Bayesian information criterion (with g = N) or the risk inflation criterion (g = K²).Footnote 11 Generally, a small hyperparameter g reflects low prior coefficient variance and implies a strong initial belief that the coefficients are 0. In contrast, as g → ∞, the coefficient estimator approaches the ordinary least squares (OLS) estimator.
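
Assuming the BMS package in R (one common implementation of the BMA approach described by Zeugner [38]), the reported settings correspond to a call of the following form; the data object and file name are placeholders for the Appendix data set.

```r
library(BMS)

# Placeholder for the Appendix data set: 26 annual observations, log-transformed,
# with the dependent variable (natural gas price) in the first column and the
# candidate influences in the remaining columns.
gasdata <- read.csv("gasdata_log.csv")

# Settings as reported in the text: burn-in of 50,000, 100,000 iterations,
# BRIC g-prior (g = max(N, K^2)), uniform model prior, birth-death MCMC sampler,
# and the best 2000 models retained for the figures and tables.
gas_bma <- bms(gasdata,
               burn = 50000, iter = 100000,
               g = "BRIC", mprior = "uniform",
               mcmc = "bd", nmodel = 2000,
               user.int = FALSE)

summary(gas_bma)   # mean model size, Corr PMP, priors, etc. (cf. Table 2)
```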

Due to the large number of potential variable combinations, a Markov Chain Monte Carlo (MCMC) sampler is used that relies on a Metropolis-Hastings algorithm to analyse the model space; the manner of model choice is a birth-death sampler.Footnote 12 Using an MCMC sampler facilitates the inclusion of many potential influences at the expense of computation time only. In Table 1, the results are detailed, sorted by posterior inclusion probability (PIP) of the regressors. For a description of the regressors, i.e. the influences, the Appendix can be consulted. In the first column, the influences are named. In the second column, the posterior inclusion probability is displayed. The PIP is the sum of the posterior model probabilities (PMP) of all models wherein the influence was included. One can think of the PIP as the quality of a variable in explaining the data, measured with respect to all other possible variables chosen by the birth-death sampler of the Metropolis-Hastings algorithm. The quality is good when the actual data point which has to be explained (i.e. the dependent variable’s value in a given year) can be “calculated” by the model with little deviation and hence small residuals. The posterior mean in the third column provides further information on the quality of the variable by averaging over all models, including models wherein the variable was omitted. The posterior standard deviation “Post SD” in the fourth column indicates how much dispersion the variable has, also displayed in Fig. 3. The conditional posterior sign “Cond.Pos.Sign” in the fifth column is the “sign certainty”, as Zeugner names it: in some models the variable may be included with a positive and in some with a negative sign, and the conditional posterior sign indicates the posterior probability of a positive expected coefficient value, conditional on inclusion. That is, the fifth column displays the probability that the expected value of the coefficient is positive, based on all cases (i.e. models) in which the variable was included.
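
With the assumed BMS object from the sketch above, the columns of Table 1 correspond to the coefficient summary of the fitted object:

```r
# PIP, Post Mean, Post SD and Cond.Pos.Sign for every candidate influence,
# sorted by posterior inclusion probability (cf. Table 1).
coef(gas_bma, order.by.pip = TRUE)
```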

Table 1 Bayesian Model Averaging summary (rounded)

The “Cond.Pos.Sign” is also indicated in Fig. 1. Red colour indicates a negative coefficient, blue colour indicates a positive coefficient and white colour indicates non-inclusion, i.e. a zero coefficient. As the first column of Table 1 depicts (the PIP), 99.2 % of all models in the model space include US Natural Gas Consumption (USCONNG) as an explanatory variable for the dependent variable natural gas price, which is illustrated in Fig. 1 as high inclusion probability: the variable was included in most models, illustrated by the red colour across almost all probabilities (from left to right). The cumulative model probability axis depicted against the explanatory variables (y-axis) in Fig. 1 can be interpreted as the mass of models (how many models) in which the variable was included. The first section of the x-axis (on the left side, the largest section of the x-axis) shows the best model, that is, the model with the highest PMP mass, and the variables of which this model is comprised (on the y-axis). The corresponding model can be seen in Table 3: “Model 1”. In other words, this graph, which is based on the best 2000 models, shows that among the best 2000 models, the models which include the variables indicated in red or blue comprise 11 % of all models (the model mass). The abscissa shows the 2000 best models, scaled by their cumulated PMP. The second section, roughly 8 % of model mass (between 0.11 and 0.19) on the abscissa, shows a model which also includes one other influence; in Table 3 this model is named “Model 2”. In the same way, all models of Table 3 can be retrieved from this graph.

Fig. 1 Model inclusion of explanatory variables based on best 2000 models: red colour indicates a negative coefficient, blue colour a positive coefficient and white colour non-inclusion. The abscissa shows the 2000 best models, scaled by their cumulated PMP. On the ordinate, the influences ranked by their inclusion are shown
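
A plot of this kind can be reproduced, under the BMS assumption above, with the image method of the fitted object (sketch):

```r
# Inclusion plot as in Fig. 1: models on the abscissa scaled by cumulative PMP,
# influences on the ordinate; sign of the coefficient encoded by colour.
image(gas_bma)
```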

In Fig. 2, the model size distribution and the posterior model probabilities are shown. The model size distribution represents the number of adequate regressors depicted against the prior assumption, which was “uniform”, i.e. the prior expected model size implicitly used in the model definition. With 2^K possible variable combinations, a uniform model prior means a common prior model probability of p(Model) = 2^−K. This implies a prior expected model size of \( \sum_{k=0}^{K} \binom{K}{k} k\, 2^{-K} = K/2 = 18/2 = 9 \). With a beta-binomial specification and a prior model size of K/2, the model prior would be completely flat over the model sizes (x-axis of the first graph). For a discussion of prior influence in an econometric context, see Eicher [43], or Ley and Steel, section 3 [44].

Fig. 2 Posterior model size distribution and posterior model probabilities: model size reflects the number of variables suggested by prior and posterior distribution of potential variables (mean 7.003). Posterior model probabilities of the best performing 2000 models: blue indicates the sampled model probabilities; in red, the exact (analytical) probabilities are shown (Corr 0.9986)

The posterior model probabilities of the best performing 2000 models are displayed on the ordinate of the second graph. Analytical posterior model probabilities are displayed as the red line; the blue line indicates the MCMC iteration counts.

The graphical analysis in combination with the model summary (Table 2) indicates that convergence was achieved to an acceptable extent, namely 0.9986. “Corr PMP” defines the correlation between iteration counts and analytical posterior model probabilities for the 2000 best models.
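
In the assumed BMS workflow, the two panels of Fig. 2 and the convergence check correspond to the following calls (sketch):

```r
plotModelsize(gas_bma)   # posterior model size distribution vs. the uniform prior
plotConv(gas_bma)        # MCMC frequencies vs. exact PMPs of the best 2000 models (Corr PMP)
```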

Table 2 Summary of BMA

In Table 2, further model and simulation statistics as well as input are shown. The BRIC g-prior was explained before. Large shrinkage statistics can be used to analyse the importance of different model specific g-priors. The results of this computation are probability densities for regression coefficients and posterior model probabilities for the models. In Table 3, the best five models in terms of PMP and the coefficient estimates are displayed.

Table 3 Inclusion of variables (coefficient estimate) and posterior model probability for the best five models (rounded)

The exact PMPs are analytical values, the MCMC PMPs are simulated values. The PMP is proportional to the marginal likelihood of the respective model, i.e. the probability of the data given the model, times the prior model probability. The best model has a PMP of ca. 11 %. In Fig. 3, the marginal distribution densities of the included variables’ coefficients are depicted. The numerical coefficient estimators correspond to the coefficient values in Table 3. The integral of each density sums up to the analytical PIP of the respective regressor, as reported in Table 1.
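
Under the same BMS assumption, the content of Table 3 can be extracted from the fitted object (sketch):

```r
# 0/1 inclusion of each influence in the five best models, with exact and
# MCMC-based PMPs reported in the bottom rows (cf. Table 3).
topmodels.bma(gas_bma)[, 1:5]

# Coefficient estimates of the best models (cf. the coefficient columns of Table 3).
beta.draws.bma(gas_bma)[, 1:5]
```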

Fig. 3 Marginal density distributions of coefficient estimates of the six most relevant explanatory variables: the blue line represents the marginal density; conditional expected values (Cond. EV) are displayed as a red solid line, the median as a green solid line. The red dotted lines represent the double conditional standard deviation (2× Cond. SD)

In order to understand how the distributions are generated, Fig. 4 illustrates the expected values of the best 2000 models generated by the MCMC sampler. These expected values produce the density of the coefficient of the variable CRUDE_PRICE. In other words, every vertical grey line corresponds to the expected value of a model (x-axis); the density describes how many models indicate this value (y-axis). The conditional expected value of the variable, depicted in red (Cond. EV), is the analytical value; Cond. EV (MCMC) is the sampled expected value. Due to the reasonable convergence statistics reported in Table 2, the two values are rather close to one another.

Fig. 4 Marginal density and expected values of models (EV Models) for the variable CRUDE_PRICE with a posterior inclusion probability (PIP) of ca. 30 %. The red line indicates the analytical conditional expected value (Cond. EV), the blue line the sampled conditional expected value (Cond. EV (MCMC))
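
The marginal posterior densities of Figs. 3 and 4 can be drawn per regressor, again under the BMS assumption (sketch):

```r
# Marginal posterior density of the CRUDE_PRICE coefficient (cf. Fig. 4).
density(gas_bma, reg = "CRUDE_PRICE")
```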

Although the interpretation of the results is not the main focus of this case study, some explanatory remarks are provided as an example for other uncertainty assessments with BMA and Probabilistic Uncertainty. Applied to a specific model, these would indeed refer to the reasoning behind assumptions, as discussed in the background section. The posterior inclusion probability (PIP) of an individual influence depends on the explanatory power of the variable over the whole model space. “WTI Price”, with almost 99 % posterior inclusion probability, can be interpreted as a reflection of the linkage of natural gas prices to oil price developments. The highest PIP is found in the variable “USCONNG”, the US natural gas consumption,Footnote 13 which is due to the fact that the response variable, the natural gas price, is the import price for Germany. If the analysed natural gas price were the end consumer price for households or industry, other dependencies could become transparent. Hence, an appropriate choice of the statistical representation of influences, including considerations of data time resolution and geographical scope, is necessary to allow BMA to find relevant influences for a given energy model. Intuitively, the regression model should meet the needs of the subsequently used energy economic model in terms of spatial and time resolution and the inclusion of regional variables; the use of a high number of observations is recommended. For example, if an energy model has a time resolution of 12 time slices a year (summer/winter/spring/fall/day/night/peak), the appropriate time resolution could be monthly data, or even daily data if available.

The posterior model probability (PMP) of ca. 11 % seems to reflect a low capacity of the model to represent the observed data. BMA applications in other contexts relativise that finding. The example presented by Fernandez et al. for an econometric context reports a PMP of 0.3 [45]. The much-cited paper of Hoeting, stressing advantages of BMA with respect to classical statistical methods, uses a medical context and reports a PMP of 0.17 in a dataset on primary biliary cirrhosisFootnote 14 [39]. A PMP of 0.11 for the case study thus seems, even though low, not unacceptable. What this means in terms of uncertainty is discussed in the next section.

The BMA calculation method for the PMP of a statistical model is not new. I chose the method for several reasons. First, it is an approach with a solid mathematical formulation, as cited. Second, it addresses one of the main problems when dealing with an interrelated target system, as energy models do: what influences the exogenous variable (the input variable) of an energy model. Using the BMA method, the modeller is provided with a statistical tool to assess the significance of an influence based on data, and can argue with statistical relevance in contrast to pure intuition. This is not to say that intuition cannot or should not be applied in energy modelling; rather, it can be supplemented by data.

From probability to uncertainty

Communicating an explicit uncertainty assessment, rather than general disclaimers as is current practice, seems necessary because most energy scenarios are presented in great detail in terms of numbers and figures and may thereby suggest a certainty which is possibly not justified. The presented definition of uncertainty departing from probability provides a tool which corresponds to a given energy model’s results in terms of its input variables, its geographical coverage and time resolution. Yet, the method is applicable to very different kinds of energy models (by adjustment of input variables and potential influences) and is thus also flexible. In the next sections, the application is exemplified with the theoretical concept of the previous sections and the BMA calculation.

The posterior model probability (PMP) can be used to derive the structural, model dependent uncertainty. PMPs are generated with respect to all variables (influences) of a given model. Applying Eqs. 12, 14 and 15, we derive uncertainty as follows.

$$ \Psi(A) = 1 - P(A) $$
$$ \Psi\left(M_{\gamma}\right) = 1 - p\left(M_{\gamma} \mid y, X\right) $$
$$ \Psi\left(M_{\gamma}\right) = 1 - \mathrm{PMP}_{M_{\gamma}} $$

with model M γ and γ = {1, 2, 3, 4, 5} indexing the five best performing models

BMA yields that the model with the highest PMP (thus the lowest uncertainty) includes six variables, hence k = 6. In Tables 4 and 5, the results are presented as percentages. The uncertainty is calculated straightforwardly from the PMP, the probability specifying the explanatory power of the model (the extent to which the influences can explain the dependent variable statistically). The model uncertainty of model 1 of approx. 89 % is the uncertainty of the model representing the natural gas price. If model 1 were chosen to represent the natural gas price, the variables would have individual uncertainties of approx. 11 % on average. The individual uncertainties allow an assessment of the explanatory variables and may help to choose the model of lowest uncertainty, or to consciously include influences with limited explanatory power due to other considerations.Footnote 15 To illustrate this, Table 5 depicts model 2, the second-best model in terms of PMP, which includes seven of the initial 17 variables. This model choice yields an uncertainty of influences of approximately 19 % on average and a corresponding model uncertainty of ca. 92 %.

Table 4 Individual uncertainties of variables (influences) and model uncertainty for model 1 (cf. Table 3) with n = 6
Table 5 Individual uncertainties of variables (influences) and model uncertainty for model 2 (cf. Table 3) with n = 7

Posterior inclusion probabilities (PIPs) are used to derive the uncertainty of individual explanatory variables, that is, influences. By means of BMA, it becomes clear which variables contribute explanatory power to a model in terms of increased PMP. Applying formula (12), hereby using the inclusion probability PIP k of variable (influence) k, yields the uncertainty of individual influences. That is,

$$ \Psi(A) = 1 - P(A) = 1 - \mathrm{PIP}_{k} $$

Assuming that every variable is supposed to explain the dependent variable with low uncertainty, it follows that variables which perform poorly in terms of explanatory power (i.e. low PIP) should be excluded. However, it may be the case that an influence should be included in an analysis in spite of its low PIP. The individual uncertainties make transparent at what “cost” such an inclusion comes.
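
Converting the BMA output into Probabilistic Uncertainty is then a one-line operation per quantity; the sketch below again assumes the BMS object from the case study above.

```r
# Model uncertainty (Eq. 15): Psi(M_gamma) = 1 - PMP, here for the five best models.
1 - pmp.bma(gas_bma)[1:5, 1]

# Uncertainty of individual influences: Psi_k = 1 - PIP_k (cf. Tables 4 and 5),
# expressed as percentages.
round(100 * (1 - coef(gas_bma)[, "PIP"]), 1)
```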

The results show that the inclusion of variables which are of little explanatory value to the model increases uncertainty. This uncertainty is captured in a lower PMP. In Table 4, the uncertainty assessment for model 1 (cf. Table 3) is depicted. The individual uncertainties (i.e. the uncertainties of influences) lie between less than 1 % and almost 50 %. This percentage quantifies to what extent the variable is uncertain in its individual contribution to explaining the dependent variable, the natural gas price, in the sample space Ω. This is not to say that these variables in general have the calculated individual uncertainty. The posterior inclusion probability yielding the individual uncertainty quantifies in how many cases of the Monte Carlo simulation of the sample space Ω this variable contributed to an increased posterior model probability (PMP). Within that model, the individual uncertainties further specify the uncertainty of whether a specific influence contributes to explaining the data of the dependent variable.

Please note that one can use any statistical data suspected to influence the input variable of an energy model. I argue that the assumption is not well justified if the probability of these influences explaining the input variable is low and, consequently, its uncertainty is high. For example, if a natural gas price assumption in a fictive energy model (with a scope adequate to the data used for this BMA calculation) were justified by referring to developments in US natural gas consumption (USCONNG), oil prices (WTI_PRICE), electricity consumption (ELC), road sector energy consumption (TRA), GDP, and combustible renewables and waste (RENEW)—model 1—then the statistical uncertainty that these developments impact the assumption would be 89 %. This renders transparent whether assumptions are well justified on statistical grounds and provides an explicit assessment for recipients of energy model results.

From model to prediction

BMA densities can be applied not only for inference but also for prediction based on historical data. The Bayesian regression can be used to calculate predictive densities, similarly to the coefficient densities. The predictive quality can be used to investigate how well the model performs, given that real data are available. For the present exercise, the same BMA parameters as described above are used, with 100,000 draws of the MCMC sampler and a burn-in of 50,000. The 26 observations of the dataset are split in order to predict the last two observations, i.e. the natural gas price in 2008 and 2009.Footnote 16 For a detailed analysis of predictive performance, see Chua et al. [46]; for a prediction exercise with dynamic factor models, see [47].
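
Assuming the BMS package, the out-of-sample exercise can be sketched as follows: re-estimate on the first 24 observations and form predictive densities for the two held-out years (object names as in the earlier placeholder).

```r
# Split: estimate on the first 24 observations, predict 2008 and 2009.
train <- gasdata[1:24, ]
test  <- gasdata[25:26, ]

gas_bma_tr <- bms(train, burn = 50000, iter = 100000,
                  g = "BRIC", mprior = "uniform", mcmc = "bd",
                  nmodel = 2000, user.int = FALSE)

# BMA predictive densities for the held-out observations (cf. Fig. 5 and Table 6).
pdens <- pred.density(gas_bma_tr, newdata = test)
plot(pdens, 1)    # predictive density of observation 25
pdens$fit         # expected values of the predictions
pdens$std.err     # standard errors of the predictions
```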

In Fig. 5, the predictive density and the real value of the natural gas price for observation 25 (2008) are depicted. The response variable on the abscissa is the natural gas price, which was called the dependent variable and is the input variable to an energy model. The predictive density on the ordinate shows where, with respect to the BMA model chosen, the natural gas price assumption should be settled given the influences. The predictions underestimate the natural gas price, cf. Table 6. A detailed analysis could explain whether the two real values are outliers. The associated uncertainty for the model (cf. Table 4) already indicates that prediction results can be expected to be rather poor.

Fig. 5 Expected value and predictive density: the predictive density of the natural gas price (response variable) for predicted observation 25 (2009) is shown. The real natural gas price (realized y) is represented by the black dotted line; double standard errors (2× Std. Errs) of the expected value are indicated as red dotted lines. The expected value based on BMA is shown as a red solid line

Table 6 Prediction of last two observations of the data set

This is a notably coherent approach when key assumptions for input variables have to be chosen in quantitative terms. When storylines for scenarios are defined, the need for quantitative assumptions arises. Predictive densities for input variables can be employed to find numerical values for assumptions based on statistical data. Not only do the implicit assumptions about the variable's context in the target system become explicit (and can be communicated to recipients), but a value for the assumption that respects past observations can also be retrieved. Valuable information from the predictive density includes its shape (well-shaped), the standard errors (0.36 for 2008 and 0.319 for 2009) and the expected value, which could be used to formulate a statement of the form: "With an uncertainty of at least 89 %, the natural gas price lies between 9.97 and 11.44 US dollars per million Btu. This estimation is based on six influences on the natural gas price over a period of 24 years."
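
Continuing the sketch above, such a statement could be assembled directly from the prediction output; the standard error (0.36 for 2008) and the 89 % figure are the values reported in the text, not recomputed here.

# Expected value plus/minus two standard errors, combined with the model
# uncertainty (at least 89 %) derived above, yields the quoted statement.
exp_val <- predict(bma_tr, newdata = test[, -1])   # expected values for 2008 and 2009
se_2008 <- 0.36                                    # standard error reported for 2008

lower <- exp_val[1] - 2 * se_2008
upper <- exp_val[1] + 2 * se_2008
cat(sprintf("With an uncertainty of at least 89 %%, the natural gas price lies between %.2f and %.2f US dollars per million Btu.\n",
            lower, upper))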

From uncertainty of input variables to uncertainty of energy model results

The assessed uncertainty of input variables to energy models can subsequently be used to formulate a lower bound of the associated uncertainty of energy model results, under the introduced premise that the output of an energy model cannot be less uncertain than the input.

$$ \Psi_{\mathrm{Energy\ Model\ Output}} \ge \Psi_{\mathrm{Input\ Variable}} $$
(16)

A statement of the following form could then be made: "Given the input variables used in this energy model, the results and forecast statements derived from it are uncertain by at least 89 %. Relevant influences on the input variables which can be analysed statistically indicate this uncertainty (Footnote 17)." The statement could be further refined for specific input variables and for those energy model results which can be attributed to these inputs. In general, the least certain input variable should determine the uncertainty statement, as formalised below. In particular, if decision support is aspired to and explicit recommendations are formulated, recommended measures should include an uncertainty assessment for the variables relevant to the measure.
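
In compact form, this refinement over k assessed input variables can be written as a formalisation consistent with Eq. 16 (not an additional result):

$$ \Psi_{\mathrm{Energy\ Model\ Output}} \ge \max_{i=1,\dots,k} \Psi_{\mathrm{Input\ Variable}\ i} $$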

It seems necessary to clarify that such an uncertainty assessment should not be perceived as a tool to attack the credibility of energy model results. On the contrary, quantifying uncertainty should render model results more realistic in the light of a constantly changing, interrelated and non-deterministic target system. Any other methodology of decision support for mid- and long-term choices will presumably face comparable difficulties in anticipating developments. Even with (very) high associated uncertainty, energy models can still be of (relativised) value in decision support.

If model-based statements, projections and recommendations are used in policy advice, it is necessary to accompany them with an indication of how dependable they are. Transparency in the modelling process, and its evaluation by recipients, would be improved if energy model results were communicated together with their associated uncertainty (Eq. 15), their context dependency (influences deemed relevant) and their explicit assumptions (numeric assumptions of influences for prediction).

The aim of the case study is to exemplify how a quantitative assessment with the BMA method and Probabilistic Uncertainty for input variables to energy models could work. The presented modelling choices are not meant to be a unique solution; rather, the approach could be developed along these lines. Software solutions other than R could be applied.

Results

In summarising the findings of the case study, the main observation is that by applying Probabilistic Uncertainty and the BMA method, a quantitative assessment of uncertainty in terms of input, parameter and context uncertainty is possible. The embeddedness of input variables in other systems beyond the energy model boundaries is referred to as "context" and can be analysed by a suitable choice of potential explanatory variables, called influences. For the small database used in this case study, significant influences on the natural gas price are natural gas consumption in the USA, the crude oil price, electric power consumption, road sector energy consumption, gross domestic product and the share of combustible renewables and waste in total energy consumption.

The results suggest that the natural gas price as an input variable carries an uncertainty of at least 89 %, given the data used. Clearly, as this case study was not designed for a specific energy model, these findings cannot be transferred directly to an energy model. For an application to a specific energy model, the regional scope, time resolution and sectorial (dis-)aggregation of the influences must be chosen accordingly. It is important to note that, as with all statistical analyses, long observation periods and reliable data sources increase the quality of the assessment, as they reflect reality more accurately. This paper does not focus on the interpretation of results but rather on a presentation of the practical aspects of the method. The case study has an exemplifying character and could be used as a reference for an uncertainty assessment with suitable data for a specific energy model.

However, based on the case study, some general observations can be formulated. It seems that at least some input variables to energy models are highly uncertain with respect to the influences bearing on them. Those influences which can be estimated by statistical methods seemingly explain (at least some) input variables of energy models with little probability. The conjecture is that resorting to general disclaimers, as is common practice, does not reflect the high uncertainty associated with individual energy model results. The presented approach could provide an energy-model-specific, explicit and understandable uncertainty assessment which could accompany energy scenarios and render the associated uncertainties tangible for recipients.

Discussion

According to Walker et al. [22], statistical uncertainty is the least ignorant of all uncertainties involved in models. It thus seems reasonable to use statistical data for uncertainty assessments. Subjective expert knowledge should nevertheless be included, and the method can accommodate this via prior probabilities, as sketched below.
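
A minimal sketch of how such expert knowledge could enter the assessment, assuming the prior-inclusion-probability option of the BMS package (the "pip" model prior with a vector of prior inclusion probabilities); the numbers and the column name NG_PRICE are purely illustrative.

# Expert beliefs about how likely each influence is to matter (hypothetical values).
prior_pip <- c(USCONNG = 0.8, WTI_PRICE = 0.9, ELC = 0.5,
               TRA = 0.5, GDP = 0.7, RENEW = 0.2)

bma_expert <- bms(gas[, c("NG_PRICE", names(prior_pip))],   # dependent variable first
                  burn = 5e4, iter = 1e5, g = "BRIC",
                  mprior = "pip", mprior.size = prior_pip,   # prior inclusion probabilities
                  mcmc = "bd", user.int = FALSE)
coef(bma_expert)   # posterior inclusion probabilities under the expert prior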

The approach's dependence on statistical data clearly limits its applicability if data are scarce or unavailable. Assuming that many influential input variables to energy models are well studied and observed, such as gross domestic product, population growth and the like, the approach should at least be discussed for those variables. The problem of correlation in the statistical data is present and is illustrated in the appendix. Gathering statistical data on influences can be a significant amount of work, and the question arises whether this work is justified. I think it is justified insofar as energy model results aim to be used in decision support, as, for example, the IEA states in its self-presentation [20]. It seems reasonable and good scientific practice to provide an uncertainty assessment if energy scenarios are to be the basis of policy-making, planning and investment decisions. Recipients may hold an unjustified confidence in "projections" or scenarios suggested by studies if uncertainties are not analysed in detail. One key argument of energy models is that if all assumptions hold, the development would be as presented in the energy scenario. The proposed approach can render transparent whether statistical evidence for this argument is lacking. First, it is highly unlikely that all assumptions hold, given that there can be hundreds in an energy model. Second, even if all assumptions hold, the method potentially shows that for at least some assumptions (the assessed input variables), the assumed cause-effect relations are statistically highly uncertain. If not, all the better, for the energy model results would then be accompanied by an uncertainty assessment demonstrating their reliability statistically. To give an example, the specific assumption for the input variable "natural gas price" may be justified with developments in production and consumption patterns (the influences), yet given historical data, it may turn out that these developments did not explain the value. If it can be justified by the influences, the uncertainty assessment provides the statistical evidence.

Another limiting aspect, when it comes to large and complex models, is the parametric nature of the proposed uncertainty assessment. An intelligent choice of the input variables to be analysed seems necessary, given that large models employ thousands of assumptions. This could be done in several ways. For one, the most influential input variables could be chosen, possibly determined by a sensitivity analysis; this approach proved successful in the NUSAP method [23]. Or, if tracking of input variables across the energy model processing is possible, the number of variables to be assessed could be limited that way. Yet another possibility departs from model-based recommendations: these should be based on a specific model result, and that result should be evaluated with respect to its uncertainty. In other words, if a model-based energy scenario is publicly presented, it should be accompanied by an uncertainty assessment for the variables on which the recommendations are based. Typically, these would be the exogenous variables presented in the storylines or scenario definitions, for example, the numerical key assumptions in the "New policies scenario" of the World Energy Outlook by the IEA [37]. It is possible that some input variables are less uncertain than others, in which case the most uncertain input variable should provide the lower bound (with reference to the premise that the output cannot be less uncertain than the input to the energy model). It is not claimed that the presented example of an input variable (the natural gas price assumption) is the most important assumption of an energy system model. It is chosen because it is a typical assumption which is decisive for energy model results, in particular if an optimisation model with an economic rationale is used, as are other price and cost assumptions, be it of fuels or technologies. The uncertainty assessment of input variables is indeed work, statistical as well as computational. However, if one aspires to an uncertainty quantification (rather than a general disclaimer) with a solid method, it cannot be achieved through a single assessment for an entire energy model, neither quantitatively nor qualitatively (the number of expert elicitations required in qualitative assessments may serve as proof). Moreover, a single assessment would not do justice to highly detailed energy models, which might well profit from the possibility of analysing specific input variables in terms of their uncertainty.

Another legitimate criticism is that the mathematical formulation used for the BMA analysis may not be suitable. Indeed, it is very likely that a multiple linear regression, or any other mathematical relationship for that matter (quadratic or polynomial regression, or other non-linear functions), does not depict the bearing of the influences on the dependent variable realistically. But neither do energy models represent the bearing of target system aspects in a realistic way. Hence, if one is willing to accept that the idealising mathematical formulation of an energy model (Footnote 18) can have any significance, one should consequently accept that the mathematical formulation of the proposed uncertainty assessment can have significance. If not, the uncertainty assessment is obsolete, for such a person would not accord any significance to energy model results anyway. However, if the energy model results are granted any credibility by a recipient, that person should be provided with an assessment of that credibility in adequate terms. I do not claim that a multiple linear regression is the best representation for all assessments, but it is for most. For exceptional cases it is possible to adapt the functional relation between the dependent variable (the input variable to the energy model) and the influences deemed relevant. Indeed, the functional relation needs to be evaluated for a specific energy model input and the relevant data. I do, however, claim that effort should be made to find a functional relation (linear or non-linear), for if there is none, it is questionable how energy models, which themselves assume functional relations, can be justified. The presented approach provides a quantitative, explicit tool that is adjustable to different energy model types (their input variables and the influences bearing on them) and flexible in the definition of the functional mathematical relation of influences on an input variable.

The computational effort for the presented case study was reasonable, taking approximately 10 minutes of calculation time with the software used. All graphs are standard functionalities of the R BMS package (see the sketch below). Data collection and preparation are indeed more time consuming. It is suspected that energy modellers already have much relevant statistical data from reliable sources at their disposal for calibration purposes.
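
For illustration, typical BMS graphs and diagnostics for a fitted object (here the bma object from the sketch above) include the following; the regressor name passed to density() is again a placeholder.

image(bma)                       # sign and inclusion of regressors across the best models
plotModelsize(bma)               # posterior distribution of model size
plotConv(bma)                    # convergence of the MCMC model sampler
density(bma, reg = "WTI_PRICE")  # posterior coefficient density of a single influence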

The method, like any statistical method, assesses probabilities based on past events. This implies that the expectation and focus of future developments is in a sense limited. However, the energy models operate within the same set of assumptions and expectations, which is why this method does not determine the future but rather provides expectations based on the past. A statistical analysis always focuses on the past; the implicit assumption is that relations which held in the past are at least likely to hold in the future. This need not be the case. But given that energy models are also calibrated with and based on statistical data, the analysis is consistent with practices in energy modelling, less subjective than expert elicitation and more specific than general disclaimers.

The method can be criticised for an idealisation inherent to all modelling techniques. The assumption that the sample space Ω can be known in its entirety does not hold from a realistic point of view: it is not possible to know and account for all thinkable and hence possible alternative events (which by definition comprise uncertainty). But given that an energy model is confronted with the same arbitrarily defined sample space, computing a projection or forecast with an energy model would then be just as meaningless as assessing its uncertainty. However, if energy models are used, it becomes meaningful to assess their uncertainty within the model's sample space. The approach has the distinct advantage of analysing the relevance of potential influences in statistical terms together with subjective expert knowledge (by means of prior specification), in contrast to a purely subjective uncertainty assessment such as expert elicitation. Experts may well have a clear intuition about which influences affect an input variable of an energy model; analysing these influences in terms of BMA and Probabilistic Uncertainty may render some intuitive over- or underestimations transparent.

The energy models may have time horizons of decennia, and the question arises whether the uncertainty assessed by the proposed approach is stable for energy model results in the long-term future. Indeed, it is not: uncertainty is expected to be higher the further in the future a model-based statement is settled. If, for example, an energy study promotes a specific energy system state in the year 2040, the uncertainty is expected to be higher than the uncertainty one year from now. Uncertainty of assumptions over time means that if a model has a modelling horizon and the model results are displayed as time series, the related uncertainty of the results must increase towards the end of the horizon. As stated above, this is due to the facts that model-inherent simplifications and generalisations propagate errors and that assumptions regarding the mid- or long-term future cannot be verified or falsified with tests available at present. Note that the related uncertainty increases the further in the future an assumption is made. Let A be an assumption with a time index x, x = 1, 2, 3, …, n, where the time index is to be read as some regular time interval, e.g. hour, month or year. Further, assume that a model comprises different assumptions A, each of which is time dependent. An assumption A, such as the natural gas price, one month from now depends on different influences; let those influences be named I_j, indexed by j = 1, 2, 3, …, m, since there is more than one influence. For this example, influences such as extraction rates, transportation infrastructure, political stability of producing countries and the like may be relevant. Let A_x be a function of the I_{x,j}, that is, an assumption A at a given point in time x depends on the influences I_j at that time x. The assumptions about the influences, e.g. the extraction rate I_{x,j}, are with increasing time based on less information, which translates into uncertainty. This uncertainty also increases for A, as it is a function of the I_{x,j}. It becomes clear that the uncertainty of assumption A heavily depends on the availability of information regarding the influences I_j. If no information whatsoever is available, the influences are pure assumptions, which can nevertheless be arranged via the predictive density of the presented approach, thereby respecting historical data. Generally, uncertainty can, but need not, increase the further the assumption lies in the future, due to unforeseeable developments. The proposed lower bound of uncertainty captures the fact that an uncertainty analysis based on statistical data may give an evaluation of the lowest uncertainty indicated by statistics, and that some energy model results may be more uncertain.
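
This time-indexed description can be condensed into a hedged formalisation of the verbal argument above, with f standing for the otherwise unspecified dependence:

$$ A_x = f\left(I_{x,1}, I_{x,2}, \dots, I_{x,m}\right), \qquad x = 1, \dots, n $$

so that the uncertainty of A_x inherits the uncertainty of the influences I_{x,j}, which typically grows, though it need not, the further x lies in the future.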

Conclusions

The presented Bayesian approach for uncertainty quantification in terms of assumption probabilities allows an uncertainty assessment of input to energy models.

In the first section, the question of where uncertainty is present in energy models and why it should be addressed was discussed. This discussion should raise awareness that energy model results are highly dependent on input assumptions. The practice of "general disclaimers" seems unsatisfactory, especially if model-based statements are used for policy advice and decision support with far-reaching social, economic and environmental consequences.

The next section provided a definition of uncertainty. The probabilistic uncertainty measure defined allows quantifying uncertainty in a manner coherent with probability theory. The distinct advantage over commonly used qualitative assessments is an unambiguous representation of uncertainty which is understandable without tedious reading of explanatory notes. "The probability that a statement might not be true is at least Ψ (e.g. 89 %)" seems more explicit and understandable than the commonly used "likely", "very likely" and the like (cf. [48]). Intentionally, the approach relies on established methodologies (BMA) and conceptual frameworks (probability theory) to derive such quantitative statements. Another advantage is a consistent and transparent quantitative interpretation of implicit assumptions as predictive densities for the assumptions of input variables to energy models.

Finally, the presented case study aimed at exemplifying how the approach could work in practice. Indeed, specifications of the BMA, such as the chosen sampling routine or the prior choice, should be discussed individually for a specific input variable of an energy model. The case study illustrates how the approach could add to the tools of uncertainty quantification in energy modelling.

Further research should include practical specifications of the approach and legitimate inference from uncertain energy model results. A comparison of the different energy models could be carried out to evaluate their associated uncertainty.

Notes

  1. Typical model horizons are short term: hours up to days, mid-term: up to 2030, long term: up to 2050 or 2100

  2. Assuming the “Law of Demand”, i.e. the quantity demanded depends negatively on price, ceteris paribus. This is a very useful and convenient theory, as long as the ceteris paribus assumption is not ignored, and it is understood that complements as well as substitutes exist for most traded goods, as Bierens and Swanson note [55].

  3. In contrast to aleatoric uncertainty, for example, natural variability, which can be modelled by stochastic techniques cf. [56].

  4. This step could involve a sensitivity analysis in order to reduce the amount of input variables when large models are evaluated. See also limitations and discussion of the approach.

  5. Note that by conditions 1 and 2, it is closed under finite intersections.

  6. Other measures, such as a Popper-measure would be thinkable, allowing for conditionalization on zero-probability events. The benefit of such a definition is left to future work.

  7. The proof is omitted but can be found in most lecture notes on probability and measure theory, e.g. [57, 58].

  8. In the case study, this influence is represented by the explanatory variable NG_rent, the difference between the value of natural gas production at world prices and total costs of production.

  9. Data and inferences from them are not the focus of this text, rather a discussion of the methodology and its application in the field of energy economics is sought. A coherent database with as many observations as possible should be applied in an uncertainty assessment for a specific input variable of an energy model.

  10. In the following, the terms regressors, explanatory variables, and influences are used interchangeably.

  11. This translates to the “BRIC” g-prior in the modelling exercise.

  12. For other options such as reversible-jump sampler for BMS model package in R, see [38].

  13. See Appendix.

  14. Which is used for medical treatment, nota bene.

  15. For example, based on expert knowledge one could choose to include an influence that is expected to become more relevant in future than it was in the past.

  16. For source information refer to the appendix.

  17. Of course, the influences should be detailed too.

  18. For example, the formulation of the relations in the target system in linear terms as a Linear Program does.

References

  1. Culka M (2014) Applying Bayesian model averaging for uncertainty estimation of input data in energy modelling. Energy Sustainability Soc 4(1):21. doi:10.1186/s13705-014-0021-9

  2. Sloughter JM, Gneiting T, Raftery AE (2010) Probabilistic wind speed forecasting using ensembles and Bayesian Model Averaging. J Am Stat Assoc 105(489):25–35. doi:10.1198/jasa.2009.ap08615

  3. Nowotarski J, Raviv E, Trück S, Weron R (2014) An empirical comparison of alternative schemes for combining electricity spot price forecasts. Energy Economics 46:395–412. doi:10.1016/j.eneco.2014.07.014

  4. Loulou R, Remme U, Kanudia A et al (2005) Documentation for the TIMES Model: PART I

  5. Schrattenholzer L (1981) The energy supply model MESSAGE. RR/International Institute for Applied Systems Analysis, 81-31. International Institute for Applied Systems Analysis, Laxenburg, Austria

  6. Labys WC (1982) Measuring the validity and performance of energy models. Energ Econ 4(3):159–168. doi:10.1016/0140-9883(82)90015-9

  7. Allaire D, Willcox K (2014) Uncertainty assessment of complex models with application to aviation environmental policy-making. Transp Policy 34:109–113. doi:10.1016/j.tranpol.2014.02.022

  8. Hamarat C, Kwakkel JH, Pruyt E (2013) Adaptive Robust Design under deep uncertainty. Technol Forecast Soc Chang 80(3):408–418. doi:10.1016/j.techfore.2012.10.004

  9. Isaac AM (2014) Model uncertainty and policy choice: a plea for integrated subjectivism. Stud Hist Philos Sci Part A 47:42–50. doi:10.1016/j.shpsa.2014.05.004

  10. Gass SI, United States. National Bureau of Standards, United States. Energy Information Administration et al. (1980) Validation and assessment issues on energy models: proceedings of a workshop held at the National Bureau of Standards, Gaithersburg, Maryland, January 10-11, 1979. NBS special publication. Dept. of Commerce, National Bureau of Standards: for sale by the Supt. of Docs., U.S. Govt. Print. Off

  11. Munasinghe M, Meier P (1993) Energy policy analysis and modeling. Cambridge studies in energy and the environment. Cambridge University Press, Cambridge [England], New York, NY, USA

  12. van Notten P (2005) Writing on the wall: scenario development in times of discontinuity. Universal-Publishers, Boca Raton, FL

  13. Lindgren M, Bandhold H (2002) Scenario planning: the link between future and strategy. Palgrave Macmillan, Basingstoke

  14. Rothman DS (2008) Chapter Three A Survey of Environmental Scenarios. In: Environmental futures—the practice of environmental scenario analysis, vol 2. Elsevier; pp 37–65

  15. Önkal D, Sayım KZ, Gönül MS (2013) Scenarios as channels of forecast advice. Technol Forecast Soc Chang 80(4):772–788. doi:10.1016/j.techfore.2012.08.015

  16. Dieckhoff C (2011) Energieszenarien: Konstruktion, Bewertung und Wirkung—“Anbieter” und “Nachfrager” im Dialog

  17. Weimer-Jehle W (2006) Cross-impact balances: a system-theoretical approach to cross-impact analysis. Technol Forecast Soc Chang 73(4):334–361. doi:10.1016/j.techfore.2005.06.005

  18. ENTSO-E (2014) Scenario Outlook & Adequacy Forecasts 2014-2030., https://www.entsoe.eu/publications/system-development-reports/adequacy-forecasts/Pages/default.aspx. Accessed 10.03.2016

  19. International Energy Agency (2011) World energy outlook 2011. International Energy Agency, Paris. ISBN 978 92 64 12413 4. Available online at http://site.ebrary.com/lib/alltitles/docDetail.action?docID=10520301

  20. (2015) IEA—About WEO. http://www.bp.com/en/global/corporate/energy-economics/energy-outlook-2035/energy-outlook-downloads.html. Accessed 04 Dec 2015

  21. BP (2016) BP Energy Outlook 2035., http://www.bp.com/en/global/corporate/energy-economics/energy-outlook-2035/energy-outlook-downloads.html. Accessed 10.03.2016

  22. Walker WE, Harremoes P, Rotmans J et al (2005) Defining uncertainty: a conceptual basis for uncertainty management in model-based decision support. Integr Assess 4:1

  23. Sluijs JP van der, Potting J, Risbey J et al. (2002) Uncertainty assessment of the IMAGE/TIMER B1 CO2 emissions scenario, using the NUSAP method: Dutch National Research Programme on Global Air Pollution and Climate Change. Report No: 410 200 104 (2002)

  24. Kloprogge P, van der Sluijs JP, Petersen AC (2011) A method for the analysis of assumptions in model-based environmental assessments. Environ Model Softw 26(3):289–301. doi:10.1016/j.envsoft.2009.06.009

  25. Refsgaard JC, van der Sluijs JP, Højberg AL et al (2007) Uncertainty in the environmental modelling process—a framework and guidance. Environ Model Softw 22(11):1543–1556. doi:10.1016/j.envsoft.2007.02.004

  26. Morris MD (1991) Factorial sampling plans for preliminary computational experiments. Technometrics 33(2):161–174

  27. Häder M (2000) Die Delphi-Technik in den Sozialwissenschaften: Methodische Forschungen und innovative Anwendungen. ZUMA-Publikationen. Westdt. Verl., Wiesbaden

  28. Bolger F, Stranieri A, Wright G et al (2011) Does the Delphi process lead to increased accuracy in group-based judgmental forecasts or does it simply induce consensus amongst judgmental forecasters? Delphi Tech 78(9):1671–1680. doi:10.1016/j.techfore.2011.06.002

  29. Horridge M, Parmenter BR, Pearson KR (2000) ORANI-G: a general equilibrium model of the Australian economy. Centre of Policy Studies

  30. Welsch H (2008) Armington elasticities for energy policy modeling: evidence from four European countries. Energy Economics 30(5):2252–2264. doi:10.1016/j.eneco.2007.07.007

  31. Baroni M (2013) Model description: World Energy Model Documentation 2013 Version., http://www.worldenergyoutlook.org/media/weowebsite/2013/WEM_Documentation_WEO2013.pdf. Accessed 21 Nov 2015

  32. Frigg R, Bradley S, Machete R et al (2013) Probabilistic forecasting: why model imperfection is a poison pill. In: Andersen H, Dieks D, Gonzalez WJ et al (eds) New challenges to philosophy of science, vol 4. Springer, Netherlands, pp 479–491

  33. Dodds PE, Keppo I, Strachan N (2015) Characterising the evolution of energy system models using model archaeology. Environ Model Assess 20(2):83–102. doi:10.1007/s10666-014-9417-3

  34. Hofer E, Kloos M, Krzykacz-Hausmann B, Peschke J, Woltereck M (2002) An approximate epistemic uncertainty analysis approach in the presence of epistemic and aleatory uncertainties. Reliab Eng Syst Saf 77(3):229–238. doi:10.1016/S0951-8320(02)00056-X

  35. Pilavachi PA, Dalamaga T, Rossetti di Valdalbero D et al (2008) Ex-post evaluation of European energy models. Energy Policy 36(5):1726–1735. doi:10.1016/j.enpol.2008.01.028

  36. Bezdek R, Wendling R (2002) A half century of long-range energy forecasts: errors made, lessons learned, and implications for forecasting. J Fusion Energ 21(3-4):155–172. doi:10.1023/A:1026208113925

  37. International Energy Agency (2015) World energy outlook 2015. OECD, Paris

  38. Zeugner S (2015) R package BMS—Bayesian Model Averaging., http://bms.zeugner.eu/. Accessed 10.03.2016

  39. Hoeting JA, Madigan D, Raftery AE et al (1999) Bayesian Model Averaging: a tutorial. Stat Sci 14(4):382–417

  40. Raftery AE, Madigan D, Hoeting JA (1997) Bayesian model averaging for linear regression models. J Am Stat Assoc 92(437):179–191

  41. Draper D (1999) Bayesian Model Averaging: a tutorial: comment. Stat Sci 14(4):405–409

  42. George EI, Clyde M (2004) Model uncertainty. Statist Sci 19(1):81–94. doi:10.1214/088342304000000035

  43. Eicher T, Papageorgiou C, Raftery A (2007) Determining growth determinants: default priors and predictive performance in Bayesian model averaging. Center for Statistics and the Social Sciences Working Paper no. 76

  44. Ley E, Steel MF (2012) Mixtures of g-priors for Bayesian model averaging with economic applications. J Econ 171(2):251–266. doi:10.1016/j.jeconom.2012.06.009

  45. Fernández C, Ley E, Steel MFJ (2001) Model uncertainty in cross-country growth regressions. J Appl Econ 16(5):563. Available online at http://www.jstor.org/stable/2678594

  46. Chua CL, Suardi S, Tsiaplias S (2013) Predicting short-term interest rates using Bayesian model averaging: evidence from weekly and high frequency data. Int J Forecast 29(3):442–455. doi:10.1016/j.ijforecast.2012.10.003

  47. Koop G, Potter S (2004) Forecasting in dynamic factor models using Bayesian model averaging. Econ J 7(2):550–565

  48. Mastrandrea MD, Mach KJ, Plattner G et al (2011) The IPCC AR5 guidance note on consistent treatment of uncertainties: a common approach across the working groups. Clim Chang 108(4):675–691. doi:10.1007/s10584-011-0178-6

  49. OECD (2012) Main economic indicators. OECD Publishing

  50. The World Bank DataBank (2016) Data | The World Bank DataBank | Explore Databases. Available online at http://databank.worldbank.org/data/databases.aspx, last updated 17.02.2016, accessed 10.03.2016

  51. U.S. Energy Information Administration (2016) U.S. Energy Information Administration (EIA) - Data. Natural Gas Data. Ed. by U.S. Department of Energy. Available online at http://www.eia.gov/naturalgas/data.cfm, last updated 10.03.2016, accessed 10.03.2016

  52. BP (2015) Statistical Review downloads | About BP | BP Global., http://www.bp.com/en/global/corporate/about-bp/energy-economics/statistical-review-of-world-energy/statistical-review-downloads.html. Accessed 08 Dec 2015

  53. Bundesamt für Wirtschaft und Ausfuhrkontrolle BAFA (2011) BAFA: Ausgewählte Statistiken. BAFA - ENERGIE - ERDGAS, Bundesamt für Wirtschaft und Ausfuhrkontrolle BAFA. Available online at http://www.bafa.de/bafa/de/energie/erdgas/ausgewaehlte_statistiken/index.html, last updated 23.09.2014, accessed 10.03.2016.

  54. EUROSTAT (2015) Eurostat statistics: COMM/ESTAT., http://ec.europa.eu/eurostat. Accessed 21 Nov 2015

  55. Bierens HJ, Swanson NR (2000) The econometric consequences of the ceteris paribus condition in economic theory. J Econ 95(2):223–253. doi:10.1016/S0304-4076(99)00038-X

  56. Smith RC (2013) Uncertainty quantification: theory, implementation, and applications. Comput Sci Eng Series

  57. Skorokhod AV, Prokhorov IV (2004) Basic principles and applications of probability theory. Springer, Berlin, London

  58. Pollard D (2002) A user’s guide to measure theoretic probability. Cambridge series in statistical and probabilistic mathematics. Cambridge University Press, Cambridge, New York

Acknowledgements

The author is a funded member of the Helmholtz Research School on Energy Scenarios (ESS) by the Institute of Philosophy of Karlsruhe Institute of Technology (KIT). The author acknowledges support by Deutsche Forschungsgemeinschaft and Open Access Publishing Fund of Karlsruhe Institute of Technology for this publication.

Author information

Corresponding author

Correspondence to Monika Culka.

Additional information

Competing interests

The author acknowledges SpringerOpen's guidance on competing interests and declares that she has no competing interests.

Author’s contribution

This is an original text from the author with no other contributions than those cited in the text.

Author’s information

Monika Culka is a doctoral candidate at Karlsruhe Institute of Technology (KIT), in a shared programme of the Institute of Philosophy and ITAS. She has worked with the energy system model TIMES and in consultancy. She has initiated the interdisciplinary research initiative New Interdisciplinary Collaboration Association (NICA).

Appendix

The data used are freely available from the sources listed below. However, consistent data with more observations of the influences deemed relevant are recommended if the focus lies on interpreting the results.

Table 7 Explanation of abbreviations of influences for the case study and source of data
Table 8 Dataset used for the case study
Table 9 Input data—basic statistics for the uncertainty assessment
Fig. 6

Scatterplot of potential explanatory variables (influences)

Table 10 Correlation matrix of influences computed by R with method Kendall (red shading indicates strength of correlation)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Culka, M. Uncertainty analysis using Bayesian Model Averaging: a case study of input variables to energy models and inference to associated uncertainties of energy scenarios. Energ Sustain Soc 6, 7 (2016). https://doi.org/10.1186/s13705-016-0073-0
