Background


Experience from a number of large-scale investment projects has shown that the traffic forecasts on which decisions to implement the projects were based have often been insufficient and sometimes misleading (see, e.g., Flyvbjerg et al., 2003 and 2005). Overly optimistic demand analyses have been documented in particular for railway projects, especially urban rail. There are also examples of demand being underestimated in situations where growth is not desirable. This often occurs in connection with proposed road investments in urban areas (Næss, Flyvbjerg & Buhl, 2006), where the forecasted improvements in accessibility vanish because increasing demand, and hence new congestion, is overlooked (Nielsen & Fosgerau, 2005).

 
Walker et al. (2003) developed a methodology for classifying the elements that contribute to uncertainty in transport evaluations along the dimensions of a) the data, b) the model parameters and c) the models themselves. Within each of these dimensions, they categorised uncertainties as 1) recognised ignorance (overlooked causal relationships systematically bias the models), 2) scenario uncertainty (the models are run with unrealistic or uncertain assumptions concerning external variables), and 3) statistical uncertainty (which is difficult to quantify in a large model system).
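To make this classification concrete, the following minimal sketch represents the two-way scheme as a small data structure. The class and attribute names are our own, and the example entry is an illustrative assumption rather than an item taken from Walker et al. (2003).

```python
# A minimal representation of the classification described above; names and the
# example entry are illustrative assumptions, not taken from Walker et al. (2003).
from dataclasses import dataclass
from enum import Enum

class Dimension(Enum):
    DATA = "data"
    PARAMETERS = "model parameters"
    MODEL = "model itself"

class Nature(Enum):
    RECOGNISED_IGNORANCE = "recognised ignorance"
    SCENARIO = "scenario uncertainty"
    STATISTICAL = "statistical uncertainty"

@dataclass
class UncertaintySource:
    description: str
    dimension: Dimension
    nature: Nature

# Example: a causal relationship missing from the demand model is recognised
# ignorance located in the model itself.
example = UncertaintySource(
    description="induced traffic not represented in the demand model",
    dimension=Dimension.MODEL,
    nature=Nature.RECOGNISED_IGNORANCE,
)
print(example)
```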


The hypothesis of the project is that by carrying out before-and-after analyses of specific projects, it is possible to reveal systematic patterns in erroneous forecasts, which in turn makes it possible to pinpoint overlooked causal relationships in existing models and to identify typical flaws in the scenario assumptions. Forecasts for projects that were not approved will also be analysed. Projects that are evaluated with simplified, over-optimistic models tend to be approved with a higher likelihood than projects that are evaluated with more advanced models. Accounting for this selection bias in before-and-after studies may reveal the better quality of the models used in evaluations that turn out negative and hence lead to a project being rejected.


Given that recognised ignorance and scenario uncertainties are overcome, another challenge is to quantify the statistical uncertainties. Surprisingly little research has been done in this respect. Whilst uncertainties in specific input variables can be analysed with standard statistical models, and uncertainties in specific parameter estimates can be deduced from the statistical estimation, little is known about the overall uncertainty of a given transport model. This is even more complicated if the model iterates between demand and supply sub-models, e.g. to calculate congestion effects.
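To illustrate why such overall uncertainty is hard to derive analytically, the sketch below shows a stylised demand-supply loop of the kind referred to above. The elastic demand curve, the BPR-type volume-delay function and all parameter values are assumptions chosen for the example, not taken from any specific model; the point is only that the equilibrium output is defined implicitly by the iteration, so input uncertainties cannot simply be propagated through a closed-form expression.

```python
# Minimal sketch of a demand-supply iteration; the elastic demand curve, the
# BPR-type congestion function and all parameter values are illustrative
# assumptions, not taken from any of the models discussed in the text.

def travel_time(flow, free_flow_time=10.0, capacity=1000.0, alpha=0.15, beta=4.0):
    """Volume-delay (supply) sub-model: travel time grows with congestion."""
    return free_flow_time * (1.0 + alpha * (flow / capacity) ** beta)

def demand(time, base_demand=1200.0, elasticity=-0.8, reference_time=10.0):
    """Demand sub-model: trips fall as travel time rises relative to a reference."""
    return base_demand * (time / reference_time) ** elasticity

def equilibrium(tolerance=1e-6, max_iterations=200):
    """Iterate between the demand and supply sub-models until they agree."""
    flow = demand(travel_time(0.0))            # start from uncongested conditions
    for _ in range(max_iterations):
        new_flow = demand(travel_time(flow))
        if abs(new_flow - flow) < tolerance:
            return new_flow, travel_time(new_flow)
        flow = 0.5 * (flow + new_flow)         # damped update to aid convergence
    return flow, travel_time(flow)

if __name__ == "__main__":
    flow, time = equilibrium()
    print(f"equilibrium: {flow:.1f} trips at a travel time of {time:.2f} minutes")
```

In a full-scale model the same structure appears with thousands of zones and links, which is why simulation-based approaches, as discussed below, are the natural way to assess the overall uncertainty.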


Zhao & Kockelman (2001) carried out the first full overall uncertainty estimation of a 4-step model (the type of model most commonly used in practice, despite severe ignorance bias). They simulated uncertainties in the model parameters and ran the model several times in a simulation environment. Hugosson (2005) evaluated the effect of uncertainty distributions of the parameters in a demand model on the overall model results, i.e. a partial analysis, but on a real transport model (the Swedish National model). She used a more advanced method (the bootstrap; see, e.g., Davidson et al., 2005) than the pure Monte Carlo approach of Zhao & Kockelman (2001). De Jong et al. (2005) is a comprehensive literature study of uncertainty in transport models (synthesised in De Jong et al., 2007). Even though few journal articles have been published on the subject, they identified a number of applied studies and reports.
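A minimal sketch of this kind of Monte Carlo propagation is given below. The toy binary logit mode-choice model, the assumed normal distributions of the coefficients and all numerical values are illustrative assumptions, not the models or figures used in the studies cited above; in a bootstrap variant, the coefficient draws would instead come from re-estimating the model on resampled data.

```python
# Minimal sketch of Monte Carlo propagation of parameter uncertainty; the toy
# mode-choice model, the parameter distributions and all numbers are assumptions.
import math
import random
import statistics

def rail_share(time_coeff, cost_coeff, rail_time=35.0, rail_cost=30.0,
               car_time=25.0, car_cost=45.0):
    """Binary logit share of rail, given (uncertain) time and cost coefficients."""
    u_rail = time_coeff * rail_time + cost_coeff * rail_cost
    u_car = time_coeff * car_time + cost_coeff * car_cost
    return math.exp(u_rail) / (math.exp(u_rail) + math.exp(u_car))

def monte_carlo(draws=10_000, seed=1):
    """Sample coefficients from assumed distributions and collect model outputs."""
    rng = random.Random(seed)
    shares = []
    for _ in range(draws):
        time_coeff = rng.gauss(-0.05, 0.01)   # mean and std. dev. are assumptions
        cost_coeff = rng.gauss(-0.02, 0.005)
        shares.append(rail_share(time_coeff, cost_coeff))
    shares.sort()
    return (statistics.mean(shares),
            shares[int(0.05 * draws)],        # approx. 5th percentile
            shares[int(0.95 * draws)])        # approx. 95th percentile

if __name__ == "__main__":
    mean, low, high = monte_carlo()
    print(f"rail share: mean {mean:.3f}, 90% interval [{low:.3f}, {high:.3f}]")
```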


Nielsen & Knudsen (2005) carried out an uncertainty evaluation of a large route choice model (from Nielsen et al., 2002). As far as we know, this is the first published uncertainty evaluation of this model component (as opposed to the demand uncertainties mentioned above). Nielsen & Knudsen (2006) extended this work to an overall simulation of uncertainties in a transport model. The project will continue this line of research.


A core method in current European cost-estimation practice is the “Lichtenberg method” (Lichtenberg, 2000), which will be investigated and improved in a sub-project on the uncertainty of cost estimates. Salling et al. (2007) developed a simulation method for uncertainty in cost-benefit analyses, which can build upon statistical distributions of the uncertainties of the input models. This will be the basis for a project on uncertainty in cost-benefit analyses.
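As an illustration of how a simulation-based approach to cost-benefit uncertainty can work in principle, the sketch below propagates assumed cost and benefit distributions through a simple net-present-value calculation. The lognormal cost distribution, the normal benefit distribution, the discount rate and all figures are assumptions made for the example; they are not taken from the Lichtenberg method or from Salling et al. (2007).

```python
# Minimal sketch of simulating uncertainty in a cost-benefit analysis; all
# distributions, the discount rate and the numbers are illustrative assumptions.
import math
import random
import statistics

def npv(annual_benefit, investment_cost, years=30, discount_rate=0.04):
    """Net present value of a constant annual benefit against an up-front cost."""
    discounted_benefits = sum(annual_benefit / (1.0 + discount_rate) ** t
                              for t in range(1, years + 1))
    return discounted_benefits - investment_cost

def simulate(draws=10_000, seed=1):
    """Draw cost and benefit scenarios and summarise the resulting NPV distribution."""
    rng = random.Random(seed)
    npvs = []
    for _ in range(draws):
        cost = rng.lognormvariate(math.log(1000.0), 0.25)   # right-skewed cost (MEUR)
        benefit = rng.gauss(70.0, 20.0)                      # annual benefit (MEUR)
        npvs.append(npv(benefit, cost))
    share_positive = sum(1 for value in npvs if value > 0) / draws
    return statistics.mean(npvs), share_positive

if __name__ == "__main__":
    mean_npv, p_positive = simulate()
    print(f"mean NPV ≈ {mean_npv:.0f} MEUR, P(NPV > 0) ≈ {p_positive:.2f}")
```

Reporting the probability of a positive net present value, rather than a single point estimate, is the kind of output such a simulation-based approach can add to a conventional cost-benefit analysis.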