Forecasting the Rate of Technical Breakthroughs

For a major energy company, Ventana consultants built a dynamic simulation model to forecast the rate of technical progress for a $100+ million/year development program and support proactive program management. The model supplemented traditional program management tools and forecasting methods, which were funded at much higher levels. The goal of the development program was to improve the performance of an existing device by a factor of three over a two-year period. Because the device was already at the leading edge along several scientific dimensions, the program required multiple technical breakthroughs in areas such as materials science, gas dynamics, and system reliability.

The dynamic simulation model represented the strategic interactions among numerous schedule, cost, and technology factors. The energy company regarded the model as interesting, but speculative, because it integrated both scientific and management issues into the same model. The model was calibrated using both hard data (material strength measurements) and soft data (the effect of progress in one technical area on another area) from two earlier development programs as well as data taken from the first few months of the new technology initiative. Once calibrated, the model was used to make an initial program forecast. This forecast consisted of predictions for the dates on which the development team would complete integrated performance tests as well as the measured performance achieved by the test device.
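The broad shape of such a model can be suggested with a toy sketch: performance is treated as a stock whose inflow slows as remaining technical headroom shrinks, and a single parameter is calibrated against early measurements. Everything here is an illustrative assumption; the structure, parameter names, and data are invented and bear no relation to the actual Ventana model or the program's real measurements.

```python
# Hypothetical stock-and-flow sketch of technical-progress forecasting.
# All numbers and names are invented for illustration only.

def simulate(months, effort, difficulty, p0=1.0, dt=0.25):
    """Euler-integrate device performance as a stock.

    Performance grows with effort, but each gain raises the bar: the
    closer performance gets to the program target, the slower further
    breakthroughs arrive (diminishing returns).
    """
    frontier = 3.0 * p0              # program target: 3x improvement
    perf = p0
    trajectory = [perf]
    for _ in range(int(months / dt)):
        headroom = max(frontier - perf, 0.0)
        rate = effort * headroom / difficulty   # inflow to the stock
        perf += rate * dt
        trajectory.append(perf)
    return trajectory

# Naive calibration: choose the difficulty value that best matches
# early (invented) measurements from the first months of the program.
observed = [1.0, 1.12, 1.22, 1.30]   # months 0..3, hypothetical data

def fit_difficulty(candidates, effort=1.0):
    def err(d):
        sim = simulate(months=3, effort=effort, difficulty=d, dt=1.0)
        return sum((s - o) ** 2 for s, o in zip(sim, observed))
    return min(candidates, key=err)

best = fit_difficulty([5.0, 10.0, 15.0, 20.0])
forecast = simulate(months=24, effort=1.0, difficulty=best)
```

A real model of this kind would couple many such stocks (performance in each technical area, test completions, schedule pressure) and calibrate against both hard and soft data, but the pattern of calibrate-then-forecast is the same.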

The initial model results shocked and disappointed program managers. The results showed the program making good technical progress for about six months, with the predicted performance coming out close to the middle of the uncertainty band shown in the official program plan. The predictions also showed, however, that the number of integrated tests actually performed would fall significantly below the official program forecast. Beyond six months, both the device’s performance and the number of integrated tests fell progressively further below plan. Fifteen months into the program, the forecast showed the proven device performance falling below the lower bound of the official uncertainty range.

These model projections were judged unacceptable, although the reasons for the performance shortfall appeared credible. Development managers were bullish and insisted that they could fix the existing problems and meet the ultimate performance target. Because of internal ‘political’ problems, program managers elected to discontinue development of the simulation model. They decided to depend strictly on official forecasts based on management consensus and detailed technology theory for such things as materials, gas dynamics, etc. The results of the simulation model were revealed to only a handful of people, and they did not influence program management.

A year later, the development program was canceled. The device performance was falling significantly below official projections, and integrated tests were being completed much later than planned. When the predictions of the dynamic simulation model were reviewed, it was discovered that they were hauntingly accurate. Not only were the performance projections accurate to within a few percent of the measured test results, but the dates on which tests were completed varied by no more than a month from the predicted dates.