
THE WORDS OF THE PROPHETS – TECHNOLOGY FORECASTING AND ITS RELEVANCE TO DECISION MAKING

  • Professional forecasters have played a key role in guiding investment within the high technology sector.
  • To what extent then can they be blamed for the failure of so many new technologies in the marketplace?
  • What can actually be predicted with any real confidence?
  • What are the systems and methodologies employed by forecasters today, and what is their real validity?
  • In the face of so many bad guesses, why do major firms continue to rely on forecasters?

One morning in the summer of the year 723, not of the Christian era but of the years elapsed since the founding of Rome, a woman sat in a drafty cavern in the mountains of Greece, preparing herself for her audience with a young man who was at once the great-nephew of Julius Caesar and the grandson of a slave. The young man’s name was Octavian, and he would shortly become the first Roman emperor.

The woman, who was not much older than the thirty-one-year-old Octavian, fidgeted in her chair as her attendant darkened the orbits of her eyes with kohl and expertly aged the woman’s smooth cheeks with applications of ceruse and other cosmetics. The woman, whose real name was known to no one, and who simply called herself Pythea, watched in a mirror as she slowly assumed the appearance of a crone of sixty. “Enough,” she said after her slave had affixed a prosthetic mole to her forehead. Then the Oracle approached a brazier from which dense blue smoke billowed. She breathed the smoke, throwing her shawl over her face so that none of it should escape. When she had taken several breaths, she sighed. “Send him in,” she told her slave.

The slave obeyed, and a slight, blond youth approached her, simply dressed and inscrutable in his expression to all but the Oracle.

“I wish to know--” he began.

“If your fleet will prevail against that of Marc Antony,” the Oracle interrupted.

Octavian gasped. His admirals had been sworn to secrecy.

Pythea strode to the brazier and took another breath. She exhaled smoke in Octavian’s face. “In the battle between the whale and the shark, the shark will triumph,” she intoned, “provided,” she added, “that the shark is hungry.”

“I am starving,” said Octavian.

“And you will never be full,” said Pythea. “Now leave. And don’t forget to deposit the tenth part of a golden talent for the goddess.”

Various Oracles, each assuming the name of Pythea, presided over the shrine at Delphi from before the days of Socrates through the Roman Empire and into the early Christian era. Heads of state regularly consulted the Oracles, who always provided answers that were simultaneously cryptic and suggestive and often showed an uncanny insight into the affairs of the questioner. Modern historians believe that the Oracles maintained a network of agents and operatives extending all over the Mediterranean world and beyond, and that the Oracle herself was merely the visible tip of a massive private intelligence agency. And without question the Oracle was frequently accurate in her short-term predictions of world events.

It was all a matter of due diligence really. Take Pythea’s remark about the fleets. The Oracle had placed spies in the shipyards serving both of the opposing forces, and she was apprised of the critical lack of maneuverability on the part of the massive, armor-plated, ten-banked war galleys Marc Antony favored and which represented the leading-edge naval technology of the day. Thus she was able to make an accurate prediction as to their defeat at the hands of Octavian’s fast triremes—technology forecasting, 31 BC.

Two Millennia on, the Same Old Same Old?

Things are not so different today. Both governmental agencies and private corporations consult contemporary oracles, entities with names like RAND Corporation, Battelle, SRI International, the Institute for the Future, and the Futures Group. In addition, many of the world's largest corporations maintain oracles on the premises in the form of internal forecasting and planning divisions. If other opinions are desired, plenty of academicians with specialties in forecasting are willing to consult, as are numberless industry analyst firms without any specific concentration on forecasting or futuristics. In short, there’s no lack of individuals and organizations who will prognosticate for a fee.

Forecasting, technological and otherwise, has become an established industry, one with special pertinence to the technology sector in view of the grave uncertainties of the present economic climate. But are the professional forecasters of today, with their trappings of scientific rigor and their multiple regression analysis techniques, really any more trustworthy than the spooky Oracle of Delphi with her smoker’s croak, her cannabis-induced ravings, and her network of spies?

The Management of Uncertainty—or Its Exploitation

Professional forecasters are not shy in proclaiming their own prescience—that is after all their stock in trade—but when pressed few will actually claim the ability to predict the future, at least not with anything approaching total accuracy. Most prefer to discuss their services in terms of managing uncertainty, following trend lines beyond the present, examining possible futures (often termed “scenarios” in the trade), stating probabilities, and generally augmenting the decision making process with specialized intelligence. Seen in the light of these fairly modest claims, forecasting then becomes merely an extension of planning, which, if it is reasoned, must always be based upon some expectations about the future. Still, many if not most professional forecasters do lay claim to specialized techniques, if not to unique insights, and attempt to assert a distinctive professional competence going beyond the planning capabilities to be found in most executive suites. Professional forecasters are saying in effect, we have our own way of thinking about the future and our own methodologies for coming to conclusions. Our pronouncements reflect our own rigorous training and our mastery of what is itself a new technology, the technology of forecasting. We can help you because we command tools known only to the adept, and because we possess the mental agility to perceive connections where more conventional minds cannot.

Larry Vanston is one of those forecasters, indeed a principal at Technology Futures (Austin, TX), which provides yearly technology predictions for the Telecommunications Forecasting Group. He puts it this way: “We bring method and a track record of success to the planning process. You’ll find most decision makers, particularly in the financial industry, rely on intuition when it comes to new technology. That’s insufficient.”

To the executive contemplating the introduction of an entirely new type of service offering, say, mobile games, or faced with the decision to buy optoelectronic switches now or wait for new developments in all-optical switching, such statements are reassuring. “I may not know the answers now, but I know how to determine them,” the forecaster suggests.

But does he?

The growth of the forecasting industry through the decade of the nineties did nothing to prevent all the ill-conceived investments in new technology that contributed so much to the pervasive downturn in the equipment market. Even giants like Intel, Alcatel, IBM, and JDS Uniphase, who can surely afford the most intensive research efforts from outside specialists, and who also maintain their own well-staffed internal forecasting arms, providing prophecy on tap, so to speak, were not exempt from the downturn. Indeed, they were just as guilty of major misapprehensions regarding the feasibility and marketability of new technology as were the wildest startups. So many new technologies, from softswitches to tunable lasers to broadband fixed wireless, foundered in the marketplace that one cannot help but ask why the forecasters failed to deter much of the misguided investment. Surely the futurist industry as a whole has to be considered at least somewhat remiss.

Danny Briere, CEO of TeleChoice (Richmond, VA), a major analyst firm, is prepared to go further in pronouncing judgment on the pay per prophecy business. “Forecasters are very much to blame for the cycles of boom and bust because they are given to hyping so many dubious technologies. I think the forecasting industry has lost a lot of credibility.”

Paul Kellett, a principal at Pioneer Consulting (Boston, MA), agrees, but argues that the numerous failures are largely due to unqualified practitioners rather than to any failure of the formal techniques. “Most forecasting in high tech is being done by analyst firms who perform other kinds of studies,” he states, “not by specialists in forecasting methodologies. Over the years I’ve interviewed hundreds of analysts for job openings here, and I’ve yet to find one that had a firm grasp of the mathematical techniques for trend extrapolation. Their predictions are just straight line extensions of existing trends. Of course they’re wrong.”

Brock Hinzmann of SRI Consulting Business Intelligence (Menlo Park, CA), a leader in the field of commercial futuristics, makes a similar defense of specialized forecasters, claiming, “true forecasting specialists like ourselves aren’t that much used in your industry, particularly not by startups. It’s not that the tools aren’t available, they’re just not being employed.”

Interestingly, the Federal government of the United States, in particular, the Defense Department, has always been the most avid consumer of forecasters’ studies, and has valued them sufficiently to assign high levels of security classifications to the documents incorporating them. How could an organization that commands technologies that are years ahead of those in the public domain possibly subscribe to theoretical approaches that appear unsound? A consideration of how the forecasting industry came to be may supply some answers.

The Origins of the Trade, or Inching toward Omniscience

Systematic forecasting, as opposed to mere speculation about the future, had its origins in the era of violence beginning with the fascist and communist revolutions of the twenties and thirties and culminating in the Second World War. Stalinist Russia, Nazi Germany, and Fascist Italy all launched ambitious plans for the modernization of their respective societies that involved detailed forecasts extending years into the future, while the United States, with its considerably less intrusive New Deal, initiated the U.S. National Resources Committee to make similar projections, although ones without the same force to influence national economic planning. At least in the U.S., technology forecasts were conducted in tandem with economic forecasting by the government, while in Nazi Germany a considerable amount of forecasting in the area of military technology was undertaken toward the end of World War II as the Reich struggled to bring out a panoply of innovative “wonder weapons”.

In the immediate aftermath of the War, each branch of the American armed services set up its own internal forecasting operation and made predictions concerning military technology extending decades into the future. Such studies remain classified, and neither the methodology employed nor the accuracy of the forecasts can be determined. Though if the efforts of civilian government agencies during the same period are any indication, reliance on expert opinion rather than on computational techniques was the norm.

The for-profit forecasting business we know today had to await the conclusion of World War II to emerge, and its first manifestation appears to have been the formation of RAND (Research AND Development) in 1945 in the U.S. At first a sort of quasi-government agency under the joint management of the Army Air Forces and Douglas Aircraft, RAND quickly established its independence, though for many years it sold its services almost exclusively to the U.S. government.
Those services consisted of analysis as well as forecasting, and covered an extremely wide range of topics having to do with politics, sociology, economics, public health, technology, and later the environment. RAND became the quintessential “think tank”, chiefly known for its devotion to public policy issues and its cold-blooded forecasts on weapons technology, though in fact the company did produce highly focused studies on commercial issues for business customers, studies which became an increasingly large component of its business as the years proceeded.

RAND was unique among the pioneering forecasting organizations in that its staff devised actual forecasting methodologies, the most widely publicized of which was the Delphi technique, discussed in brief later in this piece. RAND frequently attempted to apply such techniques to technology assessment, evidently with varying results. Partly as a result of its arcane and highly formalized approach to forecasting, and partly due to the official secrecy in which so many of its activities were shrouded, RAND acquired a considerable mystique over the years. But because so much of its work was classified or confidential, the effectiveness of its methodologies, at least as practiced at RAND, cannot be established by outside observers with any certainty.

RAND’s near monopoly on forecasting expertise could not last, and many RAND alumni went on to found their own organizations, such as the Institute for the Future and the Futures Group, where they disseminated and publicized RAND forecasting techniques. Most of these spinoffs were much more involved in business forecasting than was RAND, and helped to establish forecasting as a business rather than a governmental initiative.

While the crucial importance of RAND to the development of the forecasting movement is undeniable, modern forecasting has other progenitors as well. Bertrand de Jouvenel, author of The Art of Conjecture and the leader of the French Futuribles movement, had a major influence on methodology, as did the American academician Harold Lasswell, the father of policy studies as a recognized academic discipline. The Club of Rome, a not-for-profit international association of economists and scientists, has also played a significant role in bringing forecasting to the attention of the public and to business leaders. Nor should the importance of freelance prophets such as Alvin Toffler and John Naisbitt be underestimated, though arguably neither has made significant contributions to formal forecasting methodology.
Technology forecasting as a distinct practice area had its origins in the nineteen sixties, with academicians such as Jacob Schmookler and Everett Rogers laying the groundwork in the form of history-of-technology and material culture studies exploring the mechanisms for the diffusion and adoption of technological innovations. The first textbooks on technology forecasting techniques appeared in the seventies, as did the first firms with extensive technology forecasting practices. Clearly, it’s a young discipline.

Or is it a discipline? Are the forecasting techniques in use today in any way comparable to truly scientific methods in methodological rigor or in accuracy?

Who to Believe?

I put that question to every forecaster interviewed for this article, and the answers were almost uniformly evasive. Nobody claims to be accurate all of the time, but everyone claims to have a high proportion of right to wrong predictions. A couple of the firms, namely Technology Futures and Advanced Forecasting (Cupertino, CA), even provided lists of published predictions that have panned out; however, no one produced a contrasting list of erroneous predictions that would allow such a ratio to be established. As with the Oracle at Delphi, one ends up taking a lot on faith.

True, there are forecasting departments at a few universities, and an academic discipline of sorts has grown up around the practice of forecasting, but the academicians are not in the business of actually making forecasts, so the scholarly literature on the subject is not all that helpful to prospective users of forecasting services, especially in view of the fact that the techniques in use by any individual commercial forecaster may not necessarily have roots in academia. The statements of the commercial forecasters are of even less help because their work is of course exempt from peer review and is normally confidential as well. A prospective client simply can’t gain much insight into the details of the forecasting firm’s approach or of the validity of said approach from published sources, unless, as is the case with Technology Futures and perhaps one or two others, the firm chooses to disclose something of its methodology in published monographs.

The issue becomes especially acute when a firm such as Advanced Forecasting or Janus Research Group (Appling, GA) claims to have developed a proprietary technique of unique predictive power. “You just have to form a long-term relationship with us,” offers Tom Nicholson, a principal with Janus, when I attempt to query him on the firm’s methodology.
“Our established clients are happy.”

And who might those be?

“That’s confidential,” answers Nicholson.

In the light of such statements, and they are by no means unique, it is hard to escape the conclusion that the forecasting industry, by and large, is selling its customers a pig in a poke. What purports to be a science is not in the main being subjected to scientific validation, though there are, as we shall see, certain predictive techniques for certain specified phenomena that do appear to hold water. Nevertheless, a number of luminaries associated with the better known forecasting firms have been sufficiently afflicted with hubris to publish books of prophecies that do allow us to assess their forecasting acumen. The published results are not reassuring.

Forecasters on Parade

In 1973 Olaf Helmer, one of the pioneering technology forecasters and author of several textbooks on forecasting methodology, was invited to contribute an essay to a technology forecasting compendium entitled An Introduction to Technological Forecasting, edited by Joseph Martino, another renowned pioneer in the field. In an essay called “Prospects of Technological Progress” Helmer made fifteen predictions for the year 2000, most of which dealt with technological innovation. These included pervasive use of robots in the home, routine organ replacement for life extension, effective immunization against all bacterial and viral diseases, extensive undersea mining, and the elimination of cash transactions. Helmer, who co-invented the Delphi technique, gives no indication as to how he arrived at any of his mistaken conclusions.

Ten years later, in a book entitled Looking Forward, Helmer again predicted for the year 2000, this time stating that fusion reactors would be in commercial operation, an effective missile defense system would be established, and machines that could replicate themselves would exist. Again, no hint of how these notions were developed.
Similarly amazing wrongheadedness is evident on almost every page of the once famous and now infamous The Year 2000, published in 1967 by Herman Kahn, another RAND alumnus. The Year 2000 did not deal exclusively with technological innovation, but it did accord the subject considerable space, embarrassing its author as profoundly in this regard as Helmer’s books did him. Kahn missed the Internet, mobile telephony, advanced composite materials, and recombinant DNA, but predicted the personal computer and the use of lasers in communications, although he supposed that free-air laser systems would predominate and that most would involve orbital satellite platforms. He also wrongly predicted that suborbital rockets would be used for international air travel, that nuclear weapons would spread widely through the advanced industrial nations, and that Russia or the United States would invent a “tsunami machine” for drowning enemy combatants! Kahn claimed to have developed his scenarios based on extensive analysis of existing trends, and certainly he showed an impressive generalist’s knowledge of contemporary developments in science and technology. But that didn’t prevent him from erring far more often than not.

Years later, in 1983, SRI International researchers Paul Hawken, James Ogilvy, and Peter Schwartz (the last of whom is now chairman of Global Business Network, a major forecasting consultancy) also had a go at predicting the year 2000, doing so in the form of seven possible scenarios published in the book Seven Tomorrows. That’s seven chances to get it right, and none of the seven is even close. Schwartz et al. mention a computer model behind their predictions, but fail to elucidate its basis or design. Given its fallibility, one is not inclined to be overly curious about the details.

Of course an objection might be made that I am concentrating on only the most egregious failures in the field of technology forecasting, but in fact the books mentioned are representative. They were written by the leading forecasters of the time, some of whom are still active, and there are no countervailing examples, that is, no collections of predictions that are mostly correct. Individual forecasters may succeed in anticipating some new technologies, but invariably, from what I’ve seen, their overall visions of the future depart considerably from what actually came to be. With one exception considered at length later in this article.

What makes these failures particularly troubling is that the abovementioned authors—unlike the tabloid prophets of the nineteen fifties and sixties such as Criswell and Jeane Dixon—were all men with impressive academic credentials and a thorough command of the most sophisticated mathematical modeling techniques for devising forecasts. They were also men who had consulted at the highest levels of the government and on the most pressing issues of national security. We may never know precisely what these individuals told the U.S. Joint Chiefs of Staff, but in the pronouncements they made to a public readership, they erred frequently and grievously. It gives one pause.

It gives one further pause to realize that no really new forecasting techniques have been devised since about 1980, at least none that has been made public, though obviously greater computing power is available to perform database searches and run mathematical models. Is the exercise of forecasting hopeless, then? Has anyone in the field been able to demonstrate any sizable percentage of correct predictions?

That depends, as we shall see, upon the methodology employed and upon the scope of the predictions themselves.

Predict What?

Forecasting can involve almost anything--changes in social norms, economic growth or decline, political developments, even artistic trends, and of course technological innovation--though because the services of forecasters are expensive, commissioned forecasts tend to be limited to matters of public policy or grave economic concern. As it happens, technological forecasting accounts for a major portion of the whole futuristics business, though I’ve been unable to arrive at any precise percentage. Incidentally, technology forecasting involves many of the same techniques used in other types of forecasts, in other words, it’s not a completely separate discipline.

Within the field of technology forecasting certain aspects of technology figure far more prominently in the forecasts than others, and a number of key distinctions become useful, nay essential.

Perhaps the most basic distinction is between invention and innovation. An invention in forecaster’s argot represents the emergence of a new technology in the conceptual or experimental sense, the Wright Brothers’ airplane flown at Kitty Hawk, North Carolina, in 1903 being a good example. An innovation, on the other hand, is the embodiment of the invention in a production form; in the case of the airplane, the invention did not become an innovation until the next decade when pioneering aviation companies like Curtiss and De Havilland began to build and sell airplanes and established a real market for them.

Or, to take an example from the field of telecommunications, the invention of the cellular telephone can be traced back to 1948 when Bell Labs enunciated the principle of frequency reuse from cell to cell and laid the conceptual foundations for cellular networks. The innovation occurred with the establishment of the first cellular service in Saudi Arabia in 1977 almost thirty years on.

The distinction between invention and innovation is of more than academic interest for a number of reasons. First and most obviously, innovations are where the money is made. But equally important in forecasting is the fact that the methodologies for predicting the prospects of an innovation are far more reliable than those applicable to inventions; in other words, you can’t place much confidence in forecasts when you’re still in the R&D stage.

A further example will help to illustrate this point. In 1947, the Hungarian-born physicist and Nobel laureate Dennis Gabor invented the hologram, publishing a report on his concept the following year. Gabor’s work was almost purely conceptual, and its actual realization required a laser light source, which did not appear until 1960 (laser holography itself followed within a few years). Now let us ask ourselves, could the passage from invention to innovation possibly have been predicted in 1948, given the intellectual resources of an organization such as the then newly established RAND Corporation?

Almost certainly not. Forecasting when and whether a laser light source would be developed would have been extremely difficult at the time even for the geniuses at RAND because several intermediate milestone events had to occur before the fabrication of a functioning laser amplifier could take place. The basic research and development work hadn’t even begun on concocting such a device, so even the most astute and knowledgeable scientist could only have guessed. A few years later one could have inferred from the somewhat analogous maser that lasers were possible, but in 1948 masers hadn’t even been designed, let alone demonstrated. And forecasting the uses to which holograms might be put would have plunged the forecaster into yet more impenetrable mysteries because without the finished device to study one could scarcely comprehend all of its properties.

Incidentally, the same problems might have occurred had RAND evaluated cellular telephony in the same year because the control functions essential for executing cellular handoffs would have been almost impossible to realize with the hardwired analog circuitry of the day and instead required a number of breakthroughs in digital computing, a technology which scarcely existed in even the most basic form. These breakthroughs simply could not have been predicted with any assurance back then because many of the core technologies for achieving them were not in existence nor even conceived.

As we shall see in a following section, there is no indication that our ability to predict major technological and scientific breakthroughs is any better today than in the year 1948. What is better understood is the diffusion of innovations in the marketplace, but such an understanding does not take us very far into the future because it deals only with products already in commercial form.

Yet More Subtle Distinctions

In projecting progress in technology it is important to establish some means of assessing the relative magnitudes and degrees of distinctiveness among inventions, because that, after all, will indicate their impact in the marketplace.

Some inventions are obviously more important than others, though their ultimate importance may not be evident at the time of their first appearance. Inventions such as the steam engine, the locomotive, the Bessemer steel converter, the self obturating cartridge for firearms, the internal combustion engine, the electrical generator, the electric motor, the incandescent bulb, machine tooling, the telephone, the automobile, the vacuum tube, television, and the transistor were clearly of seminal historical importance. Today the Internet Protocol and the personal computer are manifestly in the same category, though what other inventions from the last quarter century are truly seminal remains to be determined.

Some seminal inventions are seminal by virtue of the fact that they profoundly change the lives of their adopters. Electric lights, indoor plumbing, mechanized transportation, modern medicine, the telephone, motion pictures, television, and the Internet fall into that category, what I would call transformational innovations. Others are seminal because they are foundational, that is, they enable a host of additional inventions. The steam engine, the Bessemer converter, petroleum refining, the internal combustion engine, the electrical generator, the vacuum tube, and the transistor were clearly foundational because literally thousands of new kinds of devices proceeded from each. In some cases, one foundational invention can spawn another as in the case of the microprocessor and the personal computer, and often multiple foundational technologies are required before a particular invention can become an innovation—as in the case of the automobile where mass produced steel, petroleum refining, and the internal combustion engine all had to appear on the scene before modern automobiles themselves could emerge.

Foundational technologies are germinal, changing the nature of the entire technology landscape. They represent technological fault lines.

Foundational technologies, which are arguably the most important kind, are not always recognized as such at the time of their introduction. Lee de Forest had little notion of the uses to which the triode vacuum tube would be put when he invented it in 1906. Likewise, the manifold applications for alternating current or digital computers were scarcely envisioned at the time of their first appearance. What are some of the seminal foundational technologies of the present? Possibly fuel cells and carbon nanostructures, but we can’t know that with absolute certainty yet. Which makes accurate long-term forecasting doubly difficult.

Identifying seminal inventions of the first type, i.e. those which most profoundly change our lives, can be difficult as well. Personal computers, when they were introduced in the seventies, were distinguished largely by their packaging and form factor; their circuitry wasn’t innovative in the least, nor were the applications associated with them. Which is perhaps why so many individuals in the computer industry did not attach much importance to them or even regard them as new inventions, for that matter. In contrast, most of the transformational innovations of the late nineteenth century such as indoor plumbing and central heating had obvious appeal. Arguably, the identification of transformational innovations is becoming more difficult over time.

But not only can seminal inventions be difficult to identify, they can take a very long time to manifest their importance. The German physicist Oskar Heil patented the concept of the transistor in the nineteen thirties. Twenty years passed before a transistor appeared in the marketplace, and that used the Bell Labs bipolar design rather than Heil’s earlier field effect configuration. The French scientific amateur Denis Papin invented the steam engine, the steam boat, and the steam carriage at the beginning of the eighteenth century. Years passed before steam engines found any practical application at all, while commercial steam boats lay a hundred years in the future and steam locomotives a hundred and twenty. Finally, fuel cells were invented in the 1830s. And they have yet to find a major market.

Are seminal inventions more common today than in earlier times? Conventional wisdom has it that the pace of technological progress is constantly accelerating, but in fact really seminal inventions may not be occurring at a greater rate than during peak periods of invention in the past. After all, what foundational technology of equivalent importance to the transistor has been invented in the past forty years? What transformational technology equivalent to indoor plumbing has occurred in the same period? Much as some new economy types might try to argue that the PalmPilot is a more important innovation than say the high speed printing press, such a view is simply short sighted. The tremendous fallacy of the late nineties was that every new product introduction would assume seminal importance. But clearly most product introductions fail as they always have.

Nor should we assume on the basis of Moore’s Law that accelerating performance improvements are the norm across the range of modern technologies. The design of reciprocating engines has been subject to only incremental improvements over the past 100 years, as has been the case with electrical batteries, and in fact with most other established manufactured product categories. Integrated circuits appear to be a special case, though one with fairly profound implications for technology in general because of the growing use of them in control functions and “smart” appliances.

So how have professional forecasters fared in identifying seminal technologies? Generally very poorly. Few of the published forecasts from the early seventies identified any of the seminal innovations of the present, and today most forecasts simply concentrate on established product categories and make no attempt to anticipate really significant changes ahead. Something to think about before committing organizational resources based on some forecaster’s opinion.

The Preoccupation with Replacement

Another distinction that is frequently made in the field of technological forecasting is between so-called replacement technologies and those that are fundamentally new—the latter being termed radical technologies by some. An example of the former would be Dacron fiber which might be substituted for silk or linen. Examples of the latter would be the packet network and the PDA. This distinction is so central to forecasting today that I am devoting a whole section to it.

Replacement of one technology by another has engaged the professional forecaster far more than has predicting the reception for entirely unique inventions. Replacement tends to follow a fairly predictable course, and allows the forecaster to prognosticate with greater assurance. The problem is that distinguishing between a replacement technology and one that is entirely novel can be difficult, and such distinctions as are made are often artificial.

For one thing, designated replacement technologies rarely fulfill their assigned role perfectly. Take those artificial fibers like Dacron, rayon, nylon, and their more advanced successors such as Kevlar, Spectra, and Vectran. Initially synthetic fibers were seen as straight substitutes for natural textiles, but over time they were drawn and woven in novel ways or combined with various kinds of synthetic resins to create new structural materials such as epoxy impregnated fiberglass and carbon fiber for aircraft construction, or woven or quilted aramid fiber for high performance tires and ballistic armor. Simply to track such a material within a legacy market such as clothing, as has been the case with many studies, is to miss its real importance, and in the case of the manufacturer commissioning the forecast, to miss crucial profit opportunities.

Or take the cellular telephone, initially introduced in the U.S. as a replacement for the IMTS car phones. Again a concentration on replacement, which is all too frequently what forecasters do, would be to have missed most of cellular’s true market potential.

On the other hand, inventions seen as without precedent may actually function as replacements in the marketplace. A good example is provided by the telegraph when it was introduced into France. France already had a system of semaphore signaling allowing the transmission of coded messages across the country in a matter of hours. While the principle of operation of the telegraph was utterly dissimilar to that of the semaphore it was actually a substitute technology in this context.

Forecasters’ preoccupation with the idea of replacement and their frequent failure to identify true replacement technologies correctly has resulted in a plethora of studies that fail to comprehend the market potential of many new technologies, and in the promotion of a model of technological change that is fundamentally flawed and highly misleading. That the forecasting industry as a whole is captive to this model only adds to its problems in dealing with the uncertainties inherent to any predictive exercise.

Technology Diffusion, the Forgotten Dimension

Before we proceed to the subject of forecasting methodologies, one additional concept should be introduced, this in regard to innovations, that is, inventions that are already products. And that concept is the diffusion of innovation.

Many technological forecasts proceed on the assumption that technological advances essentially sell themselves. They do not. The diffusion of new technology takes place within communities of users based upon individual responses to opinion leaders within those communities and to communications in the mass media. Different communities react in various ways to innovations, based upon their own internal norms and customs, and product innovations can fare very differently in disparate communities.

During the late nineties telecom boom issues of how a particular innovation might diffuse through the user populations were largely ignored. Viral marketing would take care of the market dissemination problem, it was assumed. I remember one promoter of a new gaming console actually telling me, “people will simply have to buy it. They’ll be left behind if they don’t!”

Anyone who predicts market growth for an innovation at its introduction without any precise notions as to how it will diffuse is guessing. Since forecasters only very rarely undertake market research relating to diffusion, most are in fact guessing.

One final comment on diffusion: The hardest things to predict are generational changes in basic social norms and values that can open new markets for innovations. Who could have possibly predicted in the nineteen forties that women would comprise a major market for all kinds of strength and conditioning equipment such as crosstrainers and Stairmasters? For that market to develop required not only the invention of microprocessor controlled displays but an acceptance of extreme fitness activities as appropriate feminine behavior. And that such would occur was not at all obvious.

Diminishing Certainties

Forecasting the prospects for a given innovation rather than the invention from whence it came is a well established quantitative process involving accepted mathematical formulae of proven predictive power (described immediately below), and this is the area where forecasters tend to get it right—if they’re careful and know what they’re about. Forecasting the fate of an invention is, by industry consensus, much more difficult and usually less systematic, though there are patterns that seem to be predictable such as the appearance of an automotive innovation in production vehicles approximately five years after it has been tried on the racing circuit. The earlier examples of the hologram and the cellular telephone demonstrate the magnitude of the uncertainties facing the forecaster confronted with the truly novel technologies embodied in a major invention. Myriads of enabling inventions have to become innovations themselves before the invention in question can reach the innovation stage, and predicting each of those in turn is an almost hopeless task.

Some students of technological innovations have postulated a theory of developmental milestones to indicate how close an invention is to becoming an innovation, but no one has ever succeeded in defining milestones that would apply across a range of innovations. “Sometimes,” says Jules Duga, a senior analyst at Battelle, “you can plot the development of an invention along a time line and indicate what has to be done at each stage, and that can provide you with some basis for prediction. But,” he adds, “as an outside consultant you have no control over the decision making process, so you can’t know when and if the necessary steps will be taken.”

Dave Olesen, director of Intel Communications Group Market Analysis, further observes, “there’s a whole different dynamic when an invention is still in the lab than when it’s out in the marketplace, and I don’t think we can say with any certainty where a research project is going to end up.”

Predicting the appearance of an invention is more difficult still, though forecasters certainly attempt this feat. Generally such predictions will be most successful when the invention in question represents the synthesis of a body of established engineering knowledge and requires no fundamental scientific breakthroughs for its realization, and where there are concerted development efforts already underway to produce the invention. The Wright Brothers’ airplane was just such a synthesis and was predicted by Octave Chanute, a Franco-American aviation engineer who published two books in the eighteen nineties summarizing virtually all of the research in flight up to that time. The Wrights read Chanute, as did all aspiring aviators of the day, and they used his books as reference works, a clear and wonderful case of the self-fulfilling prophecy. On the other hand, predicting the hologram in the early nineteen forties would probably have required the same scientific breakthrough made by Gabor.

Can scientific breakthroughs themselves be successfully predicted? James Bright, author of Practical Technology Forecasting, argues that all such predictions are mere speculation, but some forecasters, notably the above mentioned Peter Schwartz, believe that anticipating scientific discoveries is possible. Schwartz appears to base that belief on the historical theory of scientific thought enunciated in Thomas Kuhn’s classic The Structure of Scientific Revolutions (1962), which gave the world that overused term, the “paradigm shift”. I am skeptical, however. Kuhn’s cyclic theory of scientific conceptualizing appears to fit pretty accurately the revolution in modern physics in the early twentieth century, but it has limited explanatory power when applied to other major advances in scientific thought, and it is by no means universally accepted. And it has only been used to explain past paradigm shifts, never to predict those about to occur, at least not with any success.

Steve Millett, a senior researcher at Battelle, takes a more practical approach in arguing that scientific breakthroughs can be forecast. “I think experts in the field know where most of the theoretical work is taking place and where the leaders are in their research,” he offers.
Perhaps so, but no one has ever been able to predict the precise form of transformations in scientific thought extending far into the future, in part because the evidence prompting revisions in basic theory generally isn’t yet available. I would go further and say that any technological forecast based upon assumptions about the state of science years into the future is almost certain to be wrong. Witness all of the wrong predictions in the seventies and eighties based on the notion of imminent breakthroughs in artificial intelligence, or the long and dismal history of purported cancer cures. View with grave suspicion anyone predicting a computer able to communicate in natural language within the next decade. It may happen, but it will require a radically revised understanding of how humans process meaning—in other words, a major scientific breakthrough, an event which simply can’t be predicted.

Finally, many forecasters, most particularly those associated with prestigious forecasting institutes like Battelle and SRI International, have been most comfortable performing technology forecasting within the context of “big picture” projections which look at the evolution of the entire society embracing the predicted innovations. Such mega forecasts are understandably not much requested by companies striving to reconstruct their business plans, and they are frequently wildly inaccurate as well, though Battelle claims some success, impossible to confirm, in performing such studies for the government. But they do reflect the underlying reality that technological change does not occur in a vacuum and is utterly interdependent with economic, social, political, and environmental changes, which are themselves extremely hard to forecast with any accuracy.

In any event, at this point one is going far beyond an investigation of technological innovation and attempting to construct a future history. And that takes one very quickly into some very deep and treacherous philosophical waters. Is the world really determinate? A century ago most individuals of a scientific cast of thought would have answered in the affirmative and would have asserted that given enough data one could predict future events with absolute confidence. But in the twenty-first century, with quantum indeterminacy an established principle in modern physics, and chaos theory finding increasing employment in explaining large scale phenomena in both the natural and social sciences, who can say with any certainty that there is a predetermined future to predict? The corporate decision maker may be inclined to dismiss such questions as the worst kind of idle speculation, but in fact they are basic. Unless the future is actually predestined, all forecasting is highly provisional. Logically, it can’t be otherwise.

The Tools of the Trade

Some prominent futurists such as Olaf Helmer, co-creator of the famous Delphi technique and cofounder of the Institute for the Future, maintain that at least some forecasting procedures constitute legitimate scientific techniques. Whatever the validity of such assertions, any detailed consideration of them would involve forays into epistemology and the philosophy of science too lengthy to undertake here. Instead I shall simply enumerate the major approaches and leave the reader to determine how closely they approximate scientific experimentation.

Forecasting techniques fall into a handful of basic categories which vary in number according to the expert quoted, but which do not by the most generous estimate exceed a dozen or so. In most cases the same techniques are used to predict population growth, the adoption rates for vaccination in developing countries, or the introduction of a new technology, though there are techniques that seem to me peculiar to technology forecasting. Here in this general listing I include only those techniques mentioned in multiple sources, and only those with particular relevance to technology forecasting.

Bear in mind that the various techniques are not competitive with one another but rather are complementary. Most forecasters use more than one technique when making a prediction.

Extrapolation

Trend extrapolation is arguably the most useful occupant of the forecaster’s toolkit, and I would hesitate to employ any forecaster who is not extremely well versed in this area. In this procedure the forecaster identifies a process, say the adoption of a particular innovation such as the smartphone, and makes predictions as to how that process will proceed based upon progress to date. Typically the forecaster relies upon a mathematical formula in plotting the curve defining the process, and thereby assumes an underlying uniformity to the unfolding of the process.

To doubters this approach may seem either patently obvious or, conversely, misguided in assuming that the future will simply mirror the past, but it is trend extrapolation that has in fact provided the forecasting industry with most of its limited successes. Abundant evidence exists that once an innovation begins to be adopted, the pace of adoption will tend to conform to what mathematicians call a logistic curve, or to a family of similar curves.

Such curves assume a familiar S shape with a region of moderate slope at the beginning, a steepening straight line region in the middle, and then a flattening of the curve at the top. They represent the entire adoption cycle over time. If one can determine where a particular innovation is at a particular moment along the trend line, then one can predict where it will be at a given point in the future; in other words, one can actually predict its market potential! And that such predictions are frequently though not invariably accurate is well established in the literature.
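
To make the mechanics concrete, here is a minimal sketch in Python, using invented penetration figures and a generic logistic form rather than any particular firm’s model, of how an analyst might fit an adoption curve to early data and project it forward:

```python
# A minimal sketch with made-up data: fit a logistic adoption curve to early
# penetration figures and project it a few years ahead.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, saturation, midpoint, rate):
    """S-shaped adoption curve: market penetration at time t."""
    return saturation / (1.0 + np.exp(-rate * (t - midpoint)))

# Hypothetical observations: years since launch vs. fraction of the market reached.
years = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
penetration = np.array([0.02, 0.05, 0.11, 0.22, 0.38, 0.55])

# Reasonable initial guesses keep the least-squares fit well behaved.
params, _ = curve_fit(logistic, years, penetration, p0=[1.0, 6.0, 0.8])

# Project the fitted curve a few years beyond the data.
for t in (8, 10, 12):
    print(f"year {t}: projected penetration {logistic(t, *params):.2f}")
print(f"estimated saturation level: {params[0]:.2f}")
```

With data this early in the cycle, of course, the fitted saturation level is only loosely constrained, a difficulty taken up below.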

Harvard Business School professor Clayton Christensen, author of the best-selling The Innovator’s Dilemma (revised edition, 2000), claims that such curves also apply to performance improvements within a given technology over time, citing examples within the disk drive category, including successive designs of read-write heads employing ferrite, thin film, and magneto-resistive materials. According to Christensen, each design followed a logistic curve of performance improvement, only to be replaced by another design when the curve began to flatten. It should be noted, however, that in Christensen’s example the overall improvement curve for disk drives is not logistic, but closer to the Moore’s Law pattern of continuous straight line improvement.

I should also mention that back in the seventies Joseph Martino plotted a logistic curve for improvements in energy efficiency for residential illumination over time extending from candles up through neon lights, a trend line which would seem to belie the Christensen example and suggests that such S shaped curves can in fact extend through successive replacement technologies. The issue is far from resolved at this time, and such discrepancies indicate that the forecaster should use extreme caution when attempting to plot trends across successor technologies.

It should be noted here that logistic curves are not only characteristic of the market acceptance of a new technology, or of new customs or institutions for that matter, but of many natural phenomena such as the growth of a bacteria colony in the face of a finite food supply, or indeed of almost any gradual process that is limited by external constraints. Thus the fact that logistic curves should prove so apt for predicting the spread of technical innovation is hardly surprising.
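
For the mathematically inclined, the textbook expression of such constraint-limited growth is the logistic differential equation and its solution, given here in standard notation rather than any forecaster’s proprietary formulation, where N is cumulative adoption, N0 its initial value, K the saturation level, and r the growth rate:

```latex
\frac{dN}{dt} = r N \left(1 - \frac{N}{K}\right),
\qquad
N(t) = \frac{K}{1 + \dfrac{K - N_{0}}{N_{0}}\, e^{-rt}}
```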

Several variants of the S shaped curve have been described in the mathematical literature, but two in particular, the Fisher-Pry and the Gompertz, are used with especial frequency in technology forecasting. In all cases the adoption pattern is defined by two parameters: the time at which adoption begins and the rate of adoption. The Fisher-Pry curve is symmetrical from bottom to top, while the Gompertz curve has an extended region of flattening at the top and an abbreviated flattening at the bottom. The former finds application with the greatest number of innovations, particularly those where one technology is substituted for another, such as water based paint for oil based paint and disc brakes for drum brakes, while the latter is more applicable to entirely new technologies. A third model, the Pearl curve, is often applied to established technologies subject to minor innovations intended to extend their useful lives or to overcome limits constraining some aspect of performance.
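
In rough functional form, and using a common textbook parameterization rather than anything vendor-specific, the two curves can be sketched as follows; the parameter names and values here are purely illustrative:

```python
# Textbook parameterizations of the two substitution curves (illustrative only).
import numpy as np

def fisher_pry(t, t0, b):
    """Symmetric S-curve: fraction of the market captured by the substitute.
    Half of the eventual substitution is complete at t = t0."""
    return 1.0 / (1.0 + np.exp(-b * (t - t0)))

def gompertz(t, t0, k):
    """Asymmetric S-curve: a brief flat region at the bottom, then a long,
    slow approach to saturation at the top."""
    return np.exp(-np.exp(-k * (t - t0)))

years = np.linspace(0, 20, 6)
print("Fisher-Pry:", np.round(fisher_pry(years, 10.0, 0.6), 3))
print("Gompertz:  ", np.round(gompertz(years, 8.0, 0.4), 3))
```

Printing the two side by side makes the asymmetry of the Gompertz curve, and hence its different fit to adoption histories, easy to see.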

Despite the often successful exploitation of such mathematical models by the forecasting industry, they are not without their detractors. On an immediate level they seem to reflect our intuitive feel for the overall shape of the demand cycle, which should, in the normal course of things, begin slowly as the market is introduced to the innovation in question, then accelerate as the market accepts it, and then finally decline as the market is saturated; still, historians of invention as well as some contrarian professional forecasters cite numerous examples of technology introductions that do not appear to follow the curve. The introduction of atomic energy for electrical generation in the U.S., which came to a virtual halt after strong initial growth, has been mentioned with particular frequency in this regard.

Even when a logistic curve is apposite, the forecaster is left with the problem of where to begin the curve. Often in the literature a figure of 1% penetration is accepted as the appropriate point for plotting a curve (the conservative Everett Rogers prefers 10 to 20%), but then you can’t really know when the 1% point is reached until market saturation has been achieved and so you must resort to an estimate that is based instead on market projections—in other words, you’re projecting the market size you’re attempting to deduce, and then trying to work backwards from that! Furthermore, such curves don’t apply at all prior to the productization of a new technology, so if one is trying to determine the feasibility or marketability of a new invention, extrapolation is of little help. The fact is that plotting a logistic curve is best undertaken after the innovation has completed its life cycle when said curve will be of no predictive value whatsoever.
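
A small numerical illustration of this chicken-and-egg problem, using made-up figures: over the early portion of the cycle, logistic curves with wildly different ceilings are nearly indistinguishable, so early data alone cannot tell the forecaster which ceiling, and hence which eventual market size, to believe.

```python
# Made-up illustration: two logistic adoption curves with very different
# saturation levels produce nearly identical numbers early in the cycle.
import numpy as np

def logistic(t, ceiling, midpoint, rate):
    return ceiling / (1.0 + np.exp(-rate * (t - midpoint)))

early_years = np.linspace(0, 4, 9)  # the window an early forecaster actually sees

modest = logistic(early_years, 10.0, 8.0, 0.9)  # saturates at 10 (million units, say)
huge = logistic(early_years, 40.0, 9.5, 0.9)    # saturates at 40

print(np.round(modest, 3))
print(np.round(huge, 3))
# The largest gap over the early window is tiny despite a fourfold
# difference in the eventual ceiling.
print("largest gap:", round(float(np.max(np.abs(modest - huge))), 3))
```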

Another conceptual problem with such extrapolation is the fact, often cited in the academic literature on the subject, that diffusion of a technology across markets geographically may follow a different curve than temporal diffusion within a single market. And indeed the definition of a market often assumes an arbitrariness that casts doubt on the whole enterprise. The telephone diffused very rapidly among businesses in metropolitan areas in late nineteenth century America and almost as rapidly among the affluent in major cities. But diffusion among the masses of the people was painfully slow in the U.S., while in some places in Latin America the telephone never moved beyond a small upper class subscriber base until near the close of the twentieth century. And yet most discussions of the diffusion of the telephone confine themselves to a single curve. One may presume that much of the literature on adoption rates for major innovations is actually giving us composite curves representing the amalgamation of the different adoption rates, but how much do those curves really tell us about market behavior? It is interesting to note that Devendra Sahal, a well known academician who published extensively on the subject of the market diffusion of new technologies, believed that the relevance of logistic adoption curves and their like is severely limited on just such grounds. Even Larry Vanston, who depends heavily upon trend extrapolation in his work, says, “business and consumer markets are often conflated in market projections, and that can be highly misleading.”

A further problem with trend extrapolation is its utter lack of predictive power in the face of what might be termed the repurposing of technology. Examples of this would be the use of the Internet Protocol in public access networks, rather than in the closed governmental networks where it originated, the use of the MP3 audio protocol for music file swaps in massive peer to peer networks rather than in professional audio production, and the use of spread spectrum radio techniques in consumer cell phones rather than in military stealth radios. This failure to anticipate the repurposing of technology becomes particularly disquieting when we realize that many of the most influential inventions derive their importance from abrupt repurposing. The fact that such repurposing does not appear to be predictable, at least not by any generally recognized formal methodologies, has to represent one of the more glaring deficiencies in the current art of technology forecasting.

Another oft stated limitation of trend extrapolation techniques is their lack of explanatory power. Simply put, the techniques posit no mechanism of diffusion, diffusion just happens in a certain way. And one would very much like to know why it happens, particularly why it happens for some innovations and not for others.

Furthermore, most forecasters who use trend extrapolation admit that the presence of similar competing technologies aimed at the same market niche plays hob with the trending process. “Beta and VHS videotape are a good example,” notes Larry Vanston. “There must be a hundred explanations as to why VHS won, but it would have been very hard to call at the time. That kind of thing is the forecaster’s nightmare.”

Another problem with extrapolation methods, a problem which I have never seen mentioned in all of the vast body of writing pertaining to forecasting, is the singular paucity of accounts relating to failed innovations. What might the adoption curve look like for Sony’s Beta and MiniDisc formats? And what about General Magic’s Magic Cap operating system? Since most innovations fail in the marketplace, you’d like to know as a business person if you’re stuck with a dog, and if an adoption curve could tell you that, it would be highly useful. But as John Vanston, founder and president of Technology Futures, relates, “you won’t find anyone spending money to do postmortems.”

One would also like to see the trend lines for innovations that go out of favor without any obvious replacement technology, such as citizens band radio in the United States. What sort of curve do these failures describe in their descents?

Then too, one must grapple with the problem of determining when a product deserves to be treated as a distinct innovation. Is a new generation of personal computer entitled to its own logistic curve? Larry Vanston thinks so, but I’ve seen remarkably little written on the issue.

Finally, there is the basic weakness of the trending process to be considered, namely that trend analysis can’t possibly anticipate radical new developments, particularly in markets; the basic failing of most predictions is that they assume present trends will continue indefinitely. A case in point is the famous House of the Future in nineteen fifties Disneyland. In keeping with fifties material culture it was crammed with household appliances, some of which never made it onto the market, but the structure was devoid of the networks that would actually characterize the model homes of the end of the century.

Comforting though the precision of mathematical curve tracing may be, it is clearly a technique of limited utility on several grounds, and perforce the professional forecaster must resort to other methods in many instances. One of the more common options is the use of analogy.

Analogy

Analogy in forecasting terms is just what the name suggests, the examination of a past innovation deemed comparable to that under consideration, and then an attempt to predict the fate of the newcomer based upon the history of the earlier technology. The use of analogy is sufficiently widespread in the forecasting business to constitute an established technique, and one that can in some instances yield insights of startling prescience. Though, as we shall see in a moment, it is not without its own attendant difficulties.

By the use of analogy the forecaster can consider a technological development at any stage in its lifecycle, going beyond simply plotting its adoption rate to ponder its ramifications in the social sphere, its relationship to other innovations, and the larger context in which the innovation is situated. On the other hand, the forecaster faces the monumental problem of determining the appropriateness of any given analogy, a process for which there appear to be no formal procedures in the literature.

False Analogies

The plain fact is that many analogies used by forecasters that seem appropriate are highly misleading, often precisely those where outward appearances suggest an extremely good fit.

For example, the Laserdisc video format introduced in 1978 by Philips and MCA would appear at first glance to be a good historical analogy for the DVD, itself introduced by a consortium of manufacturers in 1996. Both in their initial forms were optically scanned, read-only, rapid access discs with superior image and sound and support for special features, and both were positioned against a magnetic tape based video format, in fact, the very same format in each instance, VHS. Yet Laserdisc achieved rather poor market penetration while DVD flourished. Where’s the analogy in market terms?

Or take the analogy of the vacuum tube to the transistor, one very often mentioned in the literature of forecasting. Although the two elements are functionally somewhat similar in an electrical sense, and have been used in roughly the same way in many of the same products, the analogy is less useful than it first appears. The vacuum tube, unlike the transistor, wasn’t a substitute for anything, while the transistor, though widely substituted for the tube, was in no sense a mere replacement: it also sparked an enormous range of product innovations with no analogues in the vacuum tube era, innovations that scarcely could have been realized with tube circuitry.

Indeed one begins to perceive that an analogy based upon functional or physical similarities almost never provides useful information for predicting the progress of an innovation in the marketplace, counterintuitive though this may seem. Because this notion seems so implausible, but is in fact so apropos, I shall cite a number of further examples.

The first involves IMTS mobile phone service in the U.S. and the successor cellular networks. IMTS appears to be an obvious historical analogy, but the progress of these two services in the marketplace was entirely dissimilar.
The second involves color television in the U.S. which was introduced not once but twice in two entirely different formats, providing, it would seem, a near perfect analogy. The first introduction of color occurred in 1949 when CBS debuted Peter Goldmark’s sequential field system, the second in 1954 when RCA introduced the current analog system, which, incidentally, is technically inferior to Goldmark’s. The CBS experiment tells us precisely nothing about how the later system would fare in the marketplace, because the CBS system was eventually disallowed by the FCC, in part due to RCA’s political lobbying, and because the CBS format was actually introduced prior to the near universal adoption of black and white television--in other words, before the basic medium of television had entirely proven itself in the marketplace.

Finally we have the analogy so frequently made between the growth of dialup Internet access and the growth of broadband. Growth curves have not in fact been comparable, and much money has been lost by those who assumed the explosive growth of the first would be replicated in the second.

Unfortunately a focus on structural similarities of this sort is a staple of the forecasting industry, largely because such analogies appear plausible. The fact that they tell us nothing is conveniently ignored.

Rather More Useful Analogies

So where is an analogy likely to be useful? Arguably on a much more abstract level than that of functional or structural similarities.

If we look very generally at the most important and successful innovations during the modern age and contrast them with competing technologies that failed, we will find certain intrinsic similarities across a whole range of successful technologies including many which are superficially dissimilar.

Perhaps the attribute that recurs most frequently in extremely successful innovations is adaptability: the insurgent technology lends itself to a wide and growing range of functions and applications, as compared with its less successful predecessor or current competitor. This is true of all foundational technologies and of most transformational technologies as well.

To take a very notable example, in the competition between the steam engine and its chief rivals, the internal combustion engine and the electric motor, the steam engine suffered from an inability to scale downward in size. Steam engines were satisfactory for driving locomotives, ships, large pumps, and heavy industrial machinery, but they were almost wholly unsuitable for portable power tools, household appliances, automobiles, and small generators, let alone airplanes, toys, and clocks. Since internal combustion engines and electrical motors could both be scaled up to compete with steam in heavy duty applications, and were more energy efficient than the steam engines to boot, the use of steam inevitably declined to a few niche applications.

Similarly, if we compare coal gas with electricity we find that while both provided intense residential illumination, coal gas had relatively few other uses—primarily cooking and, to a limited extent, heating. While gaslights for many years provided much better illumination than electrical light bulbs, gas could not run appliances based upon fractional horsepower motors nor active electronic devices employing vacuum tubes or transistors. Gas was essentially a two application technology, while the applications supported by electricity number in the thousands and that number is constantly growing.

Yet another example can be taken from the transportation industry, the competition between the automobile and its principal rivals the motorcycle and the electric trolley car a hundred years ago. During the first decade of the twentieth century motorcycles may have outnumbered cars in the United States, though accurate data is difficult to assemble, and electric street rail systems were found in nearly every large city. But a motorcycle and a streetcar only provide personal transportation while an automobile provides a basis for family transport, commercial fleets, emergency vehicles, and so on. Motorcycles dwindled to a small niche market and in America only one manufacturer survived, while most interurban rail systems disappeared entirely.

A final example may be gleaned from the field of computing. During the nineteen fifties, the first decade of commercial computers, analog computers outsold the digital variety by a wide margin, a fact which is largely forgotten today. Analog computers were relatively small and inexpensive and matched the skill sets of the engineers who used them in that they were manually configured rather like machine tools-- facts that ensured their success through the fifties and early sixties. Their problem was inflexibility. While in theory an analog computer can emulate most if not all of the functions of the digital variety, its real world applications were extremely limited, mostly consisting of computer aided design and dynamic system modeling. Analog computers rode the same efficiency curves as did their digital brethren as tubes gave way to integrated silicon, but their limited uses eventually doomed them in the marketplace.

Having cited these examples, I must say that in the discussions of analogy I have seen in forecasting texts, the versatility and adaptability of an innovation is given remarkably little consideration. Perhaps if it were, long term forecasts concerning innovations for which no product history exists could be attempted with greater confidence.

Another apparently useful type of analogy that appears to hold true across a wide range of technologies has been identified by one Brian Winston, an English academic and author of the extremely well researched Media Technology and Society. Winston is an historian, not a professional forecaster, but his theories provide what appears to be a sound basis for forecasting the progress of an innovation in the marketplace.

Winston introduced the concept of supervening necessity by which he meant an application that more or less demanded the innovation before the innovation itself appeared in the marketplace. And, while he confines his discussion to communications technologies, the concept would seem to be generally useful across a whole range of technical innovations.

Winston provides numerous examples of supervening necessities for various electrical communications devices over the course of the past century and a half, but two are especially instructive: the telegraph and the radio.
The telegraph was immediately adopted in America to coordinate railroad traffic, a move which quickly resulted in a sharp reduction in accidents. It was also embraced by financial traders who wanted information instantly, and shortly thereafter by the press for the same reason. In other words, it had not one but three supervening necessities. Radio, on the other hand, had only one supervening necessity, but that was compelling. Prior to radio, British naval vessels had no way of communicating with other ships located over the horizon. Radio provided a way of executing precise fleet maneuvers over hundreds of miles of ocean.

Winston’s notion of supervening necessity finds some echo in forecasting literature, notably in Bright’s previously mentioned Practical Technology Forecasting where the author relates how in 1952 the Defense Department, contemplating the steady growth in the number of interconnected electronics systems on Air Force aircraft and determining that the relatively poor reliability of vacuum tubes would set a limit to this trend, decided to seek a replacement for the tube. That replacement was the integrated circuit, invented in 1958, and it obviously met the requirements of a supervening necessity.

On the other hand, a supervening necessity in itself is insufficient to establish an innovation beyond the scope of that particular need. For example, mobile data answered the acute needs of fleet service operations in the early nineties, but that did not lead immediately to a true mass market for mobile data services.

It should also be mentioned that not all historians of technology accept the supervening necessity argument, reasonable though it seems. Such dissenters are partial to the quip “invention is the mother of necessity”, meaning that after an innovation is introduced all sorts of uses will be found for it that were never anticipated. Certainly one can find examples to support this position—the digital computer is a prime example—but even the digital computer had a supervening necessity initially, that involving decryption of military dispatches during World War II. Had the need not been so pressing, the development of the device might have been forestalled indefinitely, and, similarly, without tens of millions of research dollars provided to electronics firms by a U.S. Defense Department frantic to develop more reliable aircraft electronics, the integrated circuit might have been a very long time in coming.

A final basis for deriving valid analogies is provided by Clayton Christensen with his categorization of innovations as either sustaining or disruptive.

Sustaining technologies are improvements in an existing technology, such as a faster CPU in a desktop computer. A laptop computer, on the other hand, would constitute a disruptive technology in that it represents a fundamentally new direction rather than an improvement. According to Christensen, and he supplies numerous examples, disruptive technologies frequently displace earlier product categories that are drawing on sustaining innovations to extend their product lives. This occurs because the disruptive technologies first establish themselves in tertiary markets where cost effectiveness is crucial and then exploit their superiority in that regard within larger markets.

Christensen’s work is known to some forecasters, and at least one, Michael Raynor of Deloitte & Touche Consulting, makes regular use of Christensen’s concepts in his own work. But this type of analogy is by no means in general currency within the profession.

To conclude, the more general sort of analogies discussed above would seem to be more useful than superficial resemblances in picking technology winners, but I must emphasize that no forecaster has established any significant track record based upon the use of these notions. They seem historically valid, but their effectiveness in the field of futuristics remains to be proven.

Still, whatever the limitations of the use of analogies for forecasting, it remains a key technique. One would want to inquire very closely of any individual forecaster just how he makes use of analogy.

A Thought Exercise

In the light of the preceding discussion, let us briefly consider two potential foundational technologies of the present time, fuel cells and carbon nanostructures.

Fuel cells are seen today mainly as replacements for primary batteries or as partial replacements for reciprocating engines in cars, neither of which application would make them foundational technologies, and neither of which can be deemed a certainty. For fuel cells to become foundational, they must be used in applications that exploit their unique properties of very high energy density, high scalability, and lack of polluting exhausts. The probable application would be some kind of motor, and one connected with some kind of robotic function might be a good bet. For robotics to expand beyond its present narrow niches would of course require far more than a fuel cell power source; considerable advances in artificial intelligence would have to occur as well. But fuel cells combined with AI could be highly synergistic, and perhaps truly foundational.

Carbon nanostructures are interesting in that they can assume a very wide range of electrical and structural properties by means of various modifications in their molecular arrangement. Some forecasters, such as the Club of Rome, see significant shortages in elements such as chromium, copper, and silver occurring in the latter part of this century, shortages that would make these chameleon properties highly attractive. Carbon nanostructures have the potential to become a multifarious replacement technology for scarce elements as well as assuming entirely new functions such as room temperature superconductivity, which in turn could transform the nature of power distribution, motor design, and the like. I’m not saying that either technology will necessarily come to fruition, but examining each within a proper conceptual framework helps in making educated guesses.

Expert Opinion

Forecasters impatient with the task of attempting to make sense of history often fall back on an entirely different technique, seeking out expert opinion, or, when an appearance of investigative rigor is sought, a diversity of expert opinions. Impaneling experts is a standard technique among forecasters today.

Reliance on experts appears to be a well considered strategy but brings with it a whole set of problems. Individual experts or even groups of experts are very often wrong in their assessments and predictions, and of course determinations of expertise are themselves heavily based upon formal professional credentials and to that extent are arbitrary. Then too, experts on one particular technology may be heavily biased in its favor and thus inclined to assess its prospects over-optimistically. Marvin Minsky, one of the fathers of the artificial intelligence movement, predicted in the late fifties that computers would have assumed most human capabilities by the turn of the century. He was probably more knowledgeable on the subject than any man alive, but he was dead wrong. Finally, many experts understand only one aspect of the technology in which they claim expertise, as might be the case with an integrated circuit developer who knows circuit topologies but not the quantum effects that take place in very small structures.

Still, an expert is more likely to be right than a lay person on a technology in which he professes expertise, so the consultation of experts has become routine in the forecasting business. Unfortunately in exercising this function the forecaster himself becomes a mere middleman, a broker of information. Since it is the stated intent of all good businessmen to eliminate the middleman, the forecaster with his stable of experts is in a somewhat precarious position. For which, as it happens, he has an answer--the Delphi technique and its countless variants.

Delphi, devised by Olaf Helmer at RAND in the fifties, consists of an iterative process by which experts are impaneled, questioned on a particular topic via written questionnaire, and then requestioned after each is given a summary of the others’ responses while at the same time anonymity is scrupulously observed. The aim is to draw the group slowly to a position of unanimity or consensus which is taken to represent the best achievable forecast on the matter under consideration. Helmer constructed definite rules by which the process would be conducted, but some critics have charged that the procedural rigidity and formality masks the fact that at base the Delphi practitioner is merely culling opinions, functioning as a pollster rather than an analyst.
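
The mechanical skeleton of that iterate, summarize, and requestion cycle can be sketched as below. This is a toy simulation, not Helmer’s actual protocol: the initial estimates, the panel size, and the “move halfway toward the group view” rule are all invented, and a real Delphi exercise also collects and circulates written rationales.

import random
import statistics

random.seed(1)

# Hypothetical panel: eight anonymous experts estimating, say, the number of
# years until some technological milestone is reached.
estimates = [random.uniform(3.0, 15.0) for _ in range(8)]

for round_number in range(1, 4):
    summary = statistics.median(estimates)        # anonymized feedback to the panel
    # Each expert revises partway toward the group view -- a crude stand-in
    # for reading the summary of the others' responses and reconsidering.
    estimates = [e + 0.5 * (summary - e) for e in estimates]
    spread = max(estimates) - min(estimates)
    print(f"round {round_number}: median {summary:4.1f} years, spread {spread:4.1f} years")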

Delphi, while frequently mentioned in texts on forecasting methodologies, appears to be little used today, at least in pure form. The process itself is laborious, and many forecasters have expressed doubts as to the value of arriving at consensus in the first place. Much dispute exists as to the accuracy of Delphi forecasts as well, with some published studies supporting the technique and others dismissing it. But the basic approach of consulting anonymous panels of experts is ubiquitous.

And how do the experts themselves arrive at their own conclusions? In some cases by intuition, and in other cases by utilizing the results of internal studies sponsored by their own organizations, studies which may themselves rely on extrapolation, analogy, or expert opinion, and which may thus introduce a troubling circularity into the overall process.

Even so, expert opinion remains central to the forecasting process, and one would want to know how a professional forecaster attempting to sell his services makes use of it in his work.

Intuition

Intuition itself is often recognized as a legitimate technique within the forecasting community, providing us with a clear line of succession back to our hemp smoking Oracle. The claim on the part of a forecaster to rely upon intuition smacks of hubris, not to say megalomania, and is apt to arouse suspicion in the client, however. “You mean you just guess?”

Forecasters who are self admitted intuiters generally counter by reciting their track records of successful predictions while omitting, of course, their failures. So unless the client has a full record of all of the intuitive predictions made by an individual forecaster, he’s simply in no position to determine the acuity of the forecaster’s intuition.

Intuition is the special province of the pundit class of forecaster, of which George Gilder, Nicholas Negroponte, and Esther Dyson are representative types. The forecaster’s claims are based upon what is essentially an appeal to authority, and the specifics of the intuitive process are undisclosed and deliberately made mysterious.
I should also point out that claims of highly developed intuitive powers confer on the claimant a kind of charisma and set him or her apart from the rest of us. Anyone, after all, can learn a technique like linear extrapolation, but the mantle of prophecy descends upon the shoulders of only the blessed few.

Anyone, of course, can claim pronounced intuitive abilities as well, but how to establish such claims in the marketplace? The usual answer is to score one really big hit and publicize it for years thereafter. Gilder in Life After Television (1992) seemed to predict the growth of the public Internet. Negroponte during the same period predicted the ascendancy of digital television and media convergence over digital networks. They can point to those predictions today with justifiable pride and can continue to sell their services based upon such successes while conveniently shunting aside their mistakes. (George Gilder's reputation appears to have been irredeemably tarnished by his frequently erroneous stock recommendations during the collapse of the great nineties tech boom.)

But does any pundit or intuitive forecaster really have a high ratio of correct to incorrect guesses?

Only one comes to my mind, namely Douglas Engelbart, formerly of the renowned Stanford Research Institute (now SRI International), one of the oldest forecasting firms in America. Engelbart, in a book length paper entitled Augmenting Human Intellect: A Conceptual Framework, prepared for the Institute in 1962, predicted the Internet, the personal computer, the PDA, the personal digital camera, low cost computer aided design, three dimensional graphics programs, hyperlinks, mouse type interfaces, voice recognition programs—the number of correct predictions is simply staggering, and there are no egregious blunders. Today, Engelbart, 78, makes a good living on the lecture circuit based on the realization of those predictions during the last decade, though, as he told me in an interview, his predictions were treated with utter disdain at the time he made them and for many years thereafter.

How did he do it? I don’t think Engelbart himself really knows, but some biographical facts are suggestive. Engelbart is a computer scientist as well as a forecaster, an individual who was actively involved in the then embryonic artificial intelligence movement and one of the fathers of ARPANET, the original Internet. Quite literally he was helping to create the world he envisioned. He was also supremely pragmatic. Engelbart’s imagined inventions were all aimed at solving the problems faced by individuals in the workplace, that is, they were intended to make people more productive, and they were devised with a keen understanding of how administrative and knowledge work is organized. In a sense they arose out of a supervening necessity of just the sort discussed by Winston. Engelbart himself was widely read in the social sciences and had a comprehensive understanding of the state of computer science at the time and its reasonable prospects for the future, sans any tremendous breakthroughs. He didn’t have to resort to Gompertz curves or the false precision of statistical polling because he knew in a very general way where the industry was likely to go.

Which brings us to another point. Most academic forecasters insist that successful forecasting has to be interdisciplinary, taking into account everything from the underlying physics of a new technology to its ethical implications. At the same time, intuitions can only occur in the mind of a single individual. Engelbart was that signal rarity: an individual who was highly intelligent, extremely imaginative, very analytical, and who knew a tremendous amount about a whole range of fields. And as far as I’m concerned no forecaster even approaches his record of success.

It is also interesting to note that Engelbart set fairly modest goals for the new technology of artificial intelligence, namely augmenting human intelligence. While Marvin Minsky and his cohorts were preoccupied with computer programs for chess playing and music composition, Engelbart was dealing with practical information retrieval systems.

But could he do it again today? I don’t know. He currently works at the Bootstrap Institute in Palo Alto where he is busy devising software to take the Internet to another dimension. You can ask him.

Monitoring

Yet another gambit in the forecaster’s bag of tricks is a method variously termed monitoring, tracking, or, somewhat less accurately, content analysis. It is fairly widely used today, although it is labor intensive, and it seems plausible though I’ve not seen any studies confirming its ultimate validity.

What’s involved here is a simple process of determining the sheer amount of activity in a given area where one is attempting to make predictions. If, for instance, one were venturing predictions concerning the future market penetration of fuel cells and one wanted to employ this technique, one might count patent applications in the area, enumerate the number of technical papers published, and count the investment dollars going to companies in the field. One might also simply list the number of companies and compare that list with totals from the years prior. The idea is not to generate a trend line but simply to assess the amount of activity in a given area and the amount of resources devoted to the development of the technology.
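
In practice the bookkeeping can be as simple as the following sketch, which tallies year-over-year growth in a handful of indicators. Every figure, indicator name, and threshold here is invented for illustration; real monitoring would draw on patent offices, publication databases, and investment records.

# Hypothetical activity indicators for a technology under monitoring;
# all numbers invented for illustration.
activity = {
    2000: {"patents": 120, "papers": 85,  "funding_musd": 40,  "firms": 12},
    2001: {"patents": 150, "papers": 110, "funding_musd": 95,  "firms": 19},
    2002: {"patents": 210, "papers": 160, "funding_musd": 180, "firms": 31},
}

years = sorted(activity)
for prev, curr in zip(years, years[1:]):
    growth = {k: activity[curr][k] / activity[prev][k] - 1.0 for k in activity[curr]}
    report = ", ".join(f"{k} {v:+.0%}" for k, v in growth.items())
    print(f"{prev} -> {curr}: {report}")
    if all(v > 0.2 for v in growth.values()):
        print("  every indicator up more than 20% -- activity is clearly accelerating")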

Battelle, which manages four national laboratories as well as doing forecasting, is in an especially good position to do monitoring because its forecasters can examine research at close range rather than simply scour publications for reports. Nicholas Negroponte, who heads MIT’s Media Lab, is in a similarly enviable position.

Is monitoring reliable? In many cases it is, though putting a hard figure on its reliability is difficult. Monitoring can and does reveal the presence of what forecasters call accelerated development in a given field where some kind of collective government or industry wide determination has been made to create a new technology or improve an existing one fundamentally. Prime examples would be atomic energy and rocketry in the forties, integrated circuits in the fifties and sixties, satellite communication in the sixties and seventies, and, yes, fuel cells at the present time.

The problem with relying on the technique is that the allocation of resources to a given field of studies doesn’t guarantee successful inventions coming out of it. Consider the quest for an AIDS vaccine, the attempt to develop a fusion reactor for electrical generation, the U.S. missile defense program, and the Japanese Fifth Generation computing project which attempted to achieve breakthroughs in artificial intelligence. Collectively these efforts have consumed hundreds of billions of research dollars and enlisted the efforts of armies of scientists and engineers, but they haven’t been marked by success. There is no question that accelerated research and development efforts frequently do result in accelerated product introductions. But unfortunately not always, and so research techniques intended to identify accelerated development cannot be considered entirely reliable.

Nevertheless, monitoring has to be considered one of the sounder forecasting techniques extant today. And it is becoming steadily more powerful thanks to the growth of large databases that may be searched for evidence and the development of data mining techniques for performing such searches automatically.

Less Used Methodologies

Market based forecasting: SRI Consulting Business Intelligence has built a consulting practice around determining market reactions to new technologies. While this may not be technology forecasting in the strictest sense, it is certainly of interest to purveyors of new technology. SRI staffer Brock Hinzmann told me that the company’s technique, termed VALS, an acronym for a phrase which no one in the organization seems to remember, segments consumers into eight basic categories based upon their financial resources and the degree to which they ascribe importance to thought, achievement, or experience. Each basic type of consumer is said to have distinct buying preferences that predispose him or her to purchase some products but not others. Curiously, the methodology was first developed back in the sixties for the armed forces who, according to Hinzmann, were at a loss to understand why American youth were reluctant to die in Vietnam when they had been perfectly willing to die in Korea (I used to wonder about that myself).

How well validated is the technique? Other than the fact that SRI has a lot of customers, there’s no way of knowing.

Scenarios: A scenario in futuristics terminology is a version of the future constructed in the form of a fictional narrative. It is intended to represent a possible rather than a certain future and to serve as a dramatic tool for planning sessions. A related technique is gaming, in which planners assume roles and then play at pursuing long term goals within a competitive context. The futures projected in a scenario or game are generally constructed through intuition, expert opinion, analogy, tracking, or trend extrapolation.

Cross impact analysis: this is basically an extension of Delphi. The panel is asked to arrange a number of possible future events in a grid, with the same events heading both the rows and the columns. The panelists are then asked to weigh the influence of each event on every other event. Like scenarios, it is essentially a planning tool rather than a pure forecasting methodology.
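
A minimal sketch of the bookkeeping involved might look like the following; the events and weights are invented, and a real exercise would elicit the numbers from a panel rather than hard-code them. Summing across a row shows how much an event drives the others, summing down a column how much it is driven by them.

# Toy cross-impact grid: rows and columns are the same hypothetical events,
# and each cell holds a judgment of how strongly the row event, if it occurs,
# raises the likelihood of the column event (all numbers invented).
events = ["cheap fuel cells", "practical home robots", "hydrogen distribution"]
impact = [
    [0.0, 0.6, 0.7],
    [0.2, 0.0, 0.1],
    [0.8, 0.3, 0.0],
]

for i, event in enumerate(events):
    drives = sum(impact[i])                                 # influence it exerts
    driven = sum(impact[j][i] for j in range(len(events)))  # influence it receives
    print(f"{event:24s} drives {drives:3.1f}   is driven {driven:3.1f}")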

Leading indicators: this is a technique borrowed from econometrics, and is of very dubious utility in respect to technology forecasting. The forecaster looks for sequences of events that together will presage other events; the problem here is that no one has ever demonstrated convincingly that innovation is cyclic in nature.

Systems Analysis – the Last Frontier

The systems analytical approach to forecasting, sometimes referred to as dynamic system modeling, is in many ways the most intriguing, and may provide a better methodology for forecasting not only new technology but other trends as well.

This approach is based upon the notion that certain large scale systems, whether human societies, weather systems, or ecosystems, exhibit overall behaviors that cannot be reduced to the sum of individual behaviors of those entities comprising the system. Instead such complex dynamic systems tend to respond in a unified manner to certain inputs, and their behaviors can be modeled according to relatively simple relationships between given values of input and output while ignoring the complex interactions of the constituents of the systems. Furthermore, such systems tend to be characterized by internal feedback loops that change their overall behavior, with positive loops engendering nonlinear increases in system output, and negative feedback loops tending to restore the system to equilibrium.

Technologies themselves can sometimes be modeled as dynamic systems. Railroads are a good example: rail construction encouraged settlement in remote areas, which in turn encouraged further construction of rails, in a positive feedback arrangement. In most cases the model will have a social component and the technology in question will not be considered in isolation.
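
As a toy illustration of such a reinforcing loop, the sketch below couples track mileage and settlement with invented coefficients. The numbers mean nothing in themselves; the point is the accelerating growth that a positive feedback arrangement produces.

# Toy positive-feedback model in the spirit of the rail/settlement example;
# coefficients and starting values are invented for illustration only.
rail_miles = 100.0
settlers = 10_000.0

for year in range(1, 11):
    new_settlers = 50.0 * rail_miles       # track in service opens land to settlers
    new_track = 0.002 * settlers           # settled population justifies more track
    settlers += new_settlers
    rail_miles += new_track
    print(f"year {year:2d}: {rail_miles:8.0f} miles of track, {settlers:10.0f} settlers")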

Unfortunately, such simplified dynamic models may ultimately be inadequate for describing human societies. Single loop feedback systems either tend toward equilibrium or exhibit a periodic disequilibrium consisting of alternating cycles of overload and recovery. Human societies, on the other hand, exhibit continuous nonrepetitive changes and long term purposeful behavior that is difficult to subsume within a basic input/output model. They also tend to change both goals and strategic conduct over time, making modeling even more difficult. And because major technological changes are almost always associated with social disruption, predicting them would of necessity require models that can encompass growth and change, including abrupt, radical change.

For this reason some forecasters attempt to place technological change within a larger social or environmental system. The Santa Fe Institute, a New Mexico based think tank, has devoted considerable effort to creating complex sociotechnical models that incorporate the social dimension.

Forecasters have been playing around with systems models of various sorts for decades, with the Club of Rome’s The Limits to Growth representing perhaps the best known published model, and one which, for all of the care taken in its construction, has not proven very accurate. More recently some investigators have been resorting to chaos theory, complexity theory, and catastrophe theory in an attempt to find regularities within seemingly chaotic behaviors in dynamic systems. The problem is that no one to date has developed a theory for modeling nonlinear systems that offers any high degree of predictive accuracy. Predicting the weather is an excellent case in point. A combination of comprehensive satellite photos and enormous computing resources only permits the forecaster to project a week into the future with a fair degree of accuracy and three weeks with any accuracy at all. Greater precision is difficult or impossible to achieve because in weather systems very small disturbances can rapidly grow into big ones, an inherent nonlinearity that defies prediction even in the midterm. One might suppose that human society is similarly afflicted, and the many failures of forecasters suggest that indeed it is.
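
The point about small disturbances can be made with a one-line nonlinear recurrence, the logistic map. This is a textbook illustration rather than a weather model, and the parameter and starting values are arbitrary: two trajectories that begin a millionth apart soon differ completely.

# Sensitive dependence on initial conditions in the logistic map -- an
# illustration of why small disturbances swamp midterm prediction in
# nonlinear systems, not a model of any real weather system.
r = 3.9
x, y = 0.400000, 0.400001   # two starting states differing by one millionth

for step in range(1, 31):
    x = r * x * (1.0 - x)
    y = r * y * (1.0 - y)
    if step % 5 == 0:
        print(f"step {step:2d}: difference {abs(x - y):.6f}")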

Prognosis Guarded

Companies will continue to buy forecasts because the consequences of missing or misgauging a new technology can be disastrous. Firms that don’t anticipate changing markets may decline or even perish. No company has ever led in the introduction of seminal new technologies over an extended period of time, however, nor sustained the growth rates associated with disruptive technologies, regardless of its command of forecasting techniques. So don’t count on forecasters to save your bacon.

Technological forecasting at the present level of development is inadequate as a basis for long term strategic planning. It is worth using to assess the market potential of innovations already in the marketplace, but can only provide a few general principles to guide efforts to develop and exploit radically new technologies. Forecasting itself is in need of a technological breakthrough. Sadly we lack any methodology for predicting when and if that breakthrough will occur.

Addendum: Our Approach

We utilize extrapolation, monitoring, expert opinion, and certain forms of systems analysis, but we emphasize analogy. We believe that there are certain constants in technology innovation and diffusion, constants having to do with regularities in human behavior rather than superficial resemblances between or among the technologies being compared. We also believe that fluidity in social structures is a key component in technological revolutions and that they generally occur within what are essentially frontier societies, though not necessarily settler societies. We will explore such topics in additional treatises devoted to the subject of technology forecasting.