Part 3
After coming to a conclusion, Sir Francis Bacon wrote, "the human understanding . . . draws all things else to support and agree with it. And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects or despises, or else by some distinction sets aside and rejects; in order that by this great and pernicious predetermination the authority of its former conclusions may remain inviolate." Heaps of research conducted in the twentieth and twenty-first centuries have only confirmed Bacon's wisdom. Dubbed "confirmation bias" by psychologist Peter Wason, it is as simple as it is dangerous: Once we form a belief, for any reason, good or bad, rational or bonkers, we will eagerly seek out and accept information that supports it while not bothering to look for information that does not. And if we are unavoidably confronted with information that doesn't fit, we will be hypercritical of it, looking for any excuse to dismiss it as worthless.
One famous experiment was conducted in 1979, when capital punishment was a hot issue in the United States. Researchers assembled a group of people who already had an opinion about whether the death penalty was an effective way to deter crime. Half the group believed it was; half did not. They were then asked to read a study that concluded capital punishment does deter crime. This was followed by an information sheet that detailed the methods used in the study and its findings. They were also asked to read criticisms of the study that had been made by others and the responses of the study's authors to those criticisms. Finally, they were asked to judge the quality of the study. Was it solid? Did it strengthen the case for capital punishment? The whole procedure was then repeated with a study that concluded the death penalty does not deter crime. (The order of presentation was varied to avoid bias.) At the end, people were asked if their views about capital punishment had changed.
The studies were not real. The psychologists wrote them with the intention of producing two pieces of evidence that were mirror images of each other, identical in every way except for their conclusions. If people process information rationally, this whole experience should have been a wash. People would see a study of a certain quality on one side, a study of the same quality on the other, and they would shrug, with little or no change in their views. But that's not what happened. Instead, people judged the two studies (which were methodologically identical, remember) very differently. The study that supported their belief was deemed to be high-quality work that got to the facts of the matter. But the other study? Oh, it was flawed. Very poor stuff. And so it was dismissed. Having processed the information in a blatantly biased fashion, people reached an inevitable outcome: They left the experiment more strongly convinced than when they came in that they were right and those who disagreed were wrong.
"If one were to attempt to identify a single problematic aspect of human reasoning that deserves attention above all others," wrote psychologist Raymond Nickerson, "the confirmation bias would have to be among the candidates for consideration." In Peter Wason's seminal experiment, he provided people with feedback so that when they sought out confirming evidence and came to a false conclusion, they were told, clearly and unmistakably, that it was incorrect. Then they were asked to try again. Incredibly, half of those who had been told their belief was false continued to search for confirmation that it was right: Admitting a mistake and moving on does not come easily to Homo sapiens.
Like everyone else, experts are susceptible to confirmation bias. One study asked seventy-five social scientists to examine a paper that had been submitted for publication in an academic journal. This sort of peer review is routine and is intended to weed out work that is uninformative or methodologically weak. What it's not supposed to do is screen papers based on their conclusions. Research is either solid or not. Whether it happens to confirm the reviewer's beliefs is irrelevant. At least, it's supposed to be irrelevant. But it's not, as this study demonstrated. One version of the paper sent out for peer review came to conclusions that were in line with the commonly held view in the field; a second version of the paper was methodologically identical but its conclusions contradicted the conventional wisdom. Reviewers who got the paper that supported their views typically judged it to be relevant work of sound quality and they recommended it be published; those who got the paper that contradicted their views tended to think it was irrelevant and unsound and they said it should be rejected. "Reviewers were strongly biased," the researcher concluded. Not that they were aware of their bias, mind you. In fact, they would have been offended at the very suggestion.
Perhaps we should call this the "Toynbee phenomenon," because there is no more spectacular example than Arnold Toynbee's A Study of History. By 1921, Toynbee's vision was locked in. He was certain there was a pattern in classical and Western histories. That pattern became the outline of A Study of History. Then Toynbee started rummaging through the histories of other civilizations and found that they, too, followed the same pattern, not because the pattern was real but because Toynbee's information processing was profoundly biased. To paraphrase Sir Francis Bacon, Toynbee energetically searched for and collected information that supported his convictions while "neglecting or despising" information that did not; and when contrary evidence was too big to dismiss or ignore, he cobbled together ingenious stories that transformed contradiction into confirmation. "His whole scheme is really a scheme of pigeon-holes elaborately arranged and labelled, into which ready-made historical facts can be put," wrote the philosopher and historian R. G. Collingwood. A.J.P. Taylor's judgment was even more severe. "The events of the past can be made to prove anything if they are arranged in a suitable pattern, and Professor Toynbee has succeeded in forcing them into a scheme that was in his head from the beginning."
BETTER A FOX THAN A HEDGEHOG.
It is a heartening fact that many experts saw through the delusions of Arnold Toynbee. In a phrase, they showed better judgment. That's worth emphasizing because it's tempting to become cynical about experts and their opinions. We should resist that temptation, for all experts are not alike.
In Philip Tetlock's research, he was careful to have experts make predictions on matters both within and beyond their particular specialty. Only when they were operating within their specialty were experts really predicting as capital-E Experts. Otherwise, they were more like smart, informed laypeople. Analyzing the numbers, Tetlock found that some experts making predictions as Experts were more accurate than when they made them as laypeople. No surprise: they know more, so they should be more accurate. More surprising is that others were actually less accurate.
As the reader should be able to guess by now, the experts who were more accurate when they made predictions within their specialty were foxes; those who were less accurate were hedgehogs. Hedgehogs are bad at predicting the future under any circumstances, but it seems the more they know about what they're predicting, the worse they get. The explanation for this important and bizarre result lies, at least in part, in the psychological mechanisms discussed here.
Expertise means more knowledge, and more knowledge produces more detail and complication. More detail and complication make it harder to come to a clear and confident answer. At least it should make it harder. Say the question is "How will the economy do next year?" Someone who has only a few facts to go by may find they all point in one direction. But someone who has masses of information available (facts about economic history and theory, about finance, bonds and stocks, production and consumption trends, interest rates, international trade, and so on) won't find all the facts neatly lined up and pointing like an arrow in one direction. It's far more likely the facts will point to boom, and bust, and lots of places in between, and it will be a struggle to bring even modest clarity to the whole chaotic picture.
Foxes are okay with that. They like complexity and uncertainty, even if that means they can only draw cautious conclusions and they have to admit they could be wrong. "Maybe" is fine with them.
But not hedgehogs. They find complexity and uncertainty unacceptable. They want simple and certain answers. And they are sure they can get them using the One Big Idea that drives their thinking. With this mindset, the hedgehog's greater knowledge doesn't challenge the psychological biases we're all prone to. Instead, it supercharges them. As Arnold Toynbee demonstrated so well, expertise boosts the hedgehog's ability to see patterns that aren't there and to deal with contradictory evidence by rationalizing it away or twisting it so it supports what the hedgehog believes. In this way, the hedgehog gets an answer that will almost certainly be, to quote H. L. Mencken, clear, simple, and wrong. Of course the hedgehog isn't likely to accept that he may be wrong. Confidence is a defining feature of the species: Not only are hedgehogs more overconfident than foxes, they are far more likely to declare outcomes "certain" or "impossible." Could they be wrong? Never!
In his classic 1952 examination of pseudoscience, Fads and Fallacies in the Name of Science, Martin Gardner took a fascinating look at the work of late nineteenth- and early twentieth-century "pyramidologists." These obsessive investigators measured every nook and cranny of the pyramids, inside and out, using every imaginable unit and method. They then tried to prove "mathematically" that the pyramid's dimensions were encoded with a vast trove of knowledge, including a complete record of all the great events of the past and future. With masses of data at hand, and an unrestrained desire to prove what they were certain was right, they succeeded. In a sense. And only up to a point. "Many great books have been written on this subject, some of which have been presented to me by their authors," Bertrand Russell dryly observed. "It is a singular fact that the Great Pyramid always predicts the history of the world accurately up to the date of publication of the book in question, but after that date it becomes less reliable." As Gardner demonstrated, the pyramidologists were filled with passionate belief. Almost without exception, they were devout Christians, and by picking the numbers that fit, while ignoring the rest, they made the pyramid's dimensions align with past events. Projecting forward, they then "discovered" that the events described in the Book of Revelation would soon unfold. One of the earliest pyramidologists claimed 1882 would mark the beginning of the end. Later investigators predicted it would come in 1911, 1914, 1920, or 1925. When those predictions failed to pan out, claims were made for 1933 and 1936. As one prediction after another passed without Jesus descending from the clouds, interest in this first wave of pyramidology slowly faded.
By the time Gardner wrote his book in 1952, most people had forgotten pyramidology. Or they thought it was silly. In 1952, smart people knew the future was written in the pages of Toynbee.
Gardner wasn't so sure. The same tendency to fit data to belief can be seen, he wrote, "in the great cyclical theories of history-the works of men like Hegel, Spengler, Marx, and perhaps, though one must say it in hushed tones, the works of Toynbee. The ability of the mind to fool itself by unconscious 'fudging' on the facts-an overemphasis here and an underemphasis there-is far greater than most people realize. The literature of Pyramidology stands as a permanent and pathetic tribute to that ability. Will the work of the prophetic historians mentioned above seem to readers of the year 2000 as artificial in their constructions as the predictions of the Pyramidologists?"
It's fitting that Gardner made his point by asking a question about the future rather than making a bold and certain claim. Martin Gardner was a classic fox. So were the historians who scoffed when so many other smart people were venerating Toynbee as a prophet. History is immensely complex, they insisted, and each event is unique. Only the delusional see a simple pattern rolling smoothly through the past, present, and future. "He dwells in a world of his own imagining," wrote Pieter Geyl in one of his final attacks on Arnold Toynbee, "where the challenges of rationally thinking mortals cannot reach him."
The foxes were right. About history. And about Arnold Toynbee. That brilliant hedgehog never understood how badly he deceived himself and the world, which makes his life story, for all the man's fame and wealth, a tragedy.
4.
The Experts Agree: Expect Much More of the Same.
[Against the menace of Japanese economic power] there is now only one way out. The time has come for the United States to make common cause with the Soviet Union.
-GORE VIDAL, 1986.
"We are definitely at war with Japan," says the American hero of Rising Sun, Michael Crichton's 1992 suspense novel. Americans may not know it; they may even deny it. But the war rages on because, to the Japanese, business is war by other means. And Japan is rolling from victory to victory. "Sooner or later, Americans must come to grips with the fact that Japan has become the leading industrial nation in the world," Crichton writes in an afterword. "The Japanese have the longest lifespan. They have the highest employment, the highest literacy, the smallest gap between rich and poor. Their manufactured goods have the highest quality. They have the best food. The fact is that a country the size of Montana, with half our population, will soon have an economy equal to ours."
More op-ed than potboiler (not many thrillers come with bibliographies), Rising Sun was the culmination of a long line of American jeremiads about the danger in the East. Japan "threatens our way of life and ultimately our freedoms as much as past dangers from Nazi Germany and the Soviet Union," wrote Robert Zielinski and Nigel Holloway in the 1991 book Unequal Equities. A year earlier, in Agents of Influence, Pat Choate warned that Japan had achieved "effective political domination over the United States." In 1988, the former American trade representative Clyde Prestowitz worried that the United States and Japan were "trading places," as the title of his book put it. "The power of the United States and the quality of American life is [sic] diminishing in every respect," Prestowitz wrote. In 1992, Robert Reich, economist and future secretary of labor, put together a list of all the books he could find in this alarming subgenre. It came to a total of thirty-five, all with titles like The Coming War with Japan, The Silent War, and Trade Wars.
Japan blocked American companies from selling in its domestic market, these books complained, while it ruthlessly exploited the openness of the American market. Japan planned and plotted; it saved, invested, and researched; it elevated productivity. And it got stronger by the day. Its banks were giants, its stock markets rich, its real estate more valuable than any on earth. Japan swallowed whole industries, starting with televisions, then cars. Now, with Japan's growing control of the semiconductor and computer markets, it was high tech. In Crichton's novel, the plot revolves around a videotape of a murder that has been doctored by Japanese villains who are sure the American detectives, using "inferior American video technology," will never spot the fake. Meanwhile, American debt was piling up as fast as predictions of American economic decline; in 1992, a terrifying book called Bankruptcy 1995 spent nine months on the New York Times best-seller list. American growth was slow, employment and productivity were down, and investment and research were stagnant.
Put it all together and the trend lines revealed the future: the Japanese economy would pass the American, and the victors of the Second World War would be defeated in the economic war. "November, 2004," begins the bleak opening of Daniel Burstein's Yen!, a 1988 best seller. "America, battered by astronomical debts and reeling from prolonged economic decline, is gripped by a new and grave economic crisis." Japanese banks hold America's debt. Japanese corporations have bought out American corporations and assets. Japanese manufacturers look on American workers as cheap overseas labor. And then things get really bad. By the finish of Burstein's dramatic opening, the United States is feeble and ragged while Japan is no longer "simply the richest country in the world." It is the strongest.
Less excitable thinkers didn't see Japan's rise in quite such martial terms, but they did agree that Japan was a giant rapidly becoming a titan. In the 1990 book Millennium, Jacques Attali, the former adviser to French president François Mitterrand, described an early twenty-first century in which both the Soviet Union and the United States ceased to be superpowers, leaving Japan contending with Europe for the economic leadership of the world. Moscow would fall into orbit around Brussels, Attali predicted. Washington, DC, would revolve around Tokyo. Lester Thurow sketched a similar vision in his influential best seller Head to Head, published in 1992. The recent collapse of the Soviet Union meant the coming years would see a global economic war between Japan, Europe, and the United States, wrote Thurow, a famous economist and former dean of the MIT Sloan School of Management. Thurow examined each of the "three relatively equal contenders" like a punter at the races. "If one looks at the last 20 years, Japan would have to be considered the betting favorite to win the economic honors of owning the 21st century," Thurow wrote. But Europe was also expanding smartly, and Thurow decided it had the edge. "Future historians will record that the 21st century belonged to the House of Europa!" And the United States? It's the weakest of the three, Thurow wrote. Americans should learn to speak Japanese or German.
The details varied somewhat from forecast to forecast, but the views of Thurow and Attali were received wisdom among big thinkers. "Just how powerful, economically, will Japan be in the early 21st century?" asked the historian Paul Kennedy in his much-discussed 1987 best seller The Rise and Fall of the Great Powers. "Barring large-scale war, or ecological disaster, or a return to a 1930s-style world slump and protectionism, the consensus answer seems to be: much more powerful." As they peered nervously into the future, the feelings of many Americans were perfectly expressed by an ailing President George H. W. Bush when he keeled over and vomited in the lap of the Japanese prime minister.
They needn't have worried. The experts were wrong.
By the time the Hollywood adaptation of Rising Sun was released in 1993, Japan was in big trouble. Real estate had tanked, stocks had plunged, and Japan's mammoth banks staggered under a stupendous load of bad debt. What followed would be known as "the lost decade," a period of economic stagnation that surprised experts and made a hash of forecasts the world over.
Europe did better in the 1990s, but it, too, failed to fulfill the forecasts of Lester Thurow and so many others. The United States also surprised the experts, but in quite a different way. The decade that was so widely expected to see the decline, if not the fall, of the American giant turned into a golden age as technology-driven gains in productivity produced strong growth, surging stocks, rock-bottom unemployment, a slew of social indicators trending positive, and (miracle of miracles) a federal budget churning out huge surpluses. By the turn of the millennium, the United States had become a "hyperpower" that dominated "the world as no empire has ever done before in the entire history of humankind," in the purple words of one French observer. The first decade of the twenty-first century was much less delightful for the United States, featuring a mild recession, two wars, slow growth, the crash of 2008, a brutal recession, and soaring deficits, but Europe and Japan still got smaller in Uncle Sam's rearview mirror. Between 1991 and 2009, the American economy grew 63 percent, compared to 16 percent for Japan, 22 percent for Germany, and 35 percent for France. In 2008, the gross national income of the United States was greater than that of Germany, the United Kingdom, France, Italy, and Spain combined. And it was more than three times that of Japan.
How could so many experts have been so wrong? A complete answer would be a book in itself. But a crucial component of the answer lies in psychology. For all the statistics and reasoning involved, the experts derived their judgments, to one degree or another, from what they felt to be true. And in doing so, they were fooled by a common bias.
In psychology and behavioral economics, status quo bias is a term applied in many different contexts, but it usually boils down to the fact that people are conservative: We stick with the status quo unless something compels us otherwise. In the realm of prediction, this manifests itself in the tendency to see tomorrow as being like today. Of course, this doesn't mean we expect nothing to change. Change is what made today what it is. But the change we expect is more of the same. If crime, stocks, gas prices, or anything else goes up today, we will tend to expect it to go up tomorrow. And so tomorrow won't be identical to today. It will be like today. Only more so.
This tendency to take current trends and project them into the future is the starting point of most attempts to predict. Very often, it's also the end point. That's not necessarily a bad thing. After all, tomorrow typically is like today. Current trends do tend to continue. But not always. Change happens. And the farther we look into the future, the more opportunity there is for current trends to be modified, bent, or reversed. Predicting the future by projecting the present is like driving with no hands. It works while you are on a long stretch of straight road, but even a gentle curve is trouble, and a sharp turn always ends in a flaming wreck.
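The "driving with no hands" failure can be made concrete in a few lines of code. This is a hypothetical illustration with made-up numbers (not data from any forecast discussed in the text): a straight-line projection fitted to a rising series is exactly right as long as the trend holds, and badly wrong the moment it reverses.

```python
def linear_projection(history, steps_ahead):
    """Project the average recent change forward: a straight-line forecast."""
    avg_change = (history[-1] - history[0]) / (len(history) - 1)
    return history[-1] + avg_change * steps_ahead

# An invented index that climbs steadily for five periods...
observed = [100, 110, 120, 130, 140]
forecast = linear_projection(observed, 3)  # projects 140 + 10*3 = 170.0

# ...then the trend reverses (the "sharp turn" in the road).
actual_after_reversal = 115
error = forecast - actual_after_reversal
print(forecast, error)  # 170.0 55.0
```

The point of the sketch is not the arithmetic but the blind spot: nothing inside `linear_projection` can represent a reversal, so the forecast is confident precisely where it is most wrong.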
In 1977, researchers inadvertently demonstrated this basic truth when they asked eight hundred experts in international affairs to predict what the world would look like five and twenty-five years out. "The experts typically predicted little or no change in events or trends, rather than predicting major change," the researchers noted. That paid off in the many cases where there actually was little or no change. But the experts went off the road at every curve, and there were some spectacular crashes. Asked about Communist governments in 2002, for example, almost one-quarter of the experts predicted there would be the same number as in 1977, while 45 percent predicted there would be more. As a straight-line projection from the world of 1977, that's reasonable. As insight into the world as it actually was in 2002 (more than a decade after most Communist governments had been swept away) it was about as helpful as a randomly selected passage from Nostradamus.
Similar wrecks can be found in almost any record of expert predictions. In his 1968 book The End of the American Era, for example, the political scientist Andrew Hacker insisted race relations in the United States would get much, much worse. There will be "dynamiting of bridges and water mains, firing of buildings, assassination of public officials and private luminaries," Hacker wrote. "And of course there will be occasional rampages." Hacker also stated with perfect certainty that "as the white birth rate declines," blacks will "start to approach 20 or perhaps 25 percent of the population." Hacker would have been bang on if the trends of 1968 had continued. But they didn't, so he wasn't. The renowned sociologist Daniel Bell made the same mistake in his landmark 1976 book The Cultural Contradictions of Capitalism. Inflation is here to stay, he wrote, which is certainly how it felt in 1976. And Bell's belief was widely shared. In The Book of Predictions, a compilation of forecasts published in 1981, every one of the fourteen predictions about inflation in the United States saw it rising rapidly for at least another decade. Some claimed it would keep growing until 2030. A few even explained why it was simply impossible to whip inflation. And yet, seven years after the publication of Bell's book, and two years after The Book of Predictions, inflation was whipped.
Some especially breathtaking examples of hands-free driving can be found in The World in 2030 A.D., a fascinating book written by F. E. Smith in 1930. Smith, also known as the Earl of Birkenhead, was a senior British politician and a close friend of Winston Churchill. Intellectually curious, scientifically informed, well-read, and imaginative, Smith expected the coming century to produce astonishing change. "The child of 2030, looking back on 1930, will consider it as primitive and quaint as the conditions of 1830 seem to the children of today," he wrote. But not even Smith's adventurous frame of mind could save him from the trap of status quo bias. In discussing the future of labor, Smith noted that the number of hours worked by the average person had fallen in recent decades and so, he confidently concluded, "by 2030 it is probable that the average 'week' of the factory hand will consist of 16 or perhaps 24 hours." He was no better on military matters. "The whole question of future strategy and tactics pivots on the development of the tank," he wrote. That was cutting-edge thinking in 1930, and it proved exactly right when the Second World War began in 1939, but Smith wasn't content to look ahead a mere nine years. "The military mind of 2030 will be formed by what engineers accomplish in this direction during the next 60 or 70 years. And, in view of what has been accomplished since 1918, I see no limits to the evolution of mobile fortresses." Smith was even worse on politics, his specialty. "Economic and political pressure may make it imperative that the heart of the [British] Empire should migrate from London to Canada or even to Australia at some date in the next century or in the ages which are to follow," he wrote. But no matter. "The integrity of the Empire will survive this transplantation without shock or disaster." And India would still be the jewel in the crown. "British rule in India will endure. By 2030, whatever means of self-government India has achieved, she will still remain a loyal and integral part of the British Empire."
Among literary theorists and historians, it is a truism that novels set in the future say a great deal about the time they were written and little or nothing about the future. Thanks to status quo bias, the same is often true of expert predictions, a fact that becomes steadily more apparent as time passes and predictions grow musty with age. The World in 2030 is a perfect demonstration. Read today, it's a fascinating book full of marvelous insights that have absolutely nothing to do with the subject of its title. In fact, as a guide to the early twenty-first century, it is completely worthless. Its value lies entirely in what it tells us about the British political class, and one British politician, in 1930. The same is true of Rising Sun and so many of the other books written during the panic about Japan Inc. They drew on the information and feelings available to the authors at the time they were written, and they faithfully reflect that moment, but the factors that actually made the difference in the years that followed seldom appear in these books. The Internet explosion was a surprise to most, as was the rise of Silicon Valley, abetted by the shift in high-tech development from hardware to software. The turnaround in the American budget and the decline of Japan's banks were dramatically contrary to current trends. A few observers may have spotted some of these developments coming, but not one foresaw them all, much less understood their cumulative effect. And perhaps most telling of all, these books say little or nothing about one of the biggest economic developments of the 1990s and the first decade of the twenty-first century: the emergence of China and India as global economic powers. In Clyde Prestowitz's Trading Places, the Asian giants are ignored. In Jacques Attali's Millennium, the existence of China and India was at least acknowledged, which is something, but Attali was sure both would remain poor and backward.
Their role in the twenty-first century, he wrote, would be to serve as spoils in the economic war waged by the mighty Japanese and European blocs; or, if they resisted foreign domination, they could instigate war. Attali did concede that his forecast would be completely upended if China and India "were to be integrated into the global economy and market," but "that miracle is most unlikely." Lester Thurow did even worse. In Head to Head, he never mentioned India, and China was dismissed in two short pages. "While China will always be important politically and militarily," Thurow wrote, "it will not have a big impact on the world economy in the first half of the 21st century. . . ."
Why didn't the experts and pundits see a problem with what they were doing? Trends end, surprises happen; everyone knows that. And the danger of running a straight-line projection of current trends into the future is notorious. "Long-term growth forecasts are complicated by the fact that the top performers of the last ten years may not necessarily be the top performers of the next ten years," noted a 2005 Deutsche Bank report. "Who could have imagined in 1991 that a decade of stagnation would beset Japan? Who would have forecast in the same year that an impressive rebound of the U.S. economy was to follow? Simply extrapolating the past cannot provide reliable forecasts." Wise words. Curiously, though, the report they are found in predicts that the developed countries whose economies will grow most between 2005 and 2020 are, in order, Ireland, the United States, and Spain. That looks a lot like a straight-line projection of the trend in 2005, when those three countries were doing wonderfully. But three years later, when Ireland, the United States, and Spain led the world in the great cliff-plunge of 2008, it looked like yet another demonstration that "simply extrapolating the past cannot provide reliable forecasts."
Daniel Burstein's Yen! has a chart showing Japanese stock market levels steadily rising for the previous twenty years under the headline "Tokyo's One-Way Stock Market: Up." That's the sort of hubris that offends the gods, which may explain why, three years later, Japanese stocks crashed and the market spent the next twenty years going anywhere but up. One would think it's obvious that a stock market cannot go up forever, no matter how long it has been going that way, but the desire to extend the trend line is powerful. It's as if people can't help themselves, as if it's an addiction. And to understand an addiction, it's back to the brain we must go.
PICK A NUMBER.
You are in a university lab where a psychologist is talking. Already you are on high alert. There's a trick question coming because that's what they do, those psychologists. But you're ready.
The psychologist shows you a wheel of fortune. He gives it a spin. The wheel whips around and around, slows, and finally the marker comes to rest on a number. That number is randomly selected. You know that. It means nothing. You know that too.
Now, the psychologist says, What percentage of African countries are members of the United Nations?