Earlier this month, New York magazine published a riveting and frightening look at the future of the planet we call home.
Now that global warming is well underway, we are in for an apocalyptic awakening, and "parts of the Earth will likely become close to uninhabitable, and other parts horrifically inhospitable, as soon as the end of this century," the writer, David Wallace-Wells, argues.
The article captured the public's attention, quickly becoming the most-read piece in the magazine's history. But many critics, including several climate scientists, argued that it was flawed because Wallace-Wells focused on the worst-case scenario, a pessimist's take.
Why feed the public a too-bleak picture of the future? Why frighten people into action, rather than inspire them?
Because sometimes, the worst case is the only thing that prompts us to get anything done.
I know this because I've studied the last time that governments, businesses and ordinary citizens joined together to combat a complex, man-made problem that threatened to wreak global havoc in the distant future.
It was a problem that would cost hundreds of billions of dollars to fix, whose technical basis was not immediately obvious to most non-specialists and which some even doubted was real at all.
It was also a fight that we won - and that we ought to be proud of winning, since it offers a blueprint for combating the many catastrophes, a warming planet among them, that may arise from the technologies underpinning civilisation.
I speak, of course, of Y2K.
Don't laugh. There are important lessons in the unlikely story of how the world came to mitigate the effects of a ticking time bomb under modern civilisation.
The primary lesson is this: If you want to prompt expensive, collective global action, you need to tell people the absolute worst that could happen. We humans do not stir at the merely uncomfortable. Only the worst case gets us going.
People tend to remember Y2K as a joke, and not a good one. Way back in the last century, computer scientists and IT guys began warning that a strange computer bug lay dormant in just about every computer in the world.
When the date turned over from 1999 to 2000, computers would go haywire, they said, leading to all manner of annoyances, if not global catastrophe.
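The bug itself was mundane. To save scarce memory, many older systems stored the year as two digits - "99" for 1999 - so when the calendar rolled over, "00" could be read as 1900 and date arithmetic could quietly go wrong. Here is a minimal, hypothetical sketch (illustrative only, not drawn from any real system) of the kind of failure remediation teams were hunting for:

```python
# Illustrative sketch only: many older systems stored years as two digits
# ("99" for 1999), so comparisons and arithmetic broke when "00" arrived.

def years_since(start_yy: int, current_yy: int) -> int:
    """Naive two-digit date arithmetic of the sort Y2K remediation targeted."""
    return current_yy - start_yy

# An account opened in 1995, checked in 1999: works as expected.
print(years_since(95, 99))   # 4

# The same calculation on January 1, 2000 ("00") goes negative -
# the kind of silent error that could ripple through billing,
# interest and scheduling systems.
print(years_since(95, 0))    # -95
```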
At first, no one believed them. As I discovered when I investigated Y2K for its 10th anniversary, the technicians who discussed the problem in the early 1990s were often mocked for their alarmism. The year 2000 was a long time away, and people shrugged.
'Curse of the age'
But then, in the mid-1990s, a sense of urgency took hold. The tech industry was booming and the worldwide web was becoming the white-hot centre of innovation. So it began to make sense that a computer bug could take down the world.
But mostly, what happened was that the narrative changed. Instead of couching the problem in the anodyne language of software, proponents of action began to describe in concrete and frightening terms how the bug could alter modern life. They painted the worst-case picture. And the worst case started to sound pretty darned bad.
A letter that New York Senator Daniel Patrick Moynihan sent to President Bill Clinton in 1996 illustrates this tack. Pointing to a government study that "substantiates the worst fears of the doomsayers," he warned that the bug could cripple the IRS [US tax office] and the Social Security Administration, prompting economic chaos.
After outlining a series of recommendations - involving enormous organisational and financial costs - Moynihan ended with a stark warning: "The computer has been a blessing; if we don't act quickly, however, it could become the curse of the age."
Prompted by media coverage of potential devastation, governments and businesses across the globe got in gear.
The US spent $US100 billion to address the bug, according to a 2000 report by a Senate committee that studied the effort. (All but $US8.5 billion was spent by companies, not the government.) Across the globe, about $US580 billion went to fixing Y2K.
Working overtime
The effort was monumental. In the two years before the turn of the century, most of the United States' large companies and government agencies - many of which had been running on software that was decades old - worked overtime to examine and rid their code of the software bug.
The alarm proved useful. When companies looked at their code, many found they were more vulnerable to Y2K than they'd previously thought, the Senate report found. Many also came up with ways to mitigate disaster in case their fixes didn't work: Local governments rebuilt and tested emergency management systems, which later proved crucial for New York after the September 11 terrorist attacks.
The fight against Y2K was also close to unprecedented. Throughout US history, Americans have been good at getting things done after the stakes have become clear.
They moved mountains after the Great Depression and Pearl Harbor. But Y2K is one of the precious few examples where they mobilised to fight something looming on the horizon - the same kind of mobilisation we now need for climate change.
On December 30, 1999, Long Island Power Authority employees prepared for possible outages. Photo: Vic DeLucia/The New York Times
One popular misconception about Y2K is that it was a wasted effort. After all, when the clocks turned over on January 1, 2000, there were scattered problems, but the world didn't end. And there is some evidence that money was misspent.
But several of the government and outside analysts who have studied the response - including the Senate task force - concluded that on the whole, the effort was justified, given what we knew about the bug beforehand, and especially considering the United States' particular vulnerability to tech problems.
Precautionary principle
The best analysis of the effort I've read came from two Australian researchers, John Phillimore and Aidan Davison, who argued in a 2002 paper that fighting Y2K was an example of the "precautionary principle," an idea well-known in the environmental movement.
It essentially boils down to this: It's better to be safe than sorry, especially if the sorry end of the spectrum involves the end of the world as we know it.
And the way to get people to understand that, Phillimore and Davison wrote, is to explain the worst case.
"Y2K shows that the way problems are portrayed is crucial to how solutions are approached," the researchers wrote in their University of Tasmania paper. "Small, discrete problems are easier to understand than 'slow-burn', incremental ones. Providing people with specific examples of things that might go wrong is more effective than general warnings."
They added: "This might be particularly pertinent to debates on global warming." Indeed.
The New York Times