On the day (perhaps not long from now) when the entire internet crashes, no one will be able to say that we didn’t see it coming. The denial-of-service attack on the morning of Oct. 21—which shut down Twitter, Spotify, Netflix, and a dozen other websites—offers a preview, in miniature and against relatively trivial targets, of how the day of doom might unfold.
Fred Kaplan
Fred Kaplan is the author of Dark Territory: The Secret History of Cyber War.
In the attack, someone (identity as yet unknown) flooded Dyn DNS—a New Hampshire–based firm that operates as a kind of switchboard for the internet—with so many bogus requests that its servers were overwhelmed, shutting down not only its own services but those of the other sites as well, at least for several hours.
The weapons amassed for this attack were, literally, toys—baby monitors, music servers, web cameras, and other home devices that connect to one another, automatically sending and receiving data through the internet. Hence the name of this emerging network—the Internet of Things. The saboteur had hacked into hundreds of thousands of these devices and infected them with malware, so that, at a designated moment, all of them sent messages to the real target—in this case, Dyn DNS—and shut it down.
The malware was simple: a program called Mirai, which, in the words of an alert sent out by the Department of Homeland Security, “uses a short list of 62 common default usernames and passwords to scan for vulnerable devices.”
This is what few consumers have understood about the Internet of Things: All of these nifty devices are computers with, in some cases, quite powerful data processors. And, like all computers, their operating systems come preprogrammed with usernames and passwords. The default usernames and passwords tend to be obvious: 12345, username, password—more than covered by the 62 entries on Mirai’s scan list.
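To make that concrete, here is a minimal, hypothetical Python sketch of the kind of check a home-network audit tool might run, flagging devices whose credentials still match a short list of common factory defaults. The device inventory and the credential list are illustrative stand-ins, not Mirai’s actual 62-entry table.

```python
# Illustrative sketch: flag home devices still using factory-default credentials.
# The device inventory and the default list are hypothetical examples,
# not Mirai's actual 62-entry table.

COMMON_DEFAULTS = {
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "12345"),
    ("user", "user"),
    ("admin", "1234"),
}

# A pretend inventory of devices and the credentials they are currently using.
devices = {
    "web-camera": ("admin", "admin"),
    "baby-monitor": ("root", "12345"),
    "thermostat": ("owner", "Tr0ub4dor&3"),
}

for name, credentials in devices.items():
    if credentials in COMMON_DEFAULTS:
        print(f"{name}: still on a factory-default login -- change it")
    else:
        print(f"{name}: credentials are not on the common-default list")
```

A scan this simple is the point: a device that has never had its shipped login changed is trivially easy to find and enlist.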
However, unlike most computers, Internet of Things devices are on all the time, and there’s no user interface through which even tech-savvy consumers can monitor what the machines are doing. As one Silicon Valley technologist (who requested anonymity because he works for a firm that makes some of these devices) put it, “My TiVo needs an internet link only to download TV guide metadata every fortnight, but as far as I know it’s also working overtime serving viruses or DNS attacks.”
The technologist went on: “Who’s to know what’s running on your interlinked Nest thermostat or your refrigerator? Borderline impossible. And all that stuff is interconnected to websites and accounts with credit cards and other attractive targets for hackers. Given the radical increase in traffic that these devices generate, it will get easier to hide malicious streams of network traffic in the noise.”
There are now about 10 billion IoT devices in the world. (The estimates range from 6.4 billion to 17.6 billion, depending on how the term is defined.) Some estimate that, by 2020, the figure will climb to 50 billion. That’s a lot of bots that a hacker can enslave for an attack.
Back in 1996, Matt Devost, Brian Houghton, and Neal Pollard wrote an eerily prescient paper called “Information Terrorism: Can You Trust Your Toaster?” They foresaw an era when household appliances would all be wired to the internet. Life would be more convenient, time would be saved—and everything you own would be vulnerable to hacking.
Devost, who went on to run Red Team operations in NATO war games and is now managing director of Accenture Security, says that, if anything, he understated the threat. He saw the phenomenon—and people today continue to see the Internet of Things—as posing “microthreats”: hackers messing with our personal stuff, turning our lives upside down, possibly even killing us. See, for instance, the experiment just last year in which Charlie Miller, a former National Security Agency employee, and fellow researcher Chris Valasek hacked into the onboard networks of a Jeep Cherokee and commandeered its steering wheel, accelerator, brakes—everything in the vehicle.
But in their paper of 20 years ago, Devost and his co-authors did not foresee “macrothreats”: hackers aggregating “smart” devices to mess with society. “Imagine it’s one of those mid-August days,” Devost said, “100 degrees with rolling brownouts. What if a hacker ordered the IoT devices in a few large commercial buildings to turn up their air conditioners to max level? He could do real damage to the power grid.” And even this scenario is minor compared with the sort of attack presaged in last week’s incident—a hacker enslaving hundreds of thousands (or even millions or billions) of IoT devices to launch a massive denial-of-service attack that shuts down, say, a whole city’s power generators or some other facility in the nation’s critical infrastructure.
That phrase “critical infrastructure” came into vogue in the late 1990s—to refer to power grids, banking and finance, oil and gas, transportation, water, emergency services, and other sectors on which a modern society depends—when a presidentially appointed panel, known as the Marsh Commission, discovered that all of those sectors were vulnerable to hackers.
Over the previous decade, the private corporations controlling these sectors all started to realize the enormous savings involved in hooking up their control systems to this new thing called the internet. Money transfers, energy flows, train switches, dam controls—they could all be monitored and managed swiftly, automatically, efficiently. No one considered the possibility that bad guys could hack into those networks and redirect the money, energy, trains, or water for criminal or destructive purposes.
The dangers should have been clear even then. As far back as 1967, at the very dawn of the internet, when its military precursor, the ARPANET, was about to roll out, a man named Willis Ware—head of the Rand Corporation’s computer science department and a member of the NSA’s scientific advisory board—wrote a paper warning of its implications. Once you put information on a network—once you make it accessible online from multiple, unsecured locations—you create inherent vulnerabilities, Ware concluded. You won’t be able to keep secrets anymore.
When I was researching my book Dark Territory: The Secret History of Cyber War, I asked Stephen Lukasik, who was running the ARPANET program at the Pentagon’s Advanced Research Projects Agency, whether he’d read Ware’s paper. Sure, Lukasik told me. He’d taken the paper to his team members, who read it too and begged him not to saddle them with a security requirement. It would be like telling the Wright brothers that their first plane at Kitty Hawk had to fly 50 miles while carrying 20 passengers. Let’s do this step by step, they said. It had been hard enough to get the system to work. Besides, the Russians wouldn’t be able to build something like this for decades.
It did take decades—about three decades—but, by then, vast systems and networks had sprouted up in the United States and much of the world with no provision for security. This was the bitten apple in the digital Garden of Eden. The sin was built into the system from its conception.
Corrections could have been made, security provisions could have been built in, once the utilities started hooking up the nation’s critical infrastructure to the internet—or, if they’d known of the risks, they might have decided not to get wired in the first place. And now, with the Internet of Things, we’ve begun to extend the mistake into our homes, into the stuff of our everyday lives.
Some remedial steps have been taken even since this past Friday. The Chinese firm Hangzhou Xiongmai Technology Co., Ltd., which makes components for some of the surveillance cameras hacked in last week’s denial-of-service attack, announced that it was recalling products from the United States. Dahua Technology, another Chinese company, offered firmware updates on its websites for customers who had bought its cameras and video recorders. But these are small measures, not likely to have much effect even on these specific products, much less on the devices made in the past several years or in the years to come.
In the late 1990s, when the utilities’ vulnerabilities first came to light, Richard Clarke, then the White House counterterrorism chief, proposed imposing mandatory cybersecurity requirements on all industries connected to critical infrastructure. The companies lobbied against his plan, as did President Bill Clinton’s economic advisers, who warned that the measures would cripple these companies’ competitiveness in the global market. Clarke also suggested putting the government and critical-infrastructure industries on a parallel internet, which would be wired to certain agencies that could detect intrusions. This plan was leaked and denounced as “Orwellian.”
“If we could go back 30 years, we would probably do things differently,” Matt Devost reflected. We shouldn’t wait till it’s too late, he added, to put some limits on the Internet of Things. For instance, he suggested, the United States should impose regulations requiring all IoT devices to come with locks, so that consumers can’t activate them without first changing the default password—and perhaps requiring the new password to be long and complex enough to resist simple password-scanning malware like Mirai.
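As a rough illustration of what such a rule could look like inside a device’s first-run setup, here is a hypothetical Python sketch that refuses to activate the device until the factory password is replaced with something longer and more varied. The length threshold and the character-class rule are arbitrary choices made for this example, not drawn from any actual regulation or product.

```python
# Hypothetical first-boot check: the device stays locked until the factory
# password is replaced with something reasonably long and varied.
# The thresholds below are arbitrary choices for this sketch, not a real standard.

import string

FACTORY_PASSWORD = "12345"   # stand-in for a shipped default
MIN_LENGTH = 12

def acceptable(new_password: str) -> bool:
    """Reject the factory default, short passwords, and low-variety passwords."""
    if new_password == FACTORY_PASSWORD or len(new_password) < MIN_LENGTH:
        return False
    classes = [
        any(c.islower() for c in new_password),
        any(c.isupper() for c in new_password),
        any(c.isdigit() for c in new_password),
        any(c in string.punctuation for c in new_password),
    ]
    return sum(classes) >= 3   # require at least three character classes

def activate_device(new_password: str) -> bool:
    if not acceptable(new_password):
        print("Device remains locked: choose a longer, more varied password.")
        return False
    print("Password accepted. Device activated.")
    return True

activate_device("password")          # rejected: too short, too common
activate_device("Correct-Horse-42")  # accepted under these example rules
```

The design point is the one Devost makes: a device that cannot be switched on with its shipped credentials would simply not show up on a scan list like Mirai’s.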
When companies started putting power grids on the internet, the net itself was new and the art of hacking hadn’t spread. Maybe a few hundred people in the world knew how to exploit its vulnerabilities; now a few million do.
It’s important to understand that much more is at stake than a brief shutdown of Twitter. As Bruce Schneier, a prominent cybersecurity analyst, put it in a blog post that he published in September, a month before this recent attack, “Someone is learning how to take down the Internet.”
He noted that several attacks of precisely this sort—but smaller, the kind of incidents that specialists see but that elude mainstream notice—have been occurring over the past couple of years. This probably isn’t the work of criminals or mischievous researchers; they wouldn’t be interested in the targets or capable of mounting attacks of such scope. Rather, Schneier wrote, the whole trend “feels like a nation’s military cyber-command trying to calibrate its weaponry in the case of cyberwar. It reminds me of the U.S.’s Cold War program of flying high-altitude planes over the Soviet Union to force their air-defense systems to turn on,” so the U.S. Air Force could map the capabilities of Soviet radars and figure out how to elude them.
Is that what’s happening now? Is some nation-state figuring out how many IoT devices it takes to shut down larger chunks of the internet, and thus our society, as a whole? It sounds like paranoid science fiction from the 1960s, but the writers of that stuff were extrapolating the future from what was happening at the time, and in this case, they might have been on target.