Gregory Falco, assistant professor of civil and systems engineering and a member of the Johns Hopkins Institute for Assured Autonomy, discusses his research on spotting cyber vulnerabilities and defending against them in the Summer 2022 issue of JHU Engineering magazine. The full text of the article appears below.

Threats to cybersecurity loom large in today’s world, putting us all at risk of being exploited by bad actors. Whiting School experts are focused on spotting cyber vulnerabilities and defending against them—a never-ending task, where the villains’ tactics are evolving just as rapidly as the technology they exploit.

German power company Enercon operates an array of 5,800 wind turbines that can generate up to 11 gigawatts of power when operating at full capacity. But on the morning of Feb. 24, 2022, those turbines went silent.

The timing—the same day Russia began its invasion of Ukraine—was not a coincidence. “They’re all connected to a satellite station that was interfered with by Russia as part of this conflict,” says Gregory Falco, an assistant professor of civil and systems engineering and a member of the Johns Hopkins Institute for Assured Autonomy. The evidence suggests that the attack was primarily focused on disrupting Ukrainian lines of communication and that Enercon’s turbines—which were controlled by the same satellite—were merely collateral damage. “The communication window was shut down, so it couldn’t communicate with the turbines, and the turbines died,” says Falco.

These kinds of cyberattacks are now part and parcel of modern espionage and conflict, notes Anton Dahbura PhD ’84, co-director of the IAA and executive director of the Johns Hopkins Information Security Institute. “It’s pretty easy for a country to build offensive cyber capability,” he says. “It’s also easy to make attribution murky—if they just want to damage their neighbor’s banking system but then disavow responsibility.” Such attacks can be part of a military offensive, as is now being seen in the Russia-Ukraine war, but they can also take the form of lesser incursions intended to probe a rival’s weaknesses or sow chaos.

High-profile, national-scale cybersecurity threats may grab headlines, but there are also myriad ways in which the general public can potentially fall prey to exploitation by bad actors online. Although some of these attacks may be delivered through predictable routes, such as our phones or laptops, we also live our lives surrounded by less obvious—but equally vulnerable—gateways to the internet. “Pretty much anything electronic that you buy nowadays comes with an app for it,” says Avi Rubin, professor of computer science and technical director of the ISI.

Research at the ISI and IAA is focused on identifying and defending against such vulnerabilities at every level of America’s digital infrastructure—but this is a challenging and never-ending task, where the villains’ tactics are evolving just as rapidly as the technology they exploit.

Long before Enercon’s turbines went offline, Dahbura was keeping a close eye on the simmering tension between Ukraine and Russia. In collaboration with Johns Hopkins cybersecurity specialist Terry Thompson, his group runs the Cyber Attack Predictive Index, an online “leaderboard” that ranks the likelihood of one nation dispatching a hacker-led offensive against another. “That conflict was at the top or close to the top of our index for quite a while,” says Dahbura, pointing out prior incursions such as Russia’s high-profile, debilitating attack on Ukraine’s power grid back in 2015.

There are numerous other international disputes that have the potential to play out in the cybersphere rather than as conventional warfare. For example, Egypt is currently at odds with Ethiopia over a dam project that it believes will interfere with Egypt’s water access, and in 2020, Egyptian hackers took over various Ethiopian government websites to issue a series of pointed threats.

The CAPI team assesses the likelihood of each of these conflicts erupting into a cyberattack based on five factors for any given pair of potential “aggressor” and “defender” states: the aggressor’s motivation to mount such an attack, its capacity to do so, its fear of retribution, whether cyberwarfare is part of a broader national security strategy, and the vulnerabilities of the defender. Each factor is given a score from one to five, producing a total that reflects the relative likelihood of a future incident. Higher scores indicate higher risk, and the Russia-Ukraine dyad recently achieved the dubious honor of receiving the first “25” score since CAPI’s inception in late 2020.
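
The arithmetic behind the rankings is simple enough to sketch in a few lines of Python. The snippet below is a hypothetical illustration only; the factor names paraphrase the article, and the review board’s actual rubric may differ.

```python
# Hypothetical sketch of CAPI-style scoring. Factor names paraphrase
# the article; the actual rubric is maintained by the CAPI review board.

FACTORS = [
    "aggressor_motivation",
    "aggressor_capacity",
    "low_fear_of_retribution",
    "cyber_in_national_strategy",
    "defender_vulnerability",
]

def capi_score(ratings: dict) -> int:
    """Sum five 1-to-5 factor ratings into a 5-to-25 risk score."""
    for name in FACTORS:
        if not 1 <= ratings[name] <= 5:
            raise ValueError(f"{name} must be rated 1 to 5")
    return sum(ratings[name] for name in FACTORS)

# A dyad rated 5 on every factor hits the 25-point ceiling that the
# Russia-Ukraine pairing reached.
print(capi_score({name: 5 for name in FACTORS}))  # 25
```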

The index is primarily intended as a public resource, to inform and educate general audiences about this rapidly evolving component of international relations and security, but Dahbura also sees the CAPI program as an important educational opportunity. All the rankings are generated by a review board composed of students from the Whiting School’s computer science program and the Krieger School of Arts and Sciences’ international studies program, with each student assigned a particular region of the world to monitor.

“I’m really big on empowering undergraduates,” Dahbura says. “They have so much talent, so much potential, and really relish the opportunity to be involved in these kinds of efforts.” After the students present their findings at a weekly meeting with Thompson and Dahbura, the group revises the CAPI rankings accordingly. “Sadly, there are many more additions being made to the list than deletions,” says Dahbura.

One commonly used weapon in the cyberattack arsenal is ransomware, which employs malevolent code to lock up or steal a victim’s files. Depending on the nature and sensitivity of the ensnared data, the attackers may threaten to either erase or broadly disseminate their ill-gotten goods unless paid a sizable bounty—typically in some form of cryptocurrency.

Falco’s group is focused on limiting the impact of such attacks in the context of aerospace systems, utilities, and other essential services. “We try to make sure that things that are operationally critical to different infrastructures are secure,” he says, citing such examples as energy providers and the aerospace industry. Early 2021 saw one such attack, when a group of Russia-based hackers known as DarkSide managed to infiltrate and essentially shut down the computer network of the Colonial Pipeline, which provides 45% of the fuel supply for the eastern U.S.

In this particular attack, the company promptly paid the ransom—$4.4 million worth of Bitcoin—and received a decryption tool to recover its lost data. But Falco warns that modern ransomware attacks have taken a darker and more nihilistic turn, at least at the level of state-sponsored or -approved incursions. “They’re not trying for the money,” he says. “They’re really going after the control, and they’re trying to shut you down and make chaos—and they’re pretty good at it.” Even a day or two without service could be disastrous for a financial service company, air traffic control system, law enforcement agency, or energy provider.

A key challenge in fending off ransomware is that it primarily exploits human vulnerabilities, such as an employee being tricked into clicking a link that installs malware. Smarter network design could help limit the damage, however. Falco highlights “zero trust” network architectures as one solution.

“That basically means that you should always assume that someone’s in your system when you’re doing something and act with the knowledge that you can’t trust even your own systems for things,” he says. This is in contrast to conventional architectures, where trust is baked in and infiltration of one node can give a bad actor ready access to the rest of the network.
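
In code, the difference between the two philosophies can be reduced to a toy sketch. Everything below is illustrative (real deployments rely on identity providers, device attestation, and policy engines), but it captures the core idea: location on the network no longer confers trust.

```python
# A minimal, illustrative contrast between perimeter trust and zero trust.
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    token_valid: bool        # did authentication succeed?
    device_compliant: bool   # is the device managed and patched?
    resource: str

# Least-privilege policy: which users may touch which resources.
POLICY = {("alice", "turbine-telemetry")}

def perimeter_allows(req: Request, on_internal_network: bool) -> bool:
    # Conventional design: being inside the network implies trust, so
    # one compromised node can reach everything else.
    return on_internal_network

def zero_trust_allows(req: Request) -> bool:
    # Zero trust: re-verify identity, device, and authorization on
    # every request, regardless of where it originates.
    return (req.token_valid
            and req.device_compliant
            and (req.user, req.resource) in POLICY)

intruder = Request("mallory", token_valid=False,
                   device_compliant=True, resource="turbine-telemetry")
print(perimeter_allows(intruder, on_internal_network=True))  # True: waved through
print(zero_trust_allows(intruder))                           # False: denied
```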

But Falco also warns that there is no single strategy that guarantees protection and that vulnerable organizations should pursue multiple parallel strategies and backup plans that evolve along with the threats they encounter. “You have to just assume you’re going to get hit, with a lot of cuts over a long period of time,” he says. “And you just have to have a whole bunch of ways around the way you’re going to get hit.”

Private individuals do still fall prey to ransomware from time to time, and Falco notes that the ability to purchase prewritten ransomware code on the so-called dark web can enable attacks of opportunity by dilettante hackers. These are the exception rather than the rule, however. “The days of ransomware gangs attacking single individuals are probably behind us,” says Joseph Carrigan, senior security engineer at the IAA and the ISI.

But individuals must be mindful of other vulnerabilities that could expose them to risk from hackers in their day-to-day lives. The rapid proliferation of web-enabled Internet of Things devices is of particular concern.

“There are all these devices that we just buy and plug in, and we don’t really think about what constitutes a ‘thing’ in the Internet of Things,” says Carrigan. A particularly savvy and privacy-minded individual might be aware of the vulnerabilities associated with a “smart” security camera or baby monitor while also forgetting about their smart TV, humidifier, and meat thermometer. “Early on, a lot of these things were just pushed out without any consideration for security, creating ample opportunities for exploitation,” says Carrigan.

Some attacks are straightforward violations of privacy, like hijacking device microphones or cameras to record individuals without their knowledge. But Rubin also notes that attacks on these vulnerable devices can expose every other device that happens to be on that same Wi-Fi network, including computers, tablets, or phones with sensitive data.

“If someone compromises a device that’s on the inside of a network, like an IoT coffee maker or something, now they have the access to the network that an insider would have,” he says. These intrusions can even be used to quietly rally armies of internet-enabled “bots,” which can then be used to launch far more aggressive “distributed denial of service” attacks that knock entire businesses or even government institutions offline.

“We think we own our own devices, but maybe the device is completely under the control of the Russians or Chinese or someone else,” says Rubin. “That’s the type of attack that we’ve seen.”

Rubin’s group is part of a multi-institutional, $10 million research initiative called Security and Privacy in the Lifecycle of IoT for Consumer Environments, which aims to identify and counter vulnerabilities in these increasingly ubiquitous smart devices. One of the initiative’s current priorities is the development of tools to assist in the detection and discovery of networked devices in a given environment—something that can be particularly important with regard to privacy and security in shared living spaces.
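
A rough sense of how such discovery works can be had with nothing more than the Python standard library. The sketch below simply probes a home subnet for common IoT service ports; it is an assumption-laden toy (the subnet and port list are invented, and research-grade tools lean on ARP, mDNS, and UPnP fingerprinting instead), and it should only ever be pointed at a network you own.

```python
# Toy IoT discovery: TCP connect probes across a (hypothetical) home
# subnet. Only run this against a network you own.
import socket

SUBNET = "192.168.1."          # assumption: a typical home address range
PORTS = [80, 443, 554, 1883]   # web UIs, RTSP cameras, MQTT brokers

def answers(host: str, port: int, timeout: float = 0.2) -> bool:
    """Return True if the host accepts a TCP connection on the port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for i in range(1, 255):
    host = SUBNET + str(i)
    open_ports = [p for p in PORTS if answers(host, p)]
    if open_ports:
        print(f"{host} answers on {open_ports}: possibly an IoT device")
```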

In some cases, the threat to your privacy could literally be staring you in the face—or perhaps hovering over your backyard. There are well over half a million consumer-operated drones in the U.S., and although most are engaged in harmless hobby videography, some are being deployed for more nefarious and invasive purposes.

Lanier Watkins, an associate research scientist at the ISI, an instructor in Johns Hopkins’ Engineering for Professionals programs, and a member of the senior professional staff at the Johns Hopkins Applied Physics Laboratory (APL), cites the hypothetical example of a backyard pool party where teenagers are lounging and having fun—but a neighbor’s drone is surreptitiously recording the proceedings from the adjacent airspace. Watkins notes that the current market-leading manufacturer of consumer drones, DJI, offers models with an “active track” mode, which allows them to be trained on and autonomously follow a subject of interest without requiring Wi-Fi support or human intervention. “The drones are controlling themselves,” he says. This is a great feature for recording a wedding or capturing skateboarding tricks—but also, unfortunately, for would-be stalkers.

In a 2020 study, Watkins and colleagues set about identifying countermeasures against such unwanted aerial snooping. One strategy that proved remarkably effective was a blast of bright light from an LED spotlight. “If that spotlight is shone directly at the drone for three to five seconds, that causes the drone to kick out of autonomous mode … and it just sits there hovering,” says Watkins. He adds that a similar effect could probably be achieved with a very bright flashlight.

As an alternative, his team was also able to exploit a restraining mechanism built into consumer drones that prevents them from entering airspace in the vicinity of airports or high-security installations, like military bases or the White House.

“It’s called geofencing,” says Watkins. “And if you try to fly there, it will land or it won’t respond.” Using a device called a HackRF One, his team was able to send signals that tricked the onboard GPS systems of DJI drones into thinking they had entered forbidden territory, bringing their autonomous surveillance to an end. Working in collaboration with students at the U.S. Naval Academy, Watkins has also assembled a prototype device that can both detect and immobilize autonomous drones using this kind of GPS “spoofing” attack.
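
The logic being exploited is easy to picture: the drone’s firmware compares its GPS fix against a list of no-fly zones, roughly as in the sketch below (the zone data, radius, and structure are invented for illustration and are not DJI’s actual implementation). Feed it a spoofed fix that lands inside a zone, and the drone grounds itself.

```python
# Illustrative geofence check; zone data and logic are invented, not DJI's.
from math import radians, sin, cos, asin, sqrt

NO_FLY_ZONES = [
    # (latitude, longitude, radius in km)
    (38.8977, -77.0365, 25.0),   # e.g., airspace around the White House
]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def geofence_blocks(lat, lon):
    return any(haversine_km(lat, lon, zlat, zlon) <= radius
               for zlat, zlon, radius in NO_FLY_ZONES)

# A spoofed GPS fix that places the drone "inside" a zone forces it to
# land, which is exactly the effect the spoofing attack induces.
print(geofence_blocks(38.90, -77.03))  # True: the drone grounds itself
```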

However, Watkins also cautions that the same tactics that defend against improperly used drones could also be used to knock out and steal an innocent bystander’s expensive hardware. And as more and more autonomous systems enter the consumer marketplace, cybersecurity researchers will need to be prepared for increasingly sophisticated attacks that either subtly manipulate or overtly sabotage those systems.

Machine learning has evolved from being just another flashy buzzword to become the backbone of software tools employed in diverse sectors, including health care, finance, security, and transportation. These algorithms are fed huge amounts of training data, which allow them to identify complex patterns that can then be used to analyze and interpret input collected in “real-world” settings. This could include teaching programs to suggest appropriate therapeutic strategies based on a patient’s diagnostic data or educating autonomous vehicles in how to safely follow the rules of the road.

But there are also numerous ways to game these systems, says Yinzhi Cao, an assistant professor of computer science and member of the ISI, whose work is focused on identifying and learning how to counter such “adversarial machine learning” strategies. For example, one can “pollute” the training data in a way that skews how the algorithm responds. Cao cites the example of Microsoft’s Tay chatbot experiment from 2016, which was deliberately trained by ill-intentioned Twitter users to spew racist and anti-Semitic abuse. As unpleasant as this experience was, the same style of attack could have far worse consequences in the context of medical software, for example. “If your diagnosis is wrong, then that could be catastrophic,” says Cao.
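
A toy example makes the mechanics concrete. Below, a deliberately simple nearest-centroid classifier is trained on synthetic data, then retrained after an attacker flips labels in the training set. The numbers are invented and real poisoning attacks are far subtler, but the decision flips the same way.

```python
# Toy training-data poisoning against a nearest-centroid classifier.
def train(data):
    """data: list of (feature, label) pairs. Returns per-class centroids."""
    by_label = {}
    for x, y in data:
        by_label.setdefault(y, []).append(x)
    return {y: sum(xs) / len(xs) for y, xs in by_label.items()}

def predict(model, x):
    return min(model, key=lambda label: abs(x - model[label]))

clean = [(1.0, "benign"), (2.0, "benign"),
         (8.0, "malicious"), (9.0, "malicious")]
print(predict(train(clean), 7.5))     # "malicious", as expected

# The attacker relabels the malicious samples as benign and plants a decoy.
poisoned = [(x, "benign") for x, _ in clean] + [(1.5, "malicious")]
print(predict(train(poisoned), 7.5))  # "benign": the poisoning worked
```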

Other attacks take advantage of how machine learning algorithms perform pattern recognition. For example, one can use “patches” to manipulate images in ways that confuse computer vision software, leading the algorithm to interpret those images incorrectly. Even subtle tricks can have surprising effects; in a 2017 study, Cao and colleagues found that changes in lighting conditions could cause the image analysis algorithms used by an experimental autonomous vehicle to make a potentially deadly mistake. “You could make a car crash,” says Cao. “Like if it was going left, but then you make it go right.”
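
A stripped-down numeric analogue shows why such attacks work. Against a fixed linear model, nudging every input feature by a small step in the direction that raises the model’s score is enough to flip its prediction; image-patch attacks apply the same idea to pixels. All the numbers here are made up for illustration.

```python
# Toy adversarial example against a fixed linear classifier.
w = [2.0, -1.0, 0.5]   # frozen model: label is positive if w . x > 0
x = [0.3, 0.9, 0.2]    # clean input, scored negative

def score(features):
    return sum(wi * xi for wi, xi in zip(w, features))

def sign(v):
    return 1.0 if v > 0 else -1.0

eps = 0.1  # per-feature perturbation budget: small and hard to notice
x_adv = [xi + eps * sign(wi) for xi, wi in zip(x, w)]

print(round(score(x), 2))      # -0.2 -> classified negative
print(round(score(x_adv), 2))  #  0.15 -> classified positive: flipped
```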

One of the best ways to defeat adversarial behavior is to think like your adversary. For example, Cao’s team has found that it can make machine learning algorithms more robust by doing its best to deceive and mislead the algorithms. But it is difficult to anticipate every failure mode for a complex system. “As recently as one year ago, we were up to 40% to 50% accuracy in terms of defending against adversarial examples,” says Cao. “That’s not very high, and it’s still an open problem that we need people to solve.”
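
This defensive idea, often called adversarial training, can also be caricatured in a few lines: generate perturbed copies of the training data and train on those too. The toy one-dimensional perceptron below is an invented illustration, not the team’s actual method, but it shows the principle of a model hardened by its own attacker.

```python
# Toy adversarial training with a 1-D perceptron (invented illustration).
def train_perceptron(data, epochs=50, lr=0.1):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:              # labels y are -1 or +1
            if y * (w * x + b) <= 0:   # misclassified: nudge the model
                w += lr * y * x
                b += lr * y
    return w, b

def label(model, x):
    w, b = model
    return 1 if w * x + b > 0 else -1

clean = [(-2.0, -1), (-1.0, -1), (1.0, 1), (2.0, 1)]
eps = 0.6
# Adversarial copies: shift every point toward the opposing class.
attacked = [(x - y * eps, y) for x, y in clean]

naive = train_perceptron(clean)
robust = train_perceptron(clean + attacked)

probe = 1.0 - eps   # a "+1" example nudged toward the "-1" side
print(label(naive, probe))   # -1: the undefended model is fooled
print(label(robust, probe))  # +1: the hardened model holds up
```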

Similarly, as new technologies move to the fore, experts already need to begin thinking about what vulnerabilities they might contain. “The issue is whenever there’s something new and everybody goes, ‘Ooh, that’s cool,’ malicious actors say the same thing,” says Carrigan. As an example, he cites Silicon Valley’s growing enthusiasm for the so-called metaverse, and virtual and augmented reality interfaces in general. “Whatever the metaverse turns out to be, there will be scams,” he says. And just like with today’s cyberattacks, the stakes could potentially range from violations of personal privacy to actual threats to national security.

But perhaps the most fundamental issue for the cybersecurity experts is that no matter how sophisticated a piece of technology might be, it’s only as secure as the people who operate it. “The first kinetic action in 90% of the breaches we see is an email going ‘Hey, take a look at this’ or ‘log into this site,’ and then it’s all just credential harvesting or malicious attachments,” says Carrigan. “It’s a fairly standard list of first steps.”