5/09/2018 Nina A. Kollars

In the Cyber Trenches

This magazine article is part of Spring 2018 / Issue 91

How do independent cybersecurity researchers contribute to a better, stronger internet defense? 

Nina Kollars, assistant professor of government at F&M. Image credit: Eric Forberger

It isn’t an easy thing to convince people that national security is produced in the garages of computer hardware tinkerers, or on the desktops of curious coders. When you say national security and cyber defense, people tend to look upward toward traditional models of highly centralized, government-planned systems of security provision. But the bleeding edge of cybersecurity defense research is produced not by the NSA, DHS, Cyber Command or really any U.S. military or intelligence community organization, but by a decentralized global community of technically savvy independent researchers and security firms. This mixed crew of research firms, organizations and individuals is often referred to as “the community.”

The “white hat” hacker community is the front line of the U.S. national cyber defense system. Or at least, that’s what I am trying to determine in my work. For the most part, the world is obsessed with the “black hats” that seek to disrupt, steal or interrupt data flows. No one doubts that they constitute a fundamental danger to national security, both as proxies of states and as individual non-state actors. And yet, almost no one considers the other side of the phenomenon: the technical communities that work to protect users and their systems.

Consider that when the WannaCry ransomware created a global hospital crisis, locking emergency rooms and medical centers out of their systems, Marcus Hutchins (@MalwareTech) discovered the kill switch that stemmed the tide of lockouts. Months later, when NotPetya/Petya (which, like WannaCry, spread using the same underlying exploit) again threatened global systems, Amit Serper of Cybereason (@0xAmit) discovered and published the workaround. And most recently, few are aware that the Meltdown and Spectre vulnerabilities inherent in most computers in the United States were discovered by four independent groups of researchers within weeks of one another.

I’m attempting to show that much, if not most, of the U.S. front-line technical cybersecurity effort, the detection, prevention and mitigation of cyberattacks, is managed through the work of the community, whether as employees of technology firms such as Google and Microsoft, as members of underground anonymous hacker collectives, as part of organizations such as Citizen Lab, as employees of security research firms such as Mandiant, or as freelance independent researchers working in teams or solo. Many researchers, particularly those in the U.S., will move among these spaces, or be double- or triple-titled across all of them. What this means is that, at any given moment, there exists a global footprint of tens of thousands, if not hundreds of thousands, of security researchers hunting for vulnerabilities in new internet devices and software; following the trail of malicious actors through the dark net; notifying companies and organizations that their data has been breached; and conducting forensics on malware to reverse its capacity to do harm.

As with malicious hack-types, the incentives to hack vary widely, including prestige, money, curiosity or a sense of purpose. But unlike the malicious hack-types, most of these agents rely upon a public identity to be recognized for their contributions. That is, while most tend to value their privacy, they do have public identities. They appear in talks at the yearly hacker conventions, and invite others to download their code projects from GitHub to develop them more fully. You may not know their real names, but what’s in a name anyway? As long as the hacker’s pseudonym is distinctive, it is enough. Valuing privacy and being recognized are simultaneously possible in a world where much of the recognition and information sharing occurs between Twitter handles. What matters is that this community, by virtue of its loose structure and disaggregated research pattern, ends up conducting research across a much broader spectrum of cyber threats, more rapidly, and quite economically. Not only would this be incredibly expensive for any centralized government agency to do on its own, but any attempt to direct and organize this community could end up with suboptimal effects: too much emphasis in one area, not enough in another.

One of the most common questions I get about my work is whether there are dual-hatted hackers: white hats by day and black hats by night. I think that while this is likely occurring, it isn’t clear how common it is. The prestige of being a white hat comes from contributing good code, doing good research, and sharing it with everyone else. People who do this much work to gain recognition as a top-class hacker or hacker team stand to lose a great deal by playing both sides: in money, in the respect of their peers, and, probably most importantly, in future work.

Editor’s note: Nina Kollars is an assistant professor of government at F&M. In November, she delivered a cybersecurity lecture to former National Security Adviser H.R. McMaster and his staff in Washington, D.C.
