Interview with Dr. Arati Prabhakar

Dr. Arati Prabhakar is the former director of the Defense Advanced Research Projects Agency (DARPA), a branch of the US Department of Defense tasked with researching and developing new technologies for the American military. Dr. Prabhakar is an electrical engineer by training and has worked in the public and private sectors throughout her distinguished career. PPR spoke to her about emerging technologies and DARPA’s role in national security.

Conducted and transcribed by Michaela Palmer.

Penn Political Review: How does DARPA play a role in developing technology for the changing national security and defense needs of the US government?

Arati Prabhakar: We have a very specific role in the business of technology for national security. And that role – probably the easiest way to understand it is how DARPA started. In 1957, human beings put a satellite in orbit for the first time: Sputnik. You would have thought that was good news, but it turns out a lot of people were very concerned, because it was the Soviets doing it at the height of the Cold War. At that time, we already had the National Science Foundation and the National Institutes of Health, and we had research and development in the Army, the Navy, the Marine Corps, and the Air Force. But DARPA was created in the wake of Sputnik specifically to be a place whose day job was to prevent that kind of surprise. So that meant then, and it means today, that our job is specifically to focus on breakthrough technologies for national security, because we think the best way to prevent surprises is to create surprises. What we are always looking for are the things that can have really disruptive impact.

PPR: How threatened is the US today by the potential for cyber attacks? What role does DARPA have in developing technology to either prevent them or help us recover afterward?

AP: Cybersecurity is a very big issue that’s on everybody’s plate, whether you’re the Defense Department worried about military systems, any other part of government, or any company or university or even an individual. And I think we all have this sense that really what’s happening is… information and information systems are so valuable – that’s why we use our smartphones more and more, every enterprise uses more data, we’re putting out more and more sensors, and we’ve got the Internet of Things. And so we keep grabbing this technology because it’s so useful, but every time we grab more of it, it comes with more attack surface. It’s just more opportunity for people to cause mischief or for nation states to undertake an attack. So I think there’s very much the sense of this problem continually growing, and we’re chasing behind it and not really able to keep up with it. Lots of people are working on cybersecurity. Again, DARPA’s specific objective – and this is an area where we have some significant investments – is to create a future where we are ahead of that explosion of capability and vulnerability. So what does that mean? We have programs that are about taking whole classes of vulnerabilities off the table. We’re scaling formal methods – using deep math and computer science to build systems that are secure from the get-go, so they simply can’t be attacked. That helps enormously, but you can’t build everything you want that way. So the second part of our strategy is to build systems that can operate in the cyber domain and protect our systems and our data through the use of automated tools that can scale at the rate at which the exposure is growing.

We just did a really fun project called Cyber Grand Challenge. It was one of DARPA’s big prize challenges, which we do once in a while. This was a competition. We challenged people to build machines that could find vulnerabilities in code and automatically patch them with no humans touching anything.
So the way the competition worked was this: we built basically a cyber environment in a box, with fresh code that no one had ever seen, and the seven competitors turned on their machines. From then on they couldn’t touch anything – the machines actually had to examine the code, try to find vulnerabilities, and patch them. Now, they couldn’t do things at the high end of sophistication, but they did some things that were pretty hard for humans. Things that typically take human teams months or even years, they were able to do in fifteen minutes. It really starts changing our ability to grapple with cybersecurity when you see that kind of new technology.

PPR: What are your thoughts on artificial intelligence? Are you developing technology in the AI arena for the government?

AP: DARPA has had a very long history of funding the core technologies of artificial intelligence. If you use Siri, you’re using a product from Apple that traces back to a company called Siri, which spun out of SRI, a research institution where we were funding a project that included a small effort called “Siri.” Apple and all of those companies did a lot more work after our research, but the AI that we’re surrounded by today often has deep research roots in what DARPA has funded in the past. AI is in a period of great excitement and activity, and as consumers we see it when we go on Facebook and there’s a photo that we didn’t even know someone had taken and we’ve been automatically identified, or in the fact that we’re getting cars that are closer and closer now to being able to drive themselves. It’s showing up in micro-trading and finance. It’s showing up in lots of places.

We are doing two things with artificial intelligence today. One is to apply those technologies to national security problems. So, as an example, one of my program managers went to a meeting on human trafficking a few years ago, and he learned that in human trafficking it’s sometimes possible to buy a human being for $90. He was heartbroken and infuriated by that. He started talking to our colleagues in law enforcement and found that the only way they had to look at all of the online ads for human trafficking was sort of the way anyone searches: you use Google or whatever your favorite search engine is. And the problem is that those search engines only index a really small fraction of the web – the part that’s commercially valuable to index. So Chris White, our program manager, set out to build tools for law enforcement that would allow them to search the dark web and the deep web, and to do that in specific domains where they could look at the kinds of ads that people were posting but hiding from the conventional search engines.
The net of all of that is that those tools are now being used by District Attorneys’ offices around the country. They’ve been used in hundreds of investigations already. They’ve just started, but already they’ve had 27 indictments and several convictions. And I think it’s a great example of taking the tools of AI that aren’t used commercially or in our consumer lives, and applying them in a place where commercial tools weren’t going to solve the problem. There are many, many other examples… but I think it’s important to say that as powerful as AI is today, it’s still very, very limited. It’s statistically very good at identifying what’s in an image, for example, but when these systems make mistakes, they make mistakes that no human being would ever make. They lack common sense and context. And so we have new programs in areas like “explainable AI” – AI that would explain why it came to the weird conclusion it came to, so we can start doing something about it. We’re also looking at how you could embed AI within systems rather than always going out to the cloud. So I think there’s quite a big research agenda out ahead of us.

This interview has been condensed and contains minor edits for clarity and grammar.