Security Ledger Podcast: Security Automation Is (And Isn't) The Future Of InfoSec

Paul Roberts
October 9, 2019

Every so often, a technology comes along that seems to perfectly capture the zeitgeist: representing all that is both promising and troubling about the future.

In the 1960s, you think of plastic, which was a pillar of a massively expanding consumer culture in the United States that put “convenience” above all else. That’s the joke behind the now-famous “advice” given to Dustin Hoffman’s Benjamin Braddock in the 1967 movie “The Graduate” by the older Mr. McGuire: “I’ve got just one word for you, Benjamin… ‘plastics.’”

McGuire was on to something: the use of plastic did indeed mushroom in the decades that followed. Advances in the use of polymers revolutionized everything from food packaging to electronics, telecommunications and medicine. That’s undoubtedly been a benefit to billions of people on the planet. It has also made some smaller number of those people fantastically rich. But there is a downside to plastics and the throw-away culture they engendered, as we now know. Plastic trash now clogs our rivers and streams, and microplastics seep into our water and food and, borne on the winds, make their way to the earth’s most remote places.

At that same L.A. pool party in 2019, young Benjamin might instead be advised to look into “AI” – artificial intelligence. Like plastics in the 1960s, AI and machine learning are already big and getting bigger. Machine learning algorithms are already being used in transportation to ease road congestion, in healthcare to spot medical errors and improve patient care, and in retail to improve the customer shopping experience. The technology is poised to change just about everything else… at least eventually. By 2030, AI could deliver additional global economic output of $13 trillion per year, according to McKinsey Global Institute research.

One industry where there is plenty of speculation about the potential applications and benefits of machine learning and artificial intelligence is information security, where high demand and an acute shortage of talent have executives, entrepreneurs and industry analysts arguing that the adoption of machine learning and AI is unavoidable, especially if companies hope to stay on top of multiplying and fast-evolving cyber threats without breaking the bank.

But how exactly will artificial intelligence help bridge the information security skills gap? And even with the help of machine learning algorithms, what kinds of security work are still best left to humans?

For the latest Security Ledger Spotlight podcast, I sat down with David Brumley, Chief Executive Officer at ForAllSecure and a professor of Computer Science at Carnegie Mellon University. He’s on the cutting edge of security automation: in 2016, he and a team of students from CMU were victorious in DARPA’s Cyber Grand Challenge with Mayhem, a next-generation fuzzing solution.

In this interview, David and I talk about the potential and pitfalls of using machine learning and artificial intelligence in cyber security. We also talk about what’s driving the adoption of AI and machine learning technologies in the information security field. Namely: a chronic cyber security talent shortage globally and especially in North America, the EU and other advanced economies.

When it comes to what you can do today, Brumley shares: "Start thinking about what the best people do and how you can mimic that. It's not as hard as you think to leverage the bleeding-edge tools well-resourced organizations rely on for some of the largest, most popular third-party applications."

Listen here: https://securityledger.com/2019/09/spotlight-podcast-security-automation-is-and-isnt-the-future-of-infosec/

Transcript

Paul Roberts: This Security Ledger Spotlight podcast is sponsored by ForAllSecure. ForAllSecure was founded with the mission to make the world's critical software safe. The company's patented technology is the product of over a decade of research into solving the difficult challenge of making software safer. ForAllSecure has partnered with Fortune 1000 companies in aerospace, automotive, and high tech. Agencies within the US Department of Defense integrate ForAllSecure's next-generation fuzzing technology, Mayhem, into software development cycles for continuous security. Check them out at forallsecure.com.

This is a spotlight edition of the Security Ledger podcast and I'm Paul Roberts, editor in chief at the Security Ledger.

Speaker 1: I just want to say one word to you, just one word.

Speaker 2: Yes sir.

Speaker 1: Are you listening?

Speaker 2: Yes, I am.

Speaker 1: Plastics.

Speaker 2: Exactly how do you mean?

Speaker 1: There's a great future in plastics. Think about it. Will you think about it?

Speaker 2: Yes, I will.

Speaker 1: Shh. Enough said. That's a deal.

Paul Roberts: Plastics may have been a hot tip in 1967 when the movie The Graduate came out, but in 2019, young Benjamin might be advised to look into AI or artificial intelligence. By 2030, it's estimated artificial intelligence could deliver additional global economic output of some $13 trillion annually, according to research by the McKinsey Global Institute. The benefits of artificial intelligence are already upon us, and nowhere is that more evident than in the cybersecurity space, where high demand for services and an acute shortage of talent have executives, entrepreneurs and industry analysts predicting that artificial intelligence and machine learning technology will be critical to allowing companies to stay on top of fast-evolving cyber threats without breaking the bank.

How exactly will artificial intelligence help bridge the infosec skills gap, and what kinds of security work are still best left to humans? Our guest this week has a unique perspective to offer on those questions. David Brumley is the chief executive officer at the firm ForAllSecure and a professor of computer science at Carnegie Mellon University. In 2016, Professor Brumley and a team of students from CMU were victorious in DARPA's first-ever Cyber Grand Challenge, which pitted automated cyber defense technologies against one another. They won with Mayhem, an assisted-intelligence application security testing solution.

In this interview, David and I talk about the potential that artificial intelligence, machine learning and automation hold in the information security space, what's possible today and what may be possible in the future. We also talk about the pitfalls of using artificial intelligence in cyber security and about the best way to tackle the US's chronic cybersecurity talent shortage.

David Brumley: My name is David Brumley. I'm CEO of the company ForAllSecure. ForAllSecure does automated analysis to find unknown defects in applications. We look for exploitable vulnerabilities. So, when we started this company, our mission, and the set of products we're bringing to market, was to automatically check the world's software for exploitable vulnerabilities. The two key words for us are that we want things that are automatic, and that we look for exploitable vulnerabilities.

Paul Roberts: As we have often said or observed on Security Ledger podcast, every company these days is becoming or has already become a software company. What kinds of companies does ForAllSecure work with? Are these traditional software publishers or are you working with some of the many companies that maybe are developing physical devices that run software?

David Brumley: Yeah, we've had kind of an interesting path. We came out of a DARPA research project called the Cyber Grand Challenge. So, when we first came to market, our big customers were the US government and, kind of interestingly, we were testing already compiled software. So, we're testing on the test and evaluation side, where someone has already written the software and then later on it needs to be checked for vulnerabilities. Since then, we've expanded into the commercial sector, and we're working in aerospace along with high tech companies who really want to do a better job finding these vulnerabilities before attackers do.

Paul Roberts: The Cyber Grand Challenge, tell our audience a little bit about the origins and what that challenge is about.

David Brumley: The Cyber Grand Challenge was pretty cool. So in 2014, the Defense Advanced Research Projects Agency, DARPA, the people who really funded the original internet, said, "Can we make cyber fully autonomous?" What they meant by that is: given applications that you didn't write, can you write a system that automatically finds and proves vulnerabilities, and is able to self-heal? One of the unique things about how they did this is they judged it in a full-spectrum hacking contest. So, it wasn't about saying, "Okay, I think I found a bug and maybe this is a patch." It was about showing that you could actually beat adversaries and do these two steps faster than anyone else.

Paul Roberts: What are some of the types of tasks that maybe today human beings are being asked to do that would be better off shunted off to an automated system, a machine learning system for example, to perform?

David Brumley: One of the things that we've found is that humans are really bad at finding many types of common security vulnerabilities, especially in high-performance languages like C, Go or Rust, or anything that's going to get compiled down to an executable. Machines are really good at systematically testing those. One of the cool things you can do is just add more CPU power to do better testing. It's much cheaper than, for example, hiring an FTE. What we've seen in practice, for example, is Google has used automated fuzzing to find, I think, 12,000 new bugs in Chrome completely automatically. Of course, these are things that humans had missed previously.
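For readers who haven't seen what this kind of automated testing looks like, here is a minimal sketch of a libFuzzer-style harness in C, the same general style of tooling behind the Chrome fuzzing Brumley describes. The parse_message function is a hypothetical stand-in for your own code, not anything from Chrome.

```c
// Build (requires clang): clang -g -fsanitize=fuzzer,address harness.c -o harness
#include <stdint.h>
#include <stddef.h>

// Hypothetical stand-in for whatever code in your application
// consumes untrusted bytes.
static int parse_message(const uint8_t *data, size_t size) {
    return (size > 0 && data[0] == 'M') ? 1 : 0;
}

// libFuzzer calls this entry point with millions of mutated inputs;
// AddressSanitizer flags any memory error an input triggers.
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    parse_message(data, size);
    return 0;
}
```

Running ./harness then mutates inputs continuously, which is exactly the "add more CPU power to do better testing" trade-off he describes.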

Paul Roberts: These are bugs that had already made it through whatever QA process Chrome has, which presumably is pretty substantial.

David Brumley: Oh yeah. The Chrome team, I think there's 38 people just on the security team. So, these are smart, well-funded people, but it's just hard to do that sort of in-depth testing. Computers never get tired. They can be the proverbial monkey on the keyboard, typing 24/7, trying to find those problems and prove them.

Paul Roberts: When we talk about finding vulnerabilities in software, what types of activities are we really talking about? I mean, give us an example of how either a human or a machine learning system might go about locating a vulnerability in uncompiled code, or compiled code, I guess, for that matter.

David Brumley: I mean, there are really, historically, two different processes. So, if you had compiled code, like if you're just given software you bought, maybe it's part of your SOHO wireless router, really it would take significant reverse engineering expertise to even begin going down the path of finding exploitable vulnerabilities. So, that's a place where computers can really do a lot more than humans, because they can reason about the code as it actually is going to execute. Once code is compiled, that's a language for computers to execute. So, it makes sense that computers are the best at analyzing it.

When you look at when people have source code, people are like, "Well, why do we need computers there?" The reason is often people make mistakes in how they think about a program. For example, they may think, "Hey, the user's going to give me an input and it's only going to be as long as maybe a DNS record," but they never actually check that. Computers can find those side cases. They can help cover the human blind spots, I guess, is one way to put it.
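To make that DNS-record example concrete, here is a hedged sketch in C of the kind of unchecked assumption Brumley describes. The function names and sizes are hypothetical; the buggy version encodes the developer's assumption, while the checked version validates it.

```c
#include <string.h>

#define MAX_NAME 253   // maximum length of a DNS name

// BUG: encodes the assumption "input fits in a DNS-sized name" without
// ever checking it. A fuzzer feeding a longer string overflows buf,
// which AddressSanitizer reports immediately.
void save_hostname(char *dst, const char *user_input) {
    char buf[MAX_NAME + 1];
    strcpy(buf, user_input);   // unchecked copy: the blind spot
    strcpy(dst, buf);
}

// Fixed: validate the assumption instead of trusting it.
// (strnlen is POSIX; it is declared in string.h on most platforms.)
int save_hostname_checked(char *dst, size_t dst_len, const char *user_input) {
    size_t n = strnlen(user_input, MAX_NAME + 1);
    if (n > MAX_NAME || n >= dst_len)
        return -1;                     // reject oversized input
    memcpy(dst, user_input, n + 1);    // copy including the trailing NUL
    return 0;
}
```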

Paul Roberts: Assumptions, I guess, that developers make that users are going to use their software as designed rather than look for mistakes or vulnerabilities in it. I mean that seems to be a major obstacle even today.

David Brumley: It's a huge obstacle. I want to talk a second about the software supply chain problem. So, we've seen a pretty recurring problem. You'll write an application and maybe it parses XML or JSON. So, you'll go find open source that parses XML or JSON and you'll build your application on top of that. Even if you audit your own software, you're inheriting all these software vulnerabilities from the supply chain. So, you need to start looking at techniques that help you check that supply chain. Because at the end of the day, the user, the hacker, they don't care whether it was your software or the open source software or something from a third party. They just see it as your app and it's a huge blind spot. Developers often forget to check that.

Paul Roberts: Yeah. I mean there are companies out there that do assessments of open source, right? Black Duck and Synopsys and companies like that. I mean, is it adequate to use their services or is there more that needs to be done?

David Brumley: Well, I think it's still a growing field. If you look at this idea of, I think we call it software component analysis, SCA, it's a rather new field. The general idea for SCA tools is, "Hey, I know this piece of software is vulnerable, and it's in open source." They'll try to identify whether or not it's part of your build. If you're technical, they'll try to do strings and say, "Hey, the library version that we know is vulnerable is present on your system." I think that's a good start. Really what that's doing is saying, of all the known vulnerabilities out there in open source, we can check whether or not you're using those known vulnerable components. But we have to go beyond that, because open source isn't deeply checked.

I mean, this idea that many eyes find all vulnerabilities was a great theory, but it hasn't proven itself out in practice. That's where techniques like fuzzing come in. They don't just assume the open source community knows of all the bugs in a component, so that it's sufficient just to check for known bugs. If we use an XML library, just using the current version of that library isn't enough. There's still probably a whole host of problems that are latent in it, or it could just be that how you're using it in your application is unintended: perfectly fine for you, but it creates new security vulnerabilities you need to check. So, I say it's a blind spot because developers often think, it wasn't developed here so we don't have to worry about it, or we'll just run SCA. That's really just saying, "Well, I'm sure someone else has checked."
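As a toy illustration of what "doing strings" means here: libraries commonly embed a version banner in the compiled binary, and binary SCA tools key on exactly those literals. The library name and version below are made up.

```c
#include <stdio.h>

// String literals survive compilation, so a command like
//   strings ./app | grep jsonlib
// would reveal that this (hypothetically vulnerable) component
// and version are baked into your build.
static const char jsonlib_version[] = "jsonlib/1.4.2";

int main(void) {
    printf("built against %s\n", jsonlib_version);
    return 0;
}
```

That matching only catches known-vulnerable versions, which is exactly why Brumley argues fuzzing is needed on top of SCA.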

Paul Roberts: We talked about this before, too: the mental assumption, I think, that people make, especially if it's a widely used component, that you're kind of extra safe because so many people use it. Somewhere along the line, somebody must have audited this code. It's not me, but I'm assuming, 100,000 downloads, somebody's audited it. In fact, I think we see that that's not the case. Everybody's kind of pointing fingers at everybody else. Often these widely used components, as you said, open source components, libraries and so on, might have even glaring vulnerabilities that nobody's spotted.

David Brumley: Yeah. I mean, I think if you think about it in terms of incentives, those open source developers are often unpaid. They're doing it as a side project. Maybe in the best case, like Apache, there's a foundation, but they're not staffed to go and do rigorous security audits. So, it could be popular, it could be solving an important problem, but that doesn't mean that you should assume, for your use case, that it's secure enough. I mean, it's really on you, if you're shipping software, to make sure everything you do is secure.

I said I ran a hacking team, and one of the things you do, if you're going to go look for a new vulnerability, is actually just look through the open source components and try to figure out what hasn't been audited. If you look at, like, the Tesla hack, it was interesting. The way they hacked it was they found a vulnerability in Chrome. Now of course, Tesla doesn't write Chrome, but that's just an example of them using open source. They said it was validated enough, but it certainly wasn't enough for automotive use.

Paul Roberts: As you said, sometimes behind these widely used components there might be only one or two individuals, who obviously are more than happy to have help, and maybe not looking too closely at what that person who's helping them is doing.

David Brumley: It's amazing, right? There's something that's just kind of out there. Anyone can contribute, and then you're going to trust it. We even see this in proprietary software. We did an audit of wireless routers that you can buy from Amazon. A huge number of them contained essentially back doors. I mean, we can call them field access if you want, but I mean, come on. If you're buying a router from Amazon, why should the company who made it be able to log into your router?

Paul Roberts: Distinction without a difference, as I say, yeah. I mean, you know.

David Brumley: We literally found a device that was used in safety-critical systems that had a program called BKdoor, back door, on the system.

Paul Roberts: Just in case you were confused about what the unusually named function was for. BKdoor, there we go.

David Brumley: Oh, no. Yeah. That's for field maintenance. I'm pretty sure none of the users expected that.

Paul Roberts: One of the challenges is that, just as the risk problem bites, and the stakes get higher and more people are paying attention to application security, there's also tremendous pressure to rapidly iterate software programs and applications, and that prioritizes speed and kind of getting code out there. Is it possible to reconcile that with the types of things that you're talking about, and if so, how?

David Brumley: So, some people use us that way. Our product Mayhem was designed to check compiled software, so this allows the end user to check the security of the software they use, which I just felt was a fundamental primitive we didn't have. Things like SCA and static analysis, those are for the developer to check, and that's great, but the end user should be able to check too. Nonetheless, that's at the end of the process. What we're seeing with the rise of DevSecOps, I really think it's a transformational technology, or transformational idiom. We're saying it's not enough to check security at the end. It has to be integrated into your dev cycle. Just like any new process, there are things you can do to make your life easier.

If you just take the same old way you've done stuff and say, "Okay, we're going to add security and shift left," that's not enough. You have to say, "What are the new processes, and where can we add them to our pipeline to find these things as early as possible?" We're starting to see a shift towards that. Of course, it's not happening as quickly as any of us would like, but there are companies doing it.

Paul Roberts: We talk about autonomous security. What, in your mind, is the proper balance between automation and human analysts? Where's the handoff? What are things that, at least at this point in time, you're much better off having humans look at, and what are things that you might save money on and be better off having the computers and machine learning algorithms take care of?

David Brumley: That's a good question. So first, I think in a lot of practices, the human is the weak point, especially when you look at how software is deployed. It can be days, weeks, months, years before software that has a fix gets deployed, and we have to reduce that time. So, that's a place for autonomy. If it passes your regression tests, you should be able to field it. That's the dream your organization should be shooting for. So, I think that's one of the properties: make sure that you can automate to the point that if your automation says it's a pass, you can actually field it. I think the role of the human is to architect the system to make it easy to check.

So, I'll use Google Chrome as an example, just because we didn't write Google Chrome, so it's a great third-party example. Google has spent a lot of time building sandboxes into their web browser. So, these are like little safety pits for the thing playing MP3s, and the thing that's playing videos, and the thing that's doing audio processing. The reason they do that is not just for security; it also just makes the software easier to test. So to answer your question, the human had to set up the architecture. They said, "Okay, this is a chunk, it's going to be one component and it's going to be testable. This is another chunk. It's testable. I can put them together and that's testable." You need the human to design those systems, so that the computer can take over from there.
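Here is a small sketch of what that "testable chunk" design can look like in code, assuming the risky logic is a parser: the component takes bytes in and gives a result out, with no I/O, so a fuzzer or unit test can drive it directly. All names are hypothetical.

```c
#include <stdint.h>
#include <stddef.h>

// Pure component: bytes in, result out. No files, sockets, or global
// state, so a fuzz harness or unit test can drive it directly, 24/7.
int decode_frame(const uint8_t *data, size_t size, uint16_t *out_value) {
    if (size < 2) return -1;   // reject truncated input
    *out_value = (uint16_t)((data[0] << 8) | data[1]);
    return 0;
}

// The untested code shrinks to trivial glue like this, where the
// bytes would really come from a network socket or a file.
int main(void) {
    const uint8_t frame[] = { 0x01, 0x02 };
    uint16_t value;
    return decode_frame(frame, sizeof frame, &value);
}
```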

Paul Roberts: You talked about the tremendous resources that companies like Google or Facebook or Microsoft or Apple are able to throw at big security problems. Obviously, most companies don't have those resources. One question would be, are we already seeing sort of a security poverty line, where security becomes something for big, wealthy companies, but for everybody else, all the other software publishers out there, it's sort of beyond their reach because, as you said, of the scarcity of talent, the cost of that talent and, obviously, the demands of the marketplace? I guess, is there a way to get around that, to square that triangle?

David Brumley: Oh, it's interesting that you put it as a security poverty line. I hadn't heard that, but that's exactly what we're experiencing. So, the long-term fix, right, is we need to get more people interested in computer security as a field. There are just too few people coming out, which makes those coming out such highly sought-after people that it's really hard to compete with a Google job offer if you're a smaller company and you're not willing to pay 300K a year and all those benefits. So, I think we are seeing that poverty line. I think the common wisdom is always that you need to work smarter. The way that you do that is you can look at: what are those companies doing that's automated? How do I switch my processes so that I can take advantage of those even though I'm not big?

So, Google has, I think, 24,000 CPU cores that are trying to do fuzz testing for Google Chrome. Now, most people don't have 24,000 cores, but there are services and products like ours, or there are open source utilities like AFL where you can set it up yourself. Even just putting 10 CPUs on automatically checking your target, you're going to find a lot of problems that you didn't know about before. I mean, I don't want to distract us from the fact that we have to fix the larger problem: computer security, I'm just going to be direct, is not a known field to the high school student. They don't come to university, by and large, saying, "I want to be a computer security expert," even though it's highly paid, with tons of jobs and great career paths. We need to fix that problem. I don't want to belittle that at all.
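For the do-it-yourself AFL route Brumley mentions, a minimal setup looks roughly like the sketch below. Exact compiler wrapper names differ between AFL and AFL++ installs, so treat the build commands as assumptions to check against your tooling.

```c
// target.c -- a stdin-reading target for AFL. Build and run, roughly:
//   afl-gcc -o target target.c          # afl-cc / afl-clang-fast in AFL++
//   mkdir seeds && echo hello > seeds/a # at least one seed input
//   afl-fuzz -i seeds -o findings -- ./target
// For "10 CPUs": run one -M main instance plus several -S secondaries.
#include <stdio.h>

int main(void) {
    char buf[256];
    // afl-fuzz feeds mutated input on stdin by default.
    if (!fgets(buf, sizeof buf, stdin)) return 0;
    // ... call the code you want tested with buf here ...
    return 0;
}
```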

Things like hacking contests, engaging with open source, engaging with high school students are the ways to do that. When it comes to what you can do today, it's taking those automated processes and saying, how can we incorporate those? The hard part for companies is you have to be willing to change. You can't just say, "I don't want to change anything, but I want security tomorrow at the same scale as Google." You're never going to win that game.

Paul Roberts: It's really interesting. I mean, you're an educator, obviously, so I know you see this firsthand. As I look at it, it's not even that high school kids aren't thinking about cybersecurity. I don't see many kids thinking about software development. I mean, I know those majors are popular in colleges, but if you were somebody who was interested in something like software development, you really had to seek it out, basically outside of the K through 12 system.

David Brumley: Yeah. I think that you have to start at the high school level. There are a couple of barriers. So, one of the things we do at CMU is we run a high school hacking contest called picoCTF. That's open to everyone. It's actually going to run in October this year. Last year I think we had like 80,000 US high school students play. I found a couple of things.

Paul Roberts: Wow, that's great.

David Brumley: So, one thing is a lot of computer security in the marketplace is about fear, uncertainty and doubt. It's about danger, and how you can break into things. It confuses criminal with hacker. Hacker should be something we aspire to. You shouldn't equate it with criminal. But I think when we run these contests, what we've found is that framing actually puts a lot of students off as well. It's like, "My interest isn't how to break into things." When you start rephrasing things and say computer security is about building trust in the things that we use every day, so that people can trust them, you're actually helping people. Good software development is the same way. You can actually reach a larger audience.

I think some of the things the US needs to do is actually be a little bit more serious about it. There's a lot of talk put into it, but not a lot of action. Remember, we run this high school hacking contest as a volunteer effort: very little funding for it, pretty large participation. We started getting letters from various states saying, "Hey, you have to sign all these agreements with us because our students are using it." Every state had a different process.

While I admire the overall idea, that they care deeply about their students' privacy and what they're looking at online, it makes it really hard to build something that touches many lives. If you look at places like Russia, it's just a gladiator sport there, right? Whoever wins the big gladiator contest is the best. In the US, we're much more about human choice and about freedom. So, we need to spend more time encouraging it and giving people opportunities, because it's not ever going to be mandated, nor should it be mandated. We actually have to put our money where our mouth is on this.

Paul Roberts: Let's have a TV show or a Netflix show about somebody who does cybersecurity or software application development who isn't wearing a hoodie and isn't a misanthrope.

David Brumley: Absolutely. You know what? This TV show exists in China, where there's the movie star with great looks, the girl, but she's a hacker playing capture-the-flag contests. Why don't we have that here?

Paul Roberts: We have Mr. Robot, which is an amazing show, but you could be forgiven if you saw it and said, "I'm not sure that's the community for me."

David Brumley: Yeah. We should be raising these people up, like a good hacker, like the people in Pwn2Own, who are helping us make things we care about, like our cars, better.

Paul Roberts: Exactly. Exactly. So, emphasizing pro-social rather than antisocial tendencies, because obviously there are many more pro-social than antisocial people out there.

David Brumley: Yeah, I totally agree. So, we're big advocates. Computer security is about fostering trust and increasing trust and actually helping people. It's about helping the person who's not a computer science or engineering major, who wants to write their English paper or wants to do research on dinosaurs, making sure they can trust their devices, the airplanes they fly in. I think if we start rephrasing it that way, we'll attract a bigger audience. I think those computer security tools really need to start focusing more on that message, too. I mean, in industry, we need to do our part and not just go on about the FUD, the fear, uncertainty, doubt.

Paul Roberts: So, for folks who are out there listening to the Security Ledger podcast, maybe they're working in technology, maybe they're working for a company that is making some software-driven thing. They're probably worried as heck about their software supply chain risk. Where do they start? How do they even start to get their arms around this very big problem?

David Brumley: Well, I think there are different stages for everyone. So, I'll give a couple of pieces of advice.

Paul Roberts: Denial is the first stage, right?

David Brumley: Denial is the first stage. The second is wishing: why doesn't someone else solve it? I hope there's, like, a black box I can just go buy, and it works.

Paul Roberts: The third stage is outsourcing.

David Brumley: Third stage is outsourcing. That doesn't work so well. I'm a big believer that you've got to look at pairing tools with processes. So at ForAllSecure, we have tools that help companies automate the same sorts of things Google and Microsoft do. We think they're more technically advanced; we won the Cyber Grand Challenge, DARPA deemed us best. Our website is F-O-R-A-L-L-S-E-C-U-R-E.com. So, I think if you're in business, that's a great way to get started. Just talk to us, get a different perspective. If what you're also trying to do is grow the community, and I think some of your listeners are, encourage people to play in these hacking contests, like picoctf.com. You can learn a lot. We have a large number of US high school students play, like I said. Or create your own. We by no means think that we should be the authoritative source on that.

So, that's two answers there. One is start looking at products, and don't just say, "Okay, I'm going to go look at what everyone else is buying." Start thinking about, "Well, what do the best people do and how do I mimic that?" It's not as hard as you think to get those sorts of tools, and we offer them. The second is start participating in the community and growing it. I think that we're really at a transition point in the US as far as software development goes. So, I really look forward to hearing from people what they think their problems are, so that we can better address them. I've talked about some here, but I think it's good to have that dialogue. So anyone, feel free to reach out to me. It's just dbrumley@forallsecure.com.

Paul Roberts: David Brumley of ForAllSecure. Thanks so much for coming on and speaking to us on Security Ledger podcast.

David Brumley: Oh, thanks for having me. Have a great day.

Paul Roberts: David Brumley is the chief executive officer and co-founder at ForAllSecure. You've been listening to a spotlight edition of the Security Ledger podcast, sponsored by ForAllSecure. ForAllSecure was founded with the mission to make the world's critical software safe. The company's patented technology is the product of over a decade of research into solving the difficult challenge of making software safer. ForAllSecure has partnered with Fortune 1000 companies in aerospace, automotive, and high tech. Agencies within the US Department of Defense integrate ForAllSecure's next-generation fuzzing technology, Mayhem, into software development cycles for continuous security. Check them out at forallsecure.com.

Originally published at The Security Ledger
