FuzzCon TV Tackles Federal Fuzz Testing

Robert Vamosi
July 7, 2020

Following a successful FuzzCon event held in person at RSAC in San Francisco earlier this year, ForAllSecure is continuing the discussion with a series of follow-up online sessions called FuzzConTV (formerly A Fuzzing Affair). The first episode, held in late February, was designed as an introduction to fuzzing. It was hosted by Chelsea Mastilak, Corporate & Field Marketing Manager at ForAllSecure. Guests included Chris Clark, Sr. Manager of Embedded Ecosystems at Synopsys; Dr. Jared DeMott, CEO & Founder of VDA Labs; and Billy Rios, Founder of Whitescope, LLC. A recorded version of this first episode can be found here.

Topics suggested by the audience throughout the hour-long discussion included maturity models (as in when to bring in fuzzing), the strategy behind which fuzzing technique to use, and whether to pair fuzzing with other software testing tools.

Any software maturity discussion starts with whether or not a software development life cycle is in place, what testing needs to be done, and when. “A lot of times we tell our customers we wish that they had known about robust fuzzing and fuzz testing earlier in the life cycle,” Billy Rios said. “They're building these products and things like that and they shouldn't wait for, you know, a third party consultant to come in, after everything's done.” But it might not occur to less mature organizations to include fuzzing earlier in the process. When Rios was at Microsoft and then Google, he said, fuzzing was part of quality assurance in the development lifecycle. Robust fuzzing should occur throughout the entire development lifecycle, he said, so that bugs and vulnerabilities are discovered while developers still have an opportunity to fix them before the final release. “I think from a lifecycle standpoint you can probably implement fuzzing at any point, but it's probably a better idea to do it sooner rather than later,” he said.
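To make that point concrete, here is one possible sketch of what "fuzzing as part of the development lifecycle" can look like in code: a libFuzzer-style harness that exercises a parsing routine the same way a unit test would, so it can run in the same CI pipeline. The function parse_message is a hypothetical stand-in for whatever routine in your product accepts untrusted input.

```c
// Minimal libFuzzer harness (a sketch; parse_message is a hypothetical
// function standing in for the routine under test).
// Build: clang -g -fsanitize=fuzzer,address harness.c parser.c -o harness
#include <stddef.h>
#include <stdint.h>

int parse_message(const uint8_t *data, size_t len);  // code under test (assumed)

// libFuzzer calls this entry point with mutated inputs, millions of times.
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
  parse_message(data, size);  // crashes and sanitizer reports surface here
  return 0;                   // non-crashing inputs simply return
}
```

Built with Clang's -fsanitize=fuzzer,address, a harness like this can run alongside the unit tests, so crashes surface during development rather than in a third-party assessment after release.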

Jared DeMott agreed, and suggested that organizations first “adopt kind of a crawl, walk, run” mentality with their software testing. He suggested that organizations perform a risk assessment to determine which attack surfaces and which threats are the priority, then start with whatever tools are easiest to apply and work up to harder techniques as they mature through that crawl, walk, run. Fuzzing, he said, is one of the more complex techniques.

Chris Clark agreed, but said he didn’t want anybody to think, “Oh, well, I have to have specialized infrastructure just for fuzz testing. That's not necessarily the case.” A lot of the monitoring and infrastructure set up for fuzz testing, he said, will also benefit other areas of testing. “Especially,” he said, “if you're going to be doing unit penetration testing. So, really, fuzzing is an additive component. It helps you build your overall security posture for testing.”

One question from the audience asked which fuzzing tool should be used and when. “There's some strategy behind the type of fuzzing you do and how you do it,” Rios said. “And, so, there really is no one-size-fits-all answer.” An organization needs to ask: “What does the attack surface look like? What's your most risky attack surface?” It also needs to ask where fuzzing is likely to be more effective than other mechanisms at detecting vulnerabilities. “All of that kind of has to come into play.”

Clark gave the example of an organization that's manufacturing a software component that goes into another system. “So it doesn't make sense to do protocol testing,” he said. “You might want to do something like AFL or some other form of fuzzing. You have to look at what you are providing, what's the end goal, and ask what other components the consumer of your product is requiring.” It's like developing a software model, he said: you need to ask what the requirements for testing are.
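As a rough illustration of the file-based fuzzing Clark alludes to with AFL, the sketch below is a small driver that reads one input file and hands it to the component under test. Here decode_record is a hypothetical placeholder for the component's entry point, and the build and run commands in the comments assume AFL++'s compiler wrapper.

```c
// Sketch of a file-based target for AFL/AFL++ (decode_record is hypothetical).
// Build and fuzz:
//   afl-clang-fast -g target.c decoder.c -o target
//   afl-fuzz -i seeds/ -o findings/ -- ./target @@
#include <stdio.h>
#include <stdlib.h>

int decode_record(const unsigned char *buf, size_t len);  // code under test (assumed)

int main(int argc, char **argv) {
  if (argc < 2) return 1;
  FILE *f = fopen(argv[1], "rb");          // AFL substitutes the mutated file for @@
  if (!f) return 1;
  fseek(f, 0, SEEK_END);
  long n = ftell(f);
  rewind(f);
  unsigned char *buf = malloc(n > 0 ? (size_t)n : 1);
  if (!buf) { fclose(f); return 1; }
  size_t len = fread(buf, 1, (size_t)n, f);
  fclose(f);
  decode_record(buf, len);                 // hand the raw bytes to the component
  free(buf);
  return 0;
}
```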

Clark continued. “Part of this is a contextual discussion. If you're a developer, and you're very well versed in C, why would I put a fuzzer in front of you that focuses on a protocol interface when you don't have a clear view into how that protocol works? The fuzzing can affect that protocol and the applications if you don't have proper instrumentation. Whereas something like [ForAllSecure's] Mayhem would be right up your alley because it's right in line with that software development process. The outputs and the inputs are things that you can control and you can understand.”

DeMott added that a lot of times customers focus only on the tools. “When we have conversations about fuzzing with people, what they actually focus on is the tools that are actually doing the fuzzing, which is essentially raw-form input. And that's not all that fuzzing encompasses.” Once that tool does something that causes a crash or finds a vulnerability, he said, there's a whole trail of activities that has to happen next, depending on what you want to come out of it. “If you're an assessor, maybe you have to determine whether or not that particular crash is actually exploitable, and could be reached by remote means. As a developer, maybe what that means is that you have to be able to identify the specific line of code that's actually triggering this crash or this fault. And if you're a tester, it might mean being able to replicate that specific test case that caused that specific crash in a way that other people can actually reproduce what you've discovered.”
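For the developer and tester roles DeMott describes, that follow-on work usually means replaying the saved crashing input deterministically. One possible way to do that, assuming the libFuzzer-style harness sketched earlier, is a small replay driver like the one below; the crash-file name in the comments is a placeholder, not an actual artifact from the discussion.

```c
// Replay driver (a sketch): feeds one saved crash file back through the same
// entry point the fuzzer used, so the developer can rerun it under ASan or a
// debugger and read the exact file and line from the report.
// Build: clang -g -fsanitize=address replay.c harness.c parser.c -o replay
// Run:   ./replay crash-1234        (crash-1234 is a placeholder file name)
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size);  // same harness entry point

int main(int argc, char **argv) {
  if (argc < 2) { fprintf(stderr, "usage: %s <crash-file>\n", argv[0]); return 1; }
  FILE *f = fopen(argv[1], "rb");
  if (!f) { perror("fopen"); return 1; }
  fseek(f, 0, SEEK_END);
  long n = ftell(f);
  rewind(f);
  uint8_t *buf = malloc(n > 0 ? (size_t)n : 1);
  if (!buf) { fclose(f); return 1; }
  size_t len = fread(buf, 1, (size_t)n, f);
  fclose(f);
  LLVMFuzzerTestOneInput(buf, len);  // deterministic re-run of the crashing input
  free(buf);
  return 0;
}
```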

This discussion led to an audience question: what do you think of fuzzing combined with symbolic execution and AddressSanitizer?

Rios said the person who asked the question “understands essentially the triage piece of it, as well as getting the most out of their analysis quickly. There are things that you can do to essentially augment identification of those serious vulnerabilities. You have to address first things that are exploitable, things that are remotely exploitable, things that can be triggered very easily.” Knowing those types of things is super important, he said. “We may say it's a very poorly coded application, so pretty much any input we throw at it results in some exploitable condition. That's one case that we see quite often. And then, in other cases, we see this is actually a robust application. So, we don't want to waste our time doing triage on things that are not important to us. We'll augment our analysis or triage with tools like AddressSanitizer and other tools to help us triage crashes and file writes and file reads and things like that. It'll quickly get us to those things that matter.”
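A small, contrived example of why AddressSanitizer helps that triage: the off-by-one write below may not crash at all in a normal build, but an ASan build aborts at the faulting line and labels the bug as a heap-buffer-overflow write, which is exactly the write-versus-read, exploitable-versus-benign signal Rios describes. This is a sketch for illustration only.

```c
// A contrived heap overflow. Built normally, the stray byte may go unnoticed;
// built with AddressSanitizer it aborts here with a heap-buffer-overflow
// report identifying the write, the faulting line, and the allocation site.
// Build: clang -g -fsanitize=address overflow.c -o overflow
#include <stdlib.h>
#include <string.h>

int main(void) {
  char *buf = malloc(16);
  if (!buf) return 1;
  memset(buf, 'A', 17);  // writes one byte past the 16-byte allocation
  free(buf);
  return 0;
}
```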

Clark said we also have to look at address randomization and all these other techniques as well. “They definitely should be implemented. They should be available, but, if it's possible to disable those functions, we actually prefer to turn those off so that we can perform testing, and really evaluate the code -- what that code is capable of doing -- so we're not masking some other fault condition because it's been overwritten or been restarted.... We want to disable those mitigation techniques early so we can perform that testing as part of that maturity model, then we start adding those capabilities back in to see if our testing discovers anything else.”
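One possible way to do what Clark describes on Linux is to wrap the target in a launcher that turns off address-space layout randomization for that process before exec'ing it, so faults land at stable addresses and are easier to compare across runs. The sketch below uses the Linux personality() call; ./target is a placeholder for the binary under test.

```c
// ASLR-off launcher (a sketch, Linux-specific).
// Build: clang -g noaslr.c -o noaslr
// Run:   ./noaslr ./target input-file      (./target is a placeholder)
#include <stdio.h>
#include <sys/personality.h>
#include <unistd.h>

int main(int argc, char **argv) {
  if (argc < 2) {
    fprintf(stderr, "usage: %s <program> [args...]\n", argv[0]);
    return 1;
  }
  // Query the current execution persona, then add the no-randomize flag to it.
  int persona = personality(0xffffffffUL);
  if (persona == -1 || personality((unsigned long)persona | ADDR_NO_RANDOMIZE) == -1)
    perror("personality");
  execvp(argv[1], &argv[1]);  // replace this process with the target, ASLR disabled
  perror("execvp");
  return 1;
}
```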

DeMott agreed, comparing the pairing of fuzz testing with other techniques to pairing cheese and wine. “You are always pairing, so it's always a good idea to figure out where one technique fits. Fuzzing is a sliver of what the world can do with security testing, so how can we enhance that? There's been other research about taking the inputs of fuzzing as it's ongoing and putting them into a disassembler to look at what's actually going on: where it's at right now, and then the code progression. There's all kinds of clever things that you could pair this technique with, or pair with more developer-focused activity. I think it's a great way to be thinking about how to build this thing out.”
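As one concrete example of that kind of pairing, Clang's SanitizerCoverage instrumentation (-fsanitize-coverage=trace-pc-guard) lets you log every newly reached edge while the fuzzer runs; the program counters it emits can then be fed to a disassembler or coverage viewer to watch the code progression DeMott mentions. The callbacks below are a sketch of that hook, not anything discussed on the panel.

```c
// SanitizerCoverage callbacks (a sketch). Compile the target with
//   clang -g -fsanitize-coverage=trace-pc-guard target.c cov.c -o target
// and these hooks log each edge the first time the fuzzer reaches it.
#include <stdint.h>
#include <stdio.h>

// Runs once per instrumented module at startup; numbers each edge guard.
void __sanitizer_cov_trace_pc_guard_init(uint32_t *start, uint32_t *stop) {
  static uint32_t n;
  if (start == stop || *start) return;      // already initialized
  for (uint32_t *g = start; g < stop; g++) *g = ++n;
}

// Runs on every instrumented edge; reports each edge only the first time.
void __sanitizer_cov_trace_pc_guard(uint32_t *guard) {
  if (!*guard) return;
  fprintf(stderr, "new edge %u at pc %p\n", *guard, __builtin_return_address(0));
  *guard = 0;                               // suppress repeat reports
}
```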

FuzzConTV airs quarterly. A second episode of FuzzConTV, on federal use cases for fuzzing, is available here.
