Presentation: Addressing Security Regression By Unit Testing
What You’ll Learn
- Understand the security regression problem space and how you can augment your practices to improve security posture.
- Gain knowledge that enables developers to write secure software more easily and with less effort.
- Learn techniques, such as dynamic unit test generation, that can be quickly implemented to provide security guarantees around future software.
Abstract
Regression in codebases is a significant problem, and a proportionally significant amount of effort has already been spent addressing it. Regression is a similarly large problem in the realm of security, yet de-facto standards and approaches for addressing the issue remain absent. Even when security programs have the proper staff, tooling, and budgets, they commonly struggle to ensure that security holes remain fixed after they are initially patched. This talk will explore the application of a regression solution commonly employed in software development, unit testing, to fighting security regression. We will cover unit testing solutions that are integrated into the tested codebases as well as solutions that can test deployed codebases from a blackbox standpoint. Our talk will be aided by the release of an open-source software project built specifically to demonstrate how these practices can be employed in real-world scenarios, with re-usable core testing functionality that can be integrated into existing Python projects. Through this talk we hope the audience will leave with an understanding of the role that regression plays in security and how unit testing can be used as a tool to address security regression in both in-house codebases and untrusted third-party software.
QCon: You mentioned you've been doing software full time for the past 20 months or so. What's driven that?
Christopher: The majority of my experience has been in the penetration testing world. It has shown me that one of the big problems in the security space is that people really don't know what they have: what's on their network, their devices, and so on. It's really difficult to secure things you have no visibility into. How do you strategize around that lack of visibility? The main purpose of the software I've been working on is to automate the process of gathering all the information that is relevant to enterprise attack surface.
QCon: Is this an agent that's running on your network, or something different—what does it look like?
Christopher: It's entirely blackbox and entirely unprivileged. It's currently running from a deployment on a SaaS platform, but it can also be packaged up and run on an appliance internally, and it can perform a number of functions. Think of it as your own little Google spider for all of your network inventory.
QCon: So what's the goal for your talk?
Christopher: We have a problem in the security space similar to regression in software: just because a vulnerability is fixed once doesn't mean it remains fixed. This is one of the reasons I've built this platform: to identify enterprise attack surface and monitor it as it changes, and then to use unit testing to give guarantees around security regression.
QCon: How is your talk structured?
Christopher: Security regression is a problem. It's not always easy to write secure software; it takes a significant amount of effort even for folks who work in security full time. In my talk, I'll cover the problem space and then how I address the problem within web applications. We'll look at some unit tests I've written, showing how you can use dynamic test generation to make this testing work well with a minimal amount of code. It will be in Python, working with the Django framework. We'll look through all of that and then do a release of an open-source software package that contains all the code used within the talk.
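As a minimal sketch of what that dynamic test generation can look like with Django's test client (the endpoint list, class, and helper names here are illustrative assumptions, not the speaker's released package), one parameterized check can be expanded into a test method per endpoint so new routes are covered without new test code:

```python
from django.test import TestCase

# Hypothetical list of endpoints that must never be served to anonymous users.
PROTECTED_PATHS = ["/admin/reports/", "/api/internal/users/"]

def _make_auth_regression_test(path):
    def test(self):
        response = self.client.get(path)
        # A regression shows up here as a 200 instead of a redirect or denial.
        self.assertIn(response.status_code, (301, 302, 401, 403),
                      f"{path} is reachable without authentication")
    return test

class AuthRegressionTests(TestCase):
    pass

# Attach one generated test method per protected path; the test runner
# discovers them like any hand-written test.
for _path in PROTECTED_PATHS:
    _name = "test_requires_auth_" + _path.strip("/").replace("/", "_")
    setattr(AuthRegressionTests, _name, _make_auth_regression_test(_path))
```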
QCon: What do you want someone who comes to your talk to leave with?
Christopher: Nobody wants to write vulnerable software. Mostly it comes down to two things: not having enough contextual knowledge, or simply not having the time. I want to address both of these issues and show approaches that take very little effort but provide significant security improvements to codebases. Attendees can take away actionable advice for improving the security posture of their code that they can implement in their work now with relative ease.
QCon: There are dependencies that aren't necessarily within your code that cause vulnerabilities. How do you capture the full integration test of all these dependencies?
Christopher: This isn't necessarily something that's going to find vulnerabilities for you ahead of time. But let's say we're writing some sort of web application and relying on a third-party library, and we discover that some of its functionality is vulnerable. Maybe there's a public disclosure, maybe we ran a pen test. We have a security team that looks at this and advises that the library is vulnerable to X, Y, Z, and we can then write an integration test that actually confirms that the vulnerability is there. Let's say it's something like SQL injection. We can write a unit test that submits an HTTP request to the API endpoint that exploits that vulnerability, then run the test and look at the response coming back. We might look for an artifact that shows the endpoint was actually SQL injectable: perhaps we get back more results than we're supposed to, or maybe some sort of error code is returned. We can embody that inside a single unit test and then use test-driven development practices to make sure it's green. That's how we can cover not just the code that's being authored, but also what's in those dependencies.
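A hedged illustration of that kind of regression test: the endpoint, payload, and response checks below are assumptions for the sake of the example, not the speaker's actual code, but they show the shape of replaying a known injection payload and asserting the non-vulnerable behavior:

```python
from django.test import TestCase

class SqlInjectionRegressionTest(TestCase):
    def test_search_endpoint_not_injectable(self):
        # Classic tautology payload that exploited the original bug.
        response = self.client.get("/api/products/", {"q": "' OR '1'='1"})

        # Vulnerable behavior showed up as a database error leaking into the
        # body or an unexpectedly large result set; the fix should do neither.
        self.assertEqual(response.status_code, 200)
        body = response.content.decode()
        self.assertNotIn("OperationalError", body)
        self.assertNotIn("syntax error", body.lower())

        # Assuming a JSON API: the payload should be treated as a literal
        # search term and therefore match nothing.
        self.assertEqual(response.json().get("results", []), [],
                         "tautology payload should match no products")
```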
QCon: How about when you're depending on something in the environment that's not configured correctly? It's not just the code, it's this dependency on the environment you're operating in.
Christopher: This is where unit testing gives you the ability to use introspection: it provides visibility inside the codebase being tested. When you're talking about other network-level dependencies, you don't have that same level of introspection to dive into. However, we can still write code that establishes a network connection and checks, for example, whether a service is in a certain place. We can take the same concept of performing an action, whether that's a local action against the codebase or actually getting the network to do something, embody that action in a test, and then make sure we're getting the result we expect. The result check comes down to distinguishing between a vulnerable and a non-vulnerable state. This is absolutely something we can deploy not only within the codebase, but also against the network dependencies we see in every deployment.
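A minimal sketch of that network-level check, using only the Python standard library; the host name and ports are hypothetical, and the asserted states stand in for whatever "non-vulnerable" means in a given deployment:

```python
import socket
import unittest

APP_HOST = "app.example.internal"   # hypothetical deployment target

class EnvironmentRegressionTests(unittest.TestCase):
    def _port_open(self, host, port, timeout=3):
        # Attempt a plain TCP connection; treat refusal or timeout as closed.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def test_web_service_is_listening(self):
        # The service we depend on should actually be where we expect it.
        self.assertTrue(self._port_open(APP_HOST, 443))

    def test_database_is_not_exposed(self):
        # A previously fixed misconfiguration exposed the database publicly;
        # this test keeps that hole closed.
        self.assertFalse(self._port_open(APP_HOST, 5432))

if __name__ == "__main__":
    unittest.main()
```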