To do a job well, you need the right tools. But it’s just as important, perhaps even more so, to use those tools correctly. A hammer will make things worse in your construction project if you’re trying to use it as a screwdriver or a drill.
The same is true in software development. The intricacies of coding, and the fact that it’s done by humans, mean that throughout the software development life cycle (SDLC) there will be bugs and other defects that create security vulnerabilities hackers can exploit.
Addressing those vulnerabilities effectively is called defect management.
There are multiple software security testing tools on the market. Among them: static analysis (SAST), which tests code at rest; dynamic analysis (DAST), which tests code while running; and interactive analysis (IAST), which tests code while interacting with external input. Software composition analysis (SCA) helps find and fix defects or licensing conflicts in open source or third-party software. Pen testing and red teaming at the end of the SDLC can find bugs that may have been missed earlier in the cycle.
But for those tools to be effective in a DevOps world, where there is enormous pressure on developers to produce quickly, they have to be configured properly. If they aren’t, they can flag every defect without regard for its significance and easily overwhelm developers. Any developer who is constantly bombarded with notifications from a security analysis tool will start to ignore them. It becomes white noise. And the inevitable result will be the opposite of the intent: less-secure code.
That’s even more likely when tools are automated. Automation is good in that it’s much faster than a manual process, but it’s important to limit notifications to security vulnerabilities considered critical or high-risk.
Thatās one of the prime messages Meera Rao delivers to clients when talking about defect management.
Rao, senior director for product management (DevOps solutions) at Black Duck, said that if a static analysis tool such as Coverity® isn’t “finely configured,” it will push far too many defects, including those that are low risk, into a defect-tracking tool like Jira.
“Do I want all of the thousands of issues that Coverity found to go into defect tracking?” she asked. “No, because unless it is configured otherwise, it finds them all (critical, high, medium, low, informational), and I don’t want them all to be flooded into Jira.”
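As a rough illustration of that kind of filtering, the sketch below reads a hypothetical JSON export of scanner findings and opens Jira tickets only for critical and high-severity results, using Jira’s standard REST endpoint for creating issues. The export format, field names, project key, and credentials are assumptions for illustration, not Coverity’s actual output.

```python
"""Hypothetical sketch: push only critical/high findings into Jira.

Assumes the scanner can export findings as JSON with `severity`, `checker`,
and `description` fields (an illustrative format, not a real tool's schema),
and that Jira's REST API is reachable with basic auth.
"""
import json
import requests

JIRA_URL = "https://jira.example.com/rest/api/2/issue"   # assumption
AUTH = ("svc-security", "api-token")                      # assumption
TICKET_WORTHY = {"critical", "high"}                      # the policy decision

with open("sast-findings.json") as f:                     # hypothetical export file
    findings = json.load(f)

for finding in findings:
    if finding["severity"].lower() not in TICKET_WORTHY:
        continue  # medium/low/informational findings stay out of the backlog
    issue = {
        "fields": {
            "project": {"key": "SEC"},                    # assumed project key
            "issuetype": {"name": "Bug"},
            "summary": f"[{finding['severity'].upper()}] {finding['checker']}",
            "description": finding["description"],
        }
    }
    requests.post(JIRA_URL, json=issue, auth=AUTH, timeout=30).raise_for_status()
```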
So when she talks to security teams at client organizations, she tells them their first priority should be to decide what security vulnerabilities are critical to the application being developed.
“If it is externally facing, like a banking application that is going to be available throughout the world, then I’m most nervous about cross-site scripting (XSS) and SQL injection,” she said. “I don’t care about empty catch blocks or other less important issues, because then I would be flooding my defect tracking.”
“When I configure a tool such as static analysis, I want to narrow it down to the vulnerabilities that my organization and my application care most about. The tool might have found thousands of other issues, but I don’t care.”
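The same idea applies to vulnerability classes. Here is a minimal sketch, assuming findings are tagged with CWE identifiers: keep only the categories a given application cares about, such as CWE-79 (XSS) and CWE-89 (SQL injection) for an externally facing app, so lower-priority findings never reach defect tracking. The application names and policy are hypothetical.

```python
"""Illustrative sketch: keep only the vulnerability classes this application
cares about. CWE-79 (XSS) and CWE-89 (SQL injection) are the example set for
an externally facing app; the findings format and app names are assumed."""

# Per-application policy: which CWE IDs warrant a defect ticket.
APP_POLICY = {
    "public-banking-portal": {79, 89},    # XSS, SQL injection
    "internal-batch-service": {798},      # e.g., hard-coded credentials (CWE-798)
}

def relevant_findings(findings, app_name):
    """Return only findings whose CWE is in the application's policy."""
    wanted = APP_POLICY.get(app_name, set())
    return [f for f in findings if f.get("cwe") in wanted]

# Example: a low-priority empty-catch-block finding never reaches Jira.
sample = [
    {"checker": "SQLI", "cwe": 89, "file": "login.py"},
    {"checker": "EMPTY_CATCH_BLOCK", "cwe": 1069, "file": "util.py"},
]
print(relevant_findings(sample, "public-banking-portal"))
```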
Rao said it’s also important for security and development teams to realize that the security vulnerabilities most important to them will likely be different from a general top 10 list like the one created by the Open Web Application Security Project (OWASP).
“The OWASP Top 10 is unbelievably good, but those might not be the top 10 for your organization,” she said. “So you have to make sure you have the metrics to look at what are the top 10 or top 5 security vulnerabilities that matter the most to you. And just for those five, make sure that every time you run the tool, whether it is static, dynamic, or interactive analysis, you create defect tickets for those, and then see that it is a closed loop.”
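One way to picture that closed loop, under the assumption that findings can be matched to tickets by checker, file, and line, is sketched below: each scan opens tickets for new findings in the agreed top categories, and tickets whose finding no longer appears are flagged for verification and closure. The data structures are stand-ins for a real defect tracker.

```python
"""Minimal sketch of the "closed loop": each scan, open tickets for new
findings in the organization's top categories and flag tickets whose finding
no longer appears. Ticket storage is a plain dict here; a real setup would
query the defect tracker instead."""

TOP_CATEGORIES = {"XSS", "SQL_INJECTION"}        # the org's agreed "top N" (assumed)

def reconcile(previous_tickets, current_findings):
    """Compare open tickets against the latest scan results."""
    current_keys = {
        (f["checker"], f["file"], f["line"])
        for f in current_findings
        if f["checker"] in TOP_CATEGORIES
    }
    to_open = current_keys - set(previous_tickets)
    to_verify_closed = set(previous_tickets) - current_keys
    return to_open, to_verify_closed

open_tickets = {("XSS", "search.jsp", 42): "SEC-101"}
latest_scan = [{"checker": "SQL_INJECTION", "file": "login.jsp", "line": 17}]

new, resolved = reconcile(open_tickets, latest_scan)
print("open tickets for:", new)        # new SQL injection finding needs a ticket
print("verify and close:", resolved)   # XSS at search.jsp:42 no longer found
```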
“The main goal is not to push in as many rules as possible within the tool,” Rao said. “Not all of you are writing web applications. Some of you might be writing a microservice. Some might be writing middleware that has nothing to do with XSS or SQL injection because it doesn’t have a database.”
“The key is to make sure that you customize the tool, whether it is SAST, DAST, IAST, or SCA, to the application, the language, the technology, or to the framework you are using, and then once you do that, you will have a narrow set of results. And then you can even fine-tune that as well,” she said.
Fine configuration also means a vast reduction in one of the chief irritations for developers: false positives. “Are there chances that there might be some false positives in all of this workflow?” Rao asked. “Yes. Tools are tools and there will be false positives.”
“But what I ask organizations is, what is the rate of false positives when you finely configure the tool and the rules, and narrow it down to the ones that you truly care about? I have seen maybe 2% to 3% false positives at that point. That’s acceptable.”
That percentage can be cut even further over time, she said, because if developers notify the security team about a false positive, “they mark it as such and then it’s gone forever, because the tool will remember all of those.”
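Commercial tools such as Coverity track that triage state internally; the rough sketch below just illustrates the idea with a hypothetical suppression file, where findings a developer has marked as false positives are fingerprinted once and filtered out of every later run. The fingerprinting scheme and file format are assumptions.

```python
"""Rough sketch of persisting false-positive triage between runs. Findings
marked as false positives are stored by fingerprint and dropped from later
scans. The fingerprint scheme and storage file are illustrative assumptions."""
import hashlib
import json
from pathlib import Path

SUPPRESSIONS = Path("false-positives.json")      # hypothetical triage store

def fingerprint(finding):
    """Stable-ish identity for a finding (checker + file + function)."""
    raw = f"{finding['checker']}|{finding['file']}|{finding.get('function', '')}"
    return hashlib.sha256(raw.encode()).hexdigest()

def load_suppressions():
    return set(json.loads(SUPPRESSIONS.read_text())) if SUPPRESSIONS.exists() else set()

def mark_false_positive(finding):
    """Called when a developer triages a finding as a false positive."""
    suppressed = load_suppressions() | {fingerprint(finding)}
    SUPPRESSIONS.write_text(json.dumps(sorted(suppressed)))

def filter_findings(findings):
    """Drop anything previously marked as a false positive."""
    suppressed = load_suppressions()
    return [f for f in findings if fingerprint(f) not in suppressed]
```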
The second important element of using security analysis tools and defect tracking correctly is to make sure that when they flag something that’s critical, it gets fixed.
“All organizations should have some kind of risk management,” Rao said. That’s needed to create a protocol for how critical security vulnerabilities are handled.
She said one way to do it is to give developers a deadline, perhaps one or two weeks, to fix a critical defect. If a query to the defect management tool shows that it hasn’t been fixed by the deadline, “then pause the pipeline. Immediately notify the development team, saying you cannot go to production.”
Alternatively, “someone needs to sign off, take the ownership,” she said. “Say ‘I know there is a critical vulnerability but I have other controls in place and I need to push this to production.’ The defect management tool helps you control that.”
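Put together, that protocol can be expressed as a simple pipeline gate. The hedged sketch below queries Jira for critical security tickets still unresolved past a 14-day deadline and fails the build unless a ticket carries an explicit risk-acceptance label. The project key, priority name, deadline, and “risk-accepted” label are illustrative assumptions, not a standard convention.

```python
"""Hedged sketch of a CI gate: query the defect tracker for overdue critical
security tickets and stop the pipeline unless the risk has been explicitly
accepted. Uses Jira's standard search endpoint; everything in the JQL and the
label name are assumptions for illustration."""
import sys
import requests

JIRA_SEARCH = "https://jira.example.com/rest/api/2/search"   # assumption
AUTH = ("svc-pipeline", "api-token")                          # assumption

# Critical security defects opened more than 14 days ago and still unresolved.
JQL = "project = SEC AND priority = Critical AND resolution = Unresolved AND created <= -14d"

resp = requests.get(JIRA_SEARCH, params={"jql": JQL, "fields": "labels,summary"},
                    auth=AUTH, timeout=30)
resp.raise_for_status()

blocking = [
    issue for issue in resp.json().get("issues", [])
    if "risk-accepted" not in issue["fields"].get("labels", [])   # sign-off label (assumption)
]

if blocking:
    for issue in blocking:
        print(f"BLOCKING: {issue['key']} - {issue['fields']['summary']}")
    sys.exit(1)   # non-zero exit pauses the pipeline; no path to production
print("No overdue critical security defects; pipeline may proceed.")
```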
Defect tracking, she said, can also help improve the quality and security of the code being written by the development team.
“Over months or even weeks, you will be able to see the ROI of what happened with this workflow,” she said. It will keep a log of who on the development team is making the most mistakes, how quickly defects are being fixed, and who fixed them.
“The tool you use has all those metrics,” she said, “so you can see trends. Is the number of vulnerabilities going up? Do my developers need more training? Do I need to help them with instructor-led training, e-learning, or defensive programming? What are some of the vulnerabilities that they are creating over and over again? You get all these insights when you have a very tightly controlled defect management workflow.”
That, Rao added, can be much more effective than a PDF or spreadsheet that nobody looks at. “Having this tight loop where they create the ticket and you’re able to run the specific tool to identify whether they really fixed the vulnerability or not, that’s where you get a lot of benefits.”
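As a rough sketch of the sort of reporting that workflow enables, the snippet below computes new tickets per month, recurring categories, and mean time to fix from a hypothetical export of ticket data; the fields and values are invented for illustration.

```python
"""Illustrative trend reporting from exported defect-tracker data (format and
sample values assumed): tickets per month, recurring categories, mean time to fix."""
from collections import Counter
from datetime import datetime
from statistics import mean

tickets = [   # hypothetical export from the defect tracker
    {"category": "XSS", "created": "2024-03-02", "resolved": "2024-03-09"},
    {"category": "SQL_INJECTION", "created": "2024-03-15", "resolved": "2024-03-20"},
    {"category": "XSS", "created": "2024-04-01", "resolved": "2024-04-16"},
]

def days_to_fix(t):
    fmt = "%Y-%m-%d"
    return (datetime.strptime(t["resolved"], fmt) - datetime.strptime(t["created"], fmt)).days

per_month = Counter(t["created"][:7] for t in tickets)   # is the count trending up?
recurring = Counter(t["category"] for t in tickets)      # what keeps coming back?
mttr = mean(days_to_fix(t) for t in tickets)             # are fixes getting faster?

print("new tickets per month:", dict(per_month))
print("recurring categories:", dict(recurring))
print(f"mean time to fix: {mttr:.1f} days")
```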
The bottom line is to help organizations understand that security defects are just as important as quality assurance (QA) defects. Often, she said, “when teams find QA defects, they immediately create a ticket in Jira, but when it comes to security, they are more likely to say that maybe it’s a false positive.”
But if organizations customize and configure their security tools, they won’t have to sacrifice speed for security.