
Choosing SAST tools, what matters?

  • Writer: David Read
  • Mar 13, 2020
  • 5 min read

Part II can be found here.


Photo by Chris Ried on Unsplash

A common mistake when picking a SAST tool is to choose the one you think will find the most vulnerabilities. There are many other factors (most not even related to security) that need to be taken into account when deciding which tool to use. The cost of ignoring these factors can grow exponentially with the size of your development team, and it can also be incredibly hard to measure. The advice given here comes mostly from my personal experience, witnessing the rise and fall of different SAST tools, both as a developer and as a manager.


While this article talks mainly about SAST tools, most of these principles apply to other forms of automated analysis and scanning products and services too, including SCA, CVE scanning and [A-Z]AST tools (SAST, DAST and IAST, for example).


Please note, no one solution satisfies everything. One product may work well for one company but be utterly incompatible with the next. This article is not a pitch for the latest SAST tools; we will instead talk about what the best SAST product should be without mentioning a single product.


Why do you even want a SAST tool?

It’s important to know why you want a SAST tool in the first place. If you need a SAST tool because:

  • Regulations/ISO27000/NIST said so.

  • Management asked for it.

  • Everyone else is/it’s best practice.

Then you are starting for the wrong reasons, and you might be setting yourself up to fail. None of these motivations says anything about what the tool should actually achieve. You may succeed at pleasing management and ticking that regulatory box, but you won’t necessarily make your developers’ lives easier or your code any more secure.


There are typically three reasons to use SAST tools:

  • More secure code

  • Less buggy code

  • Reduce the cost (in time and money) of achieving the first two

Reducing cost is the most crucial issue here.

Shifting Left reduces bug fix costs


This ties in with the shift-left mantra: the goal is to identify bugs and security issues earlier, because the sooner you find them, the easier they are to fix. That is the real advantage of static analysis tools; they help you identify issues at the development and testing stages rather than in production, where a quick change to a couple of lines of code becomes a full bug fix with many additional costs.


If you want secure and less buggy code, and money is no object, you can double the size of your development teams and have the new staff look for bugs and vulnerabilities full time. You’ll end up with much better quality code, but the cost will be astronomical compared to using static analysis tools. If your static analysis tools aren’t reducing the cost of fixing issues, then you are in trouble. Everything we talk about here will focus on this one mantra and these three requirements.


False Positives vs False Negatives

The big issue many people have with SAST tools is invalid results, a.k.a. false positives and false negatives. Not wasting time on alerts that aren’t real issues is important, but so is coverage: if your tool isn’t finding the things you care about, you have invested effort into a tool that isn’t returning the value you hoped for. While both of these matter, there are other arguments and issues to think about too.


False Negatives

“But does it even find anything?” This is usually the first response I get when pitching a new SAST tool to a developer, and it is a fair one: there is no point in procuring and deploying a new tool that does not produce any benefit. However, it tends to push the focus towards proving that a SAST tool can find the coolest and most complex vulnerabilities, or, conversely, towards concluding that because a particular tool missed one bug it must be worthless.


God mode

If a tool existed that guaranteed to find every vulnerability, but it cost more to use than trained developers or less powerful alternatives, the other options might still be better. Even when a tool doesn’t promise you the world, if you are comparing two tools and trying to decide which to use, don’t just pick the one that finds the most issues. It’s essential that a tool can find vulnerabilities, but other factors might make a tool that looks worse at face value the better choice in the long run.


Additionally, anyone who does claim their tool catches all vulnerabilities is a charlatan and will hopefully be exposed as a fraud before they manage to sell their company.


"All tools are worthless"

Just because a tool doesn’t find everything doesn’t mean it’s worthless. If you can prove a tool finds the most apparent issues, that still saves your developers time, cuts down on common, easy-to-avoid mistakes and reduces your developers’ mental load in general. There’s no point in fixing the most obscure issues until you’ve resolved the easy ones.
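To make “the most apparent issues” concrete, here is a deliberately simple, hypothetical Python example of the kind of flaw almost any SAST tool will flag, along with the easy fix; the function and table names are invented for illustration.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # The obvious, easy-to-avoid mistake: building SQL with string
    # formatting, which most SAST tools flag as possible SQL injection.
    query = "SELECT id, email FROM users WHERE name = '%s'" % username
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # The easy fix: a parameterised query, so user input is never
    # interpreted as SQL.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()
```

Catching mistakes like this automatically is low-effort, high-value work, even if the same tool would miss a subtle logic flaw.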


False positives

“It doesn’t find anything important!” Another valid gripe you might hear about SAST tools. Any time spent triaging fake or insignificant issues is time better spent being productive elsewhere. You want to make sure developers spend as little time as possible dealing with false positives.


Avoiding false positives can be particularly hard because of Bayes’ theorem: real vulnerabilities are rare relative to the amount of code scanned, so even an accurate scanner can produce mostly false alarms. Some products get around this by using SAT solvers to guarantee that anything they identify really is an issue, but this tends to apply more to generic testing than to vulnerability finding, and these tools can have problems elsewhere.
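To illustrate that base-rate problem, here is a rough back-of-the-envelope calculation; the prevalence, detection rate and false-positive rate below are assumed values chosen purely for illustration.

```python
# Bayes' theorem, back-of-the-envelope: when real vulnerabilities are rare,
# even a fairly accurate scanner produces mostly false alarms.
# All three rates are assumed values, chosen only to illustrate the effect.
prevalence = 0.001          # fraction of scanned locations that are truly vulnerable
true_positive_rate = 0.90   # chance the tool flags a real vulnerability
false_positive_rate = 0.05  # chance the tool flags clean code anyway

p_flagged = prevalence * true_positive_rate + (1 - prevalence) * false_positive_rate
p_real_given_flagged = (prevalence * true_positive_rate) / p_flagged

print(f"Chance a flagged finding is a real issue: {p_real_given_flagged:.1%}")
# Roughly 1.8%: the overwhelming majority of alerts would be false positives.
```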


If your developers spend an hour on each issue your SAST tool finds, then reducing false positives by 1% saves you an hour for every 100 findings. 1% doesn’t sound like much; however, it starts to matter when your codebase runs into millions of lines of code, your findings run into the thousands, and you start losing whole workdays to false positives.
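As a quick sketch of that arithmetic, using the one-hour-per-issue figure from above and some assumed backlog sizes:

```python
# Triage time saved by a 1% reduction in false positives, at one hour per
# finding (as in the paragraph above). The backlog sizes are assumed.
hours_per_issue = 1.0
reduction = 0.01

for issues_found in (100, 1_000, 10_000):
    saved = issues_found * reduction * hours_per_issue
    print(f"{issues_found:>6} findings -> {saved:>5.0f} triage hour(s) saved")
# 100 findings -> 1 hour; 10,000 findings -> 100 hours, i.e. several working weeks.
```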


The best way to deal with this is to look at the categories of issues the tool produces: if one category generates mostly false positives and hurts productivity, find a way to configure the tool to stop reporting it. If that isn’t possible, agree with the development teams that they can ignore those findings.
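If the tool itself cannot be configured to suppress a noisy category, a lightweight fallback is to filter its output before developers ever see it. Below is a minimal sketch assuming the tool can emit standard SARIF; the file names and suppressed rule-ID prefixes are hypothetical.

```python
import json

# Rule-ID prefixes the team has agreed to ignore (hypothetical examples).
SUPPRESSED_PREFIXES = ("noisy-crypto.", "debug-logging.")

def filter_sarif(path_in: str, path_out: str) -> None:
    """Drop agreed-upon noisy rule categories from a SARIF report."""
    with open(path_in) as f:
        report = json.load(f)
    for run in report.get("runs", []):
        run["results"] = [
            result for result in run.get("results", [])
            if not str(result.get("ruleId", "")).startswith(SUPPRESSED_PREFIXES)
        ]
    with open(path_out, "w") as f:
        json.dump(report, f, indent=2)

filter_sarif("scan-results.sarif", "scan-results.filtered.sarif")
```

The important part is not the script but the agreement behind it: the suppression list should be something the security and development teams maintain together.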


So what else should you care about?

There’s a range of issues that can affect the effectiveness of any SAST tool you use. However, unlike tracking false positives, they can be a lot more challenging to measure. If your SAST tool hampers your developers’ productivity or has significant maintenance and operational costs, then it collectively increases the cost of fixing the bugs and vulnerabilities it finds.


In part 2, we discuss these other issues in more detail, covering:

  • What features a SAST tool can have to help improve effectiveness and reduce cost

  • What’s most important from a service management and user engagement perspective

  • Other possible hidden costs to take into account

 
 
 



