The State of Software Security, As Reported By Veracode

One of the most comprehensive studies of software security that I’ve found is published annually by Veracode and is entitled “The State of Software Security,” or simply SOSS. As of the end of 2018, they have published a total of 9 volumes, one for each of the past 9 years. The content in each of these reports is very enlightening. Unfortunately, the reports also highlight that software applications simply aren’t getting more secure.

The SOSS report provides a wealth of insight into the myriad types of vulnerabilities that plague software applications. In volume 9, Veracode aggregated information from a total of 700,000 scans across 2 trillion lines of code. Because the data comes from a single vendor, it may be skewed. Nonetheless, I feel that it is probably fairly accurate, considering that Veracode is the leading provider of SAST and DAST solutions.

I’m not going to rehash the entire report here. If you want, you can find SOSS Volume 9 on Veracode’s website. Unfortunately, you’ll need to fill in your contact information to download the report.

There are a few key inferences that we can make from the information in this report. I’d like to highlight some of them here and hopefully give you a few techniques for dealing with the problems the report exposes.

There’s a Slight Disconnect Between Veracode and the OWASP Top 10

The report outlines 10 common flaws discovered through static analysis, mapped to the percentage of applications that contained those flaws. I thought that it would be interesting to match these up with the OWASP Top 10.

[table id=4 /]

Of course, static analysis gives us only a one-dimensional view of the code. The report also contains the top 10 common flaws discovered through dynamic analysis, again mapped to the percentage of applications that contained them. Below, I’ve also matched these up to the OWASP Top 10.

[table id=5 /]

The differences in the flaws discovered by SAST and DAST are quite understandable. Certain flaws are easier to find with static analysis; others are easier to find with dynamic analysis. This simply reflects that the two testing types are complementary, necessary, and useful. But what about the difference between OWASP’s prioritization and the prevalence Veracode discovered?

The OWASP community takes much more into consideration than the frequency and prevalence of security issues. They also account for how difficult a vulnerability is to exploit, how easy it is to discover, and its impact on the application and organization. Still, there is enough of a disconnect here that development teams should be cognizant of the weaknesses in the technologies they are using. In other words, use the OWASP Top 10 simply as a guideline for prioritizing your security fixes, and do not rely solely on tools to prioritize issues. Ultimately, you should assign priority to a vulnerability based upon the risk it poses to your organization.
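To put that last point into practice, here is a minimal sketch in Java of risk-based prioritization, loosely inspired by the common approach of combining likelihood and impact. The factor names, the 0-9 scale, and the bucket thresholds below are illustrative assumptions of mine, not a standard scoring formula:

```java
// Minimal sketch of risk-based prioritization: score a finding by combining
// likelihood and impact factors, then bucket the result for triage.
// The factors, the 0-9 scale, and the thresholds are illustrative assumptions.
public final class RiskScore {

    /** Average a set of 0-9 factor ratings into a single likelihood or impact value. */
    private static double average(double... factors) {
        double sum = 0;
        for (double f : factors) {
            sum += f;
        }
        return sum / factors.length;
    }

    /** Overall risk = likelihood x impact (0..81), bucketed for triage. */
    public static String rate(double easeOfDiscovery, double easeOfExploit,
                              double dataImpact, double businessImpact) {
        double likelihood = average(easeOfDiscovery, easeOfExploit);
        double impact = average(dataImpact, businessImpact);
        double risk = likelihood * impact;
        if (risk >= 36) return "CRITICAL";
        if (risk >= 18) return "HIGH";
        if (risk >= 6) return "MEDIUM";
        return "LOW";
    }

    public static void main(String[] args) {
        // Example: easy to discover, moderately easy to exploit, sensitive data at stake.
        System.out.println(rate(8, 5, 7, 6)); // 6.5 * 6.5 = 42.25 -> CRITICAL
    }
}
```

The exact numbers matter far less than the habit: score each finding against your own organization’s exposure rather than taking a tool’s severity label at face value.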

One other thing that I thought was interesting is that A9, Using Components with Known Vulnerabilities, didn’t make Veracode’s top 10 list. The reason may be that Software Composition Analysis is actually performed by a recently acquired product, SourceClear, which Veracode may not yet have fully integrated. Veracode did call this issue out in the report, however. As a matter of fact, the report indicates that an obscene number of applications utilize a third-party component with at least one known vulnerability. Yes, this is definitely an epidemic, and in my humble opinion, it should probably be elevated by OWASP.

[table id=6 /]

Programming Languages Aren’t Necessarily Getting More Secure

Unfortunately, over the years, many developers have used Veracode’s SOSS reports to make a case for their programming language of choice. They argue that their language is more secure because fewer vulnerabilities are documented for it. However, this assumes that Veracode has a large enough sample of each language, and unfortunately, that assumption just isn’t true. On page 42 of the SOSS, Veracode documents language prevalence and notes that the distribution may lead to some erroneous statistics.

[table id=7 /]

Unfortunately, this sampling is radically skewed. I cobbled together the table below to try to identify the most popular programming languages of 2018.

[table id=8 /]

As you can see, JavaScript tops the list by most measures. However, it accounts for only 11.5% of the lines of code scanned by Veracode. In addition, a few programming languages that appear to be growing in popularity very quickly, such as Kotlin, Rust, and Go, didn’t make the list at all. We simply do not have adequate data on a number of these languages to deduce which is the most secure.

However, Java, .NET, and C++ have been anything but stagnant. In the past few years, each of these languages has received major upgrades. Unfortunately, those upgrades haven’t had a noticeable effect on building secure apps, as Veracode’s report indicates with more staggering statistics. The following table shows the percentage of applications written in these languages with at least one vulnerability:

[table id=9 /]

This clearly isn’t heading in the direction that security analysts or software architects desire. The solution to this problem is attention to detail. If you are going to architect a system, you need to be familiar with the good, the bad, and the ugly of your tech stack of choice. You need to become aware of your language’s security flaws and stay abreast of the vulnerabilities that are discovered in the components that you are using. This is no simple feat, and this is why static and dynamic analysis tools are so critically important.

As a software architect, one of the things that I have always striven to accomplish is to build out a framework that allows my developers to focus on writing business logic. The more plumbing you force your developers to write, the more opportunity you give them to make mistakes. Your developers shouldn’t have to think about transactions, resource leaks, or information leakage. If you architect your application correctly from the beginning, much of it can be secure by default.
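As a rough illustration of what secure-by-default plumbing can look like, here is a minimal sketch in Java: a hypothetical TransactionTemplate that owns connection handling, commit, rollback, and resource cleanup so that business code simply can’t forget any of it. The class name and error handling are my own assumptions, not any specific framework’s API:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.util.function.Function;
import javax.sql.DataSource;

// Hypothetical helper that owns the transaction and resource lifecycle so that
// business code never touches commit/rollback/close directly.
public final class TransactionTemplate {
    private final DataSource dataSource;

    public TransactionTemplate(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public <T> T execute(Function<Connection, T> work) {
        // try-with-resources guarantees the connection is closed, so developers
        // writing business logic can't introduce a resource leak here.
        try (Connection conn = dataSource.getConnection()) {
            conn.setAutoCommit(false);
            try {
                T result = work.apply(conn);
                conn.commit();
                return result;
            } catch (RuntimeException e) {
                conn.rollback();
                throw e;
            }
        } catch (SQLException e) {
            // Wrap low-level database details so they don't leak into messages
            // that might eventually reach a user.
            throw new IllegalStateException("Transaction failed", e);
        }
    }
}
```

With a handful of helpers like this in place, developers spend their time on business logic, and the security-sensitive plumbing gets written, reviewed, and hardened exactly once.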

You Can’t Fix the Issues if You Only Scan Once or Twice a Year

It’s amazing to me that organizations will go through a vetting process and purchase a SAST or DAST solution only to use it once or twice. According to the SOSS report, “Flaws persist 3.5 times longer in applications only scanned 1 to 3 times per year compared to ones tested 7 to 12 times per year.” A security analysis tool is one of the most powerful tools in a security-minded development team’s toolbox. As such, it deserves to be included in the CI pipeline.
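As a rough sketch of what that CI gate might look like, here is a small Java program that could run as a build step, assuming the scanner can export its findings to a plain-text report with one finding per line prefixed by its severity. The report path and the "SEVERITY: description" line format are assumptions for illustration, not any particular vendor’s output format:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Hypothetical CI gate: read an exported findings report and fail the build
// (non-zero exit code) if any HIGH severity finding is present.
// The report path and "SEVERITY: description" format are illustrative assumptions.
public final class ScanGate {
    public static void main(String[] args) throws IOException {
        Path report = Path.of(args.length > 0 ? args[0] : "target/scan-findings.txt");
        List<String> findings = Files.readAllLines(report);

        long high = findings.stream()
                .filter(line -> line.startsWith("HIGH:"))
                .count();

        System.out.printf("Scan findings: %d total, %d high severity%n",
                findings.size(), high);

        if (high > 0) {
            // Failing the build here keeps the feedback loop short for developers.
            System.exit(1);
        }
    }
}
```

The point isn’t this particular program; it’s that the scan runs on every build and its results can break the build, so findings surface while the offending change is still fresh in a developer’s mind.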

But apparently, we still don’t get it. The report documents that a whopping 37.1% of applications are scanned only once per year. Only 2.3% are scanned about once per week. What about those applications that are scanned daily? They comprise 0.1%. Wow. Shoot me now. Really.

Issues tend to go one of two ways. First, they can grow stale very quickly because they live in highly volatile areas of code; developers will change that code frequently or refactor it out altogether. Second, issues have a tendency to set up camp and stick around for a while, because developers can easily bury them under four layers of complex architecture. My point is that the list of vulnerabilities is an ever-changing landscape, and you are asking for a headache if you don’t run a scan as often as you can.

I’m sure this seems like a daunting task to development teams that work on products with millions of lines of code. I have personally worked on some fairly large codebases, and I’ve run scans that have taken upwards of 24 to 48 hours. Long scan times are unfortunately very common for monolithic applications. However, I’ve always placed an analysis tool in the CI pipeline and had it cranking out scans as often and as fast as it could. The faster you can provide feedback to your developers, the faster you can put the fire out. The longer the time between scans, the more stale and useless the scan becomes.

In my opinion, configuring a static analysis tool is just as critical as having an automated build. You should automate both processes. Go through the pain of configuring it once and you’ll save yourself countless hours down the road. And you’ll be light years ahead of everyone else, because you’ll have closed more than 90% of your flaws in under 25 days (according to the SOSS report).

In Summary

Volume 9 of the Veracode State of Software Security report is enlightening and eye-opening. There are three key takeaways that I hope to leave with you:

  1. Static and dynamic analysis tools are going to provide an in-depth view of the security flaws in your code. These flaws may not match perfectly with the OWASP Top 10. It’s important to remember to prioritize these flaws based upon the risk they pose to your organization.
  2. Programming languages and technology stacks are not necessarily getting more secure over time. There are a few languages to stay away from or migrate away from as soon as possible, such as PHP, Classic ASP, and ColdFusion. However, most vulnerabilities and language deficiencies can be mitigated early on in a project with good architectural choices. Also, be sure to have good standards and best-practices documentation available for your developers.
  3. Scanning your application for vulnerabilities should be just as important as an automated build. Invest the time to automate both of these processes as soon as possible. You’ll save countless man-hours, the scans will produce more relevant information, and you’ll be much more productive when dealing with security flaws.

Photo by Samuel Zeller on Unsplash
