Monthly Archives: March 2014

Avoiding Security Vulnerabilities during Implementation

Once a solid approach to architecture threat analysis has been established, most of the remaining security vulnerabilities are coding problems, that is, poor implementation. Examples include injection flaws, encoding issues, and other problems such as those listed in the OWASP Top 10 Project.

While a checklist of best practices for developers can help address some of these bad coding habits, a more structured and repeatable approach should be established as well. Static application security testing (or “Static Code Analysis” – SCA) can identify most of the code-level vulnerabilities that remain after a thorough architecture threat analysis. However, it is crucial that SCA be executed consistently and automatically.

A common best practice is to analyze any newly written source code prior to compilation or, for scripting languages, prior to promoting it for an intermediate release. Automating this process in a build system / continuous delivery tool chain makes it scalable and can also ensure that developers follow specific secure coding best practices.
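The gating logic of such an automated SCA step can be sketched in a few lines. This is an illustration only, not any specific tool's API: the finding format (a rule name plus a severity) and the severity threshold policy are my assumptions here, and real SCA tools emit their own report formats.

```python
# Hypothetical SCA build gate (illustrative sketch, not a real tool's API).
# Assumed finding format: {"rule": "<rule name>", "severity": "<level>"}.

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def sca_gate(findings, fail_at="high"):
    """Return True if the build should be failed.

    findings -- list of finding dicts, e.g. {"rule": "sql-injection",
                "severity": "high"}
    fail_at  -- lowest severity that blocks the build
    """
    threshold = SEVERITY_RANK[fail_at]
    blocking = [f for f in findings
                if SEVERITY_RANK[f["severity"]] >= threshold]
    for f in blocking:
        # Surface each blocking finding in the build log.
        print("BLOCKING: %s (%s)" % (f["rule"], f["severity"]))
    return bool(blocking)
```

The threshold parameter is where the project owner's policy decision lives: a strict project fails the build at "medium", a more lenient one only at "critical".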

When implementing automated SCA during builds, the project owner must decide whether to fail a product build if the SCA process fails. I generally recommend that a developer should be able to run the entire toolchain on their local machine. This allows them to run the entire build locally, as it would be executed on the central build server, with all checks and automated tests, before they commit to source control. This ensures proper execution not only of the security tests, but also of other quality assurance steps such as regression testing.
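The idea of one toolchain shared by the developer's machine and the central build server can be sketched as a single entry point that runs every step in order and stops at the first failure. The step names below are illustrative, not from any particular build system:

```python
def run_pipeline(steps):
    """Run build steps in order, stopping at the first failure,
    exactly as the central build server would.

    steps -- list of (name, callable) pairs; each callable
             returns True on success.
    """
    for name, step in steps:
        ok = step()
        print(("PASS" if ok else "FAIL"), name)
        if not ok:
            return False
    return True

# Both CI and a developer's local build invoke the same list, e.g.:
# run_pipeline([("compile", compile_fn), ("sca", sca_fn), ("tests", test_fn)])
```

Because the list of steps is defined once, a developer who runs it locally gets exactly the verdict the central server would produce, before anything is committed.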

The Hat of Shame

However, to stay productive, developers must have the option to skip tests locally. To avoid committing bad code to source control (and thereby triggering unnecessary builds on the central build server), they must also be able to configure the same tools that are used in the build chain in their IDE, with the same rules as the central project configuration. This lets them execute the SCA while they are writing the code, and justifies skipping the SCA build locally before checking in.
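One way to make the local skip option explicit rather than silent is to require a justification whenever SCA is skipped. The environment variable names below are my invention for this sketch, not a convention from the article:

```python
import os

def should_run_sca(env=None):
    """Decide whether the local build runs the SCA step.

    Skipping is allowed, but only with an explicit justification,
    so the decision is visible and auditable.
    Returns (run_sca, reason_for_skipping).
    """
    env = os.environ if env is None else env
    if env.get("SKIP_SCA") != "1":
        return True, None
    reason = env.get("SKIP_SCA_REASON", "").strip()
    if not reason:
        # Refuse a silent skip: the developer must state why.
        raise ValueError("SKIP_SCA=1 requires SKIP_SCA_REASON=<justification>")
    return False, reason
```

A developer who has already run the same rules in the IDE can then skip the local SCA build with, say, `SKIP_SCA=1 SKIP_SCA_REASON="ran SCA in IDE"`, and that justification is on record.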

Using such a setup ensures that developers can deliver code that meets the individual project’s standards. I generally also recommend failing the build on the central integration build server if committed code does not meet these standards. In most cases, a failure there is caused by a developer not using the toolchain (including the local tools for the IDE) as instructed, causing unnecessary work and delays for the rest of the team – which means that this developer is entitled to wear the hat of shame for a while.

Secure Development Lifecycle and Agile Development Models

The processes I described earlier for security requirements analysis and architecture threat analysis seem very heavyweight, and a question that I get asked frequently is how to use such processes in agile models. At this time, HP is the third largest software company in the world (measured in total software revenue, behind IBM and Microsoft). There is a wide range of software development models in HP: I have been leading secure software development lifecycle (SSDLC) programs in both HP Software and HP’s Printing and Personal Systems group, working with teams that employed traditional models (“waterfall style”) as well as with teams that used more progressive models (Scrum, XP, SAFe, etc.).

With all teams I worked with, it was possible to create an SSDLC program that accommodated the individual team’s working model. As an example, while a team using a traditional waterfall model will perform the requirements and design analysis in their “planning stage”, an agile team will commonly have completed these activities during their previous Potentially Shippable Increment (PSI). In other words, while the majority of developers in a team that uses, for example, SAFe may be working on PSI n, part of the team has already started work on the analysis of the requirements and design that will go into PSI n+1.

The steps that need to be performed in a secure development lifecycle program are independent of the development model, but how they are scheduled and executed may differ from organization to organization. It is important to design the SSDLC program to match a team’s needs, and it is equally important to create metrics for the SSDLC program that match the organization – making sure that the metrics reflect not only the aspects of the SSDLC program, but also fit into the existing model of how the organization is measured.