Tag Archives: SSDLC Program

The Security Requirements Analysis

Once the main epics of a product release are known, the team can get into a more detailed “what needs to be built” analysis, and even get a first high-level understanding of the “how”.

The majority of the main functional and non-functional requirements can be derived from the product vision. During the requirements analysis, this initial set of requirements becomes more detailed and the architects start adding more comprehensive requirements that are, for instance, derived from best practices. In this step, the Security Architect will start a security requirements analysis to add to the requirements pool for the project. Eventually, this pool will translate into user stories and tasks and be assigned to PSIs (when following an agile development model like SAFe), but we are not quite there yet!

The security requirements analysis usually starts with an analysis of the product’s target market. As an example, if the product is meant to be sold to medical offices in the US or meant to process credit card payments, a lot of mandatory requirements derived from HIPAA and PCI respectively must be implemented by the product, or it cannot be shipped. On the other hand, if the product is mostly used by consumers, there are a lot of best practices that drive requirements for the product, but many of them are not mandatory for the product to be released.

Once the target market has been determined, the Security Architect should reach out once more to product management and marketing to understand if there are specific features that would allow the product to perform better in the market. Commonly, the Security Architect will help marketing and product management to analyze products of competitors, and suggest improvements that would distinguish the product from the competition. Trivial examples include offering encryption or privacy protection where competing products transfer or store data in plain text, implementing strong auditing and giving the customer easy access to such data, and offering secure and innovative authentication mechanisms where competitors only support username / password based authentication.

Security requirements are often derived from laws and regulations, which lie far outside the comfort zone and experience of most developers and product architects. Security stakeholders on the customer side are business information security managers and chief information security officers (CISOs), who speak a language of their own, often leading to misinterpretation of the provided non-functional requirements by the product team. A Security Architect performing the security requirements analysis understands both the relevant legal regulations and the language used to express them, and translates them into a language commonly understood by product development teams to ensure regulatory compliance.

After these preparatory steps have been completed, the Security Architect commonly works with the product architect(s) on a weighted and prioritized requirements list that ties back to the original target market analysis and also shows the inter-requirements dependencies. This requirements list is meant to help product management decide which security feature to implement for a specific release. It is a decision tool that allows product management to understand that if they want to sell the product to a specific market segment, or sell the product in a specific country, the product has to implement specific requirements. Also, the list might have some “non-negotiable” requirements, which must be implemented e.g. to satisfy corporate security policies.

Highlighting inter-dependencies between requirements also helps with coarse-grained effort estimation. As an example, if requirements X, Y, and Z depend on a complex set of requirements A, B, and C, and X, Y, and Z are required to sell in a specific market, then product management might decide to delay this feature set for a release or two to allow extra time to implement A, B, and C first, and instead focus e.g. on a different country in the first release.
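To make this a bit more tangible, the following minimal sketch (in Python) shows how such a weighted, dependency-aware requirements list could be represented and queried; all requirement IDs, effort numbers, and market tags are invented for illustration:

```python
# Minimal sketch of a dependency-aware security requirements list.
# All requirement IDs, weights, and market tags are made up for illustration.

REQUIREMENTS = {
    # id: (effort in story points, markets that mandate it, dependencies)
    "A": (8,  set(),             set()),
    "B": (5,  set(),             set()),
    "C": (13, set(),             {"B"}),
    "X": (3,  {"healthcare-us"}, {"A", "B"}),
    "Y": (5,  {"healthcare-us"}, {"C"}),
    "Z": (2,  {"payments"},      {"A"}),
}

def required_for(market):
    """Return all requirements (including transitive dependencies)
    that must be implemented to sell into the given market."""
    needed = set()
    todo = [r for r, (_, markets, _) in REQUIREMENTS.items() if market in markets]
    while todo:
        req = todo.pop()
        if req in needed:
            continue
        needed.add(req)
        todo.extend(REQUIREMENTS[req][2])   # follow dependencies
    return needed

if __name__ == "__main__":
    for market in ("healthcare-us", "payments"):
        reqs = required_for(market)
        effort = sum(REQUIREMENTS[r][0] for r in reqs)
        print(f"{market}: {sorted(reqs)} -> ~{effort} story points")
```

Running the sketch shows that entering the hypothetical "healthcare-us" market pulls in the comparatively expensive requirements A, B, and C, which is exactly the kind of trade-off product management needs to see.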

At the end of the requirements analysis, the product team should have a clear understanding of what needs to be built to address both short-term and long-term security and privacy concerns for the specific target markets, as well as which requirements to implement first, and options of how to implement them so that they can start the product small and grow it long-term, without painting themselves into a corner.

Reduce Expenses for Security by Designing Security into the Product

For the majority of software products, security is primarily a cost factor. Regular readers of my blog keep asking me how they should afford all these expensive and time-intensive security countermeasures like code reviews, code analysis, and pen testing.

It is interesting to see that the majority of security-related costs in many products still accumulate at the end of the development lifecycle. The closer a product gets to its scheduled release date, the more investment in security is made. I frequently still see a "test security in" approach in products that evolve quickly in the beginning without considering security, while only few product teams try to get the security-related parts of the product right in the first place to avoid the high costs of extensive late-stage security countermeasures and, in the worst case, major design changes at a late development stage.

Security vulnerability aggregators like the NVD and the underlying security vulnerability databases list around 60 thousand different publicly known vulnerabilities. There are indications from research performed by IBM that the number of actual security vulnerabilities could be higher than the publicly known ones by a factor of 20, which would put the number of open or latent security vulnerabilities in the major products tracked in the NVD at well over a million.

The cost of fixing a defect grows exponentially across the stages of the development lifecycle, from requirements through design, coding, testing, and maintenance. This observation led to the undisputed statement that attempts to test quality into a product are generally futile, at least when cost is a factor to be considered. As security is one aspect of software quality, the statement also holds for the infeasibility of testing security into a product. Yet the pen testing tool and consulting industry is booming, and while many software vendors use security testing tools and pen testing, few use security requirements and design analysis early enough in the product lifecycle to prevent security issues in the first place and avoid the horrid costs of fixing security defects after the product has been released.

To change a product team’s Secure Software Development Lifecycle (SSDL) approach and ensure acceptance, it is important to meet a product team where they are, and not try to force new development models on them when they are not yet ready for a major change. When working with product teams, I generally try to establish a more proactive model than what the team is using, presenting various options and recommendations, and then work with the team leads to find out what works best for them.

Approach 1: Fix security defects as they are discovered (the "backwards facing" approach, post-release).
Advantage: The vulnerabilities get fixed eventually… at least some of them.
Disadvantage: It is crucial to fix security defects, but waiting until the customer finds them is a really bad idea. Relying on security fixes as the primary means of dealing with security problems is expensive and only covers a fraction of the potential security problems.
Rating: Extremely reactive (this is bad!), very costly, super inefficient, and generally nothing anyone should do.

Approach 2: Security penetration testing of running code using tools and/or pen testers (the "dynamic application security testing" approach, pre-release, end of development).
Advantage: Some of the security issues that the customer would otherwise find are now found before the product is released. Helps reduce public shaming.
Disadvantage: Often too close to the planned release date to fix all issues that are found, so the product ships with open security defects. This is late in the development lifecycle, which means that fixes are usually very expensive and might even require design changes (i.e. might never get made).
Rating: Very reactive (still pretty bad!), costly, and generally leads to a "best effort" approach to security.

Approach 3: Analyzing source code for potential security issues (the "static application security testing" approach, pre-release, during development).
Advantage: Somewhat improved cost/benefit ratio due to the comparatively early stage of execution. Can be very effective if automated tools are used that force individual developers to analyze their source code before committing to source control.
Disadvantage: Expensive and somewhat inefficient if a "manual code review by security experts" approach is used. Does not discover significant design problems, and is limited to the source code view (e.g. cannot consider deployment-specific issues and implications).
Rating: Somewhat reactive, because it does not prevent vulnerabilities, but merely removes them.

Approach 4: Using checklists before and during coding (the "how to write better code cookbooks" approach, pre-release, during design and development).
Advantage: Good approach to improve security in specific, targeted problem areas. Can in particular help to avoid common pitfalls.
Disadvantage: Checklists are generic and can never address all of a product's specific needs. They can lead to a false sense of security, because a fully covered checklist does not allow general conclusions on the security quality of a specific product or on the set of still-open security gaps.
Rating: Somewhat proactive; helps with both educating developers and avoiding common security problems. Still somewhat reactive, because checklists are created to address problems that have been discovered in the past.

Approach 5: Systematic analysis of product security requirements and design-specific threats.
Advantage: Helps to design security into a product early in the lifecycle and reduces the need for rework, as small architectural changes can greatly reduce the risk of vulnerabilities. Avoids unpleasant surprises at product release by ensuring that all relevant security standards are covered.
Disadvantage: Although modeling tools are available, in-depth analysis still requires a certain level of security expertise to create a correct model and interpret the results correctly. Such security expertise is not cheap, and not always readily available.
Rating: Very proactive (which is good), very effective, and probably the most cost-efficient way to spend your security dollars.

Typically the best solution is to use all of these approaches as needed in a closed-loop SSDLC program, investing most of the available resources in the proactive countermeasures, then in the automatable countermeasures, and the least in the manual and most reactive ones.
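As an example of moving effort toward the automatable end of this spectrum, a team can enforce a static analysis scan before every commit. The following sketch is a hypothetical git pre-commit hook written in Python; bandit (a freely available scanner for Python code) is used here merely as a stand-in for whichever analyzer the team has standardized on:

```python
#!/usr/bin/env python3
# Hypothetical .git/hooks/pre-commit script: block the commit if the
# static analyzer reports findings in the files being committed.
import subprocess
import sys

def staged_python_files():
    """List the staged *.py files for this commit."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    return [f for f in out if f.endswith(".py")]

def main():
    files = staged_python_files()
    if not files:
        return 0
    # "bandit" is a stand-in for the team's analyzer of choice.
    result = subprocess.run(["bandit", "-q", *files])
    if result.returncode != 0:
        print("Static analysis reported findings; commit aborted.", file=sys.stderr)
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Installed as .git/hooks/pre-commit (and made executable), the hook rejects commits whose staged files produce analyzer findings, which is one way to implement the "force individual developers to analyze their source code before committing" idea from the list above.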

Steps in a Secure Software Development Lifecycle Model (2)

[Figure: The Core of a Secure Software Development Lifecycle Model]

In last week’s post, we discussed the first four elements in a Secure Software Development Lifecycle (SSDLC) model. As indicated before, the content of the “activity boxes” generally depends on the development model and the type of project, and the definitions below can only serve as a starting point for a customized model:

5. Coding

Writing quality code involves several aspects, of which two educational aspects are particularly relevant for secure software development: using proven patterns, and following the best practices that apply to the task at hand. Using proven patterns helps to create code that is easier to understand, easier to review, and easier to maintain – all of which ultimately contributes to a more secure codebase. Following best practices prevents the developer from having to reinvent the wheel, and then probably miss some important spokes when doing so. A third aspect of secure coding is static code analysis, which helps developers better understand their code before it is deployed. Depending on the static analysis tool used, the code to be analyzed need not even compile yet! Static code analysis can be a very helpful tool for the security architect running an SSDLC program, and there are several free and commercial tools available for this task.
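A classic example of such a proven pattern is using parameterized queries instead of string concatenation when passing user input to a database. The snippet below uses Python's built-in sqlite3 module purely for illustration; the table, column, and input values are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"   # attacker-controlled value

# Anti-pattern: the input becomes part of the SQL statement itself,
# so the crafted value rewrites the WHERE clause and leaks the admin row.
query = "SELECT role FROM users WHERE name = '%s'" % user_input
print(conn.execute(query).fetchall())          # [('admin',)]

# Proven pattern: the input is passed as a bound parameter and can
# never change the structure of the statement.
rows = conn.execute("SELECT role FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)                                    # [] - the injection attempt matches no user
```

The concatenated query lets the crafted input rewrite the WHERE clause and return the admin row, while the parameterized version treats the same input as plain data.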

6. Integration and Testing

Once the code is at a stage where it compiles cleanly and produces executable artifacts, it is possible to begin runtime integration of the various components. This should start as early as possible, following the "release early, release often" paradigm, even if the "release" is just for the internal quality assurance (QA) teams. Besides the non-security related QA activities, the verification process should include security-specific activities such as fuzz testing, manual black box security penetration testing, and automated black box security testing. There are several tools available for automated security testing, with a focus on automated web application security testing; a major motivating factor for this is the high cost of manual security testing. There are, however, QA strategies that allow reducing costs while still providing in-depth manual pen testing services. A second important security activity during the integration phase is for the security architect to re-review the results from the requirements and design analysis with the team, and make sure that the requirements have been correctly implemented and that the attack surface matches the results from the threat analysis step.
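To make the fuzz testing activity a bit more concrete, here is a minimal sketch of a dumb (coverage-unaware) fuzzer driving a hypothetical parser; real projects would normally use a dedicated fuzzing framework, but the principle is the same:

```python
import random
import string

def parse_record(data: str) -> dict:
    """Hypothetical stand-in for the component under test, e.g. a parser
    for a 'key=value;key=value' wire format."""
    result = {}
    for field in data.split(";"):
        key, value = field.split("=", 1)   # raises ValueError on malformed input
        result[key] = value
    return result

def fuzz(iterations: int = 10_000) -> None:
    """Feed random strings to the parser and report unexpected crashes.
    ValueError is treated as an expected rejection of bad input."""
    for i in range(iterations):
        length = random.randint(0, 40)
        data = "".join(random.choice(string.printable) for _ in range(length))
        try:
            parse_record(data)
        except ValueError:
            pass                      # expected: malformed input rejected
        except Exception as exc:      # anything else is a finding
            print(f"iteration {i}: input {data!r} raised {exc!r}")

if __name__ == "__main__":
    fuzz()
```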

7. Release and Deployment

Once the final security review has been completed and the product is being deployed, the product team should already have a solid plan about how to respond to security incidents, create fixes for security issues discovered in production, quickly deliver them to the affected customers, and communicate the solution. Depending on the development model of the specific product, the team may also need to create an asset archiving strategy to make sure the required artifacts are available for maintenance later on. The team should also lay the foundations for a deployment security strategy, which includes documentation and best practices that are crucial to securely configuring the product.

8. Operations and Maintenance

The team operating the solution is responsible for deployment security, building on the foundations laid earlier during the final development stages. While the operations team, which can be an internal team or a customer's team, is responsible for securely operating the solution, the development team still has maintenance responsibilities. Some development models delegate maintenance tasks to a dedicated team (e.g. a "Current Product Engineering" / CPE team), while other models (in particular agile models) have only one team that is responsible for the entire lifecycle of a product, including operations and maintenance. In either case, the team responsible for maintenance must execute the incident response plan created earlier, and make sure that any fix that addresses security vulnerabilities or privacy violations is ported to all supported versions of the product.

 

There are many resources available to help an organization build its own SSDLC process and also support it in creating metrics on how successful the program actually is. Examples include BSIMM and OpenSAMM, which provide excellent metrics on process definition and execution. Other approaches include processes implemented by major software vendors (e.g. Microsoft's SDL) and consulting companies, as well as processes that may be mandatory when working with specific customers (such as the currently under-discussion NIST process, which will be helpful when working with US government agencies). While all of these processes differ slightly in how and when specific steps are performed, good processes should be customizable to individual organizations – and they should actually be customized! Trying to impose a heavyweight process designed for a very policy-driven organization on a small startup will most certainly fail, while a process that works for a small startup will usually not work without modification for a big corporation. Despite any customization, however, the SSDLC process chosen should remain easy to map to the established models, both to address new market segments and to give customers the opportunity to compare the program with other models.

Steps in a Secure Software Development Lifecycle Model (1)

As discussed earlier [1, 2], the Secure Software Development Lifecycle (SSDLC) process that I commonly use has an inner core that is built around policies, standards, and best practices, and an outer shell of ongoing activities around security training and education.

The middle circle groups the activities that need to be performed for every release of the product. It does not matter whether the product team is using, for instance, a waterfall model or an agile model; the basic activities are always the same. It is obvious though that in an agile model, where the release cycles are much shorter, some of the activities take considerably less time. This allows the agile team to keep their short release cycle, and the respective SSDLC activities to benefit from early “customer” feedback, which is an integral part of the agile philosophy. Depending on the project, the “customer” can vary, sometimes even between cycles: this role can be filled by company internal customers, operations teams, integration teams, external customers, and many more.

[Figure: The Core of a Secure Software Development Lifecycle Model]

There are eight activities in the SSDLC, and each of the activities can be its own more or less complex process. The content of the "activity boxes" generally depends on the development model and the type of project, but I found the following definitions to be pretty universal and a good starting point:

1. System Concept Development

This activity answers important questions on a comparatively high level for executives, but it is also a good elevator pitch. Questions that need to be answered here are things like: What should the system do? Does it integrate with existing solutions? What is the value add (both intrinsic and extrinsic)? Did anyone else already build this? Why should we build this? Is the solution worth funding? In particular the last question is very interesting for everyone involved. If there are specific security implications (e.g. from a system managing PII), this should have already come up in the discussion by this point.

2. Planning

This is basic project management 101. At this time, a core team has usually been appointed for the project, and roles have been assigned within the core team (remember, this holds both for waterfall and agile models, where roles may change once a Potentially Shippable Increment [PSI] has been completed). Questions answered at this stage include things like: What needs to be built? Do we have all the resources we need to complete this iteration? What are the timeframes? Are there dependencies on other groups, or are other groups depending on this project release? Toward the end of this stage, the epics implemented in this phase will be known with high certainty, which allows the security architect to start thinking about their security implications.

3. Requirements Analysis

The requirements analysis is somewhat intertwined with the planning activities. In particular in agile development models, it is not uncommon that teams jump back and forth between planning and requirements analysis, although this happens less frequently the further the agile project progresses. A specific part of the requirements analysis is the security requirements analysis. As in a regular requirements analysis, a lot of the work is driven by the product vision and system concept, as well as the relevant standards, policies, and industry best practices. Based on a security and privacy risk assessment, the team should establish a solid set of security and privacy requirements, as well as quality requirements that will later on help establish acceptance criteria for implemented features.
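One way to turn such security requirements into acceptance criteria is to phrase them directly as automated tests. The example below is a hypothetical check for a requirement like "session cookies must be marked Secure and HttpOnly"; the get_login_response helper is an assumption standing in for the team's real test client and returns a canned header here so the sketch is self-contained:

```python
# Hypothetical acceptance test derived from a security requirement:
# "Session cookies must carry the Secure and HttpOnly attributes."

def get_login_response():
    # Stand-in for a real HTTP test client; returns a canned Set-Cookie
    # header so the example runs on its own.
    return "session=abc123; Path=/; Secure; HttpOnly; SameSite=Strict"

def test_session_cookie_flags():
    set_cookie = get_login_response()
    attributes = {part.strip().lower() for part in set_cookie.split(";")}
    assert "secure" in attributes, "session cookie must only be sent over TLS"
    assert "httponly" in attributes, "session cookie must not be readable from JavaScript"

if __name__ == "__main__":
    test_session_cookie_flags()
    print("acceptance criterion satisfied")
```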

4. Design Analysis

Once the requirements analysis is complete, the team should have a pretty solid understanding of the "what" they want to build. The design analysis answers the questions around "how" things should be built. The first step in the design analysis requires the architects to create design specifications that include the major system components, with information about the type of data these components are processing, the users that are accessing them, and the trust zones in which they are operated. Part of the general design analysis is the threat analysis, which will produce a set of design requirements based on an attack surface analysis. The threat modeling process is probably the most complex part of a Secure Software Development Lifecycle process, and while there are tools and methodologies available that help structure this process and make it repeatable, it usually requires a skilled security architect.
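While the analysis itself needs that security expertise, the inventory it starts from can be captured in a very simple structure: components, the trust zones they run in, and the data flows between them. The sketch below (all component names and zones are invented) simply enumerates the flows that cross a trust boundary, which is where the attack surface analysis usually focuses:

```python
# Minimal sketch of a threat-model inventory; all names are hypothetical.

COMPONENTS = {
    "browser":  "internet",
    "web-app":  "dmz",
    "api":      "internal",
    "database": "internal",
}

DATA_FLOWS = [
    # (source, destination, data classification)
    ("browser", "web-app",  "credentials"),
    ("web-app", "api",      "session token"),
    ("api",     "database", "customer PII"),
]

def boundary_crossings():
    """Yield every data flow whose endpoints live in different trust zones."""
    for src, dst, data in DATA_FLOWS:
        if COMPONENTS[src] != COMPONENTS[dst]:
            yield src, dst, data

if __name__ == "__main__":
    for src, dst, data in boundary_crossings():
        print(f"{src} ({COMPONENTS[src]}) -> {dst} ({COMPONENTS[dst]}): {data}")
```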

The Core of a Secure Software Development Lifecycle Model (2)

The Secure Software Development Lifecycle (SSDLC) process discussed earlier is built around a custom security policy, security standards, and security best practices, and completed through extensive security training and education. While the security policy is an important factor, security standards, best practices, and education are crucial to make an SSDLC program successful.

The security standards and security best practices include security-relevant government standards and regulations (e.g. NIST, HIPAA, PII regulations, …), but also established industry best practices (e.g. OWASP best practices for web application development, PCI-DSS compliance requirements for credit card payments, etc.). Some standards and best practices are fairly universal, while others may only be relevant for specific projects. As an example, a web application that is processing credit card information will have to follow PCI-DSS regulations, be compliant with the relevant privacy standards, and implement a good deal of the OWASP recommended best practices. An application for a smartphone without its own billing system and without any credit card payment processing, on the other hand, can skip the PCI-specific requirements.

An ongoing activity in the SSDLC is continuous security training and education, which is fundamental for implementing a successful SSDLC program. Training and education must include all project members: developers, QA, architects, legal, project management, etc. Everyone needs tailored training to understand both how the SSDLC works and foundational concepts like secure design, threat modeling, secure coding, and security testing. Depending on the project, the training can also cover the relevant standards and best practices.

Security training can come in many forms, such as instructor-led training, recorded video training, books, and training on the job. Once everyone in the team has reached a minimum baseline, I have found on-the-job training to be the most effective and efficient, in particular for the technical staff. When someone in the team (good case) or in the public (customers, white hats, black hats – bad case) has found a security vulnerability, I recommend getting at least the entire technical team (architect, dev, QA, operations) together for a post-mortem analysis. The person who found the issue then explains the problem, and asks the team to create a fix and a regression test to prevent the problem from happening again in the future. Also, the team must come up with a mitigation that operations can use to protect deployments when the latest update with the security fix cannot be installed yet.
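The regression test that comes out of such a post-mortem can be very small. The sketch below assumes the finding was a path traversal issue in a hypothetical helper that resolves user-supplied file names; the test pins the fixed behavior so the bug cannot silently return:

```python
import os
import pytest

BASE_DIR = "/srv/app/uploads"   # hypothetical upload directory

def resolve_upload(filename: str) -> str:
    """Fixed version of the hypothetical helper: reject any name that
    would escape the upload directory."""
    candidate = os.path.normpath(os.path.join(BASE_DIR, filename))
    if not candidate.startswith(BASE_DIR + os.sep):
        raise ValueError("path traversal attempt rejected")
    return candidate

def test_path_traversal_is_rejected():
    # Regression test for the original finding: "../../etc/passwd" used to
    # resolve to a file outside the upload directory.
    with pytest.raises(ValueError):
        resolve_upload("../../etc/passwd")

def test_normal_uploads_still_work():
    assert resolve_upload("report.pdf") == os.path.join(BASE_DIR, "report.pdf")
```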

The Core of a Secure Software Development Lifecycle Model (1)

There are various secure development lifecycle models. Some of them (like the Microsoft SDL – http://www.microsoft.com/security/sdl/default.aspx) represent the development process as a timeline, others represent it as a circle. I personally like the circle representation better, because it symbolizes the life "cycle" very well and makes it clear that the work does not end after the code has been released, but usually goes into the next round for the next release.

[Figure: The Core of a Secure Software Development Lifecycle Model]

The circle below shows a representation of the Secure Software Development Lifecycle (SSDLC) process similar to the one I created for various businesses in HP, and that I use when I am training teams on secure software development. It is rather generic, which allows it to tie in with existing business processes and development models.

The SSDLC process I usually use is (like many others) built around a custom security policy, security standards, and security best practices.

The security policy is commonly defined by the organization developing the product. The policy may include all kinds of assertions that the organization makes around software development. I have seen policies that are rather philosophical ("we will do our best to create secure software and fix defects in a timely manner"), and policies that are very stringent and precise ("a medium security defect will be fixed in 5 days or less").

From my experience, having a security policy is more important than the content of the security policy (at least as long as the content is somewhat useful), because it shows that an organization is committed to security. However, it is crucial that the security policy is known to everyone in the organization, and that everyone follows the policy on pain of losing their job or at least being removed from the project. The security policy must be absolutely binding for everyone, and "everyone" includes people managers, developers, architects, legal, marketing, and everyone else who is working on the project. Such a stringent policy helps to create awareness and gives the security person in charge of the project (for instance, the security architect) the instrument necessary to delay or even stop delivery of the project in case of major security flaws. This may sound very drastic, and it is by far better to rely on good arguments to convince the team members to address a major security issue instead of wielding the security policy club. However, experience shows that without such a powerful security policy, security goals may quickly be sacrificed to meet a release deadline. On a side note, HP has a very strong security policy, and everyone values it so highly that in all the years I have worked as a security architect for HP, I never had to cite it even once.

HP Fortify manual rule pack update

With the Fortify products, HP has acquired a great suite of tools for static code security analysis ("Fortify SCA"). HP's security product line-up also includes other tools, for instance for runtime analysis ("Fortify Runtime", which analyzes code while it is in production) and HP WebInspect for automated black box security testing.

The Fortify SCA products include tools like the “Audit Workbench” that are available to developers, but also server products that are more suitable for a continuous integration environment.

I discussed the Audit Workbench with a couple of developers today, and, during the walk through, came across the auto-update feature. Fortify regularly provides updates to the rule packs, and so makes new scan capabilities available to the users. The update is automated (the default is to check for updates every 15 days, see “Options” -> “Options” menu), but sometimes one wants to trigger the update manually.

It took us a couple of minutes to find it in the documentation, but a look in the bin directory of the installation quickly helped: one can either use rulepackupdate or fortifyupdate to trigger the manual update. While rulepackupdate still works in the current release, it is deprecated and replaced by the new fortifyupdate.

If you are connecting to the Internet through a proxy server: the settings for configuring the proxy hostname and port are in the “Options” -> “Options” menu, under “Server Configuration”.

Product Development Model vs Secure Software Development Lifecycle Model

The first step in any project is creating a list of requirements. In enterprise software development, this step may actually be one of the most time-consuming parts. It frequently requires coordinating with several departments and stakeholders, each of them providing information about what the product should do, how it should behave, and how it should tie in with existing solutions. This may include diverse groups such as legal, marketing, engineering, and others. The result of the process is a prioritized wish-list, and most likely some sketches that show how the new product may connect to existing products.

From here on, the next steps somewhat depend on the individual organization and its development model. Although there are a variety of models, a large percentage of projects are developed using a variant of the waterfall model or some flavor of agile development. Interestingly enough, these two models represent the two extremes with respect to release cycle length. There is a lot of fuss around how different agile is from waterfall, and sometimes people can get quite agitated about which model works better. I will skip this discussion here, and just note that each of these models has similar phases that are relevant for a secure software development process. The main difference is how the phases are executed, and how the results are used in the development process.

As an example, each project shares the initial requirements gathering phase. This phase is so universal because we always have to find out what we want to build, whether the market needs this, and how the development will be funded. In this stage, it does not matter whether the product is commercial or open source or a mix, because even a developer working in their free time will ask the question of whether implementing e.g. yet another web content management system would be worth their time.

Once an investment decision has been made, the next steps differ to a certain extent. For example, a team using a waterfall-style development model will now start a very detailed analysis of the high-level requirements gathered earlier, and turn them into much more detailed requirements and specifications. A team following an agile approach will start with a more preliminary design, release code early and often, and continuously refine the product until it fulfills all of the project sponsor's requirements. Such a release is often called a "Potentially Shippable Increment", or PSI. And although agile seems to be so different from the waterfall model, the agile team will perform similar steps in each PSI as a waterfall team does. The difference is in the planning horizon: while the waterfall team plans for e.g. two years, an agile team may only plan for a couple of weeks. Still, the agile process requires a longer-term vision to keep the project on track.

The Secure Software Development Lifecycle Process, or SSDLC process for short, should tie in seamlessly with the existing development model that a specific team chooses. There are several models of SSDLC processes, but none of the better processes requires the teams to actually change how they develop their product.

Secure Software Development as a Deterministic Process

Software security still has an aura of secrecy and mystery, and some people still think that security experts are magicians who can secure a product with a tip of their wand. Some self-appointed "security gurus" actually play a big part in creating this mystery and keeping it alive, because they make it seem as if there were no structured approach to software security and as if a security guru were a crucial part of building secure software.

This is, however, not true. Building secure software can be a very well structured process, which makes it reliable and, as importantly, repeatable. Process repeatability and consistent metrics are as important for software security as for any other aspect of structured software development.

Software security is not security software. Software security is about building things properly. Applying magic fairy dust in the form of cryptography, security tools, or pen testing consultants does not automatically make software secure! An important person to have on your team, however, is a good Security Architect. Security Architects can cover a broad field, which includes supporting the team with application design, establishing and supervising execution of a secure development process, project and program management, engagements with key customers (proactive as well as in case of a security breach), securing operations of cloud solutions, and fire-fighting in case something goes wrong. Good Security Architects are hard to find, because they must not only be experts in security and master architects, but also have advanced people skills to handle the daily tensions arising from competing requirements and at the same time be presentable to key customers.

A product often does not sell primarily because it is secure, but it will often fail miserably if it has security issues. Consequently, the Security Architect needs to work closely with product management to balance new features that customers are pushing for against security requirements and defects that must be fixed, but which customers usually do not pay for directly (and in the best case do not even know about). While Security Architects are often empowered to stop a product release if they have major security concerns, they should use this power very wisely and rather secure a product by creating superior designs that address both security concerns and provide new distinguishing features.

This kind of work often starts by gathering security requirements from customers, creating a design that covers both the functional and non-functional requirements (often in collaboration with other domain architects), and then coordinating the work of several R&D teams to implement the specs and turn them into a secure product.

I will create a series of posts to review one possible embodiment of a secure development process, covering everything from the early requirements analysis stage through design analysis to implementation and secure operations.