Having been at Mozilla for some time now, I’m still fascinated by the varying perceptions people have of security reviews. To some developers it feels like the Spanish Inquisition (minus the comfy pillows), while to others it’s an opportunity to uncover potential issues they may have missed during the design or implementation process. The interesting thing is that our approach is pretty much the same for every review.
But it has become evident that we need to do a better job of defining and communicating both the structure and value of the process, in a way that helps each developer maximize the value of their reviews. Among those changes, we are going to try a more structured approach to how we run the meetings.
For a typical 60-minute design review, we will break up the time as follows:
Introduction by the feature team (5-10 minutes)
- Goal of the feature. What outcome is it trying to achieve: problems solved, use cases enabled, etc.
- What solutions/approaches were considered
- Why was this solution chosen
- Any security threats or issues that were considered in the design
The purpose here is to help the security team understand the fundamental motivations and purpose of the feature, thereby setting the correct context for the rest of the conversation. If we don’t know why a feature is proposed, it becomes hard to justify (any) risk. That said, this is not the time to critique the feature. Comments and questions should be saved for later unless they directly relate to understanding the feature better.
Threat brainstorm (30-40 minutes)
A truly open brainstorming session on potential threats the feature could face or introduce. Like any good brainstorming, it will involve some really interesting ideas as well as some really silly ones. The goal during this phase is not to pass judgment; it is to explore. It’s vital for the feature team to participate in a critical analysis (in the classical sense) of their own feature.
Rather than feeling like they need to defend their feature, the feature team should strive to “swap sides” and think like an attacker. The value is to help teams understand how they can go through this questioning process themselves, but also, realistically, there are very few features we know enough about to go through this process without the feature team being represented. The ability to objectively critique one’s baby is a difficult but very valuable skill.
Conclusions and action items (10-20 minutes)
Summarize the threats uncovered, then recommend and prioritize mitigations for each. Identify parts of the feature that require follow-up security work, which may include detailed threat modeling, source code review, targeted fuzzing, etc.
Formal security reviews are not our only tool, of course. In addition to our ongoing fuzzing and bug bounty programs–and the code review every patch goes through anyway–we also use other approaches, such as embedding security team members into the most complex projects. However, that approach clearly does not scale to nearly the number of projects we have. The best approach is often a series of small conversations. Lightweight interactions can be a great way to bounce ideas and get feedback, helping a team crystallize specific aspects of a feature and prepare for a detailed review in the future.
But the most important characteristic of any type of security interaction is that it happens as early as possible. The earlier we talk, the more options we have to address any significant shortcomings while still shipping what you want, when you want to. The later we talk, the smaller our toolbox becomes, eventually coming down to a single boolean lever: do we ship or not?