Digility Ltd

Avoid Issues in Operations – Be More Secure by Design

Applying the principles of Secure by Design will reduce the security issues that get into operations, and save time and money.

Brief at the start


Would you feel comfortable flying in an aeroplane designed by engineers who only considered what might go wrong after they had built it?

‘Secure by Design’ is not a technology; it is a set of principles to be adopted to improve business risk management and resilience. It has strong similarities to conventional engineering practice, and it will save money by reducing wasteful rework.

There are far more detailed descriptions of these principles, such as the guidance from the UK Government (links in the footnotes). This article offers our opinion as practitioners on the approach and the benefits it can deliver.

The critical first step is to understand the risks the solution will be exposed to. Like Failure Mode Analysis in conventional engineering, these inherent risks form an essential part of the solution requirements. The design can then be a collaborative and iterative exercise of review and enhancement to meet the security requirements.

Effort spent defining requirements before design and implementation is widely recognised to save time and money. The situation is no different with security requirements. But there are wider benefits as well, compared to addressing security late in the lifecycle:

  • Security controls applied after design and implementation are more likely to restrict functionality, undermining overall user satisfaction and the return on investment;
  • Early engagement reduces the risk of budget overruns, or having to accept inadequate security if you can’t secure the budget;
  • A well-documented set of risks, security controls and design decisions can then follow the solution through implementation and into operations, enabling future change to understand past rationale;
  • Above all else, late identification of risk and security requirements causes wasteful rework of the solution, which will cost time and money.

The key to success is defining the system scope correctly. If the scope is too great and encompasses a number of separate systems, then the benefits are eroded and the exercise becomes more akin to a homogeneous enterprise risk assessment. If the scope is too small, the number of systems becomes unwieldy and unsustainable to assess and manage.


It is not a technology, and it is not new

Despite what you might believe from some of the cyber tech product sheets, Secure by Design (SbD) is not a technology (for that matter, Zero Trust, which we see as a valuable component of SbD practice, is not a technology either). It is a philosophy or strategy, a set of principles that bring efficiency, consistency and discipline to cyber risk management. You may find tools that help you adopt these principles, and the practice requires a sound understanding of technology. But we firmly believe that SbD is a human endeavour.

Like many other buzzwords in the security community, Secure by Design is frequently presented as something rather mystical, requiring specialist knowledge and attracting a new set of standards and vocabulary. We don’t hold with this concept. In our view, it “does exactly what it says on the tin”1. It is about ensuring the system’s very design enforces security and mitigates risk rather than relying on sticking plasters applied after implementation. Whether those design features are preventative controls, controls to detect and respond to issues, or any other category, they will have been defined and tuned to the specific risks and characteristics of the solution in advance (and managed through life).

In our view [Secure by Design] “does exactly what it says on the tin”

The concept is not new. The benefits of early security engagement have been known for some time. In 1972 the US Air Force published the Anderson Report2. In it they made the statement that “Unless security is designed into a system from its inception, there is little chance that it can be made secure by retrofit”.

Sadly, fifty years later, this is still a long way from being universally adopted. As the cyber security industry matures and the frequency and impact of cyber attacks on businesses increase, the call for this discipline has been growing. Governments are starting to mandate it in the standards and security governance of technology programmes3.

The similarities between digital and conventional engineering

Most engineering lifecycles, not just those related to digital solutions, recognise the importance of spending adequate time defining the requirements. At the start of a programme, the level of uncertainty will be at its greatest. The purpose of Requirements Engineering4 is to reduce that uncertainty so that design and implementation can proceed with direction and to minimise the number of ‘wrong turns’ that have to be unwound. If you don’t reduce uncertainty as early as possible, the problems grow as they move downstream, and solving them then becomes a disheartening exercise in “pushing water uphill”.

Let’s imagine we want someone to build us a house. We go to our local house building company and commission the job. If they get started immediately, the chances of the end result being anything like what we originally wanted would be almost zero. Where do we want our home located? How many bedrooms, bathrooms and living rooms? What architectural style? What about the fixtures and fittings? We will only identify everything that is wrong once the sub-optimal, ill-thought-out building is completed for our inspection. Putting those issues right at this stage will cost orders of magnitude more than it would have with an effective design phase. Worse, there will be many issues that we cannot put right without starting again, and we will therefore be left operating a flawed and compromised solution.

If you don’t reduce uncertainty as early as possible, the problems grow as they move downstream, and solving them then becomes a disheartening exercise in “pushing water uphill”

Where do we start?

So, how do we identify the security requirements for the design? What is Requirements Engineering in a security context? The security requirements are defined by the risks that the solution will be exposed to. One of the most important SbD principles emphasises this by stating that you must “adopt a risk-driven approach”. These risks and your organisation’s appetite for accepting risk determine the requirements for controls; or to put it another way, the controls are required to mitigate the risk to a level that is within your organisation’s appetite. Again, there are similarities with conventional engineering. Understanding the risks that the design must treat is similar to identifying the Failure Modes of an aircraft or other system5.
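The risk-driven principle can be sketched in a few lines of code. This is purely illustrative: the 1–5 likelihood/impact scales, the appetite threshold and the names are our own assumptions, standing in for whatever risk matrix your organisation actually uses.

```python
from dataclasses import dataclass

# Hypothetical 1-5 scales for likelihood and impact; a real scheme
# (e.g. your organisation's own risk matrix) would replace these.
@dataclass
class Risk:
    name: str
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    impact: int       # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

RISK_APPETITE = 6  # maximum residual score the business will accept

def controls_required(risks):
    """Risks whose score exceeds the appetite: each one generates a
    security requirement that the design must treat."""
    return [r for r in risks if r.score > RISK_APPETITE]

risks = [
    Risk("credential theft", likelihood=4, impact=4),
    Risk("physical tampering", likelihood=1, impact=3),
]
for r in controls_required(risks):
    print(f"Requirement: mitigate '{r.name}' (score {r.score} > {RISK_APPETITE})")
```

The point is not the arithmetic but the direction of travel: the requirements fall out of the risks and the appetite, before any control is selected.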

The risks need to be articulated so that all stakeholders can understand them, including the non-technical and non-security communities. Getting all stakeholders to sign off on these inherent risks is crucial to ensure everyone recognises the constraints the solution will be confined by. If you don’t have a sound understanding of the risks before work starts on the design, let alone the implementation, then you are lacking an essential part of the solution requirements.

If you don’t have a sound understanding of the risks before work starts on the design, let alone the implementation, then you are lacking an essential part of the solution requirements.

Review, Collaborate and Iterate

Once you have the security requirements, you can feed them into the design process just as you would functional requirements. Selecting appropriate controls to meet the requirements will undoubtedly require some specialist expertise. However, this is no different from the requirement for technical architects to be familiar with the technologies employed in the solution stack.

This design process should be iterative. Requirements change, frequently due to learning in one iteration providing feedback into the next. The security requirements may influence the architectural approach to fulfil the functional requirements. Occasionally, a complete rethink may be required to adjust the functional requirements to meet the security constraints while also meeting the business needs.

However, as with the house-building analogy above, the time spent optimising the design will be significantly less than the time, cost and disruption caused if security is addressed later in the lifecycle.

Each iteration takes the proposed design; reviews the inherent risks to identify any that can be retired and any new ones that have been created; assesses the residual risk given the existing security controls; and identifies additional security controls to reduce the residual risk to an acceptable level. Done collaboratively, this introduces fast feedback into the design process, and over time the technical architects will become more familiar with security issues and their resolutions.
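One pass of that review loop can be sketched as follows. The data model (dicts with a name and score, a numeric mitigation map) is an illustrative assumption of ours, not a prescribed format:

```python
def design_iteration(design, inherent_risks, appetite):
    """One pass of the review-and-iterate loop: review risks, assess
    residual risk, flag where additional controls are needed."""
    # 1. Review inherent risks: drop any the new design retires,
    #    add any the design has created.
    risks = [r for r in inherent_risks
             if r["name"] not in design["retired_risks"]]
    risks += design["new_risks"]

    # 2. Assess residual risk given the controls already in place.
    def residual(risk):
        return max(risk["score"] - design["mitigation"].get(risk["name"], 0), 0)

    # 3. Flag anything still above appetite for additional controls
    #    in the next iteration.
    actions = [r["name"] for r in risks if residual(r) > appetite]
    return risks, actions
```

Each call both updates the risk picture and produces the work list that feeds the next iteration, which is where the fast feedback comes from.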

Time spent optimising the design will be significantly less than the time, cost and disruption caused if security is addressed later in the lifecycle

What part do Zero Trust and scope definition play?

Zero Trust6 7 is another trending buzzword, frequently camouflaged with mystique or hijacked as “features” by product sheets. Our view on Zero Trust is similar to our view on SbD: it should be easy to understand, and it “does exactly what it says on the tin”. In design and in operations, we start from the baseline that nothing is trusted. Whether it is digital identities, devices, applications or services, we can only trust them once we have an objective and explicit reason to do so. We touch on this concept in some of the articles in our Cyber Security for Non-Security Professionals series. Trust in Digital Identities exists on a continuum, and we use Trust Personas to determine the extent of that trust in different circumstances. Similarly, there are different thresholds for trust earned by Devices and other systems and services.

We use the principle of Zero Trust extensively when applying Secure by Design. By having no implicit trust in any identity, device or service, we can decide the minimum level of trust we need to enforce and the maximum level of trust that the entity can offer. If the maximum trust on offer is less than the minimum trust we need, then there is a design decision to be made about how we close the gap. It may be necessary to reduce functionality in order to reduce the required minimum, or we may need to put in place other compensating controls to reduce the risk in other ways.
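The trust-gap check described above can be expressed as a tiny comparison. The ordinal trust levels and their names here are illustrative assumptions of ours, not a standard scale:

```python
# Illustrative ordinal trust scale, lowest to highest.
TRUST_LEVELS = ["none", "low", "medium", "high"]

def trust_gap(minimum_needed: str, maximum_offered: str) -> int:
    """Positive result means the entity cannot offer the trust we need,
    so a design decision is required to close the gap."""
    return TRUST_LEVELS.index(minimum_needed) - TRUST_LEVELS.index(maximum_offered)

# e.g. an unmanaged device ("low") accessing a sensitive service ("high")
gap = trust_gap(minimum_needed="high", maximum_offered="low")
if gap > 0:
    # Options from the text: reduce functionality (lower the minimum)
    # or add compensating controls (reduce the risk another way).
    print(f"Trust gap of {gap} level(s): design decision needed")
```

In practice the comparison is rarely this tidy, but making the minimum-needed and maximum-offered levels explicit is what turns "zero trust" from a slogan into a design check.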

Defining an appropriate scope for the system is key to success. If you set the scope too large, then everything is inside the ‘circle of trust’ and SbD becomes a homogeneous exercise in enterprise security. If you set the scope too small, then you will drown under the sheer quantity of projects to manage.

The world is not a greenfield site, and security is not a fire-and-forget weapon

The world is not a greenfield site, and there will be challenges retrofitting an SbD approach to the broad portfolio of legacy solutions. There is no simple or quick solution to this. It will be a case of progressively revisiting each project’s architecture and identifying the changes that will make it secure by design.

But risk can help us here, too. Some projects or services will be sufficiently low-risk that they can be tolerated until they are retired (so long as they are not trusted by any other more important system).

The SbD approach lends itself well to a progressive rollout. SbD will limit the negative impact that a legacy system can have on a target system, because nothing outside of a project’s scope is implicitly trusted. You can only aim for a perfect world by progressively taking steps to make it a better world.

In this article, we explain why risk management needs to be addressed at the design phase of projects. This does not mean that we believe this is the end of the journey. Security and risk still need to be managed in operations, as new threats change the risk profile or change is applied to a system. But with the foundations laid early in the lifecycle, the task of management through life becomes easier. The documentation generated by SbD should provide clear traceability between risks and controls. When a project is reviewed in life, the rationale behind previous decisions can be clearly understood, enabling change to be an informed process.

You can only aim for a perfect world by progressively taking steps to make it a better world

Summary

This article outlines why we believe applying the principles of Secure by Design avoids issues getting into operations, and saves time and money. We haven’t been able to go into great detail, but we hope it provides a guide to follow.

If what we have described seems obvious, then that is great. But in our experience, too many projects do not consider security an essential component of design. We believe that is a missed opportunity; applied correctly, SbD delivers solutions that are:

  • More secure;
  • Easier to manage;

And that it does this more efficiently, saving time and money.

  1. Credit to the Sherwin-Williams Company ↩︎
  2. US Air Force Anderson Report, 1972 – “Computer Security Technology Planning Study” ↩︎
  3. Secure by Design Principles, UK Government ↩︎
  4. Requirements Engineering, Wikipedia ↩︎
  5. Failure Mode and Effects Analysis, Wikipedia ↩︎
  6. NIST SP 800-207 Zero Trust Architecture ↩︎
  7. NCSC Zero Trust Architecture Design Principles ↩︎