

Secure Defaults

You have probably heard the phrase “the tyranny of the default.” But do you really know what this means? Tyranny has several definitions, but the one that applies here is “a rigorous condition imposed by some outside agency or force” (attributed to American historian Dixon Wecter). Many assume that the settings present in a software or hardware product when it is first installed are the optimal settings. This assumption rests on the idea that the designers and developers of a product know the most about that product, so the settings they chose are likely the best ones. However, it overlooks the fact that default settings are often selected to minimize installation problems and avoid increasing the load on technical support services. For example, most devices ship with a default password, which reduces support costs when the product is installed or used for the first time. Unfortunately, default settings often make discovery and exploitation of equipment trivial for attackers.

Never assume that the default settings of any product are secure. They typically are not, because secure settings would likely get in the way of existing business tasks or system operations. It is always up to the system’s administrator and/or company security staff to alter a product’s settings to comply with the organization’s security policies. Unless your organization hired the developer, that developer did not craft the code or choose settings specifically for your organization’s use of their product.

A much better assumption is that the default settings of a product are the worst possible options for your organization. Therefore, you need to review each and every setting to determine what it does and what you need it to be configured to do in order to optimize security while supporting business operations.

Fortunately, there is some movement toward more secure defaults. As mentioned in Chapter 1, Microsoft’s Security Development Lifecycle (SDL) uses the motto SD3+C, which includes the phrase “Secure by Default.” Some products, especially security products, may now be designed with their most secure settings enabled by default. However, such a locked-down product will have fewer enabled capabilities and will likely be less user friendly. Thus, while being more secure, secure defaults may be an obstacle for those who only want their systems to function.

If you are a developer, then it is your responsibility to create detailed explanations of each of the configuration options of your product. You can’t assume that customers know everything about your product, especially what the configuration settings are and what each option does to alter its features, operations, communications, and so forth. You may be required to have default settings to make the product as easy to install as possible, but you may be able to provide one or more configurations in either written instructional form or in a file that can be imported or applied. This will go a long way to assist customers with gaining the most advantage from your product while minimizing the security risks.
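As a hedged illustration of that last point, a vendor might ship a hardened profile in a file that administrators can apply over the easy-install defaults. The following sketch is illustrative only; the file format, setting names, and values are invented for the example, not taken from any real product.

```python
import json
import secrets

# Invented hardened profile a vendor might ship alongside permissive
# install-time defaults; every name and value here is illustrative.
HARDENED_PROFILE = {
    "remote_admin_enabled": False,    # off until explicitly needed
    "tls_minimum_version": "1.2",
    "session_timeout_minutes": 15,
    "listening_services": ["https"],  # nothing else exposed by default
}

def apply_profile(config_path: str) -> None:
    """Overlay the hardened profile onto the installed defaults."""
    with open(config_path) as f:
        config = json.load(f)
    config.update(HARDENED_PROFILE)
    # Replace any shared factory password with a unique random one
    # that the administrator must change at first login.
    config["admin_password"] = secrets.token_urlsafe(16)
    with open(config_path, "w") as f:
        json.dump(config, f, indent=2)

# Example: write permissive install defaults, then harden them.
with open("product.json", "w") as f:
    json.dump({"remote_admin_enabled": True, "admin_password": "admin"}, f)
apply_profile("product.json")
```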

Fail Securely

System failures can occur due to a wide range of causes. Once a failure event occurs, how the system or environment handles it is important. The most desired result is for an application to fail securely. The first type of failure management is programmatic error handling (aka exception handling). This is the process whereby a programmer codes in mechanisms to anticipate and defend against errors in order to avoid the termination of execution. Error handling is the inclusion of code that will attempt to handle errors when they arise before they can cause harm or interrupt execution.

One such mechanism, supported by many languages, is a try..catch statement. This logical block statement places code that could result in an error in the try branch, followed by code that will be executed if an error occurs in the catch branch. It is similar to if..then..else statements, but it is designed to deftly handle errors. Other mechanisms aim to avoid or prevent errors, especially as related to user input. Input sanitization, input filtering, and input validation are some of the terms used to refer to this concept. This often includes checking the input for length, filtering against a block list of unwanted input, and escaping metacharacters. See more about secure coding practices in Chapter 9; Chapter 15, “Security Assessment and Testing”; and Chapter 20.
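A minimal sketch of these two mechanisms follows (in Python, where the try..catch construct is spelled try..except). The routine validates user input for length, filters it against a block list, and escapes metacharacters before use, and the risky operation is wrapped in an exception handler so a failure cannot terminate execution. The function names and the block list entries are illustrative assumptions, not from the book.

```python
import re

BLOCK_LIST = {"<script>", "drop table"}  # illustrative unwanted input
MAX_LEN = 64

def sanitize(user_input: str) -> str:
    """Check length, filter against a block list, escape metacharacters."""
    if len(user_input) > MAX_LEN:
        raise ValueError("input too long")
    for banned in BLOCK_LIST:
        if banned in user_input.lower():
            raise ValueError("blocked input detected")
    # Escape metacharacters rather than trusting them.
    return re.escape(user_input)

def lookup(raw: str) -> str:
    # try/except is Python's equivalent of a try..catch statement:
    # the code that could fail goes in the try branch, and the
    # error-handling code goes in the except (catch) branch.
    try:
        return f"searching for: {sanitize(raw)}"
    except ValueError as err:
        # Handle the error before it can interrupt execution.
        return f"rejected: {err}"

print(lookup("alice"))    # searching for: alice
print(lookup("x" * 100))  # rejected: input too long
```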

There are several similar terms that can be confusing and thus require a bit of focus to comprehend: fail-soft, fail-secure, fail-safe, fail-open, and fail-closed. Typically, the confusion arises from not understanding the context in which these terms are used.

The two primary contexts are the physical world and the digital environment. In the physical world, entities primarily prioritize the protection of people, although there are some circumstances where assets are protected in priority over people. In the digital world, entities focus on protecting assets, but which element of the CIA Triad is prioritized may vary.

When a program fails securely, it was able to do so only because it was designed and programmed to. When secure failure is integrated into a system, the designer must make a few difficult choices about what the results of a failure event will be. The first question to be resolved is whether the system can operate in a fail-soft mode. To fail-soft is to allow a system to continue to operate after a component fails. This is an alternative to having a failure cause a complete system failure. An example is a typical multitasking operating system that can support numerous simultaneous applications. If one application fails, the others can typically continue to operate.
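Fail-soft behavior can be sketched in code: a simple task runner (hypothetical, not from the book) isolates each task's failure so the remaining tasks keep running instead of the whole program terminating.

```python
def run_tasks(tasks):
    """Fail-soft: one failing component does not take down the rest."""
    results = {}
    for name, task in tasks.items():
        try:
            results[name] = task()
        except Exception as err:
            # Record the component failure and keep operating, rather
            # than letting it become a complete system failure.
            results[name] = f"failed: {err}"
    return results

tasks = {
    "billing": lambda: "ok",
    "reports": lambda: 1 / 0,  # this component fails
    "email":   lambda: "ok",
}
print(run_tasks(tasks))
# {'billing': 'ok', 'reports': 'failed: division by zero', 'email': 'ok'}
```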

If fail-soft isn’t a viable option, then the designer needs to consider the type of product, its deployment scenarios, and the priorities related to failure response. In other words, when a product without a fail-soft design fails, it fails completely. The designer/developer needs to decide what type of complete failure to perform and what to protect or sacrifice to achieve the planned failure result. There are numerous scenarios to consider. The initial distinction is whether the product is something that affects the physical world, such as a door-locking mechanism, or is primarily a digital asset-focused product, such as a firewall.

If a product can affect the physical world, then the life and safety of humans needs to be considered and likely prioritized. This human protection prioritization is called fail-safe. The idea is that when a failure occurs, the system, device, or product will revert to a state that protects the health and safety of people. For example, a fail-safe door will open easily in an emergency in order to allow people to escape a building. But this implies that the protection of assets may be sacrificed in favor of personnel safety.

However, in some physical world situations, a product could be designed and intended to protect assets in priority over people, such as a bank vault, medical lab, or even a data center. A fail-secure system prioritizes the physical security of assets over any other consideration. For example, a vault door may automatically close and lock when the building enters a state of emergency. This prioritization of asset protection may occur at the potential cost of harming personnel who could be trapped inside. Obviously, the prioritization of physical world products needs to be considered carefully. In the context of the physical world, fail-open is a synonym for fail-safe and fail-closed is a synonym for fail-secure.

If the product is primarily digital, then the focus of security is entirely on digital assets. That means the designer must decide which security aspect to prioritize: availability, or confidentiality and integrity. If the priority is maintaining availability, then when the product fails, the connection or communication is allowed to continue. This is known as fail-open. If the priority is maintaining confidentiality and integrity, then when the product fails, the connection or communication is cut off. This is known as fail-secure, fail-closed, and/or fail-safe (again, in the context of a digital environment). (Note: The IETF recommends avoiding the term fail-safe when discussing digital-only issues, as it introduces the concept of human safety, which is not a concern in a digital context and thus causes unnecessary confusion.)

Take note that when the context switches from the physical world to the digital world, the definition of fail-safe changes. An example could be a firewall: if designed to fail-open, it would allow communications without filtering, whereas if implemented as a fail-secure, fail-closed, or fail-safe solution, it would cut off communications. The fail-open state protects availability through the sacrifice of confidentiality and integrity, whereas the fail-closed state sacrifices availability in order to protect confidentiality and integrity. Another example of a digital environment event following a fail-secure, fail-closed, and/or fail-safe procedure is when an operating system encounters a processing or memory isolation violation: it terminates all executions, then initiates a reboot. This mechanism is known as a stop error, or the Blue Screen of Death (BSoD) in Windows.
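To make the digital distinction concrete, here is a minimal, hypothetical packet-filter wrapper (illustrative only, not any real firewall's behavior): on an internal error, the fail-closed choice drops the traffic to protect confidentiality and integrity, while the fail-open choice keeps forwarding it to protect availability.

```python
def filter_packet(packet: bytes) -> bool:
    """Hypothetical rule engine; may raise on malformed input."""
    if packet.startswith(b"EVIL"):
        return False
    return True

def handle_packet(packet: bytes, fail_open: bool = False) -> bool:
    """Return True to forward the packet, False to drop it."""
    try:
        return filter_packet(packet)
    except Exception:
        # Fail-open: forward unfiltered traffic (availability wins).
        # Fail-closed: drop the traffic (confidentiality and integrity
        # win, at the cost of availability).
        return fail_open

print(handle_packet(b"hello"))               # True: passed the filter
print(handle_packet(b"EVILpayload"))         # False: dropped by rule
print(handle_packet(None))                   # False: engine errored, fail-closed drops
print(handle_packet(None, fail_open=True))   # True: engine errored, fail-open forwards
```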

A condensed summary of the context and protection priority of these terms is presented in Table 8.2.

TABLE 8.2   Fail terms definitions related to physical and digital products

Physical          State          Digital
Protect People    Fail-Open      Protect Availability
Protect People    Fail-Safe      Protect Confidentiality & Integrity
Protect Assets    Fail-Closed    Protect Confidentiality & Integrity
Protect Assets    Fail-Secure    Protect Confidentiality & Integrity

Keep It Simple

Keep it simple is a shortened form of the classic statement “keep it simple, stupid,” sometimes phrased as “keep it stupid simple.” This is known as the KISS principle. In the realm of security, this concept encourages designers to avoid overcomplicating the environment, organization, or product design. The more complex a system, the more difficult it is to secure. The more lines of code, the more challenging it is to test thoroughly. The more parts there are, the more places there are for things to go wrong. The more features and capabilities, the larger the attack surface.

There are many other concepts that have a similar or related emphasis, such as the following:

“Don’t Repeat Yourself” (DRY)   The idea of eliminating redundancy in software by not repeating the same code in multiple places, which would increase the difficulty of making changes when they are needed (a brief sketch follows this list).

Computing Minimalism   Crafting code so that it uses the least necessary hardware and software resources possible; this is also the goal of the program evaluation and review technique (PERT), which is discussed in Chapter 20.

Rule of Least Power   Use the least powerful programming language that is suitable for the needed solution.

“Worse Is Better” (aka New Jersey Style)   The quality of software does not necessarily increase with an increase in capabilities and functions; there is often a worse software state (i.e., fewer functions), which is the better (i.e., preferred, maybe more secure) option.

“You Aren’t Gonna Need It” (YAGNI)   Programmers should not add capabilities and functions until they are actually necessary, so rather than create it when you think of it, instead create it only when you actually need it.
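As a quick illustration of DRY (a generic sketch, not from the book), the repeated validation rule below is factored into a single helper, so a future change to the rule happens in exactly one place.

```python
# Repetition: the same rule is written out twice, so any change to the
# rule must be found, updated, and retested in every copy.
def valid_username(name: str) -> bool:
    return 3 <= len(name) <= 32 and name.isalnum()

def valid_hostname(host: str) -> bool:
    return 3 <= len(host) <= 32 and host.isalnum()

# DRY: the rule lives in exactly one place and is reused.
def valid_identifier(value: str) -> bool:
    return 3 <= len(value) <= 32 and value.isalnum()

def valid_username_dry(name: str) -> bool:
    return valid_identifier(name)

def valid_hostname_dry(host: str) -> bool:
    return valid_identifier(host)
```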

It is easy to get caught up in adding complexity to a system, whether that system is a software program or an organizational IT security structure. The KISS principle encourages us all to avoid the overly complex in favor of the streamlined, optimized, and reduced solution. Simpler solutions are easier to secure, easier to troubleshoot, and easier to verify.

Zero Trust

Zero trust is a security concept where nothing inside the organization is automatically trusted. There has long been an assumption that everything on the inside is trusted and everything on the outside is untrusted. This has led to a significant security focus on endpoint devices, the locations where users interact with company resources. An endpoint device could be a user’s workstation, a tablet, a smartphone, an Internet of Things (IoT) device, an industrial control system (ICS), an edge computing sensor, or any public-facing server in a screened subnet or extranet. The idea that a security perimeter exists between the safe inside and the harmful outside is problematic. There have been too many security breaches caused by insiders, as well as by external attackers who gained the freedom to move laterally through the internal network once they breached the perimeter. The concept of a security perimeter is further complicated by the use of mobile devices, the cloud, and the proliferation of endpoint devices. For most organizations, there is no longer a clearly defined line between inside and outside.

Zero trust is an alternate approach to security where nothing is automatically trusted. Instead, each request for activity or access is assumed to be from an unknown and untrusted location until otherwise verified. The concept is “never trust, always verify.” Since anyone and anything could be malicious, every transaction should be verified before it is allowed to occur. The zero trust model is based around “assume breach,” meaning that you should always assume a security breach has occurred and that whoever or whatever is making a request could be malicious. The goal is to have every access request be authenticated, authorized, and encrypted prior to the access being granted to a resource or asset. The implementation of a zero trust architecture does involve a significant shift from historical security management concepts. This shift typically requires internal microsegmentation and strong adherence to the principle of least privilege. This approach prevents lateral movement so that if there is a breach or even a malicious insider, their ability to move about the environment is severely restricted.

Microsegmentation is dividing up an internal network into numerous subzones. Each zone is separated from the others by internal segmentation firewalls (ISFWs), subnets, or VLANs. Zones could be as small as a single device, such as a high-value server or even a client or endpoint device. Any and all communications between zones are filtered, may be required to authenticate, often require session encryption, and may be subjected to allow list and block list control.

Zero trust is implemented using a wide range of security solutions, including internal segmentation firewalls (ISFWs), multifactor authentication (MFA), identity and access management (IAM), and next-generation endpoint security (see Chapter 9). A zero trust approach to security management can only be successful if a means to continuously validate and monitor user activities is implemented. If a one-time validation mechanism is used, then the opportunity to abuse the system remains since threats, users, and connection characteristics are always subject to change. Thus, zero trust networking can only work if real-time vetting and visibility into user activities is maintained.
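A minimal sketch of the “never trust, always verify” evaluation loop follows. The names, checks, and policy shape are hypothetical assumptions for illustration: every request is authenticated and authorized at the moment it is made and denied by default, rather than trusting an earlier one-time validation.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    token: str
    resource: str
    device_compliant: bool  # e.g., a posture check from an endpoint agent

def authenticate(token: str) -> bool:
    return token == "valid"  # stand-in for real MFA/IAM verification

# Least-privilege allow list: (subject, resource) pairs only.
AUTHORIZED = {("alice", "payroll-db")}

def permit(req: Request) -> bool:
    """Zero trust: verify every request, every time; default deny."""
    if not authenticate(req.token):
        return False
    if not req.device_compliant:
        return False
    return (req.user, req.resource) in AUTHORIZED

print(permit(Request("alice", "valid", "payroll-db", True)))  # True
print(permit(Request("alice", "valid", "hr-share", True)))    # False: not authorized
```

In a real deployment this check would run continuously for each transaction, fed by live identity, device, and session signals, which is why one-time validation is insufficient.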

In some situations, complete isolation may be needed instead of controlled and filtered interaction. This type of isolation is achieved using an air gap. An air gap is a network security measure employed to ensure that a secure system is physically isolated from other systems. Air gap implies that neither cabled nor wireless network links are available.

In order to implement a zero trust system, an organization must be capable of and willing to abandon some long-held assumptions about security. First and foremost, it must be understood that there is no such thing as a trusted source. No entity, asset, or subject—internal or external—is to be trusted by default. Instead, always assume attackers are already on the inside, on every system. From this new “no assumed trust” position, it is obvious that traditional default access controls are insufficient. Each and every subject, each and every time, needs to be authenticated, authorized, and encrypted. From there, a continuous real-time monitoring system needs to be established to look for violations and suspicious events. But even with zero trust integrated into the IT architecture, it is only an element of a holistic security strategy that is integrated into the entire organization’s management processes.
