Safety: Terminology

Core Terminology

Hazard

Unsafe coding construct that may lead to a bug or vulnerability. For example, indexing an array with a user-supplied and unvalidated index is a hazard.
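As a sketch of this hazard (the function names are illustrative, not from any particular codebase), compare an unvalidated lookup with one that removes the hazard by checking the index first:

```cpp
#include <cassert>
#include <cstddef>
#include <optional>
#include <vector>

// Hazard: the index arrives from outside and is used unvalidated. If it is
// ever out of bounds, this is undefined behavior.
int lookup_unvalidated(const std::vector<int>& table, std::size_t index) {
  return table[index];  // No bounds check: a hazard.
}

// The same lookup with the hazard removed: the index is validated first, and
// an out-of-range request is reported rather than dereferenced.
std::optional<int> lookup_validated(const std::vector<int>& table,
                                    std::size_t index) {
  if (index >= table.size()) {
    return std::nullopt;
  }
  return table[index];
}
```

Note that the hazard is present even if no caller currently passes a bad index; it only becomes a bug when such a call is reachable.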

Bug or defect

Reachable program behavior contrary to the author’s intent.

Active bug

Buggy behavior that is actively occurring for users of the program.

Latent bug

Buggy behavior that does not currently occur for users, but is reachable. Reachable behaviors can happen, so even those that don’t happen in practice today are still bugs!

Safety

Absent a qualifier or narrowing context, refers to system safety and safety engineering. Safety in this sense is always a property of a system or product as a whole, including human factors.

Code, software, or program safety

Invariants or limits on program behavior in the face of bugs.

  • Very narrow and specific meaning.
  • Often necessary but not sufficient for system safety.

This is a specific subset of safety concerns, and the ones we are most often focused on with programming language and library design.
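One way to sketch such an invariant in code (a minimal illustration, not a production container): a buffer wrapper that limits the behavior of a buggy caller to a fail-stop rather than an arbitrary out-of-bounds access.

```cpp
#include <cstddef>
#include <cstdio>
#include <cstdlib>

// A bounds-limited buffer: even when calling code is buggy and passes a bad
// index, program behavior is limited to immediate termination.
class CheckedBuffer {
 public:
  explicit CheckedBuffer(std::size_t size)
      : size_(size), data_(new int[size]()) {}
  ~CheckedBuffer() { delete[] data_; }

  int& at(std::size_t index) {
    if (index >= size_) {
      // The caller violated the invariant: fail-stop immediately.
      std::fputs("bounds check failed\n", stderr);
      std::abort();
    }
    return data_[index];
  }

 private:
  std::size_t size_;
  int* data_;
};
```

The bug in the caller is not fixed by this wrapper; the wrapper only guarantees a limit on what the buggy program can do.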

Safety bugs

Bugs where some aspect of program behavior has insufficient (often no) invariants or limits.

For example, undefined behavior by definition has no invariants or limits, so reaching it is always a safety bug.

Initial bug

The first behavior contrary to the author’s intent, distinct from subsequent deviations.

Fail-stop

The behavior of immediately terminating the program, executing as little further business logic as possible. This is in contrast to any form of “correct” program termination, continued execution, or unwinding.

Vulnerability Terminology

Vulnerability or security vulnerability

A bug that creates the possibility for a malicious actor to subvert a program’s intended behavior in a way that violates a security policy (for example, confidentiality, integrity, availability). Vulnerabilities are often exploitable manifestations of underlying bugs.

Vulnerability defense

The set of strategies and techniques employed to reduce the risks posed by vulnerabilities arising from bugs. These strategies operate at different levels and have varying degrees of effectiveness.

Detecting

A defense that attempts to recognize, and potentially track, when a specific bug has occurred dynamically, while still leaving the code vulnerable. This requires some invariant or limit, but a very minimal one.
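A classic detection-only technique is a canary value placed after a buffer. A hypothetical sketch (the names and layout here are illustrative, not a real hardening API): the canary doesn’t stop an overflow, but checking it afterwards can recognize that one occurred.

```cpp
#include <cstdint>

// A known value placed immediately after the buffer. An overflow that writes
// past `data` is likely to clobber it.
constexpr std::uint32_t kCanary = 0xDEADBEEF;

struct GuardedBuffer {
  char data[16];
  std::uint32_t canary = kCanary;
};

// Returns true if an overflow of `data` has detectably clobbered the canary.
// This detects the bug after the fact; it does not prevent the overflow.
bool overflow_detected(const GuardedBuffer& buf) {
  return buf.canary != kCanary;
}
```

This is the minimal invariant detection requires: memory adjacent to the buffer holds a known value, and nothing more is guaranteed about the program’s behavior.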

Mitigating

Making a vulnerability significantly more expensive, difficult, or improbable to exploit.

Preventing vulnerabilities

Making it impossible for a bug to be exploited as a vulnerability without resolving the underlying bug: the program still doesn’t behave as intended, it just cannot be exploited. Often this is done by defining the behavior to fail-stop.

Ensuring correctness

Ensures that if the program compiles successfully, it behaves as intended. This typically prevents a bug from being written and compiled into the program in the first place. For example, statically typed languages typically ensure that values are used according to their declared types.
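A small sketch of ensured correctness through static types (the wrapper types and function are illustrative): distinct types keep two quantities from being confused, so the mixed-up call cannot be compiled at all.

```cpp
// Distinct wrapper types for distinct quantities. A plain `double` for both
// would let callers silently swap the arguments.
struct Meters { double value; };
struct Seconds { double value; };

double speed_mps(Meters distance, Seconds time) {
  return distance.value / time.value;
}
```

A call like `speed_mps(Seconds{10.0}, Meters{100.0})` is rejected by the compiler, so this class of bug never reaches a running program — there is nothing left to detect, mitigate, or prevent at runtime.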

Hardening

Combinations of mitigation, prevention, and ensured correctness to reduce the practical risk of vulnerabilities due to bugs.

Memory Safety Specifics

Memory safety

Having well-defined and predictable behavior regarding memory access, even in the face of bugs. Memory safety encompasses several key aspects:

Temporal safety

Memory accesses occur only within the valid lifetime of the intended memory object.
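One common way to uphold this, sketched here with standard C++ smart pointers: tie the object’s lifetime to a single owner, so transferring ownership leaves the old handle empty rather than dangling.

```cpp
#include <memory>

// Single ownership via std::unique_ptr: the pointee is destroyed exactly when
// its one owner goes away, and a moved-from unique_ptr is guaranteed to be
// empty rather than left dangling.
std::unique_ptr<int> take_ownership(std::unique_ptr<int> p) {
  return p;
}
```

After the transfer, a buggy use of the old handle dereferences null (a detectable, limited failure) instead of accessing memory outside its valid lifetime.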

Spatial safety

Memory accesses remain within the intended bounds of memory regions.

Type safety

Memory is accessed and interpreted according to its intended type, preventing type confusion.

Initialization safety

Memory is properly initialized before being read, avoiding the use of uninitialized data.
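As a brief C++ illustration: value-initialization guarantees the memory is zeroed before any read, where default-initializing a raw local array would leave its contents indeterminate.

```cpp
#include <array>

// Value-initialization (the trailing `{}`) zeroes every element. By contrast,
// a default-initialized local `int raw[4];` would hold indeterminate values,
// and reading them is an initialization safety bug.
std::array<int, 4> make_zeroed() {
  return std::array<int, 4>{};
}
```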

Data-race safety

Memory writes are synchronized with reads or writes on other threads.
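A minimal sketch of such synchronization: a `std::atomic` counter makes concurrent increments well-defined, where incrementing a plain `int` from two threads would be a data race and undefined behavior.

```cpp
#include <atomic>
#include <thread>

// Two threads increment a shared counter. std::atomic synchronizes each
// thread's writes with the other's reads and writes, so no increment is lost.
int count_from_two_threads(int increments_per_thread) {
  std::atomic<int> counter{0};
  auto work = [&] {
    for (int i = 0; i < increments_per_thread; ++i) {
      counter.fetch_add(1, std::memory_order_relaxed);
    }
  };
  std::thread t1(work);
  std::thread t2(work);
  t1.join();
  t2.join();
  return counter.load();
}
```

With a plain `int` in place of the atomic, the same program could return any value at all — which is exactly the absence of invariants that makes data races safety bugs.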

Memory safety bug

A safety bug that violates memory safety.

Memory-safe platform or environment

A computing platform or execution environment that provides mechanisms to prevent memory safety bugs in programs running on it from becoming vulnerabilities. This is a systems path to memory safety: the well-defined and predictable behavior is provided by the execution environment. For example, a strongly sandboxed WebAssembly runtime can allow a program that is itself unsafe to be executed safely.

Memory-safe language

A programming language with sufficient defenses against memory safety bugs for them to not be a significant source of security vulnerabilities. This requires preventing vulnerabilities or ensuring correctness; mitigation is not sufficient to provide an adequate level of memory safety.

We identify several key requirements for a language to be memory-safe:

  • The default mode or subset of the language must provide guaranteed spatial, temporal, type, and initialization memory safety.
  • Any unsafe subset must only be needed and only be used in rare, exceptional cases. Any use of the unsafe subset must also be well delineated and auditable.
  • Currently, the security evidence doesn’t show a need to guarantee data-race safety for data-race bugs that are not also temporal memory safety bugs. However, the temporal memory safety guarantee must still hold even in the presence of data races.