Publication: 2023-12-30

Summary: Why were NULL pointers introduced? Why are they called the billion-dollar mistake? What are common prevention measures? Is there a consistent and useful behaviour to give them? This article explores these questions and the idea of defining access to a NULL pointer as reading a zero-initialized structure.

This article was discussed on Reddit.

A sweet dream for NULL pointers

C and C++ adventurers always wander on the lands of frightening beasts. When offended, these mythological creatures may freeze time, create abominations, and even violate causality, leaving bewildered humans in their wake. However, when humans pay sufficient respect to them, the creatures can be surprisingly helpful, sometimes even docile, as long as nothing irritates them.

One of their species, especially mischievous, is called the Pointer. Most of the time, humans can make contact without major difficulties, and pointers even have useful information to share. But sometimes, a pointer is indisposed to interactions; if a careless human contacts them, reality will start crumbling.

It is common to find pointers in nature. Humans try to avoid lone pointers, but they often need to interact with them to gain valuable information. Extreme and continuous caution is required to identify an indisposed pointer. Can humanity find a less tedious way to interact with pointers?

Pointers 101

In a computer, the processor serves as a super calculator. The CPU itself only stores the numbers required for the current computation. Everything else is abstracted by "virtual memory", an abstraction where the Operating System assigns a number to each piece of information (= a byte), and the processor retrieves a piece of information by giving its number to the OS. For the processor, the information is only a sequence of bytes, but for the OS, it corresponds to RAM, disk or network.

This number is called a virtual address, and it uniquely identifies one byte of information. The electronics of current desktop computers limit addresses to 48 bits (= address space), so the CPU could theoretically access 256 terabytes (= 2^48 bytes) of information. In practice, we only have a few gigabytes of RAM, and a program only requires a bunch of files to work with, so the OS leaves most of the virtual space unassigned; if the CPU tries to access it, the OS makes the current program crash.

A pointer is a variable that stores an address and whose type indicates how to interpret the sequence of bytes starting at that address. By convention, a pointer stores the address 0 (NULL pointer) as a special value to mean the absence of information. The OS ensures that no information is assigned to address 0, and makes a program crash if it tries to access address 0.
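
A minimal sketch of these two ideas in C (the variable names are mine):

#include <stdio.h>

int main(void) {
    int value = 42;
    int *present = &value; // Stores the address where 'value' lives.
    int *absent  = NULL;   // Stores address 0: no information available.

    if (absent == NULL)
        puts("absent carries no information");
    printf("%d\n", *present); // Interprets the bytes at the address as an int.
    // *absent would crash: the OS assigns nothing to address 0.
    return 0;
}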

Usefulness of NULL pointers

This convention is often used in C and C++ for optional function parameters and erroneous return values, for instance malloc(), free(), fopen()... It can also serve as a sentinel in data structures such as linked lists.
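
A minimal sketch of both conventions (the list code is mine; malloc()/free() are standard):

#include <stdio.h>
#include <stdlib.h>

typedef struct Node {
    int value;
    struct Node *next; // NULL acts as the end-of-list sentinel.
} Node;

int main(void) {
    Node *head = malloc(sizeof(Node));
    if (head == NULL)   // Erroneous return value: the allocation failed.
        return EXIT_FAILURE;
    head->value = 1;
    head->next = NULL;  // Sentinel: this node is the last one.

    for (Node *it = head; it != NULL; it = it->next)
        printf("%d\n", it->value);

    free(head); // free(NULL) would also be fine: a NULL input is a no-op.
    return 0;
}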

Pointers inside structures are also set to NULL when the structure is zero-initialized. So, NULL is a value of choice inside structures for optional configuration, state or information. Here is an example from the API of Vulkan:

typedef struct VkInstanceCreateInfo {
    VkStructureType             sType;  // Must be equal to VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO
    const void*                 pNext;  // NULL, or a pointer to extra information.
    VkInstanceCreateFlags       flags;  // Can be 0 for default behaviour.
    const VkApplicationInfo*    pApplicationInfo;           // Can be NULL.
    uint32_t                    enabledLayerCount;          // Can be 0.
    const char* const*          ppEnabledLayerNames;        // Can be NULL if count is 0.
    uint32_t                    enabledExtensionCount;      // Can be 0.
    const char* const*          ppEnabledExtensionNames;    // Can be NULL if count is 0.
} VkInstanceCreateInfo;

VkResult vkCreateInstance(
    const VkInstanceCreateInfo*     pCreateInfo, // Required input: cannot be NULL.
    const VkAllocationCallbacks*    pAllocator,  // Optional input: can be NULL.
    VkInstance*                     pInstance);  // Destination: cannot be NULL.

Even with all these configuration options, thanks to zero-initialization and the fact that the Vulkan API was designed with 0 / NULL as a suitable default, the usage code remains simple, as long as you do not need to customize anything.

VkInstanceCreateInfo info = {0};
info.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;

VkInstance vk;
VkResult res = vkCreateInstance(&info, NULL, &vk);

Zero-initialization is also a natural extension of some structures. For instance, a sequence of bytes can be represented as either (1) the address of the first byte and the number of bytes, or (2) the address range of the bytes.

typedef struct Bytes1 {
    char*  addressBegin;
    size_t byteCount;
} Bytes1;

// OR

typedef struct Bytes2 {
    char* addressBegin;
    char* addressEnd;
} Bytes2;

In both cases, zero-initialization just works to represent an empty sequence. For Bytes1, byteCount == 0, so any function that correctly checks the size before accessing the bytes works. For Bytes2, because there exists no memory address p such that NULL <= p < NULL, iteration over the bytes stops immediately, effectively manipulating an empty sequence of bytes.
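
For instance, a minimal sketch reusing Bytes2 from above (the helper name is mine):

size_t countBytes(Bytes2 bytes) {
    size_t count = 0;
    for (char *p = bytes.addressBegin; p < bytes.addressEnd; ++p)
        ++count; // The loop body is never entered when both pointers are NULL.
    return count;
}

Bytes2 empty = {0}; // Zero-initialized: both pointers are NULL.
// countBytes(empty) returns 0: the empty sequence just works.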

So zero-initialization is pretty nice, and NULL pointers have a useful role. When does the trouble appear?

The billion dollar mistake

The convention of NULL pointers can be traced back to 1965 and the ALGOL W language. Tony Hoare, its inventor, gave a talk at QCon London 2009 explaining his choice (I recommend the textual highlights in the description for a summary of the story). Hoare introduced NULL pointers because they were convenient in not-yet-initialized cases, such as during the construction of trees or cyclic lists.

Edit 2024-01-03: Reddit user trvlng_ging commented that pointers already existed in other languages during the early 1960s, and some languages provided automatic checks. But at a time when computers were slow and optimizing compilers were in their infancy, this was a performance issue. With ALGOL, Hoare decided to shift the responsibility of checking pointers to the programmer; programs used so little memory at the time that a human could plausibly check validity.

The mistake lies in the implication that manipulating a pointer is no longer safe. The developer must now check pointers before using them, and forgetting even once introduces a new bug. Even testing is not sufficient, as software is deployed in diverse conditions and attackers probe the less tested parts of the code.

Moreover, C and C++ are compiled with optimizing compilers: a compiler may change the authored code to make it faster, as long as correct programs keep the same behaviour (the "as-if" rule). In this regard, accessing address 0 is considered "undefined behaviour". A compiler supposes that developers are perfect, so it is free to rewrite code under the assumption that no NULL pointer is ever accessed, making the program harder to debug.
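
A classic illustration (a sketch of mine): when a pointer is dereferenced before being checked, the compiler may treat the check as dead code and delete it.

int readOrZero(int *p) {
    int value = *p;  // If p is NULL, this access is undefined behaviour...
    if (p == NULL)   // ...so the compiler may assume p != NULL here
        return 0;    // and delete this entire branch.
    return value;    // A NULL caller gets a crash or garbage, not 0.
}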

So, the fact that any pointer can hold NULL is a burden for the developer, who becomes responsible for always checking for NULL; a single failure results in a hard-to-reason-about, hard-to-reproduce bug. Surely, some people have tried to mitigate the issue?

Prevention measures

Statically typed languages younger than C usually provide syntactic features to indicate whether a given pointer (or reference) can hold NULL. With language features, two codebases will handle the NULL situation the same way, reducing friction when they interact.

  • C++ "value semantics" imply that by default, if a variable is named, it exists. Additionally, C++ provides references T& and std::reference_wrapper<T> as pointers, but they must be initialized with existing information. Also, core Guidelines Support Library provides gsl::not_null<T*> to indicate that an API takes a pointer which cannot hold NULL.

  • Java and C# store class instances as references, which can be null. To mitigate that, C# has struct for opt-in value semantics.

  • In Rust, Zig and Kotlin, pointers and references are assumed non-NULL by default. Explicit syntax is provided to indicate that a given variable may also hold NULL.

For languages without such syntactic features, the Null Object design pattern can be used. The implementer of a type provides an extra state to represent the absence of information; all operations on a variable in this state do nothing. With such a type, developers do not need to check the validity of the pointer: if information was absent, nothing is done, and the program continues (see the C sketch after this list).

  • In Java, the author of a class Class can provide a derived class NullClass where all methods are overridden with a do-nothing implementation.

  • In C++, the same pattern can be found occasionally, for instance with std::pmr::null_memory_resource() for the class std::pmr::memory_resource.

  • POSIX specifies the special file /dev/null, where no data can be read and data written is discarded: read() and write() do nothing.

  • A parallel can be drawn in hardware with the float type, which can hold NaN (Not-a-Number). Arithmetic on NaN does not trap and simply produces NaN. Thus, we do not need to check whether a float holds NaN before adding two values.
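
Even without language-level polymorphism, the pattern can be emulated in C with a struct of function pointers. A minimal sketch, with hypothetical names:

#include <stdio.h>

typedef struct Logger {
    void (*log)(const char *message);
} Logger;

static void logToStderr(const char *message) {
    fprintf(stderr, "%s\n", message);
}

static void logNothing(const char *message) {
    (void)message; // Null Object: silently discard the message.
}

static const Logger stderrLogger = { logToStderr };
static const Logger nullLogger   = { logNothing };

// Callers receive the null object instead of a NULL pointer,
// so no validity check is needed: calls simply do nothing.
static void process(const Logger *logger) {
    logger->log("processing...");
}

int main(void) {
    process(&stderrLogger); // Prints the message.
    process(&nullLogger);   // Does nothing.
    return 0;
}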

In C, we have neither polymorphism nor complex constructors for global variables. The closest, already useful feature is zero-initialization, which provides a generally natural invalid state to a structure. However, if the structure holds a pointer, then we have a NULL pointer stored in our structure. Wouldn't it be nice to have this zero-initialization work recursively?

My sweet dream

I had this question a few weeks ago; given that accessing a NULL pointer is "undefined behaviour", what behaviour could we give it to be both consistent and useful?

My proposition is that reading through a NULL pointer should always return 0. Reading a structure through a NULL pointer would be the same as reading a zero-initialized structure. If this structure holds pointers, these pointers will be NULL; and so, recursively, the pointed structures would also behave as if they were zero-initialized.
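
A sketch of the proposed semantics, using a hypothetical Node type; note that this is emphatically not standard C, where each of these reads is undefined behaviour:

typedef struct Node {
    int value;
    struct Node *next;
} Node;

int readThroughNull(void) {
    Node *p = NULL;
    int v = p->value;   // Proposed: reads 0, as in a zero-initialized Node.
    Node *n = p->next;  // Proposed: reads NULL.
    int w = n->value;   // Recursively: reading through NULL again yields 0.
    return v + w;       // Would return 0.
}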

What about writing to a structure pointed to by a NULL pointer? I suggest ignoring the write operation, because giving NULL as an output pointer already has the meaning of "I don't care about the output".

Existing code can continue to use NULL pointers as a special value, but if a NULL pointer leaks into code not expecting it, the program still has defined behaviour.

In practice, this behaviour could be implemented by the OS relatively easily: when a process starts, before the program executes, the OS already performs virtual memory manipulations to place executable code, global data and stack in the virtual address space. During this step, it would also reserve the first addresses of the address space for NULL pointers. A sketch of an implementation for Linux:

#include <fcntl.h>     // open()
#include <signal.h>    // sigaction(), raise()
#include <stdint.h>    // uintptr_t
#include <sys/mman.h>  // mmap()

// 1. Map the first 64 KiB to zero-filled pages, so NULL reads return 0.
// This code is for exposition, and does not work out of the box:
// mapping address 0 requires MAP_FIXED, and Linux forbids it by
// default (see /proc/sys/vm/mmap_min_addr).
int fd = open("/dev/zero", O_RDONLY);
mmap((void *)0, 65536, PROT_READ, MAP_PRIVATE | MAP_FIXED, fd, 0);

// 2. Create handler to detect and ignore NULL writes.
void handle_sigsegv(int sig, siginfo_t *info, void *ctx) {
    if ((uintptr_t)info->si_addr < 65536)
        return; // NULL write: ignore and continue.
                // (A real implementation would also skip the faulting
                // instruction via ctx, otherwise it faults again.)
    signal(SIGSEGV, SIG_DFL); // Genuine fault: crash as usual.
    raise(SIGSEGV);
}

// 3. Register handler
struct sigaction act = {0};
act.sa_sigaction = handle_sigsegv;
act.sa_flags = SA_SIGINFO;
sigaction(SIGSEGV, &act, NULL);

A similar approach would be feasible on Windows with VirtualAlloc() and AddVectoredExceptionHandler().

As an extra bonus, in a far future: if CPU manufacturers chose to encode the RET instruction as byte 0x00 in their next instruction sets, then calling a NULL function pointer could simply do nothing.

Waking up

When I first thought of this solution, I saw it as a wasted opportunity: it probably would have been nice to have, but now that C and C++ are several decades old, it felt impossible to modify something as fundamental as NULL pointers. But on second thought, accessing a NULL pointer has always been undefined behaviour, which already causes crashes or subtle bugs in optimized programs.

Moreover, it does not require an impossible effort: GCC has -fno-delete-null-pointer-checks to deactivate the optimizations relying on the undefined behaviour of NULL pointers, and a willing OS could implement the solution as opt-in per process. Existing programs should not rely on a specific behaviour when accessing a NULL pointer, so most programs should be compatible with this solution.

I am still hesitant about my dream: is it an elegant solution, or a hack? Is it better to make programs rely on such behaviour, or to let them crash so errors are noisy and can be fixed? I am still torn between resiliency and robustness. This is also why I wrote this article: to seek other perspectives on this "NULL is zero" idea. You can share your feedback on social media, or by sending an email to my public inbox.

Feedback

Addition 2024-01-03

After sharing this article on December 30th, 2023, on Reddit, I collected 26% upvotes, 74% downvotes, and about fifty comments. The idea was rejected for several good reasons, which I want to highlight here.

  1. Writes to NULL being ignored changes the language semantics: after executing *p = 1, you can no longer be sure that *p == 1 (see the snippet after this list). Moreover, invoking a signal handler for every write would yield terrible performance.
  2. Zero-initialization is not always a valid state in C++ for structures and classes with a non-default constructor; it may break their invariants.
  3. Most commenters would prefer a defined behaviour for NULL access where programs are guaranteed to crash, throw or halt.
  4. It requires virtual memory and an OS, which are not available on many embedded systems.
  5. NULL pointers are only part of the issue; the more problematic one is use-after-free, which is not tackled here.
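
To make point 1 concrete, a small sketch of mine (not from the comments):

#include <assert.h>

void storeOne(int *p) {
    *p = 1;          // Under the proposal: silently ignored when p == NULL.
    assert(*p == 1); // Would then fail for a NULL p: the read yields 0.
}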

So in conclusion, this sweet dream was likely to become a nightmare.

Thanks for reading!