Introduction

Small string optimization (SSO) is a mechanism present in many programming languages and modern compilers. Its purpose is to reduce memory allocations on the heap. This optimization is particularly crucial in embedded environments and low-latency systems, where we must be vigilant about both our application’s memory consumption and code performance.

In the CppCon talk “The strange details of std::string at Facebook”, Nicholas Ormrod mentioned that std::string is the most frequently included file in the Facebook codebase, representing 18% of the CPU time spent in the STL.

Memory allocations are slow due to the numerous steps involved in the process of allocating memory on the heap. Typically, when working with strings, this memory is swiftly deallocated after usage, such as when printing text or assigning names to local (stack) objects.

The most frequently used string in C++ is the empty string. This string is created when we default-initialize a std::string, and it occupies 1 byte, which is the null terminator.
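For example, a default-constructed std::string holds no characters, yet still exposes a valid, null-terminated buffer; on implementations with SSO this involves no heap allocation (a minimal check):

#include <cassert>
#include <string>

int main() {
  std::string s;                 // default initialization: the empty string
  assert(s.size() == 0);         // no characters stored
  assert(s.c_str()[0] == '\0');  // only the null terminator is present
  // On mainstream implementations with SSO, no heap allocation happens here.
}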

How does it work?

Before GCC 5.0, Small String Optimization (SSO) was not implemented in libstdc++. The std::string object had a size of 8 bytes, because it was just a char* referencing the character data, with the size and the capacity stored at word positions -3 and -2 relative to that pointer.

Source: Nicholas Ormrod

Before delving into how Small String Optimization (SSO) is implemented, it’s important to revisit how regular string objects are defined. Here’s a simplified std::string-like implementation using modern C++:

#include <cstddef>
#include <memory>

struct string {
  std::unique_ptr<char[]> data; // pointer to the character data
  std::size_t size;             // actual size
  std::size_t capacity;         // total capacity of the buffer
};

With this implementation, we have a string class with a size of 24 bytes (on 64-bit architectures). If we compare this to the actual std::string found in the <string> header, we can observe that common implementations such as libstdc++ have a size of 32 bytes. This size difference is attributable to Small String Optimization (SSO), which consumes some space within the struct. When SSO is applied, the implementation changes to something similar to this:
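You can verify these numbers on your own platform by comparing the simplified struct with the real std::string (the figures in the comments assume a 64-bit build; libc++ would report 24 bytes for std::string instead of 32):

#include <cstddef>
#include <iostream>
#include <memory>
#include <string>

struct naive_string {
  std::unique_ptr<char[]> data;
  std::size_t size;
  std::size_t capacity;
};

int main() {
  std::cout << "naive_string: " << sizeof(naive_string) << " bytes\n"; // 24 on 64-bit
  std::cout << "std::string:  " << sizeof(std::string)  << " bytes\n"; // 32 with libstdc++, 24 with libc++
}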

#include <array>
#include <cstddef>
#include <memory>

struct string_sso {
  std::size_t capacity;
  std::unique_ptr<char[]> data;
  std::size_t size;
  std::array<char, 16> sso_buffer; // in-place buffer for small strings
};

The magic number 16 is determined by the compiler’s standard library implementation and typically corresponds to 15 characters plus 1 byte for the null terminator.

This implementation has its advantages and disadvantages. If your string is 15 characters or shorter, it is stored inside the std::array, directly in the object, with no heap allocation. The price is that the object now occupies 40 bytes in memory, and part of that space always goes unused: while SSO is in effect, the 24 bytes taken by capacity, size and data are not needed (they only become necessary once we reach 16 characters), and for longer strings the 16-byte buffer is wasted instead.

Note: std::array already knows its own size, which can be obtained in O(1) since it is a compile-time constant.
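You can observe the 15-character threshold directly through capacity(); the values in the comments assume libstdc++, other standard libraries will report different numbers:

#include <iostream>
#include <string>

int main() {
  std::string s;
  std::cout << s.capacity() << '\n';   // 15 with libstdc++: the SSO buffer
  s = "exactly 15 char";                // 15 characters: still fits in the SSO buffer
  std::cout << s.capacity() << '\n';   // still 15: no heap allocation was needed
  s = "this one is longer than 15";     // 26 characters: forces a heap allocation
  std::cout << s.capacity() << '\n';   // larger value: the buffer now lives on the heap
}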

The following implementation offers a more efficient way to utilize memory, paying only for what you actually use.

#include <array>
#include <cstddef>
#include <memory>

struct string_sso {
  std::size_t capacity;
  union {
    struct {
      std::unique_ptr<char[]> data;
      std::size_t size;
    } heap_buffer;                                    // used for long strings
    std::array<char, sizeof(heap_buffer)> sso_buffer; // reuses the same bytes for short strings
  };
};

In this new implementation, small strings are placed in a fixed-size array (std::array), which eliminates the need for a heap allocation. The critical element here is sizeof(heap_buffer), which on a 64-bit architecture is 16 bytes: enough for 15 characters of content plus the null terminator. With this approach, the string object stays at 24 bytes, and the union ensures that the same 16 bytes are reused whether the string is stored in place or on the heap.
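One detail the sketch above leaves open is how we know which union member is currently active. A common approach (an assumption here, not taken from any particular standard library) is to use the capacity field as the discriminator:

#include <array>
#include <cstddef>
#include <memory>

struct string_sso {
  static constexpr std::size_t sso_capacity = 15; // 15 characters + null terminator

  std::size_t capacity;
  union {
    struct {
      std::unique_ptr<char[]> data;
      std::size_t size;
    } heap_buffer;
    std::array<char, sizeof(heap_buffer)> sso_buffer;
  };

  // capacity <= sso_capacity means the characters live in sso_buffer,
  // otherwise they live in the heap-allocated buffer.
  // Constructors and a destructor that manage the active union member
  // are omitted for brevity.
  const char* c_str() const {
    return capacity <= sso_capacity ? sso_buffer.data() : heap_buffer.data.get();
  }
};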

Personally, I prefer the second implementation, even though it lacks the flexibility to set a fixed size for every architecture or operating system, and it is tightly bound to data sizes. There are two potential reasons for using the first approach in compilers: either they consider it more efficient or they want to maintain compatibility with the existing Application Binary Interface (ABI).

Performance

The answer seems obvious: if our data structure is smaller and not allocated on the heap, our operations will be faster, since allocating and accessing memory on the stack is faster than on the heap. There is a lot of benchmarking on the internet about this, but I would like to highlight two of the clearest and most thorough examples I found.

Source: https://cpp-optimizations.netlify.app/small_strings/
Source: HugoTeixeira in https://stackoverflow.com/a/51796541

Especially in the last one, we can clearly see how on macOS the SSO threshold is 23 characters, instead of the 15 we get on Ubuntu.
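You can reproduce this measurement yourself by replacing the global operator new with a counting version and looking for the first string length that triggers a heap allocation (a rough sketch; the reported threshold depends on the standard library you link against):

#include <cstddef>
#include <cstdlib>
#include <iostream>
#include <new>
#include <string>

static std::size_t allocation_count = 0;

void* operator new(std::size_t size) {
  ++allocation_count;
  if (void* p = std::malloc(size)) return p;
  throw std::bad_alloc{};
}

void operator delete(void* p) noexcept { std::free(p); }
void operator delete(void* p, std::size_t) noexcept { std::free(p); }

int main() {
  for (std::size_t length = 1; length <= 32; ++length) {
    const std::size_t before = allocation_count;
    std::string s(length, 'x');   // build a string of `length` characters
    if (allocation_count != before) {
      std::cout << "first heap allocation at length " << length << '\n';
      break;                       // 16 with libstdc++, 23 with libc++
    }
  }
}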

FBString by Andrei Alexandrescu

At CppCon 2016, in his talk “The strange details of std::string at Facebook”, Nicholas Ormrod presented FBString, the string implementation written by Andrei Alexandrescu to provide SSO before GCC 5.0 did.

In contrast to the SSO implemented by most compilers, FBString offers an SSO buffer of 23 characters, which is larger than the 15 characters provided by the vast majority of implementations. But why is this significant? As Nicholas mentioned in the talk, one of the most commonly used kinds of strings at Facebook consists of UUIDs with 20 characters. With most implementations, these strings exceed the SSO capacity and therefore force a heap allocation. Beyond Facebook’s case, 15 characters is also not enough space to hold the textual form of many 64-bit integer values (int64_t, uint64_t, long long).
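To illustrate the last point, here is a quick check of how many characters the textual form of the 64-bit extremes needs (both exceed 15 characters but fit in FBString's 23):

#include <cstdint>
#include <iostream>
#include <limits>
#include <string>

int main() {
  const std::string smallest = std::to_string(std::numeric_limits<std::int64_t>::min());
  const std::string largest  = std::to_string(std::numeric_limits<std::uint64_t>::max());
  std::cout << smallest << " -> " << smallest.size() << " characters\n"; // 20 characters
  std::cout << largest  << " -> " << largest.size()  << " characters\n"; // 20 characters
}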

You can find the FBString implementation in Facebook’s folly library (folly/FBString.h).

This is accomplished through the implementation of a three-tiered storage strategy and by collaborating with the memory allocator. Specifically, FBString is designed to identify the use of jemalloc and work in conjunction with it to attain substantial enhancements in both speed and memory utilization.

FBString’s SSO uses the last byte to represent the spare capacity of the string (the remaining characters).

Source: Nicholas Ormrod

This is quite clever because 0 is equivalent to ‘\0’. Consequently, when the spare capacity is 0, our last character will be the null terminator.

Source: Nicholas Ormrod
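Conceptually, the trick looks something like this (a simplified sketch of the idea, not folly’s actual code):

#include <array>
#include <cstddef>
#include <cstring>
#include <iostream>

// Simplified model of FBString's small-string layout: a 24-byte buffer where
// the last byte stores the spare capacity (23 minus the current length).
struct fbstring_small {
  static constexpr std::size_t max_small_size = 23;
  std::array<char, 24> buffer{};

  void assign(const char* s) {
    const std::size_t length = std::strlen(s);  // assumed <= 23 here
    std::memcpy(buffer.data(), s, length + 1);
    buffer[23] = static_cast<char>(max_small_size - length); // spare capacity
    // When length == 23 the spare capacity is 0, which is exactly '\0':
    // the last byte doubles as the null terminator.
  }
};

int main() {
  fbstring_small s;
  s.assign("12345678901234567890123"); // 23 characters: buffer[23] becomes 0 == '\0'
  std::cout << s.buffer.data() << '\n';
}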

With this implementation, we still have 3 spare bits within those 24 bytes to use as flags (the spare capacity never exceeds 23, which fits in 5 bits of the last byte). Here’s a quick summary table of how the values are stored based on the number of characters:

Character range    | Storage method
<= 23 characters   | Allocated on the stack
24-255 characters  | Stored in malloc-allocated memory and copied eagerly
> 255 characters   | Stored in malloc-allocated memory and copied lazily (copy on write)

Copy-on-write (COW) semantics were effectively banned for std::string in C++11 because of multithreading problems, but Andrei’s implementation handles the multithreading safety itself.

Nicholas claims that this new implementation resulted in a 1% performance improvement across Facebook, which is substantial for a mere std::string optimization. One of the reasons for this improvement is that it is highly cache-friendly: although it generates more assembly instructions, the better cache behaviour makes the implementation faster overall. It’s essential to consider whether our target platform even has a cache when assessing the impact of this optimization.

Note: Clang and MSVC have a similar implementation.

std::string_view

std::string_view was introduced in C++17. The primary purpose of this data structure is to optimize operations involving read-only strings. As a result, the std::string_view interface closely resembles that of a constant std::string.

A std::string_view is exceptionally lightweight: it consists of just a pointer to the data (8 bytes on 64-bit architectures) and the size (another 8 bytes), for a total of 16 bytes. This makes converting a std::string into a std::string_view, via the std::basic_string_view(const CharT* s, size_type count) constructor, extremely cheap.

In contrast, transforming a null-terminated string (const char*) into a std::string_view typically requires iterating over the data to find its size. This has a time complexity of O(n), which can matter when dealing with very large null-terminated strings, although such situations are relatively rare.
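A small sketch of both conversions (the sizes in the comments assume a typical 64-bit implementation):

#include <iostream>
#include <string>
#include <string_view>

int main() {
  std::cout << sizeof(std::string_view) << '\n'; // 16 on 64-bit: a pointer plus a size

  std::string owned = "some read-only text";
  std::string_view from_string = owned;          // O(1): copies data() and size()

  const char* c_str = "some read-only text";
  std::string_view from_cstr = c_str;            // O(n): must scan for the '\0' first

  std::cout << from_string.size() << ' ' << from_cstr.size() << '\n';
}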

It’s crucial to highlight that std::string_view instances live on the stack (or inside other objects) and never require a heap allocation. This is made possible by their immutability and their role as a “view” of an existing string, rather than a separate data container.

It’s indeed a good practice to consider using std::string_view whenever you are working with data that effectively behaves like a const std::string& or a const char* and you don’t need a null terminator.

Last but not least, std::string_view is not guaranteed to be null-terminated!
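This is easy to trip over when handing a view to a C API that expects a null-terminated string. A small illustration (hypothetical code, not from the talk):

#include <cstdio>
#include <iostream>
#include <string_view>

int main() {
  std::string_view full = "hello world";
  std::string_view word = full.substr(0, 5);  // "hello": points into the same buffer

  std::cout << word << '\n';                  // fine: operator<< uses data() + size()
  // std::printf("%s\n", word.data());        // wrong: would print "hello world",
                                              // because there is no '\0' after "hello"
  std::printf("%.*s\n", static_cast<int>(word.size()), word.data()); // correct: pass the length
}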

References

  1. https://stackoverflow.com/a/10319672
  2. https://stackoverflow.com/questions/10315041/meaning-of-acronym-sso-in-the-context-of-stdstring/51796541#51796541
  3. https://cpp-optimizations.netlify.app/small_strings/
  4. https://pvs-studio.com/en/blog/terms/6658/
  5. CppCon 2016: Nicholas Ormrod “The strange details of std::string at Facebook”: https://www.youtube.com/watch?v=kPR8h4-qZdk
  6. CppCon 2015: Marshall Clow “string_view”: https://www.youtube.com/watch?v=H9gAaNRoon4
