Small String Optimization (SSO) in C++

Introduction

Small string optimization (SSO) is a mechanism present in many programming languages and modern compilers. Its purpose is to reduce memory allocations on the heap. This optimization is particularly crucial in embedded environments and low-latency systems, where we must be vigilant about both our application’s memory consumption and code performance.

In the CppCon talk “The strange details of std::string at Facebook”, Nicholas Ormrod mentioned that <string> is the most frequently included file in the Facebook codebase, and that std::string represents 18% of the CPU time spent in the STL.

Memory allocations are slow due to the numerous steps involved in the process of allocating memory on the heap. Typically, when working with strings, this memory is swiftly deallocated after usage, such as when printing text or assigning names to local (stack) objects.

The most frequently used string in C++ is the empty string. This string is generated when we default-initialize a string, and it occupies 1 byte: the null terminator.

How does it work?

Before GCC 5.0, libstdc++ did not implement Small String Optimization (SSO). Its std::string had a size of only 8 bytes, because the size and the capacity were stored at positions -3 and -2 of the char* used to reference the std::string.

Source: Nicholas Ormrod

Before delving into how Small String Optimization (SSO) is implemented, it’s important to revisit how regular string objects are defined. Here’s a typical std::string implementation using modern C++:

struct string {
  std::unique_ptr<char[]> data; // pointer to the characters
  std::size_t size;             // actual size
  std::size_t capacity;         // total capacity of the buffer
};

With this implementation, we have a string class with a size of 24 bytes (on 64-bit architectures). If we compare this to the actual std::string in C++ found in the <string> header, we can observe that the C++ implementation has a size of 32 bytes. This size difference is attributable to Small String Optimization (SSO), which consumes some space within the struct. When SSO is applied, the implementation changes to something similar to this:

struct string_sso {
  std::size_t capacity;
  std::unique_ptr<char[]> data;
  std::size_t size;
  std::array<char, 16> sso_buffer;
};

The magic number 16 is implementation-defined; 16 bytes (15 characters plus 1 null terminator) is typical.

This implementation has its advantages and disadvantages. If your string is 15 characters or shorter, it is stored inside the std::array, directly in the object. The price is that the string object now occupies 40 bytes, and while SSO is active, 24 of those bytes (capacity, size and data) sit unused; they only become necessary once the string exceeds 15 characters.

Note: std::array knows its own size, which can be queried in O(1).

The following implementation offers a more efficient way to utilize memory, paying only for what you actually use.

struct string_sso {
  std::size_t capacity;
  union {
    struct {
      std::unique_ptr<char[]> data;
      std::size_t size;
    } heap_buffer;
    std::array<char, sizeof(heap_buffer)> sso_buffer;
  };
};

In this new implementation, we utilize a static array (std::array) that eliminates the need for heap allocation. The critical element here is sizeof(heap_buffer), which on a 64-bit architecture is 16 bytes. This buffer accommodates 15 bytes of string content plus 1 byte for the null terminator for small strings. With this approach, the string object stays at 24 bytes, and the union ensures that the same space is utilized whether the string is shorter or longer than 16 characters.

Personally, I prefer the second implementation, even though it lacks the flexibility to set a fixed size for every architecture or operating system, and it is tightly bound to data sizes. There are two potential reasons for using the first approach in compilers: either they consider it more efficient or they want to maintain compatibility with the existing Application Binary Interface (ABI).

Performance

The answer seems obvious: if our data structure is smaller and not allocated on the heap, our operations will be faster, since it is faster to allocate on and read from the stack than the heap. There are plenty of benchmarks on the internet about this, but I would like to highlight two of the clearest and most thorough ones I found.

Source: https://cpp-optimizations.netlify.app/small_strings/
Source: HugoTeixeira in https://stackoverflow.com/a/51796541

Especially in the last one, we can clearly see how on macOS the SSO size is set to 23 characters, instead of the 15 found on Ubuntu.

FBString by Andrei Alexandrescu

At CppCon 2016, in his talk “The strange details of std::string at Facebook”, Nicholas Ormrod described Andrei Alexandrescu’s FBString, which implemented SSO before GCC 5.0 did.

In contrast to the SSO implemented by compilers, FBString offers an SSO buffer of 23 characters, larger than the 15 characters provided by the vast majority of implementations. Why is this significant? As Nicholas mentioned in the talk, one of the most commonly used strings at Facebook consists of UUIDs of 20 characters. With most compiler implementations, these strings do not fit within the SSO buffer, leading to heap allocation. Aside from Facebook’s case, 15 characters are not enough space to format some 64-bit integer values as strings (int64_t, uint64_t, long long).

You can find the FBString implementation in Facebook’s folly library.

This is accomplished through the implementation of a three-tiered storage strategy and by collaborating with the memory allocator. Specifically, FBString is designed to identify the use of jemalloc and work in conjunction with it to attain substantial enhancements in both speed and memory utilization.

FBString’s SSO uses the last byte to represent the spare capacity of the string (the remaining characters).

Source: Nicholas Ormrod

This is quite clever because 0 is equivalent to ‘\0’. Consequently, when the spare capacity is 0, our last character will be the null terminator.

Source: Nicholas Ormrod

With this implementation, we have 3 bits remaining within the 24 available bytes to use as flags. Here’s a quick summary of how the values are stored based on the number of characters:

  • <= 23 characters: allocated on the stack (SSO).
  • 24-255 characters: stored in malloc-allocated memory and copied eagerly.
  • > 255 characters: stored in malloc-allocated memory and copied lazily (copy on write).

Copy-on-write (COW) semantics were removed in C++11 due to multithreading problems, but Andrei’s implementation offers multithreading safety.

Nicholas claims that this new implementation resulted in a 1% performance improvement for Facebook, which may seem substantial for a mere std::string optimization. One of the reasons for this improvement is its high optimization for the cache. Although it generates more assembly instructions, it is more cache-friendly, making the implementation faster. It’s essential to consider whether our target platform has cache available when assessing the impact of this optimization.

Note: Clang and MSVC have a similar implementation.

String_view

std::string_view was introduced in C++17. The primary purpose of this data structure is to optimize operations involving read-only strings. As a result, the std::string_view interface closely resembles that of a constant std::string.

A std::string_view is exceptionally lightweight, consisting of just one pointer to data (8 bytes on 64-bit architectures) and the size (8 bytes on 64-bit architectures), for a total of 16 bytes of space. This property makes it extremely efficient to convert a std::string into a std::string_view using the std::basic_string_view(const CharT* s, size_type count) constructor. This operation is highly resource-efficient.

In contrast, transforming a null-terminated string (const char*) into a suitable data structure typically requires iterating through all the data to retrieve the size of the string. This process has a time complexity of O(n), which can be significant when dealing with very large null-terminated strings, although such situations are relatively rare.

It’s crucial to highlight that a std::string_view performs no heap allocation: it is just a pointer and a size, typically living on the stack. This is made possible by its immutability and its role as a “view” of an existing string, rather than a separate data container.

It’s a good practice to consider using std::string_view whenever you are working with data that effectively behaves like a const std::string& or a const char* without the need for a null terminator.

Last but not least, std::string_view does not have a null terminator character!!!

References

  1. https://stackoverflow.com/a/10319672
  2. https://stackoverflow.com/questions/10315041/meaning-of-acronym-sso-in-the-context-of-stdstring/51796541#51796541
  3. https://cpp-optimizations.netlify.app/small_strings/
  4. https://pvs-studio.com/en/blog/terms/6658/
  5. CppCon 2016: Nicholas Ormrod “The strange details of std::string at Facebook”: https://www.youtube.com/watch?v=kPR8h4-qZdk
  6. CppCon 2015: Marshall Clow “string_view”: https://www.youtube.com/watch?v=H9gAaNRoon4

4 Effortless Tricks to Improve Your Code with ChatGPT

It is widely acknowledged that ChatGPT has gained significant recognition in the field, so let us proceed directly to the valuable techniques I have discovered through my extensive experience as a programmer, which I diligently apply in my everyday coding endeavors.

1. Generate code for languages I’m not familiar with

It is a prevalent practice in software development projects to utilize multiple programming languages. When working with C++, for instance, it is often necessary to employ additional languages such as CMake, Bash, Python, and others. However, it is quite common to use these auxiliary languages for specific tasks and subsequently overlook them.

This approach proves particularly advantageous for C++ projects, as the initial setup process in CMake can be intricate and time-consuming. In situations where a simple configuration is all that is needed to initiate a project, employing this technique can save valuable time and effort.

ChatGPT output for the question: "Generate a CMake project"

By adopting this approach, I am able to initiate a project promptly without the need to comprehend the intricacies of CMake or its usage. This significantly reduces the time required for project setup, enabling me to dive into coding swiftly.

Another language that has proven immensely valuable is Bash. However, working with Bash sometimes necessitates an extensive understanding of the language itself as well as the Linux operating system. This dependency can be mitigated by leveraging the capabilities of ChatGPT. For instance, consider the following example where I require a quick method to obtain a list of all process IDs (PIDs) that are currently running, and subsequently remove a specific process by providing an input parameter.

2. Explain code

In the software development realm, it is not uncommon to encounter scenarios where we need to delve into pre-existing legacy code, whether written by ourselves long ago or by other people. As time goes by, it becomes increasingly challenging to comprehend code that we ourselves wrote in the distant past, and the difficulty amplifies when dealing with external code. However, ChatGPT excels at providing comprehensive explanations of code. Let’s consider the following code snippet:

int randomNumber = Random::get<int>(0, std::numeric_limits<int>::max());
// current date/time based on current system
std::time_t now = std::time(0);

// convert now to string form
std::string time = std::asctime(std::localtime(&now));
std::string title("Simulations/Simulation" + std::to_string(randomNumber) +".csv");
std::ofstream stream(title);
csv2::Writer<csv2::delimiter<','>> writer(stream);

This is the in-depth explanation it produced:

3. Comment code

Generating meaningful and concise comments for code can often be a tedious and challenging task. It requires careful consideration of how other programmers who read the code will comprehend it accurately. Thankfully, ChatGPT can assist in automating this process. Let’s continue with the previous code snippet and ask ChatGPT to generate meaningful comments for it:

// Generate a random number
int randomNumber = Random::get<int>(0, std::numeric_limits<int>::max());

// Get the current system time
std::time_t now = std::time(0);

// Convert the time to a string
std::string time = std::asctime(std::localtime(&now));

// Create the file name for writing the CSV data
std::string title("Simulations/Simulation" + std::to_string(randomNumber) + ".csv");

// Open the file for writing
std::ofstream stream(title);

// Create a CSV writer for the file
csv2::Writer<csv2::delimiter<','>> writer(stream);

4. Improve code

Setting personal ego aside, it is a common occurrence for programmers to find themselves needing to make quick implementations to meet deadlines or due to the desire for efficiency. In such situations, ChatGPT proves to be highly valuable as it excels in improving code in ways that may not have been initially considered by the programmer.

The following code snippet is extracted from a personal project, TwitchLootClicker, which is a Google Chrome extension written in JavaScript. JavaScript is an easy language to pick up quickly, but I am not very familiar with it, so I tend to code as if I were still using C++:

function millisToMinutesAndSeconds(millis) {
    if(millis < 0){
        return "Ready to claim";
    }
    var minutes = Math.floor(millis / 60000);
    var seconds = ((millis % 60000) / 1000).toFixed(0);
    return minutes + ":" + (seconds < 10 ? '0' : '') + seconds;
}

function millisToHoursMinutesAndSeconds(millis){
    var seconds = (millis / 1000).toFixed(0);
    var minutes = Math.floor(seconds / 60);
    var hours = "";
    if (minutes > 59) {
        hours = Math.floor(minutes / 60);
        hours = (hours >= 10) ? hours : "0" + hours;
        minutes = minutes - (hours * 60);
        minutes = (minutes >= 10) ? minutes : "0" + minutes;
    }

    seconds = Math.floor(seconds % 60);
    seconds = (seconds >= 10) ? seconds : "0" + seconds;
    if (hours != "") {
        return hours + "h " + minutes + "m " + seconds + "s";
    }
    return minutes + "m " + seconds + "s ";
}

function minutesToMilliseconds(minutes){
    return (minutes * 60000);
}

The first function serves to convert milliseconds to the format of minutes and seconds (MM:SS). The second function converts milliseconds to the format of hours, minutes, and seconds (HH:MM:SS). Lastly, the third function converts minutes to milliseconds.

Introducing the next prompt to ChatGPT: “Refactor this code using idiomatic JavaScript conventions.” The goal of this prompt is to modify the code to adhere to the standard programming practices followed by JavaScript developers.

/**
 * Converts milliseconds to minutes and seconds
 * @param {number} millis - The time value in milliseconds
 * @returns {string} The formatted time in "minutes:seconds" format
 */
function millisToMinutesAndSeconds(millis) {
  if (millis < 0) {
    return "Ready to claim";
  }

  const minutes = Math.floor(millis / 60000);
  const seconds = Math.floor((millis % 60000) / 1000).toString().padStart(2, '0');
  return `${minutes}:${seconds}`;
}

/**
 * Converts milliseconds to hours, minutes, and seconds
 * @param {number} millis - The time value in milliseconds
 * @returns {string} The formatted time in "hours:h minutes:m seconds:s" format
 */
function millisToHoursMinutesAndSeconds(millis) {
  const seconds = Math.floor(millis / 1000);
  let minutes = Math.floor(seconds / 60); // let, not const: reassigned below
  let hours = "";

  if (minutes > 59) {
    hours = Math.floor(minutes / 60).toString().padStart(2, '0');
    minutes %= 60;
  }

  const formattedMinutes = minutes.toString().padStart(2, '0');
  const formattedSeconds = (seconds % 60).toString().padStart(2, '0');

  if (hours !== "") {
    return `${hours}h ${formattedMinutes}m ${formattedSeconds}s`;
  }

  return `${formattedMinutes}m ${formattedSeconds}s`;
}

/**
 * Convert minutes to milliseconds
 * @param {number} minutes - The time value in minutes
 * @returns {number} The equivalent time value in milliseconds
 */
function minutesToMilliseconds(minutes) {
  return minutes * 60000;
}

These are the improvements ChatGPT made to the former code:

  • Added type annotations to the function parameters for better code documentation.
  • Used template literals (backticks) for string concatenation, making the code more readable.
  • Replaced the toFixed() method with Math.floor() and toString().padStart() to ensure consistent formatting and avoid floating-point errors.
  • Used the const keyword for variables that don’t require reassignment.
  • Made the code more concise and eliminated unnecessary variable assignments.

Code coverage in Rust


What is code coverage

Code coverage is a metric that measures the extent to which code is exercised by tests. It is pretty useful when implementing unit tests, to ensure that your tests cover the code paths you really want them to.
Code coverage is conventionally represented as a numerical percentage, where a lower value implies that less of the code is exercised.

Metrics used for code coverage

While lines of code serve as one metric for assessing code coverage, it is crucial to acknowledge that they are not the sole determinant of comprehensive code protection. Various units of measurement contribute to achieving well-rounded coverage, such as function coverage, statement coverage, branch coverage and condition coverage.

  • Function coverage: Is a vital metric that quantifies the extent to which the defined functions are actually invoked or called during program execution. By measuring function coverage, developers can assess the effectiveness of their tests in exercising all the functions and identifying any potential gaps in code execution.
  • Statement coverage: Is a fundamental metric used to evaluate the degree to which statements are executed during program run-time. It measures the proportion of statements that are traversed and executed by the test suite. By examining statement coverage, developers can gain insights into the thoroughness of their tests in terms of exploring different code paths and identifying any unexecuted or potentially problematic statements.
  • Branch coverage: is a crucial metric that assesses the extent to which different branches of code bifurcations are executed by the test suite. It specifically measures whether all possible branches, such as those within if-else or if-elseif-else conditions, are exercised during program execution. By analyzing branch coverage, developers can determine whether their tests adequately explore various code paths, ensuring comprehensive validation of all possible branch outcomes. This helps identify potential gaps in testing and increases confidence in the reliability and robustness of the code.
  • Condition coverage: Is a metric used to evaluate the adequacy of tests in terms of covering all possible outcomes of boolean sub-expressions. It measures whether different possibilities, such as true or false evaluations of conditions, are effectively tested. By assessing condition coverage, developers can ensure that all potential combinations and variations within boolean expressions are thoroughly examined, mitigating the risk of undetected issues related to specific condition outcomes.

Given the next code snippet:

Rust
pub fn add(x: usize, y: usize) -> usize {
    let mut z = 0;

    if x > 0 && y > 0 {
        z = x;
    }
    z
}

  • Function coverage will be achieved when the function add is executed.
  • Statement coverage will be achieved when the function add is called, such as add(1, 1), ensuring that all the lines within the function are executed.
  • Branch coverage will be achieved when the function is called with add(1, 0) and add(1, 1), as the first call does not cover the if statement and line 5 remains unexecuted, while the second call enters the if statement.
  • Condition coverage will be achieved when the function is called with add(1, 0), add(0, 1), and add(1, 1), encompassing all possible conditions within the if statement.

LLVM-based coverage

In Rust 1.60.0, support for LLVM-based coverage instrumentation was stabilized in rustc.

It is called source-based because it operates on the AST (abstract syntax tree) and preprocessor information.

Code coverage relies on 3 basic steps:

  1. Compiling with coverage enabled: Enabling code coverage during compilation in clang requires the inclusion of specific flags: -fprofile-instr-generate and -fcoverage-mapping.
  2. Running the instrumented program: When the instrumented program concludes its execution, it generates a raw profile file. The path for this file is determined by the LLVM_PROFILE_FILE environment variable. If the variable is not defined, the file will be created as default.profraw in the program’s current directory. If the specified folder does not exist, it will be generated accordingly. The program replaces specific pattern strings with the corresponding values:
    • %p: Process ID.
    • %h: Hostname of the machine.
    • %t: The value of TMPDIR env variable.
    • %Nm: Instrumented binary signature. If N is not specified (run as %m), it is assumed to be N=1.
    • %c: This string does not expand to any specific value but serves as a marker to indicate that execution is constantly synchronized. Consequently, if the program crashes, the coverage results will be retained.
  3. Creating coverage reports: Raw profile files have to be indexed before generating coverage reports. This indexing is performed by llvm-profdata.
Bash
llvm-profdata merge -sparse foo.profraw -o foo.profdata

Generate code coverage in Rust

To generate coverage reports for our tests, we can follow the documentation provided by Rust and LLVM. The first step is to install the LLVM tools.

Bash
rustup component add llvm-tools-preview

After installing the LLVM tools, we can proceed to generate the code coverage. It is highly recommended to delete any previous results to avoid potential issues. To do so, we need to execute the following sequence of commands:

Bash
# Remove possible existing coverages
cargo clean && mkdir -p coverage/ && rm -rf coverage/*
CARGO_INCREMENTAL=0 RUSTFLAGS='-Cinstrument-coverage' LLVM_PROFILE_FILE='coverage/cargo-test-%p-%m.profraw' cargo test

This will execute cargo test and compute coverage for all the tests executed. The process generates the *.profraw files that contain the coverage information.

Generate HTML reports

To visualize the coverage results more effectively, we can utilize the grcov tool, which generates HTML static pages. Installing grcov is straightforward and can be done using cargo.

Bash
cargo install grcov


Once grcov is installed, we can proceed to generate the HTML files that will display the coverage results.

Bash
grcov . --binary-path ./target/debug/deps/ -s . -t html --branch --ignore-not-existing --ignore '../*' --ignore "/*" -o target/coverage/

  • --binary-path: Set the path to the compiled binary that will be used.
  • -s: Specify the root directory of the source files.
  • -t: Set the desired output type. Options include:
    • html for a HTML coverage report.
    • coveralls for the Coveralls specific format.
    • lcov for the lcov INFO format.
    • covdir for the covdir recursive JSON format.
    • coveralls+ for the Coveralls specific format with function information.
    • ade for the ActiveData-ETL specific format.
    • files to only return a list of files.
    • markdown for an easily human-readable Markdown report.
  • --branch: Enable parsing of branch coverage information.
  • --ignore-not-existing: Ignore source files that cannot be found on the disk.
  • --ignore: Ignore files or directories specified as globs.
  • -o: Specifies the output path.

Upon successful execution of the aforementioned command, an index.html file will be generated inside the target/coverage/ directory. The resulting HTML file will provide a visual representation of the coverage report, presenting the coverage information in a structured and user-friendly manner.

Generate lcov files

Indeed, in addition to HTML reports, generating an lcov file can be beneficial for visualizing the coverage results with external tools like Visual Studio Code. To generate an lcov file using grcov, you can use the same command as before but replace the output type “html” with “lcov.” This will generate an lcov file that can be imported and viewed in various coverage analysis tools, providing a comprehensive overview of the code coverage in a standardized format.

Bash
grcov . --binary-path ./target/debug/deps/ -s . -t lcov --branch --ignore-not-existing --ignore '../*' --ignore "/*" -o target/coverage/lcov.info

Finally, we can install an extension to interpret this information. In my case, I will use Coverage Gutters.

With that, VS Code will look something like this, where we can see visually and dynamically which lines of our code are being tested and the total coverage percentage of the current file.

This is the script I recommend using to generate the code coverage with both HTML and LCOV files.

Bash
#!/bin/bash

# Define color variables
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
RED='\033[0;31m'
NC='\033[0m'

function cleanup() {
  echo -e "${YELLOW}Cleaning up previous coverages...${NC}"
  cargo clean && mkdir -p coverage/ && rm -rf coverage/*
  echo -e "${GREEN}Success: Crate cleaned successfully${NC}" 
}

function run_tests() {
  echo -e "${YELLOW}Compiling and running tests with code coverage...${NC}"
  CARGO_INCREMENTAL=0 RUSTFLAGS='-Cinstrument-coverage' LLVM_PROFILE_FILE='coverage/cargo-test-%p-%m.profraw' cargo test --workspace
  if [[ $? -ne 0 ]]; then
    echo -e "${RED}Error: Tests failed to execute${NC}"
    exit 1
  fi
  echo -e "${GREEN}Success: All tests were executed correctly!${NC}"
}

function generate_coverage() {
  echo -e "${YELLOW}Generating code coverage...${NC}"
  grcov . --binary-path ./target/debug/deps/ -s . -t html --branch --ignore-not-existing --ignore '../*' --ignore "/*" -o target/coverage/ && \
  grcov . --binary-path ./target/debug/deps/ -s . -t lcov --branch --ignore-not-existing --ignore '../*' --ignore "/*" -o target/coverage/lcov.info
  if [[ $? -ne 0 ]]; then
    echo -e "${RED}Error: Failed to generate code coverage${NC}"
    exit 1
  fi
  echo -e "${GREEN}Success: Code coverage generated correctly!${NC}"
}

echo -e "${GREEN}========== Running test coverage ==========${NC}"
echo

cleanup

run_tests

generate_coverage

Using Tarpaulin

There is an existing powerful tool called Tarpaulin that can do all this for us. We can install it with cargo:

Bash
cargo install cargo-tarpaulin

And finally, execute with a single command:

Bash
cargo tarpaulin
Jun 18 16:54:56.341  INFO cargo_tarpaulin::config: Creating config
Jun 18 16:54:56.433  INFO cargo_tarpaulin: Running Tarpaulin
Jun 18 16:54:56.433  INFO cargo_tarpaulin: Building project
Jun 18 16:54:56.433  INFO cargo_tarpaulin::cargo: Cleaning project
   Compiling rust_sandbox v0.1.0 (/home/imrubensi/rust_sandbox)
    Finished test [unoptimized + debuginfo] target(s) in 0.35s
Jun 18 16:54:56.907  INFO cargo_tarpaulin::process_handling::linux: Launching test
Jun 18 16:54:56.907  INFO cargo_tarpaulin::process_handling: running /home/imrubensi/rust_sandbox/target/debug/deps/rust_sandbox-0bab96c8aa79a774

running 1 test
test tests::it_works ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.02s

Jun 18 16:54:57.137  INFO cargo_tarpaulin::report: Coverage Results:
|| Uncovered Lines:
|| Tested/Total Lines:
|| src/lib.rs: 4/4
|| 
100.00% coverage, 4/4 lines covered

Branch and condition coverage are the two main features missing from this project. Apart from that, I do think Tarpaulin is a good choice if you want to quickly assess your code coverage.


Swap_remove in Rust

Introduction

The other day, looking at the std::vec::Vec documentation in Rust, I found an interesting function called swap_remove.

First, we have to talk about the obvious difference between the standard remove, which everybody has used before, and swap_remove, a trickier implementation for some use cases:

Remove

Remove, as defined in its documentation, has the signature pub fn remove(&mut self, index: usize) -> T. It takes the vector by mutable reference and the index of the element to be removed, and returns that element.

What is important to mention is how Rust (and almost every other language) handles vectors. Vectors in computer science are defined as a collection of elements contiguous in memory; that means the distance between two adjacent elements is the size of one element.

4 blocks with the values 1,2,3,4, split into 4 bytes each since they are signed integers of 4 bytes

Once we delete an element in the middle of the structure (if you want to delete the first or last element, Rust recommends using VecDeque instead), Rust has to shift the remaining elements to avoid empty spots in the collection. This work has O(n) complexity in the worst case. Which means that if we have one million elements and we want to remove the second one, the vector will have to shift 999,998 elements!!!

Swap_remove

The function swap_remove handles this problem. It swaps the element to be erased with the last element, then pops the back to remove it. For example, if we remove the value 2, the vector will look like the following:

3 blocks with the values 1,4,3, split into 4 bytes each since they are signed integers of 4 bytes
Since accesses and insertions in a vector are O(1), this gives vec::swap_remove a complexity of O(1). Which means, in the previous example, we make 999,997 fewer operations than with the remove method. Cool, right?

When to use it or not

With the last explanation, it seems like we should always use swap_remove instead of remove, right? That’s wrong!!!

Vectors are a powerful structure when the values are kept sorted. That invalidates the whole point of using swap_remove, since we would have to sort the values again afterwards.

Rust
let mut v: Vec<i32> = vec![1, 2, 3, 4];
assert!(v.is_sorted());
v.swap_remove(1); // v = [1, 4, 3]
assert!(!v.is_sorted());

We have to be careful when working with raw pointers in Rust, since they are unsafe, but I will show some of the surprises we can run into.

Rust
let mut v: Vec<i32> = vec![1 ,2, 3, 4];
let points_to_four = &v[3] as *const i32;
let points_to_two = &v[1] as *const i32;
assert_eq!(unsafe{*points_to_four}, 4);
assert_eq!(unsafe{*points_to_two}, 2);
v.swap_remove(1);
assert_eq!(unsafe{*points_to_four}, 4); // Could be false at some point that we can not control
assert_ne!(unsafe{*points_to_two}, 2); // Now points to the value 4 at v[1]

This test was very interesting. It seems like Rust copies the elements to swap instead of moving them, which is understandable since i32 implements the Copy and Clone traits.

These are unsafe operations and we have to be very careful about what we are doing, because if we had a Vec<T> where T is not Copy, the assert might or might not hold. Even if it holds now because the value is copied, this could change in the future once the allocator reclaims that freed memory. This happens as well with vec::remove.

Rust
let mut v: Vec<i32> = vec![1, 2, 3, 4];
let last_ptr = &v[3] as *const i32;
assert_eq!(unsafe{last_ptr.read()}, 4);
v.swap_remove(1);
assert_eq!(unsafe{last_ptr.read()}, 4); // Still true
v.insert(1, 10);
assert_eq!(unsafe{last_ptr.read()}, 4); // FAILS: left: 3, right: 4. The insert shifted the value 3 to the right, into the last slot.

Conclusion

We should use vec::remove when we care about the order and vec::swap_remove otherwise. Working with raw pointers while removing elements from a vector is highly unsafe, but at worst… we are no worse off than in C++, right?

The reason you should test your personal projects

Why do we underestimate testing?

Testing in software development tends to be the forgotten and underestimated part. It takes time and does not produce real products or concrete features, which is why it usually gets relegated in order to prioritize other aspects of the development cycle.

There is a rule that says you should have one line of tests for every line of code. In practice, this is rarely fulfilled, usually due to time, money, and resources. Companies often underestimate how powerful testing is and spend a lot of hours finding and fixing bugs that shouldn't even have reached production.

Testing in big projects

Testing in big projects is always a must. You may have more or fewer tests, better or worse ones, but there is always some testing in huge projects where a lot of people are involved and one minor change can snowball into a serious bug. Companies are aware of that and tend to force their developers to write tests, but let's be honest: testing is usually the part developers hate to implement, and they tend to get rid of it as soon as possible, leaving a lot of cases uncovered.

Testing in small and personal projects

But obviously, if there is one place where testing is absolutely forgotten, it is in small and mainly personal projects. Hey! I am not saying that you have to “waste” time testing your small project used by absolutely nobody in the world. But this trend of not writing tests in small projects tends to extend to libraries and small software products that do have users.

Almost 6 months ago, I started learning Rust and trying to actively contribute my small grain of sand to the ecosystem the community is building around it. That includes libraries, frameworks, external tools and also projects.

When you add a library made by someone else to your project, you are trusting that piece of software. You take for granted that everything works as intended. And you, as a developer, have the moral responsibility to make this true; otherwise, you are not only shipping bad software but also making other developers waste their time because of you.

Hidden gems in testing

Programmers usually understand testing as a way to ensure that their code behaves correctly in any possible scenario that may happen during execution.

While this is true, it is only one of the reasons for testing. Apart from covering all the possible edge cases your program can have, there are other ways in which testing has your back.

Development speed in the future

Adding tests to your software gives you the ability to quickly ensure that any modification you make will not break the existing logic. This is the main reason testing is done in big projects. You want to implement a new feature or fix a bug, and if testing is properly done, you do not have to worry about breaking other things with your new implementation, because the tests will ensure that for you.

Third party libraries

If we think of ourselves as the only people who can break our software, we are wrong. Usually, in product development we use a bunch of third-party libraries that are constantly updating and changing their logic. Maybe that third-party library you used for a function called getDollars(float euros) has changed in its last release, and its new implementation now looks more like getDollars(float pounds).

This change in a third-party library will produce a silent bug in our code. Now all of our conversions will return an incorrect value, since we thought we were converting our euros to dollars. But the program will not crash in any way unless we wrote the proper tests for that.
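A minimal sketch of such a test in Rust, assuming a hypothetical get_dollars wrapper and a made-up conversion rate of 1.10 purely for illustration:

```rust
// Hypothetical wrapper around the third-party conversion; the 1.10 rate is
// an assumption for this sketch, not a real exchange rate.
fn get_dollars(euros: f64) -> f64 {
    euros * 1.10
}

fn main() {
    // Pin down the semantics: 100 EUR should yield 110 USD under the assumed
    // rate. If the function silently switched from euros to pounds, this
    // expectation would no longer hold and the test would fail loudly.
    let dollars = get_dollars(100.0);
    assert!((dollars - 110.0).abs() < 1e-9);
}
```

A test like this turns a silent semantic change into an immediate, visible failure.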

External changes

Sometimes our software depends on some public data or values that we take for granted as immutable. This is the particular case that made me write this post.

Recently, I have been working on a Rust project that I will talk more about soon. The project consists of web scraping a website in order to retrieve information and process it. For this reason, the implementation of my logic heavily depends on how the web page is structured, and any minor change to the website made by its developers can easily break all my code.

Obviously, I cannot anticipate how and when the web programmers and UI/UX teams will update the site. In this case, as well as tests that try to cover all the possible failures if the page layout changes, I also want my tests to fail the day that happens. For example:

HTML
<!-- Version 1.0 -->
<html>
	<body>
        <p> Valuable data $$ </p>
    </body>
</html>
C++
// logic.cpp
std::string data = get_first_element_in_body();

In this case our logic is simple: we want to get the first element inside the body in order to retrieve our valuable data. But then the web page changes and this is the new code:

HTML
<!-- Version 1.1 -->
<html>
	<body>
        <h1> Stupid title </h1>
        <p> Valuable data $$ </p>
    </body>
</html>

With this small change, all our code is absolutely and silently broken. The developers decided they wanted to add a title before that information for aesthetic reasons, and now our web scraping does not return the same result as before. This would be very difficult to find without tests for it. Imagine we were storing this data directly in a database or sending it to another service. We wouldn't know anything until a user told us.

This can be prevented with a small and simple test:

C++
// test.cpp
std::string data = get_first_element_in_body();
assert(is_this_data_valuable(data)); // runtime assert: static_assert cannot check values only known at runtime

Conclusion

Testing is as important as the actual product implementation. We have to start thinking of it as a life saver, not as wasted time. In this article, I also showed that testing is not only important for breaking changes that you may introduce in your own code, but also for silent bugs introduced by others. I hope that after this read you start seeing testing this way. You cannot imagine how much stress it takes away to know that your product works as intended just by running a single command that executes the tests, or even a CI flow that ensures nobody can merge a change that breaks the existing implementation.