Modern C++ Firmware: Proven Strategies for Tiny, Critical Systems (Part 6/10)

No Allocation in the Loop: Memory Rules That Survive CI

Parts 3 through 5 established the core theme: determinism comes from rules that you encode into code and tooling. Memory is where many embedded systems lose that discipline over time. A feature lands, a buffer grows, a “temporary” vector appears, and suddenly timing slips and SRAM evaporates.

This post is about a memory policy that still holds after six releases and three developers, plus where the Embedded Template Library (ETL) fits when the standard library is a poor match for your target.

The boundary: where allocation is allowed

You want three zones, each with different rules:

  • Hot path: control loop and ISR-adjacent code. No dynamic allocation, no surprises.
  • Warm path: initialization, configuration, command parsing. Still constrained, but you may allow limited allocation if you can justify and bound it.
  • Cold path: host tools and tests. Allocate freely.

If you only pick one rule, pick this one:

  • Target code can allocate at startup. It must not allocate in the periodic loop.

In many systems the stricter rule is better: no heap usage anywhere on target. It simplifies the safety story and prevents “it only allocates sometimes” regressions.
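The pragmatic rule can be made mechanical rather than aspirational: route all startup allocation through a fixed arena so the heap is never touched. A minimal sketch using `std::pmr` (the arena size, function names, and buffer sizes here are illustrative, not from any particular project):

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <memory_resource>
#include <vector>

// Illustrative sketch: all "allocation" happens at startup, out of a fixed
// arena. monotonic_buffer_resource never frees, and wiring its upstream to
// null_memory_resource() turns arena exhaustion into a hard failure instead
// of a silent fallback to the heap.
inline std::pmr::monotonic_buffer_resource& startup_pool() {
    static std::array<std::byte, 4096> arena{};
    static std::pmr::monotonic_buffer_resource pool{
        arena.data(), arena.size(), std::pmr::null_memory_resource()};
    return pool;
}

// Startup: size the container once, out of the arena.
inline std::pmr::vector<std::uint16_t> make_sample_buffer() {
    std::pmr::vector<std::uint16_t> samples{&startup_pool()};
    samples.reserve(256);  // capacity is fixed before the loop ever runs
    return samples;
}
```

After `reserve()`, the periodic loop can `push_back` up to the reserved capacity without any further allocation; exceeding the arena fails loudly at startup, which is exactly when you want to find out.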

Core rules

No dynamic allocation in the control loop

Dynamic allocation fails determinism in three ways:

  • Allocator runtime varies due to fragmentation and internal data structures.
  • Failure happens at arbitrary times rather than at known boundaries.
  • Allocations get introduced indirectly through libraries and “convenient” abstractions.

Policy options:

  • Strict: no heap usage on target at all.
  • Pragmatic: heap allowed during startup only; hot path must be allocation-free.

Either is defensible. What is not defensible is “we avoid allocation, mostly.”

Use fixed-capacity storage everywhere

Own storage with std::array (or equivalent fixed buffers), pass it with std::span, and prefer containers whose capacity is a compile-time constant.

You want data structures with explicit capacity and explicit overflow behavior:

  • Fixed-capacity queues that drop and count.
  • Circular buffers sized at compile time.
  • Fixed-size strings for diagnostics and command parsing.

Overflow and saturation must be explicit

In deterministic firmware, overflow is not exceptional. It is a state.

Do not throw. Do not silently wrap. Do not “just crash.”

Choose one policy per buffer:

  • Drop newest and increment a counter.
  • Drop oldest and increment a counter.
  • Saturate a value to a limit and increment a counter.
  • Transition to a known fault mode.

Make that behavior part of the API.
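As a sketch of the saturate-and-count policy surfaced in an API (the type and field names are illustrative, and only an upper limit is handled):

```cpp
#include <cstdint>

// Illustrative sketch of "saturate a value to a limit and increment a
// counter". Saturation is not hidden inside arithmetic; it is the API.
struct SaturatingAccumulator {
    std::int32_t value{0};
    std::int32_t limit;
    std::uint32_t saturation_count{0};

    // Returns false when the addition had to be clamped, and records it.
    [[nodiscard]] bool try_add(std::int32_t delta) noexcept {
        const std::int64_t next =
            static_cast<std::int64_t>(value) + delta;  // widen: no UB overflow
        if (next > limit) {
            value = limit;       // saturate, do not wrap
            ++saturation_count;  // make the near-miss observable
            return false;
        }
        value = static_cast<std::int32_t>(next);
        return true;
    }
};
```

A telemetry pass can then report `saturation_count` long before the clamping becomes a field failure.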

ETL: where it fits in a modern C++ firmware stack

The Embedded Template Library (ETL) exists for a reason: many embedded toolchains historically had incomplete, heavy, or undesirable standard library implementations for small targets, and embedded teams needed deterministic containers with fixed capacity.

When ETL makes sense:

  • You want containers that look like STL but are fixed-capacity by construction (vector, string, deque, queue, map variants depending on what you use).
  • You want predictable memory behavior without relying on the quality of libstdc++ on your target.
  • You want a uniform container set across multiple embedded toolchains.

How ETL fits with the approach in this series:

  • The policy stays the same: no allocation in hot paths, explicit capacity, explicit overflow behavior.
  • ETL becomes another tool for implementing that policy, especially when std:: containers are unavailable or inappropriate on your target.
  • You still prefer std::span at interfaces where possible, because spans keep APIs honest regardless of the underlying container.

Practical guidance:

  • Use ETL containers in target code when they measurably reduce risk versus your target’s standard library.
  • Keep host-only code on the normal STL unless you have a strong reason not to. Tests and tools benefit from standard behavior and rich ecosystems.
  • Do not mix container “worlds” casually. Pick a simple rule such as: core target code uses ETL containers for ownership, and uses spans for APIs.

Unsolicited advice: ETL is most valuable when you treat it as an implementation detail behind clear APIs, not as a new default everywhere. If you expose ETL types across module boundaries indiscriminately, you will eventually regret it.

One tight example: fixed-capacity ring buffer

This is the pattern you want for UART TX queues, event queues, sample buffers, and telemetry staging.

#include <array>
#include <cstddef>
#include <cstdint>

template <typename T, std::size_t N>
class RingBuffer final {
public:
    [[nodiscard]] bool try_push(const T& value) noexcept {
        if(this->count_ >= N) {
            ++this->overflow_count_;
            return false;
        }
        this->data_.at(this->write_) = value;
        this->write_ = (this->write_ + 1U) % N;
        ++this->count_;
        return true;
    }

    [[nodiscard]] bool try_pop(T& out) noexcept {
        if(this->count_ == 0U) {
            return false;
        }
        out = this->data_.at(this->read_);
        this->read_ = (this->read_ + 1U) % N;
        --this->count_;
        return true;
    }

    [[nodiscard]] std::size_t size() const noexcept { return this->count_; }
    [[nodiscard]] std::uint32_t overflow_count() const noexcept { return this->overflow_count_; }

private:
    std::array<T, N> data_{};
    std::size_t read_{0U};
    std::size_t write_{0U};
    std::size_t count_{0U};
    std::uint32_t overflow_count_{0U};
};

Notes:

  • Uses std::array::at() to make bounds violations loud: it throws in development builds, and with exceptions disabled it typically terminates rather than silently corrupting memory.
  • Exposes overflow as a counter so you can detect “near miss” behavior before it becomes a failure.

The same design works if you replace std::array with an ETL fixed-capacity container internally, but the important part is the policy: explicit capacity, explicit failure behavior, no hidden allocation.

Binary hygiene: keep host luxuries off the target

A large fraction of “C++ is too big” comes from linking in the wrong things.

Rules that work:

  • Split into libraries with different constraints:
    • Core target library: strict rules, no heap in loop, no heavy headers.
    • Platform library: vendor SDK glue, drivers, ISRs.
    • Host tools library: parsing, visualization, richer containers, file I/O.
  • Ensure the core library cannot include host-only headers or dependencies.
  • Ensure host tools cannot accidentally get linked into firmware.

If you do this, you can keep rich analysis and tooling without paying for it in flash.
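Build systems make this split enforceable rather than aspirational. A minimal CMake sketch (target names, paths, and flags are illustrative, not from the post):

```cmake
# Illustrative layering: core cannot see host-only code, and host tools
# cannot leak into firmware images.
add_library(fw_core STATIC src/core/control.cpp src/core/ring_buffer.cpp)
target_compile_options(fw_core PRIVATE -fno-exceptions -fno-rtti -Werror)
target_include_directories(fw_core PUBLIC include/core)   # no host headers here

add_library(fw_platform STATIC src/platform/uart.cpp src/platform/isr.cpp)
target_link_libraries(fw_platform PUBLIC fw_core)

add_executable(firmware src/main.cpp)
target_link_libraries(firmware PRIVATE fw_platform)       # core + platform only

add_executable(host_tools tools/telemetry_viewer.cpp)
target_link_libraries(host_tools PRIVATE fw_core)         # free to use rich STL
```

Because `host_tools` is never in `firmware`'s link line, host luxuries cannot cost you flash by accident.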

ETL aligns well with this split: it is a target-side choice. Host tools can remain on the standard library and remain productive.

Encoding memory rules into code conventions

A memory policy should be visible at the call site.

Conventions that help:

  • Use try_ prefixes for bounded operations that can fail without side effects: try_push, try_parse, try_encode.
  • Mark important return values [[nodiscard]].
  • Prefer std::span in interfaces over raw pointers.
  • Prefer explicit sizes and named constants over literals.

Avoid hiding memory behavior:

  • No “magic growth” containers in target code.
  • No formatting libraries that allocate.
  • No implicit dynamic dispatch mechanisms that might allocate behind the scenes.

Tooling support: make violations fail automatically

Compiler and linker flags

Enforce the basic constraints:

  • Disable exceptions and RTTI in target builds.
  • Warnings as errors.
  • Keep flags consistent across variants, and make variant differences explicit.

Static analysis to block banned patterns

Use clang-tidy, plus simple project-specific checks, to catch:

  • new, delete, malloc, free usage in target code.
  • Inclusion of forbidden headers (<vector>, <string>, <functional>) in core hot-path modules.
  • std::function usage in hot paths.
  • C-style arrays in core modules.

This is also where you enforce “ETL is allowed here, STL is allowed there” by directory or target boundaries.
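As one possible starting point, a per-directory .clang-tidy for the core modules might look like this (the exact check list is an assumption; tune it to your codebase and clang-tidy version):

```yaml
# Hypothetical .clang-tidy placed in the core target library's directory.
Checks: '-*,cppcoreguidelines-no-malloc,cppcoreguidelines-owning-memory,modernize-avoid-c-arrays,portability-restrict-system-includes'
WarningsAsErrors: '*'
CheckOptions:
  # Allow every system header except the ones banned in hot-path code.
  - key: portability-restrict-system-includes.Includes
    value: '*,-vector,-string,-functional'
```

Because clang-tidy configs nest by directory, the host-tools tree can carry a permissive config while the core tree stays strict.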

CI size gates: fail the build if budgets regress

The most effective “memory discipline” tool is a hard budget gate.

Policy:

  • Every target build variant must pass flash and SRAM limits.
  • The pipeline fails immediately on overflow.
  • Optional: define per-variant headroom requirements so you do not run at 99% forever.

Unsolicited advice: do not treat size checks as “release only.” Run them on every merge request.

A minimal checklist

If you want the shortest version of this post:

  • No dynamic allocation in the control loop.
  • Prefer no allocation anywhere on target if you can.
  • Use fixed-capacity structures and expose overflow explicitly.
  • Own memory with std::array or ETL fixed-capacity containers, pass views with std::span.
  • Split host tooling from target code so rich features do not bloat firmware.
  • Enforce budgets and banned patterns in CI.

Part 7 will take advantage of these boundaries: host-first testing with mocks, where the core remains allocation-free and deterministic while the test harness can be as rich as you want.

Need professional firmware development help? Engage with Polyrhythm!

