
GitLab Pipeline Blueprint and a Migration Checklist
Parts 1 through 9 were about technical discipline: standard selection, deterministic rules, time handling, clean boundaries, memory policy, host-first testing, HIL automation, and observability without bloat.
Part 10 is about making that discipline real. The GitLab CI/CD pipeline is where your rules become enforceable. If a rule is not enforced automatically, it is optional.
This post provides a practical pipeline blueprint and a checklist for migrating an existing codebase without turning the effort into a multi-month rewrite.
The goal: fail early, fail for the right reasons
A firmware pipeline should answer three questions on every change:
- Did we break correctness?
- Did we break determinism constraints?
- Did we break the ability to ship (size budgets, toolchain, reproducibility)?
The pipeline should fail early and cheaply. You want to reject bad changes before anyone flashes hardware.
A pipeline blueprint that matches the series
A useful structure is nine stages. Names are less important than behavior.
Lint and style
Purpose: stop style churn and low-value review debates.
Typical jobs:
- clang-format check
- basic lint scripts (forbidden headers in core modules, file naming rules)
Gate policy: usually blocking. If a style issue can be auto-fixed, do not allow it to linger.
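A minimal host-side sketch of the clang-format gate. It assumes clang-format 10 or newer (which added --dry-run and --Werror) is on PATH, and the source globs are placeholders for your actual layout:

```python
#!/usr/bin/env python3
"""CI lint gate: fail if any tracked source file is not clang-format clean."""
import pathlib
import subprocess
import sys

# Adjust these globs to your repository layout (assumption for illustration).
SOURCE_GLOBS = ("src/**/*.cpp", "src/**/*.hpp", "include/**/*.hpp")

def collect_sources(root: pathlib.Path) -> list[pathlib.Path]:
    files: list[pathlib.Path] = []
    for pattern in SOURCE_GLOBS:
        files.extend(root.glob(pattern))
    return sorted(files)

def main() -> int:
    sources = collect_sources(pathlib.Path("."))
    if not sources:
        return 0  # nothing to check; avoid clang-format waiting on stdin
    # --dry-run --Werror makes clang-format a pure checker: it rewrites
    # nothing, prints a diagnostic per violation, and exits nonzero.
    result = subprocess.run(
        ["clang-format", "--dry-run", "--Werror", *map(str, sources)]
    )
    return result.returncode

if __name__ == "__main__":
    sys.exit(main())
```

Because the check is a dry run, the same invocation without --dry-run doubles as the auto-fix command, which keeps the "do not let style issues linger" policy cheap.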
Static analysis
Purpose: catch rule violations and likely defects before builds and tests.
Typical jobs:
- clang-tidy with a project config tuned to your rules
- cppcheck if you like having a second viewpoint
- custom checks for:
  - allocation in target code
  - forbidden includes in core
  - std::function or virtual usage in hot-path modules
  - C-style arrays in core modules
Gate policy: blocking for rule violations, warning-only for advisory checks at first. Promote to blocking once the baseline is clean.
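The custom checks do not need heavy tooling; a small scanner run as a CI job is enough. This sketch assumes a core/ directory and an illustrative pattern list (the stm32 header prefix and the patterns themselves are examples, and the line-based regex match is deliberately naive, so it will also flag matches inside comments):

```python
"""Custom CI check: reject forbidden patterns in core modules."""
import pathlib
import re
import sys

# Patterns that must never appear in core/ sources (illustrative list).
FORBIDDEN = [
    (re.compile(r'#include\s+[<"]stm32'), "vendor header in core"),
    (re.compile(r"\bnew\b|\bmalloc\s*\("), "dynamic allocation in core"),
    (re.compile(r"\bstd::function\b"), "std::function in hot path"),
]

def scan_file(path: pathlib.Path) -> list[str]:
    """Return one 'file:line: reason' string per violation found."""
    violations = []
    for lineno, line in enumerate(path.read_text().splitlines(), start=1):
        for pattern, reason in FORBIDDEN:
            if pattern.search(line):
                violations.append(f"{path}:{lineno}: {reason}")
    return violations

def main() -> int:
    violations = []
    for path in pathlib.Path("core").rglob("*.[ch]pp"):
        violations.extend(scan_file(path))
    for v in violations:
        print(v, file=sys.stderr)
    return 1 if violations else 0

if __name__ == "__main__":
    sys.exit(main())
```

Start this job as warning-only, burn down the baseline, then flip the exit code to blocking.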
Host build
Purpose: compile the core and tests in a fast, reproducible environment.
Typical jobs:
- GCC build (host)
- Clang build (host)
Gate policy: blocking.
Host tests and coverage
Purpose: validate most behavior without hardware, every time.
Typical jobs:
- GoogleTest (or your framework) on host
- Coverage report generation
Gate policy: blocking. Enforce a coverage threshold for core logic. Start modest, then ratchet up.
Unsolicited advice: do not set “95%” as a day-one requirement unless you already have it. Pick a threshold you can meet, then raise it deliberately.
Sanitizers
Purpose: kill memory and UB bugs cheaply.
Typical jobs:
- ASAN (blocking)
- UBSAN (blocking)
- TSAN is optional and often noisy for embedded-style code, but it can be valuable for host-side concurrency
Gate policy: blocking for at least ASAN and UBSAN on merge requests.
Target firmware builds
Purpose: prove that the real toolchain still builds, and that every shipping variant fits.
Typical jobs:
- target build for each firmware variant you ship (protocol variants, feature variants, IoTest variant)
- map file generation and retention as artifacts
Gate policy: blocking.
Size and budget gates
Purpose: enforce flash and SRAM budgets continuously.
Typical jobs:
- parse the size tool's output (or the map file) and fail if flash or SRAM exceeds the configured limits
- optionally enforce headroom (for example, require 5-10% flash headroom)
Gate policy: blocking.
Unsolicited advice: enforce budgets per variant. A debug variant can lie to you about size pressure if you only measure one build.
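A size gate can be a short script per variant. This sketch parses Berkeley-format output from arm-none-eabi-size; the budget numbers and the 10% headroom figure are example assumptions, not recommendations:

```python
"""Size gate sketch: fail the job if a variant busts its flash/SRAM budget."""
import subprocess
import sys

FLASH_BUDGET = 256 * 1024   # example: 256 KiB flash part
SRAM_BUDGET = 64 * 1024     # example: 64 KiB SRAM
HEADROOM = 0.10             # example: require 10% free flash

def parse_size_output(output: str) -> tuple[int, int]:
    """Return (flash_bytes, sram_bytes) from Berkeley `size` output."""
    data_line = output.strip().splitlines()[1]   # skip the header row
    text, data, bss = (int(v) for v in data_line.split()[:3])
    flash = text + data   # code plus initialized-data image live in flash
    sram = data + bss     # initialized plus zero-initialized data live in RAM
    return flash, sram

def check(elf_path: str) -> int:
    out = subprocess.run(
        ["arm-none-eabi-size", elf_path],
        capture_output=True, text=True, check=True,
    ).stdout
    flash, sram = parse_size_output(out)
    flash_limit = int(FLASH_BUDGET * (1.0 - HEADROOM))
    print(f"flash {flash}/{flash_limit} (budget {FLASH_BUDGET}), "
          f"sram {sram}/{SRAM_BUDGET}")
    return 0 if flash <= flash_limit and sram <= SRAM_BUDGET else 1

if __name__ == "__main__":
    sys.exit(check(sys.argv[1]))
```

Run it once per variant artifact so a lean debug build cannot mask size pressure in a shipping configuration.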
Hardware-in-the-loop jobs
Purpose: validate the hardware truths, but keep it operationally realistic.
Typical jobs:
- flash IoTest image
- run pytest HIL suite
- upload trace artifacts and decoded reports
Gate policy: best handled as:
- scheduled/nightly and on release branches, plus
- manual on merge requests for hardware-related changes, if you have limited hardware runners
If you can afford dedicated HIL runners, you can make HIL blocking on main. If not, keep it disciplined but practical.
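The shape of the pytest HIL job can be small. In this sketch the command name (GPIO?), the reply grammar (OK key=value ... / ERR code), and the serial settings are all assumptions for illustration; match them to your actual IoTest firmware:

```python
"""Sketch of a pytest HIL harness for an ASCII IoTest protocol."""
import pytest

def parse_reply(line: str) -> dict[str, str]:
    """Parse one 'OK key=value ...' reply line; raise on anything else."""
    tokens = line.strip().split()
    if not tokens or tokens[0] != "OK":
        raise RuntimeError(f"device error: {line.strip()!r}")
    return dict(token.split("=", 1) for token in tokens[1:])

@pytest.fixture(scope="session")
def port():
    # pyserial is only needed on HIL runners; skip cleanly elsewhere.
    serial = pytest.importorskip("serial")
    s = serial.Serial("/dev/ttyUSB0", 115200, timeout=2.0)  # assumed port
    yield s
    s.close()

def test_gpio_readback(port):
    port.write(b"GPIO? 4\n")
    fields = parse_reply(port.readline().decode("ascii"))
    assert fields["gpio"] in ("0", "1")
```

Keeping the reply parser a plain function means the protocol logic itself is unit-testable on any runner, with only the fixtures tied to hardware.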
Packaging and release artifacts
Purpose: produce traceable, reproducible outputs.
Typical jobs:
- package ELF/HEX/BIN, map files, version info, and release notes
- publish artifacts for production or for manufacturing workflows
Gate policy: typically runs on tagged releases or main branch merges.
Making the pipeline enforce the rules from Parts 3-9
If the series is going to matter, the pipeline should encode the rules:
- Part 3 rules:
  - target build flags disable exceptions and RTTI
  - clang-tidy checks reject forbidden patterns in hot paths
- Part 4 rules:
  - chrono usage encouraged in APIs; tick conversions isolated to platform modules
  - host tests cover scheduling slip behavior
- Part 5 rules:
  - core cannot include vendor headers
  - platform boundary is enforced by directory or target-level include restrictions
- Part 6 rules:
  - allocation in core or hot-path modules is rejected
  - fixed-capacity containers are required on target boundaries
  - overflow counters are not ignored
- Part 7 rules:
  - host-first tests run on every merge request
  - scenario tests cover core behavior
- Part 8 rules:
  - IoTest build exists and is testable
  - HIL suite is runnable and produces artifacts
- Part 9 rules:
  - trace format is fixed-size, versioned, and decoded on host
  - observability does not rely on production logging bloat
The pattern is always the same: codify the rule, then make CI enforce it.
A migration checklist that does not require a rewrite
Many teams want the outcomes of this series but are sitting on a legacy codebase. The fastest way to fail is to attempt a full architecture rewrite first.
Here is a staged migration plan that tends to work.
Step 1: Lock down the CI/CD toolchain and build reproducibility
- Containerize the host build toolchain.
- Pin compiler versions in CI.
- Get a target build job running in CI, even if tests are not ready.
Success criterion: CI can build host and target reliably, every time.
Step 2: Create a testable core boundary for CI/CD
- Identify core logic that can be separated from hardware.
- Move it into a core library with no vendor includes.
- Keep platform code in a separate library or directory.
Success criterion: core compiles on host without vendor SDK.
Step 3: Adopt the “most recent standard minus one” baseline
- Set target to C++20.
- Keep C++23 as opt-in, preferably host-only at first.
- Disable exceptions and RTTI on target builds.
Success criterion: target builds with the intended flags and no exemptions.
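The "no exemptions" criterion is checkable. If the target build exports compile_commands.json (with CMake, set CMAKE_EXPORT_COMPILE_COMMANDS=ON), a short script can verify every C++ translation unit carries the flags; the flag names below match GCC and Clang:

```python
"""Sketch: verify target compile flags from compile_commands.json."""
import json
import sys

REQUIRED_FLAGS = ("-fno-exceptions", "-fno-rtti")

def missing_flags(entries: list[dict]) -> list[str]:
    """Return one 'file: missing flag' string per absent required flag."""
    problems = []
    for entry in entries:
        if not entry["file"].endswith(".cpp"):
            continue  # only C++ translation units carry these flags
        for flag in REQUIRED_FLAGS:
            if flag not in entry["command"]:
                problems.append(f"{entry['file']}: missing {flag}")
    return problems

def main(path: str) -> int:
    with open(path) as f:
        problems = missing_flags(json.load(f))
    for p in problems:
        print(p, file=sys.stderr)
    return 1 if problems else 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "compile_commands.json"))
```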
Step 4: Fix your interfaces first
- Replace pointer-plus-size APIs with spans where possible.
- Replace naked integer time with chrono types at interfaces.
- Introduce fixed-capacity storage patterns (array, ETL containers if needed).
Success criterion: the most error-prone APIs become harder to misuse.
Step 5: Add host-first tests and make them the main gate
- Start with scenario-driven tests for the highest-risk behaviors.
- Add sanitizers on host builds.
- Add a coverage job with a realistic initial threshold.
Success criterion: most logic regressions are caught without hardware.
Step 6: Enforce memory budgets and rule checks in CI/CD
- Add size gates for flash and SRAM per variant.
- Add static analysis or custom checks for forbidden patterns:
  - allocation in hot paths
  - forbidden includes in core
  - std::function or virtual usage in hot-path modules
Success criterion: the pipeline prevents the most common determinism regressions.
Step 7: Add IoTest and HIL automation
- Create an IoTest build target that exposes explicit commands.
- Implement a Python pytest harness.
- Run HIL on a schedule or on demand, and store artifacts.
Success criterion: hardware checks become repeatable and scriptable.
Step 8: Add observability artifacts to CI/CD
- Implement fixed-size trace records and counters.
- Add a host decoder that produces readable reports.
- Wire report generation into CI artifacts.
Success criterion: behavior regressions can be diagnosed from artifacts, not folklore.
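The host decoder for fixed-size records is typically a few lines of struct unpacking. The 8-byte layout below (version, event id, argument, tick count) and the event-name table are assumed examples; keep your real layout defined in one shared header:

```python
"""Sketch of a host-side decoder for fixed-size, versioned trace records."""
import struct

# Assumed example layout: version:u8, event:u8, arg:u16, ticks:u32 (little-endian).
RECORD = struct.Struct("<BBHI")
SUPPORTED_VERSION = 1
EVENT_NAMES = {1: "TASK_START", 2: "TASK_END", 3: "QUEUE_OVERFLOW"}  # example IDs

def decode(blob: bytes) -> list[dict]:
    """Decode back-to-back trace records into readable dicts."""
    records = []
    for offset in range(0, len(blob) - RECORD.size + 1, RECORD.size):
        version, event, arg, ticks = RECORD.unpack_from(blob, offset)
        if version != SUPPORTED_VERSION:
            raise ValueError(f"unsupported record version {version} at {offset}")
        records.append({
            "event": EVENT_NAMES.get(event, f"EVENT_{event}"),
            "arg": arg,
            "ticks": ticks,
        })
    return records
```

Versioning the record makes the decoder fail loudly instead of producing plausible nonsense when firmware and tooling drift apart, which is exactly the "artifacts, not folklore" property the step is after.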
What to measure
Pipelines can become busywork if you do not measure outcomes.
Useful measures:
- mean time to detect a regression (should drop)
- escaped defects tied to memory and timing issues (should drop)
- flash and SRAM headroom trends per release (should stay stable)
- HIL stability rate and time to reproduce failures (should improve)
- time spent debugging on hardware versus on host (should shift toward host)
Unsolicited advice: if you do not track headroom trends, you will eventually hit a size wall at the worst possible time.
Minimal checklist
If you want the shortest version:
- Host builds, tests, sanitizers, and coverage run on every merge request.
- Target builds for every variant run on every merge request.
- Flash and SRAM budgets are enforced per variant.
- Static analysis blocks forbidden patterns in core and hot paths.
- HIL exists as a first-class job and produces artifacts.
- Observability is minimal on target and rich on host.
This series is not a claim that modern C++ magically produces deterministic firmware. It is a claim that modern C++ plus disciplined boundaries plus a CI pipeline that enforces rules produces deterministic firmware more reliably than tribal knowledge ever will.
The Complete “Modern C++ Firmware” Series:
- Modern C++ Firmware: Proven Strategies for Tiny, Critical Systems (Part 1/10)
- The Case for Modern C++ on Tiny, Safety Critical Targets
- Modern C++ Firmware: Proven Strategies for Tiny, Critical Systems (Part 2/10)
- Choosing C++20 Today, C++23 on a Short Leash
- Modern C++ Firmware: Proven Strategies for Tiny, Critical Systems (Part 3/10)
- Deterministic By Construction: The Rules You Do Not Cross
- Modern C++ Firmware: Proven Strategies for Tiny, Critical Systems (Part 4/10)
- Time and Scheduling Without Footguns
- Modern C++ Firmware: Proven Strategies for Tiny, Critical Systems (Part 5/10)
- Concepts for Hardware Platforms, Not Vtables
- Modern C++ Firmware: Proven Strategies for Tiny, Critical Systems (Part 6/10)
- No Allocation in the Loop: Memory Rules That Survive CI
- Modern C++ Firmware: Proven Strategies for Tiny, Critical Systems (Part 7/10)
- Test the Firmware Without the Board: Host First Strategy
- Modern C++ Firmware: Proven Strategies for Tiny, Critical Systems (Part 8/10)
- Python and ASCII Protocols for Hardware in the Loop
- Modern C++ Firmware: Proven Strategies for Tiny, Critical Systems (Part 9/10)
- Observability Belongs on the PC, Not in the Production Binary
- Modern C++ Firmware: Proven Strategies for Tiny, Critical Systems (Part 10/10)
- GitLab Pipeline Blueprint and a Migration Checklist
Need professional firmware development help? Engage with Polyrhythm