
Rust

Last updated: Apr 15, 2026


Rationale

Rust is Fluid Attacks' language for low-level components where performance is the main concern.

The main reasons why we chose it over other alternatives are:

  • It is open source.
  • It is memory-safe by design, with a type system that prevents entire categories of bugs at compile time, including null dereferences, data races, and unhandled error paths.
  • It has very powerful static typing with algebraic data types and exhaustive pattern matching, which allows encoding business invariants such that invalid states are not possible.
  • It is a compiled language that produces self-contained native binaries, making distribution to clients and CI environments straightforward without requiring an external runtime or virtual environment.
  • It supports cross-compilation to multiple OS and architecture targets from a single build environment.
  • It is one of the fastest programming languages on the market, with predictable, low-latency execution and no garbage collector pauses, which is essential for our scanner workloads.
  • Its dependency management via Cargo is significantly better than Python's, with a reliable build system and reproducible environments.
  • It naturally supports our engineering values: immutability by default, first-class functional programming patterns, and algebraic data types as core language features.
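As a sketch of the last point, an enum with exhaustive pattern matching can make invalid states unrepresentable. The ScanState type below is a hypothetical illustration, not code from any Fluid Attacks component:

```rust
// Hypothetical example: a scan is either running, finished, or failed.
// There is no way to construct a "finished" scan without a finding count,
// or a "failed" scan without a reason.
enum ScanState {
    Running { progress_pct: u8 },
    Finished { findings: u32 },
    Failed { reason: String },
}

fn describe(state: &ScanState) -> String {
    // `match` must cover every variant; adding a new variant later becomes
    // a compile error here instead of a runtime surprise.
    match state {
        ScanState::Running { progress_pct } => format!("running ({progress_pct}%)"),
        ScanState::Finished { findings } => format!("finished with {findings} findings"),
        ScanState::Failed { reason } => format!("failed: {reason}"),
    }
}

fn main() {
    let state = ScanState::Finished { findings: 3 };
    println!("{}", describe(&state));
}
```

In Python, the equivalent invariant would live in runtime validation code; here the compiler rejects any unhandled state.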

Alternatives

The following languages were considered:

Python

  • It is open source.
  • It is a high-level language that is easy to read and write.
  • It has a very large community and ecosystem.
  • It has a large market, making it easy to hire developers.
  • It supports static typing only as an optional extension via mypy, not as a built-in compiler guarantee.
  • It is an interpreted language, meaning it cannot produce self-contained binaries for distribution.
  • It is significantly slower than Rust for CPU-bound workloads, and has become a bottleneck in our scanner components.

Python is our current baseline. The pain points that motivated this evaluation — lack of compile-time correctness, painful binary distribution, and slow scanner throughput — are structural limitations of the language that cannot be resolved through tooling alone.

Go

  • It is open source.
  • It has a slightly more complex syntax than Python but is fairly accessible.
  • It has a growing community and market.
  • It compiles to native binaries and supports cross-compilation.
  • It has a garbage collector, which introduces non-deterministic pauses under memory pressure, making performance less predictable than Rust for our workloads.
  • Its type system does not support algebraic data types or exhaustive pattern matching, which limits the ability to model domain invariants at compile time.

Go is a strong general-purpose alternative but does not match Rust's correctness guarantees or performance predictability for CPU-intensive applications.

Haskell

  • It is open source.
  • It has the strongest functional programming support and type system among all candidates, with full support for higher-kinded types and purely functional idioms.
  • Its community and ecosystem are small and specialized.
  • It has a very small market, making hiring and onboarding significantly harder than any other candidate.
  • Its ecosystem for our critical dependencies (AWS SDK, tree-sitter) is limited and not commercially backed.
  • Its learning curve is steep and the gap from Python or TypeScript is large.

Haskell scores highest on engineering-values alignment but poorly on ecosystem viability and available talent; that alignment does not justify the onboarding and ecosystem risk for production workloads.

OCaml

  • It is open source.
  • It has a strong type system with algebraic data types and pattern matching.
  • Its community and ecosystem are small and growing slowly.
  • It has a very small market, and the pool of experienced engineers is extremely limited.
  • Its ecosystem for AWS and tree-sitter is not mature.
  • It relies on a garbage collector and lacks the compile-time concurrency guarantees, such as data-race freedom, that Rust provides natively.

OCaml shares some of Rust's type system strengths but falls behind on performance, concurrency guarantees, ecosystem viability, and available talent.

Usage

Rust is used by:

  • Sniffs
  • Peels

Standards for Rust components

We have established a common minimum standard for linting and testing that all components must follow to guarantee code quality.

Formatting and linting

rustfmt — code formatting

All Rust source is formatted with rustfmt via cargo fmt. No custom rustfmt.toml is used; the upstream default style is the standard. In CI, cargo fmt --check fails the build if any file diverges from that style. Locally, cargo fmt auto-corrects.

The Python equivalent is ruff format, which is also run in check-only mode on CI and auto-corrects locally. Both tools enforce a single canonical style so formatting is never a review concern.

clippy — static analysis

Lints are configured entirely inside Cargo.toml under [lints.clippy], keeping the lint policy version-controlled alongside the code it governs. A companion clippy.toml holds numeric thresholds. CI runs:

cargo clippy --all-targets --all-features -- -D warnings

Unsafe code — [lints.rust]

[lints.rust]
unsafe_code = "forbid"

unsafe_code = "forbid" is non-negotiable: no Rust component may contain unsafe blocks. forbid is stronger than deny — it cannot be overridden with a local #[allow] attribute. This closes the one escape hatch through which code could bypass the memory-safety guarantees of Rust's type system. Python has no equivalent because the runtime is already memory-safe by design.

Lint groups — [lints.clippy]

[lints.clippy]
all      = { level = "deny", priority = -2 }
pedantic = { level = "deny", priority = -1 }
nursery  = { level = "deny", priority = -1 }

All three groups are denied up front. The priority values let individual lints override the group default: a lower priority is evaluated first, so all is the widest net, then pedantic and nursery add their lints at a higher priority.

Group     What it adds
all       Every stable lint that clippy ships
pedantic  Stricter style and correctness lints, opt-in by convention
nursery   Unstable lints still being refined; catching them early avoids future debt

The Python analogue is select = ["ALL"] in ruff.toml, which enables every rule Ruff knows about. The philosophy is identical: start from the strictest possible baseline, then explicitly allow what is intentionally not enforced, rather than accumulating an ever-growing opt-in list.

Allowed lints

missing_errors_doc      = "allow"
missing_panics_doc      = "allow"
module_name_repetitions = "allow"
Lint                     Why it is allowed
missing_errors_doc       Doc-comment completeness is enforced through code review, not tooling
missing_panics_doc       Same rationale as above
module_name_repetitions  Common in idiomatic Rust module layout; suppressing it produces noise without benefit

In Python, the D1xx docstring rules are ignored in ruff.toml for the same reason as the two doc lints above — docstring completeness is a review concern when necessary, not a CI gate.

Restriction lints

The following lints from clippy::restriction are denied in every component:

Lint                     Rationale
dbg_macro                Debug artefact; must not reach production code
expect_used              Forces explicit error propagation instead of panicking
panic                    Surfaces accidental panics; all error paths must be typed
todo                     Unfinished code must not ship
unimplemented            Same as todo
unwrap_used              Forces explicit error handling on Option and Result
unreachable              Must be proven by the type system, not asserted at runtime
arithmetic_side_effects  Prevents silent integer overflow and underflow
as_conversions           Prevents silent truncation and sign-extension in casts
implicit_clone           Makes .clone() call sites visible; clones are not free
indexing_slicing         Forces bounds-checked access; prevents panics on out-of-bounds
inefficient_to_string    Avoids redundant allocations from calling .to_string() on &str
manual_let_else          Enforces idiomatic let … else instead of verbose match/if let
option_if_let_else       Enforces the map_or_else idiom over if let … else on Option
print_stdout             Binaries must use tracing for output, not raw println!
print_stderr             Same as print_stdout
shadow_unrelated         Prevents confusing variable shadowing when the types differ
str_to_string            Avoids redundant String allocations from "…".to_string()
wildcard_imports         Keeps imports explicit; wildcard imports obscure where names come from
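As a hypothetical illustration of the most frequently hit pair of these lints, unwrap_used and expect_used push the failure path into the function signature instead of a panic; parse_port below is an invented example, not a real component function:

```rust
use std::num::ParseIntError;

// Rejected by clippy::unwrap_used — panics on bad input:
//   fn parse_port(raw: &str) -> u16 { raw.trim().parse().unwrap() }

// Accepted: the failure path is part of the signature, and every caller
// must handle it (typically by propagating it upward with `?`).
fn parse_port(raw: &str) -> Result<u16, ParseIntError> {
    raw.trim().parse()
}

fn main() {
    match parse_port("8080") {
        Ok(port) => println!("listening on {port}"),
        Err(e) => eprintln!("invalid port: {e}"),
    }
}
```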

In tests, the restriction on panicking macros is lifted:

allow-unwrap-in-tests  = true
allow-expect-in-tests  = true
allow-panic-in-tests   = true
allow-dbg-in-tests     = true

clippy.toml — complexity thresholds

cognitive-complexity-threshold = 8
too-many-lines-threshold       = 50
too-many-arguments-threshold   = 5
enum-variant-name-threshold    = 3
excessive-nesting-threshold    = 4

avoid-breaking-exported-api = false
Setting                         Value  Rationale
cognitive-complexity-threshold  8      Keeps functions small enough to reason about in isolation; matches the cyclomatic limit used in Python
too-many-lines-threshold        50     Functions longer than 50 lines almost always benefit from being split
too-many-arguments-threshold    5      More than 5 parameters is a signal to introduce a struct
enum-variant-name-threshold     3      When an enum with more than 3 variants shares a common prefix, the variants should drop the prefix
excessive-nesting-threshold     4      Deep nesting hurts readability; matches Python's max-nested-blocks = 4
avoid-breaking-exported-api     false  Allows clippy to suggest renames even for public items; correctness takes priority over API stability during active development

The Python equivalents in ruff.toml (skims component) are:

Ruff setting              Value  Rust equivalent
mccabe.max-complexity     8      cognitive-complexity-threshold = 8
pylint.max-nested-blocks  4      excessive-nesting-threshold = 4
pylint.max-branches       8      Subsumed by cognitive-complexity-threshold
pylint.max-statements     25     Partially covered by too-many-lines-threshold = 50
pylint.max-bool-expr      5      too-many-arguments-threshold = 5 (different concern, similar intent)

The cognitive-complexity and nesting limits are intentionally identical across languages so that the same design instincts apply regardless of which stack is in use.
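As a hedged illustration of the too-many-arguments-threshold = 5 rule, the usual fix is to gather related parameters into a struct; ScanConfig and scan below are invented names, not real component APIs:

```rust
// Flagged by clippy::too_many_arguments at threshold 5:
//   fn scan(path: &str, depth: u32, follow_links: bool,
//           include_tests: bool, max_file_kb: u64, threads: usize) { … }

// Preferred: one struct groups the related knobs, and each field is
// named at the call site instead of being an anonymous positional value.
struct ScanConfig {
    depth: u32,
    follow_links: bool,
    include_tests: bool,
    max_file_kb: u64,
    threads: usize,
}

fn scan(path: &str, config: &ScanConfig) -> String {
    format!(
        "scanning {path} with {} threads to depth {}",
        config.threads, config.depth
    )
}

fn main() {
    let config = ScanConfig {
        depth: 3,
        follow_links: false,
        include_tests: true,
        max_file_kb: 512,
        threads: 4,
    };
    println!("{}", scan("src/", &config));
}
```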

Dependency auditing

Two tools replace the role that deptry plays in Python components.

cargo-udeps — unused dependency detection

cargo +nightly udeps --all-targets

cargo-udeps detects packages declared in Cargo.toml that are never imported. A declared but unused dependency must be removed before CI passes. It requires the nightly toolchain because it hooks into unstable compiler internals to produce accurate results across all feature combinations.

The Python equivalent is deptry, which performs the same check for entries in pyproject.toml that are never imported by the source.

cargo-deny — supply-chain auditing

cargo deny check

cargo-deny is configured via deny.toml at the component root. It enforces four categories in a single pass:

Category    What it checks
advisories  Every dependency against the RustSec advisory database
licenses    All transitive dependencies carry an approved SPDX license
bans        No banned crates, no yanked versions, no unintended duplicates
sources     All dependencies originate from crates.io or an explicitly allowed git source

cargo-deny is a strict superset of cargo-audit (which covers advisories only). The Python equivalent is pip-audit for advisories and manual license review; cargo-deny consolidates all four concerns into one tool and one CI step.

Approved licenses:

MIT, Apache-2.0, Apache-2.0 WITH LLVM-exception, ISC, Unicode-3.0,
BSD-2-Clause, BSD-3-Clause, MPL-2.0, OpenSSL, Zlib

Any license not on this list requires an explicit addition to deny.toml and a justification in the merge request. unused-allowed-license = "deny" keeps the allowlist minimal by failing if a listed license is no longer used by any dependency.
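Under this policy, a deny.toml sketch might look like the following. Key names follow cargo-deny's documented schema, but exact keys and defaults vary by version, so treat this as illustrative rather than canonical:

```toml
[licenses]
allow = [
  "MIT", "Apache-2.0", "Apache-2.0 WITH LLVM-exception", "ISC", "Unicode-3.0",
  "BSD-2-Clause", "BSD-3-Clause", "MPL-2.0", "OpenSSL", "Zlib",
]
# Fail when an allowed license is no longer used by any dependency:
unused-allowed-license = "deny"

[bans]
multiple-versions = "deny"  # no unintended duplicate crates

[sources]
unknown-registry = "deny"   # crates.io only, unless a source is explicitly allowed
unknown-git = "deny"
```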

Architecture enforcement

cargo modules structure              # print the module tree
cargo modules dependencies --lib     # print the crate-internal dependency graph

cargo-modules analyses the module tree and detects orphaned modules and circular dependencies between modules. Running it locally gives a fast visual of how a crate is structured; running it in CI catches regressions.

Testing and coverage

Test strategy

Every Rust component must ship unit tests at each module level.

Running the test suite

cargo test --all-features

The Python equivalent is pytest. Both commands are the single entry point for the full test suite of a component.

Coverage — ratchet mechanism

Coverage is measured with cargo-llvm-cov, which uses the LLVM instrumentation already present in the Rust toolchain. It supports all tier-1 targets, including macOS ARM, unlike cargo-tarpaulin which has platform limitations.

# Measure and produce a JSON report
cargo llvm-cov --all-features --json --output-path coverage-report.json

# Local inspection (LCOV format for IDE integration)
cargo llvm-cov --all-features --lcov --output-path lcov.info

Coverage is enforced per module using a ratchet — the same mechanism used by some Python components. Each module directory contains a coverage file with a single decimal number: the line-coverage percentage committed when the module was last updated.

src/
  advisories/
    mod.rs
    coverage        ← baseline for this module (e.g. "87.5")
  sbom/
    mod.rs
    coverage

Single-file modules (src/foo.rs) must be structured as src/foo/mod.rs so the coverage file has a natural home alongside the source.

Ratchet rules:

Situation                                         Action
Coverage drops below baseline                     CI fails; add tests before merging
Coverage increases above baseline                 Update the baseline file and commit it
New module with no baseline                       Must reach ≥ 80 % line coverage before the first baseline is committed
Thin adapter module (e.g., CLI argument parsing)  May be exempted by omitting its baseline file; exemption must be justified in code review

The CI check is performed by a helper script:

python3 scripts/check_coverage.py coverage-report.json src/

The ratchet prevents coverage from silently eroding as new code is added. It does not require a global target — each module owns its own baseline — so new modules cannot hide behind a passing aggregate percentage.
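The per-module decision the ratchet makes can be sketched in Rust. The real check is the Python helper script; the names and outcome type here are illustrative only:

```rust
// Hypothetical sketch of the ratchet rule applied to one module.
enum RatchetOutcome {
    Pass,                // coverage equals the committed baseline
    Fail(f64),           // coverage dropped below baseline by this many points
    UpdateBaseline(f64), // coverage rose; commit the new, higher baseline
}

fn check_module(baseline: f64, measured: f64) -> RatchetOutcome {
    if measured < baseline {
        RatchetOutcome::Fail(baseline - measured)
    } else if measured > baseline {
        RatchetOutcome::UpdateBaseline(measured)
    } else {
        RatchetOutcome::Pass
    }
}

fn main() {
    // e.g. a module whose committed baseline is 87.5 and new measurement is 90.0
    match check_module(87.5, 90.0) {
        RatchetOutcome::Pass => println!("ok"),
        RatchetOutcome::Fail(drop) => println!("coverage dropped {drop} points"),
        RatchetOutcome::UpdateBaseline(new) => println!("raise baseline to {new}"),
    }
}
```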
