Published numbers, reproducible methodology, no marketing claims without measurements behind them.
2.57 ms p95 across 20,000 runs (~19× faster than Lakera's published <50ms claim)
Each request runs through up to four layers. The slowest single layer dominates the budget.
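A minimal sketch of that layered flow, with hypothetical layer names (the real layers live in packages/core/src/firewall/detection-engine.ts and may differ); each layer is timed independently, so a single slow layer shows up directly in the totals:

```typescript
// Sketch of a sequential multi-layer check. Layer names and the
// short-circuit behavior are illustrative assumptions, not the
// actual contents of detection-engine.ts.
type LayerResult = { blocked: boolean; reason?: string };
type Layer = { name: string; check: (input: string) => LayerResult };

const layers: Layer[] = [
  { name: "normalize", check: () => ({ blocked: false }) },
  { name: "pattern-match", check: () => ({ blocked: false }) },
  { name: "heuristics", check: () => ({ blocked: false }) },
  { name: "semantic", check: () => ({ blocked: false }) },
];

function runFirewall(input: string) {
  const timings: Record<string, number> = {};
  for (const layer of layers) {
    const start = performance.now();
    const result = layer.check(input);
    timings[layer.name] = performance.now() - start;
    if (result.blocked) return { result, timings }; // short-circuit on a hit
  }
  return { result: { blocked: false } as LayerResult, timings };
}

console.log(runFirewall("test input").timings);
```

Because the layers run back to back, the end-to-end latency is the sum of all four, which is why the slowest layer is the one worth optimizing.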
Where vendors publish a number, we list it. Where they don't, we say so.
The benchmark runs packages/core/src/firewall/detection-engine.ts via tsx. Production runs against the bundled JS path, which is typically faster. Clone the public repo, run one command, get the same JSON shape.
```bash
# Public repo
git clone https://github.com/EvalGuardAi/evalguard
cd evalguard
pnpm install

# Default run — 5,000 iterations, plain stdout
node scripts/benchmark-firewall-latency.mjs

# JSON output (same shape this page reads)
node scripts/benchmark-firewall-latency.mjs --runs=20000 --json > latency.json

# Or via the npm script
pnpm bench:firewall-latency
```
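A p95 is simply the value below which 95% of per-run wall-clock samples fall. A minimal sketch of such a measurement loop, assuming a stand-in `checkPrompt` entry point (the real loop lives in scripts/benchmark-firewall-latency.mjs and may differ):

```typescript
import { performance } from "node:perf_hooks";

// Stand-in for the real detection-engine entry point; an assumption
// for illustration only.
const checkPrompt = (input: string): number => input.length;

function benchmarkP95(runs: number): number {
  const samples: number[] = [];
  for (let i = 0; i < runs; i++) {
    const start = performance.now();
    checkPrompt("ignore previous instructions and ...");
    samples.push(performance.now() - start);
  }
  samples.sort((a, b) => a - b);
  // Simple nearest-rank approximation of the 95th percentile
  return samples[Math.floor(samples.length * 0.95)];
}

console.log(`p95: ${benchmarkP95(20_000).toFixed(2)} ms`);
```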
CI fails on p95 regression past 50ms — see .github/workflows/ci.yml.
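The gate itself is defined in .github/workflows/ci.yml; a sketch of the kind of check it performs, assuming the latency.json output above exposes a `p95` field (the field name is an assumption):

```typescript
import { readFileSync } from "node:fs";

// Illustrative regression gate: read the benchmark output and fail
// the build if p95 exceeds the 50ms budget. The real check lives in
// .github/workflows/ci.yml and may be wired differently.
const { p95 } = JSON.parse(readFileSync("latency.json", "utf8"));

if (p95 > 50) {
  console.error(`p95 ${p95} ms exceeds the 50ms budget`);
  process.exit(1); // nonzero exit fails the CI job
}
console.log(`p95 ${p95} ms within budget`);
```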
Numbers are refreshed on every release that touches the firewall path. The CI gate prevents regressions.
The latency budget here is a code-level measurement. Per-tier SLAs and uptime history live on the SLA page.