# Benchmarks
These are the local benchmark numbers currently shown on the landing page.

This page documents the workload shapes and how to reproduce the measurements; it is not part of the normal product integration flow.

The examples are workload-specific. They show the best local results we currently highlight, not a blanket claim that every rewrite or every ClickHouse proxy request will improve by the same factor.
## Measured results
| Workload | Baseline | Rewritten | Result |
|---|---|---|---|
| Dense integer IN list, 100,000 adjacent ids | 126.8 ms | 8.0 ms | 15.8x faster |
| Sparse bigint IN list, 100,000 random ids | 110.0 ms | 24.9 ms | 4.4x faster |
| Searched CASE tuple map, 2,000 branches over (country, plan) | 2601.2 ms | 363.8 ms | 7.1x faster |
## Benchmark runner
The measurements come from `crates/sqltamer-bench/src/bin/postgres_exec_bench.rs`.
The runner:
- generates the SQL shape for a named workload
- runs the original query against Postgres
- runs the rewritten plan against the same Postgres setup
- checks that the original and rewritten results match
- prints a JSON report with timings and plan metrics
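The loop above can be sketched end to end. This is a minimal stand-in, not the real runner: it uses SQLite instead of Postgres, a hand-written rewrite, and a hypothetical report shape, purely to illustrate the run/compare/report steps.

```python
import json
import sqlite3
import time

def run_timed(conn, sql):
    """Execute a query and return (rows, elapsed milliseconds)."""
    t0 = time.perf_counter()
    rows = conn.execute(sql).fetchall()
    return rows, (time.perf_counter() - t0) * 1000.0

# Build a small stand-in table (the real runner targets Postgres).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE benchmark_items (id INTEGER PRIMARY KEY, label TEXT)")
conn.executemany("INSERT INTO benchmark_items VALUES (?, ?)",
                 [(i, f"label_{i}") for i in range(1, 1001)])

# Original query and a hand-written "rewrite" of the same predicate.
original = ("SELECT COUNT(*) FROM benchmark_items WHERE id IN (%s)"
            % ",".join(map(str, range(1, 501))))
rewritten = "SELECT COUNT(*) FROM benchmark_items WHERE id BETWEEN 1 AND 500"

orig_rows, orig_ms = run_timed(conn, original)
rew_rows, rew_ms = run_timed(conn, rewritten)
assert orig_rows == rew_rows  # results must match before timings count

# Hypothetical report shape; the real runner's JSON fields may differ.
report = {"original_ms": orig_ms, "rewritten_ms": rew_ms,
          "rows_match": orig_rows == rew_rows}
print(json.dumps(report))
```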
There is also a ClickHouse HTTP runner: `crates/sqltamer-bench/src/bin/clickhouse_http_bench.rs`.
That runner:
- runs the original query against ClickHouse HTTP
- runs the rewritten query directly against ClickHouse HTTP
- runs the same query through the SQL Tamer ClickHouse proxy
- checks that all three results match
- supports `--workload in_list` and `--workload nested_case`
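The three-way verification step can be sketched as an order-insensitive comparison. This is only a sketch of the idea; the actual runner's check may be stricter (for example about column types or row order):

```python
def results_match(direct, rewritten, proxied):
    """True when all three result sets contain the same rows, ignoring order."""
    canon = lambda rows: sorted(map(tuple, rows))
    return canon(direct) == canon(rewritten) == canon(proxied)

# Same rows in a different order still count as a match.
ok = results_match([(1, "a"), (2, "b")],
                   [(2, "b"), (1, "a")],
                   [(1, "a"), (2, "b")])
```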
## Local setup
These numbers were rerun locally on March 8, 2026 against PostgreSQL 17.9.
The benchmark rebuilds the test table before each run.
For `in_list`, the table shape is:

```sql
CREATE TABLE benchmark_items (
    id bigint NOT NULL PRIMARY KEY,
    label text
);
```
and it is filled with ids from 1 through the `--table-rows` value.
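The fill step can be sketched as chunked inserts. `generate_series` is the Postgres built-in, and the chunking mirrors `--insert-chunk-size`; the `label_` value format is an assumption, not necessarily the runner's exact output.

```python
def insert_chunks(table_rows, chunk_size):
    """Yield inclusive (start, end) id ranges for batched inserts."""
    for start in range(1, table_rows + 1, chunk_size):
        yield start, min(start + chunk_size - 1, table_rows)

# One INSERT per chunk, mirroring --insert-chunk-size 1000 over --table-rows 100000.
stmts = [
    f"INSERT INTO benchmark_items (id, label) "
    f"SELECT g, 'label_' || g FROM generate_series({a}, {b}) AS g"
    for a, b in insert_chunks(100_000, 1_000)
]
```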
For `searched_case_tuple_map`, the table shape is:

```sql
CREATE TABLE benchmark_items (
    id bigint NOT NULL PRIMARY KEY,
    label text,
    country text,
    plan text
);
```
and `country` / `plan` values are generated deterministically from the row number.
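A deterministic generation scheme can be sketched as below. The moduli (20 countries, 5 plans) are hypothetical, chosen only to show that the values depend solely on the row number:

```python
def row_attrs(i, n_countries=20, n_plans=5):
    """Derive (country, plan) deterministically from row number i.

    The real generator's formula may differ; this only shows the shape."""
    return f"country_{i % n_countries + 1}", f"plan_{i % n_plans + 1}"
```

Because the values are a pure function of the row number, rebuilding the table before each run yields identical data, which keeps runs comparable.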
## Commands used
Build the runner first:
```shell
cargo build -p sqltamer-bench --bin postgres_exec_bench --release
```
Then run one of the workloads below. Replace the `--database-url` value with your own local Postgres connection string if needed.
### Dense integer IN list
This is the contiguous case:
```sql
SELECT COUNT(*)
FROM benchmark_items
WHERE id IN (1, 2, 3, ..., 100000);
```
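One plausible source of the dense-case speedup is that adjacent ids collapse into range predicates. The sketch below shows that idea; it is not the rewriter's actual algorithm:

```python
def collapse_in_list(ids):
    """Collapse a set of integer ids into BETWEEN / equality predicates.

    Illustrative only: runs of consecutive ids become ranges."""
    preds, ids = [], sorted(set(ids))
    i = 0
    while i < len(ids):
        j = i
        while j + 1 < len(ids) and ids[j + 1] == ids[j] + 1:
            j += 1  # extend the consecutive run
        preds.append(f"id BETWEEN {ids[i]} AND {ids[j]}" if j > i
                     else f"id = {ids[i]}")
        i = j + 1
    return " OR ".join(preds)

collapse_in_list(range(1, 100_001))  # -> 'id BETWEEN 1 AND 100000'
```

A fully contiguous 100,000-item list reduces to a single range, which is far cheaper to plan and execute than 100,000 membership checks.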
Command:
```shell
target/release/postgres_exec_bench \
  --database-url 'host=127.0.0.1 port=5432 user=postgres dbname=postgres' \
  --workload in_list \
  --query-size 100000 \
  --table-rows 100000 \
  --in-list-shape contiguous \
  --min-in-list-items 1000 \
  --insert-chunk-size 1000
```
### Sparse bigint IN list
This is the same query form, but with 100,000 random unique ids drawn from a larger domain:
```sql
SELECT COUNT(*)
FROM benchmark_items
WHERE id IN (<100000 random unique ids>);
```
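The sparse id set can be reproduced in spirit as below. Python's RNG is not the runner's RNG, so the exact ids will differ from a real `--random-seed 1` run; only the shape (100,000 unique ids from a 1,000,000-wide domain) matches:

```python
import random

# Stand-in for --random-seed 1 (different RNG than the Rust runner's).
rng = random.Random(1)
# 100,000 unique ids from [1, 1_000_000]:
# mirrors --query-size and --random-domain-size.
ids = rng.sample(range(1, 1_000_001), 100_000)

sql = ("SELECT COUNT(*) FROM benchmark_items WHERE id IN (%s)"
       % ", ".join(map(str, sorted(ids))))
```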
Command:
```shell
target/release/postgres_exec_bench \
  --database-url 'host=127.0.0.1 port=5432 user=postgres dbname=postgres' \
  --workload in_list \
  --query-size 100000 \
  --table-rows 100000 \
  --in-list-shape random \
  --random-domain-size 1000000 \
  --random-seed 1 \
  --min-in-list-items 1000 \
  --insert-chunk-size 1000
```
### Searched CASE tuple map
This workload generates a lookup-like CASE expression over country and plan, alternating predicate order to exercise normalization:
```sql
SELECT COUNT(*)
FROM benchmark_items
WHERE CASE
    WHEN country = 'country_1' AND plan = 'plan_1' THEN 'label_1'
    WHEN plan = 'plan_1' AND country = 'country_2' THEN 'label_2'
    ...
    ELSE 'other'
END <> 'other';
```
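The alternating predicate order is what normalization has to undo. Below is a sketch of that idea, assuming the rewrite maps normalized (country, plan) tuples to a tuple IN list; the real rewriter may produce a different plan:

```python
def normalize_branch(preds, label):
    """Canonicalize one WHEN branch: sort predicates by column name,
    so country/plan order no longer matters."""
    key = tuple(value for _, value in sorted(preds))
    return key, label

branches = [
    ([("country", "country_1"), ("plan", "plan_1")], "label_1"),
    ([("plan", "plan_1"), ("country", "country_2")], "label_2"),  # swapped order
]
mapping = dict(normalize_branch(p, label) for p, label in branches)

# CASE ... END <> 'other' then behaves like a tuple membership test:
tuples = ", ".join(f"('{c}', '{p}')" for c, p in mapping)
sql = f"SELECT COUNT(*) FROM benchmark_items WHERE (country, plan) IN ({tuples})"
```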
Command:
```shell
target/release/postgres_exec_bench \
  --database-url 'host=127.0.0.1 port=5432 user=postgres dbname=postgres' \
  --workload searched_case_tuple_map \
  --query-size 2000 \
  --table-rows 100000 \
  --insert-chunk-size 1000
```
## How to interpret the numbers
- These are local measurements, so exact milliseconds will vary by machine.
- The useful part is the workload shape and whether that shape improved materially.
- These are example wins, not a blanket claim that every rewrite gets faster.
If you want to validate your own workload, start from the closest command above and change the size and shape parameters first.
## ClickHouse HTTP examples
Build the ClickHouse runner:
```shell
cargo build -p sqltamer-bench --bin clickhouse_http_bench --release
```
Long IN list:
```shell
target/release/clickhouse_http_bench \
  --workload in_list \
  --query-size 100000 \
  --min-in-list-items 1000 \
  --in-list-shape random
```
Deep nested CASE:
```shell
target/release/clickhouse_http_bench \
  --workload nested_case \
  --query-size 1024
```