# ClickHouse API
Use this when your app already talks to ClickHouse over HTTP and can swap the base URL.
SQL Tamer sits in front of ClickHouse, rewrites supported queries, and returns the normal ClickHouse response.
## When this is the right path
Good fit:

- Your app sends normal ClickHouse HTTP requests
- Your client can target a normal HTTP base URL
- You want the simplest hosted integration

Not a fit:

- Native ClickHouse TCP clients such as `clickhouse client`
## Quick start

### 1. Create auth

Replace `SQLTAMER_BASE_URL` with your API base URL.

- Hosted product: use `https://sqltamer.com`
- Local website eval: use `http://127.0.0.1:8080`

```shell
sqltamer auth init --server-url "$SQLTAMER_BASE_URL"
```
### 2. Create a destination

```shell
sqltamer clickhouse add \
  --name production \
  --upstream https://clickhouse.example.com:8443/ \
  --min-in-list-items 3
```

The output includes `proxy_base_url`. Your app uses that URL plus the same SQL Tamer API key it already has for control-plane requests when the proxy endpoint is path-based on the main host.

Hosted destinations default to `min_in_list_items=1000`. Lower it during smoke tests when you want shorter demo queries to cross the rewrite threshold.
### 3. Point your client at the proxy

- Use `proxy_base_url` as the ClickHouse HTTP base URL
- Keep your normal ClickHouse username and password
- If your client stores scheme, host, and port separately, parse them from `proxy_base_url`
- If `proxy_base_url` is path-based on your main SQL Tamer host, for example `https://sqltamer.com/v1/proxy/clickhouse/...`, also send your SQL Tamer API key as `X-API-Key`
- In the local website eval, `proxy_base_url` may use a generated `*.localhost` hostname. That is expected; use it as-is.
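If your client wants scheme, host, and port as separate fields, the split can be done in plain POSIX shell. A minimal sketch, assuming `proxy_base_url` always carries an explicit port; the URL below is a hypothetical example, not a real destination:

```shell
# Split a proxy_base_url into scheme, host, and port.
# Assumes the URL has an explicit port; the value is a placeholder.
PROXY_BASE_URL="https://proxy-abc123.sqltamer.com:8443/v1/proxy/clickhouse"

scheme="${PROXY_BASE_URL%%://*}"   # everything before "://"
rest="${PROXY_BASE_URL#*://}"      # host:port/path
hostport="${rest%%/*}"             # host:port
host="${hostport%%:*}"             # host only
port="${hostport##*:}"             # port only

echo "$scheme $host $port"
```

If the URL can omit the port, add a check that `hostport` actually contains a colon before splitting.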
Hosted SQL Tamer can only forward to ClickHouse upstreams that are reachable from the hosted server. A local bundle URL such as `http://127.0.0.1:8123/` or another private-network address will not work on sqltamer.com; use the local website eval for that setup.
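Before creating a destination, you can at least catch typos in the upstream URL by pinging it yourself; ClickHouse's HTTP interface answers `GET /ping` with `Ok.`. A minimal sketch (the hostname is a placeholder, and a check from your machine does not prove the hosted server can reach the same address):

```shell
# Returns success when the upstream's ClickHouse HTTP /ping answers "Ok.".
upstream_ok() {
  [ "$(curl -fsS "${1%/}/ping")" = "Ok." ]
}

# upstream_ok https://clickhouse.example.com:8443/ && echo reachable
```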
## Copy-paste quickstart

This is the whole ClickHouse proxy flow in one place:

```shell
sqltamer auth init --server-url "$SQLTAMER_BASE_URL"
SQLTAMER_API_KEY=$(sqltamer auth show --reveal | sed -n 's/^API key: //p' | head -n1)

sqltamer clickhouse add \
  --name production \
  --upstream https://clickhouse.example.com:8443/ \
  --min-in-list-items 3 \
  --json

curl -X POST "$PROXY_BASE_URL?database=default" \
  -H "X-API-Key: $SQLTAMER_API_KEY" \
  -H "X-ClickHouse-User: analytics" \
  -H "X-ClickHouse-Key: secret" \
  -H "Content-Type: text/plain" \
  --data 'SELECT count() FROM events WHERE user_id IN (101, 909, 4201, 9907, 12003, 15055)'
```
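The quickstart leaves `$PROXY_BASE_URL` for you to fill in. If the `--json` output prints JSON with a top-level `proxy_base_url` field on one line (an assumption; check your CLI's actual output), you can capture it the same way the quickstart captures the API key:

```shell
# Pull proxy_base_url out of single-line JSON on stdin.
# Assumes the field appears as "proxy_base_url":"..." somewhere in the output.
extract_proxy_base_url() {
  sed -n 's/.*"proxy_base_url" *: *"\([^"]*\)".*/\1/p' | head -n1
}

# PROXY_BASE_URL=$(sqltamer clickhouse add ... --json | extract_proxy_base_url)
```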
For the local website playground, the same flow looks like:

```shell
make doctor PLAYGROUND_MODE=website
make clickhouse-up
BASE_URL=$(cat .runtime/clickhouse/http-url)
curl -fsSL http://127.0.0.1:8000/install.sh | SQLTAMER_INSTALL_DIR="$PWD/.runtime/downloaded-cli" sh

"$PWD/.runtime/downloaded-cli/sqltamer" auth init --server-url http://127.0.0.1:8080
"$PWD/.runtime/downloaded-cli/sqltamer" clickhouse add \
  --name local-eval \
  --upstream "$BASE_URL/" \
  --min-in-list-items 3 \
  --json

# local ClickHouse credentials for the playground harness
# username: sqltamer
# password: sqltamer
```
## Smoke test

```shell
curl -X POST "$PROXY_BASE_URL?database=default" \
  -H "X-API-Key: $SQLTAMER_API_KEY" \
  -H "X-ClickHouse-User: analytics" \
  -H "X-ClickHouse-Key: secret" \
  -H "Content-Type: text/plain" \
  --data 'SELECT count() FROM events WHERE user_id IN (101, 909, 4201, 9907, 12003, 15055)'
```
What to expect:

- The response body is a normal ClickHouse response, not a SQL Tamer wrapper
- `x-sqltamer-rewrite-applied: 1` means a rewrite was applied
- `x-sqltamer-rewrite-applied: 0` means the request was passed through unchanged
- The hosted path-based proxy also requires `X-API-Key`
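A small helper makes the header check scriptable. This sketch reads a header dump written with curl's `-D` option:

```shell
# Print the rewrite marker ("1", "0", or nothing if absent) from a
# curl -D header dump. Headers arrive with CRLF line endings, so strip \r.
rewrite_applied() {
  grep -i '^x-sqltamer-rewrite-applied:' "$1" | tr -d '\r' | awk '{print $2}'
}

# curl -sS -o /dev/null -D /tmp/headers ... && rewrite_applied /tmp/headers
```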
## What performance to expect

- SQL Tamer adds one HTTP hop plus rewrite work, so tiny queries can be flat or slightly slower end to end.
- The useful wins are on oversized query text: long dense integer lists, very large sparse lists, and large `CASE` payloads that shrink materially before ClickHouse sees them.
- `x-sqltamer-rewrite-applied: 1` only means a rewrite fired. It does not guarantee a speedup on every workload.
- On the ClickHouse proxy path, the clear wins usually show up only once the original SQL text is already very large. Mid-size rewritten requests can still be neutral or slower than direct ClickHouse.
- Hosted ClickHouse defaults are tuned for the common large-query cases: dense integer compression and external-table uploads are on, while `in_list_to_any_array` stays off unless you opt in.
## Benchmark large queries fairly
When you compare direct ClickHouse against the proxy, keep the request envelope the same on both sides:
- Send the same SQL body.
- Pass the same ClickHouse query settings, such as `max_query_size`.
- Compare the final result value, not just the latency.
- Record whether `x-sqltamer-rewrite-applied` was `0` or `1`.
This is benchmark guidance, not a normal product requirement. Customers sending oversized SQL to the SQL Tamer proxy should not need to raise ClickHouse `max_query_size` just to make the request work.

For very large requests, raw ClickHouse can reject the original SQL on `max_query_size` while the proxy still succeeds, because SQL Tamer rewrites the request before forwarding it upstream. That is valid product behavior. Only raise the direct request limit when you are intentionally measuring direct ClickHouse against the proxy on the same oversized SQL.
### Concrete A/B benchmark recipe

Save one large query body once:

```shell
size=2000000
start=1000001
end=$((start + size - 1))
query=$(mktemp)
printf 'SELECT count() FROM events WHERE user_id IN (%s) FORMAT TSV' \
  "$(seq -f '%.0f' "$start" "$end" | paste -sd, -)" > "$query"
```
Send that exact file directly to ClickHouse:

```shell
curl -sS -o /tmp/direct.body -D /tmp/direct.headers -w '%{time_total}' \
  -u analytics:secret \
  --data-binary @"$query" \
  'http://clickhouse.example.com:8123/?database=default&max_query_size=60000000'
```
Then send the same file through the SQL Tamer proxy:

```shell
curl -sS -o /tmp/proxy.body -D /tmp/proxy.headers -w '%{time_total}' \
  -H "X-API-Key: $SQLTAMER_API_KEY" \
  -H 'X-ClickHouse-User: analytics' \
  -H 'X-ClickHouse-Key: secret' \
  --data-binary @"$query" \
  "$PROXY_BASE_URL?database=default&max_query_size=60000000"
```

Compare:

- response body value
- elapsed time
- `x-sqltamer-rewrite-applied`
Use the same SQL file, the same query settings, and the same upstream credentials on both runs. For real checks, run each side more than once and reverse the order to reduce cache bias.
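The run-it-both-ways advice can be wrapped in a small alternating harness. A sketch: the two `run_*` functions are placeholders you replace with the direct and proxy curl commands above; here they just record which side ran.

```shell
# Alternating A/B harness: each round flips which side goes first, so
# neither side always benefits from a warm cache. Placeholders only
# append a letter; substitute the real curl invocations.
run_direct() { order="${order}D"; }
run_proxy()  { order="${order}P"; }

order=""
runs=4
i=0
while [ "$i" -lt "$runs" ]; do
  if [ $((i % 2)) -eq 0 ]; then
    run_direct; run_proxy
  else
    run_proxy; run_direct
  fi
  i=$((i + 1))
done

echo "$order"   # DPPDDPPD for runs=4
```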
## Optional rewrite flags

Add these to the normal query string only when you want them:

- `sqltamer_min_in_list_items=3`
- `sqltamer_enable_in_list_to_any_array=1`: only for medium-size compatible lists. The hosted ClickHouse path keeps this off by default and prefers dense-range compression plus external-table uploads for very large lists.
- `sqltamer_enable_case_to_temp_table=1`
- `sqltamer_enable_nested_case_to_temp_table=1`

SQL Tamer consumes `sqltamer_*` params locally and does not forward them upstream.
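Mechanically, the flags just ride on the query string of a normal proxy request. A sketch of building such a URL; the base URL is a placeholder, not a real destination:

```shell
# Opt in to the array rewrite for one request by appending sqltamer_*
# params to the query string. Placeholder base URL; use your own
# proxy_base_url value.
PROXY_BASE_URL="https://sqltamer.com/v1/proxy/clickhouse/demo"
flags='sqltamer_min_in_list_items=3&sqltamer_enable_in_list_to_any_array=1'
url="$PROXY_BASE_URL?database=default&$flags"
echo "$url"
```

Pass `$url` to curl exactly like the smoke-test request; the flags apply to that request only.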
## Supported rewrites

- `dense_integer_in_list_compression` is enabled by default
- `in_list_to_temp_table` is enabled by default for very large literal `IN (...)` lists
- `in_list_to_any_array` is opt-in and rewrites medium-size lists to typed ClickHouse query params. It is not recommended as a default for huge sparse lists on the ClickHouse HTTP path.
- `case_to_temp_table` and `nested_case_to_temp_table` are opt-in and rewrite to `transform(...)` on ClickHouse
- `insert_values_split` is not supported on the ClickHouse path

SQL Tamer only rewrites `SELECT` or `WITH` requests it can safely extract. Everything else passes through unchanged.
## Raw destination API

The CLI is the recommended control-plane interface. If you need the underlying HTTP endpoints instead, send `X-API-Key` to:

- `POST /v1/clickhouse/destinations`
- `GET /v1/clickhouse/destinations`
- `GET /v1/clickhouse/destinations/$DESTINATION_ID`
- `PATCH /v1/clickhouse/destinations/$DESTINATION_ID`
- `DELETE /v1/clickhouse/destinations/$DESTINATION_ID`
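A sketch of the create call over raw HTTP. The JSON field names here are an assumption that they mirror the CLI flags (`name`, `upstream`, `min_in_list_items`); confirm the real schema before relying on this:

```shell
# Hypothetical raw-HTTP version of `sqltamer clickhouse add`.
# Field names assumed to mirror the CLI flags; verify against the API.
create_destination() {
  curl -sS -X POST "$SQLTAMER_BASE_URL/v1/clickhouse/destinations" \
    -H "X-API-Key: $SQLTAMER_API_KEY" \
    -H 'Content-Type: application/json' \
    --data '{"name":"production","upstream":"https://clickhouse.example.com:8443/","min_in_list_items":3}'
}

# create_destination   # run against a live deployment
```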
If you want SQL Tamer to build the ClickHouse request but not forward it, use the ClickHouse Rewrite API.