This note captures practical translation patterns from Grafana Prometheus panels to Betterstack SQL charts when data is coming from a Prometheus push metrics source.
For Prometheus-style metrics in Betterstack, values are usually stored as aggregate states, not plain numeric columns.
Common columns:
- `value_avg` — `AggregateFunction(avg, Nullable(Float64))`
- `rate_avg` — `AggregateFunction(avg, Nullable(Float64))`
- `buckets` — `AggregateFunction(anyLast, Array(Tuple(String, Float64)))`
- `buckets_rate` — `AggregateFunction(anyLast, Array(Tuple(String, Nullable(Float64))))`
- `buckets_sum_rate`, `buckets_count_rate`
- `events_count` (for logs/events counting)
Implication: a plain `SELECT value_avg` returns the opaque aggregate state, which often renders as null or binary garbage. Use the matching `-Merge` function to finalize it.
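For example (a minimal sketch; the metric name is a placeholder):

```sql
-- Raw aggregate state: renders as opaque binary, not a number.
SELECT value_avg FROM {{source}} WHERE name = 'my_metric' LIMIT 1;

-- Finalized value: apply the matching -Merge function.
SELECT avgMerge(value_avg) AS value
FROM {{source}}
WHERE name = 'my_metric';
```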
Grafana style:
metric{labels...}
Betterstack:
- filter by `name = 'metric'`
- filter labels with `metricTag('label') = 'value'`
- aggregate with `avgMerge(value_avg)`
Pattern:

```sql
WITH per_series AS (
  SELECT
    {{time}} AS time,
    avgMerge(value_avg) AS value
  FROM {{source}}
  WHERE name = 'my_metric'
    AND dt BETWEEN {{start_time}} AND {{end_time}}
    AND metricTag('network') = 'calibration'
  GROUP BY time, series_id
)
SELECT time, avg(value) AS value
FROM per_series
GROUP BY time
ORDER BY time
```

If Grafana divides by 1e18, do the same in the SQL output:

```sql
SELECT time, value / 1e18 AS value ...
```

Grafana style:
metric_a / metric_b
Betterstack:
- compute each metric independently by time
- join on time
- divide with `nullIf(denominator, 0)`
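The steps above can be sketched like this (a hedged sketch; `metric_a` and `metric_b` are placeholders for gauge-style metrics stored as `value_avg` states):

```sql
WITH a AS (
  SELECT {{time}} AS time, avgMerge(value_avg) AS value
  FROM {{source}}
  WHERE name = 'metric_a'
    AND dt BETWEEN {{start_time}} AND {{end_time}}
  GROUP BY time
),
b AS (
  SELECT {{time}} AS time, avgMerge(value_avg) AS value
  FROM {{source}}
  WHERE name = 'metric_b'
    AND dt BETWEEN {{start_time}} AND {{end_time}}
  GROUP BY time
)
-- nullIf protects against division by zero in empty bins.
SELECT a.time AS time, a.value / nullIf(b.value, 0) AS value
FROM a
INNER JOIN b ON a.time = b.time
ORDER BY time
```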
For counter-like status metrics (...Status), compute deltas first.
Pattern:
- aggregate the cumulative value by `time, provider_id, status`
- use `lagInFrame` to fetch the previous cumulative value per series
- take `greatest(value - prev_value, 0)` to handle counter resets
- aggregate the deltas for totals/success rates
Typical success rate:
`100.0 * sumIf(delta, status = 'success') / nullIf(sum(delta), 0)`

Common rule used in these dashboards:

- exclude `pending` from the denominator when calculating success rate.
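Putting the delta and success-rate pieces together, a hedged sketch (the metric name is a placeholder; `providerId`/`status` label keys follow the conventions above, with `pending` excluded from the denominator per the rule):

```sql
WITH cumulative AS (
  SELECT
    {{time}} AS time,
    metricTag('providerId') AS provider_id,
    metricTag('status') AS status,
    avgMerge(value_avg) AS value
  FROM {{source}}
  WHERE name = 'my_status_metric'
    AND dt BETWEEN {{start_time}} AND {{end_time}}
  GROUP BY time, provider_id, status
),
deltas AS (
  SELECT
    time, provider_id, status,
    -- greatest(..., 0) clamps negative diffs caused by counter resets
    greatest(value - lagInFrame(value, 1, value) OVER (
      PARTITION BY provider_id, status
      ORDER BY time
      ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
    ), 0) AS delta
  FROM cumulative
)
SELECT
  time,
  100.0 * sumIf(delta, status = 'success')
        / nullIf(sumIf(delta, status != 'pending'), 0) AS success_rate
FROM deltas
GROUP BY time
ORDER BY time
```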
If you need exact provider filtering, use:
`metricTag('providerId') = {{provider_id}}`

Avoid `match(...)` for exact id matching unless you truly want regex behavior.
Grafana histogram_quantile expects cumulative bucket behavior with rate over a window.
In Betterstack, quantiles can look wrong when:
- top finite bucket is too low (saturation/flat lines)
- you compute on already degraded bucket data
- binning window differs from Grafana
Practical approach:
- use consistent time bins (`toStartOfInterval(..., INTERVAL 30 minute)` or similar)
- reconstruct per-bucket rates carefully
- compute weighted quantiles from bucket distribution
- make sure histogram buckets are high enough to avoid clipping
Observed issue: with max finite bucket 5e8 (500 MiB/s), high quantiles collapsed near top bucket.
Fix:
- add denser buckets in expected operating range
- extend top finite bucket above observed peaks
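To confirm saturation before changing buckets, a diagnostic sketch (assumes cumulative bucket counts in the `buckets` state column; the metric name is a placeholder):

```sql
-- Total cumulative count per bucket boundary across all series.
-- If the top finite bucket's count is close to '+Inf', the histogram is clipping.
SELECT
  tupleElement(bucket, 1) AS le,
  sum(tupleElement(bucket, 2)) AS cumulative_count
FROM
(
  SELECT arrayJoin(bucket_arr) AS bucket
  FROM
  (
    SELECT anyLastMerge(buckets) AS bucket_arr
    FROM {{source}}
    WHERE name = 'my_throughput_histogram'
      AND dt BETWEEN {{start_time}} AND {{end_time}}
    GROUP BY series_id
  )
)
GROUP BY le
-- finite boundaries in numeric order, '+Inf' last
ORDER BY le = '+Inf', toFloat64OrZero(le)
```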
Readable bucket notation in code:
```js
const KiB = 1 << 10;
const MiB = 1 << 20;
const GiB = 1 << 30;
```

Example throughput bucket shape:

- low-end: `1*KiB ... 100*MiB`
- dense range: `256*MiB ... 896*MiB`
- upper: `1*GiB, 1.25*GiB, 1.5*GiB, 1.75*GiB, 2*GiB, 2.5*GiB, 3*GiB`
- Prefer plain URL string columns (HTML `<a>` tags are generally not rendered).
- Betterstack may append dashboard vars like `vs[source]` to internal dashboard links.

If `vs[source]` causes bad routing:

- rename the destination dashboard variable `source` to e.g. `metrics_source`
- update the destination SQL to `FROM {{metrics_source}}`
- build links with `vs[metrics_source]=...`
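A link column can then be built as a plain string (a sketch; the dashboard URL and query params here are hypothetical placeholders):

```sql
SELECT
  metricTag('providerId') AS provider_id,
  concat(
    -- hypothetical dashboard URL; vs[...] params mirror the renamed variable
    'https://example.com/dashboards/123?vs[metrics_source]=prometheus&vs[provider_id]=',
    provider_id
  ) AS provider_link
FROM {{source}}
WHERE name = 'my_status_metric'
GROUP BY provider_id
```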
- Confirm the metric `name` exists in the source.
- Confirm label keys exist (`metricTag('providerId')`, etc.).
- Confirm the correct merge function (`avgMerge`, `anyLastMerge`, ...).
- Validate one metric with a minimal query first.
- Check denominator zero handling (`nullIf(..., 0)`).
- For counters, confirm delta logic is applied.
- For histograms, verify bucket cap and time binning.
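A minimal validation query covering the first few checklist items might look like (metric name is a placeholder):

```sql
-- Does the metric exist, over what time range, and does the merge work?
SELECT
  count() AS rows,
  min(dt) AS first_seen,
  max(dt) AS last_seen,
  avgMerge(value_avg) AS avg_value
FROM {{source}}
WHERE name = 'my_metric'
```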
For each Grafana panel:
- Copy PromQL expression.
- Identify metric names + label filters + math.
- Build Betterstack SQL in layers (CTEs):
  - raw per-series aggregation
  - per-time aggregation
  - joins/ratios/deltas
- Add unit conversion in the final `SELECT`.
- Validate against Grafana for the same time window and filters.
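The layered workflow can be sketched end-to-end for a single gauge metric (names and the 1e18 divisor are illustrative):

```sql
WITH per_series AS (
  -- layer 1: raw per-series aggregation
  SELECT {{time}} AS time, avgMerge(value_avg) AS value
  FROM {{source}}
  WHERE name = 'my_metric'
    AND dt BETWEEN {{start_time}} AND {{end_time}}
  GROUP BY time, series_id
),
per_time AS (
  -- layer 2: per-time aggregation
  SELECT time, avg(value) AS value
  FROM per_series
  GROUP BY time
)
-- layer 3: unit conversion in the final SELECT
SELECT time, value / 1e18 AS value
FROM per_time
ORDER BY time
```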