PMM-13208 update grafana clickhouse datasource. #1636

Merged · 2 commits merged on Nov 8, 2024

107 changes: 107 additions & 0 deletions panels/grafana-clickhouse-datasource/CHANGELOG.md
@@ -1,5 +1,112 @@
# Changelog

## 4.4.0

### Features

- Added "Labels" column selector to the log query builder
- Datasource OTel configuration will now set default table names for logs and traces.

### Fixes

- Added warning for when `uid` is missing in provisioned datasources.
- Map filters in the query builder now correctly show the key instead of the column name
- Updated and fixed missing `system.dashboards` dashboard in list of dashboards
- Updated the duration value in example traces dashboard to provide useful information
- Fix to display status codes from spans in trace queries (#950)

## 4.3.2

### Fixes

- Optimized performance for types dependent on the JSON converter
- Dependency updates

## 4.3.1

### Features

- Added preset dashboard from `system.dashboards` table

### Fixes

- Fix trace start times in trace ID mode (#900)
- Fixed OTel dashboard that was failing to import (#908)

## 4.3.0

### Features

- Added OpenTelemetry dashboard (#884)

### Fixes

- Fix support for LowCardinality strings (#857)
- Update trace queries to better handle time fields (#890)
- Dependency bumps

## 4.2.0

### Features

- Added `$__dateTimeFilter()` macro for conveniently filtering a PRIMARY KEY composed of Date and DateTime columns. A usage sketch follows.
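
As a rough usage sketch (assuming a hypothetical `logs` table whose primary key starts with an `EventDate` Date column followed by an `EventTime` DateTime column):

```sql
SELECT EventDate, count() AS events
FROM logs
-- Expands to $__dateFilter(EventDate) AND $__timeFilter(EventTime),
-- so both parts of the composite primary key can be used for pruning.
WHERE $__dateTimeFilter(EventDate, EventTime)
GROUP BY EventDate
ORDER BY EventDate
```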

## 4.1.0

### Features

- Added the ability to define column alias tables in the config, which simplifies query syntax for tables with a known schema.

## 4.0.8

### Fixes

- Fixed `IN` operator escaping the entire string (specifically with `Nullable(String)`), also added `FixedString(N)` (#830)
- Fixed query builder filter editor on alert rules page (#828)

## 4.0.7

- Upgrade dependencies

## 4.0.6

### Fixes

- Add support for configuring proxy options from context rather than environment variables (supported by updating `sqlds`) (#799)

## 4.0.5

### Fixes

- Fixed converter regex for `Nullable(IP)` and `Nullable(String)`. It won't match to `Array(Nullable(IP))` or `Array(Nullable(String))` any more. (#783)
- Updated `grafana-plugin-sdk-go` to fix a PDC issue. More details [here](https://github.com/grafana/grafana-plugin-sdk-go/releases/tag/v0.217.0) (#790)

## 4.0.4

### Fixes

- Changed trace timestamp table from the constant `otel_traces_trace_id_ts` to a suffix `_trace_id_ts` applied to the current table name.

## 4.0.3

### Features

- Added `$__fromTime_ms` macro that represents the dashboard "from" time in milliseconds using a `DateTime64(3)`
- Added `$__toTime_ms` macro that represents the dashboard "to" time in milliseconds using a `DateTime64(3)`
- Added `$__timeFilter_ms` macro that uses `DateTime64(3)` for millisecond precision time filtering (see the sketch after this list)
- Re-added query type selector in dashboard view. This was only visible in explore view, but somehow it affects dashboard view, and so it has been re-added. (#730)
- When OTel is enabled, Trace ID queries now use a skip index to optimize exact ID lookups on large trace datasets (#724)
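
A minimal sketch of the millisecond-precision macros in a grouped query, assuming a hypothetical `requests` table with a `DateTime64(3)` column named `event_time`:

```sql
SELECT
    $__timeInterval_ms(event_time) AS time,
    count() AS requests
FROM requests
-- $__timeFilter_ms compares against DateTime64(3) bounds, so sub-second
-- portions of the dashboard time range are not truncated to whole seconds.
WHERE $__timeFilter_ms(event_time)
GROUP BY time
ORDER BY time
```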

### Fixes

- Fixed performance issues caused by `$__timeFilter` using a `DateTime64(3)` instead of `DateTime` (#699)
- Fixed trace queries from rounding span durations under 1ms to `0` (#720)
- Fixed AST error when including Grafana macros/variables in SQL (#714)
- Fixed empty builder options when switching from SQL Editor back to Query Editor
- Fix SQL Generator including "undefined" in `FROM` when database isn't defined
- Allow adding spaces in multi filters (such as `WHERE .. IN`)
- Fixed missing `AND` keyword when adding a filter to a Trace ID query

## 4.0.2

### Fixes
38 changes: 20 additions & 18 deletions panels/grafana-clickhouse-datasource/MANIFEST.txt
@@ -8,32 +8,34 @@ Hash: SHA512
"signedByOrg": "grafana",
"signedByOrgName": "Grafana Labs",
"plugin": "grafana-clickhouse-datasource",
"version": "4.0.2",
"time": 1706893008714,
"version": "4.4.0",
"time": 1726245931902,
"keyId": "7e4d0c6a708866e7",
"files": {
"CHANGELOG.md": "7f1a12e90ac791bd16c252d0c2176c0b363d87fecc1773aee2ec6de729490502",
"CHANGELOG.md": "580cccb7707725e8c059fa60952bb7ea40d36b3153ea56f8418afb9b24b84a02",
"LICENSE": "cdd15e614b50e88443fe574ad56bde5ba697d958a45376431638eea816e3bfc3",
"README.md": "242b7431b473b1a10a0ed58339c41f0824fa5a37e0db56619d3e520115842163",
"dashboards/cluster-analysis.json": "03061d7ee3a2b245e28f9291e5a72147d0c2830301cbb7eec934b0d14170b24e",
"dashboards/data-analysis.json": "8d87d43424b1e6cd34d1b7fd2f894b5bac4c13aa2f531598c060a80b19804829",
"README.md": "01c2a56425fa4fb836dcb30b7575b0ed031be483567704025e160497e5002e24",
"dashboards/cluster-analysis.json": "7f83d4d09cc6f045768f5bf47485b864dd6da03094f8607808dec77cb96902e5",
"dashboards/data-analysis.json": "71695f08dfad47f3d4da4c2d331a09eb8fe5789520d5566fffeecd626079caa1",
"dashboards/opentelemetry-clickhouse.json": "7000bb0d91bf0474eb5da966ee65fece8a591785e4a06f8dc7c4431fef35314a",
"dashboards/query-analysis.json": "1b2006a3f4142e512e50156a7d2fd8cf03a178019aaccc964c0283c8559298b3",
"go_plugin_build_manifest": "7d8b428feb6ede5ccf12b2f569440ab379a8a03fa1468646d0d85935e1cc6310",
"gpx_clickhouse_linux_amd64": "a4ae09c037607bf14ec78178649c575a1effe6daa130e1dae319aaedd7ffe331",
"dashboards/system-dashboards.json": "a47eb47b9cd0bea82a7276fc805bba214164e677f68fb1e821b0278b17bee7f0",
"go_plugin_build_manifest": "7980a82c9b5646c237a4e240e3879ba6a836566b6dbe746cb68c907921188732",
"gpx_clickhouse_linux_amd64": "1d34927dde5dfc3ee4e44e7d3060cdb0d4132b8770515d4a038a4cb77e991df3",
"img/logo.svg": "838199055d86584ff105e5e91013203a6acb7d3f2061ae27678786125ab11f09",
"module.js": "d1f61c74b2bbfdc359a9183c1afb6a8b669b1cafa75a139523b27232190cfaf2",
"module.js.map": "a270713b83e35bdc2fe81d5afe604a5c626930fb1d5b36e658a0e713b3dbdc08",
"plugin.json": "61dcb5cad8937cce0d988c067ab2f7232d84f5e2e51e34a42d5cf078a6cc6530"
"module.js": "5479144fa6aa0d4254153c2e25568cd1bf11af7f91c4cbd308203a59f94e226c",
"module.js.map": "545a409ec85976c4ab66dc479eecf3de821d606eaf6883cda43f932a9a95f024",
"plugin.json": "c04b488966776c58a8deafbabc113acd44dc245035c9eaaf8776102019fca131"
}
}
-----BEGIN PGP SIGNATURE-----
Version: OpenPGP.js v4.10.10
Version: OpenPGP.js v4.10.11
Comment: https://openpgpjs.org

wrgEARMKAAYFAmW9HtAAIQkQfk0ManCIZucWIQTzOyW2kQdOhGNlcPN+TQxq
cIhm53dVAgkBjRvAp83TqiWhhDbHjB/EWJogo9/pqPvoDsyc7aLb3+zm+geI
ICRlkIPUGcvLXmaWYjiIgZtnpmSJpz6cpWNUMIgCCJqKnaNLjQwhOd0T6o3H
lDnbWqbnaphxKAYx2ELP1OhyVpZDDKpCuIlPEWJtTjVXxXhsIiO418AYaYxt
zh/UOtXE
=39z5
wrkEARMKAAYFAmbkbCwAIQkQfk0ManCIZucWIQTzOyW2kQdOhGNlcPN+TQxq
cIhm55bwAgkAJRo3J8E0fgJhZDy4iWPdgdra/6ErAdqYc3YDTHEFCVWtU2j1
WNGq5SMAvakm59R7aJmCP5I+9zowsEAi93NkKi0CCQCuNfkaaYqDZoP71gSd
RkeAtao81WyrsKRx0SVho38UpQ+B2rEBfCpEtjinpgxuosQOcCaImJPGatnX
bcT6fbtWZg==
=9Dsz
-----END PGP SIGNATURE-----
43 changes: 24 additions & 19 deletions panels/grafana-clickhouse-datasource/README.md
@@ -166,15 +166,16 @@ If using the [Open Telemetry Collector and ClickHouse exporter](https://github.c

```sql
SELECT
TraceId AS traceID,
SpanId AS spanID,
SpanName AS operationName,
ParentSpanId AS parentSpanID,
ServiceName AS serviceName,
Duration / 1000000 AS duration,
Timestamp AS startTime,
arrayMap(key -> map('key', key, 'value', SpanAttributes[key]), mapKeys(SpanAttributes)) AS tags,
arrayMap(key -> map('key', key, 'value', ResourceAttributes[key]), mapKeys(ResourceAttributes)) AS serviceTags
TraceId AS traceID,
SpanId AS spanID,
SpanName AS operationName,
ParentSpanId AS parentSpanID,
ServiceName AS serviceName,
Duration / 1000000 AS duration,
Timestamp AS startTime,
arrayMap(key -> map('key', key, 'value', SpanAttributes[key]), mapKeys(SpanAttributes)) AS tags,
arrayMap(key -> map('key', key, 'value', ResourceAttributes[key]), mapKeys(ResourceAttributes)) AS serviceTags,
if(StatusCode IN ('Error', 'STATUS_CODE_ERROR'), 2, 0) AS statusCode
FROM otel.otel_traces
WHERE TraceId = '61d489320c01243966700e172ab37081'
ORDER BY startTime ASC
@@ -191,16 +192,20 @@ FROM test_data
WHERE $__timeFilter(date_time)
```

| Macro | Description | Output example |
|----------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------|
| *$__timeFilter(columnName)* | Replaced by a conditional that filters the data (using the provided column) based on the time range of the panel in milliseconds | `time >= toDateTime64(1480001790/1000, 3) AND time <= toDateTime64(1482576232/1000, 3) )` |
| *$__dateFilter(columnName)* | Replaced by a conditional that filters the data (using the provided column) based on the date range of the panel | `date >= '2022-10-21' AND date <= '2022-10-23' )` |
| *$__fromTime* | Replaced by the starting time of the range of the panel casted to `DateTime64(3)` | `toDateTime64(1415792726371/1000, 3)` |
| *$__toTime* | Replaced by the ending time of the range of the panel casted to `DateTime64(3)` | `toDateTime64(1415792726371/1000, 3)` |
| *$__interval_s* | Replaced by the interval in seconds | `20` |
| *$__timeInterval(columnName)* | Replaced by a function calculating the interval based on window size in seconds, useful when grouping | `toStartOfInterval(toDateTime(column), INTERVAL 20 second)` |
| *$__timeInterval_ms(columnName)* | Replaced by a function calculating the interval based on window size in milliseconds, useful when grouping | `toStartOfInterval(toDateTime64(column, 3), INTERVAL 20 millisecond)` |
| *$__conditionalAll(condition, $templateVar)* | Replaced by the first parameter when the template variable in the second parameter does not select every value. Replaced by the 1=1 when the template variable selects every value. | `condition` or `1=1` |
| Macro | Description | Output example |
|----------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------|
| *$__dateFilter(columnName)* | Replaced by a conditional that filters the data (using the provided column) based on the date range of the panel | `date >= toDate('2022-10-21') AND date <= toDate('2022-10-23')` |
| *$__timeFilter(columnName)* | Replaced by a conditional that filters the data (using the provided column) based on the time range of the panel in seconds | `time >= toDateTime(1415792726) AND time <= toDateTime(1447328726)` |
| *$__timeFilter_ms(columnName)* | Replaced by a conditional that filters the data (using the provided column) based on the time range of the panel in milliseconds | `time >= fromUnixTimestamp64Milli(1415792726123) AND time <= fromUnixTimestamp64Milli(1447328726456)` |
| *$__dateTimeFilter(dateColumn, timeColumn)* | Shorthand that combines $__dateFilter() AND $__timeFilter() using separate Date and DateTime columns. | `$__dateFilter(dateColumn) AND $__timeFilter(timeColumn)` |
| *$__fromTime*                                | Replaced by the starting time of the range of the panel cast to `DateTime`                                                                                                             | `toDateTime(1415792726)`                                                                                |
| *$__toTime*                                  | Replaced by the ending time of the range of the panel cast to `DateTime`                                                                                                               | `toDateTime(1447328726)`                                                                                |
| *$__fromTime_ms*                             | Replaced by the starting time of the range of the panel cast to `DateTime64(3)`                                                                                                        | `fromUnixTimestamp64Milli(1415792726123)`                                                               |
| *$__toTime_ms*                               | Replaced by the ending time of the range of the panel cast to `DateTime64(3)`                                                                                                          | `fromUnixTimestamp64Milli(1447328726456)`                                                               |
| *$__interval_s* | Replaced by the interval in seconds | `20` |
| *$__timeInterval(columnName)* | Replaced by a function calculating the interval based on window size in seconds, useful when grouping | `toStartOfInterval(toDateTime(column), INTERVAL 20 second)` |
| *$__timeInterval_ms(columnName)* | Replaced by a function calculating the interval based on window size in milliseconds, useful when grouping | `toStartOfInterval(toDateTime64(column, 3), INTERVAL 20 millisecond)` |
| *$__conditionalAll(condition, $templateVar)* | Replaced by the first parameter when the template variable in the second parameter does not select every value. Replaced by `1=1` when the template variable selects every value.      | `condition` or `1=1`                                                                                    |
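
As a rough illustration of how several of these macros compose in one query (assuming a hypothetical `logs` table and a `$service` template variable):

```sql
SELECT
    $__timeInterval(log_time) AS time,
    count() AS log_count
FROM logs
WHERE $__timeFilter(log_time)
  -- Collapses to 1=1 when the $service variable selects every value
  AND $__conditionalAll(service IN (${service:singlequote}), $service)
GROUP BY time
ORDER BY time
```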

The plugin also supports notation using braces {}. Use this notation when queries are needed inside parameters.

@@ -1376,7 +1376,7 @@
}
},
"queryType": "sql",
"rawSql": "SELECT concatAssumeInjective(database, '.', table) as db_table, mutation_id, command, create_time, parts_to_do_names, is_done, latest_failed_part, if(latest_fail_time = '1970-01-01 01:00:00', 'success', 'failure') as success, if(latest_fail_time = '1970-01-01 01:00:00', '-', CAST(latest_fail_time, 'String')) as fail_time, latest_fail_reason FROM system.mutations WHERE database IN (${database:singlequote}) ORDER BY is_done ASC, create_time DESC LIMIT 10",
"rawSql": "SELECT concatAssumeInjective(database, '.', table) as db_table, mutation_id, command, create_time, parts_to_do_names, is_done, latest_failed_part, if(latest_fail_time = '1970-01-01 00:00:00', 'success', 'failure') as success, if(latest_fail_time = '1970-01-01 00:00:00', '-', CAST(latest_fail_time, 'String')) as fail_time, latest_fail_reason FROM system.mutations WHERE database IN (${database:singlequote}) ORDER BY is_done ASC, create_time DESC LIMIT 10",
"refId": "Merges"
}
],
@@ -1093,7 +1093,7 @@
}
},
"queryType": "sql",
"rawSql": "SELECT name,\n engine,\n tables,\n partitions,\n parts,\n formatReadableSize(bytes_on_disk) \"disk_size\",\n col_count,\n total_rows,\n formatReadableSize(data_uncompressed_bytes) as \"uncompressed_size\"\nFROM system.databases db\n LEFT JOIN ( SELECT database,\n uniq(table) \"tables\",\n uniq(table, partition) \"partitions\",\n count() AS parts,\n sum(bytes_on_disk) \"bytes_on_disk\",\n sum(data_compressed_bytes) as \"data_compressed_bytes\",\n sum(rows) as total_rows,\n max(col_count) as \"col_count\"\n FROM system.parts AS parts\n JOIN (SELECT database, count() as col_count\n FROM system.columns\n WHERE database IN (${database}) AND table IN (${table})\n GROUP BY database) as col_stats\n ON parts.database = col_stats.database\n WHERE database IN (${database}) AND active AND table IN (${table})\n GROUP BY database) AS db_stats ON db.name = db_stats.database\nWHERE database IN (${database}) AND lower(name) != 'information_schema'\nORDER BY bytes_on_disk DESC\nLIMIT 10;",
"rawSql": "SELECT name,\n engine,\n tables,\n partitions,\n parts,\n formatReadableSize(bytes_on_disk) \"disk_size\",\n col_count,\n total_rows,\n formatReadableSize(data_uncompressed_bytes) as \"uncompressed_size\"\nFROM system.databases db\n LEFT JOIN ( SELECT database,\n uniq(table) \"tables\",\n uniq(table, partition) \"partitions\",\n count() AS parts,\n sum(bytes_on_disk) \"bytes_on_disk\",\n sum(data_uncompressed_bytes) as \"data_uncompressed_bytes\",\n sum(rows) as total_rows,\n max(col_count) as \"col_count\"\n FROM system.parts AS parts\n JOIN (SELECT database, count() as col_count\n FROM system.columns\n WHERE database IN (${database}) AND table IN (${table})\n GROUP BY database) as col_stats\n ON parts.database = col_stats.database\n WHERE database IN (${database}) AND active AND table IN (${table})\n GROUP BY database) AS db_stats ON db.name = db_stats.database\nWHERE database IN (${database}) AND lower(name) != 'information_schema'\nORDER BY bytes_on_disk DESC\nLIMIT 10;",
"refId": "A"
}
],