Add IME doc #19506

Open · wants to merge 11 commits into base: master
1 change: 1 addition & 0 deletions TOC.md
Original file line number Diff line number Diff line change
@@ -319,6 +319,7 @@
- [Tune TiKV Threads](/tune-tikv-thread-performance.md)
- [Tune TiKV Memory](/tune-tikv-memory-performance.md)
- [TiKV Follower Read](/follower-read.md)
- [TiKV MVCC In-Memory Engine](/tikv-in-memory-engine.md)
- [Tune Region Performance](/tune-region-performance.md)
- [Tune TiFlash Performance](/tiflash/tune-tiflash-performance.md)
- [Coprocessor Cache](/coprocessor-cache.md)
8 changes: 5 additions & 3 deletions analyze-slow-queries.md
@@ -10,7 +10,7 @@
1. Among many queries, identify which types of queries are slow.
2. Analyze why these types of queries are slow.

You can easily perform step 1 using the [slow query log](/dashboard/dashboard-slow-query.md) and the [statement summary table](/statement-summary-tables.md) features. It is recommended to use [TiDB Dashboard](/dashboard/dashboard-intro.md), which integrates the two features and directly displays the slow queries in your browser.

This document focuses on how to perform step 2: analyzing why these types of queries are slow.

@@ -98,9 +98,9 @@

The log above shows that a `cop-task` sent to the `10.6.131.78` instance waits `110ms` before being executed. It indicates that this instance is busy. You can check the CPU monitoring of that time to confirm the cause.

#### Too many outdated keys
#### Expired MVCC versions and excessive keys

A TiKV instance has a large amount of outdated data, which needs to be cleaned up during data scans. This impacts the processing speed.
Too many expired MVCC versions on TiKV, or a long GC time, can cause excessive MVCC versions to accumulate. This slows down scans because these redundant MVCC versions must be processed during reads.


Check `Total_keys` and `Processed_keys`. If they differ greatly, the TiKV instance has too many keys from older versions.

@@ -110,6 +110,8 @@
...
```

TiDB v8.5.0 introduces the TiKV MVCC In-Memory Engine feature, which can accelerate this type of slow query. For more information, see [TiKV MVCC In-Memory Engine](/tikv-in-memory-engine.md).

### Other key stages are slow

#### Slow in getting timestamps
Binary file added media/tikv-ime-data-organization.png
33 changes: 33 additions & 0 deletions tikv-configuration-file.md
@@ -2501,3 +2501,36 @@ Configuration items related to [Load Base Split](/configure-load-base-split.md).

+ Specifies the amount of data sampled by Heap Profiling each time, rounding up to the nearest power of 2.
+ Default value: `512KiB`

## in-memory-engine <span class="version-mark">New in v8.5.0</span>

Configuration items related to the TiKV MVCC in-memory engine in the storage layer.

### `enable` <span class="version-mark">New in v8.5.0</span>

> **Note:**
>
> This configuration item cannot be queried via SQL statements but can be configured in the configuration file.

+ Whether to enable the in-memory engine to accelerate multi-version queries. For more information about the in-memory engine, see [TiKV MVCC In-Memory Engine](/tikv-in-memory-engine.md).
+ Default value: `false` (the in-memory engine is disabled)

### `capacity` <span class="version-mark">New in v8.5.0</span>

> **Note:**
>
> + When the in-memory engine is enabled, `block-cache.capacity` automatically decreases by 10%.
> + When you manually configure `capacity`, `block-cache.capacity` does not automatically decrease. In this case, you need to adjust its value manually to avoid OOM.

+ The maximum memory size that the in-memory engine can use. The default value is 10% of the system memory, up to a maximum of 5 GiB. To use more memory, you can manually configure a larger value.
+ Default value: 10% of the system memory

### `gc-run-interval` <span class="version-mark">New in v8.5.0</span>

+ Controls the interval at which the in-memory engine performs GC on the cached MVCC versions. Decreasing this value increases the GC frequency and reduces the number of cached MVCC versions, but also increases GC CPU consumption and the probability of in-memory engine cache misses.
+ Default value: `3m`

### `mvcc-amplification-threshold` <span class="version-mark">New in v8.5.0</span>

+ Controls the MVCC read amplification threshold above which the in-memory engine selects a Region for loading. When the number of MVCC versions processed to read a row record in a Region exceeds this threshold, the Region might be loaded into the in-memory engine.
+ Default value: `10`
131 changes: 131 additions & 0 deletions tikv-in-memory-engine.md
@@ -0,0 +1,131 @@
---
title: TiKV MVCC In-Memory Engine
summary: Learn the applicable scenarios and working principles of the in-memory engine, and how to use the in-memory engine to accelerate queries for MVCC versions.
---

# TiKV MVCC In-Memory Engine

TiKV MVCC In-Memory Engine (IME) is primarily used to accelerate queries that need to scan a large number of MVCC historical versions, that is, [the total number of versions scanned (total_keys) is much greater than the number of versions processed (processed_keys)](/analyze-slow-queries.md#expired-mvcc-versions-and-excessive-keys).


The TiKV MVCC in-memory engine is suitable for the following scenarios:

- An application requires frequent queries on frequently updated or deleted records.
- An application requires adjusting the [`tidb_gc_life_time`](/garbage-collection-configuration.md#garbage-collection-configuration) to make TiDB retain historical versions for a longer period (for example, 24 hours).
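
For the second scenario, the following sketch shows how you might check and extend the GC retention window via SQL. The variable is from the linked garbage collection doc; the 24-hour value is only an illustration:

```sql
-- Check the current GC retention window (the default is 10m0s).
SHOW VARIABLES LIKE 'tidb_gc_life_time';

-- Retain historical versions for 24 hours. This increases the number of
-- MVCC versions that scans must process, which is the kind of workload
-- the in-memory engine is designed to accelerate.
SET GLOBAL tidb_gc_life_time = '24h';
```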

## Working principles

The TiKV MVCC in-memory engine caches the most recently written MVCC versions in memory and implements an MVCC GC mechanism independent of TiDB. This allows it to quickly perform GC on the MVCC versions in memory, which reduces the number of versions scanned during queries and thereby lowers request latency and CPU overhead.

The following diagram illustrates how TiKV organizes MVCC versions.

<div style="text-align: center;"><img src="./media/tikv-ime-data-organization.png" alt="IME caches recent versions to reduce CPU overhead" width="400" /></div>

The diagram shows two rows of records, each with 9 MVCC versions. The behavior is compared between the cases with and without IME enabled:

- On the left, without IME enabled, the table records are stored in RocksDB in ascending order by primary key, with the same row's MVCC versions adjacent to each other.
- On the right, with IME enabled, the data in RocksDB is consistent with the left side, and IME caches the latest 2 MVCC versions of the 2 rows of records.
- When TiKV processes a scan request with a range of `[k1, k2]` and a start timestamp of `8`, the left side without IME enabled needs to process 11 MVCC versions, while the right side with IME enabled only needs to process 4 MVCC versions, reducing request latency and CPU consumption.
- When TiKV processes a scan request with a range of `[k1, k2]` and a start timestamp of `7`, IME does not cache the historical versions that need to be read, so the cache misses and TiKV falls back to reading the data from RocksDB.

## Usage

Enabling IME requires adjusting the TiKV configuration and restarting TiKV. The following example describes the configuration items:

```toml
[in-memory-engine]
# This parameter is the switch for the in-memory engine feature, which is disabled by default. You can set it to true to enable it.
enable = false
#
# This parameter controls the memory capacity that the in-memory engine can use. The default value is 10% of the system memory, up to a maximum of 5 GiB.
# You can manually configure it to use more memory.
# Note: when the in-memory engine is enabled, block-cache.capacity is reduced by 10%.
#capacity = "5GiB"
#
# This parameter controls the interval at which the in-memory engine performs GC on the cached MVCC versions.
# The default value is 3 minutes, meaning that GC is performed on the cached MVCC versions every 3 minutes.
# Decreasing this value increases the GC frequency and reduces the number of MVCC versions, but also increases GC CPU consumption and the probability of cache misses.
#gc-run-interval = "3m"
#
# This parameter controls the threshold for the in-memory engine to select and load Regions based on MVCC read amplification.
# The default value is 10, meaning that when the number of MVCC versions processed to read a row record in a Region exceeds 10, the Region might be loaded into the in-memory engine.
#mvcc-amplification-threshold = 10
```

> **Note:**
>
> + The in-memory engine is disabled by default. After you enable it, you need to restart TiKV.
> + Except for `enable`, all the other configuration items can be dynamically adjusted.
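
As a sketch of dynamic adjustment (assuming a running cluster; the exact configuration item paths follow the TOML section above), you can inspect and change the non-`enable` items through SQL without restarting TiKV:

```sql
-- View the current in-memory engine configuration on TiKV instances.
-- The `enable` item does not appear here because it cannot be queried via SQL.
SHOW CONFIG WHERE type = 'tikv' AND name LIKE 'in-memory-engine%';

-- Dynamically shorten the GC interval for cached MVCC versions.
SET CONFIG tikv `in-memory-engine.gc-run-interval` = '2m';

-- Dynamically lower the MVCC amplification threshold for Region loading.
SET CONFIG tikv `in-memory-engine.mvcc-amplification-threshold` = 5;
```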

### Automatic loading

After you enable the in-memory engine, Regions will be automatically loaded based on their read traffic and MVCC amplification. The process is as follows:

1. Regions are sorted by the number of `next` (RocksDB Iterator Next API) and `prev` (RocksDB Iterator Prev API) operations in the recent time period.
2. Regions are filtered using `mvcc-amplification-threshold` (`10` by default). MVCC amplification measures read amplification and is calculated as (`next` + `prev`) / `processed_keys`.
3. The top N Regions with severe MVCC amplification are loaded, where N is based on memory estimation.

IME also periodically performs Region eviction. The process is as follows:

1. IME evicts Regions with low read traffic or low MVCC amplification.
2. If memory usage reaches 90% of `capacity` and new Regions need to be loaded, IME will filter Regions based on read traffic and evict Regions.

## Compatibility

+ [BR](/br/br-use-overview.md): IME can be used with BR, but BR restore will evict IME Regions involved in the restore. After BR restore is complete, if the corresponding Region is still a hotspot, it will be automatically loaded by IME.
+ [TiDB Lightning](/tidb-lightning/tidb-lightning-overview.md): IME can be used with TiDB Lightning, but TiDB Lightning's physical import mode will evict IME Regions involved in the import. After TiDB Lightning completes the import, if the corresponding Region is still a hotspot, it will be automatically loaded by IME.
+ [Follower Read](/develop/dev-guide-use-follower-read.md) and [Stale Read](/develop/dev-guide-use-stale-read.md): IME can be enabled with these two features, but IME can only accelerate Leader coprocessor requests and cannot accelerate Follower Read and Stale Read.
+ [`FLASHBACK CLUSTER`](/sql-statements/sql-statement-flashback-cluster.md): IME can be used with Flashback, but Flashback will cause IME cache invalidation. After Flashback is complete, IME will automatically load hotspot Regions.

## FAQ

### Can in-memory engine reduce write latency and increase write throughput?

No. The in-memory engine can only accelerate read requests that scan a large number of MVCC versions.


### How do I determine whether the in-memory engine can improve performance in my scenario?

You can execute the following SQL statement to check whether there are slow queries in which `Total_keys` is much greater than `Process_keys`.


```sql
SELECT
Time,
DB,
Index_names,
Process_keys,
Total_keys,
CONCAT(
LEFT(REGEXP_REPLACE(Query, '\\s+', ' '), 20),
'...',
RIGHT(REGEXP_REPLACE(Query, '\\s+', ' '), 10)
) as Query,
Query_time,
Cop_time,
Process_time
FROM
INFORMATION_SCHEMA.SLOW_QUERY
WHERE
Is_internal = 0
AND Cop_time > 1
AND Process_keys > 0
AND Total_keys / Process_keys >= 10
AND Time >= NOW() - INTERVAL 10 MINUTE
ORDER BY Total_keys DESC
LIMIT 5;
```

Example:

The following result shows queries with severe MVCC amplification on the `db1.tbl1` table: TiKV processes 1358517 MVCC versions but returns only 2.

```
+----------------------------+-----+-------------------+--------------+------------+-----------------------------------+--------------------+--------------------+--------------------+
| Time | DB | Index_names | Process_keys | Total_keys | Query | Query_time | Cop_time | Process_time |
+----------------------------+-----+-------------------+--------------+------------+-----------------------------------+--------------------+--------------------+--------------------+
| 2024-11-18 11:56:10.303228 | db1 | [tbl1:some_index] | 2 | 1358517 | SELECT * FROM tbl1 ... LIMIT 1 ; | 1.2581352350000001 | 1.25651062 | 1.251837479 |
| 2024-11-18 11:56:11.556257 | db1 | [tbl1:some_index] | 2 | 1358231 | SELECT * FROM tbl1 ... LIMIT 1 ; | 1.252694002 | 1.251129038 | 1.240532546 |
| 2024-11-18 12:00:10.553331 | db1 | [tbl1:some_index] | 2 | 1342914 | SELECT * FROM tbl1 ... LIMIT 1 ; | 1.473941872 | 1.4720495900000001 | 1.3666103170000001 |
| 2024-11-18 12:01:52.122548 | db1 | [tbl1:some_index] | 2 | 1128064 | SELECT * FROM tbl1 ... LIMIT 1 ; | 1.058942591 | 1.056853228 | 1.023483875 |
| 2024-11-18 12:01:52.107951 | db1 | [tbl1:some_index] | 2 | 1128064 | SELECT * FROM tbl1 ... LIMIT 1 ; | 1.044847031 | 1.042546122 | 0.934768555 |
+----------------------------+-----+-------------------+--------------+------------+-----------------------------------+--------------------+--------------------+--------------------+
5 rows in set (1.26 sec)
```
4 changes: 4 additions & 0 deletions troubleshoot-hot-spot-issues.md
@@ -184,3 +184,7 @@
## Scatter read hotspots

In a read hotspot scenario, the hotspot TiKV node cannot process read requests in time, resulting in the read requests queuing. However, not all TiKV resources are exhausted at this time. To reduce latency, TiDB v7.1.0 introduces the load-based replica read feature, which allows TiDB to read data from other TiKV nodes without queuing on the hotspot TiKV node. You can control the queue length of read requests using the [`tidb_load_based_replica_read_threshold`](/system-variables.md#tidb_load_based_replica_read_threshold-new-in-v700) system variable. When the estimated queue time of the leader node exceeds this threshold, TiDB prioritizes reading data from follower nodes. This feature can improve read throughput by 70% to 200% in a read hotspot scenario compared to not scattering read hotspots.
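
For example, to make TiDB fall back to follower reads sooner under hotspot load, you can lower this threshold. The following is a sketch; the `500ms` value is illustrative, not a recommendation:

```sql
-- Check the current threshold for load-based replica read.
SHOW VARIABLES LIKE 'tidb_load_based_replica_read_threshold';

-- Prefer reading from followers once the leader's estimated
-- queue time exceeds 500ms.
SET GLOBAL tidb_load_based_replica_read_threshold = '500ms';
```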

## Use TiKV MVCC in-memory engine to mitigate read hotspots caused by high MVCC read amplification

During long GC times or frequent updates and deletions, read hotspots might occur due to scanning a large number of MVCC versions. To alleviate this type of hotspot, you can enable the in-memory engine feature. For more information, see [TiKV MVCC In-Memory Engine](/tikv-in-memory-engine.md).
