From 4aafd0bad5256b4d331a7752f70a8079f3ba67f1 Mon Sep 17 00:00:00 2001
From: kennytm
Date: Wed, 8 Apr 2026 00:44:22 +0800
Subject: [PATCH] *: convert remaining sequence diagrams from PNG to Mermaid

---
 br/br-log-architecture.md             |  54 +++++++++++++-
 br/br-snapshot-architecture.md        |  45 +++++++++++-
 dm/feature-shard-merge-pessimistic.md |  30 +++++++-
 pessimistic-transaction.md            | 101 +++++++++++++++++++++++++-
 4 files changed, 222 insertions(+), 8 deletions(-)

diff --git a/br/br-log-architecture.md b/br/br-log-architecture.md
index c35c13aedfc14..27e2c09d51ac0 100644
--- a/br/br-log-architecture.md
+++ b/br/br-log-architecture.md
@@ -17,7 +17,35 @@ The log backup and PITR architecture is as follows:
 
 The process of a cluster log backup is as follows:
 
-![BR log backup process design](/media/br/br-log-backup-ts.png)
+```mermaid
+sequenceDiagram
+    actor User
+    participant BR
+    participant PD
+    participant TiKV
+    participant TiDB
+    participant Storage
+
+    User->>BR: Run `br log start`
+    BR->>PD: Register log backup task
+    TiKV->>PD: Fetch log backup task
+    par TiKV handles the local log backup task
+        loop
+            TiKV->>TiKV: Read KV change data
+            TiKV->>PD: Fetch global checkpoint ts
+            TiKV->>TiKV: Generate local metadata
+            TiKV->>Storage: Upload log data & metadata
+            TiKV->>PD: Configure GC
+        end
+    and
+        loop
+            TiDB->>TiKV: Watch backup progress
+            TiDB->>PD: Report global checkpoint ts
+        end
+    end
+    User->>BR: Run `br log status`
+    BR->>PD: Fetch status of log backup task
+```
 
 System components and key concepts involved in the log backup process:
 
@@ -57,7 +85,29 @@ The complete backup process is as follows:
 
 The process of PITR is as follows:
 
-![Point-in-time recovery process design](/media/br/pitr-ts.png)
+```mermaid
+sequenceDiagram
+    actor User
+    participant BR
+    participant TiKV
+    participant PD
+    participant Storage
+
+    User->>BR: Run `br restore point`
+    BR->>TiKV: Restore full data
+    loop restore log data
+        BR->>Storage: Read backup data
+        BR->>PD: Fetch Region info
+        BR->>TiKV: Request TiKV to restore data
+        loop TiKV handles restore request
+            TiKV->>Storage: Download KVs
+            TiKV->>TiKV: Rewrite KVs
+            TiKV->>TiKV: Apply KVs
+        end
+        TiKV-->>BR: Report restore result
+        BR->>BR: Handle all restore results
+    end
+```
 
 The complete PITR process is as follows:
 
diff --git a/br/br-snapshot-architecture.md b/br/br-snapshot-architecture.md
index 5af26ce3112cf..798d1ade802ad 100644
--- a/br/br-snapshot-architecture.md
+++ b/br/br-snapshot-architecture.md
@@ -17,7 +17,28 @@ The TiDB snapshot backup and restore architecture is as follows:
 
 The process of a cluster snapshot backup is as follows:
 
-![snapshot backup process design](/media/br/br-snapshot-backup-ts.png)
+```mermaid
+sequenceDiagram
+    actor User
+    participant BR
+    participant PD
+    participant TiKV
+    participant Storage
+
+    User->>BR: Run `br backup full`
+    BR->>PD: Pause GC
+    BR->>PD: Fetch TiKV and Region info
+    BR->>TiKV: Request TiKV to back up data
+    loop TiKV handles the local snapshot backup task
+        TiKV->>TiKV: Scan KVs
+        TiKV->>TiKV: Generate SST
+        TiKV->>Storage: Upload SST
+    end
+    TiKV-->>BR: Report backup result
+    BR->>BR: Handle all backup results
+    BR->>TiKV: Back up schemas
+    BR->>Storage: Upload backup metadata
+```
 
 The complete backup process is as follows:
 
@@ -54,7 +75,27 @@ The complete backup process is as follows:
 
 The process of a cluster snapshot restore is as follows:
 
-![snapshot restore process design](/media/br/br-snapshot-restore-ts.png)
+```mermaid
+sequenceDiagram
+    actor User
+    participant BR
+    participant PD
+    participant TiKV
+    participant Storage
+
+    User->>BR: Run `br restore`
+    BR->>PD: Pause Region schedule
+    BR->>TiKV: Restore schema
+    BR->>PD: Split and scatter Regions
+    BR->>TiKV: Request TiKV to restore data
+    loop TiKV handles restore request
+        TiKV->>Storage: Download SST
+        TiKV->>TiKV: Rewrite KVs
+        TiKV->>TiKV: Ingest SST
+    end
+    TiKV-->>BR: Report restore result
+    BR->>BR: Handle all restore results
+```
 
 The complete restore process is as
follows:

diff --git a/dm/feature-shard-merge-pessimistic.md b/dm/feature-shard-merge-pessimistic.md
index 349cbee9cdd8e..87594073595e1 100644
--- a/dm/feature-shard-merge-pessimistic.md
+++ b/dm/feature-shard-merge-pessimistic.md
@@ -57,7 +57,35 @@ Assume that the DDL statements of sharded tables are not processed during the mi
 
 This section shows how DM migrates DDL statements in the process of merging sharded tables based on the above example in the pessimistic mode.
 
-![shard-ddl-flow](/media/dm/shard-ddl-flow.png)
+```mermaid
+---
+config:
+  themeCSS: |
+    /* hide the ugly borders */
+    rect.rect {
+      stroke: none;
+    }
+---
+sequenceDiagram
+    autonumber
+    box rgba(0,255,0,0.08)
+        participant Worker1 as DM-worker 1
+    end
+    box rgba(255,255,0,0.08)
+        participant Master as DM-master
+    end
+    box rgba(0,255,0,0.08)
+        participant Worker2 as DM-worker 2
+    end
+
+    Worker1->>Master: DDL info
+    Master->>Worker1: DDL lock info
+    Worker2->>Master: DDL info
+    Master->>Worker2: DDL lock info
+    Master->>Worker1: DDL execute request
+    Worker1->>Master: DDL executed
+    Master-->>Worker2: DDL ignore request
+```
 
 In this example, `DM-worker-1` migrates the data from MySQL instance 1 and `DM-worker-2` migrates the data from MySQL instance 2. `DM-master` coordinates the DDL migration among multiple DM-workers. Starting from `DM-worker-1` receiving the DDL statements, the DDL migration process is simplified as follows:
 
diff --git a/pessimistic-transaction.md b/pessimistic-transaction.md
index 0f4d09fedffea..d36dd305f46d3 100644
--- a/pessimistic-transaction.md
+++ b/pessimistic-transaction.md
@@ -147,7 +147,39 @@ TiDB supports the following two isolation levels in the pessimistic transaction
 
 In the transaction commit process, pessimistic transactions and optimistic transactions have the same logic. Both transactions adopt the two-phase commit (2PC) mode. The important adaptation of pessimistic transactions is DML execution.
 
-![TiDB pessimistic transaction commit process](/media/pessimistic-transaction-commit.png)
+```mermaid
+---
+config:
+  themeCSS: |
+    /* workaround for https://github.com/mermaid-js/mermaid/issues/523 */
+    /* mark the two "Lock" arrows as red, by restyling the dashed arrows */
+    line.messageLine1 {
+      stroke: #d32f2f;
+      stroke-dasharray: none !important;
+    }
+    /* make sure the arrow heads inherit the stroke color (another bug in mermaid) */
+    #arrowhead path {
+      fill: context-stroke;
+      stroke: context-stroke;
+    }
+---
+sequenceDiagram
+    participant Client
+    participant TiDB
+    participant TiKV
+
+    Client->>TiDB: BEGIN
+    rect rgba(255, 0, 0, 0.08)
+        Client->>TiDB: DML
+        TiDB-->>TiKV: Lock
+        TiDB-->>TiKV: Lock
+    end
+    rect rgba(0, 0, 0, 0.04)
+        Client->>TiDB: COMMIT
+        TiDB->>TiKV: Prewrite
+        TiDB->>TiKV: Commit
+    end
+```
 
 The pessimistic transaction adds an `Acquire Pessimistic Lock` phase before 2PC. This phase includes the following steps:
 
@@ -155,7 +187,52 @@ The pessimistic transaction adds an `Acquire Pessimistic Lock` phase before 2PC.
 2. When the TiDB server receives a writing request from the client, the TiDB server initiates a pessimistic lock request to the TiKV server, and the lock is persisted to the TiKV server.
 3. (Same as the optimistic transaction mode) When the client sends the commit request, TiDB starts to perform the two-phase commit similar to the optimistic transaction mode.
 
-![Pessimistic transactions in TiDB](/media/pessimistic-transaction-in-tidb.png)
+```mermaid
+---
+title: Pessimistic Transaction in TiDB
+---
+sequenceDiagram
+    participant client
+    participant TiDB
+    participant PD
+    participant TiKV
+
+    client->>TiDB: begin
+    TiDB->>PD: get ts as start_ts
+    loop execute SQL
+        rect rgba(0, 0, 0, 0.04)
+            alt do read
+                TiDB->>TiKV: get data from TiKV with start_ts
+                TiDB-->>client: return read result
+            else do write
+                rect rgba(255, 0, 0, 0.08)
+                    loop until no write conflict
+                        TiDB->>PD: get ts as for_update_ts
+                        TiDB->>TiDB: write in cache
+                        TiDB->>TiKV: acquire pessimistic locks in parallel
+                    end
+                end
+                TiDB-->>client: return write result
+            end
+        end
+    end
+    client->>TiDB: commit
+    opt start 2PC
+        rect rgba(0, 0, 0, 0.04)
+            par prewrite
+                TiDB->>TiKV: prewrite each key in cache with start_ts in parallel
+            end
+        end
+        rect rgba(0, 0, 0, 0.04)
+            par commit
+                TiDB->>PD: get ts as commit_ts
+                TiDB->>TiKV: commit primary_key with commit_ts first
+                TiDB-->>client: success
+                TiDB->>TiKV: commit each secondary_key with commit_ts in parallel
+            end
+        end
+    end
+```
 
 ## Pipelined locking process
 
@@ -171,7 +248,25 @@ To reduce the overhead of locking, TiKV implements the pipelined locking process
 
 If the application logic relies on the locking or lock waiting mechanisms, or if you want to guarantee as much as possible the success rate of transaction commits even in the case of TiKV cluster anomalies, you should disable the pipelined locking feature.
 
-![Pipelined pessimistic lock](/media/pessimistic-transaction-pipelining.png)
+```mermaid
+---
+title: Pipelined pessimistic lock
+---
+sequenceDiagram
+    participant Client
+    participant TiDB
+    participant TiKV1
+    participant TiKV2
+    participant TiKV3
+
+    loop
+        Client->>TiDB: DML
+        TiDB->>TiKV1: Acquire pessimistic locks
+        TiKV1-->>TiDB: OK
+        TiKV1-)TiKV2: Log replication
+        TiKV1-)TiKV3: Log replication
+    end
+```
 
 This feature is enabled by default. To disable it, modify the TiKV configuration:
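The patch context ends at the sentence introducing the TiKV configuration, so the snippet itself lies outside the diff. For reference, a minimal sketch of the setting in question, assuming the documented `pipelined` switch under the `[pessimistic-txn]` section of the TiKV configuration file:

```toml
# Sketch of the TiKV configuration referred to above (unchanged by this patch).
[pessimistic-txn]
# Disable the pipelined pessimistic locking process; lock acquisition then
# waits for Raft log replication before returning success to TiDB.
pipelined = false
```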