additional storage capacity for decompressing chunks if you need to.
These are the main steps for decompressing chunks in preparation for inserting
or backfilling data:
1. Temporarily turn off any existing compression policy. This stops the policy
   from trying to compress chunks that you are currently working on.
1. Decompress chunks.
1. Perform the insertion or backfill.
1. Re-enable the compression policy. This will re-compress the chunks you worked on.
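As a rough sketch, the steps above might look like this in SQL. This assumes TimescaleDB 2.x job APIs (`alter_job`, `decompress_chunk`, and the `timescaledb_information.jobs` view); the `metrics` column names and the chunk name are hypothetical placeholders:

```sql
-- 1. Pause the compression policy job so it does not recompress your chunks
SELECT alter_job(job_id, scheduled => false)
FROM timescaledb_information.jobs
WHERE proc_name = 'policy_compression'
  AND hypertable_name = 'metrics';

-- 2. Decompress the chunk(s) you need to modify
SELECT decompress_chunk('_timescaledb_internal._hyper_72_37_chunk');

-- 3. Perform the insertion or backfill (column names are hypothetical)
INSERT INTO metrics (time, device_id, value)
VALUES ('2021-01-01 00:00:00+00', 1, 42.0);

-- 4. Re-enable the policy; it re-compresses the chunks on its next run
SELECT alter_job(job_id, scheduled => true)
FROM timescaledb_information.jobs
WHERE proc_name = 'policy_compression'
  AND hypertable_name = 'metrics';
```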
SELECT tableoid::regclass FROM metrics
------------------------------------------
_timescaledb_internal._hyper_72_37_chunk
```
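With the chunk name in hand, you can decompress it directly. A minimal sketch using the chunk shown above and the TimescaleDB `decompress_chunk` function:

```sql
SELECT decompress_chunk('_timescaledb_internal._hyper_72_37_chunk');
```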
# Backfill historical data on compressed chunks
When you backfill data, you insert data with a timestamp well in the past, which means it belongs to a chunk that has already been compressed. If you need to insert a batch of backfilled data, the [TimescaleDB extras][timescaledb-extras] GitHub repository includes functions for [backfilling batch data to compressed chunks][timescaledb-extras-backfill].
<highlight type="warning">
Compression alters data on your disk, so always back up before you start!
</highlight>
In the example below, we backfill data into a temporary table. Temporary
tables are short-lived, and only exist for the duration of the database
session. If you backfill regularly, you might prefer a normal table
instead, so that multiple writers can insert into the table at the same
time before the `decompress_backfill` process runs.
63
+
64
+
To use this procedure:
1. Create a table with the same schema as the hypertable (in
   this example, `cpu`) that we are backfilling into:
```sql
CREATE TEMPORARY TABLE cpu_temp AS SELECT * FROM cpu WITH NO DATA;
```
1. Insert data into the backfill table.
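For example, assuming hypothetical `cpu` columns `time`, `host`, and `usage`:

   ```sql
   -- Column names are hypothetical; match your hypertable's schema
   INSERT INTO cpu_temp (time, host, usage) VALUES
     ('2020-06-01 10:00:00+00', 'host-1', 55.2),
     ('2020-06-01 10:05:00+00', 'host-2', 57.8);
   ```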
1. Use a supplied backfill procedure to perform the remaining steps: halt the
   compression policy, identify the compressed chunks that the backfilled
   data corresponds to, decompress those chunks, insert data from the backfill
   table into the main hypertable, and then re-enable the compression policy:
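A call along these lines completes the step. The `decompress_backfill` procedure comes from the TimescaleDB extras repository, and the exact argument names may differ between versions of that script:

```sql
CALL decompress_backfill(
  staging_table => 'cpu_temp',
  destination_hypertable => 'cpu'
);
```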
0 commit comments