How to Migrate TimescaleDB in Docker (Without Losing Your Data)
March 25, 2026
Written by David Viejo
Last updated March 25, 2026
If you're running TimescaleDB in Docker with the timescale/timescaledb-ha image, migrating to a new container isn't as simple as docker stop && docker run. There are several traps that can silently destroy your data:
- The volume mount mismatch -- the HA image uses PGDATA=/home/postgres/pgdata/data, not the standard /var/lib/postgresql/data
- pg_dumpall doesn't export hypertable chunk data -- your regular tables survive, but TimescaleDB hypertables come back empty
- Continuous aggregates are restored as regular views -- they lose their TimescaleDB-specific functionality
- POSTGRES_DB and the other POSTGRES_* env vars are ignored on existing volumes -- they only apply on the first initdb
This guide walks through the correct migration process -- including the gotchas that almost cost us 44GB of production data.
TL;DR: Migrating TimescaleDB in Docker requires three separate steps:
pg_dumpall for schema and regular tables, COPY (SELECT * FROM hypertable) TO STDOUT for hypertable data (plain COPY returns 0 rows), and manual recreation of continuous aggregates. The timescaledb-ha image uses /home/postgres/pgdata/data -- mounting to the wrong path triggers initdb and overwrites your volume.
Prerequisites
- A running TimescaleDB container (source)
- Docker installed on the target host
- Enough disk space for the dump (check your database sizes first)
Step 1: Assess Your Database
Before doing anything, understand what you're working with.
Check database sizes
docker exec YOUR_CONTAINER psql -U YOUR_USER -d YOUR_DB -c "
SELECT pg_database.datname, pg_size_pretty(pg_database_size(pg_database.datname))
FROM pg_database ORDER BY pg_database_size(pg_database.datname) DESC;"
Identify your hypertables
docker exec YOUR_CONTAINER psql -U YOUR_USER -d YOUR_DB -c "
SELECT hypertable_name FROM timescaledb_information.hypertables;"
Check table sizes and find bloat
docker exec YOUR_CONTAINER psql -U YOUR_USER -d YOUR_DB -c "
SELECT
schemaname || '.' || tablename AS table,
pg_size_pretty(pg_total_relation_size(schemaname || '.' || tablename)) AS total_size,
pg_size_pretty(pg_relation_size(schemaname || '.' || tablename)) AS data_size,
pg_size_pretty(pg_total_relation_size(schemaname || '.' || tablename) - pg_relation_size(schemaname || '.' || tablename)) AS index_size
FROM pg_tables
WHERE schemaname NOT IN ('pg_catalog', 'information_schema', '_timescaledb_catalog', '_timescaledb_config', '_timescaledb_internal')
ORDER BY pg_total_relation_size(schemaname || '.' || tablename) DESC
LIMIT 20;"
Count rows in hypertables
docker exec YOUR_CONTAINER psql -U YOUR_USER -d YOUR_DB -c "
SELECT 'events' AS tbl, count(*) FROM events
UNION ALL SELECT 'error_events', count(*) FROM error_events
UNION ALL SELECT 'status_checks', count(*) FROM status_checks;"
Save these counts -- you'll need them to verify the migration.
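One way to save them is to capture the counts into a file so the verification in Step 8 can diff them mechanically. This is a sketch using this guide's example tables and placeholder names; DRY_RUN=1 (the default here) prints each command instead of executing it against Docker.

```shell
# Sketch: snapshot pre-migration row counts to a file for later comparison.
# Table names are this guide's examples; YOUR_CONTAINER etc. are placeholders.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "would: $*"; else "$@"; fi; }

: > /tmp/pre_migration_counts
for t in events error_events status_checks; do
  run docker exec YOUR_CONTAINER psql -U YOUR_USER -d YOUR_DB -tAc \
    "SELECT '$t', count(*) FROM $t;" >> /tmp/pre_migration_counts
done
cat /tmp/pre_migration_counts
```

Set DRY_RUN=0 to run the queries for real; `psql -tAc` gives clean `table|count` lines that are easy to diff later.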
Step 2: Clean Up Before Migrating
Large tables with old data will slow your migration significantly. Truncate or trim what you don't need.
# Check how much you can trim (example: session replay events older than 15 days)
docker exec YOUR_CONTAINER psql -U YOUR_USER -d YOUR_DB -c "
SELECT
count(*) FILTER (WHERE timestamp < extract(epoch from now() - interval '15 days') * 1000) AS to_delete,
count(*) FILTER (WHERE timestamp >= extract(epoch from now() - interval '15 days') * 1000) AS to_keep
FROM session_replay_events;"
# Truncate if you don't need the data
docker exec YOUR_CONTAINER psql -U YOUR_USER -d YOUR_DB -c "TRUNCATE session_replay_events;"
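If the table you're trimming is itself a hypertable, drop_chunks is usually far cheaper than row-level DELETE (it drops whole chunks instead of scanning rows). A hedged sketch, assuming session_replay_events is a hypertable keyed on an epoch-milliseconds bigint column as in the query above; DRY_RUN=1 (the default here) only prints the SQL.

```shell
# Sketch: drop whole chunks older than 15 days instead of TRUNCATE/DELETE.
# Assumes an integer (epoch-millis) time dimension -- adjust if yours is a
# timestamptz column, where a plain interval works instead.
DRY_RUN=${DRY_RUN:-1}
SQL="SELECT drop_chunks('session_replay_events',
  older_than => (extract(epoch from now() - interval '15 days') * 1000)::bigint);"
if [ "$DRY_RUN" = 1 ]; then
  echo "would run: $SQL"
else
  docker exec YOUR_CONTAINER psql -U YOUR_USER -d YOUR_DB -c "$SQL"
fi
```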
Step 3: Dump Regular Tables with pg_dumpall
This captures your schema, roles, regular table data, and database structure -- but not hypertable chunk data.
docker exec YOUR_CONTAINER pg_dumpall -U YOUR_USER | gzip > /root/timescale_full_backup.sql.gz
ls -lh /root/timescale_full_backup.sql.gz
You'll see warnings about circular foreign-key constraints on hypertables, chunks, and continuous aggregates. These are normal and harmless.
pg_dumpall will NOT export data from TimescaleDB hypertables. The chunks are stored internally and regular COPY from the parent table returns 0 rows. We handle hypertable data separately in Step 6.
Step 4: Create the New Container
Create a fresh volume and container. Pay attention to the volume mount path -- this is where most people get burned.
# Create a new volume
docker volume create timescaledb_data_v2
# Start the new container
docker run -d \
--name timescaledb-new \
--network YOUR_NETWORK \
--restart unless-stopped \
--shm-size 2g \
-p 5453:5432 \
-v timescaledb_data_v2:/home/postgres/pgdata/data \
-e POSTGRES_USER=YOUR_USER \
-e POSTGRES_PASSWORD='YOUR_PASSWORD' \
-e POSTGRES_DB=YOUR_DB \
-e POSTGRES_HOST_AUTH_METHOD=scram-sha-256 \
-e TIMESCALEDB_TELEMETRY=off \
--health-cmd 'pg_isready -U YOUR_USER -d YOUR_DB' \
--health-interval 10s \
--health-timeout 5s \
--health-start-period 60s \
--health-retries 5 \
timescale/timescaledb-ha:pg17
Critical: Volume Mount Path
| Image | Correct Mount Path |
|---|---|
| timescale/timescaledb-ha | /home/postgres/pgdata/data |
| timescale/timescaledb | /var/lib/postgresql/data |
If you mount to the wrong path, the entrypoint runs initdb and creates a fresh empty cluster. Your volume gets overwritten with a blank database and your data is gone.
Use a different port
Publish the new container on a different host port (e.g., 5453 instead of 5452) so both containers can coexist. This gives you an easy rollback path.
Wait for the container to be healthy:
docker inspect --format='{{.State.Health.Status}}' timescaledb-new
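Rather than re-running docker inspect by hand, you can poll in a loop. wait_healthy is a hypothetical helper, not part of Docker itself:

```shell
# Sketch: poll container health until "healthy" or a timeout is reached.
wait_healthy() {
  name="$1"; timeout="${2:-120}"; elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    status="$(docker inspect --format='{{.State.Health.Status}}' "$name" 2>/dev/null)"
    [ "$status" = "healthy" ] && return 0
    sleep 5
    elapsed=$((elapsed + 5))
  done
  return 1   # never became healthy within the timeout
}
# Usage: wait_healthy timescaledb-new 180 && echo "ready"
```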
Step 5: Restore the Dump
gunzip -c /root/timescale_full_backup.sql.gz | docker exec -i timescaledb-new psql -U YOUR_USER -d YOUR_DB
You'll see harmless errors like "role already exists" and "database already exists" -- the init process already created these from the env vars.
Step 6: Migrate Hypertable Data
This is the step most guides miss. Hypertable data lives in internal chunks, so you need COPY (SELECT * FROM ...) syntax -- a plain COPY table_name TO ... returns 0 rows.
Pipe data directly from old to new, one table at a time:
# events
docker exec OLD_CONTAINER psql -U YOUR_USER -d YOUR_DB \
-c "COPY (SELECT * FROM events) TO STDOUT WITH CSV HEADER" | \
docker exec -i NEW_CONTAINER psql -U YOUR_USER -d YOUR_DB \
-c "COPY events FROM STDIN WITH CSV HEADER"
# error_events
docker exec OLD_CONTAINER psql -U YOUR_USER -d YOUR_DB \
-c "COPY (SELECT * FROM error_events) TO STDOUT WITH CSV HEADER" | \
docker exec -i NEW_CONTAINER psql -U YOUR_USER -d YOUR_DB \
-c "COPY error_events FROM STDIN WITH CSV HEADER"
# status_checks
docker exec OLD_CONTAINER psql -U YOUR_USER -d YOUR_DB \
-c "COPY (SELECT * FROM status_checks) TO STDOUT WITH CSV HEADER" | \
docker exec -i NEW_CONTAINER psql -U YOUR_USER -d YOUR_DB \
-c "COPY status_checks FROM STDIN WITH CSV HEADER"
Repeat for every hypertable. If you get duplicate key errors, the pg_dumpall restore already inserted some data -- truncate the table on the new container first, then re-run the COPY.
docker exec NEW_CONTAINER psql -U YOUR_USER -d YOUR_DB -c "TRUNCATE table_name;"
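If you have many hypertables, you can enumerate them from timescaledb_information.hypertables and loop instead of hand-writing one pipeline per table. copy_out_sql and copy_table are hypothetical helpers; container and credential names are this guide's placeholders.

```shell
# Sketch: build the COPY-out statement for one table, and a pipeline that
# streams it from the old container into the new one.
copy_out_sql() { printf "COPY (SELECT * FROM %s) TO STDOUT WITH CSV HEADER" "$1"; }

copy_table() {
  docker exec OLD_CONTAINER psql -U YOUR_USER -d YOUR_DB -c "$(copy_out_sql "$1")" |
  docker exec -i NEW_CONTAINER psql -U YOUR_USER -d YOUR_DB \
    -c "COPY $1 FROM STDIN WITH CSV HEADER"
}

# List hypertables on the old container, then copy each:
#   for t in $(docker exec OLD_CONTAINER psql -U YOUR_USER -d YOUR_DB -tAc \
#       "SELECT hypertable_name FROM timescaledb_information.hypertables;"); do
#     copy_table "$t"
#   done
copy_out_sql events
```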
Step 7: Recreate Continuous Aggregates
pg_dumpall restores continuous aggregates as regular views -- they lose their TimescaleDB functionality. Drop them and recreate them as proper continuous aggregates.
Drop the regular views
docker exec NEW_CONTAINER psql -U YOUR_USER -d YOUR_DB -c "
DROP VIEW IF EXISTS events_hourly CASCADE;
DROP VIEW IF EXISTS error_events_hourly CASCADE;
DROP VIEW IF EXISTS error_events_daily CASCADE;"
Note: use DROP VIEW, not DROP MATERIALIZED VIEW -- they were restored as regular views.
Recreate as continuous aggregates
docker exec NEW_CONTAINER psql -U YOUR_USER -d YOUR_DB -c "
CREATE MATERIALIZED VIEW events_hourly
WITH (timescaledb.continuous) AS
SELECT
time_bucket('1 hour', timestamp) AS bucket,
project_id,
COUNT(DISTINCT visitor_id) FILTER (WHERE visitor_id IS NOT NULL) AS unique_visitors,
COUNT(DISTINCT session_id) AS unique_sessions,
COUNT(*) FILTER (WHERE event_type = 'page_view') AS page_views,
COUNT(*) AS total_events
FROM events
GROUP BY bucket, project_id
WITH NO DATA;"
docker exec NEW_CONTAINER psql -U YOUR_USER -d YOUR_DB -c "
SELECT add_continuous_aggregate_policy('events_hourly',
start_offset => INTERVAL '3 hours',
end_offset => INTERVAL '1 hour',
schedule_interval => INTERVAL '10 minutes');"
Run each CREATE MATERIALIZED VIEW as a separate command -- they can't be batched in a single transaction.
Backfill historical data
docker exec NEW_CONTAINER psql -U YOUR_USER -d YOUR_DB \
-c "CALL refresh_continuous_aggregate('events_hourly', NOW() - INTERVAL '30 days', NOW());"
Run each refresh separately -- refresh_continuous_aggregate() cannot run inside a transaction block.
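A loop keeps each CALL in its own psql invocation, which satisfies that constraint. The aggregate names are this guide's examples; DRY_RUN=1 (the default here) prints the commands instead of running them.

```shell
# Sketch: backfill every continuous aggregate, one psql invocation per CALL,
# since refresh_continuous_aggregate() can't run inside a transaction block.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "would: $*"; else "$@"; fi; }

for cagg in events_hourly error_events_hourly error_events_daily; do
  run docker exec NEW_CONTAINER psql -U YOUR_USER -d YOUR_DB \
    -c "CALL refresh_continuous_aggregate('$cagg', NOW() - INTERVAL '30 days', NOW());"
done
```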
Step 8: Verify
Compare row counts between old and new:
# Old container
docker exec OLD_CONTAINER psql -U YOUR_USER -d YOUR_DB -c "
SELECT 'events' AS tbl, count(*) FROM events
UNION ALL SELECT 'error_events', count(*) FROM error_events
UNION ALL SELECT 'status_checks', count(*) FROM status_checks;"
# New container
docker exec NEW_CONTAINER psql -U YOUR_USER -d YOUR_DB -c "
SELECT 'events' AS tbl, count(*) FROM events
UNION ALL SELECT 'error_events', count(*) FROM error_events
UNION ALL SELECT 'status_checks', count(*) FROM status_checks;"
Counts should match (small differences are expected if your app is writing to both).
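To compare mechanically rather than by eye, you can diff the saved outputs. compare_counts is a hypothetical helper; feed it the output of the count queries captured from each container (psql -tAc gives clean `table|count` lines).

```shell
# Sketch: diff two saved count reports and report match or mismatch.
compare_counts() {
  if diff -u "$1" "$2" >/dev/null; then echo "counts match"; else echo "MISMATCH"; fi
}

# Stand-in files to show the expected shape of the psql -tAc output:
printf 'events|100\nerror_events|5\n' > /tmp/old_counts
printf 'events|100\nerror_events|5\n' > /tmp/new_counts
compare_counts /tmp/old_counts /tmp/new_counts
```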
Verify continuous aggregates exist:
docker exec NEW_CONTAINER psql -U YOUR_USER -d YOUR_DB -c "
SELECT view_name FROM timescaledb_information.continuous_aggregates;"
Verify data is actually on the volume:
docker run --rm -v timescaledb_data_v2:/data alpine du -sh /data
This should show a size proportional to your database -- not 70MB (which would indicate a fresh initdb).
Step 9: Swap to Production
Once verified, swap the new container to the production port:
docker stop OLD_CONTAINER
docker rename OLD_CONTAINER timescaledb-old
docker stop NEW_CONTAINER
docker rm NEW_CONTAINER
docker run -d \
--name timescaledb-1 \
--network YOUR_NETWORK \
--restart unless-stopped \
--shm-size 2g \
-p 5452:5432 \
-v timescaledb_data_v2:/home/postgres/pgdata/data \
-e POSTGRES_USER=YOUR_USER \
-e POSTGRES_PASSWORD='YOUR_PASSWORD' \
-e POSTGRES_DB=YOUR_DB \
-e POSTGRES_HOST_AUTH_METHOD=scram-sha-256 \
-e TIMESCALEDB_TELEMETRY=off \
--health-cmd 'pg_isready -U YOUR_USER -d YOUR_DB' \
--health-interval 10s \
--health-timeout 5s \
--health-start-period 60s \
--health-retries 5 \
timescale/timescaledb-ha:pg17
Step 10: Clean Up
Once everything is confirmed working:
# Remove old container
docker rm timescaledb-old
# Remove old volume (only after you're sure!)
docker volume rm old_timescaledb_data
# Keep the backup for a week
ls -lh /root/timescale_full_backup.sql.gz
Post-Migration: Tune PostgreSQL
The timescaledb-ha image runs timescaledb_tune on first init, but the POSTGRES_* env vars for tuning (like POSTGRES_SHARED_BUFFERS) are likely ignored. Tune via SQL instead:
docker exec timescaledb-1 psql -U YOUR_USER -d YOUR_DB -c "
ALTER SYSTEM SET shared_buffers = '2GB';
ALTER SYSTEM SET max_connections = 200;
ALTER SYSTEM SET effective_cache_size = '3GB';
ALTER SYSTEM SET work_mem = '16MB';
ALTER SYSTEM SET maintenance_work_mem = '256MB';"
docker restart timescaledb-1
Then verify:
docker exec timescaledb-1 psql -U YOUR_USER -d YOUR_DB -c "
SHOW shared_buffers; SHOW max_connections; SHOW work_mem;"
Lessons Learned
- Always verify your volume mount path. The timescaledb-ha image uses /home/postgres/pgdata/data, not /var/lib/postgresql/data. Getting this wrong means initdb runs and overwrites your volume.
- pg_dumpall doesn't dump hypertable data. You must use COPY (SELECT * FROM hypertable) TO STDOUT to export hypertable data -- plain COPY table_name TO STDOUT returns 0 rows because the data lives in internal chunks.
- Continuous aggregates need manual recreation. They're restored as regular views, not as TimescaleDB continuous aggregates. Drop and recreate them with the WITH (timescaledb.continuous) clause.
- Run each continuous aggregate refresh separately. refresh_continuous_aggregate() cannot run inside a transaction block.
- Keep the old container until you've fully verified. Don't docker rm it until you've compared row counts and confirmed the dashboard works.
- POSTGRES_* tuning env vars may be ignored. The timescaledb-ha image runs timescaledb_tune, which writes directly to postgresql.conf. Use ALTER SYSTEM SET for reliable configuration.
- Always use a different port for the new container. Running both old and new simultaneously makes rollback trivial -- just stop the new one and start the old one.