
Backup & Restore

Back up and restore your Tymeslot data. All scheduling data lives in PostgreSQL — a simple pg_dump is all you need.

Luka Breitig — Technical Product Builder & AI Developer

📋 Before You Begin

Requirements

  • Tymeslot running with Docker Compose
  • pg_dump available via the Postgres container (no local install needed)
  • Root or sudo access on the host

What you will back up

  • PostgreSQL database — all your data
  • Uploads volume — user avatar images

Outcome: By the end of this guide, you will have a complete backup procedure and know how to restore from it, including how to test that your backups are valid.

📦 What to Back Up

Two stores hold your Tymeslot data. Each has a different risk profile and therefore a different backup priority:

🗄️ PostgreSQL database — back this up on a schedule

All users, event types, meetings, availability windows, calendar integrations, and settings. This is your critical data. Losing it means losing everything. A daily automated backup is the minimum; a backup before any upgrade is mandatory.

🖼️ Uploads volume — back up weekly

User avatar images stored in the tymeslot_uploads Docker volume. Less critical — users can re-upload if lost. Back up weekly rather than daily to keep storage costs low.

1 Back Up the Database

Run pg_dump inside the Postgres container and redirect the output to a timestamped file on the host:

docker-compose exec -T postgres pg_dump -U tymeslot tymeslot \
  > backup-$(date +%Y%m%d-%H%M%S).sql

The command prints nothing on success — the dump is written straight to the file. Confirm the backup is non-empty: ls -lh backup-*.sql. A typical dump is several megabytes even for a small instance.

If pg_dump fails with "role does not exist"

The -U tymeslot flag must match the database username configured in your .env file. Check the value of POSTGRES_USER and use that instead:

grep POSTGRES_USER .env
# Then rerun with the correct username, e.g.:
docker-compose exec -T postgres pg_dump -U myuser tymeslot > backup.sql

Store Backups Off-Server

Copy backup files to a remote location — S3, Backblaze B2, a Hetzner Storage Box, or any off-site storage. Local-only backups provide zero protection against disk failure, server compromise, or accidental docker-compose down -v. Tools like rclone or restic make this straightforward to automate.
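As a concrete sketch, an off-site sync can be one more cron entry using rclone. The remote name offsite and the bucket path are placeholders — configure your own remote with rclone config first:

```shell
# Hypothetical off-site sync: runs at 04:00, after the 02:00 dump has
# completed. "offsite:tymeslot-backups" is a placeholder remote and path.
0 4 * * * rclone sync /backups offsite:tymeslot-backups
```

rclone sync mirrors the local directory to the remote, so pruned local backups are also removed remotely; use rclone copy instead if you want the remote to retain everything.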

2 Automate Database Backups

Add a cron job to run the backup automatically every day at 02:00. Open the crontab editor with crontab -e and add the following line, replacing the path with the directory where your docker-compose.yml lives:

0 2 * * * cd /path/to/tymeslot && docker-compose exec -T postgres pg_dump -U tymeslot tymeslot > /backups/tymeslot-$(date +\%Y\%m\%d).sql 2>&1

Crontab entries must be a single line — backslash continuations are not supported. The % characters are escaped as \% because cron treats an unescaped % as a newline. The 2>&1 at the end redirects errors into the same file, so a failed dump is still detectable (the file will contain the error message rather than SQL). Create the /backups directory first: mkdir -p /backups.
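To catch silent failures early, a small check can verify that the newest file is non-empty and actually contains SQL. This is a sketch — the check_latest_dump name is an invention here — but the header it looks for is real: a plain-format pg_dump begins with a "-- PostgreSQL database dump" comment.

```shell
#!/bin/sh
# Sketch: verify the newest dump in a directory looks like real SQL.
# A plain-format pg_dump starts with a "-- PostgreSQL database dump"
# comment; a failed cron run leaves an empty file or an error message.
check_latest_dump() {
    dir="$1"
    latest=$(ls -t "$dir"/tymeslot-*.sql 2>/dev/null | head -n 1)
    [ -n "$latest" ] || { echo "no dump found in $dir" >&2; return 1; }
    [ -s "$latest" ] || { echo "$latest is empty" >&2; return 1; }
    head -n 5 "$latest" | grep -q 'PostgreSQL database dump' \
        || { echo "$latest does not look like a pg_dump file" >&2; return 1; }
    echo "OK: $latest"
}

# Usage: check_latest_dump /backups
```

Run it after the nightly cron (or from the cron entry itself) so a bad dump is flagged the same day it happens.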

Prune old backups automatically

Add a second cron entry to delete backups older than 30 days and prevent unbounded disk growth:

0 3 * * * find /backups -name 'tymeslot-*.sql' -mtime +30 -delete

3 Back Up the Uploads Volume

The uploads volume is a Docker-managed volume. Use a temporary Alpine container to mount it and create a compressed archive:

docker run --rm \
  -v tymeslot_uploads:/data \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/uploads-$(date +%Y%m%d).tar.gz -C /data .

The archive is written to the current directory. You should see no output from the command if it succeeds. Verify: ls -lh uploads-*.tar.gz

Volume Name May Differ

Docker Compose prefixes volume names with the project name, which defaults to the directory name. If your compose directory is not named tymeslot, the volume may be named differently. Run docker volume ls | grep uploads to find the exact name.

4 Restore the Database

Follow these steps in order. Stopping the application first is critical — writing to the database during a restore will produce corrupted data.

1. Stop the application

docker-compose stop tymeslot

The Postgres container keeps running — you need it for the restore. Only the application container stops.

2. Restore from the backup file

docker-compose exec -T postgres psql -U tymeslot tymeslot \
  < backup-20260101-020000.sql

You should see a stream of SQL statements being executed. The command will return to the prompt when the restore is complete. Errors during restore are printed inline — look for any ERROR: lines.

If restore shows "database already exists" or constraint violations

The target database has existing data that conflicts with the dump. Drop and recreate it first, then restore:

docker-compose exec postgres psql -U tymeslot -d postgres -c "DROP DATABASE tymeslot;"
docker-compose exec postgres psql -U tymeslot -d postgres -c "CREATE DATABASE tymeslot;"
docker-compose exec -T postgres psql -U tymeslot tymeslot < backup-20260101-020000.sql

The -d postgres flag connects to the built-in postgres maintenance database. Without it, psql defaults to a database named after the user, and you cannot drop the database you are currently connected to.

3. Start the application

docker-compose start tymeslot

Watch the logs: docker-compose logs -f tymeslot. You should see the application start cleanly and report Running 0 migrations (the restored schema is already up to date).

Restore Overwrites All Existing Data

Restoring a backup replaces all data in the database. There is no partial restore — everything is overwritten. Make sure you are restoring to the correct instance and that the application is stopped before beginning.

5 Restore the Uploads Volume

Extract the uploads archive back into the Docker volume using a temporary Alpine container:

docker run --rm \
  -v tymeslot_uploads:/data \
  -v "$(pwd)":/backup \
  alpine tar xzf /backup/uploads-20260101.tar.gz -C /data

No output means success. Verify files were restored by listing the volume contents: docker run --rm -v tymeslot_uploads:/data alpine ls /data

🧪 Test Your Backups

A backup you have never restored is a backup you cannot trust. Automated jobs can fail silently — the backup file exists but contains an error message instead of SQL, or it was created from a locked table and is incomplete. Discovering a broken backup only when you actually need it is the worst possible moment.

Run a test restore into a separate, isolated container at least once a month. It takes under five minutes and gives you confidence that the backup works:

1. Start an isolated Postgres container

docker run -d \
  --name tymeslot-restore-test \
  -e POSTGRES_USER=tymeslot \
  -e POSTGRES_PASSWORD=testpassword \
  -e POSTGRES_DB=tymeslot \
  postgres:16

Wait a few seconds for Postgres to initialise inside the container. You can poll for readiness with docker exec tymeslot-restore-test pg_isready -U tymeslot, which reports accepting connections once the server is up.

2. Restore the backup into the test container

docker exec -i tymeslot-restore-test \
  psql -U tymeslot tymeslot \
  < backup-20260101-020000.sql

3. Spot-check critical data

Query a key table to confirm data is present:

docker exec -it tymeslot-restore-test \
  psql -U tymeslot tymeslot \
  -c "SELECT COUNT(*) AS user_count FROM users;"

You should see a non-zero row count matching your user count. A count of zero means the restore failed or the table is empty in the backup.

4. Clean up the test container

docker rm -f tymeslot-restore-test

Schedule Regular Tests

Add a calendar reminder to test restore once a month. Consider scripting steps 1–4 above into a shell script and running it as part of your maintenance routine. The whole process takes under five minutes and gives you confidence when you need it most.
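The four steps above can be sketched as a single shell function. This is a sketch, not a shipped script: the restore_test name, the fixed 10-second wait, and the DRY_RUN flag are assumptions, while the container name, credentials, and psql commands match the steps above. DRY_RUN=1 prints the docker commands instead of running them, which is useful for checking the script before its first real run.

```shell
#!/bin/sh
# Sketch of the monthly test restore (steps 1-4 above) as one function.
# Set DRY_RUN=1 to print the docker commands instead of executing them.
restore_test() {
    backup="$1"
    if [ "${DRY_RUN:-0}" = "1" ]; then
        docker() { echo "+ docker $*"; }   # shadow docker for a dry run
        sleep() { :; }                     # skip the wait in dry runs
    fi
    docker run -d --name tymeslot-restore-test \
        -e POSTGRES_USER=tymeslot \
        -e POSTGRES_PASSWORD=testpassword \
        -e POSTGRES_DB=tymeslot \
        postgres:16 || return 1
    sleep 10                               # give Postgres time to initialise
    docker exec -i tymeslot-restore-test \
        psql -U tymeslot tymeslot < "$backup" || return 1
    docker exec tymeslot-restore-test \
        psql -U tymeslot tymeslot \
        -c "SELECT COUNT(*) AS user_count FROM users;" || return 1
    docker rm -f tymeslot-restore-test
}

# Usage: restore_test backup-20260101-020000.sql
```

Check the user_count in the output against your real instance, then the final docker rm cleans up the test container.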

Frequently Asked Questions

My pg_dump command produces no output — did it work?

Silence is success. pg_dump writes the SQL dump to stdout, which the shell redirects into your file. There is nothing to print to the terminal. Confirm the backup completed and is non-empty by running:

ls -lh backup-*.sql

A typical dump is several megabytes even for a small instance. A zero-byte or missing file means the redirect failed — check that you have write permission in the current directory.

I restored the database but the application won't start — what happened?

The most common cause is a schema mismatch between the backup and the application version. Check the logs for the specific error:

docker-compose logs -f tymeslot

Look for migration errors or messages referencing missing columns or tables. This typically happens when you restore a backup taken from a newer version of Tymeslot onto an older image — the schema is ahead of what the old code expects. Ensure you are restoring a backup taken from the same or an older version of the application.

Can I restore a backup onto a newer version of Tymeslot?

Yes. Tymeslot runs all pending database migrations automatically on startup, before accepting any traffic. Restoring an older backup onto a newer image is safe — the application will apply the missing migrations on the next start. You will see Running X migrations in the startup logs confirming this happened. The direction to avoid is the reverse: restoring a backup taken from a newer version onto an older image.

How much disk space will my backups use?

SQL dumps are plain text and compress exceptionally well. A typical Tymeslot instance with a few hundred users produces a dump that compresses to under 10 MB with gzip. Running gzip backup-*.sql immediately after each dump is recommended for long-term storage. The daily pruning cron entry handles the .sql files — update the pattern to tymeslot-*.sql.gz if you compress them.
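As a sketch of what that looks like in practice (the 02:30 timing assumes the 02:00 dump has finished by then), compression plus pruning of the compressed files can be two cron entries, with the second replacing the plain-text prune shown earlier:

```shell
# Compress the day's dump at 02:30, then prune compressed dumps older
# than 30 days. The \% escaping is required inside crontab.
30 2 * * * gzip /backups/tymeslot-$(date +\%Y\%m\%d).sql
0 3 * * * find /backups -name 'tymeslot-*.sql.gz' -mtime +30 -delete
```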

The cron job runs but the backup file is empty or contains an error message instead of SQL.

This almost always means the Docker service name in the cron command does not match the name of the running container. The docker-compose exec command will exit non-zero and write the error to the file via the 2>&1 redirect. Run the following to verify the correct service name:

docker-compose ps

Update the cron command to use the exact service name shown in the output. Also confirm the working directory path in the cron entry is correct — cron does not use your shell's current directory.

Verify Your Setup

Confirm each of the following before considering your backup strategy complete:
  • Manual backup completes without errors. Run the pg_dump command manually and confirm the output file is non-empty.
  • Cron job is scheduled. Run crontab -l and confirm the backup entry is present.
  • A test restore succeeds. Follow the test procedure above and confirm the users table contains the expected row count.
  • Backups are stored off-server. At least the most recent backup is copied to a remote location outside the production server.
  • Old backups are pruned automatically. Confirm the cleanup cron entry is in place, or verify that your off-site storage has a retention policy configured.

🔗 Related Articles

Upgrading Tymeslot

Keep your Tymeslot instance up to date. Database migrations run automatically on startup — upgrades are a single command.

Docker Self-Hosting

Deploy Tymeslot using Docker and Docker Compose. Perfect for VPS hosting, home servers, or any environment with Docker support.

Cloudron Deployment

One-click installation on Cloudron. Automated backups, SSL certificates, and updates handled automatically.