Back up and restore your Tymeslot data

All scheduling data lives in PostgreSQL; a simple pg_dump is all you need.
Requirements

- pg_dump available via the Postgres container (no local install needed)
- sudo access on the host

Outcome: By the end of this guide, you will have a complete backup procedure and know how to restore from it, including how to test that your backups are valid.

What you will back up
Two things hold your Tymeslot data. Both have different risk profiles and therefore different backup priorities:
PostgreSQL database — back this up on a schedule
All users, event types, meetings, availability windows, calendar integrations, and settings. This is your critical data. Losing it means losing everything. A daily automated backup is the minimum; a backup taken immediately before any upgrade is mandatory.
Uploads volume — back up weekly
User avatar images stored in the tymeslot_uploads Docker volume. Less critical — users can re-upload if lost. Back up weekly rather than daily to keep storage costs low.
Run pg_dump inside the Postgres container and redirect the output to a timestamped file on the host:
docker-compose exec -T postgres pg_dump -U tymeslot tymeslot \
> backup-$(date +%Y%m%d-%H%M%S).sql
The command prints nothing to the terminal on success; the dump is written straight to the file. Confirm the backup is non-empty: ls -lh backup-*.sql. A typical dump is several megabytes even for a small instance.
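Plain-format pg_dump output begins with a comment header, so you can sanity-check a dump without restoring it. A small sketch (the function name and filenames are illustrative, not part of Tymeslot):

```shell
#!/bin/sh
# check_dump: fail unless FILE is non-empty and starts with the
# standard "PostgreSQL database dump" comment header that pg_dump
# writes at the top of every plain-format dump.
check_dump() {
  file="$1"
  [ -s "$file" ] || { echo "FAIL: $file is empty or missing"; return 1; }
  head -n 3 "$file" | grep -q "PostgreSQL database dump" \
    || { echo "FAIL: $file lacks the pg_dump header"; return 1; }
  echo "OK: $file"
}
```

Run it against your latest dump, e.g. `check_dump backup-20260101-020000.sql`.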
If pg_dump fails with "role does not exist"
The -U tymeslot flag must match the database username configured in your .env file. Check the value of POSTGRES_USER and use that instead:
grep POSTGRES_USER .env
# Then rerun with the correct username, e.g.:
docker-compose exec -T postgres pg_dump -U myuser tymeslot > backup.sql
Copy backups off the host as well. A backup stored only on the same machine does not survive disk failure, and an accidental docker-compose down -v deletes your Docker volumes, database included. Tools like rclone or restic make this straightforward to automate.
Add a cron job to run the backup automatically every day at 02:00. Open the crontab editor with crontab -e and add the following line, replacing the path with the directory where your docker-compose.yml lives:
0 2 * * * cd /path/to/tymeslot && docker-compose exec -T postgres pg_dump -U tymeslot tymeslot > /backups/tymeslot-$(date +\%Y\%m\%d).sql 2>&1
The 2>&1 at the end redirects errors to the same file, so a failed dump is still detectable (the file will contain the error message rather than SQL). Create the /backups directory first: mkdir -p /backups.
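Crontab entries must fit on one line and every % must be escaped, which gets unwieldy. An alternative is to keep the logic in a standalone script and have cron call that. A sketch; the script name and the /path/to/tymeslot and /backups paths are placeholders to adjust:

```shell
#!/bin/sh
# Write the backup logic to its own script so the crontab entry stays
# short and needs no % escaping. The quoted heredoc delimiter ('EOF')
# prevents $(date ...) from expanding now instead of at run time.
cat > ./tymeslot-backup.sh <<'EOF'
#!/bin/sh
set -eu
cd /path/to/tymeslot
mkdir -p /backups
docker-compose exec -T postgres pg_dump -U tymeslot tymeslot \
  > "/backups/tymeslot-$(date +%Y%m%d).sql"
EOF
chmod +x ./tymeslot-backup.sh
```

The cron entry then shrinks to something like `0 2 * * * /path/to/tymeslot-backup.sh >> /backups/cron.log 2>&1`.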
Add a second cron entry to delete backups older than 30 days and prevent unbounded disk growth:
0 3 * * * find /backups -name 'tymeslot-*.sql' -mtime +30 -delete
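Before trusting a -delete rule, you can preview exactly which files it would remove by swapping -delete for -print. A sketch on a scratch directory so nothing real is touched (touch -d needs GNU coreutils):

```shell
#!/bin/sh
set -eu
# Create one stale and one fresh fake backup, then run the same find
# expression as the retention cron job with -print instead of -delete.
mkdir -p scratch-backups
touch -d '40 days ago' scratch-backups/tymeslot-20251120.sql
touch scratch-backups/tymeslot-today.sql
find scratch-backups -name 'tymeslot-*.sql' -mtime +30 -print
```

Only the 40-day-old file is listed; the fresh one is untouched.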
The uploads volume is a Docker-managed volume. Use a temporary Alpine container to mount it and create a compressed archive:
docker run --rm \
-v tymeslot_uploads:/data \
-v $(pwd):/backup \
alpine tar czf /backup/uploads-$(date +%Y%m%d).tar.gz -C /data .
The archive is written to the current directory. The command prints nothing on success. Verify: ls -lh uploads-*.tar.gz
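You can go a step further and list the archive's contents without extracting it; tar exits non-zero on a corrupt file. A sketch on a throwaway archive (in practice, point tar tzf at your real uploads-*.tar.gz):

```shell
#!/bin/sh
set -eu
# Build a tiny sample archive the same way the uploads backup is built,
# then list its entries to confirm it is readable.
mkdir -p sample-data
echo avatar > sample-data/avatar.png
tar czf uploads-sample.tar.gz -C sample-data .
tar tzf uploads-sample.tar.gz   # prints each entry; fails if corrupt
```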
If your Compose project directory is not named tymeslot, the volume may be named differently (Docker Compose prefixes volume names with the project name). Run docker volume ls | grep uploads to find the exact name.
Follow these steps in order. Stopping the application first is critical — writing to the database during a restore will produce corrupted data.
1. Stop the application
docker-compose stop tymeslot
The Postgres container keeps running — you need it for the restore. Only the application container stops.
2. Restore from the backup file
docker-compose exec -T postgres psql -U tymeslot tymeslot \
< backup-20260101-020000.sql
You should see a stream of SQL statements being executed. The command will return to the prompt when the restore is complete. Errors during restore are printed inline — look for any ERROR: lines.
If restore shows "database already exists" or constraint violations
The target database has existing data that conflicts with the dump. Drop and recreate it first, then restore. Connect to the built-in postgres database for the DROP/CREATE, since Postgres refuses to drop the database you are currently connected to:

docker-compose exec postgres psql -U tymeslot -d postgres -c "DROP DATABASE tymeslot;"
docker-compose exec postgres psql -U tymeslot -d postgres -c "CREATE DATABASE tymeslot;"
docker-compose exec -T postgres psql -U tymeslot tymeslot < backup-20260101-020000.sql
3. Start the application
docker-compose start tymeslot
Watch the logs: docker-compose logs -f tymeslot. You should see the application start cleanly and report Running 0 migrations (the restored schema is already up to date).
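The three steps above can be captured in one script, so a restore can never be left half-done with the application still stopped. A sketch that writes the script to ./restore.sh for review before you run it (the filename is an assumption, not part of Tymeslot):

```shell
#!/bin/sh
# Write the stop/restore/start sequence to a reviewable script.
# The quoted 'EOF' delimiter keeps $1 and ${...} unexpanded at write time.
cat > ./restore.sh <<'EOF'
#!/bin/sh
set -eu
BACKUP="${1:?usage: ./restore.sh backup-file.sql}"
docker-compose stop tymeslot
docker-compose exec -T postgres psql -U tymeslot tymeslot < "$BACKUP"
docker-compose start tymeslot
docker-compose logs --tail=20 tymeslot
EOF
chmod +x ./restore.sh
```

Usage would be `./restore.sh backup-20260101-020000.sql`; set -eu aborts the script before restarting the app if the psql step fails.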
Extract the uploads archive back into the Docker volume using a temporary Alpine container:
docker run --rm \
-v tymeslot_uploads:/data \
-v $(pwd):/backup \
alpine tar xzf /backup/uploads-20260101.tar.gz -C /data
No output means success. Verify files were restored by listing the volume contents: docker run --rm -v tymeslot_uploads:/data alpine ls /data
A backup you have never restored is a backup you cannot trust. Automated jobs can fail silently — the backup file exists but contains an error message instead of SQL, or it was created from a locked table and is incomplete. Discovering a broken backup only when you actually need it is the worst possible moment.
Run a test restore into a separate, isolated container at least once a month. It takes under five minutes and gives you confidence that the backup works (pin the postgres image tag below to the same version your docker-compose.yml uses):
docker run -d \
--name tymeslot-restore-test \
-e POSTGRES_USER=tymeslot \
-e POSTGRES_PASSWORD=testpassword \
-e POSTGRES_DB=tymeslot \
postgres:16
Wait a few seconds for Postgres to initialise inside the container.
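A fixed sleep is fragile on a slow machine. A polling loop is more robust; a generic sketch, where the probe for this container would be `docker exec tymeslot-restore-test pg_isready -U tymeslot` (pg_isready ships inside the postgres image):

```shell
#!/bin/sh
# wait_ready: run PROBE once a second until it succeeds or TRIES
# attempts have been used up. PROBE is any shell command string.
wait_ready() {
  probe="$1"
  tries="${2:-30}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    if sh -c "$probe" >/dev/null 2>&1; then
      echo "ready"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "timed out waiting for: $probe" >&2
  return 1
}
```

For the test container: `wait_ready 'docker exec tymeslot-restore-test pg_isready -U tymeslot'`.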
docker exec -i tymeslot-restore-test \
psql -U tymeslot tymeslot \
< backup-20260101-020000.sql
Query a key table to confirm data is present:
docker exec -it tymeslot-restore-test \
psql -U tymeslot tymeslot \
-c "SELECT COUNT(*) AS user_count FROM users;"
You should see a non-zero row count matching your user count. A count of zero means the restore failed or the table is empty in the backup.
When you are finished, remove the test container:

docker rm -f tymeslot-restore-test
Silence is success. pg_dump writes the SQL dump to stdout, which the shell redirects into your file. There is nothing to print to the terminal. Confirm the backup completed and is non-empty by running:
ls -lh backup-*.sql
A typical dump is several megabytes even for a small instance. A zero-byte or missing file means the redirect failed — check that you have write permission in the current directory.
The most common cause is a schema mismatch between the backup and the application version. Check the logs for the specific error:
docker-compose logs -f tymeslot
Look for migration errors or messages referencing missing columns or tables. This typically happens when you restore a backup taken from a newer version of Tymeslot onto an older image — the schema is ahead of what the old code expects. Ensure you are restoring a backup taken from the same or an older version of the application.
Yes. Tymeslot runs all pending database migrations automatically on startup, before accepting any traffic. Restoring an older backup onto a newer image is safe — the application will apply the missing migrations on the next start. You will see Running X migrations in the startup logs confirming this happened. The direction to avoid is the reverse: restoring a backup taken from a newer version onto an older image.
SQL dumps are plain text and compress exceptionally well. A typical Tymeslot instance with a few hundred users produces a dump that compresses to under 10 MB with gzip. Running gzip backup-*.sql immediately after each dump is recommended for long-term storage. The daily pruning cron entry handles the .sql files — update the pattern to tymeslot-*.sql.gz if you compress them.
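A sketch of the compress-and-verify step, demonstrated on a generated sample file (in practice f is your real dump; gzip -t checks archive integrity without writing anything to disk):

```shell
#!/bin/sh
set -eu
# Create a stand-in dump file for demonstration purposes only.
f=backup-sample.sql
printf -- '--\n-- PostgreSQL database dump\n--\n' > "$f"

gzip -9 "$f"        # replaces backup-sample.sql with backup-sample.sql.gz
gzip -t "$f.gz"     # exits non-zero if the archive is corrupt
echo "compressed OK"
```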
This almost always means the Docker service name in the cron command does not match the name of the running container. The docker-compose exec command will exit non-zero and write the error to the file via the 2>&1 redirect. Run the following to verify the correct service name:
docker-compose ps
Update the cron command to use the exact service name shown in the output. Also confirm the working directory path in the cron entry is correct — cron does not use your shell's current directory.
Finally, verify your setup end to end:

- Run the pg_dump command manually and confirm the output file is non-empty.
- Run crontab -l and confirm the backup entry is present.
- Restore into a throwaway container and confirm the users table contains the expected row count.