Dokploy provides built-in automated backup functionality for all supported databases. Backups are stored in S3-compatible destinations and can be scheduled with cron expressions.
Supported Databases
Automated backups are available for:
- PostgreSQL: using `pg_dump`
- MySQL: using `mysqldump`
- MariaDB: using `mariadb-dump`
- MongoDB: using `mongodump`
- Compose services: volume-based backups
Redis typically doesn’t require separate backups - use RDB/AOF persistence instead. See Redis documentation for persistence configuration.
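As a rough illustration of what each engine's backup involves, the dump tools can be invoked along these lines. This is a hedged sketch: the flags, user names, and file names below are placeholders, not Dokploy's exact invocation.

```python
# Illustrative dump pipelines per engine. Every value here is a placeholder;
# Dokploy's real flags and credential handling may differ.
DUMP_COMMANDS = {
    "postgresql": "pg_dump -U {user} {db} | gzip > {out}",
    "mysql": "mysqldump --single-transaction -u root -p {db} | gzip > {out}",
    "mariadb": "mariadb-dump --single-transaction -u root -p {db} | gzip > {out}",
    "mongodb": "mongodump --db {db} --archive={out} --gzip",
}

# Render the PostgreSQL pipeline with example names.
cmd = DUMP_COMMANDS["postgresql"].format(user="app", db="mydb", out="mydb.sql.gz")
print(cmd)  # pg_dump -U app mydb | gzip > mydb.sql.gz
```

All of the SQL engines pipe through gzip, which matches the compressed `.sql.gz` files you will find in the destination bucket.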
Backup Destinations
Before creating backups, configure a backup destination.
S3-Compatible Storage
Dokploy uses Rclone to support various S3-compatible storage providers:
- Amazon S3
- Backblaze B2
- Cloudflare R2
- DigitalOcean Spaces
- MinIO
- Wasabi
- Any S3-compatible service
Add Destination
Click Add Destination and provide:
- Name: Identifier for this destination
- Provider: S3-compatible provider type
- Bucket: Bucket name
- Region: Bucket region
- Access Key ID: S3 access key
- Secret Access Key: S3 secret key
- Endpoint (optional): Custom endpoint URL
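For intuition, these fields correspond roughly to an Rclone S3 remote like the one below. This is a hedged sketch only: the remote name and every value are placeholders, and Dokploy manages this configuration for you.

```ini
[dokploy-backups]
type = s3
provider = AWS
access_key_id = YOUR_ACCESS_KEY_ID
secret_access_key = YOUR_SECRET_ACCESS_KEY
region = us-east-1
# endpoint is only needed for non-AWS providers, e.g. MinIO or Cloudflare R2
endpoint = https://minio.example.com
```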
Creating a Backup
Create Backup Configuration
Click Add Backup and configure the Basic Settings, Schedule, and Retention options.
Use crontab.guru to help construct cron expressions.
Backup Configuration
Schedule Syntax
Cron expression format: `minute hour day month weekday`
Common schedules:
| Schedule | Cron Expression | Use Case |
|---|---|---|
| Every hour | 0 * * * * | High-frequency changes |
| Every 6 hours | 0 */6 * * * | Active databases |
| Daily at 2 AM | 0 2 * * * | Standard production |
| Daily at midnight | 0 0 * * * | End-of-day backup |
| Weekly (Sunday 2 AM) | 0 2 * * 0 | Lower-priority databases |
| Monthly (1st at 2 AM) | 0 2 1 * * | Archives |
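To sanity-check an expression against a concrete time, a minimal matcher for the subset used in the schedules above (`*`, `*/n`, and plain numbers) can be sketched like this. It is an illustration, not Dokploy's scheduler.

```python
from datetime import datetime

def field_matches(field: str, value: int) -> bool:
    """Match one cron field: '*', '*/n', or a plain number."""
    if field == "*":
        return True
    if field.startswith("*/"):
        return value % int(field[2:]) == 0
    return value == int(field)

def cron_matches(expr: str, dt: datetime) -> bool:
    """True if dt satisfies a 5-field cron expression (subset only)."""
    minute, hour, day, month, weekday = expr.split()
    # cron counts Sunday as 0; Python's weekday() counts Monday as 0
    cron_wd = (dt.weekday() + 1) % 7
    return all(field_matches(f, v) for f, v in [
        (minute, dt.minute), (hour, dt.hour), (day, dt.day),
        (month, dt.month), (weekday, cron_wd),
    ])

print(cron_matches("0 2 * * *", datetime(2024, 5, 6, 2, 0)))  # True: daily at 2 AM
print(cron_matches("0 2 * * 0", datetime(2024, 5, 6, 2, 0)))  # False: that date is a Monday
```

Ranges (`1-5`) and lists (`1,15`) are valid cron but deliberately left out of this sketch.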
Backup Retention
The Keep Latest setting determines how many backups to retain:
- Older backups are automatically deleted
- Deletion happens after successful new backup
- Helps manage storage costs
- Recommended: 7-30 backups depending on change frequency
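The retention behaviour can be pictured as a simple prune over timestamped object keys. The key names below are invented for illustration; Dokploy's actual naming scheme may differ.

```python
def prune(keys: list[str], keep_latest: int) -> tuple[list[str], list[str]]:
    """Split backup keys into (kept, deleted) under a Keep Latest policy."""
    ordered = sorted(keys, reverse=True)  # timestamped names sort newest-first
    return ordered[:keep_latest], ordered[keep_latest:]

keys = [
    "mydb-2024-01-01T02:00.sql.gz",
    "mydb-2024-01-02T02:00.sql.gz",
    "mydb-2024-01-03T02:00.sql.gz",
    "mydb-2024-01-04T02:00.sql.gz",
]
kept, deleted = prune(keys, keep_latest=2)
print(kept)     # the two newest backups
print(deleted)  # the two oldest, removed after the next successful backup
```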
Backup Path Structure
Backups are stored in S3 under a structured path per database.
Database-Specific Backup Details
PostgreSQL Backups
PostgreSQL backups use `pg_dump` with gzip compression and include:
- Complete database schema
- All table data
- Indexes
- Constraints
- Views
- Functions and triggers
MySQL/MariaDB Backups
MySQL and MariaDB use `mysqldump` or `mariadb-dump` and include:
- Database schema
- Table data
- Indexes
- Triggers
- Views
- Stored procedures
Notes:
- The root password is required for backups
- Large tables may impact performance
- Consider `--single-transaction` for InnoDB
MongoDB Backups
MongoDB backups use `mongodump` with compression and include:
- All collections
- Indexes
- Collection metadata
- Database configuration
Notes:
- Requires admin user credentials
- Large datasets may take time to dump
- Locks collections briefly during the dump
Compose Backups
Compose service backups create volume archives containing:
- All files in the specified volumes
- File permissions and ownership
- Directory structure
Manual Backups
Run a backup manually outside the schedule:
- Navigate to the Backups tab
- Find your backup configuration
- Click Run Now
- Wait for backup to complete
- Check destination for new backup file
Manual backups are useful:
- Before major database changes
- Before version upgrades
- Before data migrations
- To test backup configuration
Restoring Backups
Restore a database from a backup:
Select Backup
Choose the backup configuration and browse available backup files:
- Select the Destination
- Browse available backups
- Choose the backup file to restore
Configure Restore
Specify the restore options.
Database Name: The target database for the restore.
Credentials:
- For PostgreSQL: User password
- For MySQL/MariaDB: Root password
- For MongoDB: Admin password
Start Restore
Click Restore to begin:
- Database will be stopped
- Backup file downloaded from S3
- Data restored to database
- Database restarted
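Mechanically, a restore reverses the dump pipeline: fetch the archive from S3, decompress it, and feed the statements to the database client. A toy sketch of the decompress step, where the file and its contents are stand-ins for a real downloaded backup:

```python
import gzip

# Stand-in for a backup file downloaded from S3.
with open("mydb.sql.gz", "wb") as f:
    f.write(gzip.compress(b"SELECT 1;"))

# Decompress as the restore step would, before piping to psql/mysql.
with open("mydb.sql.gz", "rb") as f:
    sql = gzip.decompress(f.read()).decode()
print(sql)  # SELECT 1;
```

In practice the decompressed stream goes straight to `psql`, `mysql`, or `mongorestore` against the stopped-and-restarted database, as in the steps above.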
Restore Process Details
PostgreSQL Restore
MySQL/MariaDB Restore
MongoDB Restore
Monitoring Backups
Backup Status
Check backup status in the dashboard:
- Last Run: Timestamp of the last backup
- Status: Success or error
- File Size: Size of last backup
- Next Run: Scheduled next backup time
Backup Logs
View detailed backup logs:
- Navigate to the Backups tab
- Click on a backup configuration
- View Backup History
- Check logs for any errors
Backup Best Practices
Frequency
- Production databases: Daily or more frequent
- Development databases: Weekly or on-demand
- Critical data: Every 6 hours or hourly
- Archives: Monthly or quarterly
Retention
- Hot backups: 7-14 days for quick recovery
- Warm backups: 30-90 days for compliance
- Cold backups: Long-term archives (manual)
Storage
- Use different regions: Store backups in different region than database
- Lifecycle policies: Use S3 lifecycle rules for long-term storage
- Monitor costs: Track storage usage and optimize retention
- Test restores: Regularly test restore procedures
Security
- Encrypt backups: Use S3 server-side encryption
- Secure credentials: Rotate S3 access keys regularly
- Limit access: Use IAM policies to restrict backup access
- Audit logs: Monitor backup and restore operations
Troubleshooting
Backup not running
If backups aren’t executing:
- Check if enabled: Verify backup configuration is enabled
- Verify cron schedule: Ensure cron expression is valid
- Check database status: Database must be running
- Review logs: Check backup logs for errors
- Test destination: Verify S3 destination is accessible
Backup fails with authentication error
Database credential issues:
- Verify database password: Check password is correct
- Check user permissions: User needs backup privileges
- PostgreSQL: User must own database or have pg_dump rights
- MySQL/MariaDB: Need PROCESS and LOCK TABLES privileges
- MongoDB: User must have backup role
Cannot connect to S3
S3 destination connectivity issues:
- Verify credentials: Check access key and secret key
- Check bucket: Ensure bucket exists and is accessible
- Test permissions: Verify write permissions on bucket
- Check endpoint: Confirm endpoint URL if using custom provider
- Network access: Ensure firewall allows S3 connections
Backup files too large
Large backup file issues:
- Compression: Backups are gzipped but may still be large
- Check database size: Use `du` to check volume size
- Increase timeout: Adjust the backup timeout if needed
- Consider incremental: Not currently supported - contact support
- Storage limits: Ensure S3 bucket has sufficient space
Restore fails
Restore operation issues:
- Check credentials: Verify restore user has proper rights
- Database exists: Ensure target database exists
- Stop applications: Stop apps using database during restore
- Sufficient space: Verify disk space for restored data
- Compatible version: Backup and restore versions should match
Old backups not deleted
Retention policy not working:
- Check retention setting: Verify “Keep Latest” is configured
- Wait for next backup: Deletion happens after successful backup
- Manual cleanup: Delete old backups manually from S3 if needed
- Check logs: Review backup logs for deletion errors
- S3 permissions: Ensure delete permission on bucket
Disaster Recovery
Prepare for disaster recovery scenarios.
Recovery Plan
- Document procedures: Write step-by-step restore instructions
- Test regularly: Practice restores monthly
- Multiple destinations: Use multiple S3 buckets/regions
- Monitor backups: Set up alerts for failed backups
- Automate testing: Script restore tests to temporary databases
RTO and RPO
Recovery Time Objective (RTO):
- How quickly can you restore?
- Depends on backup size and network speed
- Test to establish a realistic RTO

Recovery Point Objective (RPO):
- How much data loss is acceptable?
- Determined by backup frequency
- More frequent backups = lower RPO
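A quick worst-case RPO calculation: with a fixed schedule, the most data you can lose is one backup interval. The times below are invented for illustration, assuming a `0 */6 * * *` (every 6 hours) schedule.

```python
from datetime import datetime, timedelta

backup_interval = timedelta(hours=6)          # from the cron schedule
last_backup = datetime(2024, 1, 1, 12, 0)     # hypothetical last successful run
failure = datetime(2024, 1, 1, 17, 45)        # hypothetical incident time

data_loss = failure - last_backup
print(data_loss)                       # 5:45:00 of writes lost
print(data_loss <= backup_interval)    # True: loss is bounded by the interval
```

Halving the interval halves the worst-case loss, which is the trade-off behind "more frequent backups = lower RPO".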
Example Recovery Scenario
Advanced Backup Strategies
Multi-Region Backups
For critical data, maintain backups in multiple regions:
- Create multiple backup destinations (different regions)
- Configure separate backup jobs for each destination
- Use same schedule for consistency
- Monitor all backup jobs
Backup Verification
Automatically verify backup integrity:
- Periodically restore to a temporary database
- Run integrity checks
- Verify row counts
- Test application queries
- Automate with scripts
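Even before a full test restore, one cheap automated check is decompressing the archive end-to-end, since gzip corruption or truncation raises an error. A sketch, where both files are generated as stand-ins for real backups:

```python
import gzip
import zlib

def archive_ok(path: str) -> bool:
    """Return True if the gzip stream decompresses cleanly end-to-end."""
    try:
        with gzip.open(path, "rb") as f:
            while f.read(1 << 20):  # stream through the whole file
                pass
        return True
    except (OSError, EOFError, zlib.error):  # bad header, truncation, bad data
        return False

data = gzip.compress(b"SELECT 1;")
with open("backup.sql.gz", "wb") as f:          # intact stand-in backup
    f.write(data)
with open("truncated.sql.gz", "wb") as f:       # simulated partial upload
    f.write(data[: len(data) // 2])

print(archive_ok("backup.sql.gz"))     # True
print(archive_ok("truncated.sql.gz"))  # False
```

This catches transfer corruption, but only a real restore into a temporary database proves the backup is logically usable.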
Backup Compression
All backups are automatically gzipped for efficiency.
Next Steps
PostgreSQL
Back to PostgreSQL documentation
Database Overview
View all database options