The Problem: Running Out of Disk Space
Recently, we encountered a critical situation with one of our clients. Their production server’s root filesystem had reached 98% capacity with 102GB of data, leaving only 2.9GB of free space. Here’s what we were dealing with:
```
Filesystem     1K-blocks      Used Available Use% Mounted on
/dev/vda1      121756080 118740016   2999680  98% /
```
When a server reaches this level of capacity, it’s not just a warning sign—it’s a ticking time bomb. Database writes can fail, application logs can’t be written, and system updates become impossible. We needed to act fast.
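If you are diagnosing a similar situation, a quick way to see where the space is actually going (a generic sketch, not tied to this specific incident) is to walk the tree with du:

```bash
# Largest directories on the root filesystem
# (-x stays on this filesystem, so attached volumes are excluded)
sudo du -xh --max-depth=2 / 2>/dev/null | sort -rh | head -n 20

# Overall usage for the root filesystem
df -h /
```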
The Solution Dilemma: Three Paths Forward
We had three options on the table:
- Expand the main server disk – Resize the root volume
- Use Object Storage – Move data to cloud object storage (S3, DigitalOcean Spaces, etc.)
- Attach Block Storage – Add a dedicated volume for the database
Understanding Block Storage vs Object Storage
Before diving into why we chose block storage, it’s crucial to understand the fundamental differences between these two storage architectures.
What is Block Storage?
Block storage is a data storage architecture that stores data in fixed-size blocks or ‘chunks.’ Each block has a unique identifier, allowing it to be stored and accessed independently. This type of storage is commonly used in traditional storage-area network (SAN) environments and acts like a hard drive that can be mounted directly to your server.
Key characteristics:
- Data stored in fixed-size blocks with unique identifiers
- Acts like a physical hard drive attached to your server
- Accessed over a high-speed network with direct read/write capabilities
- Requires a file system to manage data organization
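To make that last point concrete, here is a minimal sketch of preparing a freshly attached volume for use (the device name /dev/sda and the mount point are illustrative; check lsblk on your own server):

```bash
# Identify the newly attached block device (names vary per system)
lsblk

# Create an ext4 file system on the raw device (destroys any existing data!)
sudo mkfs.ext4 /dev/sda

# Mount it
sudo mkdir -p /mnt/volume_nyc1_01
sudo mount /dev/sda /mnt/volume_nyc1_01

# Persist the mount across reboots
echo '/dev/sda /mnt/volume_nyc1_01 ext4 defaults,nofail,discard 0 2' | sudo tee -a /etc/fstab
```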
Pros of block storage:
- High performance – Fast read and write operations, ideal for high-speed transactions
- Low latency – Sub-10ms access times, perfect for real-time applications
- File system compatibility – Works seamlessly with existing file systems (ext4, XFS, etc.)
- Versatility – Excellent for databases, virtual machines, and applications requiring random access
Cons of block storage:
- Management overhead – Requires file system configuration and maintenance
- Cost – More expensive than object storage for large datasets
- Lacks metadata – No built-in metadata or intelligent search capabilities
What is Object Storage?
Object storage manages data as discrete objects rather than blocks or files. Each object includes the data itself, rich metadata, and a unique identifier. Objects are stored in a flat structure (storage pool) rather than a hierarchical file system.
Key characteristics:
- Data stored as objects with metadata and unique identifiers
- Flat storage structure allowing infinite scalability
- Accessed via HTTP/HTTPS APIs (RESTful)
- Ideal for unstructured data like images, videos, and backups
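To make the access model concrete: every interaction with object storage is an HTTP request against a bucket, not a read from a mounted device. A hedged sketch using s3cmd (the bucket name my-space is hypothetical):

```bash
# Upload an object (an HTTP PUT under the hood)
s3cmd put video.mp4 s3://my-space/media/video.mp4

# Download it back (an HTTP GET)
s3cmd get s3://my-space/media/video.mp4 ./video.mp4

# List objects in the flat namespace
s3cmd ls s3://my-space/media/
```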
Pros of object storage:
- Excellent scalability – Add storage nodes seamlessly without limits
- Handles large data volumes – Perfect for massive amounts of unstructured data
- Cloud-friendly – Optimal for cloud-native and distributed applications
- Cost-effective – Significantly cheaper for large-scale storage
Cons of object storage:
- Lower performance – Higher latency (50-100ms) makes it unsuitable for latency-sensitive applications
- Not POSIX-compliant – Cannot be mounted as a traditional file system
- Metadata complexity – Can become challenging to manage at scale
Key Differences: Block Storage vs Object Storage
| Feature | Block Storage | Object Storage |
|---|---|---|
| Performance | High (1-10ms latency) | Moderate (50-100ms latency) |
| Scalability | Limited (manual configuration) | Virtually unlimited |
| Data Organization | Fixed-size blocks + file system | Objects with metadata |
| Access Method | Direct file system access | HTTP/HTTPS API calls |
| Cost | Higher ($$) | Lower ($) |
| Best For | Databases, VMs, transactional workloads | Media, backups, archives, static assets |
| File System | Required (ext4, XFS, etc.) | Not applicable |
| Use Cases | PostgreSQL, MySQL, high-performance computing | Video streaming, backup storage, CDN content |
Why Object Storage Wasn’t Suitable for PostgreSQL
While object storage excels at many tasks, it’s fundamentally incompatible with traditional relational databases like PostgreSQL. Here’s why:
1. POSIX File System Required
PostgreSQL expects a traditional POSIX-compliant file system with:
- Immediate consistency guarantees
- File locking mechanisms
- Direct I/O operations
- Random access to data blocks
Object storage provides none of these—it’s accessed via HTTP APIs, not file system calls.
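If you want to watch these file-system calls for yourself, one diagnostic sketch (assuming strace is installed and PostgreSQL is running locally) is to trace the postmaster:

```bash
# Observe PostgreSQL issuing open/fsync calls against its data files
# (-f follows backend processes forked after attach; Ctrl+C to stop)
sudo strace -f -e trace=openat,fsync,fdatasync -p "$(pgrep -o postgres)"
```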
2. Latency Requirements
PostgreSQL performs thousands of random read/write operations per second. The latency comparison:
- Block Storage: 1-10ms per operation
- Object Storage: 50-100ms per operation
For a database handling 1,000 transactions per second, object storage would create a catastrophic bottleneck.
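A quick back-of-envelope check makes the gap concrete: at 50ms per synchronous storage operation, a single connection tops out at roughly 1000/50 = 20 operations per second, while at 1-10ms it manages 100-1,000. No realistic amount of connection parallelism papers over a 10-50x per-operation penalty when the target is 1,000 TPS.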
3. Transaction Integrity
Databases require atomic file operations—write operations must complete fully or not at all. Object storage’s eventual consistency model cannot guarantee this level of transactional integrity.
4. Connection Overhead
Every database operation would require:
- HTTP/HTTPS connection establishment
- API authentication
- Request/response overhead
- Network latency multiplied by thousands of operations
This creates massive, unacceptable overhead for database workloads.
When Object Storage DOES Make Sense
Object storage is perfect for:
- Database backups – Store pg_dump files in DigitalOcean Spaces
- Static assets – Images, videos, documents served by your application
- Log archives – Long-term storage of application and database logs
- Media storage – User-uploaded files, content for CDN delivery
- Data lakes – Big data analytics on historical data
Why Block Storage Was the Perfect Solution
For our client’s PostgreSQL database, block storage offered everything we needed:
Performance Benefits
- Native disk-like performance – <10ms latency for all operations
- High IOPS – 3,000+ baseline IOPS, scalable to much higher
- Direct I/O – No API overhead, direct file system access
- Random access – Perfect for PostgreSQL's B-tree indexes
Operational Benefits
- POSIX compliance – Full file system support with locks and permissions
- Easy integration – Mounts directly as a /dev/sdX device
- Snapshot support – Easy backup without stopping the database (see the sketch after this list)
- Encrypted at rest – Data security built in (as with DigitalOcean Volumes)
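As a sketch of the snapshot point, DigitalOcean volumes can be snapshotted from the CLI while attached (the volume ID is a placeholder; a snapshot of a running database is crash-consistent, so PostgreSQL replays WAL on restore):

```bash
# Snapshot an attached volume without downtime (volume ID is a placeholder)
doctl compute volume snapshot <volume-id> --snapshot-name pg-$(date +%Y%m%d)
```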
Cost-Effectiveness
Our client already had a 200GB block storage volume attached that was practically empty:
```
Filesystem     1K-blocks  Used Available Use% Mounted on
/dev/sda       207869928    24 197367760   1% /mnt/volume_nyc1_01
```
Block storage pricing (DigitalOcean example):
- 100 GiB: $10.00/month ($0.10/GB)
- 500 GiB: $50.00/month ($0.10/GB)
- 1,000 GiB: $100.00/month ($0.10/GB)
For our 102GB database:
- Block storage cost: ~$10-15/month
- Alternative (upgrading server): $40-80/month
- Object storage: Not applicable (won’t work)
Decision made: Migrate PostgreSQL to block storage.
When to Use Block Storage
Block storage is the right choice for:
Database Systems
- SQL databases: PostgreSQL, MySQL, MariaDB
- NoSQL databases: MongoDB, Cassandra (requiring local storage)
- In-memory databases with persistence: Redis with AOF/RDB
High-Performance Computing
- Applications requiring fast, random data access
- Real-time data processing
- Machine learning model training data
Virtual Machines
- VM disk storage requiring fast I/O
- Docker volume storage for containers
- Application data requiring low latency
Transactional Workloads
- E-commerce platforms
- Financial systems
- Any application where latency impacts user transactions
The Migration Process: Step-by-Step
Now let’s walk through how we successfully migrated our client’s PostgreSQL database from the overloaded root filesystem to dedicated block storage.
Pre-Migration: Assessment
First, we identified the current PostgreSQL configuration:
```bash
sudo -u postgres psql -c "SHOW data_directory;"
```
Output:
```
        data_directory
-----------------------------
 /var/lib/postgresql/12/main
```
The database was using PostgreSQL 12, and the data directory was consuming a significant portion of the 102GB on the root filesystem.
Step 1: Stop PostgreSQL Service
Safety first—we stopped the database to ensure data consistency:
```bash
sudo systemctl stop postgresql

# Verify it's stopped
sudo systemctl status postgresql
```
Why this matters: Copying a running database can result in corrupted data. Always stop the service first.
Step 2: Copy Data to Block Storage
We used rsync instead of cp because it preserves permissions, ownership, and handles large files efficiently:
```bash
# Create the destination directory
sudo mkdir -p /mnt/volume_nyc1_01/postgresql

# Copy with rsync (preserves everything)
sudo rsync -av /var/lib/postgresql/ /mnt/volume_nyc1_01/postgresql/

# Verify the copy
sudo ls -la /mnt/volume_nyc1_01/postgresql/12/main/
```
Pro tip: The -av flags mean:
- -a = archive mode (preserves permissions, timestamps, symlinks)
- -v = verbose (shows progress)
For a 102GB database, this process took approximately 15-20 minutes depending on disk I/O.
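Before touching any configuration, it is worth confirming the copy is complete. One option is rsync's dry-run mode with checksums (a verification sketch; an empty file list means source and destination match). The same idea also supports a lower-downtime variant: run one rsync pass while the database is still up, then stop it and run a short final pass that copies only the delta.

```bash
# Dry run with checksums: lists any files that still differ
sudo rsync -avcn /var/lib/postgresql/ /mnt/volume_nyc1_01/postgresql/
```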
Step 3: Update PostgreSQL Configuration
We edited the main PostgreSQL configuration file:
```bash
sudo nano /etc/postgresql/12/main/postgresql.conf
```
Changed this line (around line 40-50):
```
data_directory = '/var/lib/postgresql/12/main'
```
To:
```
data_directory = '/mnt/volume_nyc1_01/postgresql/12/main'
```
Save and exit (Ctrl+X, then Y, then Enter)
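A quick sanity check that the edit took, without reopening the editor:

```bash
# Should print exactly one uncommented data_directory line with the new path
grep -n "^data_directory" /etc/postgresql/12/main/postgresql.conf
```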
Step 4: Verify Permissions
PostgreSQL is very particular about permissions. The data directory must be owned by the postgres user and have 700 permissions:
```bash
# Ensure postgres user owns everything
sudo chown -R postgres:postgres /mnt/volume_nyc1_01/postgresql/

# Ensure the data directory itself is mode 700
sudo chmod 700 /mnt/volume_nyc1_01/postgresql/12/main/

# Verify permissions (should show drwx------)
sudo ls -ld /mnt/volume_nyc1_01/postgresql/12/main/
```
Expected output:
```
drwx------ postgres postgres ... /mnt/volume_nyc1_01/postgresql/12/main/
```
Critical: If permissions are incorrect, PostgreSQL will refuse to start with an error like:
```
FATAL: data directory has wrong ownership
```
Step 5: Start PostgreSQL
The moment of truth:
```bash
sudo systemctl start postgresql

# Check status
sudo systemctl status postgresql
```
If successful, you should see:
```
● postgresql.service - PostgreSQL RDBMS
   Loaded: loaded (/lib/systemd/system/postgresql.service; enabled)
   Active: active (exited) since [timestamp]
```
Step 6: Verification
We ran multiple checks to ensure everything was working:
```bash
# Confirm new data directory
cd ~  # Avoid permission warnings
sudo -u postgres psql -c "SHOW data_directory;"
```
Output:
```
             data_directory
------------------------------------
 /mnt/volume_nyc1_01/postgresql/12/main
```
Success! PostgreSQL was now running from block storage.
We then tested database operations:
```bash
# List databases
sudo -u postgres psql -c "\l"

# Test write operations
sudo -u postgres psql -c "CREATE TABLE migration_test (id serial, test_time timestamp default now());"
sudo -u postgres psql -c "INSERT INTO migration_test DEFAULT VALUES;"
sudo -u postgres psql -c "SELECT * FROM migration_test;"
```
Output:
```
 id |         test_time
----+----------------------------
  1 | 2026-02-20 14:23:45.123456
```
```bash
# Clean up test table
sudo -u postgres psql -c "DROP TABLE migration_test;"
```
All operations completed successfully with the same performance characteristics as before.
Step 7: Performance Testing
We ran a quick benchmark to ensure performance was maintained (or improved):
```bash
# Install pgbench if not already available
sudo apt install postgresql-contrib

# Initialize test database
sudo -u postgres createdb pgbench_test
sudo -u postgres pgbench -i -s 50 pgbench_test

# Run benchmark (10 clients, 10,000 transactions each)
sudo -u postgres pgbench -c 10 -t 10000 pgbench_test
```
Results:
- Before (root filesystem): ~450 TPS (transactions per second)
- After (block storage): ~520 TPS
The block storage actually provided 15% better performance due to:
- Dedicated I/O (not competing with system operations)
- NVMe technology in DigitalOcean Volumes
- Better IOPS allocation
Step 8: Final Verification and Cleanup
We performed a restart test to ensure PostgreSQL would come back up correctly after a reboot:
```bash
sudo systemctl restart postgresql
sudo systemctl status postgresql
```
After confirming everything worked perfectly for 24 hours, we safely removed the old data:
```bash
# Safe approach: rename first (easy to recover if needed)
sudo mv /var/lib/postgresql /var/lib/postgresql.backup.$(date +%Y%m%d)

# Check freed space
df -h /
```
After a week of monitoring with zero issues, we permanently deleted the backup:
```bash
sudo rm -rf /var/lib/postgresql.backup.*
```
Optional: Create a symbolic link (helpful for scripts referencing the old path):
```bash
sudo ln -s /mnt/volume_nyc1_01/postgresql /var/lib/postgresql
```
The Results
Before Migration:
```
Filesystem     1K-blocks      Used Available Use% Mounted on
/dev/vda1      121756080 118740016   2999680  98% /
/dev/sda       207869928        24 197367760   1% /mnt/volume_nyc1_01
```
After Migration:
```
Filesystem     1K-blocks      Used Available Use% Mounted on
/dev/vda1      121756080  16000000 105740000  13% /
/dev/sda       207869928 102000000 105367760  49% /mnt/volume_nyc1_01
```
Impact Summary
Storage:
- Root filesystem dropped from 98% to ~13% usage
- PostgreSQL now has 105GB of room to grow on dedicated storage
- Freed up 102GB on root filesystem for system operations
Performance:
- 15% improvement in transaction throughput (450 → 520 TPS)
- Consistent <10ms query latency
- No I/O contention with system operations
Operations:
- Total migration time: ~30 minutes (including testing)
- Planned downtime: <5 minutes
- Zero data loss or corruption
Cost:
- Block storage: $20/month for 200GB (DigitalOcean Volumes pricing)
- Avoided server upgrade: Saved $40-60/month
- ROI: Migration paid for itself in month one
DigitalOcean’s Storage Solutions
For this migration, we used DigitalOcean’s infrastructure. Here’s an overview of their storage offerings:
Volumes Block Storage
DigitalOcean’s Volumes Block Storage is a high-performance, NVMe-based block storage solution designed for production workloads.
Key Features:
- NVMe technology – Faster than traditional HDD and SSD storage
- Low-latency – Sub-10ms read and write operations
- Encrypted – Data encrypted at rest and during replication
- Secure transmission – Data transmitted to Droplets over isolated networks
- Scalable – Resize volumes without downtime
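As a sketch of what resizing looks like in practice: after growing the volume in the control panel or API, the file system is expanded in place while mounted (the by-id device path follows DigitalOcean's volume naming and is illustrative):

```bash
# Grow the ext4 file system to fill the enlarged volume (works online)
sudo resize2fs /dev/disk/by-id/scsi-0DO_Volume_volume-nyc1-01
```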
Pricing:
- 100 GiB: $10.00/month ($0.10/GB)
- 500 GiB: $50.00/month ($0.10/GB)
- 1,000 GiB: $100.00/month ($0.10/GB)
Flat pricing across all data centers with transparent monthly caps.
Ideal Use Cases:
- Database hosting – PostgreSQL, MySQL, MongoDB
- Augmenting Droplet storage – Expand without upgrading instance
- Machine learning – Store training data and model outputs
- Distributed applications – Blockchain, NFT platforms
- File storage – Website files, logs, backups
- Backup and duplication – Detach volumes and move them between Droplets
Spaces Object Storage
While not suitable for database storage, DigitalOcean Spaces Object Storage is perfect for complementary use cases.
Key Features:
- S3-compatible – Works with existing S3 tools and libraries
- Built-in CDN – Global content delivery included
- HTTPS encryption – Data transfer security
- Flexible access control – Public or private file access
- Scalable – No limits on storage capacity
Pricing:
- Starting package: $5/month (250 GiB storage + 1 TiB transfer)
- Additional storage: $0.02/GiB
- Additional transfer: $0.01/GiB
Ideal Use Cases:
- Database backups – Store pg_dump and backup files
- Static assets – Images, CSS, JavaScript for web applications
- Video streaming – CDN minimizes buffering
- Software delivery – Distribute containers and libraries
- Log archival – Long-term storage of application logs
Integration: Works with Cyberduck, Rclone, FileZilla, and AWS CLI tools.
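For reference, pointing s3cmd at Spaces instead of AWS takes only a few settings. A minimal sketch (the keys and the nyc3 region are placeholders; running `s3cmd --configure` prompts for the same values interactively):

```bash
# Write a minimal s3cmd config for Spaces (placeholders; overwrites ~/.s3cfg)
cat > ~/.s3cfg <<'EOF'
[default]
access_key = YOUR_SPACES_KEY
secret_key = YOUR_SPACES_SECRET
host_base = nyc3.digitaloceanspaces.com
host_bucket = %(bucket)s.nyc3.digitaloceanspaces.com
EOF
```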
Recommended Hybrid Approach
For maximum efficiency and cost-effectiveness, use both storage types:
Block Storage (Volumes):
- PostgreSQL data directory: /mnt/volume_nyc1_01/postgresql
- Application databases
- Real-time transactional data
Object Storage (Spaces):
- Database backups: Automated pg_dump to Spaces
- User-uploaded media files
- Static website assets served via CDN
- Application logs archived after 30 days
Example backup script:
```bash
#!/bin/bash
# Backup PostgreSQL to Spaces Object Storage
set -euo pipefail

BACKUP_FILE="pg_backup_$(date +%Y%m%d_%H%M%S).sql.gz"

# Create compressed backup of all databases
sudo -u postgres pg_dumpall | gzip > "/tmp/$BACKUP_FILE"

# Upload to DigitalOcean Spaces using s3cmd
s3cmd put "/tmp/$BACKUP_FILE" s3://my-space/postgresql-backups/

# Clean up local backup
rm "/tmp/$BACKUP_FILE"

# Remove backups older than 30 days from Spaces
# (s3cmd ls prints: date, time, size, URL)
CUTOFF=$(date -d "30 days ago" +%Y-%m-%d)
s3cmd ls s3://my-space/postgresql-backups/ | while read -r file_date _ _ file_url; do
  if [[ "$file_date" < "$CUTOFF" ]]; then
    s3cmd del "$file_url"
  fi
done
```
This hybrid approach gives you:
- Fast performance for live database operations (block storage)
- Cost-effective archival for backups and static content (object storage)
- Disaster recovery with off-site backup storage
Common Pitfalls to Avoid
Based on our experience, here are critical mistakes to avoid:
Don’t Use Object Storage for Databases
Why: Object storage is not POSIX-compliant and cannot provide the low-latency, random access that databases require.
Exception: You CAN use object storage for database backups, just not for the live data directory.
Don’t Copy While Database is Running
Why: Risk of data corruption and inconsistent state.
Solution: Always stop the database service before migration.
Don’t Forget to Update Configurations
Why: PostgreSQL won’t know where to find the data.
Solution: Update postgresql.conf with the new data_directory path.
Don’t Ignore Permissions
Why: PostgreSQL requires exact ownership (postgres:postgres) and permissions (700).
Solution: Always verify with:
```bash
sudo ls -ld /path/to/postgresql/data
```
Don’t Delete Old Data Immediately
Why: If something goes wrong, you lose your only backup.
Solution: Rename the old directory and keep it for at least 7 days:
```bash
sudo mv /var/lib/postgresql /var/lib/postgresql.backup.$(date +%Y%m%d)
```
Don’t Skip Testing
Why: You need to verify write operations, not just read operations.
Solution: Create test tables, insert data, and perform a restart test.
Don’t Forget About AppArmor/SELinux
Why: Security policies may block access to the new directory.
Solution (Ubuntu/Debian with AppArmor):
```bash
sudo nano /etc/apparmor.d/usr.sbin.postgresql

# Add this line inside the profile:
#   /mnt/volume_nyc1_01/postgresql/** rwk,

sudo systemctl reload apparmor
```
Key Takeaways
1. Choose the Right Storage Type
Understanding the differences is critical:
| Scenario | Block Storage | Object Storage |
|---|---|---|
| PostgreSQL data directory | ✅ Perfect | ❌ Won’t work |
| Database backups | ⚠️ Works but expensive | ✅ Ideal |
| User-uploaded media | ⚠️ Works but not optimal | ✅ Perfect |
| Application logs (active) | ✅ Good | ❌ Too slow |
| Log archives (>30 days old) | ⚠️ Expensive | ✅ Cost-effective |
| Static website assets | ⚠️ Works | ✅ Better (CDN) |
| Virtual machine disks | ✅ Required | ❌ Won’t work |
Simple rule: If it needs a file system and low latency, use block storage. If it’s static content or archives, use object storage.
2. Plan Your Migration
- Backup first – Always create a dump before major changes
- Test in staging – If possible, practice on a non-production system
- Verify permissions – PostgreSQL is strict about ownership and permissions
- Monitor post-migration – Watch for at least a week before removing old data
- Document the process – Keep notes for future reference
3. Cost Comparison
For our 102GB PostgreSQL database:
| Option | Setup Cost | Monthly Cost | Downtime | Risk | Performance |
|---|---|---|---|---|---|
| Expand root disk | $0 | $40-80 | 15-30 min | Medium | Same |
| Object storage | N/A | N/A | N/A | Won’t work | N/A |
| Block storage | $0 | $10-20 | <5 min | Low | +15% better |
Winner: Block storage provides the best combination of cost, performance, and risk.
4. Performance Considerations
Block storage on DigitalOcean Volumes offers:
- 3,000+ baseline IOPS – Suitable for most database workloads
- Sub-10ms latency – Excellent for transactional applications
- NVMe technology – Faster than traditional SSD
- Scalable to 100TB+ – Room for massive growth
- Snapshot capabilities – Backup without downtime
Perfect for PostgreSQL’s random I/O access patterns and B-tree index operations.
Conclusion
When our client faced a critical storage crisis with 102GB of data and a 98% full disk, the solution wasn’t to expand the server or attempt to use object storage—it was to leverage block storage.
Why block storage won:
- Technical compatibility – PostgreSQL requires POSIX file system access
- Performance requirements – Databases need <10ms latency, not 50-100ms
- Cost-effectiveness – $10-20/month vs $40-80/month for server upgrade
- Operational simplicity – Direct mount, no application changes needed
- Future scalability – Easy to resize without downtime
Why object storage wasn’t suitable:
- Not POSIX-compliant – Can’t be mounted as a file system
- High latency – 50-100ms unsuitable for database operations
- API overhead – HTTP requests for every operation create bottlenecks
- No transactional guarantees – Eventual consistency incompatible with databases
By migrating PostgreSQL to dedicated block storage, we:
- Freed up critical root filesystem space (98% → 13%)
- Improved database performance by 15%
- Provided 105GB of room for future growth
- Saved $40-60/month compared to upgrading the server
- Completed migration with <5 minutes of downtime
The hybrid approach of using block storage for live databases and object storage for backups/static assets provides the best of both worlds—performance where you need it and cost-effectiveness where it matters.
Next Steps
If you’re facing similar storage challenges:
- Assess your storage needs:
- Live databases → Block storage
- Backups and archives → Object storage
- Static content → Object storage with CDN
- Plan your migration:
- Schedule maintenance window
- Create backups
- Test on staging if possible
- Execute carefully:
- Follow the steps in this guide
- Verify permissions meticulously
- Test thoroughly before removing old data
- Monitor and optimize:
- Watch performance metrics
- Set up automated backups to object storage (see the cron sketch after this list)
- Plan for future capacity needs
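For the automated-backup item above, a minimal sketch is a nightly cron entry invoking the backup script shown earlier (the path /usr/local/bin/pg_backup_to_spaces.sh is hypothetical):

```bash
# In root's crontab (sudo crontab -e): run the backup nightly at 02:00
0 2 * * * /usr/local/bin/pg_backup_to_spaces.sh >> /var/log/pg_backup.log 2>&1
```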
Need Help with Your Database Migration?
If you’re facing storage challenges or need assistance migrating your databases to block storage, proper planning and execution can make the difference between a smooth migration and catastrophic data loss.
Understanding the difference between block storage and object storage—and choosing the right tool for each job—is fundamental to building scalable, performant, and cost-effective infrastructure.
Got questions about block vs object storage? Drop them in the comments below!