Disk Space Management
ClickHouse can store massive amounts of data efficiently thanks to its columnar storage and compression. UptimeDock provides detailed disk space monitoring at the server, database, and table levels, helping you manage storage and plan for growth.
Overview
Disk space monitoring in UptimeDock gives you visibility into:
- Overall disk usage and capacity
- Per-database storage breakdown
- Per-table disk usage with compression statistics
- Parts count for merge optimization insights
Running out of disk space can cause data loss and server instability. Set up alerts well before reaching capacity limits (recommended: alert at 75-80% usage).
Disk Dashboard
The Disk tab provides a quick overview of your ClickHouse server's storage status. Data is refreshed periodically (shown as "X minutes ago received").
Disk Metrics Explained
| Metric | Description | What to Watch |
|---|---|---|
| Total Capacity | Total disk space available to ClickHouse | Baseline for usage calculations |
| Used | Amount of disk space currently in use | Primary growth indicator |
| Free | Available disk space remaining | Low values indicate urgent action needed |
| Usage % | Percentage of total capacity in use | Primary alert metric; watch for values above 75% |
| Databases | Number of databases on this server | Useful for multi-tenant environments |
| Tables | Total number of tables across all databases | High counts may indicate schema sprawl |
| Rows | Total row count across all tables | Data volume indicator (displayed with M/B suffixes for millions/billions) |
Usage percentage is color-coded: green for healthy, yellow/orange for warning, and red for critical levels.
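To cross-check these dashboard figures directly on the server, equivalent numbers can be pulled from the system tables. The sketch below is an approximation, not necessarily the exact queries UptimeDock runs; in particular, which databases count toward the totals is an assumption here:
-- Capacity, used, free, and usage % per disk
SELECT
    name,
    formatReadableSize(total_space) AS total_capacity,
    formatReadableSize(total_space - free_space) AS used,
    formatReadableSize(free_space) AS free,
    round(100 * (total_space - free_space) / total_space, 1) AS usage_pct
FROM system.disks;
-- Database, table, and row counts (total_rows is NULL for tables that don't report it)
SELECT
    uniqExact(database) AS databases,
    count() AS tables,
    formatReadableQuantity(sum(ifNull(total_rows, 0))) AS rows
FROM system.tables
WHERE database NOT IN ('INFORMATION_SCHEMA', 'information_schema');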
Databases View
Switch to the Databases tab to see storage breakdown by database. This helps identify which databases consume the most space.
Understanding Database Columns
| Column | Description |
|---|---|
| Database | Name of the database. Click the eye icon for detailed view. |
| Total Rows | Total number of rows across all tables in this database. |
| Disk Size | Actual disk space used by this database (compressed). |
| Compression | Compression ratio achieved (e.g., 6.57x means data is stored at 1/6.57 of original size). |
| Tables | Number of tables in this database. |
| Parts | Total data parts across all tables. High counts may indicate merge issues. |
Use the search box to filter databases by name. Click Show Settings for additional filtering options.
The system database contains ClickHouse's internal tables like query_log, part_log, and text_log. These can grow large and should be monitored separately.
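If you want to reproduce this breakdown outside of UptimeDock, a query over system.parts along these lines gives comparable numbers (a sketch; UptimeDock's exact query may differ):
-- Per-database rows, disk size, compression ratio, tables, and parts (active parts only)
SELECT
    database,
    formatReadableQuantity(sum(rows)) AS total_rows,
    formatReadableSize(sum(bytes_on_disk)) AS disk_size,
    round(sum(data_uncompressed_bytes) / sum(data_compressed_bytes), 2) AS compression,
    uniqExact(table) AS tables,
    count() AS parts
FROM system.parts
WHERE active
GROUP BY database
ORDER BY sum(bytes_on_disk) DESC;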
Tables View
The Tables tab shows storage for individual tables, sorted by size. This is essential for identifying which tables consume the most space.
Understanding Table Columns
| Column | Description |
|---|---|
| Table | Full table name in database.table format. |
| Total Rows | Number of rows in this table. |
| Disk Size | Compressed size on disk. |
| Parts | Number of data parts. Many parts can slow down queries. |
| % of Total | This table's percentage of total disk usage. |
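A comparable per-table view can be pulled from system.parts as well; this is a sketch (the LIMIT is arbitrary), useful when you want the same ranking from the command line:
-- Largest tables by compressed size, with parts count and share of total usage
SELECT
    concat(database, '.', table) AS table_name,
    formatReadableQuantity(sum(rows)) AS total_rows,
    formatReadableSize(sum(bytes_on_disk)) AS disk_size,
    count() AS parts,
    round(100 * sum(bytes_on_disk) / (SELECT sum(bytes_on_disk) FROM system.parts WHERE active), 2) AS pct_of_total
FROM system.parts
WHERE active
GROUP BY database, table
ORDER BY sum(bytes_on_disk) DESC
LIMIT 20;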
Common large tables in a typical ClickHouse installation include:
- system.text_log – Server text logs
- system.query_log – Query execution history
- system.part_log – Part merge/mutation history
- system.processors_profile_log – Query processor profiling
System log tables can consume significant space over time. Consider setting TTL (Time To Live) policies to automatically clean old data. Note that a TTL added with ALTER may not survive if ClickHouse recreates a system log table (for example after an upgrade); recent versions also let you configure a per-table TTL in the server configuration:
-- Set TTL to keep only 30 days of query logs
ALTER TABLE system.query_log
MODIFY TTL event_date + INTERVAL 30 DAY;
-- Check current TTL settings (the TTL clause, if any, is shown in engine_full)
SELECT database, name, engine, partition_key, engine_full
FROM system.tables
WHERE database = 'system' AND name LIKE '%log%';
Charts View
The Charts tab provides visual representations of disk usage trends over time. Use this to:
- Identify growth patterns
- Predict when you'll need more storage
- Correlate disk usage with specific events
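If you need the underlying trend data yourself, one option is the built-in asynchronous metric log. The sketch below assumes system.asynchronous_metric_log is enabled and that per-disk DiskUsed_* metrics are collected (both defaults in recent ClickHouse versions):
-- Daily peak disk usage per disk, from the asynchronous metric log
SELECT
    event_date,
    metric,
    formatReadableSize(max(value)) AS peak_used
FROM system.asynchronous_metric_log
WHERE metric LIKE 'DiskUsed%'
GROUP BY event_date, metric
ORDER BY event_date, metric;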
Setting Up Disk Alerts
Click the Alerts button to configure disk space notifications. You can set alerts at different levels:
| Alert Level | Threshold | Action |
|---|---|---|
| Warning | 75% | Start planning capacity expansion or cleanup |
| Critical | 90% | Immediate action required to prevent data loss |
The All Alerts button (with badge count) shows all active alerts across Disk and Memory metrics in one place.
Capacity Planning
Use disk monitoring data to plan for future storage needs:
- Track growth rate – Monitor how much data is added daily/weekly
- Estimate runway – Calculate how long until you reach capacity
- Plan ahead – Scale storage before hitting 80% usage
-- Calculate approximate daily data growth (last 30 days)
-- (system.parts has no event_date column; part modification times are used as a rough proxy)
SELECT
    toDate(modification_time) AS day,
    formatReadableSize(sum(bytes_on_disk)) AS daily_size
FROM system.parts
WHERE active AND modification_time >= now() - INTERVAL 30 DAY
GROUP BY day
ORDER BY day;
-- Estimate days until disk full
-- (rough estimate: assumes the space used so far was written over the last 30 days)
SELECT
    formatReadableSize(free_space) AS free,
    formatReadableSize(total_space) AS total,
    round(free_space / (total_space - free_space) * 30, 0) AS days_at_current_rate
FROM system.disks
WHERE name = 'default';
Troubleshooting
Common disk space issues and solutions:
| Issue | Cause | Solution |
|---|---|---|
| Rapid disk growth | High data ingestion or system logs | Set TTL policies; check for unexpected data sources |
| High parts count | Many small inserts; slow merges | Batch inserts; check merge settings; run OPTIMIZE TABLE |
| Low compression ratio | Random data; poor column ordering | Reorder columns; use appropriate codecs |
| system.* tables too large | No TTL; high logging verbosity | Set TTL on system tables; adjust log level |
| Disk nearly full | Unexpected growth or missed alerts | Drop old partitions; move data to cold storage; expand disk |
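For the low-compression-ratio case above, per-column statistics in system.columns can point at the worst offenders; a minimal sketch (the threshold and LIMIT are arbitrary):
-- Largest columns by compressed size, with their compression ratios
SELECT
    database,
    table,
    name AS column_name,
    formatReadableSize(data_compressed_bytes) AS compressed,
    round(data_uncompressed_bytes / data_compressed_bytes, 2) AS ratio
FROM system.columns
WHERE data_compressed_bytes > 0
ORDER BY data_compressed_bytes DESC
LIMIT 20;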
-- Find tables with too many parts (potential merge issues)
SELECT
    database,
    table,
    count() AS parts_count,
    formatReadableSize(sum(bytes_on_disk)) AS size
FROM system.parts
WHERE active = 1
GROUP BY database, table
HAVING parts_count > 100
ORDER BY parts_count DESC;
-- Optimize a specific table (force merge)
OPTIMIZE TABLE database.table_name FINAL;
Best Practices
- Set TTL policies on all log tables to automatically clean old data
- Monitor parts count – keep it under 100 per table for optimal performance
- Use partitioning to easily drop old data (a partition-drop sketch follows this list)
- Review compression ratios – low ratios may indicate optimization opportunities
- Set alerts at 75% usage to allow time for remediation
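A minimal illustration of partition-based cleanup (the table name and partition value are hypothetical; the partition expression must match your table's partitioning key):
-- Hypothetical example: drop a monthly partition once its data is no longer needed
ALTER TABLE my_database.events DROP PARTITION 202401;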
Recommended TTL settings for system tables:
-- Keep query_log for 14 days
ALTER TABLE system.query_log MODIFY TTL event_date + INTERVAL 14 DAY;
-- Keep text_log for 7 days
ALTER TABLE system.text_log MODIFY TTL event_date + INTERVAL 7 DAY;
-- Keep part_log for 30 days
ALTER TABLE system.part_log MODIFY TTL event_date + INTERVAL 30 DAY;
For memory monitoring, see Memory & Resources. For query optimization, see Query Performance Monitoring.