On a headless Raspberry Pi, the simplest way is just via SSH and a couple of built-in Linux commands—no GUI needed.
Quick & easiest (recommended)
Run:
df -h
- Shows all mounted drives
- -h = human-readable (GB/MB instead of raw blocks)
You’ll get something like:
Filesystem      Size  Used  Avail  Use%  Mounted on
/dev/root        29G   12G    15G   45%  /
👉 Focus on / (root filesystem) — that’s your main SD card usage.
Even simpler (just the main disk)
df -h /
Check folder sizes (what’s filling it up)
If space is low:
du -h --max-depth=1 /
Or for your home directory:
du -h --max-depth=1 ~
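To spot the biggest offenders at a glance, the du output can be piped through sort (a quick sketch; 2>/dev/null just hides permission errors on system directories):

```shell
# Top-level directories under /, largest first; -h sizes sort correctly with sort -hr
du -h --max-depth=1 / 2>/dev/null | sort -hr | head -n 10
```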
Nice quick summary (cleaner view)
lsblk
Shows disks and partitions in a tidy tree.
Optional: install a better tool (worth it)
If you want something more visual in terminal:
sudo apt install ncdu
ncdu /
- Interactive
- Lets you drill down into what’s using space
One-line command that logs disk usage over time (useful for long SDR recordings or interferometry runs)
Given you’re doing SDR/interferometry, logging disk usage over time is a really good idea—those data files creep up fast.
Simple one-line logger (append to a file)
Run this:
while true; do date "+%Y-%m-%d %H:%M:%S" >> ~/disk_usage.log; df -h / >> ~/disk_usage.log; echo "----------------------" >> ~/disk_usage.log; sleep 600; done
What it does
- Logs timestamp + disk usage every 10 minutes
- Writes to ~/disk_usage.log
- Keeps appending (so you get a history)
Run it in background (so it survives logout)
Use:
nohup bash -c 'while true; do date "+%Y-%m-%d %H:%M:%S"; df -h /; echo "----------------------"; sleep 600; done' >> ~/disk_usage.log 2>&1 &
Check it later
tail -f ~/disk_usage.log
Better version (CSV format for plotting)
If you want to graph it later:
while true; do echo "$(date "+%Y-%m-%d %H:%M:%S"),$(df / | awk 'NR==2 {print $3","$4","$5}')" >> ~/disk_usage.csv; sleep 600; done
This logs:
timestamp,used_kb,available_kb,use%
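Once a few rows have accumulated, the peak usage can be pulled straight out of the log with awk (a sketch assuming the four-column format above and the ~/disk_usage.csv path):

```shell
# Print the highest use% seen in the log (field 4, e.g. "45%")
awk -F, '{gsub(/%/, "", $4); if ($4 + 0 > max) max = $4} END {print max "%"}' ~/disk_usage.csv
```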
Bonus: auto-start on boot (recommended for your setup)
Edit crontab:
crontab -e
Add:
@reboot nohup bash -c 'while true; do date "+%Y-%m-%d %H:%M:%S"; df -h /; echo "-----"; sleep 600; done' >> /home/pi/disk_usage.log 2>&1 &
Practical tip for your use case
With SDR data (especially continuous recordings), watch for:
- Sudden jumps in usage → runaway recording
- Gradual fill → normal accumulation
- Plateaus → processing/cleanup working
If you want, you can also:
- Add an auto-warning when disk usage goes above 90%
- Add a script that safely auto-deletes the oldest data files (useful for radio astronomy runs)
That’s exactly the kind of safeguard you want for unattended SDR runs.
Below is a simple, robust setup that:
- Warns you when disk usage is high
- Optionally deletes the oldest files in a data directory
🔔 1. Disk warning script (lightweight + reliable)
Create a script:
nano ~/disk_monitor.sh
Paste this:
#!/bin/bash
THRESHOLD=90
EMAIL=""   # leave blank unless you configure mail

USAGE=$(df / | awk 'NR==2 {print $5}' | sed 's/%//')

if [ "$USAGE" -ge "$THRESHOLD" ]; then
    MESSAGE="WARNING: Disk usage is at ${USAGE}% on $(hostname) at $(date)"
    echo "$MESSAGE" >> ~/disk_alert.log

    # Optional: terminal broadcast
    wall "$MESSAGE"

    # Optional: email (requires setup)
    if [ -n "$EMAIL" ]; then
        echo "$MESSAGE" | mail -s "Disk Alert" "$EMAIL"
    fi
fi
Make it executable:
chmod +x ~/disk_monitor.sh
⏱️ Run it automatically every 5 minutes
crontab -e
Add:
*/5 * * * * /home/pi/disk_monitor.sh
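Before trusting the cron job, it is worth confirming that the parsing the script relies on behaves on your Pi: this one-liner is the same USAGE pipeline from the script and should print a bare number between 0 and 100.

```shell
# Same parsing as the script's USAGE line: row 2 of df, column 5, % stripped
df / | awk 'NR==2 {print $5}' | sed 's/%//'
```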
🧹 2. Auto-cleanup (delete oldest SDR files)
⚠️ Only use this if your data is safe to prune.
Create:
nano ~/disk_cleanup.sh
Example (customise path!):
#!/bin/bash
DATA_DIR="/home/pi/InterferometryData"
THRESHOLD=90

USAGE=$(df / | awk 'NR==2 {print $5}' | sed 's/%//')

if [ "$USAGE" -ge "$THRESHOLD" ]; then
    echo "Cleaning started at $(date)" >> ~/disk_cleanup.log

    # Delete oldest files first (sorted by modification time)
    find "$DATA_DIR" -type f -printf '%T@ %p\n' | sort -n | while read -r line; do
        FILE=$(echo "$line" | cut -d' ' -f2-)
        rm -f "$FILE"
        echo "Deleted $FILE" >> ~/disk_cleanup.log

        # Re-check disk usage after each deletion
        USAGE=$(df / | awk 'NR==2 {print $5}' | sed 's/%//')
        if [ "$USAGE" -lt "$THRESHOLD" ]; then
            break
        fi
    done
fi
Make executable:
chmod +x ~/disk_cleanup.sh
Add to cron:
*/10 * * * * /home/pi/disk_cleanup.sh
🧠 Practical tweaks for your SDR setup
Given your interferometry pipeline:
- Set threshold lower (e.g. 85%) if writing continuously
- Consider keeping:
- Raw data (short retention)
- Processed results (long retention)
🧪 Safer alternative (recommended first)
Instead of deleting immediately, try:
find "$DATA_DIR" -type f -mtime +3 -delete
👉 Only deletes files older than 3 days
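Before committing to -delete, a dry run with -print shows exactly what would go (same example path as the script above; swap -print for -delete only once the list looks right):

```shell
# Dry run: list files older than 3 days without deleting anything
find /home/pi/InterferometryData -type f -mtime +3 -print
```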
🚀 If you want to go further
Help me set up auto-plotting of disk usage from my CSV log:
Nice—this turns your Pi into a self-monitoring system 📈
We’ll set up a simple Python script + cron job that automatically reads your CSV log and generates a plot image you can view anytime (even over SSH or a web server).
🧰 1. Install required package
sudo apt update
sudo apt install python3-matplotlib
🧾 2. Create the plotting script
nano ~/plot_disk_usage.py
Paste this:
import matplotlib.pyplot as plt
from datetime import datetime

log_file = "/home/pi/disk_usage.csv"
output_file = "/home/pi/disk_usage.png"

timestamps = []
used = []
available = []
percent = []

with open(log_file, "r") as f:
    for line in f:
        try:
            t, u, a, p = line.strip().split(",")
            timestamps.append(datetime.strptime(t, "%Y-%m-%d %H:%M:%S"))
            used.append(int(u))
            available.append(int(a))
            percent.append(int(p.replace("%", "")))
        except ValueError:
            continue  # skip malformed lines

# Plot percentage usage (simplest + most useful)
plt.figure()
plt.plot(timestamps, percent)
plt.xlabel("Time")
plt.ylabel("Disk Usage (%)")
plt.title("Disk Usage Over Time")
plt.xticks(rotation=45)
plt.tight_layout()
plt.savefig(output_file)
▶️ 3. Test it
python3 ~/plot_disk_usage.py
You should get:
~/disk_usage.png
Confirm it exists:
ls -lh ~/disk_usage.png
⏱️ 4. Automate (update every 10 minutes)
crontab -e
Add:
*/10 * * * * /usr/bin/python3 /home/pi/plot_disk_usage.py
🌐 5. Optional: view remotely (very useful)
Quick way (from your main computer):
scp pi@your_pi_ip:/home/pi/disk_usage.png .
Or install a tiny web server:
cd ~
python3 -m http.server 8000
Then open in browser:
http://your_pi_ip:8000/disk_usage.png
🧠 Improvements (worth doing next)
Show only recent data (less cluttered)
Replace:
plt.plot(timestamps, percent)
with:
plt.plot(timestamps[-200:], percent[-200:])
👉 Plots only the latest 200 samples (about 33 hours at 10-minute logging)
Add warning line (e.g. 90%)
Add:
plt.axhline(y=90)
Plot used space in GB instead
Replace:
used.append(int(u))
with:
used.append(int(u)/1024/1024)
And label:
plt.ylabel("Used Space (GB)")
🧪 Practical tip for your setup
For SDR/interferometry runs, this plot lets you instantly see:
- 📈 steady climb → normal recording
- ⚠️ sharp spikes → runaway process
- 🧹 drops → cleanup working
Help me set up cleanup rules based on file type (.c64 vs processed):
That’s a smart refinement—treating raw .c64 differently from processed data is exactly how you avoid losing valuable results while still controlling disk growth.
Let’s set up rule-based cleanup that:
- 🧹 Deletes old raw .c64 files aggressively
- 💾 Keeps processed outputs much longer
- ⚖️ Only triggers when disk usage is high
🧠 Strategy (recommended for SDR work)
Typical policy:
- .c64 (raw SDR data) → short lifespan (e.g. 1–3 days)
- Processed files (FFT, plots, CSV, etc.) → longer (e.g. 14–30 days)
🧾 1. Create smart cleanup script
nano ~/smart_cleanup.sh
Paste this:
#!/bin/bash
DATA_DIR="/home/pi/InterferometryData"
THRESHOLD=85

USAGE=$(df / | awk 'NR==2 {print $5}' | sed 's/%//')

if [ "$USAGE" -ge "$THRESHOLD" ]; then
    echo "Cleanup triggered at $(date), usage=${USAGE}%" >> ~/cleanup.log

    # 🧹 1. Delete old RAW data (.c64) - aggressive
    find "$DATA_DIR" -type f -name "*.c64" -mtime +2 -print -delete >> ~/cleanup.log

    # 🧹 2. Delete older intermediate files
    find "$DATA_DIR" -type f \( -name "*.dat" -o -name "*.bin" \) -mtime +5 -print -delete >> ~/cleanup.log

    # 💾 3. Keep processed results longer
    find "$DATA_DIR" -type f \( -name "*.png" -o -name "*.csv" -o -name "*.fits" \) -mtime +20 -print -delete >> ~/cleanup.log

    # 🔁 4. If still full → delete oldest .c64 regardless of age
    USAGE=$(df / | awk 'NR==2 {print $5}' | sed 's/%//')
    if [ "$USAGE" -ge "$THRESHOLD" ]; then
        echo "Still above threshold, removing oldest .c64 files" >> ~/cleanup.log
        find "$DATA_DIR" -type f -name "*.c64" -printf '%T@ %p\n' | sort -n | while read -r line; do
            FILE=$(echo "$line" | cut -d' ' -f2-)
            rm -f "$FILE"
            echo "Deleted $FILE" >> ~/cleanup.log
            USAGE=$(df / | awk 'NR==2 {print $5}' | sed 's/%//')
            [ "$USAGE" -lt "$THRESHOLD" ] && break
        done
    fi
fi
🔧 2. Make executable
chmod +x ~/smart_cleanup.sh
⏱️ 3. Automate it
crontab -e
Add:
*/10 * * * * /home/pi/smart_cleanup.sh
🔍 4. Test safely (IMPORTANT)
Before letting it delete anything, test with:
bash -x ~/smart_cleanup.sh
Or simulate:
find /home/pi/InterferometryData -name "*.c64" -mtime +2
🧠 Advanced tuning (very useful for you)
🎯 Protect “important” processed files
If you tag key outputs:
*_KEEP.png
Then exclude them:
! -name "*KEEP*"
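For example, the processed-file rule from the script above with the exclusion added (a sketch; run it with -print alone first to check the list):

```shell
DATA_DIR="/home/pi/InterferometryData"
# Processed-file cleanup, but anything tagged KEEP in its name survives
find "$DATA_DIR" -type f \( -name "*.png" -o -name "*.csv" \) ! -name "*KEEP*" -mtime +20 -print -delete
```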
📊 Separate directories (best practice)
If possible:
InterferometryData/
├── raw/
├── processed/
Then rules become cleaner:
find "$DATA_DIR/raw" -type f -name "*.c64" -mtime +2 -delete
find "$DATA_DIR/processed" -type f -mtime +30 -delete
🚨 Add safety buffer
Instead of reacting at 85%, aim lower:
THRESHOLD=80
👉 Gives your SDR recording headroom
🧪 What this gives you
- Raw data never silently fills the disk
- Processed science results are preserved longer
- System self-recovers during long unattended runs
Help me set up rolling storage that keeps last N GB instead of time-based:
This is the best approach for SDR systems—much more reliable than time-based deletion because it directly controls disk usage.
We’ll build a rolling storage system that:
- Keeps total data under N GB
- Deletes oldest files first
- Can prioritise deleting .c64 before processed data
🧠 Concept
Instead of:
“delete files older than X days”
We do:
“keep total size ≤ N GB”
🧾 1. Create rolling storage script
nano ~/rolling_storage.sh
Paste this:
#!/bin/bash
DATA_DIR="/home/pi/InterferometryData"
MAX_SIZE_GB=50   # 🔧 SET THIS (e.g. keep last 50 GB)
MAX_SIZE_BYTES=$((MAX_SIZE_GB * 1024 * 1024 * 1024))

get_size() {
    du -sb "$DATA_DIR" | awk '{print $1}'
}

CURRENT_SIZE=$(get_size)
echo "[$(date)] Current size: $CURRENT_SIZE bytes" >> ~/rolling.log

# 🧹 Step 1: Remove oldest RAW files first (.c64)
if [ "$CURRENT_SIZE" -gt "$MAX_SIZE_BYTES" ]; then
    echo "Reducing size via .c64 cleanup" >> ~/rolling.log
    find "$DATA_DIR" -type f -name "*.c64" -printf '%T@ %p\n' | sort -n | while read -r line; do
        FILE=$(echo "$line" | cut -d' ' -f2-)
        rm -f "$FILE"
        echo "Deleted RAW $FILE" >> ~/rolling.log
        [ "$(get_size)" -le "$MAX_SIZE_BYTES" ] && break
    done
    # The piped while loop runs in a subshell, so re-read the size here
    CURRENT_SIZE=$(get_size)
fi

# 🧹 Step 2: If still too big → remove oldest processed files
if [ "$CURRENT_SIZE" -gt "$MAX_SIZE_BYTES" ]; then
    echo "Reducing size via processed cleanup" >> ~/rolling.log
    find "$DATA_DIR" -type f ! -name "*.c64" -printf '%T@ %p\n' | sort -n | while read -r line; do
        FILE=$(echo "$line" | cut -d' ' -f2-)
        rm -f "$FILE"
        echo "Deleted PROC $FILE" >> ~/rolling.log
        [ "$(get_size)" -le "$MAX_SIZE_BYTES" ] && break
    done
fi

echo "Final size: $(get_size) bytes" >> ~/rolling.log
echo "-------------------------" >> ~/rolling.log
🔧 2. Make executable
chmod +x ~/rolling_storage.sh
▶️ 3. Test it
bash ~/rolling_storage.sh
Check log:
tail -f ~/rolling.log
⏱️ 4. Automate (recommended every 5–10 mins)
crontab -e
Add:
*/10 * * * * /home/pi/rolling_storage.sh
⚙️ Key tuning for your setup
🎯 Choose a safe limit
If your SD card is 64GB:
MAX_SIZE_GB=50
👉 Leaves headroom for OS + temp files
⚡ Faster performance (important)
If the dataset grows large, du -sb can get slow. Replace:
du -sb
with:
du -s --block-size=1M
👉 Much faster, slightly less precise, but note the result is then in megabytes, so the limit you compare against must be converted to MB as well
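Units matter here: the script compares bytes, so switching du to megabyte blocks means the limit must be expressed in MB too. A minimal sketch of the adjusted check, reusing the same DATA_DIR and 50 GB limit as the script:

```shell
DATA_DIR="/home/pi/InterferometryData"
MAX_SIZE_MB=$((50 * 1024))   # 50 GB expressed in MB to match the new du units

# du -s --block-size=1M reports whole megabytes, much faster than -sb on big trees
CURRENT_MB=$(du -s --block-size=1M "$DATA_DIR" | awk '{print $1}')

if [ "$CURRENT_MB" -gt "$MAX_SIZE_MB" ]; then
    echo "Over the ${MAX_SIZE_MB} MB limit, cleanup needed"
fi
```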
🧠 Advanced (highly recommended)
🧬 Protect “important” files
Add this before deletion:
! -name "*KEEP*"
📁 Separate raw vs processed (ideal)
DATA_DIR/
├── raw/
├── processed/
Then modify:
find "$DATA_DIR/raw" ...
find "$DATA_DIR/processed" ...
👉 Much cleaner + safer
🚨 Emergency hard stop (optional)
Prevent corruption during recording:
if [ "$CURRENT_SIZE" -gt $((MAX_SIZE_BYTES + 5*1024*1024*1024)) ]; then
    pkill -f your_sdr_process
fi
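An alternative guard checks free space on the root filesystem with df instead of measuring the data directory with du, which is cheaper and also catches anything else filling the disk (your_sdr_process is the same placeholder name as above):

```shell
# Stop recording if under 1 GB free on / (df -m reports whole megabytes)
FREE_MB=$(df --output=avail -m / | tail -1 | tr -d ' ')
if [ "$FREE_MB" -lt 1024 ]; then
    pkill -f your_sdr_process   # placeholder: match your actual recorder
fi
```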
🧪 What this gives you
- Disk usage never exceeds your limit
- Raw SDR data is sacrificed first (correct priority)
- Fully autonomous operation for long runs