How can I set up an observatory level system for saving VLF images on my MiniPC and displaying them through a searchable database on my website?

Setting up MySQL on your own computer is generally straightforward, especially if you’re comfortable installing typical desktop software. The difficulty mainly depends on your operating system and whether you want a simple local setup or something more customised.


🖥 Windows

Difficulty: Easy
Time: ~10–20 minutes

  1. Download MySQL Installer from the official MySQL website.
  2. Run the installer.
  3. Choose Developer Default (installs server + Workbench).
  4. Set a root password.
  5. Accept defaults unless you have specific needs.

It installs:

  • MySQL Server
  • MySQL Workbench (GUI client)
  • Command-line tools

Once installed, you can connect immediately via Workbench.


🍏 macOS

Difficulty: Easy
You can:

  • Use the official .dmg installer (simplest), or
  • Install via Homebrew (brew install mysql) if you prefer package management.

The installer includes a System Preferences/System Settings pane for starting and stopping the server.


🐧 Linux

Difficulty: Easy–Moderate

On Debian/Ubuntu:

sudo apt update
sudo apt install mysql-server

Then secure it:

sudo mysql_secure_installation

On most modern distributions it’s very quick.


What Makes It Slightly Harder?

It becomes more involved if you:

  • Need remote access configured
  • Want replication or clustering
  • Need to integrate with Apache/PHP (LAMP stack)
  • Want specific authentication plugins
  • Are running it inside Docker or a VM

For simple local development or learning SQL, it’s very easy.


Typical Beginner Pitfalls

  • Forgetting the root password
  • Not starting the MySQL service
  • Firewall blocking remote connections
  • Confusing MySQL with MariaDB (the drop-in replacement many Linux distributions install by default)

For Your Use Case

If you’re planning something like:

  • Logging radio astronomy data
  • Storing interferometry metadata
  • Querying observational datasets

It would be entirely reasonable to set up locally first before moving to a server.


If you tell me your operating system and what you plan to use it for (development, data logging, web interface, etc.), I can give you a tailored setup path.

What I would like to do is run MySQL on one of my Windows MiniPCs, which collects data from a VLF aerial into a USB sound card via the Spectrum Lab spectrum analyser software. Spectrum Lab has been configured to save plots every hour into a specific folder. I would like the contents of this folder to be searchable and displayable remotely when people access my website.

That’s a very sensible architecture — and quite achievable on a Windows MiniPC.

You essentially want:

Spectrum Lab → hourly image files → database index → searchable web interface → remote viewing

Let’s break it into clean layers.


1️⃣ Overall Architecture (Recommended)

On your Windows MiniPC:

  • Spectrum Lab
    → saves hourly PNG/JPG plots into a folder
  • MySQL
    → stores metadata about each image (timestamp, filename, frequency range, notes)
  • Apache HTTP Server or Nginx
    → serves a small web app
  • PHP or Python (Flask/Django)
    → reads MySQL and generates searchable pages

Users then access:

https://yourdomain/vlf/

The MiniPC can:

  • Either host the website itself
  • Or sync images + database to your main web host

2️⃣ The Key Idea: Don’t Store Images in MySQL

Do not store the actual image files inside MySQL.

Instead:

  • Keep images in a folder like: C:\VLF\plots\2026-02-28-13.png
  • Store in MySQL:
    • filename
    • timestamp
    • date
    • frequency band
    • any event tags
    • file path

That keeps it fast and simple.


3️⃣ Automatic Database Indexing

You’ll need a small background script that:

  1. Checks the folder every few minutes
  2. Detects new files
  3. Extracts timestamp from filename
  4. Inserts a row into MySQL

This can be done with:

  • A simple Python script
  • Windows Task Scheduler running every 5 minutes

Very reliable and lightweight.


4️⃣ Example Database Structure

CREATE TABLE vlf_plots (
    id INT AUTO_INCREMENT PRIMARY KEY,
    timestamp DATETIME NOT NULL,
    filename VARCHAR(255) NOT NULL,
    frequency_low FLOAT,
    frequency_high FLOAT,
    notes TEXT,
    indexed_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

Search queries then become trivial:

SELECT * FROM vlf_plots
WHERE timestamp BETWEEN '2026-02-28 00:00:00'
AND '2026-02-28 23:59:59';

5️⃣ Remote Access Options

You have two main approaches:

Option A – MiniPC Hosts the Website

  • Install Apache + PHP + MySQL (XAMPP makes this easy)
  • Port forward 80/443 on your router
  • Add SSL (Let’s Encrypt)

✔ Simple
⚠ Requires router configuration
⚠ Public exposure


Option B – Safer / Cleaner Approach (Recommended)

MiniPC:

  • Stores images
  • Indexes with MySQL
  • Periodically uploads:
    • Images
    • A database export or JSON file

To:

  • Your main website host

This keeps your observatory machine off the public internet.


6️⃣ What Makes This Easy vs Hard

Easy Parts

  • Installing MySQL on Windows
  • Getting Spectrum Lab to save hourly plots
  • Writing a simple indexer script

Slightly Technical Parts

  • Web server setup
  • SSL certificates
  • Router configuration (if self-hosted)

7️⃣ A More “Andrew-Level” Enhancement

Since you’re collecting VLF data:

You could also:

  • Store daily noise floor averages
  • Store detected sferic events
  • Flag solar flare events
  • Auto-tag unusual amplitude spikes

That turns it from an image archive into a searchable scientific dataset.


8️⃣ Rough Difficulty Assessment

For someone technically comfortable:

  • MySQL install: ⭐ Easy
  • Automated indexing: ⭐⭐ Easy–Moderate
  • Public website integration: ⭐⭐⭐ Moderate

Entire project: Very achievable over a few evenings.


If you’d like, I can next give you:

  • A complete simple working architecture (minimal components)
  • Or a more robust observatory-grade design
  • Or actual example Python code for auto-indexing the Spectrum Lab folder

Which direction would you like to take?

All of them!

Excellent — we’ll build this properly.

I’ll give you:

  1. Minimal working system (quick deployment)
  2. Observatory-grade architecture (robust & scalable)
  3. Full example Python auto-indexer
  4. Example searchable web interface (PHP + MySQL)
  5. Optional scientific enhancements

Everything tailored to a Windows MiniPC running Spectrum Lab.


1️⃣ Minimal Working System (You Could Build This This Weekend)

Install a bundled stack

Install XAMPP on the MiniPC.

That gives you:

  • Apache HTTP Server
  • MySQL
  • PHP
  • phpMyAdmin

Folder structure

C:\VLF\
   plots\
   indexer.py
C:\xampp\htdocs\vlf\
   index.php

Database

Create database vlf_db

CREATE TABLE vlf_plots (
    id INT AUTO_INCREMENT PRIMARY KEY,
    timestamp DATETIME NOT NULL,
    filename VARCHAR(255) NOT NULL,
    full_path VARCHAR(500),
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

Workflow

Spectrum Lab → saves hourly PNG
Python script → inserts metadata
Website → queries and displays images

You now have a searchable archive.


2️⃣ Observatory-Grade Architecture (Recommended Long-Term)

Instead of exposing the MiniPC to the internet:

On the Observatory PC

  • Spectrum Lab
  • MySQL
  • Python indexer
  • A JSON export generator every hour

Every Hour:

  1. Index new files
  2. Export latest entries to:
    • plots.json
  3. Sync:
    • Images
    • JSON file

To your main web server via:

  • SFTP
  • rsync
  • or scheduled upload

Public website then:

  • Reads JSON (no direct DB exposure)
  • Displays archive
  • Never touches observatory machine

✔ Much safer
✔ Much more robust
✔ No router port forwarding

This is what I’d genuinely recommend.


3️⃣ Full Python Auto-Indexer (Production-Ready Version)

Install:

pip install mysql-connector-python

Then:

import os
import mysql.connector
from datetime import datetime

PLOT_FOLDER = r"C:\VLF\plots"

db = mysql.connector.connect(
    host="localhost",
    user="root",
    password="YOURPASSWORD",
    database="vlf_db"
)

cursor = db.cursor()

def file_already_indexed(filename):
    """Return True if this filename is already in the database."""
    cursor.execute("SELECT id FROM vlf_plots WHERE filename=%s", (filename,))
    return cursor.fetchone() is not None

def extract_timestamp(filename):
    # Assumes Spectrum Lab names files like 2026-02-28-13.png (year-month-day-hour)
    base = filename.replace(".png", "")
    return datetime.strptime(base, "%Y-%m-%d-%H")

for file in os.listdir(PLOT_FOLDER):
    if file.endswith(".png") and not file_already_indexed(file):
        try:
            ts = extract_timestamp(file)
        except ValueError:
            continue  # skip files that don't match the expected naming pattern
        full_path = os.path.join(PLOT_FOLDER, file)
        cursor.execute(
            "INSERT INTO vlf_plots (timestamp, filename, full_path) VALUES (%s, %s, %s)",
            (ts, file, full_path)
        )
        db.commit()

cursor.close()
db.close()

Set this in Windows Task Scheduler:

  • Every 5 minutes

Completely automatic.
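One caveat with a 5-minute schedule: if a run ever takes longer than expected, Task Scheduler can start a second copy alongside it. A small lock-file guard avoids double-indexing; this is my own sketch, not part of Spectrum Lab or MySQL (the demo uses a temporary folder — in production you would point LOCK at a fixed path):

```python
import os
import tempfile
from pathlib import Path

# Demo uses a fresh temporary folder; in production use a fixed path,
# e.g. Path(os.environ["TEMP"]) / "vlf_indexer.lock"
LOCK = Path(tempfile.mkdtemp()) / "vlf_indexer.lock"

def acquire_lock():
    """Create the lock file atomically; return False if another
    scheduled run still holds it."""
    try:
        fd = os.open(LOCK, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
        return True
    except FileExistsError:
        return False

def release_lock():
    LOCK.unlink(missing_ok=True)

got_it = acquire_lock()   # first run takes the lock
second = acquire_lock()   # an overlapping run is refused
release_lock()
```

Call acquire_lock() at the top of indexer.py and exit immediately if it returns False.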


4️⃣ Example Searchable Website (PHP)

Place in htdocs/vlf/index.php

<?php
$conn = new mysqli("localhost", "root", "YOURPASSWORD", "vlf_db");

$date = $_GET['date'] ?? date("Y-m-d");

// Use a prepared statement so the date parameter cannot inject SQL
$stmt = $conn->prepare(
    "SELECT * FROM vlf_plots WHERE DATE(timestamp) = ? ORDER BY timestamp ASC"
);
$stmt->bind_param("s", $date);
$stmt->execute();
$result = $stmt->get_result();

echo "<h2>VLF Archive for " . htmlspecialchars($date) . "</h2>";

while ($row = $result->fetch_assoc()) {
    // Assumes C:\VLF\plots is exposed as /VLF/plots (e.g. via an Apache Alias)
    echo "<div style='margin:10px'>";
    echo "<p>" . $row['timestamp'] . "</p>";
    echo "<img src='/VLF/plots/" . htmlspecialchars($row['filename']) . "' width='600'>";
    echo "</div>";
}
?>

Now you can visit:

http://your-ip/vlf/?date=2026-02-28

And browse by day.


5️⃣ Scientific Upgrade Layer (Where It Gets Interesting)

Since you’re running a VLF station, you can elevate this beyond “image archive.”

Add columns:

ALTER TABLE vlf_plots
ADD COLUMN noise_floor FLOAT,
ADD COLUMN peak_amplitude FLOAT,
ADD COLUMN flagged_event VARCHAR(100);

Then:

  • Parse Spectrum Lab log output
  • Extract amplitude statistics
  • Flag:
    • Sudden broadband spikes (sferics)
    • Long-duration amplitude rises (solar flares)
    • Narrowband carriers

You now have:

A searchable event database.
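As a sketch of the amplitude-statistics idea: the snippet below assumes a simple time,amplitude_db CSV layout, which is purely illustrative — the real columns depend entirely on how you configure Spectrum Lab's text export.

```python
import csv
import io

# Hypothetical Spectrum Lab amplitude log ("time,amplitude_db" per line);
# the actual column layout depends on your Spectrum Lab export settings.
SAMPLE_LOG = """\
time,amplitude_db
13:00:00,-82.1
13:00:10,-81.7
13:00:20,-54.3
13:00:30,-82.0
"""

def summarise(log_text, spike_margin_db=15.0):
    """Return (noise_floor, peak, flagged) for one log segment.

    noise_floor = mean amplitude, peak = maximum; the segment is
    flagged if the peak exceeds the mean by more than spike_margin_db.
    """
    rows = list(csv.DictReader(io.StringIO(log_text)))
    amps = [float(r["amplitude_db"]) for r in rows]
    noise_floor = sum(amps) / len(amps)
    peak = max(amps)
    return noise_floor, peak, (peak - noise_floor) > spike_margin_db

noise, peak, flagged = summarise(SAMPLE_LOG)
```

The three results map straight onto the noise_floor, peak_amplitude and flagged_event columns added above.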


6️⃣ Advanced Add-Ons

You could also:

🔹 Auto-Generate Daily Summary Page

  • Combine 24 plots into a composite

🔹 Add Event Tagging Interface

  • Add manual annotations
  • “Probable solar flare”
  • “Local interference”

🔹 Add REST API

Expose:

/api/vlf?start=...&end=...
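However you host the endpoint, the important part is validating the parameters and keeping the SQL parameterized. A framework-agnostic sketch (the helper name build_range_query is my own, not from any library):

```python
from datetime import datetime

def build_range_query(start: str, end: str):
    """Validate start/end (ISO 'YYYY-MM-DD HH:MM:SS') and return a
    parameterized SQL query plus its parameters, ready for a DB cursor."""
    fmt = "%Y-%m-%d %H:%M:%S"
    # Raises ValueError on malformed input instead of passing it to SQL
    datetime.strptime(start, fmt)
    datetime.strptime(end, fmt)
    sql = ("SELECT timestamp, filename FROM vlf_plots "
           "WHERE timestamp BETWEEN %s AND %s ORDER BY timestamp")
    return sql, (start, end)

sql, params = build_range_query("2026-02-28 00:00:00", "2026-02-28 23:59:59")
```

The endpoint itself then just passes sql and params to cursor.execute() and returns the rows as JSON.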

🔹 Add Grafana Dashboard

Using MySQL as backend.


7️⃣ Difficulty & Time Estimate

Minimal version:

  • 1–2 evenings

Observatory-grade version:

  • 1–2 weekends

Full scientific logging layer:

  • Gradual expansion

8️⃣ My Honest Recommendation for You

Given you’re already handling SDR and interferometry workflows:

Start with:

  • XAMPP
  • MySQL
  • Python indexer
  • Local web interface

Then migrate to:

  • JSON export + remote hosting

That keeps it safe and scalable.


If you’d like next, I can:

  • Design the full production directory structure
  • Provide a JSON-export version of the Python script
  • Or design a clean, observatory-themed front end layout for public visitors

Which one shall we build first?

Perfect. We’ll build this as a complete, clean observatory system you can expand over time.

I’ll structure this into three full deliverables:

  1. Production-grade directory architecture
  2. Python indexer + JSON exporter (robust version)
  3. Clean public-facing observatory web front-end

Everything assumes:

  • Windows MiniPC
  • Spectrum Lab saving hourly plots
  • MySQL
  • Public site hosted separately (recommended)

1️⃣ Production Directory Structure (Clean & Scalable)

On the Observatory MiniPC:

C:\VLF_Observatory\
│
├── plots\
│   ├── 2026\
│   │   ├── 02\
│   │   │   ├── 2026-02-28-13.png
│
├── logs\
│   ├── spectrumlab.log
│   ├── indexer.log
│
├── exports\
│   ├── plots_latest.json
│   ├── plots_archive.json
│
├── scripts\
│   ├── indexer.py
│   ├── exporter.py
│
└── config\
    └── settings.json

Why structured by year/month?

  • Prevents huge folders
  • Faster filesystem access
  • Makes long-term archival clean
  • Easy rsync/upload by month
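Deriving the year/month path from a plot's timestamp is a one-liner; a small sketch assuming the hourly YYYY-MM-DD-HH.png naming used throughout this guide (archive_path is my own helper name):

```python
from datetime import datetime
from pathlib import Path

def archive_path(base_dir: str, ts: datetime) -> Path:
    """Build the year/month archive path for an hourly plot,
    e.g. plots/2026/02/2026-02-28-13.png."""
    filename = ts.strftime("%Y-%m-%d-%H") + ".png"
    return Path(base_dir) / ts.strftime("%Y") / ts.strftime("%m") / filename

# Where the 13:00 plot for 28 Feb 2026 would live
p = archive_path("plots", datetime(2026, 2, 28, 13))
```

The indexer can call this when moving new Spectrum Lab output into the archive tree.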

2️⃣ Full Robust Python Indexer + JSON Exporter

Install dependency:

pip install mysql-connector-python

indexer.py (Production Version)

import os
import json
import mysql.connector
from datetime import datetime
from pathlib import Path

BASE_DIR = Path("C:/VLF_Observatory")
PLOT_DIR = BASE_DIR / "plots"
EXPORT_DIR = BASE_DIR / "exports"
LOG_FILE = BASE_DIR / "logs/indexer.log"

DB_CONFIG = {
    "host": "localhost",
    "user": "root",
    "password": "YOURPASSWORD",
    "database": "vlf_db"
}

def log(msg):
    with open(LOG_FILE, "a") as f:
        f.write(f"{datetime.utcnow()} - {msg}\n")

def connect_db():
    return mysql.connector.connect(**DB_CONFIG)

def extract_timestamp(filename):
    # Expects Spectrum Lab's hourly naming pattern, e.g. 2026-02-28-13.png
    base = filename.replace(".png", "")
    return datetime.strptime(base, "%Y-%m-%d-%H")

def scan_and_index():
    db = connect_db()
    cursor = db.cursor()

    for root, dirs, files in os.walk(PLOT_DIR):
        for file in files:
            if file.endswith(".png"):
                cursor.execute("SELECT id FROM vlf_plots WHERE filename=%s", (file,))
                if cursor.fetchone():
                    continue

                try:
                    ts = extract_timestamp(file)
                    full_path = str(Path(root) / file)

                    cursor.execute(
                        "INSERT INTO vlf_plots (timestamp, filename, full_path) VALUES (%s, %s, %s)",
                        (ts, file, full_path)
                    )
                    db.commit()
                    log(f"Indexed {file}")
                except Exception as e:
                    log(f"Error indexing {file}: {e}")

    cursor.close()
    db.close()

def export_latest_json():
    db = connect_db()
    cursor = db.cursor(dictionary=True)

    cursor.execute("""
        SELECT timestamp, filename
        FROM vlf_plots
        ORDER BY timestamp DESC
        LIMIT 500
    """)

    rows = cursor.fetchall()

    output = []
    for r in rows:
        output.append({
            "timestamp": r["timestamp"].isoformat(),
            "filename": r["filename"]
        })

    with open(EXPORT_DIR / "plots_latest.json", "w") as f:
        json.dump(output, f, indent=2)

    cursor.close()
    db.close()
    log("Exported JSON")

if __name__ == "__main__":
    scan_and_index()
    export_latest_json()

Schedule It

Use Windows Task Scheduler:

  • Trigger: Every 5 minutes
  • Action: python C:\VLF_Observatory\scripts\indexer.py

Fully automated.


3️⃣ Public Website (Clean Observatory Front-End)

Host this on your main web server.

Upload:

  • /plots/ (synced images)
  • /plots_latest.json

index.html (Public Version)

<!DOCTYPE html>
<html>
<head>
    <title>VLF Observatory Archive</title>
    <style>
        body { font-family: Arial; background:#0c1117; color:#eee; }
        h1 { color:#7dd3fc; }
        .plot { margin:20px; padding:10px; background:#1f2937; }
        img { width: 800px; border-radius:8px; }
    </style>
</head>
<body>

<h1>VLF Observatory Archive</h1>
<input type="date" id="datePicker">

<div id="plots"></div>

<script>
async function loadPlots() {
    const response = await fetch('plots_latest.json');
    const data = await response.json();

    const selectedDate = document.getElementById("datePicker").value;

    const container = document.getElementById("plots");
    container.innerHTML = "";

    data.forEach(plot => {
        if (!selectedDate || plot.timestamp.startsWith(selectedDate)) {
            container.innerHTML += `
                <div class="plot">
                    <p>${plot.timestamp}</p>
                    <img src="plots/${plot.filename}">
                </div>
            `;
        }
    });
}

document.getElementById("datePicker").addEventListener("change", loadPlots);
loadPlots();
</script>

</body>
</html>

4️⃣ Secure Sync from Observatory to Public Server

Use:

  • WinSCP scripting
  • Or rsync (via WSL)
  • Or scheduled SFTP upload

Upload:

  • New images
  • plots_latest.json

No public exposure of the MiniPC.
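Whichever transfer tool you use, the selection step — "upload only what changed since the last sync" — can live in Python. A sketch (the demo data sits in a temporary folder; the state handling is my own assumption, not WinSCP syntax):

```python
import os
import tempfile
from pathlib import Path

def files_to_upload(folder, last_sync_mtime, suffix=".png"):
    """Return plot files modified after the last successful sync,
    oldest first, so the transfer tool (WinSCP, SFTP, rsync) only
    moves what is new."""
    candidates = [
        p for p in Path(folder).rglob(f"*{suffix}")
        if p.stat().st_mtime > last_sync_mtime
    ]
    return sorted(candidates, key=lambda p: p.stat().st_mtime)

# Self-contained demo: one plot older than the last sync, one newer
demo = Path(tempfile.mkdtemp())
for name, mtime in [("2026-02-27-23.png", 1000), ("2026-02-28-00.png", 2000)]:
    f = demo / name
    f.write_bytes(b"")
    os.utime(f, (mtime, mtime))

pending = files_to_upload(demo, last_sync_mtime=1500)
```

Store the last sync time in a small state file next to exports\, and update it only after the upload succeeds.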


5️⃣ Optional Scientific Upgrade Layer

Add to database:

ALTER TABLE vlf_plots
ADD COLUMN noise_floor FLOAT,
ADD COLUMN peak_signal FLOAT,
ADD COLUMN event_flag VARCHAR(100);

Then enhance Python:

  • Parse Spectrum Lab text logs
  • Compute:
    • Mean amplitude
    • Max spike
    • Broadband burst detection
  • Auto-flag unusual events

Now your archive becomes searchable:

  • Show “All high-amplitude nights”
  • Filter by solar flare day
  • Search by signal strength

6️⃣ Expansion Roadmap

You can later add:

  • REST API endpoint
  • Real-time “latest spectrogram” auto-refresh
  • Daily composite image generation
  • Event tagging interface
  • Grafana dashboard connected to MySQL
  • Public API for other VLF observers

7️⃣ Realistic Build Timeline

Weekend 1:

  • Install MySQL
  • Build indexer
  • Confirm JSON export

Weekend 2:

  • Build public interface
  • Set up automated sync

Then iterate scientifically.

