How do I figure out a flush value for a GNU Radio (GRC) flowgraph that changes frequencies regularly, where the saved data must not include data from the previous frequency?
Great question — this bites a lot of people the first time they start hopping frequencies in GNU Radio.
Short version: you can’t rely on a magic “flush value” alone. You need to explicitly manage latency in the flowgraph so that when you change frequency, you discard the old samples that are still in flight before you start saving again.
Here’s how to think about it and how to do it cleanly.
What’s actually happening (the mental model)
When you retune a source (SDR, signal source, etc.):
- Old samples are still buffered in:
  - the source driver
  - GNU Radio block buffers
  - FIR filters / resamplers (the big one)
- Those samples will continue flowing for a short time after the frequency change
- If you immediately write to file, you get a mix of:
  - tail of old frequency ❌
  - start of new frequency ✅
So the “flush value” is really:
How many samples are still in the pipeline after a retune?
Step 1: Identify your worst-case pipeline delay
You need the maximum group delay in samples.
Common contributors
- FIR filters
  - Group delay ≈ (taps − 1) / 2
- Rational resampler
  - Delay = filter delay × interpolation factor
- SDR hardware buffering
  - Usually a few ms (vendor dependent)
- GNU Radio buffers
  - Default: ~32k samples per block (can be tuned)
Example
Let’s say:
- Sample rate = 2 MS/s
- FIR filter = 129 taps → delay ≈ 64 samples
- SDR hardware latency ≈ 5 ms → 10,000 samples
- Safety margin = 2×

Flush size ≈ 2 × (64 + 10,000) ≈ 20,000 samples
Step 2: Implement an explicit “discard” phase
The robust pattern (recommended)
Instead of trying to flush a file sink:
- Retune
- Discard N samples
- Then enable saving
This works every time.
Method A: Gate the data stream (best practice)
Insert a Multiply Const or Selector before the File Sink.
Flowgraph idea
[SDR Source] → (filters) → [Selector input 1] → [File Sink]
[Null Source] → [Selector input 0]
(your control code or a control message flips the Selector's input index)
Control logic (Python / embedded)
import time

# disable saving: switch to input 0 (Null Source); the live signal path on
# input 1 is still consumed by the Selector, but discarded
selector.set_input_index(0)

# retune the hardware
sdr.set_center_freq(new_freq)

# wait for the in-flight old-frequency samples to drain
time.sleep(flush_samples / sample_rate)

# enable saving: clean new-frequency data now reaches the File Sink
selector.set_input_index(1)
✔️ Old samples go to /dev/null
✔️ New frequency data is clean
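If you build the flowgraph in Python rather than GRC, here is a minimal sketch of the gating topology. It uses the stock blocks.selector (GNU Radio 3.8+) with one input and two outputs, routing to a Null Sink while flushing and to the File Sink while saving; a Signal Source stands in for the SDR so the sketch runs without hardware, and the filename is a placeholder.

```python
from gnuradio import gr, blocks, analog

class gated_recorder(gr.top_block):
    """Sketch of Method A: gate the stream ahead of the File Sink."""
    def __init__(self, samp_rate=2e6, filename="capture.cfile"):
        gr.top_block.__init__(self, "gated_recorder")

        # Stand-in for your SDR source
        self.src = analog.sig_source_c(samp_rate, analog.GR_COS_WAVE, 100e3, 1.0)

        # selector(itemsize, input_index, output_index): start in "discard" state
        self.gate = blocks.selector(gr.sizeof_gr_complex, 0, 0)

        self.null = blocks.null_sink(gr.sizeof_gr_complex)
        self.sink = blocks.file_sink(gr.sizeof_gr_complex, filename)

        self.connect(self.src, self.gate)
        self.connect((self.gate, 0), self.null)   # output 0: flush, nothing saved
        self.connect((self.gate, 1), self.sink)   # output 1: save to file

    def save(self, enable):
        # Route the stream to the File Sink (True) or the Null Sink (False)
        self.gate.set_output_index(1 if enable else 0)
```

Note this sketch gates on the output side (Null Sink) rather than feeding the Selector from a Null Source; either works, but with a Null Sink nothing at all is written to the file during the flush window. Your retune logic then calls save(False), retunes, sleeps for the flush time, and calls save(True), exactly as in the control snippet above.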
Method B: Use a Skip Head block as a discard
Insert a Skip Head block before the File Sink:
- Set num_items = flush_samples (the first flush_samples items are dropped)
- Let it run
- Then reconnect or reset the flowgraph before the next hop
This works but is clunkier than Method A if you retune often.
Method C: Timestamp-based discard (advanced, elegant)
If you’re already tagging streams:
- Use rx_time or a custom freq_change tag
- A custom block ignores samples until:
  current_time > freq_change_time + latency
This is excellent for interferometry or scanning receivers.
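Here is a minimal sketch of such a block as a Python general block. The tag name freq_change, the fixed discard length, and the block itself are illustrative assumptions (it is not a stock GNU Radio block); it drops a set number of items every time the tag appears.

```python
import numpy as np
import pmt
from gnuradio import gr

class tag_gated_discard(gr.basic_block):
    """Drop a fixed number of items after each 'freq_change' stream tag."""
    def __init__(self, discard_items, tag_name="freq_change"):
        gr.basic_block.__init__(self, name="tag_gated_discard",
                                in_sig=[np.complex64],
                                out_sig=[np.complex64])
        self.discard_items = int(discard_items)
        self.tag_key = pmt.intern(tag_name)
        self.remaining = 0   # items still to discard

    def general_work(self, input_items, output_items):
        inp, out = input_items[0], output_items[0]
        n_in = len(inp)

        # Re-arm the countdown whenever a freq_change tag is seen
        for tag in self.get_tags_in_window(0, 0, n_in):
            if pmt.eqv(tag.key, self.tag_key):
                self.remaining = self.discard_items

        if self.remaining > 0:
            # Consume (discard) input without producing any output
            drop = min(self.remaining, n_in)
            self.remaining -= drop
            self.consume(0, drop)
            return 0

        # Normal pass-through once the discard window has elapsed
        n = min(n_in, len(out))
        out[:n] = inp[:n]
        self.consume(0, n)
        return n
```

Note it discards from the start of whatever buffer contains the tag, so it throws away a few extra pre-retune samples; for a scanning receiver that is harmless and keeps the logic simple.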
Step 3: Compute flush time dynamically (important!)
Since your frequency changes regularly, compute this once and reuse it:
flush_time = max_pipeline_delay / sample_rate
Or with margin:
flush_time = 2.0 * max_pipeline_delay / sample_rate
Then:
time.sleep(flush_time)
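Since this value is reused on every hop, it is worth wrapping in a small helper; a sketch, with the Step 1 example numbers in the usage line:

```python
def flush_time(pipeline_delay_samples, sample_rate, margin=2.0):
    """Seconds to wait after a retune before re-enabling the File Sink."""
    return margin * pipeline_delay_samples / sample_rate

# Step 1 example: 64 samples of FIR delay + 10,000 samples of hardware latency
t = flush_time(64 + 10_000, 2e6)   # ≈ 0.0101 s, i.e. about 10 ms
```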
What not to do ❌
- ❌ Rely on File Sink “unbuffered” mode
- ❌ Assume retune is instantaneous
- ❌ Try to trim data afterward unless you have timestamps
- ❌ Use tiny flush values “that seem to work”
Those all fail intermittently.
Quick sanity test
After implementing:
- Tune between two very different frequencies
- Plot the first 50 ms of saved data
- If you see any energy from the old signal → flush too short
TL;DR (practical answer)
- There is no universal “flush value”
- Compute worst-case pipeline delay in samples
- Discard data for that duration after every retune
- Use a selector or gate before the File Sink
- Add a safety margin
Please give me methods for calculating the worst-case pipeline delay for the following blocks: Stream to Vector, FFT, and Complex to Mag².
Nice, these three are actually very well-behaved compared to FIRs and resamplers 👍
The key thing to remember is that only blocks that need to accumulate items introduce delay. Of your list, only one really matters.
I’ll go block-by-block, then give you a clean worst-case formula you can drop straight into your retune logic.
1. Stream to Vector
What it does
- Buffers N samples
- Outputs one vector only after the buffer is full
Delay
This is the dominant latency.
If:
- vlen = N
- sample rate = Fs

Then:
- Minimum delay: N − 1 samples
- Worst-case delay: N samples (safe value)
Time equivalent
delay_time ≈ N / Fs
Example
- vlen = 4096
- Fs = 2 MS/s
delay ≈ 4096 samples ≈ 2.05 ms
✔️ This is the number you must account for.
2. FFT (Vector → Vector)
What it does
- Consumes one complete vector
- Produces one output vector
- No internal buffering beyond the vector itself
Delay
- 0 additional sample delay
- Output appears as soon as the input vector arrives
Worst-case contribution
0 samples
FFT looks scary, but in GNU Radio it’s effectively instantaneous once the vector exists.
3. Complex to Mag²
What it does
- Element-wise operation
- One output per input item
Delay
- 0 samples
- Stateless, streaming
Worst-case contribution
0 samples
Putting it all together
Worst-case pipeline delay (samples)
D_total = N_stream_to_vector
That’s it.
If you have one Stream-to-Vector block with vlen = N:
D_total = N
If you have multiple Stream-to-Vector blocks in series:
D_total = sum(vlen_i)
(each stage has to fill its own buffer before anything comes out; in practice the largest vlen dominates, so max(vlen_i) plus a margin is usually close enough; see the quick sketch below)
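A quick sketch of that bookkeeping (the vector sizes are made-up placeholders):

```python
vlens = [1024, 8192]           # vlen of every Stream-to-Vector in the chain
d_algo_worst = sum(vlens)      # 9216 samples: safe worst case for series blocks
d_algo_typical = max(vlens)    # 8192 samples: the term that usually dominates
print(d_algo_worst, d_algo_typical)
```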
Safe flush time after retune
Samples
flush_samples = N
Time
flush_time = N / Fs
Recommended margin (real world)
I strongly suggest:
flush_samples = 1.5 × N or 2 × N
Because:
- scheduler buffering
- SDR driver latency
- OS jitter
Concrete example
- Sample rate: 2 MS/s
- Stream to Vector: 8192
- FFT size: 8192
flush_samples = 2 × 8192 = 16384
flush_time = 16384 / 2e6 ≈ 8.2 ms
Discard at least the first 8–10 ms of data after each retune before saving.
Quick verification trick (very useful)
Add a debug FFT display:
- Tune from a strong carrier → empty band
- Retune
- Watch how many FFT frames still show the old signal
That number × FFT period should match your calculated delay.
Bottom line
| Block | Worst-case delay |
|---|---|
| Stream to Vector | N samples ✅ |
| FFT | 0 samples |
| Complex to Mag² | 0 samples |
Having worked out the time delay, how do I then add it to the flowgraph?
Yep — this is where people usually underestimate things, so let’s do it properly and conservatively.
I’ll break it into three additive contributors, then give you a single worst-case formula you can actually trust in a hopping system.
Total latency model (what you must flush)
After a retune, old samples remain due to:
- Hardware / driver buffering
- GNU Radio scheduler buffers
- Algorithmic buffering (your Stream→Vector, etc.)
You must wait until all three are empty before saving.
1️⃣ Hardware latency (SDR + driver)
This is the least deterministic part.
What contributes
- SDR FPGA / USB / Ethernet buffering
- Driver ring buffers
- OS USB/network stack
Typical real-world values (order of magnitude)
| Hardware type | Typical latency |
|---|---|
| RTL-SDR (USB2) | 5–20 ms |
| SDRplay (USB) | 5–15 ms |
| USRP B2xx | 1–5 ms |
| USRP N/E (Ethernet) | 2–10 ms |
⚠️ Retune latency ≠ streaming latency — both matter.
Convert to samples
D_hw = Fs × T_hw
Example:
- Fs = 2 MS/s
- Hardware latency = 10 ms
D_hw = 2e6 × 0.010 = 20,000 samples
If unsure: assume 10–20 ms and sleep well at night.
2️⃣ GNU Radio scheduler buffers
This one is poorly documented but important.
How it works (simplified)
- Each connection has:
  - an output buffer on the upstream block
  - an input buffer on the downstream block
- The default buffer size is often ~32k items, sometimes larger
- The scheduler may allow multiple buffers in flight
Worst-case rule of thumb (safe)
Assume one full buffer of old data survives past retune.
D_sched ≈ max_buffer_items
Common safe assumptions:
- 32,768 samples
- 65,536 samples (very conservative)
If you explicitly set buffer sizes, use that value.
Setting buffer size (optional but good practice)
In GRC:
- Open the properties of the block feeding your sink → Advanced tab
- Set Min/Max Output Buffer
- Make it known and bounded (see the Python sketch below if you are not using GRC)
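If you build the flowgraph in Python instead, the equivalent is the per-block buffer hint; a sketch where the Copy block is just a stand-in for whatever block feeds your sink:

```python
from gnuradio import gr, blocks

# Bound the output buffer of the block feeding the File Sink so D_sched is a
# known, bounded number of items. Set it before the flowgraph starts; GNU
# Radio may round the requested size up (e.g. to a memory-page multiple).
gate = blocks.copy(gr.sizeof_gr_complex)   # stand-in for the block ahead of the sink
gate.set_max_output_buffer(8192)           # applies to every output port, in items
```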
3️⃣ Algorithmic buffering (your blocks)
From earlier analysis:
| Block | Delay |
|---|---|
| Stream to Vector (N) | N samples |
| FFT | 0 |
| Complex → Mag² | 0 |
So:
D_algo = max(vlen)
Example:
- FFT size = 8192
D_algo = 8192 samples
4️⃣ Putting it all together (this is the key bit)
Worst-case total pipeline delay
D_total = D_hw + D_sched + D_algo
Flush time
flush_time = D_total / Fs
Worked example (realistic)
Let’s say:
- Fs = 2 MS/s
- SDRplay-class device → assume 10 ms hardware latency
- Scheduler buffer = 32,768 samples
- Stream→Vector = 8192
Convert everything to samples
D_hw = 2e6 × 0.010 = 20,000
D_sched = 32,768
D_algo = 8,192
------------------------
D_total ≈ 60,960 samples
Convert to time
flush_time ≈ 60,960 / 2e6 ≈ 30.5 ms
Add safety margin (recommended)
flush_time ≈ 40–50 ms
That value will be rock solid, even with OS jitter.
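The same model as a small calculator you can reuse per device and sample rate; a sketch, with the worked-example numbers as the usage:

```python
def worst_case_flush(samp_rate, hw_latency_s, sched_buffer_items, vlen, margin=1.5):
    """Return (flush_samples, flush_time_s) for D_total = D_hw + D_sched + D_algo."""
    d_hw    = int(samp_rate * hw_latency_s)   # hardware / driver buffering
    d_sched = int(sched_buffer_items)         # GNU Radio scheduler buffers
    d_algo  = int(vlen)                       # Stream-to-Vector buffering
    flush_samples = int(margin * (d_hw + d_sched + d_algo))
    return flush_samples, flush_samples / samp_rate

# Worked example: 2 MS/s, 10 ms hardware latency, 32,768-item buffer, vlen = 8192
samples, seconds = worst_case_flush(2e6, 10e-3, 32_768, 8192)
print(samples, seconds)   # ≈ 91,440 samples, ≈ 45.7 ms (inside the 40–50 ms band)
```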
5️⃣ How this maps to the flowgraph (important)
You do not delay samples.
You do:
- Selector → Null
- Retune hardware
- Sleep flush_time
- Selector → File Sink
This ensures:
- Hardware buffers drain
- Scheduler buffers drain
- Stream→Vector finishes its old frame
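Putting that sequence into code: a sketch of a retune helper, assuming the Selector is wired with output 0 to a Null Sink and output 1 to the File Sink, and that the source exposes set_center_freq (names like sdr, gate and dwell_time are placeholders for your own blocks and timing):

```python
import time

def retune_and_gate(source, selector, new_freq, flush_time_s):
    """Hop to new_freq while keeping stale samples out of the saved file."""
    selector.set_output_index(0)        # 1. route the stream to the Null Sink
    source.set_center_freq(new_freq)    # 2. retune the hardware
    time.sleep(flush_time_s)            # 3. let HW, scheduler and vector buffers drain
    selector.set_output_index(1)        # 4. route clean data to the File Sink

# Example hop schedule using the ~45 ms flush from the worked example above:
# for f in (100e6, 105e6, 110e6):
#     retune_and_gate(sdr, gate, f, 0.046)
#     time.sleep(dwell_time)
```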
6️⃣ How to verify your estimate experimentally (strongly recommended)
Do this once per system:
- Tune to a strong CW carrier
- Retune to empty band
- Record continuously
- Measure how long old spectral energy persists
That observed duration ≈ flush_time
Then add 25–50% margin and lock it in.
7️⃣ Final rule-of-thumb (if you want one line)
For FFT-based pipelines on USB SDRs:
Flush for 30–50 ms after each retune at MHz sample rates
If you’re at lower rates, scale accordingly.
TL;DR (copy this)
D_total = Fs·T_hw + buffer_size + vlen
flush_time = D_total / Fs
use 1.5× margin
gate stream with Selector