Your Antenna Review Sucks! Here Is How to Actually Get Real Data.

Your antenna review sucks. There, it has been said. And it is probably not because the video is bad. Plenty of antenna reviews do a fine job covering features, portability, and how easy it is to deploy in the field. The problem is that most antenna reviews never answer the one question everyone actually has: how well does it perform? Taking an antenna to a park, making a handful of contacts, and saying "yeah, that works" is not a test. It is a vibe. And the trust-me-bro approach to antenna reviews is not going to cut it anymore.

📌 TL;DR - The method in plain terms

  • The problem: Signal reports from contacts are almost never honest data. A 59 means nothing if the operator asked for your callsign three times first.
  • The tool: WSPR collects automated reception reports from stations worldwide, with timestamps, SNR readings, and distance data.
  • The system: An automated antenna switch alternates between two antennas during WSPR transmit cycles, logging which antenna is active at each timestamp.
  • The reference antenna: An end-fed halfwave (EFHW) serves as the known baseline for every comparison.
  • The analysis: WSPR Rocks exports reception data as JSON. Upload that plus the antenna log to an AI tool and generate comparison tables automatically.

The Problem With Most Antenna Reviews

Walk through any antenna review and you will usually see the same formula. The reviewer describes the build quality, shows the connectors, talks about how quick it is to set up, makes a few contacts, and wraps up with something like "I was impressed with how it performed." That is all fine, but it answers the wrong question.

The problem runs deeper when you factor in how on-air signal reports actually work. We have all heard the pattern: the other station asks for your callsign two or three times, then gives you a 59. That is not data. That is courtesy. Operator-given signal reports introduce enough human noise to make them nearly useless for comparing two antennas, and yet that is what most reviews rely on.

Why WSPR Is the Right Tool for This

WSPR (Weak Signal Propagation Reporter) is a digital mode designed for low-power propagation testing. It transmits a short, coded beacon on a schedule, and receiving stations around the world automatically decode it and upload reports to the central WSPRnet database, which aggregator sites like WSPR Rocks build on. Each report includes the receiving station's callsign, the timestamp, the signal-to-noise ratio (SNR), and the distance between the two stations.

That automation is what makes WSPR so useful for antenna testing. The receiving stations do not know you are running a comparison. They are just reporting what they hear. There is no courtesy 59. The SNR is a measurement, not a social gesture.
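Each spot in a WSPR Rocks export is just a small JSON record. As a rough sketch (the field names below are illustrative, not the export's exact schema), here is what one looks like when you load it in Python:

```python
import json

# A single reception report, shaped like the records in a WSPR Rocks
# export. Field names here are assumptions; check your actual export.
spot = json.loads("""{
    "time": "2024-05-01 14:32",
    "tx_sign": "N0CALL",
    "rx_sign": "G4XYZ",
    "band": 20,
    "snr": -18,
    "distance": 6100
}""")

# The SNR is a measurement in dB (referenced to a 2500 Hz bandwidth),
# not an operator's courtesy report.
print(f"{spot['rx_sign']} heard us at {spot['snr']} dB SNR, {spot['distance']} km away")
```

Because every record carries a timestamp, these spots can later be lined up against a log of which antenna was transmitting at that moment.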

For this setup, WSPR runs at four watts using a QDX transmitter. Low power actually works in your favor here. It keeps nearby receivers from saturating and gives a cleaner signal picture across the network of worldwide listeners. Sessions run in the morning, midday, and evening to capture how band conditions vary throughout the day, which matters when you are trying to build a fair comparison rather than cherry-pick a good run.

Picking a Reference Antenna

WSPR data is excellent for a single antenna, but it becomes much more useful when you have something to compare against. Without a baseline, you know how far your signal is reaching but you have no way to know if that is good, average, or disappointing for your location and band conditions.

The obvious choice for a reference is a dipole. It is a well-understood antenna with predictable, documented performance, which makes it the kind of baseline that everyone can relate to. The catch is space: not every yard or operating location can fit a proper dipole.

A practical alternative is the end-fed halfwave (EFHW). It is a common antenna that many operators already own and use for both home and POTA operations, which means comparisons against it carry practical meaning. Once you establish the EFHW as your reference, any antenna you bring in for testing gets compared against that same known baseline using the same conditions, the same power, and the same worldwide listener network.

The Automated Antenna Switch Setup

The trickiest part of comparing two antennas with WSPR is making sure the switching is fair and documented. If you manually swap cables between tests, you introduce timing differences and the risk of running one antenna during better conditions than the other. The solution is automating the switch so it alternates antennas during each WSPR transmit cycle.

The hardware side is straightforward. RG8 coax feeds into a relay-based antenna switch that routes the signal to either antenna 1 or antenna 2 depending on which relay is energized. An ESP32 microcontroller handles the relay control over Wi-Fi. Rather than building a separate control interface, the ESP32 connects to Home Assistant, which already handles a dashboard covering lights, blinds, and other home automation. Adding the antenna switch to that same dashboard keeps everything in one place.

The software side is a small C console application that listens to the UDP stream from WSPR. It watches for the end of each transmit cycle, then calls the Home Assistant API to flip the antenna switch. At each switch event, the app writes a log entry that records which antenna is now active and the timestamp. That log becomes one half of the comparison dataset.
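The author's app is written in C, but the control loop is simple enough to sketch in a few lines of Python. Everything below that names a URL, token, entity id, or port is a hypothetical placeholder you would replace with your own values, and the "end of transmit cycle" check stands in for whatever the WSPR software's actual UDP messages contain:

```python
import datetime
import json
import socket
import urllib.request

# Hypothetical values -- substitute your own Home Assistant URL,
# long-lived access token, switch entity id, and UDP port.
HA_URL = "http://homeassistant.local:8123"
HA_TOKEN = "YOUR_LONG_LIVED_TOKEN"
SWITCH_ENTITY = "switch.antenna_relay"
UDP_PORT = 2237

def toggle(antenna: int) -> int:
    """Alternate between antenna 1 and antenna 2."""
    return 2 if antenna == 1 else 1

def set_antenna(antenna: int) -> None:
    """Energize (antenna 2) or release (antenna 1) the relay via Home Assistant."""
    service = "turn_on" if antenna == 2 else "turn_off"
    req = urllib.request.Request(
        f"{HA_URL}/api/services/switch/{service}",
        data=json.dumps({"entity_id": SWITCH_ENTITY}).encode(),
        headers={"Authorization": f"Bearer {HA_TOKEN}",
                 "Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

def log_switch(antenna: int, logfile: str = "antenna_log.csv") -> None:
    """Append a timestamped record of which antenna is now active."""
    stamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
    with open(logfile, "a") as f:
        f.write(f"{stamp},{antenna}\n")

def run() -> None:
    """Listen for end-of-cycle messages and flip the antenna. Call to start."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", UDP_PORT))
    antenna = 1
    while True:
        data, _ = sock.recvfrom(4096)
        # Placeholder check: detecting the end of a transmit cycle depends
        # on the WSPR software's actual UDP message format.
        if b"tx_end" in data:
            antenna = toggle(antenna)
            set_antenna(antenna)
            log_switch(antenna)
```

The log file is deliberately minimal: a UTC timestamp and an antenna number per line is all the later analysis needs.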

Collecting and Analyzing the Data

After running WSPR sessions across multiple times of day, the next step is pulling the data from WSPR Rocks. The site lets you enter your callsign and download a JSON file containing all your reception reports, each with a timestamp that lines up with your antenna switch log.

Matching the two files manually is possible but slow. Using an AI tool speeds that up considerably. Uploading both files and running a prompt that requests formatted comparison tables gives you a breakdown of how antenna 1 and antenna 2 performed across spot count, average SNR, and maximum distance, segmented by band and time of day if your dataset supports it.
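If you would rather do the matching yourself, the logic is small. A minimal Python sketch, using made-up spot records and a made-up switch-log format, pairs each spot with whichever antenna was active at its timestamp and aggregates per antenna:

```python
import statistics
from datetime import datetime

# Illustrative records; real data comes from the WSPR Rocks JSON export
# and the switch log, whose exact field names may differ.
switch_log = [
    ("2024-05-01T14:00:00", 1),
    ("2024-05-01T14:02:00", 2),
    ("2024-05-01T14:04:00", 1),
]
spots = [
    {"time": "2024-05-01T14:01:10", "snr": -15, "distance": 1800},
    {"time": "2024-05-01T14:03:05", "snr": -21, "distance": 6100},
    {"time": "2024-05-01T14:05:30", "snr": -12, "distance": 950},
]

def active_antenna(spot_time: str) -> int:
    """Return the antenna that was selected at the time of a spot."""
    t = datetime.fromisoformat(spot_time)
    current = switch_log[0][1]
    for stamp, antenna in switch_log:
        if datetime.fromisoformat(stamp) <= t:
            current = antenna
    return current

def summarize() -> dict:
    """Per-antenna spot count, average SNR, and maximum distance."""
    by_antenna = {}
    for spot in spots:
        by_antenna.setdefault(active_antenna(spot["time"]), []).append(spot)
    return {
        ant: {
            "spots": len(group),
            "avg_snr": round(statistics.mean(s["snr"] for s in group), 1),
            "max_km": max(s["distance"] for s in group),
        }
        for ant, group in by_antenna.items()
    }

print(summarize())
# → {1: {'spots': 2, 'avg_snr': -13.5, 'max_km': 1800},
#    2: {'spots': 1, 'avg_snr': -21, 'max_km': 6100}}
```

The AI route produces the same kind of table; the sketch just shows that nothing magic is happening in the matching step.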

The result is a table that shows real, measured performance differences between the two antennas under the same conditions. Not a feeling. Not a courtesy report. An actual number that tells you whether the antenna you are testing is ahead of, behind, or roughly equivalent to your reference.

How to Replicate This System

✅ HOW TO - Set up your own WSPR antenna comparison system

  1. Set up WSPR with a low-power transmitter: A QDX at four watts is a proven starting point, but any WSPR-capable setup will work. Aim for a consistent, low power level you will use for every test.
  2. Pick a reference antenna: A dipole is ideal. An EFHW is a solid practical alternative that most operators already own. Stick with the same reference for every future comparison so results are meaningful across tests.
  3. Build or acquire an antenna switch: A relay-based switch with RG8 coax input is the mechanical core of the system. The switching hardware itself is not complicated.
  4. Automate the switching: Wire an ESP32 to control the relay and connect it to Home Assistant (or any HTTP-accessible automation platform). Write a small console app that listens to the WSPR UDP stream and triggers a switch at the end of each transmit cycle, logging the timestamp and active antenna.
  5. Run sessions at multiple times of day: Morning, midday, and evening. Band conditions change throughout the day and you want both antennas to get tested across the same range of conditions.
  6. Download your WSPR Rocks JSON file: After each session, pull your reception reports from WSPR Rocks with your callsign. This is the second half of your dataset.
  7. Match the files and generate your comparison tables: Upload both files to an AI tool and use a prompt that produces formatted comparison tables. Antenna 1 versus antenna 2, by spot count, SNR, and distance.

Frequently Asked Questions

Why are signal reports from contacts not useful for antenna testing?

Operator-given signal reports are almost always courtesy reports. A station that asks for your callsign three times may still give you a 59. WSPR collects automated, timestamped reports from receivers worldwide, which removes the human politeness variable entirely.

What is a good reference antenna for WSPR comparisons?

A dipole is the classic reference antenna because it is well-understood and predictable. If space does not allow a dipole, an end-fed halfwave (EFHW) is a widely used and consistent alternative that many operators already have on hand.

How much power does WSPR need for useful results?

Very little. Four watts with a QDX transmitter produces more than enough spots from worldwide receivers to generate meaningful comparison data. Lower power actually helps avoid saturating nearby receivers and gives a cleaner picture of antenna performance.

Is this system perfect?

No, and it is worth saying that upfront. Band conditions introduce variability even when both antennas share the same session. The automated switching helps control for that, but it does not eliminate it entirely. The goal is a repeatable, documented method that produces far better data than on-air signal reports, not a laboratory-grade controlled experiment. If you have ideas to improve it, the comments are open.

Bottom Line

Most antenna reviews answer the wrong question. They tell you what an antenna looks like and how easy it is to deploy, but they skip the part that actually matters: how does it perform compared to something else? WSPR changes that by giving you worldwide, automated reception data with timestamps, SNR readings, and distance. Pair it with an automated antenna switch that alternates between your test antenna and a known reference, log the switch events, and you have the raw material for a real comparison. Let AI handle the table generation and you have a faster, cleaner analysis pipeline than anything involving manual spreadsheet work.

The system described here is not perfect and the builder would be the first to say so. But it is a genuine improvement over the trust-me-bro approach, and it gives both reviewers and their audiences something they rarely get from antenna content: a number.
