The Northern Utah WebSDR
Latest news and current issues
Recent events and resolved issues:
27 March, 2024 - General update (Power and eclipse)
Site power:
Although commercial power was restored on 25 March, there have been a few minor interruptions since then related to the concluding work - mostly testing of the new UPS unit.
Eclipse-related things:
As you surely know by now, there will be a total solar eclipse (that's the one where the moon blocks the sun) on 8 April, 2024. At the Northern Utah WebSDR's remote site, the amount of "obscuration" will be about 46%, peaking around 1230 MDT (1833 UTC) - enough to make the environs look "strange" and to affect propagation, particularly for signals that emanate from or beyond the far side of the path of totality.
We have a number of GPS-locked receivers operating on site that should produce excellent, research-quality data not only during the eclipse, but also for periods of time before and after: As stability and precision will be within a few milliHertz, it should be possible to analyze not only the absolute frequency of received transmissions (due to Doppler shift) but also other aspects of the signal, including Doppler spread and absolute signal level at the antenna terminals.
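To put that precision in perspective, here's a minimal sketch (all numbers are illustrative assumptions, not measurements) of the non-relativistic Doppler relation - a few milliHertz of shift at HF corresponds to a surprisingly small rate of path-length change:

```python
# Illustrative only: relate a measurable Doppler shift to the rate of
# ionospheric path-length change, via delta_f = f * v / c.
C = 299_792_458.0  # speed of light, m/s

def path_rate_for_shift(delta_f_hz: float, carrier_hz: float) -> float:
    """Path-length change rate (m/s) that produces the given Doppler shift."""
    return C * delta_f_hz / carrier_hz

# An assumed 5 mHz shift on the 10 MHz WWV carrier implies a path
# changing by only about 0.15 meters per second:
print(f"{path_rate_for_shift(0.005, 10e6):.3f} m/s")  # 0.150 m/s
```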
While the main emphasis will be on WSPR reception - which, when properly configured, includes frequency and Doppler information - spectral recordings will also be made, including:
Recordings of a few kHz around the WSPR frequencies
Narrow and kHz-wide recordings around the WWV/H, WWVB and CHU time stations
A "wideband" recording that will include the entire spectrum from a few kHz through 10 meters. (Yes, that will be terabytes of data!)
The
amateur band and "wideband" recordings above will also include
locally-generated GPS-based frequency and absolute time references.
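As a rough back-of-envelope (the sample rate and sample size here are assumptions based on the hardware described elsewhere on this page), the raw data volume of such a wideband recording adds up quickly:

```python
# Rough estimate of raw "wideband" recording volume, assuming 16-bit
# real samples at a 58 Msps A/D clock, recorded continuously.
SAMPLE_RATE = 58e6    # samples per second (assumed)
BYTES_PER_SAMPLE = 2  # 16-bit samples (assumed)

def recording_size_tb(hours: float) -> float:
    """Raw recording size in terabytes (1 TB = 1e12 bytes)."""
    return SAMPLE_RATE * BYTES_PER_SAMPLE * hours * 3600 / 1e12

print(f"{recording_size_tb(24):.1f} TB per day")  # 10.0 TB per day
```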
Why do this monitoring? There are several compelling reasons:
Hams are usually beset with scientific curiosity: All of this is fun to do!
The
Utah SDR receive location has, in the past, been used for scientific
research - and we are pleased to continue this tradition.
As a 501(c)(3) non-profit organization, it is incumbent on us to include bona fide scientific research and collaboration to help justify our non-profit status.
We are testing new software/hardware/techniques at the cutting edge of software-defined radio technology that should contribute not only to scientific progress, but also to technical progression within amateur radio itself - right in line with the "Basis and Purpose" section of the FCC's Part 97 rules.
This same hardware is a candidate for future upgrades/enhancements of the Utah SDR site's receivers.
Meanwhile, a group related to the Utah WebSDR will find themselves in Texas during the event, not only for the "eclipse experience" but also to set up a temporary transmitter operating simultaneously on 80, 40, 30, 20, 17, 15, 12 and 10 meters that will radiate WSPR and other signals referenced to local atomic frequency standards and GPS timing.
Those signals - and those of other similarly-equipped stations along and outside the eclipse path - will be received not only at the monitoring site at the Northern Utah WebSDR, but also at other sites that are equipped with GPS-referenced receivers.
Remember to look up on April 8 - but be sure to do so safely!
26 March, 2024: UPS replaced and lightning strike from "Thundersnow"
New UPS:
On this day our local correspondent was on site, having received the new UPS for the remote receiver site, to test it. This UPS seems to be a bit better-built than its predecessor and we hope that it lasts a while.
Flash and boom!
During this testing - with the old UPS offline (of course!) - he was more than startled by a simultaneous FLASH and BOOM of "snow lightning" that thundered through the building, knocking out power - but it returned a few seconds later after a "recloser" sequence from the utility.
Since there was no UPS connected at that moment, all servers lost power. He commented that he could discern almost no difference in time between the "flash" and "boom", indicating that the strike was certainly within a few hundred feet of where he was sitting.
Upon return of mains power, things started rebooting and, remotely, I did a quick check of all of the receivers and servers and found that everything seemed to be working again with the exception of the LF (0-400 kHz) signal path, which was chock full of noise from what appeared to be 30 kHz and harmonics from a switching supply. A bit of sleuthing indicated that this seemed to be originating outside the building, suggesting that there might be an open/intermittent coax shield somewhere along the signal path.
Just a few minutes after the strike, the snow squall had stopped as quickly as it had started and it was sunny, so he ventured out - his heart still racing from the adrenalin surge - to see what had happened. About 30 feet (10 meters) south of the building he found a small, freshly-excavated divot in the ground surrounded by still-smoldering grass and greasewood, indicating that at least part of the strike had hit there. Stamping out the smoking vegetation, he proceeded with a visual inspection of the power line, the antennas and guy wires.
Since (pretty much)
everything seemed to be working and there was not obviously anything
amiss, he continued the installation of the new UPS - which mainly
involved connecting to the external, on-site battery banks. The
new UPS is now online and appears to be working.
As a final check before he departed the site, I did a remote check of all receivers and noted that the LF branch was again working - although something about it doesn't "seem" to be quite right: The overwhelming 30 kHz switcher energy and harmonics were now gone, but signal levels aren't quite what they should be and may be inconsistent: An inspection of the RF signal path for this antenna will have to wait another day.
I
suspect that the time, effort and money we invested in the spring of
2022 to overhaul and upgrade our lightning protection and grounding
just paid for itself!
23-24 March, 2024: Power outage
At around 2024 MDT on 23 March (0224Z on 24 March) the remote receive site of the Northern Utah WebSDR went offline - this after about 3.75 hours of being on battery. Prior to this, a very violent storm with high winds and lightning - including "thunder snow" - went through the area.
Our
logging shows that the mains voltage dropped from 122 to 108 volts just
after 1625 MDT which likely pushed the local utility's voltage
regulator to its maximum boost voltage (i.e. the distribution voltage was likely a fair bit lower than 108 volts would indicate)
and finally dropped to about 13 volts a bit after 1640 MDT and remained
at that voltage until about 0437 MDT when it finally went to zero.
Fortunately, our local correspondent was in a nearby town and was able to get to the building to verify a power failure. He noted that one of the old Dell LCD monitors was powered up, indicating a lack of signals: On that phase - which is different from the one used to monitor/log the mains voltage - there was about 20 volts. To be safe, everything not going through the UPS was disconnected.
"Driving
the line" back toward town - which was also in the direction of his home
- our correspondent noticed at least two locations where the high
voltage lines had jumped off/broken insulators and were hanging lower
than normal and/or tangled with each other. Visual inspection
also revealed possible damage to other insulators, lightning arrestors,
burn tracks on poles and cross arms, and several instances where the
failure had caused a localized fire of the vegetation around the pole -
which had apparently been extinguished by the rain and snow that followed.
The pole numbers near the problems were noted and reported to the local power company - which, apparently overwhelmed with outages related to this weather event, was unable to start an investigation on this incident until hours later.
As
it turns out, the two parallel runs of transmission lines "flip"
positions on the top of a nearby mountain so the worst of the observed
damage was not on the line that feeds the WebSDR site.
The more immediate problem was much closer to the WebSDR site. Word from the power company revealed that a pole within the power enclosure of a nearby customer fell onto the utility's lines and caused the circuit to fault: As of the morning of 24 March, there was a hive of activity to remedy the situation.
A generator was brought out to the site to restore power some hours later, but it's unknown if it'll be able to run things long enough for the utility to effect repairs. While this generator is capable of powering all computer gear on site, we shed the "scientific" loads (e.g. WSPR monitoring, noise and signal analysis, etc.) to maximize run-time. The biggest single load on the generator in this configuration is the "bulk charger", which can put about 20 amps into the battery bank (the UPS itself is capable of about 10% of that!) so that if the generator goes offline (e.g. runs out of fuel) there will at least be some run time on battery.
Expect occasional outages until commercial power is restored.
For
reasons currently unknown, the UPS's output will occasionally "chatter"
causing the "A/B" transfer switch to do likewise which will sometimes
cause one or more of the on-site servers to reboot.
The mains power was restored in the early hours of Monday, 25 March.
20 March, 2024: More site visiting - brief outage, expect more!
On this day we got word from the power company that there would be an outage due to work upstream on the power grid, so our local correspondent went out there with a generator.
While
out there, he heard a loud "pop" and all servers rebooted. A
quick assay revealed that the on-site UPS had partially failed and was
now reeking of "blowed-up" capacitor and a wisp of smoke was emanating
from it. This event roughly coincided with a brief power failure (a second or two)
so the "B" output of the on-site UPS transfer switch was immediately
connected to the generator. A few minutes later, the power went
out "officially" and stayed out for the duration of the scheduled
outage.
We
are now purchasing a "new" UPS to replace the one that just died - this
is either the 4th or 5th UPS that we've had on site since the WebSDR
went online. This UPS held up pretty well over the past 3 or so years considering the abuse
that it has gotten from surges, brown-outs, lightning and whatnot!
We were able to temporarily install a "spare" UPS to tide us over until the new UPS arrives.
15-17 March, 2024: Site visit
Most of the work for this site visit was "behind the scenes" - addressing some long-standing issues that most people wouldn't notice - and some related to future activities and research.
Readjustment of gain for "2M Lo" on WebSDR #3 (Blue).
The gain of this receiver was adjusted downward a bit as this receiver was being (occasionally)
overloaded when multiple repeaters/users keyed up. This tendency
for overload is "par for the course" when RTL-SDRs are used as they are
marginally suitable for this purpose.
We have acquired some additional SDRPlay units and are looking into their use for 2 meter reception using a "new" (untested) configuration: Obviously, we need to test this potential configuration before trying it!
More work on common-mode isolation.
One of the banes of having many pieces of interconnected RF equipment is that having more than one wire connected to a receiver means that there's the potential for ground loops and introduced noise - and having "noisy" devices like computers involved makes the problem even worse. Through the use of common-mode chokes and bonding we further reduced these issues, particularly on lower frequencies (e.g. the 630 and 2200 meter amateur bands).
Brought eclipse research test machine online.
This
computer - one of the "shelf spares" for the WebSDR - was outfitted
with an RX-888 (Mk2) receiver modified for external clocking - this
being provided by a Leo Bodnar GPS reference. Being GPS-referenced - accurate to a tiny fraction of 1 Hz - should permit analysis of Doppler shift.
The purpose of this machine is to record - in its entirety - all of the HF spectrum - for periods before, during and after the eclipse to analyze the many signals (amateur radio as well as AM broadcast and time stations)
that are present to ascertain the effects caused by the changing
ionosphere around the time of the event. As it turns out, we were
able to achieve only about a 60 MHz A/D clock (and set it to 58 MHz to be cautious) but this should adequately acquire the entirety of the HF spectrum - with a bit of aliasing on 10 meters which (hopefully) won't be much of a problem.
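For the curious, the aliasing mentioned above is easy to predict: with real sampling at 58 MHz, anything above the 29 MHz Nyquist frequency folds back into the first Nyquist zone. A quick sketch (the 58 MHz clock is from the text; the rest is illustration):

```python
# First-Nyquist-zone aliasing: a real-sampled signal above fs/2
# appears at fs - f.
FS = 58e6  # A/D sample rate, Hz (assumed clock setting)

def apparent_freq(f_hz: float) -> float:
    """Frequency (Hz) at which a real-sampled signal appears."""
    f = f_hz % FS
    return f if f <= FS / 2 else FS - f

# The top of 10 meters (29.7 MHz) lands on 28.3 MHz - inside the band:
print(apparent_freq(29.7e6) / 1e6)  # 28.3
```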
If you have been reading this for a while you'll note a reference to similar work for the October, 2023 eclipse, where we tried a similar recording - but that was of only marginal use as we "discovered" issues related to buffering/queuing of large amounts of data. This machine is running a minimal operating system with modifications to maximize its ability to do these recordings while losing as few samples as possible: We believe that we have succeeded in this.
The
machine that we used in October - also connected to a GPS-referenced
RX-888 - is now used for high-resolution WSPRNet and WSPRDaemon
recording and will also be participating in gathering data for the
April, 2024 eclipse.
WSPR Frequency/time reference brought back online
For the October, 2023 eclipse we installed a box - also GPS-referenced - that injects low-level RF signals into the receive signal path for the two RX-888 based receivers (but nowhere else, so users of the WebSDR won't see these).
This
box is used to provide absolute frequency, time and amplitude
referencing to the recordings to allow correlation during analysis
after the event.
Previously, it was discovered that the GPSDO (a 10 MHz GPS-referenced oscillator - more stable than a Bodnar)
used to provide a reference to this box was creating a lot of
switch-mode power supply QRM across the amateur bands. With the
use of a linear power supply to power the unit as well as the addition
of strong L/C filtering within the GPSDO, this issue was completely
quashed.
Operating systems on KiwiSDRs upgraded
The operating systems of all six KiwiSDRs on site were updated to a recent Debian version. This was necessitated - with some urgency - by a known flaw in the older (specific) version of Debian that finally reared its head over the past few months, causing system instability on all but one unit (KiwiSDR #3 or #5), which happened to be running an even older version that didn't have the same issue - but was updated, anyway.
Anemometer on the weather station "fixed"
The
anemometer on the weather station on site was reading very low wind
speed and this was traced down to the fact that the internal magnet -
used to trigger the magnetic switch to count rotations of the cups -
was covered with iron dust, some of which had infiltrated the bearings.
The
dust and bearings were removed, flushed with solvent and then
re-lubricated to restore proper operation. A new anemometer
module (which is cheaper than the entire outdoor weather unit) was ordered to have as an on-site spare.
This
iron dust - which will accumulate if you simply leave a strong magnet
outside almost anywhere on Earth - is likely a combination of magnetite
from the alluvial deposits in the surrounding area as well as meteorite
dust. Since the site is surrounded by alluvium, it's not
surprising that this occurred.
Use for "new" receive hardware
As
mentioned earlier, we have newly-donated SDRPlay RSP receivers on hand
and are trying to determine their best use. We'll likely use them
to improve reception/performance on some of the "bands" that currently
use older hardware (e.g. "softrock" receivers)
but are trying to figure out the optimal use. We had planned to
install one or more of these receivers on this maintenance trip, but we
spent more time fighting with the eclipse test machine than we expected!
5 February, 2024: Internet outage
On
this day the remote Northern Utah WebSDR site lost Internet
connectivity for approximately 90 minutes, beginning a bit before 1625
MST.
This
outage was caused by two separate events: Along the fiber
corridor in a nearby-ish city, a vehicle plowed into a utility pole,
knocking out power in the area and laying the underhung fiber on the
ground.
Not
too long after this, a truck plowed into the back end of the law
enforcement officer's vehicle investigating this accident, causing the
loss of another pole and knocking out the power in the fiber corridor
and surrounding community, interrupting two separate fiber providers
and knocking out the power at our ISP's fiber landing.
While our ISP was able to switch to generator power, the damage caused loss of connectivity from both fiber providers: After a while, a temporary VPN was set up to route a significant portion of the ISP's capacity via yet another route, more or less restoring service.
Full repairs and normalization of connectivity on the part of the providers and the power company are expected to take on the order of 24 hours or so.
24 December, 2023: DNS issues
On the evening (U.S. time) of this day the WebSDR may have been inaccessible for some time. This was not caused by a network outage on the part of the WebSDR, but rather by the authoritative DNS (the Domain Name Server - the server that converts the name of the web site to the numerical IP address) being offline.
21 December, 2023: Network connection dropped
It
would seem that the first wireless hop (located atop a nearby mountain) between the remote WebSDR site
and the Internet went offline at about 0741 this morning. Connectivity was restored approximately two hours later.
The
cause was determined to be the tripping of a current overload
protection device on the DC bus at the site of the first wireless hop
from the WebSDR site toward the Internet: Adjustments have been
made to the gear to improve the operating margin while simultaneously
maintaining an appropriate degree of safety/protection.
15 December, 2023: WebSDR outage
On this day several things occurred:
There was a network outage caused by a link radio dropping off unexpectedly. A power-cycle was required to restore it.
While we were at it, we did some software upgrades to the servers - something that had been on our "to do" list for weeks, but that we never got around to: We typically try to do such things very late at night to inconvenience as few people as possible, but since we were there...
Work
was being done at the nearby pipeline and that work was causing severe
disturbance of the power line voltage - ranging from no voltage at all
to a peak of over 190 volts AC on a 120 volt circuit, causing the
instant failure of several of the still-remaining incandescent lights -
so we turned off the main power to the building (running on battery) for most of an hour until they stopped messing with it. The guy from the power company was not pleased with what he witnessed the other folks doing!
8 December, 2023: Grid power restored - for now
On
this day - at approximately 10 AM - the power company reconnected to
the grid, getting us off the portable generator that they had brought
in a few days ago. This "transfer of power" was - unlike some in
recent times - completely without incident in that nothing blew up,
voltages were as expected, etc.
The
unofficial word received from the power company is that due to incoming
winter weather, they have suspended work on the line, having done as
much as they could - but not as much as planned: It is our
understanding that in the future, additional work will be done on
repairing/upgrading the power lines - likely closer to the spring when
the weather is more clement.
The
good news is that the lines feeding us and the commercial customer have
been very thoroughly inspected and the worst of the problems have been
addressed. The bad news is that there is still a lot that needs
to be done to bring it up to par, but at least they know now what most
of these issues are and likely have some plan of amelioration.
4 December, 2023: Grid (power) failure - Part 2
At
about 10 PM the grid power was "restored" - sort of. Nearby,
there is a pumping station for a pipeline and power company brought in
a large generator for that customer - but since our power feed is
simply a "tap" off the large, industrial feed, we are being fed by this
generator as well.
For the time being, we are able to resume "normal" operation while the repairs are underway.
The word from the power company is that it will be a few days before the "real" grid power is restored.
4 December, 2023: Grid (power) failure - Part 1
At 0951 MST the power failed at the remote WebSDR site.
This failure was caused by mechanical failure of multiple power
poles - in other words, they fell over! This failure occurred
about a mile (1.6km) away from the WebSDR site.
This
failure would appear to be a cascade failure with one power pole
failing, and then breaking adjacent poles by over tensioning the
catenary of the high-tension power line. These poles are quite
tall, carrying a high-current, high voltage industrial line (either 27kV or 38 kV). These poles traverse a wetland and it is likely that they are about 50 years old.
It
is suspected that the first power pole failed several days ago (Saturday, 2 December), but being
rural it apparently went unnoticed until more poles had broken and one of the phases finally faulted -
possibly by getting too close to the ground.
With
the power failure this morning, a fairly large work party from the
power utility appeared. While we have had little communications
from the power company (we are not the major customer, and we are staying out of their way) what we do know is that much more hardware is being replaced than just the original pole failures and this is likely to take several days.
For this reason we expect that grid power at the WebSDR site will be erratic - if present at all - until this work is completed.
Users of the Northern Utah WebSDR should expect the possibility of occasional loss of service over the next few days.
While
the Northern Utah WebSDR has battery back-up, it can only last about
four hours. We have a generator on site, but its limited fuel
capacity only allows it to run for between 6 and 10 hours, depending on
load: Since - once we are on generator - we must run the site and recharge the UPS battery (we have a high-current "bulk" charger to speed this process) predicting the run-time is difficult.
Because the receive site IS remote, keeping the generator fueled is awkward - and doing so around the clock, even more so!
While we will do what we can to keep the WebSDR online while the power utility does its repairs, please understand that this is a fairly remote site, some distance away from where those who maintain it live, and it may not be possible to keep it running at all times until the mains power is restored to reliability.
During
this time, we will shut down some of the services - namely the CW
Skimmer and some or all of the WSPRNET reporting - to reduce power load
and increase battery run-time.
20 November, 2023: Localized network outage.
On this morning, at around 0530 MST, an outage occurred at the first "hop" away from the WebSDR site toward the Internet - one of three wireless hops required to get from the rural location of our receiver site to the nearest fiber landing. Due to the "awkwardness" of reaching this site - and the time at which the problem occurred - it took a couple of hours to reach the site and effect a temporary work-around to get it back online. Inconveniently, this event roughly coincided with inclement weather and the first valley-level snow fall of the season.
4 November, 2023: Wide-area network outage.
On
this morning, starting a bit before 0740MDT, much of the Internet
feeding (all of) Northern
Utah collapsed in a sort of cascade - but this seems to have started
quite some time ago. About six months ago, the local power
utility scheduled an outage for maintenance and upgrades along the main
highway corridor: This outage would specifically affect
infrastructure located very near the highway - and it happens that at
least here in Utah, much of the telecom infrastructure is also
located along these transportation arteries: This makes sense as
obtaining right-of-way for such things is simplified as this would be
granted in large part by one government entity.
While one of the major providers (e.g. "Utopia") seemed to be aware of this scheduled outage, the word didn't seem to get passed along very well to related customers.
This
morning, the power company switched off the power as scheduled at
around 0400 and turned back on around 1100 - but it would seem that the
provider did not quite put it together to realize that their designed
battery run-time was only about 2 hours, so by the time the Utah SDR
lost connectivity at about 0740, four or five sites along the fiber
line had depleted their back-up power reserves. It is currently
estimated that at least 15,000 individual customers lost connectivity
along with data connectivity through at least one major cell phone
carrier which also meant that businesses who had "backup" connectivity
actually did not. Utopia reports that 200-300 "customers" were
affected, but this count appears to include ISPs, businesses and,
possibly, even a cell carrier - many of these having their own customers, so this number likely does not reflect the number of persons that were "inconvenienced".
At
this point one might ask, "Don't you have back-up connectivity?"
For the WebSDR - which is user-supported through donations - the quick answer is no: We could not reasonably justify spending significant amounts of donors' money on
something that is free to use for an event that would seem to be
unlikely to happen. If you think that we should
consider such, remember that while the WebSDR isn't quite in the
middle of nowhere, we can see it from the tops of our towers:
It's not like it would be easy, cheap or convenient to add a
totally-isolated back-up. Incidentally, if we had installed a backup to the nearest populated area, it, too, would have been taken down by this outage!
On
schedule, the power company restored service at 1100 MDT and over the
next 5-15 minutes, the networking equipment along the line came back up
and service was gradually restored, apparently back up by about 1115.
So now you know!
28 October, 2023: Experimental WSPR receiver system undergoing testing.
On this day an experimental receive system was brought online using a GPS-referenced RX-888 (Mk2) wideband SDR. This receiver, connected to an eight-core Intel i7 computer running Linux, is operating at a sample rate of 64.8 Msps, meaning that it can inhale the entirety of the VLF, LF, MF and HF spectrum simultaneously. This same receiver/hardware/software combination can also support a sample rate of 130 Msps, meaning that it is theoretically capable of receiving signals on the 8 and 6 meter bands as well.
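A quick sanity check of those sample rates against the bands mentioned (the band edges are our assumptions for illustration):

```python
# Real sampling covers frequencies up to half the sample rate.
def nyquist_mhz(sample_rate_msps: float) -> float:
    """Highest unaliased frequency (MHz) for a real-sampled A/D."""
    return sample_rate_msps / 2

assert nyquist_mhz(64.8) > 30  # 64.8 Msps covers all of HF (to 30 MHz)
assert nyquist_mhz(130) > 54   # 130 Msps reaches the top of 6 meters
print(nyquist_mhz(130))        # 65.0
```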
Running on this computer is software called "KA9Q Radio" which is capable of producing hundreds
of simultaneous, virtual receivers of arbitrary bandwidth and mode.
In this case, the main interests are receivers looking at not
only all of the WSPR subbands from 2200 through 10 meters, but also
time and frequency standard signals such as WWV, WWVH, WWVB and CHU.
Also running on this computer is the WSPRDaemon software, which uses WSJT-X binaries to decode WSPR and the various modes of FST4W and report this not only to wsprnet.org, but also to the more expansive WSPRDaemon database, which collects more data - including measurements of background noise levels - than a standard WSPR reception. Additionally, the monitoring of the time and frequency standard signals - and possibly select broadcast transmitters - allows the possibility of monitoring these signals as well, both in terms of signal strength and frequency.
These signals are being reported under the callsign of "KA7OEI/Q" (for "KA9Q Radio"). This receiver is currently using only the TCI-530 omnidirectional antenna (and an active E-field whip for 2200 meters).
This receiver is operating separately from the pre-existing
"KA7OEI-1" receiver system which aggregates data from not only the
omnidirectional antennas, but also the two directional antennas - the
East-pointing and Northwest-pointing log-periodic arrays.
Note: Because KA7OEI-1 uses all three antennas - and reports only the strongest of the signals - one cannot directly compare the results reported to the wsprnet.org web site: The WSPRDaemon database must be used to directly compare the reports from just the KiwiSDRs using the same antenna as the RX-888.
The
main purpose of this is to test the performance of the RX-888 (Mk2) to
verify that it is a viable alternative to other "full-spectrum"
receivers like the KiwiSDR and the Red Pitaya - and to aid in the
development of the software used to support the RX-888 and similar
receiver hardware.
This
same hardware will also be used to test receiver configurations that
may be used for WebSDRs in the future - a natural fit for KA9Q-Radio.
(Watch this space!)
25 October, 2023: Salt Lake "Metro" WebSDR - 2 meter Earth<>Space hardware upgraded:
On 24 October, 2023 - with access to a lift - we were able to (finally) install an M2 2 meter "Egg Beater" antenna at the site of the "Salt Lake Metro" WebSDR.
Additionally, we replaced the RTL-SDR Dongle with an SDRPlay RSP1a which offers vastly superior performance in many ways: Not only does it have a 14 bit A/D converter (as opposed to the 8 bits of the RTL-SDR dongle) but it has far better sensitivity and (especially) very much better filtering.
A
200 kHz wide band-pass filter was configured for the RSP1a which
perfectly fits the 145.800-146.000 2-meter Earth<>Space segment.
This also does an excellent job of removing energy from 2 meter
repeater inputs that could desense the receiver.
Note
that because the only two coverage bandwidths available on the WebSDR
that might be suitable for covering this 200 kHz sub-band are 192 and
384 kHz, we chose 384 kHz as it includes all
of the desired spectrum, but because the coverage is a bit wider than
the filtering, the edges of the waterfall above 146.00 and below 145.80
MHz are darkened.
The
S-meter on this receiver has been calibrated to be within +/- 1dB at
the antenna terminal. CW signals down to about -127 dBm are
copiable in narrow bandwidths.
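That -127 dBm figure is consistent with the usual thermal-noise arithmetic. A hedged sketch (the 500 Hz bandwidth and ~20 dB system noise figure are assumptions chosen for illustration, not measured values):

```python
import math

# Receiver noise floor (dBm): thermal noise density (-174 dBm/Hz)
# plus bandwidth and noise figure.
def noise_floor_dbm(bandwidth_hz: float, nf_db: float) -> float:
    return -174 + 10 * math.log10(bandwidth_hz) + nf_db

# An assumed 500 Hz CW bandwidth and 20 dB system noise figure put the
# floor right around the observed -127 dBm copiable level:
print(round(noise_floor_dbm(500, 20), 1))  # -127.0
```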
Remember: If you wish to listen for a satellite passing overhead via the WebSDR, be sure to calculate the pass times for Salt Lake City (where the receiver is located) rather than your location!
60 meter receiver upgraded on WebSDR #1 (Yellow)
Up to now, the only coverage of 60 meters on the Northern Utah WebSDR was via the "60-49M" receiver using an inexpensive RTL-SDR. This device, having 8 bits, is what we consider to be a "low performance" receiver, as that hardware isn't really up to handling the wide variation of HF signals between weak and strong with aplomb - meaning that, particularly when the band was quiet during the day time, images/distortion - particularly from WWV - would appear across it due to the low signal level and too-few A/D converter bits being "tickled" by the signal.
On this date the 60 meter receiver was replaced with an SDRPlay RSP1a. The coverage of this receiver was reduced from about 2 MHz to 700-ish kHz so that it covers from about 4850-5500 kHz - but it does so with better performance, sensitivity and resistance to distortion than the previous hardware. This coverage was selected to include WWV/H and most of the 60 meter Shortwave broadcast band because, why not?
The original "60-49M" receiver hardware was moved to WebSDR #3 as described below.
"60-49M" receiver moved to WebSDR #3 (Blue)
With the upgrade of the 60 meter reception on WebSDR #1 (Yellow)
the original receive hardware was freed up and since we'd had only
seven of the eight available slots filled on WebSDR #3, we placed it
there.
The "60-49M" receiver - now on WebSDR #3 - is exactly
the same as when it was on WebSDR #1, so you can still listen to your
favorite 60 and 49 meter shortwave stations there, if you like.
2200 and 630 meter receivers back online
The 2200 and 630 meter receivers are, again, working properly.
The audio cable for the 2200 meter receiver wasn't making good connection: One of the two channels (I or Q) was missing, causing the levels to be 6 dB low with images.
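The 6 dB level drop and the images follow directly from the math of quadrature (I/Q) sampling: with one channel gone, a complex tone collapses into a real cosine whose energy splits equally between the wanted frequency and its mirror. A minimal sketch (the sample rate and tone frequency are illustrative, not the actual receiver's):

```python
# Sketch: why losing one of the I/Q channels drops levels 6 dB and
# creates mirror images. Rates/frequencies are illustrative only.
import numpy as np

fs, n = 48000, 4800
t = np.arange(n) / fs
iq = np.exp(2j * np.pi * 1000 * t)   # complex tone at +1 kHz, full I/Q

broken = iq.copy()
broken.imag = 0                      # Q channel missing: only I remains

spec = np.abs(np.fft.fft(broken)) / n
freqs = np.fft.fftfreq(n, 1 / fs)
pos = spec[np.argmin(np.abs(freqs - 1000))]   # wanted +1 kHz bin
neg = spec[np.argmin(np.abs(freqs + 1000))]   # -1 kHz mirror image
print(f"wanted: {20*np.log10(pos):.1f} dB, image: {20*np.log10(neg):.1f} dB")
# Both bins sit at -6 dB relative to the original 0 dB tone: the desired
# signal is 6 dB low and an equal-strength mirror image appears.
```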
On
the sound card for the 630 meter receiver, the external power cable's
connector had been slightly bent during install and wasn't making good
connection, causing the analog input section of the card to be
un-powered, resulting in an odd combination of high noise level with no
sensible signals. The pins were bent back to their proper shape
and the connector re-seated, restoring operation.
Eclipse monitoring gear installed.
As
noted in the 19 September, 2023 entry, additional equipment was
installed on site to aid in the monitoring of transmissions made during
the upcoming eclipse(s):
A
custom-made reference unit was installed in the signal path to the
RX-888 and the KiwiSDRs on the TCI-530 Omni antenna. At 250 and
750 Hz above the center of each of the WSPR bands (on 80, 40, 30, 20, 17, 15, 12 and 10 meters) there are two carriers:
The
"R1" carrier is placed precisely 250 Hz above the center of the WSPR
band and is BPSK-modulated, with the phase flipping 180 degrees
approximately 82.2 microseconds after the beginning of each GPS
second. This signal may be used to orient the recordings to the
beginning of the GPS second, and it also allows similarly-modulated
signals from remote transmitters to be measured to determine the
approximate time-of-flight between them and the receive site.
The "R2" carrier is placed 500 Hz above this (750 Hz above the center of the WSPR band)
and it is unmodulated. This signal is primarily a frequency
reference to allow direct comparison of similar signals originating
from the field to allow direct, continuous measurement of Doppler
Shift. Like the "R1" signal, this signal can also be used as an
absolute amplitude reference to determine the strength of the incoming
signals.
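As a sketch of how an unmodulated, GPS-locked reference like "R2" enables direct Doppler measurement - note that the sample rate, audio frequencies and simple FFT-peak method below are our illustrative assumptions, not the actual analysis pipeline:

```python
# Sketch: measuring apparent Doppler shift of a received signal against a
# GPS-locked reference carrier ("R2"-style). All parameters illustrative.
import numpy as np

def tone_freq(x: np.ndarray, fs: float) -> float:
    """Locate the strongest spectral line (coarse, FFT-bin resolution)."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    return np.argmax(spec) * fs / len(x)

fs, n = 8000, 80000                    # 10 s of audio -> 0.1 Hz resolution
t = np.arange(n) / fs
reference = np.cos(2 * np.pi * 750.0 * t)   # GPS-locked reference at 750 Hz
received  = np.cos(2 * np.pi * 750.8 * t)   # field signal, Doppler-shifted

# Because the reference is known to be exact, any offset of the received
# signal relative to it is (apparent) Doppler plus transmitter error:
doppler = tone_freq(received, fs) - tone_freq(reference, fs)
print(f"apparent Doppler shift: {doppler:+.1f} Hz")
```

A longer capture gives finer frequency resolution, which is why milliHertz-level work needs minutes of GPS-disciplined recording rather than seconds.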
On
a separate, non-public server, a GPS-locked RX-888 (Mk2) receiver has
been installed to allow recording and monitoring of the signals before,
during and after the eclipse event on all HF bands. This can
include both the WSPR/FST4W signals and others such as WWV/H, CHU and
AM broadcast stations.
If you have any questions about
the above gear, please feel free to make contact using the information
found on the bottom of the "About this WebSDR and Contact Info" page.
19 September, 2023: Occasional outages scheduled for the weekend of September 23/24 Mountain time.
During the upcoming weekend we'll be performing maintenance and upgrades that may affect all servers. Specifically, we are planning to do at least the following:
Repair/reconfigure the 2200 and 630 meter receivers. These two receivers have been (effectively)
offline since late July: We believe that lightning damaged the
630 meter receiver and there is a signal path issue with the 2200 meter
receiver: It appears that either the I or Q channel is missing,
causing the spectrum to be "mirrored". At the very least, we will
repair the existing receivers, but we are considering the installation
of another RSP1a on WebSDR #1 that will cover both the 2200 and 630
meter bands, simultaneously.
Reconfigure the 60 meter receiver.
We are also considering moving the main 60 meter receiver on
WebSDR #1 from an RTL-SDR to an RSP1a. This reconfigured receiver
will cover much less spectrum - either 384 or 768 kHz - but it will
offer superior performance to the existing receiver. It is likely
that we will then move the existing 60 meter hardware to WebSDR #3 to
retain full 60-49 meter SWBC coverage.
Work on the CW Skimmer.
Up to now, the CW skimmer has been configured only remotely:
Its operation and performance have not yet been properly
characterized, and due to a software limitation (e.g. getting audio channels of greater than 48 kHz sample rate from the Linux source into Windows) only
relatively small parts of the CW portions of several bands are
currently being covered. We are hoping to increase the coverage
on some of these bands from 48 kHz to 96 or even 192 kHz - plus add a
few more bands, such as 2200, 630, 160 and maybe even 60 meters.
The main
reason for the site visit will be to install another receiver system on
site - the main purpose of which is to facilitate the monitoring of
signals during the upcoming solar eclipses.
This receiver will be a modified RX-888 (Mk2) that is capable of monitoring the entire
HF spectrum at once and, specifically, it will be used to precisely
measure amplitude, frequency and possibly absolute time-of-flight of
signals originating from transmitters along the eclipse path and
elsewhere.
This
receiver/system will be GPS-referenced so that the data captured can be
analyzed to divine the aforementioned signal properties with scientific
precision.
This
system - since it has no real user interface - will not be publicly
accessible in the same manner as a WebSDR, but its results will be made
available for ongoing research in propagation - both eclipse-related
and for propagation in general.
This
new hardware will also be evaluated: Specifically, seeing if
the RX-888 (Mk2) and similar devices may be used as alternate receive
hardware for the propagation research that has been ongoing at the
Northern Utah WebSDR for several years now and also for future upgrades
of the WebSDR system itself.
10 September, 2023: CW Skimmer online.
As
previously noted, some of the recent configuration changes and upgrades
would allow the inclusion of a CW skimmer system. This system
"listens" to chunks of spectrum and automatically decodes all Morse
transmissions that it can hear, reporting the callsigns of those
calling CQ - and the transmissions of beacons - to the Reverse Beacon Network
(RBN) - LINK.
Installation
of the software was done around August 11 and testing was begun - first
on just a few bands, but then rolling out reception on other HF bands a
few days later. At present the following bands are covered:
80 meters - Center: 3530 kHz +/- 22 kHz
40 meters - Center: 7030 kHz +/- 22 kHz
30 meters - Center: 10120 kHz +/- 22 kHz
20 meters - Center: 14030 kHz +/- 22 kHz
17 meters - Center: 18088 kHz +/- 22 kHz
15 meters - Center: 21030 kHz +/- 22 kHz
12 meters - Center: 24910 kHz +/- 22 kHz
10 meters - Center: 28030 kHz +/- 22 kHz
The antenna being used for RBN coverage is the TCI-530 Omnidirectional antenna: The other antennas (e.g. the 2 beams) are not being used for RBN reception at this time.
At
present, only about 44 kHz of each band is covered, the reason being
that we haven't figured out how to get more than a 48 kHz "pipe" from
the Linux WebSDR servers into the Windows sound system (RBN skimming runs on Windows!) - so if you know of a means to get more than 48 kHz of TCP-based audio into a Windows machine, let us know!
The callsign being used for the reporting is "KA7OEI". The Northern Utah WebSDR doesn't currently have its own (club) call - and it seems as though the Reverse Beacon Network doesn't allow non-callsign identifiers for its receive stations.
During
this initial period, several issues were discovered - and resolved - one
relating to aliased/image signals finding their way into the
decoder's signal stream. For example, a strong signal 3 kHz above the +/- 24 kHz passband would appear, much more weakly, 3 kHz above the bottom of the passband - but reconfiguring the signal translation (specifically, tightening the transition bandwidth of the decimation) seems to have reduced these already-weak responses even more - hopefully to the point of nondetection.
At some point we'll likely add more bands (2200, 630, 60, etc.)
and - if we can figure out how to get larger than a 48 kHz pipe into
the Windows machine, more bandwidth on the currently-covered amateur
bands. We are looking into the use of additional receivers on the other antennas on site as well.
13 August, 2023: Server reconfiguration complete, comments:
The reconfiguration described in the 12 August 2023 entry was completed without incident.
It was observed that occasionally, some of the softrock receiver loopbacks will stop - but not all of them. This is being investigated.
630 meter receiver offline.
It
was noticed that the 630 meter receiver appears to be non-functional,
producing a rather high noise floor. It is suspected that the
receiver preamplifier was zapped in a recent storm; it will be investigated
next time someone visits the site - and even more lightning protection will be added as appropriate.
Meanwhile, while it is working, it appears that the 2200 meter receiver is not working properly:
Its sensitivity is down a bit and there are signal images
indicating that one of the two audio channels (I or Q) is likely missing.
Since neither 2200 nor 630 meters is a "summer" band (more lightning static than on 160 meters!)
their operational status is not critical at this time of year, but it
will be addressed during the next site visit. Because the
KiwiSDRs are working fine on both 2200 and 630 meters, they may be
used, instead - and this also indicates that their respective receive
antennas are working properly.
12 August, 2023: Server reconfiguration
In
order to make everything consistent WebSDR servers 1, 2 and 4 will need
to be interrupted for a while to reconfigure the signal paths within as
follows:
WebSDR #1:
Make the band numbers line up with the signal loopback numbering
and send the signals for the bands currently using Softrock-type
radios (2200 and 630 meters) through the loopback system.
WebSDR #2:
Make the band numbers line up with the signal loopback numbering
and send the signals for the bands currently using Softrock-type radios
(30, 17 and 12 meters) through the loopback system.
WebSDR #4: Send the signals for the bands currently using softrock-type receivers (30 and 17 meters) through the loopback system.
These
processes were done and tested on WebSDR #5 very early this morning -
but not without problems: The loopback would quit after
a while (the "aplay" utility would time out) but this is (hopefully!) resolved.
For
some reason we didn't realign the band numbers with the loopback
configuration on WebSDRs #1 and #2 when we replaced the servers (it was late - we were tired!) but we are doing it now.
By
aligning the receiver numbers with the band numbers we are making
things less confusing for ourselves in the future - and putting the
remaining softrock-type receivers through loopbacks allows us to send
their signals to other processes - and other computers - for things
like CW skimmers and other signal analysis.
These interruptions will occur between 2300 and 0000 MDT this evening (0500 and 0600 UTC on 13 August) and should take much less than the full hour to accomplish.
11 August, 2023: CW Skimmer and PSK reporter testing
Making good on previous threats, we are pleased to announce the testing phase of two pieces of software:
CW Skimmer
A
"CW Skimmer" monitors a chunk of spectrum and decodes - as best it can
- the CW transmissions occurring within it. Callsigns, CQ calls,
beacons and other information is then posted to the Reverse Beacon Network(link) where one can view some of this information - namely who is calling CQ and who is hearing it.
The
present configuration is very kludgy in that the conveyance of the
audio from the Linux WebSDR servers to the Windows machine running the
CW skimmer (it runs only on Windows, so we have to deal with it!)
is very manually configured. Additionally, the best bandwidth we
can attain is that which will cover just under 48 kHz of spectrum:
Once we determine the best (less "hacky") way forward, the start-up process will be automatic (or only a few mouse clicks) and we hope to cover 96 (or even 192 kHz) of bands as appropriate.
As of the time of this writing the CW skimmer is operating on 80, 40 and 20 meters (via the omni antenna)
- all three receivers centered 30 kHz above the bottom of their respective
bands. The reporting callsign is currently "KA7OEI" - although
this is subject to change.
PSK Reporter - FT-8 spots
We are also occasionally testing the monitoring of the 40 and 20 meter FT-8 frequencies (via the omni antenna), with the reception reports being forwarded to the "PSK Reporter" web page (link). The reporting callsign is "NUTSDR" if and when it is active.
This is a test!
As noted previously, this is all in a testing
phase and you should expect the above to occasionally go offline or -
hopefully - become more stable with more bands added. Currently
we have the ability to monitor the following bands:
2200 Meters, 630 meters, 160 meters, 80 meters and 40 meters (omni and east-pointing beam).
On the omni, east-pointing beam and northwest-pointing beam: 30 meters, 20 meters, 17 meters, 15 meters, 12 meters and 10 meters.
Owing
to hardware limitations, we cannot currently monitor 60, 6 or 2 meters
- but this could be addressed in the future if there is call to do so.
There are limitations related to how many "receivers" can operate simultaneously, so we likely cannot monitor all of the aforementioned band/antenna combinations: We'll have to pick and choose - when we get to that point.
There
also exists the possibility of adding monitoring of RTTY, PSK31 and
FT-4 transmissions - plus whatever might appear in the future.
This would not be possible without your donations and support!
The
addition of the above was made possible after the upgrade of the WebSDR
server hardware - and the purchase and implementation of a stand-alone
Windows machine to run the CW Skimmer software.
If you have questions about this, feel free to contact using the information on the "About" page link at the bottom of this page.
28-30 July, 2023: Server replacement - and a few receiver upgrades:
We would like to thank all those who donate to help support the Northern Utah WebSDR! Without your support we would not be able to make upgrades and improvements over time.
2 meter reception improved.
As we have a building full of computers, it's no surprise that
there is some computer RFI. While this is undetectable on HF,
there have been issues with weak birdies across 2 meters. This
issue was largely eliminated when we moved the 2 meter antenna quite
some distance away onto the tower, but there's a bit of inevitable
ingress into the cabling and hardware. To mitigate this further, the
band-pass filter/amplifier module was relocated from inside the
building to the enclosure at the base of the tower: Amplifying the signals
closer to the antenna - before the interference can get in - allows the
desired signals to almost completely override it.
Server replacement. During this long weekend we did a marathon session to replace ALL
of the WebSDR server computers.
The previous hardware - which was
current way back in 2008 - consisted mostly of quad-core (Intel Q9650) 2.9 GHz machines
that were originally retired media servers. The computer for WebSDR
#1, in particular, is exactly
the same one that was first made public during initial testing of the
WebSDR in January, 2018 - and it has been on almost 24/7 ever since
without a single failure - but it was time to replace them with
something newer that has much more processing capability but does not
consume as much electrical power.
It
is also expected that with the greater processing power, the ongoing
tendency for some receivers to occasionally go into a "stutter" mode
will be dramatically reduced.
The
new servers have Intel i7-8700 processors with six hardware cores along
with much faster RAM and I/O. This hardware will very easily
handle the current needs of the receiver hardware and processing and
will take us into the future with the ability to do more, including
supporting additional services like the CW Skimmer, additional signal and propagation
analysis and much more.
While
our original intent was to simply "move" the operating system images
over to the new hardware, onto brand new storage devices, the
virtualized file system of newer Linux (Ubuntu, specifically) made this into quite a challenge. In particular:
As
expected, UEFI and Grub prevent one from simply moving a storage device
from one piece of hardware to another - and they also prevent one from
simply making an image and doing the same. We spent several hours
trying several methods of doing the latter before figuring out the
procedure. For WebSDRs #1 and #2, this was a bit simpler as the
original SSDs on which the system was installed were smaller than the
new NVMe drives.
For
WebSDRs 3, 4 and 5, which were running on 1 TB spinning drives from the
original hardware, the partitions were still smaller than the new NVMe
drives (the WebSDRs don't need much in terms of storage space) - but no
real procedure or software tool exists for migrating them, owing to the
complications of the newer file systems.
For WebSDRs 4 and 5 we ended up using an image of WebSDR #2 and
copying the files relevant to the WebSDR from the old hardware
onto the new, which was much faster than a complete reinstall
of everything. For WebSDR #3 we were able to find a procedure, as it was still running a slightly older - but still supported - version of Ubuntu and was not using the same file system as the others.
Because
the focus of this work was primarily on computer/server replacement,
we did not have time to do much else: Relatively little was done
in terms of upgrading receive capabilities, adding new frequency
coverage, etc., which means that, for the most part, the WebSDR looks exactly as it did before the upgrade.
The only real change that was made was that all WARC band receivers (e.g. 30, 17 and 12) now have 192 kHz coverage.
Here's a description of what was done:
WebSDR #1:
The computer was replaced and the sound card for the 630 meter
receiver was swapped for a newer unit to accommodate the bus
capabilities of the new computer.
WebSDR #2: The computer was replaced: Not a lot else needed to be done.
WebSDR #3: The computer was replaced. The existing complement of receivers was retained.
WebSDR #4:
The 12 meter FiFiSDR receiver was moved to 30 meters, coverage
set at 192 kHz of bandwidth and an RSP1a was implemented for 12 meters,
improving coverage and overall performance.
WebSDR #5:
Just like WebSDR #4, the 12 meter FiFiSDR receiver was moved to 30 meters, coverage set at
192 kHz of bandwidth and an RSP1a was implemented for 12 meters,
improving coverage and overall performance.
Once
we have made sure that everything continues to work as expected - and
as we get time to do so - we will start to add additional capabilities
and consider further receiver upgrades.
25 July, 2023: Comments:
"Mobile" versions of the WebSDR interface updated.
If
you have brought up the WebSDR on your phone you might have noticed
that it suggests the use of the "Mobile" version of the web page.
This lighter-weight and "simpler" version of the WebSDR interface
hails from about a decade ago when "smart phones" were less smart, less
powerful, and had smaller/lower-resolution screens - which is also why
it has few bells and whistles (e.g. no noise reduction, fewer modes, etc.).
These
days - with "better" phones - there is less call for the use of these
pages, but some people seem to prefer them on their mobile devices.
Since they are less-used, they also don't get the "love" of the main pages.
On this day, we finally got around to making a few tweaks to the Mobile WebSDR pages:
"Mozilla Audio Start" button added. This is in addition to similar buttons that the user MUST press when using Apple or Chrome.
More sensible frequency and mode on startup.
Before, the start-up mode was always "AM" on the "first" band -
but now it generally reflects a band and mode that one is likely to use.
Strongly worded email received.
Following
the weather-related outage yesterday an email was sent to the ISP - but not
to the public address of the Northern Utah WebSDR - expressing apparent
outrage at the outage that occurred.
The
remoteness of this location -
and the fact that it is supported by donations and run by volunteers -
means that it's a challenge to get connectivity at all, let alone
redundancy: We believe that the majority of our user base is
intelligent and fully understands that occasional outages -
particularly those caused by Mother Nature - are inevitable, but there
are apparently some that fall outside this group.
This email contained a lot of words - but little actual substance and few real details
about the nature of the complaint. If taken at face value, it would seem that we
disrupted testing of a "new marine high frequency 'Emergency Location
Transponder'" based on HF, and that the Northern Utah WebSDR was a "key
site" in this testing and network.
If
this is the case, it is news to us and in contravention of our stated usage
policies - and such use would demonstrate a lack of responsibility and questionable ethics on
the part of those who would do so - particularly if they misrepresent our stated policies.
It
was interesting to note that this email was sent anonymously via Proton
Mail - with no contact information whatsoever - and ostensibly from an international organization whose name gets zero results in
a Google search: an unlikely combination of factors in a legitimate
email.
As such, it will be given the consideration that it deserves.
If you wish to read a redacted version of this letter you can do
so here.
Apparently,
not everyone is aware that the Northern Utah WebSDR is free-to-use and
volunteer-supported and, as such, we offer our service on a "best effort" basis. As noted in the Terms and Conditions of use (link), neither this nor ANY WebSDR should be the primary or sole resource for life and safety concerns. While our ISP does
have redundancies built into its network, the Northern Utah WebSDR
relies on a somewhat tenuous "spur" of this network for connectivity:
Geography essentially precludes the implementation of redundant
paths that would be cost effective - particularly for our usage model -
hence the warnings and admonitions within the Terms and Conditions.
We do not condone, support, or participate in any commercial uses of the facilities of the Northern Utah WebSDR. If you see anyone, anywhere, claiming otherwise, please let us know via the email address on the About page.
24 July, 2023: Severe weather - Direct lightning strike to ISP facility nearby.
Around
1900 MDT a severe lightning storm knocked down one of the connectivity
points of the ISP used by the Northern Utah WebSDR located in the town of Corinne, severing
connectivity of the remote WebSDR site in Northern Utah from the Internet.
Connectivity to the "SLC Metro" WebSDR server was unaffected as it is not at the same location.
Connectivity restored:
Despite much of Corinne's power being out, connectivity was restored at the ISP facility at around 2015 MDT.
There
was an apparent direct lightning strike which has damaged the
electrical wiring at the commercial building that hosts this gear and a
number of lightning arrestors on the various cables connecting the
radio gear topside have sacrificed themselves to protect more expensive
gear. There is also some damage to the battery back-up system on site
as well as the destruction of a transfer switch on the AC mains.
The site is back up on battery/generator as of the time of writing this (2030 MDT) as a result of a bit of kludging to work around damaged/destroyed components.
Expect
ongoing connectivity issues as extreme weather passes through the area,
causing disruption of wireless links and while repair/replacement of
the ISP's gear is underway.
23 July, 2023: Server hardware upgrade planned
Starting on the evening of Friday, 28 July, 2023 MDT (29 July UTC) we will begin replacing the SDR server hardware.
These interruptions will continue through Saturday, 29 July as we work our way through the other servers.
We will try to do this work sequentially, taking down only ONE WebSDR server at a time (we hope!) allowing users to find alternates for reception. For example:
The back-up for 80 and 40 meter reception on WebSDR #1 (Yellow) is WebSDR #3 (Blue).
WebSDR #4 (Magenta) also has 40 meter reception using an east-pointing beam.
The back-ups for 30-10 meter reception on WebSDR #2 (Green) are WebSDR #4 (Magenta) with its east-pointing beam and WebSDR #5 (Teal) with its northwest-pointing beam.
These
updates will replace all WebSDR servers - some of which have faithfully
served us since the system was first tested in January 2018 - and these
servers were already about a
decade old at that time! The new servers will have several times
the processing power using much newer hardware, yet use the same or less
electrical power.
With
the change in server hardware, we will have to get creative with some
of the receivers: A few of the smaller bands are still covered using
PCI sound cards + Softrock receivers, and the newer servers have
one fewer slot - and only PCIe, no PCI at all - so we'll need to figure out a work-around. We have several ideas in mind that should work - but we'll see.
This new hardware should reduce the likelihood of the drop-out/stuttering issue that we have been fighting for some time (see comments about this in previous entries, below)
and it will allow us to use new RX hardware in the future when the time
comes to upgrade. Excitingly, it will also allow us to proceed
with adding multi-band reception for an on-site CW Skimmer to
contribute to the Reverse Beacon Network as well as other related
things.
The "Salt Lake Metro" WebSDR providing VHF/UHF coverage in the Salt Lake valley will NOT be affected by this work.
17 June, 2023: "Well, that didn't work!"
The software update that occurred last night was rolled back almost exactly 24 hours later (again, interrupting WebSDRs #2 and #4).
The reason was that one of the problems noted with a previous
version - the "callback" function suddenly stopping - still occurred,
causing outages throughout the day. We'll continue to work with
SDRPlay on a solution.
16 June, 2023: Software updates
Between 2200 and 2300 MT (0400-0500 UTC 17 June)
both WebSDR #2 and #4 were taken down for a few minutes to perform a
software upgrade. These servers use SDRPlay RSP1a hardware for
40, 20, 15 and 10 meter (low) coverage.
As you may have noticed, there is occasional "glitching" in the signals caused by the "callback" function (that which gets signal data from the hardware)
getting out of synchronization with itself: This is manifested by
occasional "clicking" and what may sound a bit like "stuttering" at
times. When this happened, it would usually correct itself within
a few minutes.
Working
with SDRPlay, they sent a new version of the API - code used to
interface with the hardware - that they would like us to try out to see
if this resolves the problem, and that was installed this evening.
We thank the SDRPlay folks for working with us.
We'll continue to monitor the statistics and performance of these receivers (and report back our observations to SDRPlay) and, if everything looks OK after a few days, we'll roll it out to WebSDR #1 and #5, which also use this same hardware.
24 May, 2023: Connectivity issues:
On
this day at 1700 UTC connectivity to the WebSDR site was lost as
emergency repairs/reconfiguration had to be done on one of the
interconnecting sites: This site is one of several between the
WebSDR site and the fiber connection to the gigabit backhaul.
Connectivity was restored a bit before 1915 UTC.
Expect occasional interruptions and/or data drop-outs as additional troubleshooting, testing and/or configuration is being done.
19 April, 2023: Power utility work:
On
this day the power utility rebuilt the high-voltage (47 kV) and
medium-voltage (2400 volt) power busses at the nearby substation
feeding the WebSDR.
The reason for this has to do with the power infrastructure - quite old (perhaps from the 1950s?) -
being insufficient for the current power loads at a pipeline pumping
station near the substation. Long-term operation of the
infrastructure at or exceeding its ratings resulted in the fault that
occurred on 11 April (noted below) and subsequent damage to the pipeline equipment.
The
mains power was off for several hours, the WebSDR being run on battery
- then generator - during this work. From what we can tell, we
had no outages at the WebSDR or other equipment during this event.
11 April, 2023: Power disturbances:
A
bit after 1200 (noon) local time there were a series of rapid
brown-outs over a period of about 25 minutes. According to our
on-site line monitor, the voltage could be seen to vary wildly between
60 and 127 volts. These disturbances were apparently county-wide,
affecting a large number of customers in the geographical area.
Apparently, the on-site UPS didn't quite know what to do about this and its output would "glitch" (e.g. drop) during some of these transfers. During this melee, WebSDRs #1 and #2 rebooted.
It was known that with the recent upgrade to Ubuntu 22 on the servers, there may
be issues with automatic updates: We attempted to turn them off,
but apparently missed a few and this caused, during the reboot, some
"minor" updates to occur, breaking one of the software packages that we
use with the WebSDR.
We think that we have now disabled all of the automatic updates that would otherwise occur when the system reboots - but time will tell! (This all sounds a bit like Windows, doesn't it!)
We
are considering alternatives to a UPS - perhaps a system that consists
of a high-current charger, a battery and an inverter - that will not
require any sort of transfer between the line and load.
4 April, 2023: Internet outage:
Problems
with one of the backbone carriers caused routes from many customers in
Northern Utah to be dropped for a while. The result of this was
slowdowns and outages that persisted over the course of an hour or so
around 2100-2200 MDT as network operators worked to correct the
situation.
12 March, 2023: Adjusting knobs and dials...
With
the update of the operating system on the WebSDRs has come a few
hiccups: While operation on the test server showed no problems,
there is no substitute for the real world to "stress test" things with
multiple receivers and a fairly heavy user load. A few of the
issues noted:
The
same "stuttering" problem. While the new operating system version
fixes a known issue with the low-level USB drivers, it would seem that
this wasn't a major cause of problems.
The
driver/API will suddenly just stop: For reasons still under
investigation, the new API from SDRPlay (V3.11) seems to have a
tendency to randomly stop - an issue noticed by anyone trying to listen
to 40 meters (on WebSDR #1) or 20 meters (on WebSDR #4) this morning.
For WebSDR #4, the API has been rolled back to the older V3.07 to see if this problem "follows" the API version or not.
A stripped-down version of our RSP driver (sans AGC) has been installed on WebSDR #4 to see if there is any change in its behavior. If there is, we will at least have a starting point to determine if the features removed (mostly the AGC) have anything to do with the instability - and if this seems to be the case, it will be a basis of further investigation.
What we will do:
While
we appreciate the inconvenience that these "driver malfunctions" are
causing to users, we are working very hard to get to the bottom of this
issue to make sure that the system is as stable as possible.
Unfortunately
- as we have noted - there is no substitute for "on air" testing:
As much as we have tried, testing and simulation on the
development WebSDR hasn't forced the problems to occur, so we have to
put it on an "online" server where you, the user, can beat it up and
stress-test things.
What this means is that there will be periods where things aren't working correctly (stuttering, no signal).
We'll do our best to analyze and restart/change things to restore
service as soon as possible, but since there are 24 hours in a day -
and we can't keep constant vigil on things - it may take a while before
things get back to "normal".
We don't think that this is a direct
result of the operating system upgrade, but rather that we've changed
several things all at once and are selectively rolling them
back/changing to try to determine if there is a cause-effect
relationship.
11 March, 2023: Operating system updates.
On
this day WebSDRs 5, 4, 2 and 1 were updated to the current version of
Ubuntu Linux (22.04LTS). This was done to resolve some possible
issues with USB drivers that may be at least partly responsible for the
occasional "stuttering" that occurs on the RSP1a receivers.
Rather
than an actual "update", the original WebSDR configuration files have
been saved and a "clean" install of the operating system has been done
as the current servers have been updated several times - starting with
Ubuntu 16.04 back in 2018 - and there may be a bit of "wreckage" strewn
around the disks from these updates.
This process takes between 1 and 2 hours per server - although it took much longer than this for WebSDR #5 (the first to be done)
as there were a number of unexpected dependency and configuration
issues - many of them related to things being done "differently" on
current versions of Ubuntu as compared to how they were done 5 years
ago!
Of course, there may be a few teething troubles along the way as this is effectively a new install - but this version of Ubuntu has already been running on other WebSDRs (e.g. test server, KFS) and it is known to be stable - at least once the initial kinks are worked out.
WebSDR #3 may be updated at a later date: Since it does not use any RSP1a receivers, the priority is lower.
14 February, 2023: Site visit, driver updates.
It was possible to make a site visit today and several things were observed:
The
UPS was "stuck" in an error mode: It was not available for
backup, the cryptic error code having something to do with the AVR
(Auto Voltage Regulator) relay (it's an APC UPS and the error code is "F06").
The UPS was power-cycled and then tested several times afterward by unplugging/re-plugging it into mains power,
and it seemed to work properly - but we're not sure if this means that
the UPS is on its way out. We have an extra UPS available should there be "issues."
A "wall wart" used to power another on-site computer (Raspberry Pi) was destroyed.
It
appears as though the hardware USB port on WebSDR #2 locked up in some
way requiring that the server be completely powered down before
anything was recognized on that physical port, ultimately coming back
up and working properly.
More
work has been done on the drivers for the RSPs to help with reliability
- expect minor interruptions on servers 1, 2, 4 and 5 over the next few
days while testing continues. (See the 12 February entry for more information.)
13 February, 2023: Wide area power disturbance knocked down UPS at the WebSDR site - possible equipment damage.
At
around 0229 MST in the morning of 13 February, 2023 a wide area power
disturbance - likely a high voltage surge - appears to have caused
widespread (county-wide)
damage to electrical devices. At this same time, the UPS at
the WebSDR site tripped off line, dumping all critical loads and causing a
reboot of all WebSDR servers.
All
WebSDR servers except #1 started on their own, and #1 was restarted
remotely around 0925 or so when it was discovered to be offline.
The 15 meter receiver on WebSDR #2 (Green) is currently non-responsive and that band is unavailable until either a "re-plug" of the USB port is done or (more likely) a hard power-cycle reboot of that server is done - something that can only be done via a site visit.
Please use the 15 meter receivers on WebSDRs #4 (Magenta) and #5 (Teal) in the meantime.
The
on-site UPS will need to be tested. Since it clearly went offline
at the time of the disturbance, it may have suffered damage.
There is a transfer switch on site that, should the UPS's output
voltage fail, will transfer the load to a secondary power source - which is
currently just the mains power on a different phase.
It's
possible that the UPS has been damaged in this incident and that there
is currently no back-up UPS on site, but this, too, will require a site visit to
ascertain.
It is unknown when a site visit will be possible - but we hope that it can be done in the next day or so.
Please note that the investigation and possible repair may cause additional system outages.
12 February, 2023: New receiver (software) drivers installed.
On
this day - mostly in the later evening - the driver software was
updated on all of the WebSDR servers that use RSP1a receivers.
It
was noticed that the 15 meter receiver on WebSDR #4, in particular,
would go into the "stutter" mode even with the new driver, indicating
that the modification in the code didn't necessarily "solve" the
ongoing issue. In response to this a number of configuration
changes are being tried (e.g. disabling the AGC, changing time constants, etc.) to see if this helps.
It's
possible that this particular receiver has some sort of problem ranging
from a flaky USB cable to some sort of issue with the USB port or the
receiver itself - but it could also be that the modifications done with
the driver to coordinate communication with the API aren't sufficient -
or even that the API itself may be somewhat "broken". In the case
of the latter, we can hope for an update where the issues are fixed.
11 February, 2023: Testing of new receiver (software) drivers underway.
While
the SDRPlay RSP1a receivers used for many of the bands appear to work
very well, their operation is not completely without "hiccups" - literally:
While
it didn't seem to happen when we first installed them, more recently
they will occasionally go into a sort of "stutter" mode - seemingly at
random. Often the problem resolves itself in just a few minutes,
but sometimes the receiver drivers need to be manually restarted.
This issue isn't directly related to the hardware - but it may be that, occasionally, a few A/D samples are lost here and there (which is normal for a computer that is shoveling a lot of data around!) and this, by itself, shouldn't cause a problem - but it may be that some piece of software doesn't deal with it gracefully.
In
communications with others using this hardware, the consensus is that
there are some "peculiarities" with the vendor-supplied API (software interface) that may lead to corruption of the data flow.
As it happens, the custom-written driver that we use to interface with SDRPlay receivers to the operating system (to allow it to work with the WebSDR using a 16 bit signal pathway at up to 768 kHz of bandwidth) does some "talking" with the API to adjust the RF gain (seen as "RF AGC" on the display) - and the method of communications may be related to issues.
There are times when you can't/shouldn't be communicating
with the API or else "bad" things might happen - and there are some (minimally or un-documented) aspects with this communications that one must observe to reduce the probability of making the API "mad".
The
reason that we suspect this goes back to what was mentioned earlier:
We didn't used to have this problem when we first put the RSP1a's
into service, but back then - only a bit more than a year ago - the
higher bands (20-10 meters)
didn't open that often with "strong" openings. What this meant
was that there was infrequent AGC action and thus little communications
by the driver with the API - but as the solar terrestrial conditions have changed and
improved the conditions on the higher bands, the AGC is now very active
when a multitude of strong signals appear and disappear from the bands.
This
increase in activity has also correlated with a more frequent
occurrence of this "stuttering" on the same bands that are improving
and changing wildly over the course of the day. By itself the
stronger signals shouldn't have any effect on the stability of the
receiver or drivers - but the increased interaction via the API may well be one of
the causes!
We
are currently testing - at first on WebSDR #5, but later on other
WebSDRs, if all goes well - an updated version of our custom driver
that jumps through extra hoops during its communication in the hopes that we don't upset the API
and cause the "stuttering" issue.
Additional changes:
Up to now we have been using an interim program (a custom version of Linux "aplay" modified to operate at higher than 192 kHz)
to take the audio from our custom driver and put it into ALSA, the
Linux sound system - which is then used by the WebSDR. Our driver
has had the capability of doing this directly, without the use of
"fplay" (our modified version of "aplay")
but it never worked properly. While making the above changes,
modifications were done to our custom driver and we believe that this
problem is now fixed, allowing us to eliminate one step in the data
pipeline between the receiver and the WebSDR server - and this will (hopefully) further improve reliability and stability.
Additional
code was written to allow a more "graceful" exit of the API than before
- something that could help improve stability and also speed up
restarts.
For more information on this configuration, see the "RF Distribution" page link - particularly the links near the bottom.
What you can expect to happen:
The new drivers and configurations are currently being tested - first on WebSDR #5 (as it has the lightest usage and screw-ups will inconvenience few people) and if those go well, they will be rolled out to the other WebSDRs - probably in the order of #4, #2 and then #1.
When
the drivers are updated there will be brief outages: The
waterfall will go black and all signals will disappear.
Hopefully, this will last only a few 10s of seconds - but if the
"old" drivers misbehave, this may last for a couple of minutes.
Because
these are "live" drivers outside the WebSDR, we don't expect that we'll
need to restart the WebSDR itself - a process that "kicks" everyone off.
A
banner will be posted on the WebSDR server - just above the waterfall -
prior to and during these upgrades/changes, so you shouldn't be
surprised!
8 February, 2023: Maximum number of users on WebSDR #2 (Green) increased.
With
the rising sunspot activity the 20 through 10 meter bands are more
active and consequently, the number of users on WebSDRs 2, 4 and 5 is
increasing. On this day it was observed that some people were
getting a "Too Busy" message from WebSDR #2.
When
it was installed nearly 5 years ago, the maximum number of users for
WebSDR #2 was set to "85" - in deference to the amount of available
outbound bandwidth and also to the maximum number of expected users at
that time. Since then the Internet connection has been improved.
It
would also seem that these days the "higher" bands are getting more and
more popular - not an unexpected consequence of the improving
conditions on these frequencies and this previously-set limit was no
longer adequate, so the limit on WebSDR #2 (Green) was raised to 150 users.
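As a rough sanity check on such a limit: if each listener's audio-plus-waterfall stream takes on the order of 70 kbps (an assumed figure, not a measurement from this system), the outbound bandwidth requirement scales linearly with the user count.

```python
def outbound_mbps(users, kbps_per_user=70):
    """Rough outbound bandwidth estimate; the per-user rate is an
    assumption, not a measured figure for this WebSDR."""
    return users * kbps_per_user / 1000.0

old_limit = outbound_mbps(85)    # ~5.95 Mbps at the old cap
new_limit = outbound_mbps(150)   # ~10.5 Mbps at the new cap
```

So raising the cap from 85 to 150 users nearly doubles the worst-case outbound load - feasible only because the site's Internet connection was upgraded in the interim.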
18 January, 2023: Work on WebSDR #1 and WebSDR #3's 80 meter receiver
A
site visit was done on this day to investigate two issues that were
noticed since the last visit: The loss of the 630 meter
receiver's sound card and terrible receiver noise on the 80/75 meter
receiver on WebSDR #1
WebSDR #3:
If you'd gone there you would have seen a tremendously-high noise floor - as if a coax cable shield had come undone.
It
was, in fact, a loose coax connector: When the AGC thresholds
were adjusted on January 13, it must have pulled on the right-angle SMA
connector on the 80 meter RTL-SDR and partly unscrewed it: It was
"re-screwed" and the problem resolved.
WebSDR #1:
In
looking at the local console it was full of errors about no room for a
swap file, so the system was rebooted - and it "sort of" came back up,
but it was obvious that the hard drive (an SSD, actually)
was full. The issue turned out to be the prolific generation of
log files by the WebSDR server having filled up the drive: We thought
that it automatically deleted these after a while, but obviously not,
so a bit of file purging later, we are now using only 15% of the 120 GB
SSD and it's happy - at least on that point.
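To keep old log files from silently filling the drive again, a simple age-based purge can be run periodically (e.g. from cron). A sketch of the idea - the log directory, suffix and 30-day retention below are assumptions, not the WebSDR's actual settings:

```python
import time
from pathlib import Path

def purge_old_logs(log_dir, max_age_days=30, suffix=".log"):
    """Delete files in log_dir older than max_age_days; return the
    names removed. Retention period and suffix are assumed values."""
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for f in Path(log_dir).glob(f"*{suffix}"):
        if f.is_file() and f.stat().st_mtime < cutoff:
            f.unlink()
            removed.append(f.name)
    return sorted(removed)
```

A standard tool like logrotate would do the same job; the point is simply that unbounded log growth needs *some* scheduled cleanup.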
As
for the 630 meter receiver's sound card - an Asus Xonar D1 - that was
re-seated in its PCI slot and that seems to have done the trick.
During
this, the computer spontaneously rebooted - not something that we ever
like to see! We aren't sure if it was still recovering from the
hard drive space (e.g. doing some sort of drive cleanup)
or what, exactly, but we are keeping an eye on it. It's possible
that the power supply is on its way out or even that it's related to
the bad capacitor noted in the 13-14 January entry - but these sorts of
things usually cause Kernel panics and the like rather than acting as
if someone just pushed the "reset" button. We have spare hardware
on site, but transplanting hardware to a new computer is a PITA!
17 January, 2023: 630 meter receiver offline
For reasons unknown, the 630 meter receiver - which uses a sound card - is currently offline, and remote attempts to restore it (restarting the WebSDR, rebooting the server) have not been successful.
The
next step will be a hard power-down during a next site visit which will
occur... sometime soon: Hopefully this will restore service.
In the meantime, you may listen via one of the KiwiSDRs on site -
See the bottom of the "Landing Page" for more information.
13-14 January, 2023: Site work and comments.
KFS Status:
A site visit was made during the evening of 14 January and the network gear was
brought back online, restoring service. Note that
Internet connectivity is still a bit "flaky" due to ongoing weather
events and pre-existing issues so expect audio drop-outs
during times of heavy usage/Internet traffic. (The "still offline" status in the 10 January entry, below, no longer applies.)
Site Work (Northern Utah WebSDR):
USB cards were upgraded in WebSDRs 1 and 4, requiring that they be offline for a while.
Issues with #4: In
the process, we were able to determine that the source of errors we
were seeing on the local console - but not having any obvious effect on
the operation of WebSDR #4 - may have been due to a motherboard issue.
When we plugged the new USB card into it, it wouldn't "POST" (boot)
- but with other testing, we knew that the board was OK. We ended
up swapping to one of the on-site spares which involved moving the
upgraded CPU from the old to the new, moving plug-in cards and
remapping the FiFiSDR receivers used for 17 and 12 meter reception.
Future issue with #1:
We installed an upgraded USB interface card in WebSDR #1, but
noticed a vented capacitor on the motherboard. This capacitor was
away from critical components like a power supply regulator and none of
the capacitors appear to be leaking onto the board so we'll either replace the capacitor or swap out to another motherboard on a future visit.
Bug discovered with the router/firewall:
When we shut down WebSDR #1 we changed the routing so that people
going to #1 would end up on WebSDR #3, the 80/40 meter "backup" WebSDR.
It would seem that setting two forwarding rules to the same
server - while it worked - made the router unhappy, and when we switched
it back, it magically deleted most of the other forwarding rules -
definitely not
the sort of thing that it was intended to do! Fortunately, a
back-up configuration was on hand, but rolling it in required a reboot
of the network gear.
Further quieting of VLF/LF RF signal path. As noted in the 7 January entry, there was still noise at the very low (<150 kHz) frequencies
being conducted via the power supply. A "new" power supply was
installed and this appears to have completely eliminated noise ingress
via that path. There is still a bit of low-level noise to be seen
on the spectra here and there, but it is markedly cleaner and for the
first time in a while, signals from the "Alpha" navigational system (10-15 kHz) and JJY (40 kHz) were easily audible - plus a number of other LF/VLF signals.
WebSDR #5 work:
It had been noticed that the absolute sensitivity of the "10M FM"
receiver was a bit low, so some adjustments were made on the
converter/filter/AGC module that feeds the RTL-SDR for this receiver.
AGC thresholds adjusted:
For the 60 Meter receiver on WebSDR #1 and the 80, 40 and 30/31
meter receivers on WebSDR #3 - which use RTL-SDR modules - there is a
quad AGC/Filter block that prevents overload of these receivers.
Originally, the AGC threshold had been adjusted such that a very
strong signal (e.g. -10dBm)
would yield a half-scale peak A/D converter reading. Because the
AGC uses an "averaging" detector, it was still possible - in some
conditions - for enough A/D clipping events to cause intermodulation to
become visible on the waterfall and to be heard in received audio.
The thresholds on all of these receivers were reset to
quarter-scale A/D for a strong signal, increasing the margin by about 6
dB.
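The "about 6 dB" figure follows directly from halving the A/D headroom target, since amplitude ratios map to decibels via 20·log10:

```python
import math

def margin_gain_db(old_fraction, new_fraction):
    """Extra clipping margin (dB) from lowering the full-scale target
    for the same strong input signal."""
    return 20 * math.log10(old_fraction / new_fraction)

# Half-scale -> quarter-scale A/D target:
extra_db = margin_gain_db(0.5, 0.25)   # ~6.02 dB
```

Every halving of the target level buys another ~6 dB of margin, at the cost of 6 dB less signal above the converter's quantization floor - hence a compromise value rather than an arbitrarily low threshold.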
10 January, 2023: Status of the KFS WebSDR in Half Moon Bay, CA
Unless
you have been stuck in the wilderness or hiding under a rock for the
past few days, you'll know of the severe and disruptive weather
recently experienced in north-central California, particularly near-ish
the San Francisco Bay area: There has been heavy rain, high
winds, flooding, landslides and power outages - and any of these can
interrupt service to a web site like KFS, either through loss of
Internet connectivity and/or power failure. The Internet service
provider, itself, has had to contend with power outages and damage to
infrastructure.
On
the days of the 8th and 9th of January, the KFS site experienced a
large number of power failures of varying duration. While there
is a UPS on site, it would appear that either it isn't working properly
or, possibly, the power outages were so frequent and of long duration
that the battery charge either didn't last, and/or it could never make
headway in charging.
It
would seem that during this chaos, some of the on-site network gear
went offline and it will take a site visit to diagnose and fix what is
going on. If it's something very simple, it may be possible to
"talk through" whoever might be on-site, but those that normally would
handle such issues are either indisposed or out of town, hence the
delay in getting it back online.
7 January, 2023: Site work:
During the evening of 6 January and on this day (the 7th of January) several projects were undertaken at and related to the remote Northern Utah WebSDR receive site:
LF/MF Noise floor improvement.
When we rebuilt the coaxial cable entry and
lightning protection in the middle of 2022, it was noticed that the
noise floor on the 2200 and 630 meter bands on WebSDR #1 (Yellow)
and on frequencies below 500 kHz on the KiwiSDR was significantly
elevated. If you have been following this page you'll note that
we were extremely busy on site in the following months - and since it
was summer, with tremendously high lightning static and noise on these
bands, it was "out of sight, out of mind" and simply forgotten about
until the winter months, when this raised noise floor impacted
reception.
A bit of sleuthing was done and two things were found:
A
ground loop between the LF/MF/HF combiner and the KiwiSDR stack was
introducing significant noise from audio through at least 500 kHz:
Most of this was eliminated when a coaxial choke consisting of
about two dozen turns on a FT-240-77 core was inserted in this line.
Noise
below 150 kHz was still present and it was discovered that when the
amplifier/antenna was powered from battery, it cleaned up. A pair
of chokes were wound with about 35 turns each on a pair of FT-140-75,
introducing several milliHenries of inductance on that line, reducing
this ingress by 25-30 dB, essentially eliminating any noise along this
path above 100 kHz. A bit more work needs to be done to
completely eliminate the remaining noise on a few frequencies below 100
kHz.
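For a choke like this, the inductance follows the usual L = AL × N² relation for a ferrite toroid. The AL value below (about 2000 nH/turn² for this core size and material) is an assumption for illustration - check the manufacturer's data sheet for the real figure:

```python
def choke_inductance_mh(turns, al_nh_per_turn2=2000):
    """L = AL * N^2, converted from nH to mH. The default AL is an
    assumed value for illustration, not a datasheet figure."""
    return al_nh_per_turn2 * turns ** 2 / 1e6

l_one = choke_inductance_mh(35)   # ~2.45 mH per choke
l_pair = 2 * l_one                # two chokes in series: ~4.9 mH
```

Because N is squared, 35 turns rather than, say, 20 makes a large difference - which is how a pair of modest cores reaches "several millihenries" and 25-30 dB of common-mode attenuation at LF.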
USB Card upgrade on WebSDR #2 (Green).
Occasionally, the 15 meter receiver on this WebSDR would go "out into the weeds" causing a "stuttering" - but only
on the 15 meter receiver. It's hypothesized that the USB port for
this receiver is occasionally running out of steam, dropping packets,
so we replaced it with a more capable, quad-port USB card that has been
proven to be reliable elsewhere.
Doing
this required that WebSDR #2 be taken out of service for a while as it
had to be completely "de-cabled", removed from the rack, the dust blown
out of it, new card installed, reinstalled in the rack and everything
relabeled.
Re-cabling of relay site.
On
the morning of 7 January we braved a snowy and frozen road to a
mountain to the east of the remote receive site where there is a site
used to relay to another RF site in the nearby town. The purpose
of this visit was to determine the cause of the LAN cable going between
the radio that connects the remote receiver site and the on-site router
was "flapping" - that is, frequently switching between 1 Gig, 100 Meg
full-duplex, 100 Meg half-duplex, and 10 Meg half-duplex. This
activity was also accompanied by high packet loss on this link, causing
TCP retries from end-to-end between the WebSDR and the user. This
may, in part, explain the network issues noted in the 30 December, 2022
entry, below.
On
the way up, we had to stop about a half mile from the site due to snow
drifts across the road, causing us to carry only the essential tools
and supplies, post-holing through the snow.
Our
hope was that the cable was damaged rather than the radio failing - and
that turned out to be the case: An emergency repair a few months
ago to repair some damage to conduit resulted in the "fix" pinching the
LAN cable, abrading the jacket causing both water ingress in the jacket
which upsets its impedance and balance as well as exposing raw copper
to the elements. This cable was replaced in its entirety and
rerouted to prevent future damage, causing a complete outage of the
WebSDR for about 2 minutes.
The
repair successful, we walked back to the Jeep and started backing down
about 1/2 mile of now-slushy, muddy road. About halfway down we
ran across a "dip" caused by a drainage, which caused the back end to kick
outboard toward the downhill side, and I was unsuccessful in getting
back on the road. A quick call to a local ham who happens to own
a nicely-equipped Jeep with winches, etc. resulted in his appearing
about 45 minutes later. With inboard tension applied to the winch
cable to bring my Jeep back onto the roadway we did three separate
maneuvers to get fully back on the road, allowing me to finish the
awkward back-down to a curve near the bottom of that section of road
where it was finally wide enough to turn around.
Statistics indicate that the link is once again solid and we hope that this will greatly reduce drop-outs overall.
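Link "flapping" of this sort shows up in a radio's or switch's statistics as a series of renegotiated speeds, and counting the transitions is an easy way to quantify it. A sketch that parses a list of negotiated rates (the log format here is hypothetical - real gear reports this in its own way):

```python
def count_flaps(negotiated_speeds):
    """Count link renegotiations: each change from the previous
    negotiated speed counts as one flap."""
    flaps = 0
    for prev, cur in zip(negotiated_speeds, negotiated_speeds[1:]):
        if cur != prev:
            flaps += 1
    return flaps

# A healthy link stays put; the damaged cable bounced between rates:
healthy = ["1000FD"] * 5
flapping = ["1000FD", "100FD", "100HD", "10HD", "100FD"]
```

A flap count near zero over a day - together with low packet loss - is what "the link is once again solid" means in practice.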
6 January, 2023: KFS WebSDR connectivity issues
The
KFS WebSDR in Half Moon Bay, CA has been experiencing connectivity issues
since the evening of 5 January due to problems being experienced by
their Internet Service Provider. There does not seem to be any
problem with KFS per se, it's just that the network connecting to it is
having problems. While we know that this issue is being worked
on, we don't know more details than this.
With
KFS being offline, the load on the Northern Utah WebSDR has increased
as many of those who normally use KFS have found their way over here, so
we will do what we can to delay/minimize any outages.
5 January, 2023: WebSDR outages to occur January 6 and 7, 2023.
As
noted in the 4 January entry, there is evidence of equipment problems
on a mountaintop relay site that connects the remote receive site to
the Internet. Barring weather, illness or excessively slick road,
work is planned on that site to investigate and repair as appropriate.
This work will likely cause extended outages to all of the remote Northern Utah WebSDR servers - but the "Salt Lake Metro" server will be unaffected.
We will also be working on the RF infrastructure and servers themselves which is likely to cause further, occasional outages.
While
we are hoping to get all work done on Friday, January 6, it's possible
that based on the difficulty of travel and short days that some work
will extend into Saturday, January 7.
4 January, 2023: Comments.
Network instability.
Over
the past week, there have been network "issues". The precise
nature of the problem has been hard to diagnose as there were several
things happening at the same time - including issues with an upstream
provider that make it hard to sort things out.
An ongoing problem may be TCP retries caused by a failing physical cable on the first (of THREE) wireless hops between the remote WebSDR receive site and the fiber landing point. Unfortunately, this cable is atop a (now)
snow-covered mountain. The repair has been complicated not
only by the snow on top - and the relative lack of snow at the bottom,
preventing use of snowmobiles (presuming the availability of such!) - but also by weather, sickness (there's a lot of "stuff" going around!) and finding time within several people's schedules to make a significant trip like this.
Having said that, work is tentatively scheduled for a few days hence (possibly January 6 and/or 7) to effect repairs.
Synchronous AM (and "normal" AM) enhanced.
During the server work on 21 December, 2022 a change was made that allows up to 8 kHz of audio response (as opposed to about 4) for AM and SAM (Synchronous AM) reception to improve audio quality. This does NOT affect SSB, CW or FM reception.
When 8 kHz is enabled at the server, SAM now uses a nominal offset frequency of 450 Hz (rather than 150 Hz)
as the offset, allowing more variation in AM carrier frequency.
This is useful in an AM "round table" where the various stations
may be several hundred Hz different in frequency - and the highest
allowed offset frequency, which is about 1 kHz when using the normal 4
kHz audio mode is now increased to a bit over 4 kHz, allowing even
wider variation - particularly if one purposely "off-tunes" (e.g. a few hundred Hz lower when listening on SAM-U, a few hundred Hz higher when listening on SAM-L).
A modification was made such that when the NCO (Numerically-controlled oscillator - used for Synchronous AM carrier recovery)
is unlocked, audio is derived via the "square root of the sum of
squares" of the I/Q channels. By doing this, one does not hear
the "squeal" as the NCO locks to frequency when it is unlocked.
The "lock detection" isn't foolproof so one can sometimes hear
the squeal briefly as the NCO "slips" lock - typically on a signal with
a lot of QSB, some interference or if it is drifting quickly.
By intent, the NCO will unlock when the frequency, bandwidth or any aspect of the waterfall (position, zoom)
is changed: This is done because doing so causes the NCO to
unlock, anyway, and by signaling that this is happening, audio
disruption is minimized.
Due to the slight amount of NCO leakage, one may hear a tone until the NCO has locked and the notch filter removes it.
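The unlocked-NCO fallback described above is plain envelope detection: audio is taken as the square root of the sum of the squares of the I and Q channels, which is independent of carrier phase and therefore cannot "squeal" while the NCO hunts. A minimal sketch (the real demodulator runs inside the WebSDR server; this just illustrates the math):

```python
import math

def envelope_audio(i_samples, q_samples):
    """AM envelope from I/Q pairs: sqrt(I^2 + Q^2), phase-independent."""
    return [math.hypot(i, q) for i, q in zip(i_samples, q_samples)]

# A carrier rotating through arbitrary phases still yields a flat
# envelope - which is why this path is squeal-free when the NCO is
# unlocked:
phases = [k * 0.7 for k in range(8)]
i = [2.0 * math.cos(p) for p in phases]
q = [2.0 * math.sin(p) for p in phases]
env = envelope_audio(i, q)   # every sample is 2.0
```

Synchronous detection (multiplying by the locked NCO) gives better weak-signal and selective-fading performance, so the envelope path is used only until lock is achieved.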
Power
Line noise:
There is an intermittent power
line related noise that occasionally shows up - and we are working to
resolve this issue - but the system's noise blanker seems to be able to
remove most of it. This noise is typically the worst on 80/75
meters, but it seems to disappear at night as the band opens up and the
ionospheric noise submerges it. Several power poles have been
identified as likely sources.
Audio/system
drop-outs: There have occasionally
been periods where the audio will drop
and/or the waterfall will freeze briefly. While this is most
often a result of the user's computer (e.g.
your
computer!)
getting busy, pre-empting the browser's audio processing - or congestion
on your own Internet connection, particularly if you have Internet via
wireless, satellite or phone
- see "Miscellaneous Quirks",
below for some causes and work-arounds.
A
"pop" in the audio and an accompanying change in signal level:
There appears to be an intermittent cross-connection on the
antenna between a feed and guy wire that can cause noise in the
received signals and levels to
fluctuate during very windy periods on site - particularly on the
lower (160,
80/75 and the 630M-AM-160M) bands. A
similar-sounding - but unrelated - issue (described below)
exists with the AM demodulator. Occasionally, this
seems to be accompanied by an increase in
intermodulation distortion from AM broadcast band signals - a problem
that can affect the 75 Meter and lower-frequency bands.
Occasional
outages: We are working to
"neaten up" the installation so we occasionally have to take something
off line to dress a cable, replace a connector, "permanentize" an
installation. Tasks like this will cause outages of several
minutes duration. Some work is also being done at one or more
of
the interim sites that provide the Internet connection that
occasionally result in outages.
Low-level
switching supply "birdies":
With lots of computers comes the possibility of lots of QRM
from
them! Fortunately, there are no "strong" birdies, but there
are
some that, if they happen to sweep across the frequency to which you
are listening, will cause some mild annoyance.
Miscellaneous
quirks (e.g.
"It's
supposed to be that way!"):
If the WebSDR
is not on an active window on YOUR computer: If
the WebSDR is not the currently active
window on your desktop, the computer will give it lower priority and
this will make audio drop-outs/waterfall freezes more common.
The
same thing can happen if your computer is "busy" doing something else,
such as other programs, updates, incoming emails, etc. The
same is likely true
with other platforms (phone,
Mac, etc.). The
first thing that you should do if you experience drop-outs is to switch
to the window running the WebSDR. Don't forget
that your own
Internet connection/ISP can result in drop-out issues as well.
It
has been observed that on a typical Windows machine, if the processor
utilization is over 65-70%, you may experience occasional drop-outs due
to delays in the operating system providing processor time to the code
that produces the audio and draws the waterfall.
Waterfall not
updating when you switch to another window: When
the waterfall is not visible (that
is, you have switched to a window and moved the one with the WebSDR in
the background)
it will stop - which makes sense if you can't see it. What
this
means is that when you switch back to where the waterfall is again
visible there will often be a clear line of demarcation between what
the signals were when you switched away and when you switched back.
Occasional
burst of noise across the band correlated with strong signals:
This is sometimes the result of the noise blanker that is
active
on all bands being triggered by the peaks of the strongest signal(s) on
the band. This effect is most easily spotted on 40 meters
during
the daytime where there is one signal that is extremely strong - often
on the leading edge at the beginning of a transmission - and the band
is otherwise quiet. This effect should not be confused with
other
things that cause similar-looking artifacts, such as static crashes
from distant lighting storms or brief signals from ionospheric and
ocean wave profilers.
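A noise blanker of the sort described works by gating out samples whose magnitude exceeds a threshold set well above the typical signal level - which is why the leading edge of one extremely strong transmission can trip it just like a static crash. A heavily simplified sketch (real blankers use adaptive thresholds and carefully shaped gate widths):

```python
def blank_impulses(samples, threshold):
    """Replace any sample whose magnitude exceeds the threshold with
    zero, leaving everything else untouched (simplified blanker)."""
    return [0.0 if abs(s) > threshold else s for s in samples]

# A quiet band with a single impulse: the spike is removed and the
# ordinary signal passes through unchanged.
band = [0.1, -0.2, 0.15, 9.0, -0.1, 0.05]
cleaned = blank_impulses(band, threshold=1.0)
```

Because the gate briefly mutes the *entire* passband, a triggering peak appears as a short band-wide line on the waterfall - the artifact described above.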
When
listening to AM, a sudden "pop" in the audio:
It's not actually supposed to do this, but there is a "quirk" (bug?)
in the AM demodulator that can cause this to happen occasionally.
It seems to happen mainly when the signal level changes very
quickly due to QSB (fading)
but is less likely to happen on very steady signals or those with only
very slow QSB. The only "fix" for this - if it becomes
annoying -
is to listen on USB or LSB and zero-beat the carrier.
The bands on
the "other" server aren't visible to me unless I go to that "other"
server: It is this way because there are several independent
WebSDR servers.
If you notice some issues that are unrelated to those
listed above, feel
free to use
the contact info on the About
page to let
us know about it.
Additional information:
For general information about this WebSDR system -
including contact info - go to the about
page (link).
For technical information about this WebSDR system, go to
the technical
info
page (link).
For a list of questions and answers, visit the FAQ
page (link).
For more information about the WebSDR project in general -
including information about other WebSDR servers worldwide and
additional technical information - go to http://www.websdr.org