> It’s unclear to me why there’s so much delay and jitter in the PPS timestamping.
I’ve messed around with this on a couple of different GPS chips and found a few things help: increase the baud rate to the maximum supported (9600 tends to be the default, but 57600 works a lot better), disable all NMEA sentences except the one you are using, and increase the update rate. The default tends to be one fix every 1000 milliseconds, but 100 milliseconds gives noticeably less jitter. I’ve been using NTPsec, not Chrony, so maybe there are more nuances.
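For modules built on a MediaTek chipset (common on hobby GPS boards; u-blox parts use the binary UBX protocol instead), those settings map to PMTK sentences. The checksum is just the XOR of every character between the `$` and the `*`, so you can generate the commands yourself; here is a small sketch:

```javascript
// NMEA sentences end with "*XX", where XX is the XOR of every
// character between the leading "$" and the "*", in uppercase hex.
function nmeaChecksum(body) {
  let sum = 0;
  for (const ch of body) sum ^= ch.charCodeAt(0);
  return sum.toString(16).toUpperCase().padStart(2, '0');
}

function nmeaSentence(body) {
  return `$${body}*${nmeaChecksum(body)}`;
}

// 57600 baud, 100 ms updates, and RMC as the only enabled sentence:
console.log(nmeaSentence('PMTK251,57600')); // $PMTK251,57600*2C
console.log(nmeaSentence('PMTK220,100'));   // $PMTK220,100*2F
console.log(nmeaSentence('PMTK314,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0'));
```

Write the resulting sentences (terminated with CRLF) to the module’s serial port; the baud-rate change takes effect immediately, so reopen the port at the new speed afterwards.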
I’m just a hobbyist, but I have a few more details written up here; check out the (poorly designed) hamburger menu for some charts and graphs: https://www.developerdan.com/ntp/
I’m not familiar with Chrony. With NTPsec, the PPS driver docs say [0]:
> While this driver can discipline the time and frequency relative to the PPS source, it cannot number the seconds. For this purpose an auxiliary source is required;
And so (with NTPsec) you need to define two sources even though both come from the same device: one for the PPS signal for clock discipline, the other for numbering the seconds.
Sure, but that auxiliary data should not be used for any sub-second accuracy information. The PPS is the be-all and end-all definition of the start of second; improving the performance of the other channel should never affect the sub-second jitter.
They should hook up a scope to that PPS output and compare it to a solid reference. I suspect if they're experiencing intermittent dropouts on a poor GPS module that the PPS signal likely is not a high quality reference. Those ublox counterfeits might be okay, but I've been really impressed with Navspark's pin-compatible ublox "knockoffs". Super cheap, super performant.
I get weird looks when I talk to people about my GPS PPS NTP servers. This website is amazing. Most of it is way beyond my comprehension, but it’s given me a lot to read up on.
I had this happen before and used it as an excuse to learn how to set up NTP with a GPS receiver. I wrote a little blog post on it if anyone is interested in the results. Be sure to click the sandwich menu for some real-time data: https://www.developerdan.com/ntp/
Quick question, because I’ve followed a number of how-tos before that don’t: if you disconnected all network connectivity, would it still sync to the PPS source? Second, have you tested this? I only ask because #1 I found a lot of time sync was really slow, and #2 some setups reported sub-microsecond time sync but were actually several milliseconds out when tested.
Anyway, I solved these problems with sbts-aru, but maybe it's interesting for you to test if you haven't already done this.
Good question: it absolutely syncs without any network connectivity. For better or for worse, I have it set up NOT to sync to anything else; being a stratum 1 source is enough. Now… I will admit I’m just a hobbyist with this stuff. Anyways, the way I stopped it from trying to sync to another source when using the PPS was to define the GPS as two separate sources: one for the PPS signal and another for the NMEA sentences.
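With NTPsec, that pair of sources can be sketched in ntp.conf roughly like this (the unit numbers, refids, and the `time2` fudge value are placeholders for my kind of setup; check the NTPsec refclock docs and your device paths, e.g. /dev/gps0 and /dev/pps0, before copying):

```
# NMEA sentences over serial: numbers the seconds. Coarse and jittery,
# so fudge the serial delay with time2 and mark it prefer so the PPS
# driver has a source to count seconds against.
refclock nmea unit 0 prefer time2 0.200 refid GPS

# PPS pulse: disciplines the clock edge but cannot number the seconds.
# flag3 1 hands the pulse to the kernel PPS discipline.
refclock pps unit 0 flag3 1 refid PPS
```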
To get around issues with sync being slow, I have NTPD configured with some extra flags to allow large time jumps at startup:
NTPD_OPTS="-ggg -N"
I have two Raspberry Pis set up this way, which I don’t use for much else. My primary computers have these two servers configured as time sources, but also have external sources, so as long as network connections are available, they can determine whether the Pis are providing a sane time or are false tickers. As for accuracy, you can view the ntpviz output for each setup in the hamburger menu. For catwoman, it’s regularly within 4 microseconds.
Ah okay. I found that the default start order of gpsd and chrony was wrong, meaning the shared-memory communication wasn’t working, but there were no error messages saying so. So sometimes it would sync (slowly), and chrony’s sources output would claim microsecond-level accuracy. But when I connected an on/off switch to the same GPIO on two separate Pis with linked earth connections, and had a Python program on each output the system time in nanoseconds, I found that despite reporting microsecond accuracy it was in fact many milliseconds out. The only thing I can conclude is that although the setup indicated it should use the interrupt-driven PPS source, that was not actually happening, and it was somehow getting its time from the serial line only, without any indication that this was the case.
When I switched the startup order of gpsd and the chrony daemon around, time sync was achieved within half a minute to a minute (really fast), and the reported times on two separate Pis with the joined-up GPIOs (so you can trigger a time check at very nearly the same moment) were then sometimes as little as 30 ns apart, which is damn good considering that most of the difference would be jitter.
And none of the how-tos I read mentioned changing the start order of chronyd and gpsd, so I can only assume they might all be suffering from the same problem. It’s important to note that chrony’s diagnostic commands will not reveal this; the only way you will notice is by setting up a rig like the one I described above. I noticed because my sound-localization calculations were showing errors that didn’t make sense.
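If startup ordering turns out to be the culprit, one way to pin it down on a systemd-based distro is a drop-in override so chronyd only starts after the gpsd daemon is up (the unit names here are assumptions; they vary by distro):

```
# /etc/systemd/system/chronyd.service.d/after-gpsd.conf
# Drop-in: make chronyd wait for gpsd, which owns the SHM/PPS interface.
[Unit]
After=gpsd.service
Wants=gpsd.service
```

Then run `systemctl daemon-reload` and reboot (or restart both services) to apply it.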
I must admit I’ve never used chrony, so I’m unfamiliar with how to configure it. I’ve read a lot of people saying gpsd’s shared-memory segment is a really great interface and to use it instead of NTP’s own GPS drivers, but I can’t say that lines up with my experience. I do have one computer set up this way, and it works, but with the PPS configurations it seemed to me I got better accuracy and less jitter using NTP’s own GPS drivers and taking gpsd out of the equation. It’s quite possible I just don’t know the right configuration combination, but NTPsec’s own GPS drivers work great for me.
This is off-topic, but I was looking at your sbts-aru project and remembered having read the hackaday post on it about fireworks. I'm curious if you've ever seen the RaspberryShake BOOM sensor? If so, any thoughts on how your project and it differ?
I hadn’t seen that. It uses a Pi, so in principle it could sync the time the same way. How well it works for localizable recordings would also depend on how it establishes the actual arrival time against the clock and how it deals with jitter.
Currently I only support USB mics at 16 bits that can talk to jackd. This was a conscious decision, partly because it was all that was needed, but also because jackd essentially gives one source multiple sinks, such as recording alongside a real-time audio-processing pipeline that could eventually do real-time gunshot and other sound-event localization.
But it would be cool if the infrasound sensor had a USB sound interface. As then it could be useful in localizing elephants.
If it syncs time accurately by any method, maybe you’d be able to localize some very interesting events! The localization code I provide with my project should work fine with times obtained from this project. I seriously doubt that project uses an in-memory overlayFS, but that would be easy to add as an improvement.
Interesting! What is the accuracy like when lacking "PPS precision"/Clayface? I’m wondering about the magnitude by which it is off (seconds, minutes, or something else).
PPS turned up quickly on Wikipedia, so that one is readily answered
> PPS signals have an accuracy ranging from 12 picoseconds to a few microseconds per second, or 2.0 nanoseconds to a few milliseconds per day based on the resolution and accuracy of the device generating the signal.
That Wikipedia quote should mention temperature! Temperature variations have a big impact at this level of accuracy. These really cheap GPS receivers do not have temperature-compensated clocks. Unfortunately, my server closet (this is just a hobby) does not have well-regulated temperature, so you can see the impact of temperature on the clock accuracy. Also, I found that if I start running a bunch of stuff on these computers, the CPU heats up, which also affects the jitter. If you really want high precision, you’ll have to shell out some extra cash, more than I did: https://www.sparkfun.com/products/18774
This is the changelog for 20.8.1, but it’s important to point out that 4 of the CVEs were also patched in 18.18.2.
Shameless promotion time: I have a little utility that can check a Node version for CVEs or EOL status:
npx node-version-audit@latest --fail-security
Or with docker:
docker run --rm -t lightswitch05/node-version-audit:latest --version=$(node -e "console.log(process.versions.node)")
Some highlights of the tool: it has zero dependencies, and CVEs are sourced directly from the Node.js changelogs instead of waiting on slow CVE release processes. See the website for more details: https://www.github.developerdan.com/node-version-audit/
The new feature for subscribed allowlists is going to pair really nicely with the existing support of ABP-style blocklists. I’m very excited for this release!
Three years ago I released a tool called PHP Version Audit. The idea is that it parses the PHP changelog and notifies you if you are running a PHP version that has a CVE or has lost support.
Anyways, after running for three years, I thought it would be fun to put together some data. The most interesting result is that PHP Version Audit has a median CVE discovery time of 5 hours after the PHP announcement. In contrast, the NVD CVE database has a median of 260 hours, or almost 11 days. Of course the NVD has all sorts of extra information, like a vulnerability score, so maybe it’s an apples vs. oranges comparison. Anyways, I hope someone else finds this interesting :)
If you think PHP Version Audit is interesting, there is also Node Version Audit[0] that I released earlier this year.
I’ve written a library with zero production dependencies. Of course, I have Jest as a development dependency, which pulls in all sorts of stuff; it would be difficult to make a truly zero-dependency library. As for not having production dependencies, this was my experience:
1. Using the https module directly was more work than I expected, especially the error handling. It made me really look forward to the then-upcoming Fetch API.
2. No CLI parser. It’s not like parsing args is a LOT of work, but it’s also a solved problem, and having to write support for it directly was a bummer.
3. No logging library. This one was pretty easy: create a little class with logging levels. Again, this is something very common that would have been nice to pull in as a package.
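The leveled-logger class from point 3 can be sketched in a few lines (this is an illustration of the idea, not the actual code from the tool):

```javascript
// Minimal leveled logger: anything below the configured level is dropped.
const LEVELS = { silent: 0, error: 1, warning: 2, info: 3, debug: 4 };

class Logger {
  constructor(level = 'info') {
    this.level = LEVELS[level] ?? LEVELS.info;
  }
  // Returns true if the message was emitted, false if filtered out.
  log(level, ...args) {
    if (!(LEVELS[level] <= this.level)) return false;
    console.error(`[${level.toUpperCase()}]`, ...args);
    return true;
  }
  error(...args)   { return this.log('error', ...args); }
  warning(...args) { return this.log('warning', ...args); }
  info(...args)    { return this.log('info', ...args); }
  debug(...args)   { return this.log('debug', ...args); }
}

// e.g. the level string comes from a --log-level CLI argument
const log = new Logger('warning');
log.error('shown');   // printed to stderr
log.debug('dropped'); // filtered out
```

Logging to stderr keeps stdout clean for the tool’s actual output, which matters for a CLI meant to be piped or used in CI.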
What you describe is exactly why people use dependencies. You just decided to trade your time for the noble act of having “no production dependencies”, while one of the 275 modules installed by Jest (real number) stole your production secrets anyway.
As for point 2, Node 18.3 introduced a native argument parser, util.parseArgs.
They still get run on a developer’s machine most of the time and are at least installed there where they can run arbitrary code on install. And there are juicy secrets beyond just production server secrets sitting on your laptop.
Lots of production environments sit on the latest LTS. v18 _just_ hit LTS; until last month it was v16. Those features weren’t really an option for all these projects until then.
Yes, my library absolutely has to support the LTS version of Node, and I run the tests against all supported versions to ensure compatibility. So, one day I can use the nice things people are mentioning, but it will be years from now.
> I've written a library with zero production dependencies. Of course I have Jest as a development dependency which pulls in all sorts of stuff. It would be difficult to make a zero-dependency library.
That really depends on what your expectations and goals are. You mention Jest, but as it’s a test dependency, security is not a major concern, and thus it’s OK to just vendor the source code into your project tree.
No, the levels are not useless, far from it. Even if you just print to stdout, prefixing each line with a (maybe colored?) level is very useful. Also, if you later decide to switch to some other format, it is trivial to implement; changing all the logging calls is not.
To answer your question, the levels are important for both regular usage and debugging when something goes wrong. I create a number of debug/info/warning/error log messages throughout the tool and allow the desired level of logging to be set as a CLI argument - or even run in silent mode if desired.
It seems like the most reasonable choice. Incidentally, I have an open-source tool called Node Version Audit [0] which checks a given node version against known CVEs and end-of-life dates. It looks like the official change hasn’t been made yet [1].
I didn’t realize there were so many CVE-based tools out there! I even have an ultra-specific one for PHP (with some extra logic for support timelines).
PHP Version Audit: https://www.github.developerdan.com/php-version-audit/
One thing I’ve noticed with PHP, at least, is that its release docs will regularly list a CVE with full details many days before it shows up in the CVE feed, sometimes as long as a week. Sourcing only from the feeds is a bit slow, but perhaps that is specific to the process PHP uses?
Using the same receiver, but with a PPS wire to a GPIO pin on a raspberry pi, I get +/- 5µs jitter.
If you are interested, I have a write up with live graphs in the hamburger menu here: https://www.developerdan.com/ntp
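For anyone wanting to replicate the GPIO approach: on a Raspberry Pi the PPS line is usually handed to the kernel PPS driver via the pps-gpio device-tree overlay (GPIO 18 here is just an example pin; use whichever pin you wired):

```
# /boot/config.txt -- route the PPS pulse to the kernel PPS driver
dtoverlay=pps-gpio,gpiopin=18
```

After a reboot, `ppstest /dev/pps0` (from the pps-tools package) should show an assert event arriving once per second, which confirms the pulse is reaching the kernel before you point NTP or chrony at it.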