I had Dr. Alan Paeth as a university professor for 4 years, one or two classes every semester. I tried to fit my schedule around the classes he was teaching.
When you went to his classes you really had no idea what they were going to be about. I got the sense that sometimes he would just teach whatever was on his mind that day, never mind the syllabus. It was sometimes bizarre, often challenging, and always interesting. I learned a lot from him.
I'm sad that he has passed and sorry to see his website is no longer online, but also pleased that his legacy is living on and people other than me still think of him -- thank you!
In 2015 I was working at a "fintech" company and a leap second was announced. It was scheduled for a Wednesday, unlike all others before which had happened on the weekend, when markets were closed.
When the previous leap second was applied, a bunch of our Linux servers had kernel panics for some reason, so needless to say everyone was really concerned about a leap second happening during trading hours.
So I was assigned to make sure nothing bad would happen. I spent a month in the lab, simulating the leap second by fast forwarding clocks for all our different applications, testing different NTP implementations (I like chrony, for what it's worth). I had heaps of meetings with our partners trying to figure out what their plans were (they had none), and test what would happen if their clocks went backwards. I had to learn about how to install the leap seconds file into a bunch of software I never even knew existed, write various recovery scripts, and at one point was knee-deep in ntpd and Solaris kernel code.
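The crudest version of that fast-forward test looks something like this (a sketch, assuming a throwaway VM, root, and any NTP daemon stopped so nothing corrects the clock behind your back; not the actual scripts I used):

```python
import datetime
import time

# Jump the clock to 10 seconds before the 2015-06-30 23:59:60 UTC leap second,
# then watch how the system reports time across the boundary.
leap = datetime.datetime(2015, 7, 1, tzinfo=datetime.timezone.utc).timestamp()
time.clock_settime(time.CLOCK_REALTIME, leap - 10)  # needs root, Linux

deadline = time.monotonic() + 20
while time.monotonic() < deadline:
    now = time.time()
    stamp = datetime.datetime.fromtimestamp(now, datetime.timezone.utc)
    print(stamp.isoformat(), f"{now:.3f}")
    time.sleep(0.25)
# Depending on the kernel and time daemon you may see 23:59:59 repeat, a 1 s
# backward step, or nothing at all; that variability was the whole problem.
```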
After all that, the day before it was scheduled, the whole trading world agreed to halt the markets for 15 minutes before/after the leap second, so all my work was for nothing. I'm not sure what the moral is here, if there is one.
Reminds me of the story of the computer engineer at Data General in Tracy Kidder's nonfiction book, "The Soul of a New Machine" [0], who quit after spending weeks toiling away on sub-second timing concerns:
> He went away from the basement and left this note on his terminal: "I'm going to a commune in Vermont and will deal with no unit of time shorter than a season."
Sub-millisecond timing is basically impossible once context switching is involved. Even on a real-time kernel config, the best time slice you are going to get is 1 ms.
With busy waiting, you can achieve timings in the sub-microsecond range, depending on how heavy the loop is.
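Rough shape of the technique, in Python purely for illustration (a spin loop in C pinned to an isolated core does much better; the interpreter alone costs a noticeable fraction of a microsecond per iteration):

```python
import time

def spin_until(deadline_ns: int) -> int:
    """Busy-wait on the monotonic clock; returns overshoot past the deadline in ns."""
    while time.perf_counter_ns() < deadline_ns:
        pass  # burn the CPU instead of yielding to the scheduler
    return time.perf_counter_ns() - deadline_ns

target = time.perf_counter_ns() + 250_000  # fire 250 microseconds from now
print("overshoot:", spin_until(target), "ns")
```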
The hardware providing those nanoseconds is not nanosecond-accurate (unless you're using an atomic clock).
However, they may be more accurate than 1 ms. The guy is saying the time slice the OS gives a program to run in is at best a 1 ms slot, because the OS is switching between threads on a 1 ms timeslice basis.
So unless you're polling, the timing at which you ask the hardware for nanoseconds will jitter with 1 ms offsets.
There are a lot of applications in the world that don't run on a regular processor under regular Linux.
The guy is overgeneralizing IMO. You can delve deep into the sub-millisecond world even as a regular dude with a regular computer and a regular OS by just doing everything in an interrupt context.
No, the TSC on modern CPUs is much more granular than that. No atomic clocks needed, just a normal quartz crystal. This is how PTP works, and you can definitely get sub-nanosecond accuracy from it.
Wrt scheduling quantum, this is entirely configurable and subject to scheduling policy, priorities and additional mechanics such as isolcpus and nohz. GP's comment is just plain wrong.
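Easy to check on your own box; this is only a rough probe of clock granularity, not a TSC calibration:

```python
import time

# What the OS claims about the monotonic clock (TSC-backed on most modern x86 Linux boxes)
print(time.get_clock_info("monotonic"))

# Back-to-back samples; the smallest nonzero delta gives a feel for the effective tick
samples = [time.perf_counter_ns() for _ in range(100_000)]
deltas = [b - a for a, b in zip(samples, samples[1:]) if b > a]
print("smallest observed step:", min(deltas), "ns")
```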
woah, do you have a link on that? I use plan [1] which is great for minute level planning but also annoying in various ways, if there's other software that can do similar, would love to try it
(i'm guessing that this was purely a joke and no such thing exists)
It was from a skit on the Colbert Report (can't remember the episode). He talks about how NASA added a leap second and then pulled out this comical "second-by-second" year planner and said his plans are ruined because he doesn't know what to do with the extra second. Wish I could remember it.
Telling jokes on HN is risky. HN readers don't want the site full of jokes, so they savagely downvote them unless almost everyone finds them hilarious.
Dissent, because for those of us who have never heard of such a connection (myself included) it is /not/ a euphemism but rather a safe epithet whose use never got anyone accused of "foul language" nor got anyone threatened with having their "mouth washed out with soap".
. o O ( The minor peril of disabling my autocorrect is that while my words may be inteligible, and both non sequiturs and wholly altered meanings may be avoided…spellings may occasionally falter. )
>Sometimes the best solution is not a technical solution (“halt the markets for 15 minutes before/after”)
We've had an election recently, right on the day when DST changed. On the night of counting of the votes, the clock went 2:59 AM -> 2:00 AM.
To save themselves trouble the Statistics Office instructed all vote counters that under no circumstances are they to enter or update anything in any system during the repeating hour until it's 3:00 AM the second time…
Europe/Berlin is NOT an offset, it's the zone. A proper date needs BOTH the offset AND the zone! How come people don't understand time zones when it's right there, in the etymology of the word!?
Sorry not personal but I find myself explaining that each time the subject comes up at work (and generally to the same people... sigh)
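For the next time it comes up, the distinction demos nicely in a few lines (Python's stdlib zoneinfo, nothing exotic):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

berlin = ZoneInfo("Europe/Berlin")  # a zone: rules that map instants to offsets
for month in (1, 7):
    dt = datetime(2023, month, 1, 12, 0, tzinfo=berlin)
    # Same zone, different offsets: +01:00 in winter, +02:00 in summer (DST)
    print(dt.isoformat(), dt.utcoffset())
```

The zone tells you which offset applies at which instant; the offset alone can never tell you the zone.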
How about this? Frame one: man beats woman at chess. Frame two: woman shoots man with automatic pistol. Not sure how deep the original really is if you think about it...
IIRC, Russel's thesis was something to the effect of the ultimate supremacy of man over machine. I associate "Sept 1952" as the issue. The last frame suggested a certain obviousness and nonchalance in the man unplugging the computer, as if no great debate would be involved. I wonder if the article itself might now show too much of the innocence of that day, aside from a prediction that computers would eventually beat humans at chess. Too much innocence regarding technological imperatives and the technosphere?
My intention was for the woman to shoot the man; it's possible I wrote it wrong. They could be both men, or aliens, or even computers/robots with human-like intelligence. If they were both thinking computers, the "pistol" could really be a human paid in Monero to go smash the other computer's hard drives and backups.
There must have been discussions earlier about the market freeze. Finding/starting those would have been the correct approach, with a technical solution as a backup.
You got paid to dig extremely deeply into a very complex and important problem spanning multiple systems and domains. You developed a plan, tested it and were ready to act.
This is a hugely valuable learning experience few people even get a chance at, let alone solve. The only downside is that it doesn't show up on your resume!
$work had thousands of full-custom, DSP-heavy, location-measurement hardware devices widely deployed in the field for locating cell phones via UTDOA. They used GPS as a time reference -- if you know your location, you can get GPS time accurate to around tens of nanoseconds. GPS also broadcasts a periodic almanac which includes the leap second offset: apply that offset to GPS time and you can derive UTC. Anyway, there were three models of these units, each with an off-the-shelf GPS chip from one of three location vendors you've probably heard of. The chip firmware was responsible for handling leaps.
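(For reference, the conceptual conversion those chips do is tiny; a sketch with the broadcast GPS-UTC offset passed in instead of parsed from the almanac. The hard part, as we learned, is the moment that offset changes.)

```python
from datetime import datetime, timedelta, timezone

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)

def gps_to_utc(week: int, time_of_week_s: float, gps_utc_offset_s: int) -> datetime:
    """Convert GPS week / time-of-week to UTC.

    gps_utc_offset_s is the leap-second count the almanac broadcasts
    (18 s since the 2016-12-31 leap second)."""
    gps_time = GPS_EPOCH + timedelta(weeks=week, seconds=time_of_week_s)
    return gps_time - timedelta(seconds=gps_utc_offset_s)
```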
One day, a leap second arrived from the heavens. We learned the three vendors all exhibited different behaviors! Some chips handled the leap fine. Some ignored it. Some just crashed, chip offline, no bueno, adios. And some went into a state that gave wildly wrong answers. After a flurry of log pulling, debugging, console cabling, and truck rolls, we had a procedure to identify units in bad states and reset them without too many getting bricked.
It seems the less likely an event is to occur, the less likely your vendor put work into handling it.
> It seems the less likely an event is to occur, the less likely your vendor put work into handling it.
This recalls perhaps the biggest mistake in the GPS specification, the 1024-week rollover period. A timespan long enough to be impractical to test without expensive simulator hardware, short enough to be virtually guaranteed to cause problems in long-term installations... and long enough for OEMs to ignore with impunity. ("Eh, it's 20 years, by that time I'll be retired/dead/working somewhere else.")
Moral: timescale rollovers need to be designed to happen very frequently -- as in a few days at most -- or not at all. Unfortunately the leap second implementers didn't have that option.
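The ambiguity is easy to picture: the legacy nav message carries only a 10-bit week counter, so the receiver has to guess which 1024-week era it is in, usually from some "roughly current" date baked into the firmware (sketch):

```python
from datetime import datetime, timedelta, timezone

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)
ROLLOVER = 1024  # 10-bit week field wraps every ~19.6 years

def resolve_week(week_mod_1024: int, hint: datetime) -> datetime:
    """Pick the full GPS week closest to a hint date.

    Firmware that hard-codes the hint (say, its build date) is exactly the
    firmware that jumps back 19.6 years once the counter wraps past it."""
    hint_weeks = int((hint - GPS_EPOCH) / timedelta(weeks=1))
    era = (hint_weeks - week_mod_1024 + ROLLOVER // 2) // ROLLOVER
    return GPS_EPOCH + timedelta(weeks=era * ROLLOVER + week_mod_1024)
```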
Somebody on LinkedIn working in data science opined that they should do away with DST. I commented that yeah maybe they ought to, and then bring it back in 5 years, rinse / lather / repeat as a stress test. Got a number of likes.
Think of all the time that could be better spent than on fintech in general. Seems like such a waste of resources siccing a bunch of computers against each other in a zero-sum game of stock arbitrage. I will admit some of the tech that comes out of it is cool on its own, at least.
Yeah, they could have been working on something truly valuable like violating people’s privacy with ad-tech. Or maybe sucking millions of hours of people’s lives away with TikTok algo improvements. Maybe they could be working on the next MoviePass!
> zero sum game of stock arbitrage.
By your definition insurance is zero sum as well. But people find that generally useful. Taking risk off of people's hands has value even if a widget doesn't come out the other end.
> Yeah, they could have been working on something truly valuable like....
But why not just a reasonable product?
Yes, what could that be? As if there could not be something that is fairly priced, that people really want or need, that just helps. Not today, apparently!
> By your definition insurance
I know what you're trying there, and on a very abstract level you may be slightly right, but PLEASE no: high-level gambling (== milliseconds & millions) is not at all comparable to the insurance model in many regards, foremost perhaps its purpose for the community.
(and it's pretty clear that fintech here sure doesn't mean the classical bank and modern "normal" payment system...).
This is a weasel phrase. Insurance is high-level gambling as well. They both use risk models to decide at what price they will sit on the opposite side of a trade.
Decide where you really draw the line; if it's time horizons, that's a pretty arbitrary place to put it. Insurers use reinsurance quite quickly to de-risk their own books, so pretending that trading within a minute is fundamentally different from trading daily or hourly is pretty silly.
Also, most “fintech” is not the “be the fastest to arb 2 exchanges” variety. It’s usually “be a market maker for products that you can relatively quickly price based on proprietary signals”. If speed is your only advantage, the cat-and-mouse game will wake you up one day getting beaten to the order book on every trade.
> and it's pretty clear that fintech here sure doesn't mean the classical bank and modern "normal" payment system...
Not sure where you got that from. All of this is interconnected. The “classical banks” are all participating in deep fast moving bond markets. JPM doesn’t send their orders to some old timey trader with a cigar and a bowler hat via telephone. They use fintech like the rest of the industry.
You didn't want to understand me. Sure, both use risk models, but that alone still doesn't make them comparable.
And sure, I still know that everything is interconnected, but it's clear that when someone complains about fintech (and the counterarguments immediately go into "oh, you say fintech, but what about.."), they don't mean the necessary stuff like payment processing or normal markets.
anyway fine to disagree here ;)
No, I’m saying your hand-wavy “fintech is bad” doesn’t actually point to any thing in particular. Most of HFT is not “do the same thing as the other players, but faster”.
Most HFTs primarily engage in market making (matching buyers and sellers) which absolutely is a very useful and necessary function to society unless your position is that markets should not exist.
I'm against pure speculation, but forbidding futures outright would feel ... wrong. Locking prices for contracts up front is a useful thing.
However, it might be much more interesting, and societally more beneficial, to require that anyone trading in commodities futures must have, at all times, the facility to take delivery of the contracts. The upper limit of exposure for a trading desk would therefore be bound by the capacity of their physical infrastructure.
Eh. I see the downsides of a lot of that HFT stuff, but there are upsides too. Yes a lot of it is zero sum, but not all of it is. Lowering spreads between currencies, say, does materially help non-finance actors. There are other areas of the stock market that are useful too. Ingesting financials and other non-manipulated data to better reflect a company's true worth at any time helps, for example, employee option holders that seek fair remuneration for their labour.
Market makers are always talking about "lowering the spread" being this great thing they're doing to make the world a better place
Yet when I go somewhere with no liquidity and a huge spread like a crypto exchange, a deeply unpopular corner of the stock derivatives market, or a Craigslist used stuff category, the wide spread is just a mild inconvenience at worst. You get to choose between waiting for a better deal and taking a worse deal immediately
A tight spread just means that somebody is getting rich by taking that choice away from me and everyone else on the exchange. It's not the most nefarious thing in the world, but it's not particularly helpful or altruistic either.
"Yet when I go somewhere with no liquidity and a huge spread.."
Someone who wants to buy or sell goes to a market in order to execute at the best price achievable, and they may be under time pressure.
If the spreads are wider at one place than another, participants will gravitate to the place with the narrower spreads. If there is better liquidity at one place than another, activity will move to that place.
The purpose of being at the market is that you want to trade.
These qualities that you dismiss, "a mild inconvenience at worst", are the essence and measure of a market's effectiveness.
"A tight spread just means that somebody is getting rich by taking that choice away from me and everyone else on the exchange."
No, it is the opposite of that. It is when spreads are wide that there is easy opportunity for getting rich. Consider: it is more lucrative to buy from one person at 10 and sell to another at 20 than to buy from one at 15.01 and sell to another at 15.02.
"You get to choose between waiting for a better deal and taking a worse deal immediately"
That is not the choice. On a good market you get both a competitive price, and you get to deal immediately.
An order that you rest on the book is called a limit order. You can do limit orders on any market, even those where market makers operate. Creating a market that offers limit orders is easy. The more challenging problem is to create a market where people can come and place market orders that get filled immediately.
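In toy form (a sketch of the concept, nothing like a real matching engine):

```python
import heapq

asks: list[tuple[float, int]] = []  # resting limit sell orders: (price, size)

def place_limit_sell(price: float, size: int) -> None:
    heapq.heappush(asks, (price, size))  # rests on the book until someone takes it

def place_market_buy(size: int) -> float:
    """Fills immediately against the best resting asks; needs liquidity to already be there."""
    cost = 0.0
    while size and asks:
        price, avail = heapq.heappop(asks)
        take = min(size, avail)
        cost += take * price
        size -= take
        if avail > take:
            heapq.heappush(asks, (price, avail - take))
    if size:
        raise RuntimeError("not enough liquidity on the book")
    return cost

place_limit_sell(20.00, 100)   # a wide quote, all you'd find with nobody else around
place_limit_sell(15.02, 100)   # a market maker quoting a tight ask
print(place_market_buy(150))   # immediate fill: 100 @ 15.02, then 50 @ 20.00
```

The market buyer's fill price is set entirely by what happens to be resting on the book at that instant; that is the spread this thread is arguing about.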
Four years ago, I wanted to sell stock to close a deal to buy a house. I didn't want to sit around for hours or days or weeks or forever tweaking limit orders, hoping the market would move in my direction and that the house would stay on the market. I went to market to execute, got my money, and put down the deposit.
If a company goes to a market to offset risk, they typically want the convenience of getting the deal done immediately. If they have to wait weeks to close the deal, the risk might already have passed by the time they could get the deal done. Liquid markets with tight spreads get rid of the workflow and loss of time inherent to haggling, whilst giving you justified confidence that you are getting a competitive price.
I feel like you basically rephrased what I said, but added a little terminology around it and said it's a good thing. Market orders taking 1 second to fill instead of 80 milliseconds isn't a meaningful contribution to society.
Imagine a world with no market makers. There would still be plenty of buyers and sellers of SPY shares at any given time to keep a tight spread and fast execution, but people placing market orders would get slightly worse execution and people who can accept the risk of placing a limit order and waiting two seconds would get slightly better execution. The ONLY real difference is that there wouldn't be some market vampires magically extracting tiny bits of profit all day. Your Facebook shares would still move just fine even if nobody front-runs your order and takes a few cents from you.
I get it, market making is profitable and the victims are distributed widely enough/offset enough by trivial benefits that it's not a particularly bad thing to do. But they aren't making ANY positive contribution to society AT ALL by squatting on the exchange and intercepting all the market orders at a profit.
(Full disclosure: this whole argument obviously goes a different direction when you start talking about derivatives, since market makers are mostly the ones who create and offset the derivatives. I could believe an argument that they're making the world a better place by making it quick and cheap to mitigate financial risk)
What is your goal? Is it better markets, or getting rid of market makers?
You seem fixated on the latter, to the extent that you will define away the quality measures of a market to get there.
The vampire/victim labelling you use is unreasonable. Market makers rest liquidity on the book and then other participants choose to interact with them. Both participants have chosen to enter the deal.
You propose a market without market makers. If you set up such a book, I expect you would find that nobody would want to trade there because they will get better prices and more liquidity elsewhere. If it was a good model, then all the exchanges would be doing it.
You misuse the term front-running here. Front running is when a broker has an order from a customer, and places orders on their own behalf before processing the customer order. In doing so they would put their own interest ahead of the customers. Rules about front running are a form of consumer protection. On-exchange market makers do not have customers, so the concept of front running is not relevant there.
Maybe they could be affected and needed plans to avoid the impact as well, but unlike stock markets you can’t say pause payments globally for half an hour just to get through the leap second.
> Seems like such a waste of resources siccing a bunch of computers against each other in a zero sum game of stock arbitrage
Despite the useful service of price discovery (there are so many better ways), it is clear from the EMH that those computers are not doing arbitrage, they are front-running trades.
Illegal. Criminal in USA (I think). But makes billions and billions for the already very rich. So that is why there is so much of it
I think it's the other way around. They had a problem which previously only impacted weekends, so it was written off entirely without consideration of whether this was happening by rule or by convention. They knew it would be a concern on any other day and yet did nothing until the day it was announced.
Idle curiosities can lead to their own waste, but the kernel panic was probably worth digging into earlier.
>I'm not sure what the moral is here, if there is one.
Apparently, it's about as useful as the leap second itself ;)
I feel your pain though, as I've spent weeks on something only for it to be tossed away like it was nothing at the last second. I guess that's how Google devs feel when their projects are deprecated. At least theirs saw the light of day and provided some validation
A bit of a tangent but I have observed that whenever a networking record for bandwidth is broken it is typically by a nonprofit such as a university, but whenever a networking record for latency is broken it is more often than not by someone in the "fintech" industry developing a faster bag-passing mechanism.
It is clear to me that the disparity of latency creates islands of privilege. I mentioned this to someone in the industry once and they replied that what the layman perceives as parasitic middlemen actually provide valuable liquidity. When I asked whether they considered ticket-scalpers to likewise provide liquidity they claimed that was not at all the same thing.
I think the moral is that it'd be a lot easier if we could just stop messing with the clocks, or at least push more technical things toward caring only about a closest-to-a-global-high-precision-monotonic-clock-as-relativity-allows, rather than worrying about what the clocks on the walls say, which is more a personal matter of how much you care (or don't) about where the sun is in the sky at 12:00:00.000.
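Most runtimes already expose both clocks: the wall clock is the one that leap seconds (and NTP, and the sysadmin) can yank around, the monotonic one is the one you want for durations. A tiny sketch:

```python
import time

wall_start = time.time_ns()        # wall clock: can be stepped by NTP, leap seconds, an admin
mono_start = time.monotonic_ns()   # monotonic clock: only ever moves forward

time.sleep(1.0)

print("wall elapsed:", (time.time_ns() - wall_start) / 1e9, "s")
print("mono elapsed:", (time.monotonic_ns() - mono_start) / 1e9, "s")
# If something steps the wall clock during the sleep, only the first number lies.
```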
I was gonna say, why not just close all positions and turn off the computers around the leap second? How much are you realistically gonna lose by missing a few minutes of trading, compared to the alternative risk?
Edit: I guess the other way to look at it is how much you can make in a few minutes of trading, seeing that it was worth putting at least one software engineer on it for a long time despite the risks...
It is a uniquely crummy feeling to have your work go unused like that, but you shouldn’t let it discourage you. You reached a level of mastery on this particular thing that few people have, which is evidenced by the fact that no one else in the trading community was able to reach your company’s level of confidence and they decided to wait out the leap second instead.
> no one else in the trading community was able to reach your company’s level of confidence
So his work contributed to community wisdom, and that influential community has probably had some say in cancelling leap seconds. I wouldn't call his work wasted. I would call that notably few degrees-of-separation in making an observable difference.
Yes, I agree, and I do think this is another example of worse-is-better. The complex-but-correct solution is the hard work the OP did. But the simple-but-better solution is to just halt the markets.
Contingency plans have their own contingency plans. Maybe trading companies started talks to stop the market months before your company assigned that task to you, in case of no agreement or a negative one.
The moral here is that you and people in similar positions convinced everyone else that there was too much risk to go forward, either by direct or indirect action and implication. Sometimes, just seeing what your own team needs to feel safe, and what everyone else is or isn't doing on the same front, is enough to make the call one way or the other.
I think that's the only reasonable way to handle this kind of thing, though I bet that accurate time matters enough in fintech that you'd still have some cases where you'd need access to the "true" wall time in order to stamp logs for auditing or whatever.
Thanks for the chime in. I don't work in fin or even adjacent to it, but in robotics we often have to correlate logs between multiple systems to fully understand a failure and in a lot of those situations milliseconds do matter— when did the sensor reading come in, how quickly did we understand it and alert the other unit, how does this line up with the timestamps on a security cam video we don't control, etc etc.
The interesting thing is that, for the less careful, the 15-minute before/after halt may not have been enough. You knew enough not to use a time-smearing NTP server, but others who didn't obsess as you did might have been off by a fraction of a second for the entire 24-hour period leading up to it.
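For a sense of scale: with a 24-hour linear smear of the kind Google's public NTP documents, the offset from true UTC ramps up to half a second right at the leap (a rough sketch, assuming that noon-to-noon scheme):

```python
from datetime import datetime, timezone

LEAP = datetime(2015, 7, 1, tzinfo=timezone.utc)  # instant just after the inserted second
SMEAR_SECONDS = 86_400                            # noon-to-noon window around the leap

def smear_offset(now: datetime) -> float:
    """Approximate |smeared clock - UTC| in seconds during a linear 24 h smear."""
    dt = abs((now - LEAP).total_seconds())
    if dt >= SMEAR_SECONDS / 2:
        return 0.0
    return 0.5 - dt / SMEAR_SECONDS  # peaks at 0.5 s at the leap itself

# Six hours before the leap, a smeared box is already ~0.25 s off true UTC.
print(smear_offset(datetime(2015, 6, 30, 18, 0, tzinfo=timezone.utc)))
```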
Don't conclude it's that hard for everyone until you've spoken to a good subset of different people.
You're conscientious and willing to dig in to the details to fix a problem. Plenty of people aren't, and plenty of those are doing the same job as you. Look up from your own little world and try to figure out what other people are doing, how they're doing it, and why. This applies generally: If you fixate on a specific language or toolkit, you'll miss others which fix or obviate bugs you were resigned to living with. Same with OSes and environments. It even applies to relationships, which is why a big hallmark of abuse is isolating the victim.
I worked in algo trading at a major bank in Japan. Japan's time zone is UTC+9, so the leap second, applied at midnight UTC, lands at 9 AM local time, and the markets open at 9 AM. The leap second brought down our trading right at the open.
For most of us that are doing implementation engineering, a lab is simply a collection of the gear that can be put together in a simulation of the production environment without being constrained by formalities. For me it would be a bunch of network and server kit and cables in a rack.
Not OP, but at several jobs our labs were small server rooms stuffed with network gear, servers, and client PCs. They were used for end-to-end simulations and tests. It wasn't uncommon to actually do work in the lab, keeping an eye on the blinky lights or somesuch.
There was a library to use webcams for entropy called LavaRnd up until about 8 years ago, as well. I am unsure if the API to the OS changed or there's a functional difference between CCD and CMOS or whatever.
I do a lot of entropy research, and was annoyed I had to roll my own implementation for a no-photon camera entropy source.
https://github.com/LMDB/lmdb/blob/30288c72573ceac719627183f1...
Which meta-page is used is determined by the transaction ID (even IDs use the first, odd IDs the second).
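A toy version of the idea (a Python sketch of the concept, not LMDB's actual C):

```python
# Two meta slots; the committed state is whichever slot carries the newest valid txn ID.
# A writer only ever overwrites the slot that the last committed txn does NOT point to.
metas = [{"txn_id": 0, "root": None}, {"txn_id": -1, "root": None}]

def current_meta() -> dict:
    return max(metas, key=lambda m: m["txn_id"])

def commit(txn_id: int, new_root: str) -> None:
    # Even txn IDs write slot 0, odd write slot 1, so the previous committed meta
    # stays intact until this write (plus an fsync, in the real thing) has landed.
    metas[txn_id % 2] = {"txn_id": txn_id, "root": new_root}

commit(1, "root @ page 42")
commit(2, "root @ page 57")
print(current_meta())  # {'txn_id': 2, 'root': 'root @ page 57'}
```

A reader (or a crash) in the middle of the second commit still sees the intact meta from the first, which is the shadow-paging guarantee.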
This is the earliest description I can find of the "shadow paging" concept: https://dl.acm.org/doi/10.1145/320521.320540
And I believe its first implementation was in IBM's System R.