
To keep things in perspective: a vertical shift of a few centimeters could be measured if two of these clocks were placed next to each other, just from the reduced gravitational time dilation at the slightly higher "altitude".

It's an amazing time to be alive. While not this precise, you can have atomic cesium beam clocks of your own for a few thousand dollars each, and some elbow grease.





This is compared to the ~1 mile vertical shift resolution of cesium clocks. The fun part of cesium clocks is that you can throw three in the back of a minivan and take them camping!

http://leapsecond.com/great2005/
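
For a sense of where the "~1 mile" comes from, you can invert the dilation relation Δf/f = gh/c^2 to get the height resolution a given clock stability implies. A minimal sketch (the ~1e-13 figure is an assumption, roughly the stability of a commercial cesium beam standard):

    # Height resolution implied by a clock's fractional frequency stability:
    # invert the weak-field dilation formula df/f = g*h/c^2 -> h = (df/f)*c^2/g
    g = 9.81        # m/s^2, surface gravity
    c = 2.998e8     # m/s, speed of light

    def height_resolution(fractional_stability):
        return fractional_stability * c**2 / g

    print(height_resolution(1e-13))  # ~900 m: about the "mile" for cesium beams
    print(height_resolution(1e-19))  # ~1 mm: the optical clock regime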


Could we realistically get so accurate that we can measure time changes due to (human scale) mass movements near a clock?

My calculation says that moving 1 cm up or down Earth's gravity well (at the surface) changes the acceleration of gravity about 5x more than the acceleration you'd feel from a 100 kg mass 1 m away.

Assuming my math is correct, it's already affected by nearby human scale masses, for certain values of "near".
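
A quick numerical check of that claim (a sketch; the 100 kg / 1 m / 1 cm numbers are from the comment above):

    # Change in g from a 1 cm vertical shift vs. the pull of 100 kg at 1 m.
    G = 6.674e-11   # m^3 kg^-1 s^-2, gravitational constant
    M = 5.972e24    # kg, Earth mass
    R = 6.371e6     # m, Earth radius

    # |dg/dh| = 2GM/R^3 near the surface, so a 1 cm shift changes g by:
    dg_1cm = 2 * G * M / R**3 * 0.01   # ~3.1e-8 m/s^2

    # Acceleration from a 100 kg point mass at 1 m:
    a_mass = G * 100 / 1.0**2          # ~6.7e-9 m/s^2

    print(dg_1cm / a_mass)             # ~4.6, i.e. roughly the claimed 5x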


I believe the time dilation is caused by differences in gravitational potential, not gravitational acceleration, so it would be even worse than that.

Huh, I was thinking that it's accelerated gravitational frames that cause the dilation, and I've encountered a lot of statements that argue the same. This is from wikipedia: "This is because gravitational time dilation is manifested in accelerated frames of reference or, by virtue of the equivalence principle, in the gravitational field of massive objects."

However, according to that logic, an object located in a cavity at the center of the Earth should experience no more dilation than an object outside the Earth's potential well, because the gravitational forces cancel out and the curvature gradient should be zero. But that isn't the case according to the same sources; for example, Wikipedia says: "Relative to Earth's age in billions of years, Earth's core is in effect 2.5 years younger than its surface."

Something's not right about how we verbalize this story about gravity.
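
For what it's worth, a rough check of that 2.5-year figure (a sketch assuming a uniform-density Earth; the real core is denser, which raises the potential difference toward the quoted value):

    # Time dilation between core and surface comes from the difference in
    # gravitational *potential*, not force. For a uniform-density sphere:
    #   phi(surface) = -GM/R,  phi(center) = -3GM/(2R)
    G, M, R, c = 6.674e-11, 5.972e24, 6.371e6, 2.998e8

    dphi = G * M / (2 * R)        # phi_surface - phi_center
    rate_diff = dphi / c**2       # fractional rate difference, ~3.5e-10

    print(rate_diff * 4.5e9)      # ~1.6 years over Earth's age
    # A realistic density profile pushes this up to the quoted ~2.5 years.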


It's a very subtle point! The trick here is that you need to be careful when talking about the reference frames.

To an observer at infinity, a clock at the core of the Earth will tick slower than a clock on the surface of the Earth, because the "core clock" is sitting in a more curved region of space, and that's it.

The difference between the clock on the surface of the Earth and the clock at the core is that the surface clock can't follow the "straight lines" (geodesics) in that curved space. So it experiences acceleration due to the force of inertia. And the thing preventing that movement is the repulsive force between atoms that make up the bulk of the Earth.

If this repulsive force magically disappeared, the Earth's atoms would immediately start moving along straight lines, on trajectories that would lead them all to a point at the center of the Earth.

To add: the force of inertia due to moving along curved lines instead of geodesics depends on the "steepness" of the curved space, which decreases as you approach the center of the Earth. So you get essentially the same result as with classic Newtonian gravity, but through an entirely different path.


A clock at the center of the planet should experience no net force from the mass of the planet.

that's the argument, yes

no net force, but net potential energy, thus gravitational time dilation


Think about gravitational redshift. This is a direct sign that time is running more slowly for the emitter that is at a deeper gravitational potential.

That would be an amazing proximity sensor. "Looks like time slowed down again, there must be someone close by."

Submarine detection would be interesting with a few such precise clocks.

Given that the mass of a submarine is ~the same as the water it displaces, this wouldn't actually work.

> [a] submerged neutrally buoyant submarine produces no first order gravitational anomaly; however, because the distribution of mass throughout the submarine is not uniform, second and higher order effects can be produced. [...] For a submarine to be in stable equilibrium while submerged it is necessary that its center of mass be below its center of buoyancy (in submarine-fixed coordinates). That is, the mass of the lower half of a neutrally buoyant submarine's volume (including ballast) must be greater than the mass of the water it displaces whereas the mass of the upper half must be smaller than the mass of water it displaces. Therefore, a submerged submarine may be considered to possess a net vertical gravitational dipole moment having "negative mass" above and "positive mass" below.

https://apps.dtic.mil/sti/pdfs/AD1012150.pdf

Though, that 1989 paper concludes that because gravimeters would need a sensitivity of at least one part in 10^13 for practical usage, far beyond what was achievable at the time, "[t]he concept of detecting submarines by means of detecting gravitational anomalies they produce, should be abandoned."


Yes and no. A moving submarine has a bow wave, a combination of compressed water and an actual wave manifest at the surface above. So there is a transiting bunch of extra mass to detect, at least so long as the sub is moving.

I like the TLDR at the end of the paper, but it could be abbreviated further by just writing:

No.


I'm curious whether any sci-fi authors were knowledgeable and prescient enough to write this into their worldbuilding.

If not, it'd make for a pretty cool plot device if done well.


There was an article, I think on wired.com, years ago about exactly this. It talked about using a vast array of accurate clocks as a kind of radar. Seems plausible only with a few more orders of magnitude of accuracy and miniaturization.

From what I understand, the relativistic effects are a lot less sensitive to nearby mass than the acceleration due to gravity (basically a 1/r relationship instead of 1/r^2). So while a sensitive enough gravimeter can pick up a nearby fairly heavy mass moving and such an elevation change, an atomic clock is going to be much more sensitive to elevation changes than nearby changes in density.
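
A rough comparison along those lines (a sketch; the 100 kg mass at 1 m and the 1 cm elevation change are illustrative numbers):

    # A clock responds to potential (~1/r); a gravimeter to acceleration (~1/r^2).
    G, c, g = 6.674e-11, 2.998e8, 9.81

    m, r = 100.0, 1.0                  # 100 kg mass, 1 m away
    shift_mass = G * m / (r * c**2)    # fractional shift from the mass: ~7e-26
    shift_1cm = g * 0.01 / c**2        # from a 1 cm elevation change: ~1e-18

    print(shift_1cm / shift_mass)      # ~1.5e7: elevation changes dominate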

Asked this and related questions to o3. I do not vouch for the answers at all but you may find it interesting. https://chatgpt.com/share/6876cdd1-dfbc-8011-a55f-6915a90275...

I'd love to hear what the kids remember about this trip. It's been a while!

> It's an amazing time to be alive. While not this precise, you can have atomic cesium beam clocks of your own for a few thousand dollars each, and some elbow grease.

How hard or expensive would it be for a reasonably equipped lab to build their own optical clock, though? I see there are optical clocks the size of a few rack units on the market for a rather hefty price; are the materials needed that expensive, or is it just the expertise?


What keeps your average home experimenter from building an optical clock is the fact that femtosecond combs are still way too expensive and exotic. Some progress has been made -- you can get them from ThorLabs, for instance ( https://www.thorlabs.com/newgrouppage9.cfm?objectgroup_id=11... ) -- but they are still in the "Call for pricing and lead time" category.

Once optical comb sources are commoditized to the extent that solid-state lasers are now, a lot of fun stuff will become possible.


Heh, I would not ever have expected to see my company mentioned on HN. I'm the software tech lead at Menlo Systems, we're building those frequency combs that Thorlabs sells.

Re the commoditization: Part of the problem is that customers, especially the scientific ones, don't want "commodity" frequency combs. Nearly every comb we sell is tailored to the specific customer in one way or another.

Industrial customers are getting more and more interested in frequency combs. I guess this will be the clientele that values off-the-shelf products more, eventually paving the way for commoditization.


The presence of individuals such as yourself is what so often makes the HN comments a meaningful place to find insightful discussions. Thanks for the context!!

We aren't far at all from on-chip combs or pseudo-combs, and that will be fine. More for sensors generally, but you can also have your clocks.

The lasers alone set you back many tens of thousands of dollars, so it's not really possible to do this on the cheap presently, even if a lot of the cost is expertise and the high overhead of R&D when only producing a small number of units.

Oh and to know if it's any good you have to either build two (ideally more) of them to compare against each other (ideally using different approaches so their errors are less correlated), or have access to a clock better than the one you're building to compare to. So you can rarely get away with building just one if you want to know if you've succeeded.

Source: I work on the software for these portable optical clocks: https://phys.org/news/2025-07-quantum-clocks-accuracy-curren...


Expertise

This is a great way to compare ultra-precise clocks! Also, I'm looking forward to Einsteinian altimeters everywhere.

I fear this altimeter idea may be scuppered by local variations in the Earth’s density (it’s not an exactly uniform sphere of rock). Or maybe that just means the clocks could be great density-mappers!

It is easier to measure density with a gravimeter than by measuring the gravitational effect on clocks directly.

A satellite, ACES, was launched recently that uses atomic clocks to accurately measure Earth's gravity field.


Are you saying they would be relatively inaccurate?

I had the same thought, but I still want one!

On second thought, you need a base station on the ground to tell you its time for comparison anyway, so if that base station is nearby, the density thing should mostly work itself out.


> ... vertical shift of a few centimeters could be measured

In what amount of time? Not instantly, right?


In a 2010 experiment based on an older version of this clock[0], NIST succeeded in measuring the gravitational time dilation across a 33 cm vertical separation (a fractional frequency difference of 4.1×10^-17) with 140,000 seconds of integration time (<2 days). I don't really understand how that worked.

[0] https://sci-hub.se/https://doi.org/10.1126/science.1192720 ("Optical Clocks and Relativity" (2010))
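
The predicted value for that setup is easy to check against the weak-field formula (a quick sketch):

    # Expected fractional frequency shift across a 33 cm height difference:
    g, c, h = 9.81, 2.998e8, 0.33
    print(g * h / c**2)   # ~3.6e-17, consistent with the quoted 4.1e-17
                          # given the measurement uncertainty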


Time dilation from general relativity is approximately gh/c^2 (1e-18-ish for centimeter-scale height changes), which is an order of magnitude bigger than the uncertainty on your clock frequency (1e-19-ish).

But you would need a more precise characterization of the clock to answer this.

There might be significant noise on individual measurements, meaning that you need to take multiple measurements to get precise enough (see https://en.wikipedia.org/wiki/Allan_variance).

Edit: If you just have clock output in ticks, you also need enough time to elapse to get a deviation of at least one tick between the two clocks you are comparing. This is a big limitation, because at a clock rate of 1GHz you are still waiting for like 30 years (!!). (In practice you could probably cheat a bit to get around this limit)
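
A quick check of that figure (using the 1 GHz tick rate and a 1e-18 fractional difference as assumed above):

    # Time for two clocks differing by df/f = 1e-18 to drift apart by one
    # whole tick, if all you can resolve is discrete 1 GHz ticks:
    tick = 1e-9                # s, one period at 1 GHz
    frac = 1e-18               # fractional rate difference

    seconds = tick / frac      # 1e9 s
    print(seconds / 3.156e7)   # ~32 years, hence the "like 30 years"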


These are optical clocks, so their rates are in the hundreds of THz to PHz range (1.121 PHz in the case of the Al+ clock). A 1e-18 shift corresponds to a ~mHz frequency deviation which a decent frequency counter can resolve in around 1s. However, no optical clock is actually that stable at short time scales, and the averaging is required to get rid of these fluctuations.

>Edit: If you just have clock output in ticks, you also need enough time to elapse to get a deviation of at least one tick between the two clocks you are comparing. This is a big limitation, because at a clock rate of 1GHz you are still waiting for like 30 years (!!). (In practice you could probably cheat a bit to get around this limit)

In practice with this level of precision you are usually measuring the relative phase of the two clocks, which allows substantially greater resolution than just looking at whole cycles, which is 'cheating' to some degree, I guess. (The limit is usually how noisy your phase measurement is)

(To give some intuition, imagine comparing two pendulum clocks. I think you can probably see how, if you take a series of pictures of the pendulums next to each other, you could gauge whether one of them is running fast relative to the other, and by how much, without one completing one full swing more than the other)
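
Here's a toy numerical version of that pendulum picture (illustrative numbers, not real clock parameters): the frequency offset is recovered from the phase drift long before a whole extra cycle accumulates.

    import numpy as np

    # Two 'clocks' differing by df = 1e-3 Hz, observed for 10 s; the
    # accumulated phase difference is only ~0.01 of a cycle.
    df, T = 1e-3, 10.0
    t = np.linspace(0.0, T, 10_001)

    rng = np.random.default_rng(0)
    # Measured phase difference (radians), with some measurement noise:
    phase = 2 * np.pi * df * t + rng.normal(0.0, 0.01, t.size)

    # A linear fit of phase vs. time recovers the frequency offset:
    slope = np.polyfit(t, phase, 1)[0]
    print(slope / (2 * np.pi))   # ~1e-3 Hz, despite far less than one cycle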


From the article:

    This improves the clock’s stability, reducing the time required to measure down to the 19th decimal place from three weeks to a day and a half.
So no, not instantly.

https://en.wikipedia.org/wiki/Allan_variance

It takes a longer measurement to be more confident.


Instantly more or less. Time instantly moves differently at altitude because you are in a weaker gravitational field. The time dilation effect would be noticeable after 1 (or at most a few) ticks of the clocks.

I'm very skeptical of this claim. While the physical effect of time dilation acts immediately, I expect it would take many many ticks of both clocks before the rate difference between them became resolvable.

Yes, and no. The time-dilation effect will happen instantly, but the more quickly you want to observe it, the better your measurement's S/N ratio will have to be... and that, in turn, requires narrow measurement bandwidths that imply longer observation times.

So then the question has to be asked, does the effect really happen instantly? Or do the same mechanisms that impose an inverse relationship between bandwidth and SNR mean that, in fact, it doesn't happen instantly at all?


I don't understand. Wouldn't it only be possible to find out by comparing two identical clocks that were at different altitudes for some larger number of ticks, allowing you to then compare the elapsed ticks? How would you conduct such an experiment? My mental model is that I have a black box that outputs an electrical signal every tick, and then maybe we could just figure out which clock ticked first with a simple circuit. But that seems like we would need to sync them, and that it's fundamentally wrong due to the fact that the information of the tick is also subject to the speed of light. I don't know much beyond high school physics, fwiw.

My comment here might give some intuition for it: https://news.ycombinator.com/item?id=44576004 . You do need to measure for some time, because the measurement of the clocks with respect to each other is noisy, but you don't need to wait for there to be a whole 'tick' of extra time between them.

According to ChatGPT, the speedup factor for getting 10 cm higher is 1 + 1.09e−17. (With Δt/t = gh/c^2. The math seems to check out, but I'm not sure if the formula itself is correct.) Surely, if the clock resolves 1e-19 in a second, i.e. one tick is a hundred times smaller than the dilation difference accumulated per second, the clock would still need at least a hundredth of a second to accumulate enough ticks for the count of ticks to differ even by one because of the dilation.

The frequency that is actually counted with a digital counter in this clock is only 500 MHz (i.e. after a frequency divider, because no counter can be used at the hundreds of THz of an optical signal).

Nevertheless, in order to measure a frequency difference between two optical clocks you do not need to count their signals. The optical signals can be mixed in a non-linear optical medium, which will provide a signal whose frequency is equal to the difference between the input frequencies.

That signal might have a frequency no greater than 1 GHz, so it might be easy to count with a digital counter.

Of course, the smaller the frequency difference is, the longer must be the time used for counting, to get enough significant digits.

The laser used in this clock has a frequency around 200 THz (like for optical fiber lasers), i.e. about 2E14 Hz. This choice of frequency allows the use of standard optical fibers to compare the frequencies of different optical clocks, even when they are located at great distances.

Mixing the light beams of two such lasers, in the case of a 1E-17 frequency difference, would give a difference signal with a period of many minutes, which might need to be counted for several days to give an acceptable precision. The time can be reduced by a small factor by selecting some harmonic, but it would still be some days.
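
Plugging in those numbers as a quick check:

    # Beat note from mixing two ~200 THz lasers with a 1e-17 fractional
    # frequency difference:
    f_optical = 2e14              # Hz
    beat = f_optical * 1e-17      # 2e-3 Hz
    print(1 / beat / 60)          # period ~8.3 minutes: "many minutes"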


To make this even clearer:

Let's imagine that there is a huge amount of time dilation (we live on the surface of a neutron star or something). By climbing a bit, we experience 1.1 seconds instead of the 1.0 seconds experienced by someone who stayed down.

We have a clock that can measure milliseconds as the smallest tick. But climbing up, coming back down, and comparing the number of ticks won't let us conclude anything after a single millisecond. We must spend at least 11 milliseconds up to have a noticeable 11-to-10 millisecond difference.

Now, if the dilation was 1.01 seconds vs 1.00, we would need to spend at least 101 milliseconds up, to get a minimal comparison between 101 and 100 milliseconds.


Thinking in terms of 'ticks' over-discretises it. You can in practice measure frequency to much more precision than any discrete cycle time in the system you're using, because you can usually measure phase, i.e. you're not just seeing some on-off flash, you're seeing a gradual pulse or wave, and it's how accurately you can measure that pulse (in terms of SNR) which sets your integration time for a given precision, not how rapidly you're measuring it.

> Let's imagine that there is a huge amount of time dilation (we live on the surface of a neutron star or something).

That idea is the premise of https://en.wikipedia.org/wiki/Incandescence_(novel)


I have had to talk myself out of buying one of those atomic clocks on a chip. I'm kind of a watch geek, I find the entire history of timekeeping to be endlessly fascinating, and I think it would be so cool to have atomic clock level of precision just running in my house. I've just never been able to justify actually buying one; outside of looking at a console and saying "how cool is that!?!", I don't know what I'd do with it.

I'm not above buying a toy to look at logs and geek out over it, but I can't justify spending several grand for it.


I was in the same boat some time ago, ended up settling for a Casio radio controlled "atomic" clock for multiple orders of magnitude cheaper

Yeah I have one of those, and of course my smartwatch syncs with my phone and gets very accurate time from the towers (which in turn probably gets very accurate time from NIST or something). They are very cool and realistically more than accurate enough for anything I am likely to ever need.

Just something very cool about the idea of a hyper-accurate clock living in my house. I don’t know what I would do with it, just that it would be neat.

If I ever become a billionaire or something, I will absolutely buy one. Sadly I don’t think that’s likely to happen any time soon.


Isn’t the second defined as a specific number of cesium transitions?

How can anything … you know what? Never mind. No matter what answer anyone provides, I won’t understand.


> Isn’t the second defined as a specific number of cesium transitions?

Yes

> How can anything …

So your cesium counting device will faithfully provide such a count, and depending on its altitude it will do so at a different rate.

Both clocks are experiencing time at the usual one second per second, but gravity dilates spacetime.

Locally, a second is always a second, but globally there is no such absolute, just as there is no universal "now".


I don't think this changes the way the second is defined. Rather, that definition describes some theoretical ideal where the cesium transitions are all perfectly equally spaced over the course of the second.

I think this new clock is simply able to generate more precisely spaced ticks than those of a traditional Cs clock. Less jitter and variation in the timing of those ticks. Similar to how a one-hour water clock or sand timer's runtime will vary between "transitions", but a one-hour quartz stopwatch timer is much more regular. I could keep going, but I'm already out on a limb so I'll stop before my own uncertainty rises too much.

(Edit: I read the article. I don't think my words above are correct.)


The idea here is to have a clock that actually ticks faster and to redefine the second in terms of that new, faster ticking speed.

I wonder how many more orders of magnitude of precision will be realistically possible. I wonder if we'd ever be able to use gravity to "see" things at non-cosmological scales, like if you could resolve the gravitational waves and interference patterns caused by a person walking by.

Wondering if we could spatially distribute a bunch of these to make a time-based gravitational wave detector. A single location could infer the magnitude and direction of a gravitational wave.

I propose calling it TIGO (Time Interferometer Gravitational-Wave Observatory) ;-)


Could you detect gravity waves with accurate enough clocks?

They would have to be extremely low-frequency. Plus, you'd need something to compare them to that wasn't affected by the wave, which is hard. (You'd need as accurate a clock, placed at a distance greater than the gravitational wavelength.)

I think.



