
My CDMA phone dropped service for a few minutes after the leap second.

It's absurd that we keep subjecting ourselves to these disruptions, and to the considerable amount of work that goes into handling leap seconds even in the systems that aren't disrupted by them.

Leap seconds serve no useful purpose. Applications that care about solar time usually care about the local solar time, while UT1 is a 'mean solar time' that doesn't really have much physical meaning (it's not a quantity that can be observed anywhere, but a model parameter).

It would take on the order of 4000 years for time to slip even one hour. If we found that we cared about this thousands of years from now, we could simply shift timezones over by one hour after a couple of thousand years; existing systems already handle devices in a mix of timezones.

[And a fun aside: it appears likely that in less than 4000 years we would need more than two leap seconds per year, sooner if warming melts the ice caps. So even the things that correctly handle leap seconds now will eventually fail. Having to deal with the changing rotation speed of the earth can't be avoided forever, but we can avoid suffering over and over again now.]

There are so many hard problems that can't easily be solved that we should be spending our efforts on. Leap seconds are a folly purely made by man, which we can choose to stop at any time. Discontinuing leap seconds is completely backwards compatible with virtually every existing system. The very few specialized systems (astronomy) that actually want mean solar time should already be using UT1 directly to avoid the 0.9 second error between UTC and UT1. For everything else, all that is required is that we choose to stop issuing them (a decision of the ITU), or that we stop listening to them (a decision of various technology industries to move from using UTC to TAI+offset).

The recent leap smear moves are an example of the latter course but a half-hearted one that adds a lot of complexity and additional failure modes.

(In fact, for the astronomy applications that leap seconds theoretically help, they _still_ add complication, because it is harder to apply corrections from UTC to an astronomical time base when UTC has discontinuities in it.)
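A minimal sketch of the "TAI + offset" idea mentioned above, in Python. The constant name is mine, and the 37-second figure is the TAI - UTC difference in effect after the 2016-12-31 leap second; under this proposal it would simply never change again:

    # Sketch only: civil time defined as TAI minus a constant.
    # If leap seconds stop being issued, this offset is frozen forever, so
    # downstream systems never see a discontinuity or need a table update.
    TAI_MINUS_CIVIL = 37  # seconds; TAI - UTC after the 2016-12-31 leap second

    def civil_from_tai(tai_seconds: float) -> float:
        """Convert a monotonic TAI second count to the proposed leap-free civil scale."""
        return tai_seconds - TAI_MINUS_CIVIL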



CDMA system time is already defined as free of leap seconds.

---

3GPP2 C.S0002-A section 1.3 "CDMA System Time":

All base station digital transmissions are referenced to a common CDMA system-wide time scale that uses the Global Positioning System (GPS) time scale, which is traceable to, and synchronous with, Universal Coordinated Time (UTC). GPS and UTC differ by an integer number of seconds, specifically the number of leap second corrections added to UTC since January 6, 1980. The start of CDMA System Time is January 6, 1980 00:00:00 UTC, which coincides with the start of GPS time.

System Time keeps track of leap second corrections to UTC but does not use these corrections for physical adjustments to the System Time clocks.

---

I'm pretty sure the only use of leap seconds in CDMA is for converting system time to customary local time, along with the daylight-time indicator and time-zone offset also contained in the sync channel message.

Edit: C.S0005-E section 2.6.1.3 says the mobile station shall store most of the fields of the sync channel message; it may store leap second count, local time offset, and daylight time indicator. This suggests that these fields aren't really that important for talking CDMA.
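For illustration, here is a rough Python sketch of how a handset might combine those sync channel fields into displayable local time. The function name is mine, and the field interpretations (LP_SEC as a whole second count, LTM_OFF in 30-minute units, DAYLT adding one hour) reflect my reading of C.S0005 and are simplified, so treat them as assumptions rather than spec-accurate behavior:

    import datetime

    # CDMA System Time starts at the GPS epoch and contains no leap seconds.
    CDMA_EPOCH = datetime.datetime(1980, 1, 6, tzinfo=datetime.timezone.utc)

    def displayable_local_time(system_seconds: int, lp_sec: int,
                               ltm_off: int, daylt: bool) -> datetime.datetime:
        # Subtract the broadcast leap second count to recover UTC...
        utc = CDMA_EPOCH + datetime.timedelta(seconds=system_seconds - lp_sec)
        # ...then apply the broadcast local-time offset (30-minute units) and
        # the daylight-time indicator.
        return utc + datetime.timedelta(minutes=30 * ltm_off, hours=1 if daylt else 0)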


And yet the parent poster's phone dropped, so the migration from UTC = GPS - N to UTC = GPS - (N+1) would result in the same conniptions we all have dealing with an extra second in the day. Even if it is only at the phone's presentation layer, that's several GB of software that might hold lurking N+1 bugs that cause the data layer to drop.


Let's reframe that: His phone dropped on the new year, when everyone is sending a happy new year message to everyone they know.

No link with the leap second until proven.


It wasn't midnight anywhere with CDMA service when the leap second happened.


Agreed. While the CDMA specification requires tight time syncing, like everything else, the UNIX OSes used to run the equipment can receive the leap indicator. Any problem within the OS, or in the software reading the date/time from the OS, can cause instability.

Also, without knowing more about what exactly went wrong with your phone, it's possible other infrastructure within the network was unstable (signalling equipment, etc.). I can't remember exactly, but I think some of the CDMA equipment at my previous company had a leap second problem in the past. And that equipment is no longer really being maintained or patched.


A past-life $work made UTDOA equipment for GSM networks. The hardware used a number of GPS modules from various vendors to obtain GPS time, which, as someone noted above, includes the leap offset broadcast in the periodic almanac.

Anyway, of three different module vendors, two got leap handling wrong. Then our own code had its own leap bugs, on top of the OS (Solaris) timekeeping bugs such as clock jumps and timezone update issues. Good times.


Some GPSDOs that are used to time CDMA base stations are known to misbehave around leap seconds (though often when the GPS signal sends the leap second warning, not at the leap second itself).


But is it a bug, or network saturation from people wishing each other a happy new year? (And perhaps more so with a selfie than a text message, compared to previous years.)


Given that it's CDMA, most likely the user was in the US or thereabouts, so the leap second was several hours before the local new year.


Happy new year at midnight UTC ... in the Pacific timezone?

Doubtful. :) I've observed a similar outage at the last leap second (and in that case it dropped me off a call, which is why I even checked this time).


AT&T got hit too. Duplicate SMS everywhere. Neville, the T-Mobile CTO, was kind enough to answer when I asked about their prep for it. https://mobile.twitter.com/vvtgd/status/814654159614050304


While we're at it, can we get rid of daylight savings time too?


As a programmer, I hate daylight savings, but as a human being, I love daylight savings.

It's just so nice to get that extra bit of sunlight in the evening.


As a human being, I hate daylight saving time... It's very disruptive to sleep schedules (particularly for children, but adults as well). Traffic accidents spike in the days after the DST switch (likely due in part to the aforementioned sleep disruptions). Summer days are plenty long already...


Yes, exactly my thoughts. While it's great that automated systems and networks can account for this man-made invention of time changes, it is very stressful on humans and also unnecessary. Dissolving it and averaging out to a new standard would help everyone out in the long run.


Or we could just keep time on DST and stop switching. No reason to give up those nice, long summer evenings.


Or maybe we just need to code systems that handle leap seconds correctly.


Two valid points. A third: convince everyone to adopt epoch time for data transfer (seconds since epoch), and let applications that require formatted time do the transformation where it will be used (not earlier). It doesn't make a lot of sense that a timestamp represented as HOUR/MIN/SEC:DAY/YEAR should be passed around on the network of a production system. Leave it to the recipient to convert. I guess this is a subset of your point.
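A minimal sketch of that suggestion, assuming a toy JSON payload (the field names are made up): the producer ships a raw epoch timestamp and only the final consumer formats it for display.

    import datetime
    import json
    import time

    # Producer side: attach a raw epoch timestamp, nothing pre-formatted.
    event = {"name": "login", "ts": time.time()}
    wire = json.dumps(event)  # this is all that crosses the network

    # Consumer side: convert to a human-readable form only at the point of display.
    received = json.loads(wire)
    shown = datetime.datetime.fromtimestamp(received["ts"], tz=datetime.timezone.utc)
    print(shown.strftime("%Y-%m-%d %H:%M:%S UTC"))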


Leap seconds aren't just an issue of formatting times. Leap seconds actually involve turning the UTC-based count of seconds since the epoch back by a second.

You'd have to switch the "seconds since epoch" count to TAI, and that would cause new formatting bugs because all kinds of software assumes that the minute changes on a multiple of 60 seconds since the epoch.


Yes, but even here, seconds since the epoch should remain unaltered, and the correction should be made by whatever is rendering a human-readable date format (to account for every leap second). In most cases the renderer wouldn't have to address it at all (since the output is only being read by humans, and a second's difference does not usually matter), so it's truly a non-issue! The application-layer dev can choose to adjust time in whatever blocks he wants instead of having an if-else chain for every "official" leap second, like adding a minute every few hundred years, to that other commenter's point.


Unfortunately, epoch time is not literally "seconds since epoch", at least not as implemented/standardized as "Unix time". It skips or repeats itself in case of leap seconds. So it can't save us here.

I think if there were such a thing as a different kind of epoch time that really is "seconds since epoch", it would help a lot and work like you suggest.
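For what it's worth, a minimal sketch of how such a literal count could be rendered, with everything here hypothetical: true_seconds is assumed to count every elapsed SI second since the epoch (leap seconds included), and leaps_before is however many leap seconds have occurred up to that instant, looked up from whatever table the renderer keeps.

    import datetime

    def render_utc(true_seconds: int, leaps_before: int) -> str:
        # Fold the accumulated leap seconds back out so the ordinary calendar
        # conversion (which assumes 86,400-second days) still works. The leap
        # second itself would render as a repeat of 23:59:59, the usual
        # POSIX-style compromise.
        posix_seconds = true_seconds - leaps_before
        stamp = datetime.datetime.fromtimestamp(posix_seconds, tz=datetime.timezone.utc)
        return stamp.strftime("%Y-%m-%d %H:%M:%S")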


Yes, I agree -- I am not referring to any specific implementations. Didn't know that about Unix time, but it makes sense for compatibility given the current way we adjust for leap seconds.


Is this correct? Because I am having trouble understanding the rationale behind tying Unix time to the earth's solar calendar, as opposed to just the "number of seconds which have elapsed since the Unix epoch". Do you have an example of this implementation? The Wikipedia article regarding epochs notes many counter-examples.


The Wikipedia article on Unix time describes precisely this issue. Unix time does not count leap seconds, which means that when a leap second occurs, the transition over UTC midnight has to absorb an extra second. Strictly following the standard, the Unix timestamp therefore rolls backwards by one second over midnight, which is precisely the kind of behavior that breaks systems depending on continuous timestamps: https://en.wikipedia.org/wiki/Unix_time#Leap_seconds

It sounds like in your scenario, you would prefer Unix time to instead include the leap second, so that no rollback or time smearing behavior would need to occur. I believe the reason it does not has to do with simplicity: current systems rely on a day being 86,400 seconds, making each year (regardless of leap days) a multiple of 86,400. Leap seconds break this simple assumption. While it would be simple for a new time formatting system to take leap seconds into account, it is not so simple to go and retrofit all of the existing systems for a new formatting standard, and convince so many different groups of developers to change that much code while also agreeing with one another about the changes.
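To make the 86,400-second-day assumption concrete, here is a tiny illustration (the sample timestamp is 2016-12-31 23:59:59 UTC). Arithmetic like this is pervasive, and it only stays correct because POSIX time pretends leap seconds never happened:

    SECONDS_PER_DAY = 86_400

    def split_days(posix_seconds: int) -> tuple[int, int]:
        """Return (whole days since 1970-01-01, seconds into that day)."""
        return divmod(posix_seconds, SECONDS_PER_DAY)

    days, secs = split_days(1_483_228_799)   # 2016-12-31 23:59:59 UTC
    print(days, secs)                        # 17166 86399
    # If the count included the leap second that followed, every later timestamp
    # would land one second "late" in this arithmetic unless a leap table were
    # consulted; that is exactly the retrofit problem described above.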


I wrote a WebDAV client the other year, and dates were one of the hardest parts to implement because they were expected in calendar format! It seemed so odd to me that they would do that.


HTTP/1.x headers are intended to be human-readable.


That one is called ephemeris time (ET). Astronomical applications need to know the current difference UT - ET.


I believe those efforts could be better spent making systems more robust against other threats that can't be avoided by simply deciding to stop cutting ourselves.

Besides: Even expensive commercial time keeping devices frequently mishandle leap seconds. History suggests that we are underestimating how difficult they are to get right in complex systems.


It's difficult to write bug-free code for events that happen very infrequently. Even more so when the distributed nature of the system makes effective testing under real-world conditions nearly impossible.


Yeah, that's exactly why Leap Day, which only comes around every 4th year, is such a disaster.


The ratio of embedded systems that care about the calendar to those that care about time has to be astronomical.


Maybe we should code systems with zero bugs. /s


Well, yeah, that's not a bad idea. It was kind of Dijkstra's whole thing. The problem is that, at current levels of technology, it's economically better to write cheap buggy software than more expensive bug-free software for almost all consumer applications. We are gradually pushing the optimality curve towards the provably bug-free end of the spectrum, but it will take time.


I thought we were pushing the optimality curve toward ever greater volumes of ever buggier code.


Glad that TOTP codes refresh every 30 seconds and are generally valid for at least 1 minute. One second more or less wouldn't make a large difference.
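A minimal RFC 6238-style sketch (SHA-1, 6 digits, 30-second step) showing why: the code depends only on floor(unix_time / 30), and verifiers normally accept the neighbouring steps too, so a one-second offset almost never changes which codes are accepted. The function name and demo secret are just for illustration.

    import hashlib
    import hmac
    import struct
    import time

    def totp(secret: bytes, unix_time: float, step: int = 30, digits: int = 6) -> str:
        # HOTP over the number of whole time steps since the epoch (RFC 6238 style).
        counter = int(unix_time // step)
        mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
        return str(code).zfill(digits)

    # A one-second shift only matters if it happens to cross a 30-second boundary,
    # and even then the verifier's usual +/- one-step window covers it.
    print(totp(b"12345678901234567890", time.time()))
    print(totp(b"12345678901234567890", time.time() + 1))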


Someday when everyone switches to Rust we can have a standard library that handles all of this and software bugs will be obsolete ;)


Or maybe we should start holding project managers accountable for these issues instead of simply basing their performance on deadlines.


Classic Hacker News, it's always the PM or the management at fault, engineers are faultless.


I'd have time to develop a new toy programming language as a service alongside 3 new JavaScript frameworks in 2017 if it weren't for the evil PMs.


;-) 98% of developers would be clueless about leap second issues. Hell, 50% struggle with leap years.


> if you can’t measure it, you can’t manage it.


Just do a leap minute every 60 theoretical leap seconds and reduce the number of times these problems occur by a factor of 60 (and they always do occur, because fallible humans program machines).



