
There are a lot of points in time that don't have a location, such as a historical event. For those, it's most accurate to store in UTC, then convert to the timezone you want only at display time.

The author actually makes this point at the beginning of the article.

> When I read Stack Overflow questions involving time zones, there’s almost always someone giving the advice to only ever store UTC. Convert to UTC as soon as you can, and convert back to a target time zone as late as you can, for display purposes, and you’ll never have a time zone issue again, they say.

But what if you're not actually storing "a point in time" but rather a "time at a location"? The trick is to follow the Stack Overflow advice, just in the opposite direction: store the time as a "wall time", without a timezone or an offset, and convert it to UTC (an actual point in time) at the last possible minute.

For a given location, the timezone and daylight saving rules it follows are always subject to change, so the only things you know for sure are the "wall time" and the "location". Waiting until the last minute maximizes the chance that you've got the latest time library with the latest rules, so it follows the same principle as the Stack Overflow advice.

Now, my opinion might also be colored by my experience working in the rental car industry, where the primary goal is what the customer experiences. In their mind, they set up their rental in Phoenix, AZ to start at 9am, so when they're at the counter and it says 9am on the "wall", the car had better be there. They don't want to hear that the time zone rules for Phoenix changed in the 6 months since they placed the rental, so "technically" their rental doesn't start until 10am. So in our database, we actually just store "07/23/2024;0900;Phoenix". It's actually incorrect to even store a timezone or an offset, because there's no guarantee those won't change; only the location won't change. You have to look up the timezone and its rules for the given location at the very last minute, maximizing your chances of having all the latest time library updates.
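A minimal sketch of that last-minute conversion in Python, using the stdlib zoneinfo; the record format is the one above, and the location-to-zone lookup table is a hypothetical stand-in for whatever mapping your system maintains:

    from datetime import datetime
    from zoneinfo import ZoneInfo

    # Hypothetical mapping from a stored location to its current IANA zone.
    # This lookup runs at display time, not at booking time, so it always
    # reflects the latest rules your time library knows about.
    LOCATION_TO_ZONE = {"Phoenix": "America/Phoenix"}

    stored = "07/23/2024;0900;Phoenix"  # wall time + location; no offset, no zone

    date_s, time_s, location = stored.split(";")
    wall = datetime.strptime(f"{date_s} {time_s}", "%m/%d/%Y %H%M")

    # Last possible minute: attach the zone under today's tzdata rules, then
    # derive the actual instant (UTC) only if something downstream needs it.
    local = wall.replace(tzinfo=ZoneInfo(LOCATION_TO_ZONE[location]))
    utc = local.astimezone(ZoneInfo("UTC"))
    print(local.isoformat(), "->", utc.isoformat())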


The 2008 mortgage crisis is now 16 years old, and the stock market had recovered by 2011. So presumably anyone who entered the work force afterwards might have little sense of it, which would be anyone under the age of 31 (or 35 if you count college).


On climate change, it already did work once with the hole in the ozone layer.

People ask why they don't hear about it anymore -- oh, it's because we listened and banned CFCs and fixed the hole.


> What does Altman bring to the table exactly. What is going to be lost if he leaves.

If Altman did literally nothing else for Microsoft, except instantly bring over 700 of the top AI researchers in the world, he would still be one of the most valuable people they could ever hire.


It's probably helpful to first get a sense of how much work it is to gather customer feedback. As an example, Qualtrics was getting single-digit percent response rates on a simple NPS survey (https://delighted.com/blog/average-survey-response-rate). If you're asking people to give you more detailed feedback, I'd imagine the hit rate would be even lower. Another data point: people who gather feedback professionally often offer customers $100 for an hour of their time.


New grad talent is a huge pipeline for Google and every other FAANG, and they'll recruit at every large, well-rated engineering program they can find, the vast majority of which aren't MIT. There are also tons of engineering programs beyond those, though for those they may do all their recruiting through the colleges' online resources, since they have a limited number of humans who can physically travel.


Online recruiting isn't the same as in-person recruiting, though. Note that Google (just using them as a stand-in) recruits in person at MIT but doesn't at the other places, where they only "recruit online".


I gave a non-exhaustive list of universities Google does on-site recruiting at.


From the provided wiki link, it doesn't mean to refuel the car, it means to "step on the gas."


Supporting IE6, IE7, and IE8 separately, while developing on Firefox just so we could use Firebug.


I'm so glad Internet Explorer/Edge/Trident is Officially Over. If you told me in 2010 that in 10 years the worst browser I'd have to support was Safari/Webkit... I'd be overjoyed.


One thing (only thing?) I honestly miss about IE5.5-8 is how amenable the engine was to polyfilling. It wasn't fast, but you could do almost anything with the right polyfill technique.

No sessionStorage? Use window.name. No (then-)modern CSS? Use CSS3PIE [0]. IE doesn't support the transform CSS property? Use an *.htc behavior to convert the transform to a matrix filter.

It was madness, and it was beautiful in a Cthulhu kind of way.

[0] http://css3pie.com/


I was on the "fuck ie6" train. I had to write some IE6 support in like 2015; I almost quit the job.


It's the scale of it.

"Here is a vaguely understood problem, that might take 2 hours to solve."

"Here is a vaguely understood problem, that might take 2 years to solve."

Though if you're used to the latter timescale, you might not consider the former all that "vaguely" understood.


For us, we started off with a world where each service communicates with the others only via RabbitMQ, so everything is fully async. Theoretically, each service should be able to be down for as long as it likes with no impact on anyone else; then it comes back up, starts processing messages off its queue, and no one is the wiser.
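A minimal sketch of that pattern with pika (queue name and payload are made up); the durable queue is what lets a service be down for however long and then catch up:

    import json
    import pika

    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    ch = conn.channel()
    # Durable queue: messages accumulate while the consuming service is down.
    ch.queue_declare(queue="service_b.updates", durable=True)

    # Producer side (service A): fire-and-forget, fully async.
    ch.basic_publish(
        exchange="",
        routing_key="service_b.updates",
        body=json.dumps({"rental_id": 1, "location": "Phoenix"}),
        properties=pika.BasicProperties(delivery_mode=2),  # persist the message
    )

    # Consumer side (service B): on restart, drains whatever queued up.
    def on_message(channel, method, properties, body):
        # ...upsert into service B's own copy of the data, then ack...
        channel.basic_ack(delivery_tag=method.delivery_tag)

    ch.basic_consume(queue="service_b.updates", on_message_callback=on_message)
    ch.start_consuming()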

Our data is mostly append-only, or, if it's being changed, there is a theoretical final "correct" version of it that we should converge to. So to "get" data, you subscribe to messages about some state of things, and each service is responsible for managing its own copy in its own db. This worked well enough until it didn't, and we had to start doing true-ups from time to time to keep things in sync, which was annoying but not particularly problematic, since we design under the assumption that everything is async and convergent.

The optimization (or compromise) we decided on was that all of our services use the same db cluster, so if the db cluster goes down, everything is down. Therefore, since we can assume the db is always up even when a service is down, we consider it an acceptable constraint to provide a readonly view into other services' dbs. Any writes are still sent async via MQ. This eliminates our sync-drift problem while still allowing for performant joins, which HTTP APIs are bad at and which our system uses a lot.

So then, back to your original question: the way this contract can break is via schema changes. Since we use Postgres, we created database views that we expose for reading, and we constrain view updates so they must always be backwards compatible from a schema perspective. So now our migration path is (a code sketch follows the list):

- service A has some table of data that you'd like to share

- write a migration to expose a view for service A

- write an update for service B to depend upon that view

- service B now needs some more data in that view

- write a db migration for service A that adds that missing data, but keeping the view fully backwards compatible
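Sketched as Postgres DDL run from a migration (all names hypothetical). Conveniently, CREATE OR REPLACE VIEW enforces the compatibility rule for you: it only allows new columns appended at the end, with existing columns keeping their names and types:

    import psycopg2

    conn = psycopg2.connect("dbname=shared_cluster")
    with conn, conn.cursor() as cur:
        # Migration on service A: expose a read-only view over its table.
        cur.execute("""
            CREATE VIEW service_a.rentals_v AS
            SELECT id, wall_time, location
            FROM service_a.rentals
        """)

        # Later migration, once service B needs more data: append a column,
        # leaving every existing column's name and type untouched.
        cur.execute("""
            CREATE OR REPLACE VIEW service_a.rentals_v AS
            SELECT id, wall_time, location, vehicle_class
            FROM service_a.rentals
        """)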


> So then, back to your original question: the way this contract can break is via schema changes. Since we use Postgres, we created database views that we expose for reading, and we constrain view updates so they must always be backwards compatible from a schema perspective. So now our migration path is:

> - service A has some table of data that you'd like to share

> - write a migration to expose a view for service A

> - write an update for service B to depend upon that view

> - service B now needs some more data in that view

> - write a db migration for service A that adds that missing data, but keeping the view fully backwards compatible

I don't think I understand. You need to update (and deploy) service B every time you perform a view update (from service A), even though it's backwards compatible?


If service B needs some new data from the view that isn't being provided, then you first run the migration on service A to update the view and add a column. Then you can update service B to utilize that column.

If you don't need the new column, then you don't need to do anything on service B, because you know that the existing columns on the view won't be removed and their types won't change. You only need to make changes on service B when you want to take advantage of those additions to the view.
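So on service B, the read path is just a plain SELECT against the exposed view (continuing the hypothetical names from the sketch above), and it keeps working across view updates because it only names columns it already depends on:

    import psycopg2

    conn = psycopg2.connect("dbname=shared_cluster")
    with conn, conn.cursor() as cur:
        # Reads go straight to service A's view; writes still go async via MQ.
        cur.execute("SELECT id, wall_time, location FROM service_a.rentals_v")
        for rental_id, wall_time, location in cur.fetchall():
            ...  # update service B's local copy / serve the request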


This only works if you apply backwards-compatible changes all the time. Sometimes you do want to make incompatible changes in your implementation. Database tables are an implementation detail, not an API, even when you expose them through a view.

But hey, every team and company has to find their strategy to do things. If this works for you, that's great!

It's just not a microservice by definition.


I would never claim that our setup uses microservices. Probably just more plainly named "services".

And yes, that is correct: we agree that once we expose a view, we won't remove columns or change column types. Theoretically we could effectively deprecate a column by having it just return an empty value. Our use cases are such that changes to these views happen at an acceptable rate, and backwards-incompatible changes are acceptably rare.

Our views are also often joins across multiple tables, or computed values, so even if a view is often quite close to the underlying tables, it's intentionally meant to be used as an abstraction on top of them. The views are designed first from the perspective of: what form of data do other services need?
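For example (hypothetical names again), a view that joins two tables, computes a value, and stubs out a deprecated column while keeping its name and type:

    import psycopg2

    conn = psycopg2.connect("dbname=shared_cluster")
    with conn, conn.cursor() as cur:
        cur.execute("""
            CREATE OR REPLACE VIEW service_a.bookings_v AS
            SELECT b.id,
                   b.wall_time,
                   loc.name AS location,                -- joined from a second table
                   b.daily_rate * b.num_days AS total,  -- computed value
                   ''::text AS legacy_notes             -- deprecated: empty, name/type kept
            FROM service_a.bookings b
            JOIN service_a.locations loc ON loc.id = b.location_id
        """)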



