Hacker News | donjoe's comments

To me, the most important question is: how do I scale v7 in an environment of 20+ engineers?

When using v7, I need some sort of audit that checks every API contract for usage of v7 and potential information leakage.

Detecting v7 UUIDs in the API contract would probably require me to enforce a special key name (uuidv7, and uuid for v4) for easier auditing.

Engineers will get this wrong more than once - especially in a mixed team of juniors and seniors.

Also, the API contracts will look a bit inconsistent: some resources will be addressed by v7, others by v4. On top of that, by using v4 for certain resources, I'd leak the information that those resources contain sensitive data.

By sticking to v4, I'd have the same identifier for all resources across the API. When needed, I can expose the creation timestamp separately in the response. Auditing is much simpler since the fields state explicitly what they contain.


It is human engineer problems all the way down.

UUIDv4 is explicitly forbidden in some high-reliability/high-assurance environments because there is a long history of engineers using weak entropy sources to generate UUIDv4 despite the warnings to use a strong entropy source, which is only discovered when it causes bugs in production. Apparently some engineers don't understand what "strong entropy source" means.

Mixing UUID versions should be detectable because the version is part of the UUID. But then many companies have non-standard UUIDs that overwrite the version field, mixed with standard UUIDs across their systems. In practice, you often have to treat a UUID as an opaque 128-bit integer with no attached semantics.


> Detecting V7 uuids in the API contract would probably require me to enforce a special key name (uuidv7 & uuid for v4) for easier audit.

Unless I'm missing something, check it on receipt and reject it if it doesn't match: `uuid.replace(/-/g, "")[12]` on the string form, or `uuid >> 76 & 0xf` on the 128-bit integer form.
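A minimal sketch of that check in Go (the sample UUIDs below are made up for illustration): after stripping the dashes, the version is simply the 13th hex digit.

```go
package main

import (
	"fmt"
	"strings"
)

// uuidVersion extracts the version nibble of a canonical UUID string.
// It returns -1 for malformed input.
func uuidVersion(u string) int {
	hex := strings.ReplaceAll(u, "-", "")
	if len(hex) != 32 {
		return -1
	}
	switch c := hex[12]; {
	case c >= '0' && c <= '9':
		return int(c - '0')
	case c >= 'a' && c <= 'f':
		return int(c-'a') + 10
	case c >= 'A' && c <= 'F':
		return int(c-'A') + 10
	}
	return -1
}

func main() {
	fmt.Println(uuidVersion("018f3c2e-4b7a-7c1d-9e2f-123456789abc")) // 7
	fmt.Println(uuidVersion("3f2504e0-4f89-41d3-9a0c-0305e82c3301")) // 4
}
```

An API gateway or middleware can call this on every incoming identifier and reject anything whose version doesn't match the contract.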

Regardless of difficulty, this comes down to priorities. Potential security concerns aside (I maintain this really does not matter nearly as much as people think for the majority of companies), it's whether or not you care about performance at scale. If your table is never going to get over a few million rows, it doesn't matter. If you're going to get into the hundreds of millions, it matters a great deal, especially if you're using them as PKs, and doubly so if you're using InnoDB.


> By sticking to v4, I'd have the same identifier for all resources across the API. When needed, I can expose the creation timestamp in the response separately. Audit is much simpler since the fields state explicitly what they will contain

Good luck if you're operating at a decent scale, and need to worry about db maintenance/throughput. Ask the DBA at your company what they would prefer.


If you read the prior comment, this is now an ouroboros.


... If you live close to Theresienwiese, the city also provides free cleaning for any Oktoberfest accidents in your front or back yard. I have to smile every time I find the note with the emergency accident-cleanup number in my mailbox :-)


Well, I live very close to Theresienwiese and don't have any problems at all. During the two weeks of Oktoberfest they clean the streets every morning at 4am, and everything is clean and shiny again. Also, now that Theresienwiese is encircled by a fence during Oktoberfest, the number of drunk people has fallen dramatically.


Let's hope it's deepseek-compatible.


testcontainers is great. I struggled a bit at first due to its one-container-per-test nature, which just felt too slow for writing gray-/black-box tests: the startup time for Postgres was over 10 seconds. After a bit of experimenting, I am now quite happy with my configuration, which gives me a snappy, almost instant testing experience.

My current setup:

- generate a new psql testcontainer _or_ reuse an existing one by using a fixed container name

- connect to the psql container with no database selected

- create a new database using a random database name

- connect to the randomly generated database

- initialize the project's tables

- run a test

- drop the database

- keep the testcontainer up and running, and reuse it for the next test
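The create/drop dance can be sketched like this (database names can't be bound as SQL parameters, so the random name has to come from a safe alphabet and be spliced into the statement; the `test_` prefix and statement shapes are my assumptions):

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// randomDBName returns an identifier such as "test_9f1c2a0b8d3e4f55".
// Hex output keeps the name safe to splice into DDL statements.
func randomDBName() string {
	b := make([]byte, 8)
	if _, err := rand.Read(b); err != nil {
		panic(err)
	}
	return "test_" + hex.EncodeToString(b)
}

func main() {
	name := randomDBName()
	// Issued on the maintenance connection of the (reused) container:
	fmt.Printf("CREATE DATABASE %s;\n", name)
	// ...connect to name, run migrations, run the test, then:
	fmt.Printf("DROP DATABASE %s;\n", name)
}
```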

With this setup, most tests run sub-second.


If your table-setup process starts to get slow like ours, check out Postgres database templates (https://www.postgresql.org/docs/current/manage-ag-templatedb...). Do the setup once in a database with a known name, then use it as the template when creating the database for each test.
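A sketch of the per-test statements this replaces the migrations with, assuming the pre-migrated template database is named `app_template` (both names are illustrative):

```go
package main

import "fmt"

// cloneFromTemplate builds the statements used instead of re-running
// migrations: Postgres copies the template database at the file level,
// which is much faster than executing the schema DDL again.
func cloneFromTemplate(testDB string) (create, drop string) {
	create = fmt.Sprintf("CREATE DATABASE %s TEMPLATE app_template;", testDB)
	drop = fmt.Sprintf("DROP DATABASE %s;", testDB)
	return
}

func main() {
	c, d := cloneFromTemplate("test_42")
	fmt.Println(c)
	fmt.Println(d)
}
```

Note that Postgres refuses to clone a template while other sessions are connected to it, so the setup connection to `app_template` must be closed first.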


You can run Pg against a RAM disk too, if the container isn't effectively already doing that.

And tests can often use a DB transaction to rollback, unless the code under test already uses them.


Which is perfectly fine. However, you will only be able to process a single message per connection at a time.

What you would do in go is:

- either a new goroutine per message

- or installing a worker pool with a fixed number of goroutines accepting messages for processing
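A minimal sketch of the worker-pool variant (the message slice stands in for a websocket read loop such as gorilla's `conn.ReadMessage`; names and pool size are illustrative):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// process fans incoming messages out to a fixed-size goroutine pool and
// returns how many were handled. Because workers drain the channel
// concurrently, the read loop never blocks on slow processing.
func process(msgs []string, workers int) int64 {
	ch := make(chan string)
	var handled atomic.Int64
	var wg sync.WaitGroup

	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for m := range ch {
				_ = m // stand-in for real message processing
				handled.Add(1)
			}
		}()
	}

	for _, m := range msgs { // stand-in for the per-connection read loop
		ch <- m
	}
	close(ch) // connection closed: let the workers drain and exit
	wg.Wait()
	return handled.Load()
}

func main() {
	fmt.Println(process([]string{"a", "b", "c"}, 4)) // 3
}
```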


Another option is to have a read- and a write-pump goroutine associated with each gorilla ws client. I found this useful for gateways (wss <--> *).


useMemo should not be used for fetching or kicking off a fetch either. useMemo functions should be pure. Putting logic that belongs in useEffect (logic that happens _outside_ the reactive flow) into useMemo can lead to side effects which are very hard to debug. Just one example: a lot of fetch implementations use a cached fetch triggered in useMemo that returns immediately. You will probably have a setState somewhere in that flow, which will badly interrupt React and break your page.

And if you trigger a native fetch there, you've got no way to cancel the call, since useMemo has no cleanup function.


> UseMemo should not be used for fetching/kicking off a fetch either

Wrong. That's just like, your opinion.

The best time to kick off external calls is during first render, not after the component is mounted and React has gotten around to calling your useEffect callback.

useMemo can be part of the solution, along with useRef and a useEffect cleanup (remember that the cleanup is essentially the unmount lifecycle hook as well).


Bikes are all about efficiency, since you don't use any energy but your own. That changes a bit nowadays with e-bikes.

When it comes to efficiency, internal gear hubs sadly aren't yet in the same range as a rear derailleur.

https://fahrradzukunft.de/17/wirkungsgradmessungen-an-nabens...

A dirty rear derailleur of course also reduces the drivetrain's efficiency by a lot - which can be solved by cleaning the chain every other month.

When moving towards belt drives, you need a very stiff frame which can be opened/split at the rear to fit the belt. These frames are more expensive to produce, which further increases the overall price of the bike.

Ideally, we all train our legs to be able to handle a single speed setup ;-)


You don't NEED an opened/split frame for a belt. There are so many ways to solve this problem.



... besides the change towards hooks, and the React dev team, which does not seem to be clear about how best to use them.

The new beta docs just recently changed again, removing old best practices concerning dependency arrays in useEffect in favor of a potential new hook called useEffectEvent (which is still experimental).

I love working with React. However, it takes _a lot of time_ to onboard new engineers for tasks which are a bit more complicated in nature. Also, using hooks the wrong way can really mess up your product big time.

It would be nice to see React moving in a direction which is, by design and architecture, less error-prone.


Umidigi is another company worth mentioning. Their Bison models pretty much fill the gap between a massive rugged phone and a standard phone. The battery, at around 6,000 mAh, usually covers a 2-3 day trip without charging.

