Hacker News | dantiberian's comments

Could you please pass on feedback to the team that the git diff view is very hard to read for red/green colourblind users like myself? A colour scheme like GitUp.co's is very readable for me.


I'd be very interested to know how they produce it if the formula is so tightly held. At some point people need to be purchasing the ingredients and mixing them together.


It's possible to separate out these tasks such that no single person or group has every needed piece of the puzzle.

The Carthusian monks who produce Chartreuse (a collection of herbal liqueurs popular in cocktails) have successfully produced it, and protected its secret 130-ingredient recipe, for over 400 years. At any given time no more than three of the monks hold the entire recipe, and yet they have formed a company that executes most of the production without the secret being leaked.

The designated monks coordinate production and are involved in QC, as well as developing new blends for special releases, but much of the production is done by paid employees who do not know the complete recipe.

https://en.wikipedia.org/wiki/Chartreuse_(liqueur)


I suspect, though, that a lot of the secret behind Chartreuse isn't just the recipe but the actual sourcing of the ingredients.

Presumably the recipe relies on very unusual herbs specific to the Alps. Part of the justification for limiting supply is concern for the environment and the sustainability of their production. The order also had to cease production while they were evicted.

I wouldn't be surprised if some of the key ingredients were wild-foraged, or at least very unusual species.


> secret 130 ingredient recipe

One of the greatest use cases of security by obscurity, especially if some of the ingredients are decoys.


You could say the same about cryptographic signatures where each party only knows a part of the key, yet those all work fine. You could probably piece together the formula by a sum of some employees and some external suppliers if everyone broke their NDA, but if people keep their word, your factories could just as well see shipments of "Ingredient A" and the worker only knows how much to add to each batch.
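The idea that each party can hold a useless-on-its-own fragment while the whole still reconstructs exactly is the same principle as secret sharing. A minimal sketch of the XOR variant (illustrative only; real threshold schemes like Shamir's are more flexible):

```python
import secrets

# XOR secret sharing: split a secret into n shares so that any single share
# is indistinguishable from random noise, but XOR-ing all n shares together
# recovers the secret exactly. Analogy: each supplier only ever sees
# "Ingredient A" plus a quantity.

def split(secret: bytes, n: int) -> list[bytes]:
    # n-1 shares are pure randomness...
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    # ...and the last share is the secret XOR-ed with all of them.
    last = secret
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    shares.append(last)
    return shares

def combine(shares: list[bytes]) -> bytes:
    out = bytes(len(shares[0]))
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

shares = split(b"130 secret herbs", 3)
assert combine(shares) == b"130 secret herbs"
```

Note the all-or-nothing property: with only two of the three shares you learn nothing, which is stronger than the NDA-based split described above.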


Real life ain't abstract math. You have the MSDS issue 'mulmen mentioned, but I also can't imagine any factory being able to just mix shipments of ingredients "A", "B", "C", etc. without the actual contents being documented on purchase orders, OSHA reviews, etc. You may want to operate in secret, but at the very least the taxman really wants to know you aren't skimping on your dues, so there should be plenty of relevant documents in circulation.


Since they're operating in Europe, it's trivial to split manufacturing into 3+ places that are within an hour's drive but also in 3+ distinct jurisdictions within the same free trade zone, so no tax authority can have a full picture either. And you'll never get, say, the French and German tax authorities to voluntarily talk to each other.


I do recall some episode of "How It's Made" or similar about a food factory discussing a mix they were making for a fast food chain, IIRC, that involved "two separate bags of spices, each sourced from a separate supplier for secrecy". That's about the level I'd expect out of such a scheme.


I wonder how much information leaks through something like Material Safety Data Sheets.


Exactly what I was thinking. I mean, how can you produce something, especially in bulk, when the exact ingredients and quantities aren't known? Assuming it's made in a typical factory, the machines would have to be programmed, and that would typically mean someone has to know. I wonder if they split the knowledge over several different groups so each group only knows a single piece? Hmm....


This is how they do it. There was a documentary about Coca-Cola, and they explained that they completely separated the supply pipeline. Operators handle unlabelled sources coming from separate parts of the company.


It's a myth that Coca-Cola is a closely held secret, though. Any food flavoring specialist can reconstruct the flavor of Coke almost exactly.

A few years ago I (not a specialist!) made lots of batches of OpenCola, which is based partly on the original Pemberton recipe, and it comes so close that nobody could realistically tell the difference. If anything, it tastes better, because I imagine Coke doesn't use fresh, expensive essential oils (like neroli) for everything.

The tricky piece that nobody else can do is the caffeine (edit: de-cocainized coca leaf extract) derived from coca leaves. Only Coke has the license to do this, and from what I gather, a tiny, tiny bit of the flavour does come from that.


> If anything, it tastes better, because I imagine Coke doesn't use fresh, expensive essential oils (like neroli) for everything.

I've not participated in cola tasting, but assuming fresher tastes better isn't really a safe assumption. Lots of ingredients taste better, or are better suited for recipes, when they're aged. I've got pet chickens and their eggs are great, but you have to let them sit for many days if you want to hard boil them, and I'd guess baking with them may be tricky for sensitive recipes.

Anyway, even if it does taste better, for whatever that means, that's not meeting the goal of tasting consistently the same as Coke, in whichever form. If you can't tell me whether it's supposed to taste like Coke from a can, glass bottle, plastic bottle, or fountain, then you've told me all I need to know about how closely you've replicated it.


I think my point flew past you: if I can make a 99% clone of Coke in my kitchen, any professional flavorist can do it 100%. The supposed secret recipe isn't why Coke is still around; it's the brand.

And by fresh I do mean: The OpenCola is full of natural essential oils (orange, neroli, cinnamon, lime, lavender, lemon, nutmeg), and real natural flavor oils have a certain potent freshness you don't get in a mass-produced product.


> you don't get in a mass-produced product.

But you are trying to reproduce a mass-produced product.


I'm merely making the point that there's nothing magical about the recipe. Anyone wanting to truly replicate it for mass production can simply use commodity flavor compounds.


> caffeine derived from coca leaves

Coca leaves contain various alkaloids, but not caffeine. Coca-Cola gets its caffeine from (traditionally) kola nuts, and (today, presumably) the usual industrial sources.


Not sure what happened with my brain there. I did indeed mean de-cocainized coca leaves, not caffeine.


Um… might want to double check your brain there!


You had better luck than I did. I tried my hand at making OpenCola, put around $300 into it (between the carbonation rig and essential oils, primarily), and while I'd say it was "leaning towards Coke", I'd also definitely say that nobody would mistake it for Coke.


I noticed it was incredibly important to get the recipe mixture exactly right, because even a slight measurement error resulted in weirdly wrong flavors.

I did my OpenCola experiment in the company office together with a colleague, and we ended up hooking it up to a beer tap, with a canister of CO2. I'm proud to say the whole office really got into it.


Some YouTuber basically reverse engineered it, and he found that the main thing contributed by the coca leaves was tannins.

https://www.youtube.com/watch?v=TDkH3EbWTYc&t=209s



I've heard from others that this is how defense software engineering goes.

You write code for a certain part/spec that could go on a number of things (missile, airplane, etc.). You don't know if your code will be used in a missile or not.


Slightly unrelated, the recent LabCoatz video went into a bit about the CocaCola recipe and how it's protected: https://youtu.be/TDkH3EbWTYc?si=GuvCd-kKXP5_gcRs&t=26

He mentions that the ingredients are shipped unlabeled from different facilities that don't know what they're making.

He then goes on to reverse engineer the formula. Because science.


A fairly obvious solution (IMO) would be to have multiple people buying the ingredients, some even buying unused ingredients. That would cover purchasing.

The mixing, again, could be spread out: have factory A mix ingredients X, Y, and Z, factory B mix ingredients Alpha, Beta, and Gamma, and factory C mix factory A's and B's mixtures.


Considering how complex some software can get, it's more surprising that there are people who can hold enough of the whole design in their heads to have a good idea of what's going on in general.


From the article:

The intention of OrioleDB is not to compete with Postgres, but to make Postgres better. We believe the right long-term home for OrioleDB is inside Postgres itself. Our north star is to upstream what’s necessary so that OrioleDB can eventually be part of the Postgres source tree, developed and maintained in the open alongside the rest of Postgres.


OK, just saved to the file cringespeak.txt:

"Our north star is to..."

:)


Looks like it, based on this video driving on the left side of the road in what looks to be Australia: https://www.youtube.com/watch?v=Fkh3s6WHJz8


I listened to https://www.localfirst.fm/18 recently from Electric-SQL. One of the things James mentioned was that Electric lets you use commodity CDNs for distributing sync data, which takes the load off your main Postgres and servers.

This seems like a good pattern, but of lower value for a SaaS app with many customers storing private data in your service. This is because the cache hit-rate for any particular company's data would be low. Is this an accurate assessment, or did I misunderstand something?


Hey, one of the things here is to define shapes that are shared. If you imagine syncing a shape that is just that user's data, then it may be unique. But if you sync, say, one shape per project that the user has access to, plus a small shape of unique user data, then you get a shared cache between all the users who have access to each project.

It’s worth noting that Electric is still efficient on read even if you miss the CDN cache. The shape log is a sequential read off disk.
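The cache-sharing argument comes down to the request URL being the cache key. A tiny sketch of why per-project shapes hit the CDN while per-user shapes don't (the endpoint and parameter names here are illustrative, not Electric's exact API):

```python
from urllib.parse import urlencode

# Hypothetical shape URLs: the URL doubles as the CDN cache key, so any two
# users requesting the same project produce the same URL -> one cached object.

def project_shape_url(base: str, table: str, project_id: int) -> str:
    return f"{base}/v1/shape?" + urlencode({"table": table, "where": f"project_id={project_id}"})

def user_shape_url(base: str, table: str, user_id: int) -> str:
    return f"{base}/v1/shape?" + urlencode({"table": table, "where": f"user_id={user_id}"})

base = "https://cdn.example.com"
# Alice and Bob both work on project 42: identical URLs, shared cache entry.
assert project_shape_url(base, "tickets", 42) == project_shape_url(base, "tickets", 42)
# Per-user shapes are never shared, so the hit-rate collapses.
assert user_shape_url(base, "tickets", 1) != user_shape_url(base, "tickets", 2)
```

So the SaaS hit-rate question above is really about how much of each customer's data can be modelled as shapes shared across that customer's users, rather than as per-user shapes.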


I'm curious how you'd configure this. Is it common (and safe) to let a CDN cache private data for authenticated users?

Say Jira used Electric; would you be able to put all tickets for a project behind a CDN cache key? You'd need a CDN that can run auth logic, such as verifying a JWT, to ensure you don't leak data to unauthorized users, right?


Yup, you can put an auth proxy in front of the CDN, for example using an edge worker.

See the auth guide: https://electric-sql.com/docs/guides/auth

Some CDNs also validate JWTs, so the CDN can be the proxy part of the Gatekeeper pattern (in the guide).
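The gatekeeper idea is simple enough to sketch with the standard library: the app server mints a signed token scoped to one shape, and the proxy in front of the CDN verifies it before forwarding. Everything here (token layout, claim names) is illustrative, not Electric's actual format; a real deployment would use proper JWTs.

```python
import base64, hashlib, hmac, json, time

# Key shared between the app server (which mints tokens) and the proxy
# (which checks them). Illustrative only.
SECRET = b"shared-between-app-and-proxy"

def mint_token(shape: str, ttl: int = 300) -> str:
    """App server: authorize access to exactly one shape, for a limited time."""
    claims = json.dumps({"shape": shape, "exp": int(time.time()) + ttl})
    body = base64.urlsafe_b64encode(claims.encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def check_token(token: str, requested_shape: str) -> bool:
    """Proxy: verify signature, expiry, and that the shape matches the claim."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(body))
    return claims["shape"] == requested_shape and claims["exp"] > time.time()

tok = mint_token("project:42")
assert check_token(tok, "project:42")       # authorized shape passes
assert not check_token(tok, "project:99")   # any other shape is rejected
```

Binding the token to a specific shape is what makes the CDN-cached data safe: even with a valid token, a user can only pull shapes they were explicitly granted.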


Another option for scaling reads is just putting an nginx in your cluster.

Electric itself is quite scalable at reads too, so for a SaaS use case you might not need any HTTP proxy help.


Will this be partially available from the Claude website for connections to other web services? E.g. could the GitHub server be called from https://claude.ai?


At the moment only Claude Desktop supports MCP. Claude.ai itself does not.


Any idea on timelines? I'd love to be able to have generation and tool use contained within a customer's AWS account using Bedrock. I.e., I pass a single CDK that can interface with an exposed internet MCP service and an in-VPC service for sensitive data.


https://where.durableobjects.live is a good website that shows you where they live. Only about 10-11% of Cloudflare PoPs host durable objects. Requests to create a DO at any other PoP will get forwarded to one of the nearby PoPs which do host them.


The issue here is that if company.com does not use Google Workspace and hasn't claimed company.com, then any employee can sign up for a "consumer" Google account using user@company.com.

There are legitimate reasons for this, e.g. imagine an employee at a company that uses Office365 needing to set up an account for Google Adwords.


I don't really understand what this is offering beyond Cloudflare's recent release of running SQLite in durable objects: https://blog.cloudflare.com/sqlite-in-durable-objects/. Is it about providing an external interface to Cloudflare's SQLite databases?


The project is open source (https://github.com/Brayden/starbasedb/blob/main/src/index.ts). Yes, it provides a way to update Cloudflare's SQLite over HTTP.


If that's the case, there's libsql (https://github.com/tursodatabase/libsql), which already provides HTTP clients, embedded replicas (essentially good old SQLite but with replication support), and a self-hosted SQLite server, and is maintained by a company using it for their own product.


Some day I really need to learn when to use SQLite in a durable object vs the eventually consistent one (R2).


IIRC they are both powered by the same engine to stream and replicate the WAL. I believe R2 is now implemented as a Durable Object backed by SQLite.


Does this mean that R2 is not "eventually consistent" anymore?

I wonder what the use cases are (and when it's safe) to use "eventually consistent".

I'm guessing that maybe things like social media posts could be fine with "eventually consistent". It's not really critical to have the latest data.

I'm guessing that things like a shopping cart, a user account, a session shouldn't use a "eventually consistent" database.


Still think there is a lot we can add to StarbaseDB to make the developer experience on SQLite databases better, and personally I think it starts with a better external interface: provide a frictionless way to get started, then figure out how to improve the ways developers interface with databases.

Is it auto-accessible REST endpoints? Easy-to-implement websocket support for the database? Data replication for scaling reads? Offline data syncing? A lot of potential wins for a layer like this to build on.


Could you explain more why you were not able to sign the URLs at request time? Creating an HMAC is very fast.


I'm going to have to look into this today. I assumed generating the URLs hit an API, but if that can happen fast and locally, that changes things.


Yup, pre-signing is fast and local, without any I/O. It’s just math. You could likely pre-sign thousands of URLs per second if you needed.
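A minimal sketch of what "just math" means here. This is not real S3 SigV4 (which signs more fields in a canonical form); it just shows the cost profile, one HMAC per URL with no network I/O:

```python
import hashlib, hmac, time
from urllib.parse import urlencode

# Illustrative signing key; in practice this is your cloud credential's secret.
SECRET = b"signing-key"

def presign(path: str, ttl: int = 3600) -> str:
    """Sign a URL locally: hash the path + expiry with a shared secret."""
    expires = int(time.time()) + ttl
    msg = f"{path}?expires={expires}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{path}?" + urlencode({"expires": expires, "sig": sig})

# One SHA-256 HMAC per URL, so thousands per second is easy on one core:
urls = [presign(f"/bucket/object-{i}") for i in range(10_000)]
assert len(set(urls)) == 10_000
```

The server later recomputes the same HMAC over the presented path and expiry and compares; no database or API call is needed on either side.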

