I wish Google would open source their gtl library. Similar utilities exist elsewhere but not in the same consistent quality and well-integrated package.
I particularly like the “what to do for flat profiles” and “protobuf tips” sections. Similar advice distilled to this level is difficult to find elsewhere.
Honest question: why is a Rust rewrite of coreutils getting traction? Nobody thought it was a good idea to rewrite coreutils in Go, Java, Python, C++, etc. It can’t just be memory safety.
> uutils coreutils aims to be a drop-in replacement for the GNU utils. Differences with GNU are treated as bugs.
> Our key objectives include:
> Matching GNU's output (stdout and error code) exactly
> Better error messages
> Providing comprehensive internationalization support (UTF-8)
> Improved performances
> Extensions when relevant (example: --progress)
> uutils aims to work on as many platforms as possible, to be able to use the same utils on Linux, macOS, Windows and other platforms. This ensures, for example, that scripts can be easily transferred between platforms.
Experimenting with better error messages, serving as a test-bed for extensions that might not be tried or accepted in GNU coreutils (for technical, social, or other reasons), and making the same tools usable on all major OSes are very reasonable divergences from GNU's project to "justify" its existence.
The project was originally just a learning project for someone who wanted to learn Rust by reimplementing tools that were small, not a moving target, and useful. From there, it grew as it found an audience of developers interested in productionizing it. There have been coreutils ports in the languages you mention (go-coreutils, pycoreutils, coreutils-cpp, etc.); they just didn't (yet?) hit critical mass. It is a harder sell for GC-based projects in this case because they are unlikely to ever be included as part of a distribution's base. Let's not forget that coreutils themselves are a rewrite of previously existing tools to begin with.
Given the site where this is posted and the screenshot, is the author an engineer turned fiction writer? Kudos if true. Posting these must take a lot of courage.
As a former employee, the engineering culture at Google gives me old-school hacker vibes, so users are very much expected to “figure it out” and that’s somewhat accepted (and I say this with fond memories). It’s no surprise the company struggles with good UX.
LLMs are good at in-distribution programming, so inventing a new language just for them probably won’t work much better than languages they were already trained on.
If you could invent a language that is somehow tailored for vibe coding _and then_ produce a sufficiently high-quality corpus of it to train models on, that would be something.
Interestingly it became most painfully clear to me once I started working in an office in a developed country. Something about seeing the scale of it all. But I take your point.
Some juniors do figure it out, but my experience has been that the bar for such juniors is a lot higher than pre-AI junior positions, so there is less opportunity for junior engineers overall.
It’s possible to build this around protobuf. Google has a rich internal protobuf ecosystem that does this and supports querying large amounts of protobuf data without specifying schemas. They are only selectively open sourced. Have a look at riegeli if you are interested.
I don’t understand this argument. It seems to originate from capnp’s marketing. Capnp is great, but the fact that protobuf can’t do zero copy should be more an academic issue than a practical one. Applications that want to use a schema always need their own native types that serialize to and deserialize from binary formats. For protobuf you either bring your own or use the generated type. For capnp you have to bring your own. So a fair comparison of serialization cost would compare:
native > pb binary > native
vs
native > capnp binary > native
If you benchmark this, the two formats are very close. Exact perf depends on the payload. Additionally, one could write their own protobuf serializer without protoc if they really needed to.
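To make the round trip concrete, here is a toy sketch of the `native > pb binary > native` path using a hand-rolled varint encoder for the protobuf wire format (tag byte + varint per integer field), rather than protoc-generated code — the field numbers and the `point` dict are illustrative, not from any real schema:

```python
def encode_varint(n):
    # Protobuf base-128 varint: 7 data bits per byte, MSB set on all but the last.
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        if n:
            out.append(b | 0x80)
        else:
            out.append(b)
            return bytes(out)

def decode_varint(buf, pos=0):
    # Returns (value, next position) for a varint starting at buf[pos].
    result = shift = 0
    while True:
        b = buf[pos]
        pos += 1
        result |= (b & 0x7F) << shift
        if not (b & 0x80):
            return result, pos
        shift += 7

# "native" application type (illustrative)
point = {"x": 150, "y": 3}

# native > pb binary: field 1 (tag 0x08) and field 2 (tag 0x10), wire type varint
wire = b"\x08" + encode_varint(point["x"]) + b"\x10" + encode_varint(point["y"])

# pb binary > native: skip each tag byte, decode each varint
x, pos = decode_varint(wire, 1)
y, _ = decode_varint(wire, pos + 1)
assert (x, y) == (150, 3)
```

A real benchmark would of course use protoc-generated code and a realistic payload, but the cost structure is the same: one copy into the application's native representation on each side, for either format.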