Hacker News | tl2do's comments

The expected/unexpected distinction assumes you have the capacity to anticipate failures in the first place. But that capacity varies - even among experienced developers, everyone has blind spots shaped by their specific history. What's "obviously expected" to one senior dev is a surprise to another. The article's model is useful, but there's a prerequisite it doesn't address: the ability to expect is itself unevenly distributed.

I haven't dug into this deeply, but to me CRDTs look like a P2P data structure abstracted down to the programming-language variable level. The article says they shine when you don't want a central server. But most communication libraries already handle authentication and multiple peers well — and if you designate one peer as canonical (via leader election), conflict resolution is solved. I'm curious what use cases make avoiding a central server worth the paradigm shift. That said, it's a choice of approach — some may prefer the CRDT paradigm.
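For anyone curious, the "variable-level P2P data structure" intuition can be made concrete with a grow-only counter, one of the simplest CRDTs. A hedged sketch (not from the article; the class and names are invented for illustration):

```python
# A minimal G-Counter (grow-only counter) CRDT sketch.
# Each replica increments only its own slot; merge takes the
# element-wise max, so merging is commutative, associative, and
# idempotent -- replicas converge without a central server.

class GCounter:
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}  # replica_id -> local count

    def increment(self, n=1):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def value(self):
        return sum(self.counts.values())

    def merge(self, other):
        # Element-wise max: safe to apply in any order, any number of times.
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), c)

a, b = GCounter("a"), GCounter("b")
a.increment(3)
b.increment(2)
a.merge(b)
b.merge(a)
print(a.value(), b.value())  # both replicas converge to 5
```

The leader-election alternative mentioned above avoids this machinery, but the merge-anywhere property is what lets CRDT replicas keep accepting writes while partitioned.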

I admire the de-Googled approach of GrapheneOS. As a lawyer, privacy concerns resonate with me too. I love the rebellious attitude of tech that presents an alternative choice in an overly duopolistic market.

That said, I wouldn't last 8.4 months like the author. Even though he admits to some Google app usage, I'm in too deep — I'd never be able to get out. But if I get the chance, I'd like to try it on a secondary phone. Those solid black icons are one reason. They look cool.


My take on this will probably shock some advocates, but I don't think you need to switch perfectly and never touch anything Google again.

I personally just encourage people to take a look at what they're using and see whether they could gradually change some of it. Who knows, sometimes alternatives even offer better services. I am not saying never use anything Google ever again. Just question your tools regularly and peruse the alternatives.


Graphene supports the Pixel 6a, which unlocked goes for ~$100 on eBay. I imagine you can swing that as a lawyer to play around.

I'll also echo the ideas from everyone else here. You can just use it as a normal Android phone the way you would any other, and there are still big benefits. There are also really big benefits in terms of carrier privacy that aren't often talked about, like VPN routing and hotspot usage.


Not wanting to discourage you from trying Graphene, but the icons are probably not a good reason. You can always install an alternative launcher and icon pack on stock Android.

I've been running Graphene for a long time now and everything works perfectly fine, but I don't do mobile banking.


I do mobile banking and use GrapheneOS daily (2 online banks + 2 trading platforms).

I also work in mobile app/SDK publishing as a business dev and it's critical that I can install my clients' apps (thousand+) in private space.

It works great for me.


You can use mostly Google apps and still benefit (e.g. unlike Google's Android, Play services aren't privileged and are sandboxed like any other app): https://grapheneos.org/features#sandboxed-google-play

You can also restrict some apps' network permissions; for example, I use the Google Camera app with the network disabled :p


I'm not writing this to disregard the PDF author—it's just a personal retrospective.

I'm a 50-year-old Japanese person who watched the original Dragon Ball broadcast on TV around 40 years ago. Back then, there were no LCDs or OLEDs—only CRT ("brown tube") TVs, and the signal was analog. With that kind of analog rendering, it was practically impossible to tell what the "true" colors were. Plus, CRT displays degraded over time, shifting colors toward brown.

The pre-processed raw images in the article actually look like what I remember as the real Dragon Ball colors.


From a photographer's perspective, using cel scans as a reference could be a fool's errand because they are biased by the white color of the scanner light and scanning software. There's a lot of room for opinionated scans there.

OTOH, the result looks great, so good on the passionate fans who spent their time and effort doing this.

> CRT ("brown tube")

ブラウン管 means Braun tube, named for its inventor.


Thanks for the correction — I had no idea it was named after a person. Interesting that in katakana, both "brown" and "Braun" are the same: ブラウン.

Maybe that’s no coincidence as the German word braun means the same as the English word brown.

Kana is a phonetic alphabet, you write things exactly the way you pronounce them. Since both "brown" and "braun" are (roughly) pronounced the same (at least to Japanese ears), they are written the same.

The word "personality" smuggles in biological assumptions. Asking "does this model have personality?" feels unproductive because the term implies something it can't be.

More useful framing: how do these subnetworks produce outputs that observers evaluate as personality-consistent? Personality isn't an internal property - it's a judgment made by people watching behavior.


> Personality isn't an internal property - it's a judgment made by people watching behavior.

Partly, yes, but personality is also an internal property, or it is coherent and correct enough to generally say that it has internal aspects. I.e. a person's personality is the set of (relatively) stable and difficult-to-change patterns that manifest in their behaviour in broad contexts, and these patterns are almost certainly encoded internally in the brain in some form. It is not much different than saying a person's intelligence / IQ is partly internal.

Otherwise, I do agree with your more careful framing, and I wish people thought and spoke more carefully about these things, and doubly so for LLMs.


I've been programming for 20 years and apparently still don't understand what my terminal is doing. Recently I asked Claude Code to generate a small shell script. It came back full of escape codes and I just stared at it like a caveman looking at a smartphone. This article finally explains what's been happening right under my nose.
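For reference, the escape codes in question are mostly standard ANSI control sequences; a minimal sketch of what such a generated script might emit (standard sequences, not the article's or Claude's exact output):

```shell
# "\033" is the ESC byte; "ESC [" opens a control sequence.
# "31m" selects red foreground, "0m" resets all attributes.
printf '\033[31mError:\033[0m something went wrong\n'

# The same mechanism drives progress displays: \r returns the
# cursor to column 0 so the next write overwrites the line.
printf 'progress 1/2\r'
printf 'progress 2/2\n'
```

Once you know that every sequence starts with `ESC [` and ends with a letter, generated scripts full of `\033[...m` stop looking like line noise.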

I'm Japanese, and the 80286 at 10MHz was huge for Japan's PC-98 scene. The V30 handled backward compatibility while the 286 ran much faster than what we had before. This project brought back memories—the 286 was the chip of my era, and it's great to see people still exploring its capabilities decades later.

Japanese PC-98 games have an aesthetic that’s so unique. There’s this one Twitter account that posts images from visual novels from that era and they all look so pretty: https://x.com/PC98_bot Also on bluesky https://bsky.app/profile/pc98bot.gang-fight.com

Since generative AI exploded, it's all anyone talks about. But traditional ML still covers a vast space in real-world production systems. I don't need this tool right now, but glad to see work in this area.

A nice way to use traditional ML models today is to do feature extraction with an LLM and classification on top with a trad ML model. Why? Because this way you can tune your own decision boundary and piggyback on features from a generic LLM to power the classifier.

For example, CV triage: you use an LLM with a rubric to extract features (choosing the features you are going to rely on does a lot of the work here). Then collect a few hundred examples, label them (accept/reject), and train your trad ML model on top; it will not have the LLM's biases.

You can probably use any LLM for feature preparation, and retrain the small model in seconds as new data is added. A coding agent can write its own small-model-as-a-tool on the fly and use it in the same session.
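To make the pattern concrete, here is a hedged sketch in plain Python. `extract_features` is a stand-in for the real LLM rubric call (the rubric and feature names are invented for illustration), and the classifier is a tiny hand-rolled logistic regression trained on your own accept/reject labels:

```python
import math

def extract_features(cv_text):
    # Stand-in for an LLM call with a rubric; a real one would return
    # scores like years_of_experience, education_level, skill_match.
    t = cv_text.lower()
    return [
        float("python" in t),
        float("phd" in t),
        min(t.count("year"), 5) / 5.0,
    ]

def train(X, y, epochs=500, lr=0.5):
    # Plain logistic regression via gradient descent; a few hundred
    # labeled examples are enough because there are so few parameters.
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            g = 1 / (1 + math.exp(-z)) - yi  # prediction minus label
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 / (1 + math.exp(-z)) > 0.5

# Toy training set encoding *your* triage policy:
cvs = ["10 years Python, PhD", "1 year retail experience",
       "Python developer, 5 years", "no relevant experience"]
labels = [1, 0, 1, 0]
X = [extract_features(c) for c in cvs]
w, b = train(X, labels)
print(predict(w, b, extract_features("Senior Python engineer, 8 years")))
```

Retraining is near-instant as labels accumulate, which is what makes the "small-model-as-a-tool" loop practical.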


What do you mean by "feature extraction with an LLM"? I can see this for text-based data, but would you do it on numeric data? It seems like there are better tools you could use for auto-ML in that sphere.

Unless by LLM feature extraction you mean something like "have claude code write some preprocessing pipeline"?


It's for unstructured inputs, text and images, where you need to extract specific features such as education level, experience with various technologies and tasks. The trick is to choose those features that actually matter for your company, and build a classifier on top so the decision is also calibrated by your own triage policy with a small training/test set. It works with few examples because it just needs a small classifier with few parameters to learn.

Isn't the whole point for it to learn what features to extract?

I agree that expanding communication with strangers is important. But starting with "Do you mind if I sit here? Or did you want to be alone with your thoughts?" and then continuing a conversation for 10+ minutes is a real struggle for me. Sometimes I even wonder—how exactly does this kind of individual conversation actually help me? Maybe this is just me.

Yeah it'll be hard. But with a lot of practice it'll get easier. I think part of the practice is recognizing "they don't want me to continue this conversation" and bailing, vs trying to force every interaction to be a deeper conversation.

I never practiced "idle conversation with a complete stranger" like that because I was lazy. But I did practice making normal, non-sexual, conversation with women on dating sites and dates so that I could go from "isolated in school, then after going online, low response rate and never more than 1 or 2 dates" to someone in a long-term relationship. And recognizing that sort of "ok there's just not any interest here, move along" signal was definitely relevant there too.

Skills take investment.

My parents didn't give me nearly as many opportunities to practice these skills as they had when they grew up, and pop culture actively encouraged me not to talk to strangers as a kid, so I had to work harder at them as an adult. But it was worth it.


Is it a matter of skill, or a matter of courage?

It's a matter of privilege. Many people don't have the time to try to make new connections.

>how exactly does this kind of individual conversation actually help me?

It doesn't. It just helps the speaker.


That makes me think—why do I enjoy conversations with friends then? What's really the difference between a friend and a stranger? Friends annoy me too, maybe even more often than strangers do.

Your friends are hopefully somewhat invested in you for a non-transactional reason, and have proven to be a non-threat. There's no guarantees with a stranger.

What a bizarre perspective. Have you never gotten any personal value out of a single conversation in your entire life? Have you never made a friend? I don't understand this "all conversations are bad and useless" nonsense. What on earth do you think you're doing on social media?

I'm not saying that all conversations are bad. I'm just talking about the one hour one-sided conversations described in the article.

One of the basic rules of starting conversations with other people is letting the other person do most of the talking. People like talking about themselves. So the old lady in the article violated that rule. That isn't to say that just talking to people instead of actually talking with them will never work. You might be lucky and the person you talk to just happens to be very interested in what you want to tell them, but it is rather unlikely.

Once you have shown that you are interested in someone by listening to them and thereby learning about them, you might sometimes find that they might also be interested in something you can share with them. The easiest way to get someone interested in you is to first get interested in them.

It's a pretty simple principle, but since people like to talk about themselves they often do not follow it.


Is there a compile-to-Python language with built-in type safety, similar to how TypeScript transpiles to JavaScript? I'm aware of Mojo and mypyc, but those compile to native code/binaries, not Python source.

Python does not need that, as it has built-in type annotation support. An annotation is any expression, so you can in theory express anything a custom type-only language would allow you to (although a dedicated language could be less verbose and easier to read).

However, it IMHO just works much worse than TS because:

* many libraries still lack decent annotations
* other libraries are impossible to type because of too much dynamic stuff
* Python semantics are multiple orders of magnitude more complex than JavaScript's. Even the simplest question: is `1` allowed where a parameter is typed `float`? What about a numpy float64?
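To illustrate the runtime side of this: annotations are stored metadata, nothing is enforced when the code runs, and per the typing spec's numeric tower checkers do accept an `int` where `float` is annotated. A small sketch:

```python
def halve(x: float) -> float:
    return x / 2

# Annotations are stored, not enforced, at runtime:
print(halve.__annotations__)  # {'x': <class 'float'>, 'return': <class 'float'>}

# Passing an int where float is annotated runs fine, and a checker
# like mypy accepts it too (int is compatible with float by spec):
print(halve(1))  # 0.5

# A genuinely wrong argument only fails when the body executes:
try:
    halve("1")  # a static type checker would flag this line instead
except TypeError as e:
    print("runtime error:", e)
```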


Thanks for helping me understand. I wasn't aware of Python's type annotation support. I did some quick research and learned that type annotations don't cause compile errors even when there are type errors. Is that why type checkers like Pyrefly exist?

Correct. Currently in Python, type checking happens more in a linting phase than in a compiling or runtime phase. You can also get it from editors via LSP; they'll show you type errors while you edit the code.

Thanks linsomniac and exyi. I didn't realize Python's type hints are checked by linters, not the compiler. Learned something today.

I really like them. I'm a very long-time Python programmer ('97), so the ability to just bang something simple out and not care about typing is nice at times, but for anything at all serious it's very nice to have the option to add type annotations and get the bulk of the benefits from them.

Yes, but there are also runtime type checkers that can be used to check that input data conforms to the expected types (i.e. a schema, but defined using Python types and classes).
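A stdlib-only illustration of that idea (real projects typically reach for a library like pydantic; `User` and `validate` here are invented for the sketch):

```python
from typing import get_type_hints

class User:
    name: str
    age: int

def validate(cls, data: dict):
    # Check a plain dict against the class's annotations at runtime.
    for field, typ in get_type_hints(cls).items():
        if not isinstance(data.get(field), typ):
            raise TypeError(f"{field!r} must be {typ.__name__}")
    return data

validate(User, {"name": "Ada", "age": 36})        # passes
try:
    validate(User, {"name": "Ada", "age": "36"})  # fails: age is a str
except TypeError as e:
    print(e)
```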

The only language I'm aware of that's a bit like that is RPython, but it's the other way round: it's a language designed for Python tooling to compile to. If you think about it, you get more benefit from the typed language being the base one, since the compiler or JIT can make more assumptions and produce faster code. TypeScript had no alternative but to do it the other way, since it's a lot harder to get things adopted into the browser than to ship them independently.

You can compile F# to Python with Fable https://github.com/fable-compiler/Fable.Python
