Hacker News | roarcher's comments

> We just packed the kid along and went traveling anyway. He had eleven stamps in his passport by his first birthday.

How do you keep a baby happy and quiet on long international flights? I currently have no kids but I may find myself in this situation in the next couple years. I'm dreading being the guy with a screaming infant on a 13-hour trans-Pacific flight that keeps everyone from sleeping.


Babies want nothing more than to sit on your lap snoozing and feeding. It's more or less what you do at home anyway. The hardest part about flying with a baby is dealing with the added luggage (stroller, car seat, overpacked diaper bag, etc.).

It's only once they're old enough to have more sophisticated feelings (like boredom) but not old enough to communicate them (except by screaming) that you get into trouble.


For the first two years they can fly for free, but they have to ride in an adult's lap, and that gets tiring. Don't believe the bassinet offerings: as soon as the plane hits turbulence, you have to get the sleeping baby out of the wall bassinet, and good luck appeasing them after that. Ages 1-2 are the hardest for travel, so skip them if you can. The only thing that worked for us was getting them their own seat with a Cosco Scenera Next car seat (or their usual car seat if they like it, but that one is $50 and light to carry). They would sleep nicely for large chunks and you get to enjoy travel again. After age 3 it's much easier (they can iPad, if that's the only iPad time they ever get).

I'd say that's pretty much the definition of standard, yeah. And it's why you can't make a profit selling a simple ToDo app. If you expect people to pay for what you build, you have to build something that doesn't have a thousand free clones on the app store.

I politely disagree.

I think you’re conflating software and product.

A product can be a recombination of standard software components and yet be something completely new.


> LLM use in litigation drafting is thus akin to insurgent/guerrilla warfare: it takes little time, energy, or thinking to create, yet orders of magnitude more to analyze and refute.

The same goes for coding. I have coworkers who use it to generate entire PRs. In minutes, they can crank out two thousand lines of code, complete with tests "proving" that it works, that may or may not actually be nonsense. And then some poor bastard like me has to spend half a day reviewing it.

When code is written by a human that I know and trust, I can assume that they at least made reasonable, if not always correct, decisions. I can't assume that with AI, so I have to scrutinize every single line. And when it inevitably turns out that the AI has come up with some ass-backwards architecture, the burden is on me to understand it and explain why it's wrong and how to fix it to the "developer" who hasn't bothered to even read his own PR.

I'm seriously considering proposing that if you use AI to generate a PR at my company, the story points get credited to the reviewer.


Evil voice: "I don't mind not getting credits for the story points. The story was AI-generated anyway."

If code smells like LLM, then you walk over to said coworker and ask them to explain it to you. Play dumb if necessary.

Or you use YOUR LLM to review the PR :D

...and wtf, you get "credited" story points for finishing tasks? That sounds completely insane.


> you get "credited" story points for finishing tasks? That sounds completely insane.

Developers' names are attached to stories, and stories have points on them. Why is that insane, and how does your company track who did what?

I propose that the name on the story should be that of the reviewer since they did the work.


We don't really track individual features to people in a way I could call "crediting" - as in nobody really checks afterwards who did how many story points in a sprint.

As long as the team as a whole gets stuff done, everything is good.


Because story points are a tool for the business to know, optimistically, when a thing could be done. Or, more realistically, to get a decent "no sooner than" estimate for the task.

Using them for anything else, or by anyone else (like scoring the team or, as here, individual contributors), is idiotic.


And yet they won't split the bill with me. Bunch of freeloaders.


Banking Company LLC presents: Quantum Loans™ "Your money is simultaneously yours and ours until you check your balance."

Superposition Financing: Your loan exists in all possible amounts until observed. Checking your balance collapses the wavefunction — so we recommend you simply... don't. Ignorance isn't just bliss, it's financially optimal.

Multiverse Co-signing: Split the debt across all versions of yourself in the multiverse. Sure, some of you will default — but statistically, infinite yous means infinite revenue for us.

Entangled Interest Rates: Your rate is entangled with a partner borrower chosen at random. If they pay on time, your rate drops.

Payment Clusters: Forget monthly installments. Payments arrive in probabilistic clusters — sometimes three in a week, sometimes none for a year. We can't predict when, and neither can you. It's not a bug, it's quantum mechanics.


The real joke here is how close these quips are to the reality of modern-day financial markets. Lending and hedging, specifically, are time-entangled, and value within the markets exists in superposition.


The AI doesn't know anything about the thing you're documenting except what you tell it, so what value does it add?


I write a lot of tech docs with AI, so I can add some substance here. First of all, you do tell it what you're writing about: you can give it reams of documentation on related systems, terminology, and source code. Stuff that would take you weeks to consume personally (though you probably know the gist of it from having read or skimmed it before).

Let's say you are writing a design doc for a new component. You get all of this material describing the environment into its context (you don't have to type it out; you can copy-paste it, use an MCP server to pull it from a database, read it from a file, etc.). Then you describe at a high level what architecture choices and design constraints to apply (for example: needs to be implemented with this database, this is our standard middleware) and it will fill in the blanks and spit out a design for you.

Then you refine it: actually, this component needs to work like that, the unit testing strategy needs to be like this, and so on. Now give me the main modules, code stubs, suggested patterns to use. Continue to iterate; it will update the entire doc for you, make sure there are no dead or outdated references, etc. Extremely powerful and useful.


> If losing your job is traumatic, I’d suggest reviewing your relationship with employers and employment in general.

Should I also review my relationship with my need to eat and have a roof over my head?


Being dependent to that degree on something you ultimately have no control over is the problem. In that situation, I am always hedging my bets and preparing for the worst.

Because it’s not if but when. I work like hell to get out of those situations and into a situation where I am more in control of my destiny.

That people get lulled into a sense of security in typical employment situations is to me extremely bad judgment on their part, if not outright denial of the reality of their situation.

The only thing sure about employment is that it will end.


He was 57, born in 1888. Died of a stroke.

https://en.wikipedia.org/wiki/John_Logie_Baird#Death


One of his electro-mechanical units was on display in Victoria, Australia. A most amazing assemblage; you can sort of get the idea of how it worked just from looking at it.

I read online that at the end of his life, Baird was proposing a TV scan rate we'd class as HD quality, which lost out to the 405-line standard (which preceded 625-line colour).

There is also a quality of persistence in his approach to things, he was the kind of inventor who doesn't stop inventing.


> We jump so quickly to religious significance.

The article talks extensively about how these monuments were used for timekeeping. Marking the seasons allowed people to predict animal migrations and plan agricultural activities.

It seems that you are the one who has forgotten the practical uses of these artifacts.


Call me a cynic, but legislators own stock in these companies. Their true interest in them is also "line go up".


> It is search if you ask it to produce a list of links.

Unfortunately it can hallucinate those too. I've had ChatGPT cite countless nonexistent academic papers, complete with links that go nowhere.
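The nice thing is that this failure mode is mechanically checkable. A minimal sketch (the function names are mine, stdlib only; the `fetch` callback stands in for a real HTTP HEAD request you'd supply):

```python
from urllib.parse import urlparse

def looks_resolvable(url: str) -> bool:
    """Cheap static sanity check before any network call: reject
    citations whose 'links' are not even well-formed http(s) URLs."""
    parts = urlparse(url)
    return parts.scheme in ("http", "https") and bool(parts.netloc)

def audit_citations(citations, fetch):
    """Return the (title, url) pairs whose links fail either the
    static check or the supplied fetch callback."""
    dead = []
    for title, url in citations:
        if not looks_resolvable(url) or not fetch(url):
            dead.append((title, url))
    return dead
```

In practice you'd pass a `fetch` that does a HEAD request and returns True on a 2xx/3xx status; anything flagged goes back to the model (or to a real search engine) for re-checking.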


Which is "fine", so to speak. We do this when using AI for coding all the time, don't we? We ask it to do things, or to tell us things about our code base (which we might be new to as well), essentially using it as a "search engine+". Hopefully it's faster and can provide some sort of understanding more quickly than we could by searching ourselves and building a mental model as we go.

But we still need to ask it for file and line number references (aka "links"), follow them, and verify that what it said is true and that it got the references right, building enough of a mental model ourselves. With code (at least in our code base) it usually does get the references right, and I can verify them. I might be biased, because I both know our code base very well already (though not everything in detail) and I'm a very suspicious person who questions everything. With humans that sometimes "drives them crazy", but the LLM doesn't mind when I call its BS over and over. I'm always "right" :P

The problem is when you just trust anything it says. I think we need to treat it like a super junior that's trained to very convincingly BS you when it's out of its depth. But it's still great to have said junior do your bidding while you do other things, faster than an actual junior, and available 24/7 (barring any outages ;)).

