Hacker News | tossandthrow's comments

You could say the same about a CFO if the company does not use financial engineering, etc.

The CTO's role is to be invisible to the business, and they do that by ensuring that the tech org is working.


At every company I've worked at, the CFO has had a large presence at every townhall; after all, they are the one responsible for sending your paychecks on time. As for the CTO, it's a mixed bag in my experience. I mostly see them as just another layer between the CEO/COO and the principal engineers. Maybe that's exactly what they want, though.

Yeah, probably in the US, where worker protections are mostly nonexistent - you need to be friends with the money!

While this technique works for new projects, it takes no more than a couple of pivots for it to fail completely.

A good AI development framework needs to support a tail of deprecated choices in the codebase.

Skills are considerably better for this than design docs.


The AI legal analysis seemed to be the nail in the coffin.

Adding AI-generated comments is, IMHO, one of the rudest uses of AI.


Not sure what exactly you're referring to, but legal is a very interesting field to observe, right? I've been wondering about that since quite early in my LLM awareness:

A slightly sarcastic (or perhaps not so slightly..) mental model of legal conflict resolution is that much of it boils down to throwing lots of content at the opposing side, claiming that it shows the represented side is right, and creating work for the other side to find a flaw in that material. I believe this game of quantity plays out across the whole range, from "I'll have my lawyer repeat my argument in a letter featuring their letterhead" all the way to paper tsunamis like the Google-Oracle trial.

Now give both sides access to LLMs... I wonder if the legal profession will eventually settle on some format of in-person, offline resolution with strict limits on recess and/or on word count for both documents and notes, because otherwise conflicts will fail to get settled in anyone's lifetime (or will be won by whoever does not run out of tokens first - come to think of it, the technogarchs would love this, so I guess this is exactly what will happen barring a revolution).


Ah, sorry. I am not referring to using LLMs for legal work.

I am referring to the act of merely pasting the output of a model as a comment.

Have the decency to understand what the LLM is writing and write your own message.


That comment is wild

> Here's the AI-written copyright analysis...

I'm not going to spend more time reading than you spent writing!


What do you mean "not exactly sure what you are referring to"?

The guy just posted a huge AI-slop PR; do you think that's the correct place for "very interesting field observations about legal"?

What else could it refer to than the fact that you can't back up copyright-ownership questions with "the AI said so"??


Some maintainers who drank the Kool-Aid just use AI to answer issues and review PRs.

Pretty soon we'll have AIs talking to each other.


The cognitive load is in the lack of a "defined problem break".

With AI, the situations where you know what you are building and can get into flow are fewer and farther between.

So much more time is spent thinking about the domain and the problem to solve.

And that is exhausting.


I wholeheartedly prefer chat interfaces over inline AI suggestions.

I find the inline stuff so incredibly annoying because it moves around the text I am looking at.


Same! It feels like being shouted at nonstop by an overeager teacher's pet who's wrong 60% of the time.

I do appreciate in-IDE functionality that can search the codebase etc etc, but I want to hit a button when I need it.


> Something is systemically wrong in the US when we are cutting off people’s access to meds, like GLP-1s, which have profound health benefits.

The US is a funny thing: no issue cutting access to healthcare in general, education, healthy food, etc.

But it is all the rage when a pill can undo people's bad habits.


Yes, models are aligned differently. But that is a quality of the model.

Obviously it must be assumed that the model one falls back on is good enough - including security alignment.


Sure, in theory. But "assumed good enough" is doing a lot of heavy lifting there. Most people picking a local fallback model are optimizing for cost and latency, not carefully evaluating its security alignment characteristics. They grab whatever fits in VRAM and call it a day.

Not saying that's wrong, just that it's a gap worth being aware of.


That would justify a good multiple of 5 to 10, not 30 or above as for high-growth companies.

Multiple of what? There is maybe one software company trading above 30x revenue - Palantir. Many companies growing at 20% trade at single-digit revenue multiples.

Unqualified, it almost always means earnings (profits).

That is really not the case in software; people commonly switch between EV/S and EV/FCF for high-growth names. Also, "earnings" could mean GAAP earnings, non-GAAP earnings, EBITDA, EBITA, FCF, or FCF ex share-based comp.

The issue is that you don't know the magnitudes.

If the market goes up 80% before dropping 20%, then you want to have bought in.
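The arithmetic behind that point is simple compounding; a quick sketch using the hypothetical numbers from the comment:

```python
# Sequential percentage moves compound, so magnitudes matter
# more than the order of up/down headlines (illustrative numbers).
start = 100.0
after_rally = start * 1.80       # market rises 80%
after_drop = after_rally * 0.80  # then drops 20%
print(after_drop)                # 144.0 -> still 44% above the start
```

So a -20% move after a +80% run still leaves you well ahead of having stayed out.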


I promise you, a person buying a vehicle for their business will be looking at ROI rather than smart features.

Computing at this scale is not marketed to flashy fanbois.


> Computing at this scale is not marketed to flashy fanbois.

Every vain CxO is a flashy fanboi at heart

