Hacker News: marcellus23's comments

The GP's use of the word "impose" didn't seem pejorative to me, nor did it suggest that Anthropic is the offender and the government is the victim. I think you're reading a lot into a simple word choice, and this response seems way too hostile.

A "simple word choice"? This isn't just about the single word "impose"; read the whole post:

> Per DoD Directive 3000.09 (dtd 25 January 2023), any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing to ensure they perform as intended in realistic environments before deployment. The emphasized language is the delta between what OpenAI agreed and what Anthropic wanted.

> OpenAI acceded to demands that the US Government can do whatever it wants that is legal. Anthropic wanted to impose its own morals into the use of its products.

So first off, regarding that first paragraph: didn't any of these idiots watch WarGames, or heck, Terminator? This is not just "oh, why are you quoting Hollywood hyperbole" - a hallmark of today's AI is that we can't really control it except for some "pretty please, we really really mean it, be nice" in the system prompt, and even experts in the field have shown how that can fail miserably: https://www.tomshardware.com/tech-industry/artificial-intell...

Second, yes, I am relieved Anthropic wanted to "impose" their morals because, if anything, the current administration has been loud and clear that the law basically means whatever it says it does, and will absolutely push it to absurd limits. I now regard "legal limits" as absolutely meaningless - what's needed are hard, non-bullshit statements about red lines. Anthropic stood by those, while Altman showed what a weasel he is and acceded to the government's demands.


Are you really going to pretend that “impose their morals” is a completely value-neutral statement?

It certainly was intended as such. In a commercial transaction, that's what they're doing. They don't think it's moral to use their product in certain ways. They are thus prohibiting their customer from using it in such ways.

But, as I've said, I tend to agree with both Anthropic and the Administration's positions. What was wrong here is that rather than just terminating the contract, the Administration went nuclear.


It seems value-neutral to me. It's descriptive. Particularly for anyone who understands that different groups of people will legitimately disagree on many moral questions.

What would be the value neutral way to phrase it?

"Anthropic wanted its product to not be used in ways that contradict its ethics".

"Impose" makes it sound like Anthropic is being hostile here. And also, I don't think this is a situation that calls for moral relativism.


> "Impose" makes it sound like Anthropic is being hostile here.

Anthropic is not asking for their product to be used in line with their ethics, they are basically demanding it. I don’t necessarily think they are wrong, but I don’t think we need to sugarcoat it either. It’s a demand, and if it differs from what the DoW wants to use the tech for…of course it's going to be in conflict. “Impose” is appropriate.


The iPhone 8 was released in 2017 and the Pixel 8 is from 2023.

I'm not sure that's a literal quote from their boss. It seems to be an illustrative example, probably exaggerated.

The beginning describes the formation of an intelligence and it is indeed very dense. You can figure out what's going on but it takes some slow reading, and probably best to revisit it once you have some more context from later in the book.

The whole book isn't like that. Once you get past that part, as the other commenter said, it gets much easier.


I actually love the beginning of Diaspora, and have recommended just that section to people. I found it beautiful and moving. It's startling to learn that people have to "get past" that section...


It really doesn't.


This just takes me to an information-less landing page for some unlaunched product.


He changed his mind? The comment you're citing seems partly tongue-in-cheek anyway, but even if it wasn't, how is this some kind of gotcha?


That's the real reason the conversation seems pointless. Every thread is full of comments from one group saying how useful AI is, and from another group saying how useless it is. The first group is convinced the second group just hasn't figured out how to use it right, and the second group is convinced the first group is deluded or outright lying.


Yes, I'm in the second group and I have that conviction about the first group based on personal experience with LLMs.

But most hype is not delusion. It's people trying to present themselves as "AI" experts in order to land those well paid "AI" positions. I don't think they even believe what they're saying.


All of this is written with a sense of anger and sarcastic invective that doesn't seem appropriate. This is part of learning any new language or API. Going in with an attitude of "I should already know how all this works, why am I forced to do research or look at docs?" seems unfair and will spoil the experience of learning anything.

> Why was that so hard? Why are the models here separate from the ones in the right click menu? Too many questions.

The very screenshot above this paragraph actually answers this, in what admittedly might be an uncharacteristically clear UI: "Siri and Safari will always run translations online."


This is a story about the risks of AI-induced brainrot. You get so used to having the computer just do your work, that the second you need to engage your noggin you’re lost at sea. Or at least just frustrated.

Reading and understanding the docs and reference material has always been part of the work.

Aside from the commentary, it read like an advertisement for how great the Swift/macOS translation APIs are. PEBCAK


Gotta say, as a Swift dev I agree. I followed the link to the Translation docs and was pleasantly surprised to see a discussion section clearly explaining the usage, which is not always the case for Apple APIs! But this wasn’t really just an article about the API. It was about the complexity of trying to build on the stack of Swift/SPM/ParsableCommand/Foundation/Concurrency/Translation without having a good grasp of any of them. I was frustrated reading it, but I think it does point to the underlying knowledge that’s needed to be proficient at something like this. None of it is a particular indictment of Swift as an ecosystem (though there are lots of valid criticisms); it’s just the nature of development, and something that’s massively eroded by relying too much on these ghosts.


The problem is there's a wide class of problems that you want solved, but where putting in the work will prevent you from actually doing the task, because the cost isn't worth the reward: it's for a low-impact tool, or you can't imagine yourself dealing with this API again within a year or two, by which time it will probably be completely different with v2 of the API.

So, you reach for AI and it works really well. So you start reaching for that more and more...


Having no minimum wage for LLMs is fantastic. It opens up all manner of work that had previously been priced out.


Hm. I thought LLMs weren't free. Am I missing something?


1. You can run decent local AI now - see /r/LocalLlama. You pay the electricity cost and hardware capex (which isn't that expensive for smaller models).

2. Chinese APIs like Moonshot and DeepSeek have extremely cheap pricing, with optional subscriptions that will grant you a fixed number of requests of any context size for under $10 a month. Claude Code is the bourgeois option, GLM-4.7 does quite well on vibe coding and is extremely cheap.


I remember reading and hearing similar rants from programmers 15 years ago, long before LLMs. The author kept going and figured it out, and probably got some pride and enjoyment from finishing the project in spite of the frustrating moments. That’s what learning to code has always been like.


Coming from other languages, figuring out how to get an NWConnection to work was not trivial and just reading the interface docs did not help. I empathise with the frustration of reading apple docs. Sure, the tone isn’t professional, but I don’t believe that is out of place.


> “All of this is written with a sense of anger and sarcastic invective that doesn't seem appropriate.”

Huh. The end of the piece says the author was “frustrated”. And throughout the author says things at the end of paragraphs like, _the result was an error x, but I already have/did x!_

I read the piece because I was curious about this description of “anger”. I relate to the journey as the author describes it—frustration. Though, there are a few “dramatic” words here and there. Not least of which is in the title, “unbearable”. But I usually mark that as a crutch in the personal Blog medium.

On a more pedagogical note, a personal piece like this, documenting a journey from idea to (near) success, is an important genre of tech writing for one's self and maybe others. Just as others have noted the OP's missteps, so too the OP, with hindsight, can revisit the piece and notice where they went wrong. If you're away from the language or project for a while, a piece like this is a good review before starting the next project.


Most of the author's frustration was due to lack of good feedback from Swift.

Like, how can it run a command that has an `async` main despite the fact that you didn't adopt the async version of the command protocol?? That should've been an error (e.g. "bro, you cannot have an async main with this protocol, you need the async version, which is called AsyncWhatever").

Not awaiting on an async function should be at least a warning. Another frustrating lack of feedback: it just lets you run it and nothing happens.

The version thing: it should show all possible variants of the enum, even the ones you can't use yet, and then when you try to use one, show an error saying "you need to raise the minimum version of this file to at least X to use this". Why can't the Swift LSP do that? Apple has a trillion dollars; they can afford to polish their stuff.

The author is used to Rust, which would've made it very clear what was wrong in all cases.

Swift, being as modern as Rust, should be doing better than that. Languages that fail to give you proper error messages and warnings are just not worth it: just one of these "wtf is going on?" moments can cost you hours. Just use a better language if you can.
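For anyone who hasn't hit this particular trap: a minimal sketch of the silent-failure mode described above, assuming the CLI is built on Apple's swift-argument-parser package (the `Fetch` command names are made up for illustration):

```swift
import ArgumentParser

// An `async` run() cannot satisfy ParsableCommand's synchronous
// `run() throws` requirement, so this compiles but the library
// falls back to its default run() and the body below is never
// invoked at runtime -- no error, no warning at the call site.
struct Fetch: ParsableCommand {
    func run() async throws {
        print("this never runs")
    }
}

// Adopting AsyncParsableCommand instead declares the async
// requirement explicitly, and the generated entry point awaits it.
@main
struct FetchAsync: AsyncParsableCommand {
    func run() async throws {
        print("fetching...")
    }
}
```

That a plain protocol-conformance mismatch degrades into "it runs but does nothing" rather than a compile error is exactly the feedback gap being complained about here.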


Swift is ultimately a language that is expected to be compiled by Xcode. And Package.swift still isn't properly supported by Xcode.


Disagree here. APIs are meant for using, not learning. But the context matters. For some paid system, there had better be an API that's easy to use, or I'm throwing AI at the problem or hoping someone else will do it. If it's something worth learning, say Guava data structures or RocksDB core, then yup, I'll invest the time to learn; that learning transfers over. But trying to learn some shitty AWS API and its nuances? No thanks. Some payment system that a handful of people use? No thanks again.


Language is meant for using, not learning. Why is Arabic/French/Chinese/etc so difficult?


Language ≠ API. You shouldn't have to learn new grammar just because you visit another municipality. Everyone knows how grammar works in your country (at least they should).

This is the same issue with libraries. They shouldn't limit how you build your code. This is why I hate frameworks as a whole. They don't add anything; they just abstract and limit.


> I'd be shocked if the developer wasn't actually less productive

I agree 10x is a very large number and it's almost certainly smaller—maybe 1.5x would be reasonable. But really? You would be shocked if it was above 1.0x? This kind of comment always strikes me as so infantilizing and rude, to suggest that all these developers are actually slower with AI, but apparently completely oblivious to it and only you know better.


I would never suggest that only I know better. Plenty of other people are observing the same thing, and there is also research backing it up.

Maybe shocked is the wrong term. Surprised, perhaps.


There are simply so many counterexamples out there of people who have developed projects in a small fraction of the time it would take manually. Whether or not AI is having a positive effect on productivity on average in the industry is a valid question, but it's a statistical one. It's ridiculous to argue that AI has a negative effect on productivity in every single individual case.


It's all talk and no evidence.


We’re seeing no external indicators of large productivity gains. Even assuming that productivity gains in large corporations are swallowed up by inefficiencies, you’d expect externally verifiable metrics to show a 2x or more increase in productivity among indie developers and small companies.

So far it’s just crickets.

