> Or it took an hour of back and forth with ChatGPT loaded up with those 34 pages.
That's exactly what I was thinking when I read that line. And there's nothing necessarily wrong with using AI to help decipher large legal documents; just be honest about it.
I've always been baffled that we just hand man pages over to a normal pager rather than something that actually understands their structure. That "look up a flag" case is exactly what bothers me constantly when viewing man pages. And the pager search they say we should be using doesn't even work consistently, since the flag might not be the first non-whitespace thing on a line if there are aliases.
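That failure mode is easy to demonstrate: a search anchored at the start of the line misses a flag that's listed after one of its aliases, while a regex that tolerates leading aliases finds it. A minimal sketch, using a hypothetical OPTIONS excerpt rather than a real man page:

```shell
# Hypothetical excerpt from a man page's OPTIONS section, where -R
# is listed after its alias -r on the same line.
excerpt='  -r, -R, --recursive
       list subdirectories recursively'

# A search anchored like "^ *-R" (the pager equivalent of /^ *-R)
# misses the line, because -r comes first:
if echo "$excerpt" | grep -q '^ *-R'; then
  echo "anchored search: found"
else
  echo "anchored search: missed"
fi

# Allowing any number of preceding comma-separated aliases finds it:
if echo "$excerpt" | grep -qE '^ *(-[[:alnum:]]+, *)*-R(,| |$)'; then
  echo "alias-tolerant search: found"
fi
```

A structure-aware viewer could do this matching for you instead of making you guess the right regex every time.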
Yes, AI has taken away the tedium, but a lot of that could already be overcome by leveraging your text editing tools well or with basic code generation (such as being able to generate the skeleton of a class from an interface).
And there was something nice about still having to put in the manual work in those cases. It let me process what the code is actually doing and gave me the opportunity to internalize it in a way that just doesn't happen with AI. It also sort of gave me a thinking break where I was engaged at just the right level to let the thoughts about the more interesting parts float around in my head. With AI writing all the code, I feel like I'm either fully engaged with those thoughts or not engaged at all. And that's a bit of a problem because aha moments often happen when the idea is in that middle area of thought.
I'm a little wary of believing this without confirmation. It certainly sounds like something an app from a big Chinese company might do, but the LLM writing style, with em-dashes replaced by double hyphens, looks like someone trying to hide that they used an LLM. And I noticed that the account for the gist submission is only 3 hours old. And then looking here, the account on HN is also only 3 hours old. Seems a little sketchy to me.
It's not, but do you really think the people having Claude build wrappers around Claude were ever aware of how services like this are typically offered?
If you tell me "no fucking way" by running it through an LLM, I will be far more pissed than if you had just sent me "no fucking way". At least then I know a human read and responded, rather than my email just being processed by a damned robot.
I'm feeling the same way. It's quite a contrast to all the hype posts that make it sound like you give the AI a rough idea of what you're looking for and then it will build it from start to finish on its own.
Did that claim actually have hard data behind it? Because publications that don't use subscriptions still need people to show up and look at ads, so they are motivated to publish the clickbaitiest things possible. Maybe the difference in that case is that they will publish content that attracts people from various political extremes? That certainly wouldn't make them less polarized, though.
Ad-based revenue comes with its own problems. But I doubt there are that many readers who so ferociously disagree with an article that they then refuse to consume any more free content from that outlet, which is what happens when someone cancels a subscription. So, to me it makes sense that ad-supported news outlets don't suffer as much from presenting a wider range of views.
>Maybe the difference in that case is that they will publish content that attracts people from various political extremes? That certainly wouldn't make them less polarized though
Replace "extremes" with "views". Most people aren't extremists. I don't understand why being exposed to various views would not make them less polarised.