Hacker News | Version467's comments

> anyone with fair knowledge of LLMs and E/R should be able to devise it.

While this may be true, I think it overlooks a really important aspect. Current LLMs could be very useful in many workflows if someone does the grunt work of properly integrating them. That’s not necessarily complicated, but it is quite a bit of work.

I don’t think we’ll hit a capabilities wall anytime soon, but if we do, we’ll still have years of work to do to properly make use of everything LLMs have to offer today.


This grunt wall is the 10-20% that the model gets wrong or just misbehaves on, and it can be a lot of struggle. I'm not talking about easy stuff like writing a letter to a customer, but about classification and text2code, which are hard.

I'm stating this having just finished my long-overdue master's, where the topic was text2sql with a pinch of my own thing. Hundreds of papers have been written on this topic, and only when complex agents, multi-prompting, and actual discrete systems play together do things start to work. So just tossing everything into the context is not a solution.
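To illustrate (a toy sketch, not my actual thesis code; `ask_llm` is a hypothetical stand-in for a real model call), the generate-validate-retry loop that makes these pipelines work looks roughly like this:

```python
import sqlite3

def text2sql(question: str, schema: str, ask_llm, max_retries: int = 3) -> str:
    """Generate SQL for `question`, validating each candidate against the
    real schema before accepting it. `ask_llm(prompt) -> str` is a stand-in
    for an actual model call."""
    conn = sqlite3.connect(":memory:")
    conn.executescript(schema)  # build the target schema so EXPLAIN can check candidates
    prompt = f"Schema:\n{schema}\nQuestion: {question}\nSQL:"
    error = None
    for _ in range(max_retries):
        if error:  # multi-prompting: feed the database error back to the model
            prompt += f"\nPrevious attempt failed with: {error}\nCorrected SQL:"
        candidate = ask_llm(prompt)
        try:
            conn.execute(f"EXPLAIN {candidate}")  # discrete check: parse and plan without executing
            return candidate
        except sqlite3.Error as exc:
            error = exc
    raise ValueError(f"no valid SQL after {max_retries} attempts: {error}")
```

The discrete part is the `EXPLAIN` check: SQLite parses and plans the candidate without running it, and any error becomes feedback for the next prompt.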

In practice, I agree, LLMs have their role in software: as classifiers, in image segmentation, code assist, etc. But it is very wrong to put all your eggs in one basket, and this basket is a very, very shady one.


Yes, but it works with a vpn and the change in latency isn’t big enough to have a noticeable impact on usability.


I can confidently say that a car would be useless to me without gps. I can navigate the small town (10k people) I grew up in, but that’s it. I wouldn’t try to navigate the city I live in now (~350k people) in a car, beyond the street I live on. If I had to and knew gps was broken (hypothetically), then I’d try to reschedule, instead of attempting to navigate with signs and a paper map.

I could probably work with printed turn by turn instructions, but that’s about it.

And I think that probably comes close to what the other post had in mind with “cannot navigate”.


You are in for a fun adventure, if you have an afternoon to yourself and no particular destination. My grandmother always began our day trips that way, in her shiny Buick. Remember to stop to rest.


That's unfathomable to me. I live in a metropolitan area of about a million people and I can go basically anywhere without a navigation system; in fact, I'll only pull up the GPS if I'm going to some remote neighborhood.

P.S.: I was born here and have lived here all my life.


GPS is super-convenient when you're in an unfamiliar area. I sometimes catch myself thinking how did we ever get along without it. But we did. You just looked at a map before you set off, noted street names and turns, and paid attention. You would do the same thing and manage pretty well if you had to.


Of course it's convenient. But I learned to drive before we had it, and learned to find my way around without it. Maps helped, but you can do a lot with logic and by understanding direction.


You don't ever walk in your city without gps? Wouldn't you know the streets after a while?


Lots of people don't live in walkable cities. There's no realistic way they'd get to where they're going by walking. They might walk their neighborhood and know that, but these days lots of people don't even bother walking around their neighborhood.

So they'll end up driving everywhere they go. Work, groceries, restaurants, etc. Always driving. Many won't go down paths that weren't previously suggested by their GPS. And often those destinations aren't designed to be walkable either. Massive parking lots separating the various storefronts. Corporate campuses completely surrounded by a sea of parking lots and garages. Nowhere to walk.

That said, there's still exploring possible in a car-centric place. I tend to take alternate paths to get places, purposefully "get lost" driving around, and explore places I've never been before. But that has costs and lots of people don't bother doing that.


I do, but only for things in my immediate area. Stuff like grocery shopping, haircuts, doctors appointments, etc. Those are all reachable on foot and I know how to get there, because I do it regularly. But the first few times I used gps to find it. Now I know the route.

But that doesn’t help me for anything beyond that radius and probably not even there, because those routes go through parks and other stuff, so the car routes would be different.


Are you alright? This reads like it was written by someone actively experiencing psychosis or a manic episode.


GGP here. This was my reaction as well, though I fully admit to projecting my own experiences. I relate to this style of discourse...

Anyways, I wanted to note that I appreciated seeing this comment. Folks tend to have a hard time empathizing with this sort of thing.


Sorry I was feeding into his manic episode. I do see the point he’s making but it’s flawed. It’s similar to the gun debate.


I am quite alright, thanks.

Just want dog owners to take some responsibility.


Do you have a source for this? Dog attacks being more common than car accidents seems extremely unlikely to me, given how prevalent car accidents are.

I really wouldn't expect a cellphone ban for students to result in a significant rise of delayed treatment for injuries caused specifically by dog attacks. Even if dogs attacking students is such a common occurrence that it warrants consideration in this proposal (which I doubt), it's still a school. Teachers and other staff are around. Just have them call emergency services in case of injuries.


Other commenter posted some data.

My point is that lighter attacks (scratches, jumping, sniffing, licking...) are so common they can't even all be reported.

If students have no cameras, many more teachers would bring their "pets" into school.


> lighter attacks (scratches, jumping, sniffing, licking...)

Sniffing and licking are not “attacks”. If they are, a car honking or braking suddenly is a crash.

> If students have no cameras, many more teachers would bring their "pets" into school

How many teachers brought pets to school before the 2000s? How many people brought pets to the office? (If anything, there are more pets at the workplace now than ever before.)

The taking of a cell phone seems to have emotionally provoked you. Reflecting on why you’re responding to a phone like a crack pipe might be a better use of your time than pretending to have a phobia of dog licks.




> why my kid needs recorder, many abusers try to down play such attacks!

Have you ever had anyone in a position of authority do anything about a dog sniffing or licking your kid? Because if so, that’s national-news level hilarious.


Yes. There are existing leash laws. But it takes a lot of effort to get them enforced. And you need a recording for proof!


This might be too basic, but I found this blog post to be an incredible introduction to queues: https://encore.dev/blog/queueing


Does it work well on Linux now? I’ve been looking for a virtual desktop alternative for a while now, but last time I checked out alvr it still seemed quite unstable.


I find this framing to be extremely far away from the reality of any sales conversations I've been a part of. Flipping it like this is a rationalization that helps you sleep better, not some deep insight that the sales profession is great actually.


It means that OpenAI's public commitments to allocate resources for safety research don't track with what they actually do, and the people who were hired to work on safety (or, in Schulman's case, chose to focus on safety) don't like it, so they leave.


Must've been a difficult decision with him being a cofounder and all, but afaik he's been the highest-ranking safety-minded person at OpenAI. He says it's not because OpenAI leadership isn't committed to safety, but I'm not sure I buy that. We've seen numerous safety people leave for exactly that reason.

What makes this way more interesting to me, though, is how this announcement coincides with Brockman's sabbatical. Maybe there's nothing to it, but I find it more likely that things really aren't going well with sama.

Will be interesting to see how this plays out and if he actually returns next year or if this is just a soft quitting announcement.


The reality is that every other person in tech is now hoping for Sama to fail. The world doesn't need AI to have a Silicon Valley face. Anthropic is doing much, much better PR work by not having a narcissist as CEO.


Contrarily, I think the reality is that most of us couldn't care less about this AI soap opera.


I want the best model at the lowest rate (and preferably lowest energy expenditure) and with the easiest access. Anything else is just background noise.


Some people are wary of enabling CEOs of disruptive technologies to become the richest people in the world, take control of key internet assets and, in random bursts of thin-skinned megalomania, tilt the scales towards politicians or political groups who take actions that negatively affect their own quality of life.

It sounds absurd, but some are watching such a procession take place live as we speak.


I still haven't seen it do anything actually interesting, especially when you consider that you have to fact-check the AI.


I'm continuously baffled by such comments. Have you really tried? Especially newer models like Claude 3.5?


I hear a lot of people say good things about Copilot too, but I absolutely hate it. I still have it enabled for some reason, but it constantly suggests incorrect things. There have been a few amazing moments, but man, there are a lot of "bullshit" moments.


Even when we get a gen AI that exceeds all human metrics, there will 100% still be people who will say with a straight face, "Meh, I tried it and found it to be pretty useless for my work."


I have, yeah.

Still useless for my day to day coding work.

Most useful for whipping up a quick bash or Python script that does some simple looping and file IO.


To be fair, LLMs are pretty good natural language search engines. Like when I'm looking for something in an API that does something I can describe in natural language, but not succinctly enough to show up in a web search, LLMs are extremely handy, at least when they don't just randomly hallucinate the API. On the other hand I think this is more of a condemnation of the fact that search tech has not 'really' meaningfully advanced beyond where it was 20 years ago, more than it is a praise of LLMs.


> LLMs are extremely handy, at least when they don't just randomly hallucinate

I work in tech and it’s my hobby, so that’s what a lot of my googling goes towards.

LLMs hallucinate almost every time I ask them anything too specific, which at this point in my career is all I’m really looking for. The time it takes for me to realize an llm is wrong is usually not too bad, but it’s still time I could’ve saved by googling (or whatever trad search) for the docs or manual.

I really wish they were useful, but at least for my tasks they’re just a waste of time.

I really like them for quickly generating descriptions for my D&D settings, but even then they sound samey if I use them too much. Obviously they'd sound samey if I made up 20 at once too, but at that point I'm not really being helped or enhanced by using an LLM; it's just faster at writing than I am.


I don't mean this as a slight, just an observation I have made many times: people who struggle to get utility from SOTA LLMs tend not to have spent enough time with them to feel out good prompting. In the same way that there is a skill to googling information, there is a skill to teasing consistently good responses from LLMs.


Why spend my time teasing and coaxing information out of a system which absolutely does make up nonsense when I can just read the manual?

I spent 2023 developing LLM powered chatbots with people who, purportedly, were very good at prompting, but never saw any better output than what I got for the tasks I’m interested in.

I think the “you need to get good at prompting” idea is very shallow. There’s really not much to learn about prompting. It’s all hacks and anecdotes which could change drastically from model to model.

None of which, from what I've seen, makes up for the limitations of LLMs, no matter how many times I try adding "your job depends on formatting this correctly" or reordering my prompt so that more relevant information comes later, etc.

Prompt engineering has improved RAG pipelines I’ve worked on though, just not anything in the realm of comprehension or planning of any amount of real complexity.
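For what it's worth, the retrieval half of such a pipeline can be sketched in a few lines (word overlap stands in for a real embedding search; all names here are illustrative):

```python
def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank chunks by word overlap with the query (a cheap stand-in for
    embedding similarity) and return the top k."""
    query_words = set(query.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: len(query_words & set(c.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, chunks: list[str]) -> str:
    """Assemble only the retrieved chunks and the question into one prompt."""
    context = "\n".join(f"- {c}" for c in retrieve(query, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

Prompt wording then only has to steer the model within the retrieved context, which is exactly where small prompt changes tend to pay off.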


People also continue to use them as knowledge databases, despite that not being where they shine. Give the model enough context (descriptions, code, documentation, ideas, examples) and have a dialog; that's where these strong LLMs really shine.


Summarizing, doc Q&A, and unstructured text ingestion are the killer features I’ve seen.

The 3rd one still being quite involved, but leaps and bounds easier than 5 years ago.


I see it do a lot that's interesting but for programming stuff, I haven't found it to be particularly useful.

Maybe I'm doing it wrong?

I've been writing code for ~30 years, and I've built up patterns and snippets, etc... that are much faster for me to use than the LLMs.

A while ago, I thought I had a eureka moment with it when I had it generate some nodejs code for streaming a video file - it did all kinds of cool stuff, like implement offset headers and things I didn't know about.

I thought to myself, "self - you gotta check yourself, this thing is really useful".

But then I had to spend hours debugging & fixing the code that was broken in subtle ways. I ended up on google anyway learning all about it and rewrote everything it had generated.

For that case, while I did learn some interesting things from the code it generated, it didn't save me any time - it cost me time. I'd have learned the same things from reading an article or the docs on effective ways to stream video from the server, and I'd have written it more correctly the first go around.
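The subtle part of that kind of streaming code is usually the Range header math. A rough sketch of just that piece (illustrative Python, not the Node.js code in question):

```python
def parse_range(header: str, file_size: int) -> tuple[int, int]:
    """Parse a single-range `Range: bytes=start-end` header into inclusive
    byte offsets, handling the open-ended (`bytes=100-`) and suffix
    (`bytes=-100`) forms the HTTP spec allows."""
    unit, _, spec = header.partition("=")
    if unit != "bytes":
        raise ValueError(f"unsupported range unit: {unit}")
    start_s, _, end_s = spec.partition("-")
    if start_s == "":                      # suffix form: the last N bytes
        start = max(file_size - int(end_s), 0)
        end = file_size - 1
    else:
        start = int(start_s)
        end = int(end_s) if end_s else file_size - 1  # open-ended form
    if start > end or start >= file_size:
        raise ValueError("range not satisfiable")
    return start, min(end, file_size - 1)
```

The server then replies with status 206 and a `Content-Range: bytes {start}-{end}/{file_size}` header; getting the inclusive offsets wrong is exactly the kind of subtle breakage described above.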


Your bar for interesting has to be insane then. What would you consider interesting if nothing from LLMs meets that bar?


For example, there exist quite a lot of pure math papers that are much deeper than basically all the AI stuff I have yet seen.


So if LLMs weren't surprising to you, it would imply you expected this. If you did, how much money did you make on financial speculation? It seems like being this far ahead should have made you millions even without a lot of starting capital (look at NVDA alone)


> So if LLMs weren't surprising to you, it would imply you expected this.

I do claim that I have a tendency to be quite right about the "technological side" of such topics when I'm interested in them. On the other hand, events turn out to be different because of "psychological effects" (let me put it this way: I have a quite different "technology taste" than the market average).

In the concrete case of LLMs: the psychological effect why the market behaved so much differently is that I believed that people wouldn't fall for the marketing and hype of LLMs and would consider the excessive marketing to be simply dupery. The surprise to me was that this wasn't what happened.

Concerning NVidia: I believed that - considering the insane amount of money involved - people/companies would write new languages and compilers to run AI code on GPUs (or other ICs) of various different suppliers (in particular AMD and Intel) because it is a dangerous business practice to make yourself dependent on a single (GPU) supplier. Even serious reverse-engineering endeavours for doing this should have paid off considering the money involved. I was again wrong about this. So here the surprise was that lots of AI companies made themselves so dependent on NVidia.

Seeing lots of "unconventional" things is very helpful for doing math (often the observations that you see are the start of completely new theorems). Being good at stock trading and investing in my opinion on the other hand requires a lot of "street smartness".


Re: NVIDIA. I wholeheartedly agree. Google/TPU is an existence proof that it is entirely possible and rational to do so. My surprise was that everyone except Google missed it.


Okay, so $0, it sounds like. You should figure out a way to monetize your future sight; otherwise it comes off as cynicism masquerading as intelligence.


> cynicism masquerading as intelligence

Rather: cynicism and a form of intelligence that is better suited to abstract math than investing. :-)


It spends money really well.


Then why are you reading Hacker News comments about it?


I guess I have a masochistic streak.


I think you are in one of the extreme bubbles. The general tech industry is not subscribed to the drama and has less personal feelings on individuals they do not directly know.


You are right. I should have said every other person (or every person) in HN.


Maybe the vocal minority that have a passionate dislike for someone they don't know?


It's not just the narcissist, it's the betrayal. The least open company possible. How did I end up cheering for Meta and Zuck?


I agree and I think that sane people will eventually prevail over the pathological narcissist.


Outlier success pretty much requires obsessive strategic thinking. Gates and Musk are super strategic but in a "weirdo autist" way, which doesn't have a big stigma attached to it anymore. Peter Thiel also benefits from his weirdness. Steve Jobs had supernatural charisma working in his favor. Sama has the strategic instinct but not the charisma or disarming weirdness other tech founders have. Sama is not unusually Machiavellian or narcissistic, but he will get judged more harshly for it.


What is a “Silicon Valley face”? Does Nvidia’s CEO have it? Google’s founders?

I guess anthropic’s founders don’t have it?

