Ratelman's comments | Hacker News

What makes you consider them close (aside from length of friendship)?


The main thing that tells me a friendship is close: I’m sad when I don’t see them, but not worried that we’re drifting apart.

I dunno. It isn’t well defined, I think. We come with built-in accelerators for social interactions, right? They run some weird proprietary language, I guess; the rest of my brain can’t make heads or tails of it.


Boils down to the basics of proper science - how does one measure/quantify close friends?


Reflecting on my own experience:

- Frequency of contact (if I see them once a year, I can't really count them as close friends).

- How involved they are in my life: are they people I turn to when I'm facing a problem, and do they turn to me when facing their own?

- Do we have frequent deep conversations? Not just surface-level talk about the weather, sports, etc., but stuff that matters.

Quantifying this: length of friendship (# of years), frequency of contact (annually, monthly, weekly, etc.), level of trust (low, medium, high - the "can I trust my kids with them" kind of trust), and level of involvement (low, medium, high - what things do I feel comfortable sharing with them? I suppose this is also level of trust). A rough sketch of this as a score is below.
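Something like this toy rubric, say (the field names, scales, and weights are all my own invention, just to make the dimensions concrete):

    # Toy closeness score over the dimensions above; weights are arbitrary.
    from dataclasses import dataclass

    CONTACT = {"annually": 1, "monthly": 2, "weekly": 3}
    LEVEL = {"low": 1, "medium": 2, "high": 3}

    @dataclass
    class Friendship:
        years: int        # length of friendship
        contact: str      # "annually" | "monthly" | "weekly"
        trust: str        # "low" | "medium" | "high"
        involvement: str  # "low" | "medium" | "high"

        def closeness(self) -> float:
            # Cap tenure so a very old but distant friendship can't dominate.
            return (min(self.years, 10) / 10
                    + CONTACT[self.contact]
                    + LEVEL[self.trust]
                    + LEVEL[self.involvement])

    print(Friendship(15, "weekly", "high", "high").closeness())  # 10.0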


>> Reflecting on my own experience - frequency of contact (if I see them once a year, can't really count them as close friends)

I think this one is interesting. If you saw them daily for 20 years and then transitioned to once a year are they automatically not close friends? Even if they satisfied the other criteria (like you could turn to them when you are facing a serious problem, you have deep conversations on that annual meeting because you are comfortable with them, etc)?


I'd say we need a more analog definition than a binary one.

The term 'close friend', at least to me, means exactly that: close, either in physical proximity or in depth and regularity of contact.

Only talking to them once a year, even in depth, makes them more like a semi-close friend. They are not there to help you with the day-to-day issues that you may not even realize you're having.


Would they help you move a sofa bed into an upstairs room without hesitation?

That's my criterion, anyway...


Yeah, he was quite vocal in his opinion that they would plateau earlier than they did, and that little value would be derived from them because they're just stochastic parrots. I agree with him that they're probably not sufficient for AGI, but, at least in my experience, they're adding a lot of value and continuously getting better at a range of tasks he wasn't expecting them to handle.


That was my thinking exactly - but semantic equivalence is also only relevant when the output needs to be factual, not necessarily for ALL outputs (if we're aiming for LLMs to present as "human", or for interactions with LLMs to be naturally conversational...). This excludes the world where LLMs act as agents - there you would of course always want the LLM to be factual and thus deterministic.


In a few years we've gone from gibberish (less poetic maybe, less polished and surprising, but nonetheless gibberish) to legit conversational and, in my own opinion, well-rounded answers. This is a great example of hard-core engineering: no matter what your opinion of the organisation and saltman is, they have built something amazing. I do hope they continue with their improvements; it's honestly the most useful tool in my arsenal since Stack Overflow.


Which statistics in which study? Given the current system, any sampling from a college/university would be cherry-picking vs. the general populace (unless you also sample the general population with similar constraints to ensure a like-for-like comparison), so it can't really be trusted.


Interesting/unfortunate/expected that GPT-5 isn't touted as AGI or some other outlandish claim. It's just improved reasoning etc. I know it's not the actual announcement and it's just a single page accidentally released, but it at least seems more grounded...? Have to wait and see what the actual announcement entails.


At this point it's pretty obvious that the easy scaling gains have been made already and AI labs are scrounging for tricks to milk out extra performance from their huge matrix product blobs:

- Reasoning, which is just very long inference coupled with RL

- Tool use, aka an LLM with glue code to call programs based on its output

- "Agents", aka LLMs with tools in a loop (see the sketch below)

Those are pretty neat tricks, and not at all trivial to get actionable results from, engineering-wise, mind you. But the days of the qualitative intelligence leaps from GPT-2 to 3, or 3 to 4, are over. Sure, benchmarks do get saturated, but at incredible cost, and they force AI researchers to make up new "dimensions of scaling" as the ones they were previously banking on stall. And meanwhile it's all your basic next-token-prediction blob underneath, just with a few optimizing tricks.
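To make "tools in a loop" concrete, here's a minimal toy sketch. The llm() function is a scripted stand-in (a real agent would call an actual model API), and the calculator tool is made up for illustration:

    # Toy "agent": a model in a loop that can either call a tool or answer.
    import json

    def calculator(expression: str) -> str:
        # Toy tool: evaluate an arithmetic expression with builtins disabled.
        return str(eval(expression, {"__builtins__": {}}))

    TOOLS = {"calculator": calculator}

    # Canned model outputs: first a tool call, then a final answer.
    SCRIPTED_REPLIES = [
        json.dumps({"tool": "calculator", "args": {"expression": "6 * 7"}}),
        json.dumps({"answer": "6 * 7 = 42"}),
    ]

    def llm(transcript):
        # Stand-in for a model call: pick a reply based on turns taken so far.
        turns = sum(1 for line in transcript if line.startswith("assistant:"))
        return SCRIPTED_REPLIES[turns]

    def run_agent(task, max_steps=5):
        transcript = [f"user: {task}"]
        for _ in range(max_steps):
            reply = llm(transcript)
            transcript.append(f"assistant: {reply}")
            msg = json.loads(reply)
            if "answer" in msg:              # model decided it's done
                return msg["answer"]
            result = TOOLS[msg["tool"]](**msg["args"])  # run requested tool
            transcript.append(f"tool: {result}")        # feed result back
        return "step budget exhausted"

    print(run_agent("What is 6 * 7?"))  # -> 6 * 7 = 42

The point is just the control flow: the glue code does nothing clever; a loop plus tool results fed back into context is the whole trick.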

My hunch is that there won't be a wondrous, life-altering AGI (poorly defined anyway), just consolidation of existing gains (distillation, small language models, MoE, quality datasets, etc.) and the finding of new dimensions and sources of data (biological data and 'sense data' for robotics come to mind).


This is the worst they’ll ever be! It’s not just going to be an ever-slower asymptotic improvement that never quite manages to reach escape velocity but keeps costing orders of magnitude more to research, train, and operate…


I wonder whether the markets will crash if GPT-5 flops, because it might be the model that cements the idea that, yes, we have hit a wall.


I'm the first to call out ridiculous behavior by AI companies, but short of something massively below expectations this can't be bad for OpenAI. GPT-5 is going to be positioned as a product for the general public first and foremost. Not everyone cares about coding benchmarks.


Llama 4 arguably destroyed Meta's LLM lab, and it wasn't even that bad a model.


Did it? Could you summarize the highlights? Morale, brain drain, ...?


> massively below expectations

Well, the problem is that the expectations are already massive, mostly thanks to sama's strategy of attracting VC.


OpenAI's announcements are generally a lot more grounded than the hype surrounding them and their stuff.

e.g. if you look at Altman's blog post about "superintelligence in a few thousand days", what he actually wrote doesn't even disagree with LeCun (famously a naysayer) about the timeline.


A few thousand days is decades.


"A few thousand days" is a minimum of 5.5 years; LeCun has similar timelines.


Yeah, I guess it wouldn't be that big, but it will have a lot of hype around it.

I doubt it can even beat Opus 4.1.


Extensive post on how neurosymbolic AI - the marriage between the connectionist and symbolic approaches to AI - is potentially finally vindicated. Opinion piece by Gary Marcus.


In America, maybe; in South Africa it's quite the opposite, considering the government provides a lot more support for poor non-white folks than for white folks (specifically based on race).


Yes, a bastion of pro-black racism, post-apartheid South Africa.

All those white folks fleeing the country looking like the fucking Monopoly Man with their bags of money were doing it because of... anti-white racism.


Might be missing something, but how is this on the front page of Hacker News? It feels more like an ad than anything else.

