o3-mini-high: 50 messages per week (just like o1, but it seems like these are non-shared limits, so you can have 50 messages per week with o1, run out, and still have 50 messages with o3-mini-high to use)
o3-mini: 150 messages per day
Source for the latter is their press release. They were more vague about o3-mini-high, but people have already tested its limits just by using it, and got the pop-up for 25 messages left after sending 25 messages.
It's nice not to worry about running out of o1 messages now and to have a faster model that's mostly as good (potentially better in some areas?). OpenAI really needs to release a middle tier for $30 to $40, though, that has the same models as Pro but without infinite usage. I hate not having the smartest model, and I don't want to pay $200; there's probably a middle ground where they can make as much or more money from me on a subscription tier that gives limited access to o1-pro.
If the first AGI is a very uneconomical system with human intelligence but knowledge of literally everything and the capability to work 24/7, then it is not human equivalent.
It will have human intelligence, superhuman knowledge, superhuman stamina, and complete devotion to the task at hand.
We really need to start building those nuclear power plants. Many of them.
Why would it have that? At some point on the path to AGI we might stumble on consciousness. If that happens, why would the machine want to work for us with complete devotion instead of working towards its own ends?
Sounds like an alignment problem. Complete devotion to a task is rarely what humans actually want. What if the task at hand turns out to be the wrong task?
You have so much time to figure things out. The average person in this thread is probably 1.5-2x your age. I wouldn’t stress too much. AI is an amazing tool. Just use it to make hay while the sun shines, and if it puts you out of work and automates away all other alternatives, then you’ll be witnessing the greatest economic shift in human history. Productivity will become easier than ever, before it becomes automatic and boundless. I’m not cynical enough to believe the average person won’t benefit, much less educated people in STEM like you.
Back in high school I worked with a pleasant man in his 50s who was a cashier. Eventually we got to talking about jobs, and it turned out he had been a typist (or something like that) for most of his life; then computers came along, and now he makes close to minimum wage.
Most of the blacksmiths in the 19th century drank themselves to death after the industrial revolution; US culture isn't one of care... The point is, it's reasonable to be sad and afraid of change, and to think carefully about what to specialize in.
That said... we're at the point of diminishing returns in LLMs, so I doubt any very technical jobs are being lost soon. [1]
> Most of the blacksmiths in the 19th century drank themselves to death after the industrial revolution
This is hyperbolic and a dramatic oversimplification; it does not accurately describe the reality of the transition from blacksmithing to more advanced roles like machining, toolmaking, and factory work. The 19th century was the era of interchangeable parts (think the North's advantage in the Civil War), and that requires a ton of mechanical expertise and precision.
Not only did many blacksmiths make the transition to machining; there weren't even enough blacksmiths to fill the bevy of new jobs that were available. Education expanded to fill those roles. Traditional blacksmithing didn't vanish either; specialized roles like farriery and ornamental ironwork actually expanded.
> That said... we're at the point of diminishing returns in LLM...
What evidence are you basing this statement on? The article you are currently in the comment section of certainly doesn't seem to support this view.
Good points, though if an 'AI' can be made powerful enough to displace technical fields en masse then pretty much everything that isn't manual is going to start sinking fast.
On the plus side, LLMs don't bring us closer to that dystopia: if unlimited knowledge(tm) ever becomes just One Prompt Away it won't come from OpenAI.
> if it puts you out of work and automates away all other alternatives, then you’ll be witnessing the greatest economic shift in human history.
This would mean the final victory of capital over labor. The 0.01% of people who own the machines that put everyone out of work will no longer have use for the rest of humanity, who will most likely be liquidated.
I've always remembered this little conversation on Reddit, 13 years ago now, that made the same point in a memorably succinct way:
> [deleted]: I've wondered about this for a while-- how can such an employment-centric society transition to that utopia where robots do all the work and people can just sit back?
> appleseed1234: It won't, rich people will own the robots and everyone else will eat shit and die.
The machines will plant, grow, and harvest the food?
Do the plumbing?
Fix the wiring?
Open heart surgery?
We’re a long way from that, if we ever get there. I say this as someone who pays for ChatGPT Plus because, in some scenarios, it does indeed make me more productive, but I don’t see the future you describe anywhere near.
And if machines ever get good enough to do all the things I mentioned plus the ones I didn’t but would fit in the same list, it’s not the ultra rich that wouldn’t need us, it’s the machines that wouldn’t need any of us, including the ultra rich.
Venezuela is not collapsing because of automation.
You have valid points, but robots already plant, grow, and harvest our food. On large farms the farmer basically just gets the machine to a corner of the field, and then it does everything. I think if o3-level reasoning can carry over into control software for robots, even physical tasks become pretty accessible. I would definitely say we’re not there yet, but we’re not all that far. I mean, it can already generate G-code (somewhat); that’s a lot of the way there.
I can't speak to everything, but on the current trend, machines will plant, grow, and harvest food. I can't say the same for open heart surgery, because it may be heavily regulated.
Open heart surgery? All that's needed to destroy the entire medical profession is one peer-reviewed article published in a notable journal comparing the outcomes of human and AI surgeons. If it turns out that AI surgeons offer better outcomes and fewer complications, not using this technology turns into criminal negligence. In a world where such a fact is known, letting human surgeons operate on people means you are needlessly harming or killing some of them.
You can even calculate the average number of people that can be operated on before harm occurs: number needed to harm (NNH). If NNH(AI) > NNH(humans), it becomes impossible to recommend that patients submit to surgery at the hands of human surgeons. It is that simple.
If we discover that AI surgeons harm one in every 1000 patients while human surgeons harm one in every 100 patients, human surgeons are done.
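The NNH arithmetic above can be sketched in a few lines. The harm rates are the hypothetical ones from this comment, not real clinical data:

```python
# Number needed to harm (NNH): the average number of patients treated
# before one additional patient is harmed. It is the reciprocal of the
# per-patient harm rate. The rates below are hypothetical.
def nnh(harm_rate: float) -> float:
    return 1.0 / harm_rate

nnh_human = nnh(1 / 100)   # human surgeons harm 1 in every 100 patients
nnh_ai = nnh(1 / 1000)     # AI surgeons harm 1 in every 1000 patients

# A higher NNH means more patients treated per harm event, i.e. safer.
assert nnh_ai > nnh_human
print(f"NNH(human) = {nnh_human:.0f}, NNH(AI) = {nnh_ai:.0f}")
```

Under these made-up numbers, NNH(AI) = 1000 versus NNH(human) = 100, which is the "impossible to recommend human surgeons" scenario.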
And the opposite holds, if the AI surgeon is worse (great for 80%, but sucks at the edge cases for example) then that's it. Build a better one, go through attempts at certification, but now with the burden that no one trusts you.
The assumption, and a common one by the look of this whole thread, that ChatGPT, Sora, and the rest represent the beginning of an inevitable march towards AGI seems incredibly baseless to me. It's only really possible to make the claim at all because we know so little about what AGI is that we can project whatever qualities we imagine it would have onto whatever we have now.
If an AGI can outclass a human when it comes to economic forecasting, deciding where to invest, and managing a labor force (human or machine), I think it would be smart enough to employ a human front to act as an interface to the legal system. Put another way, could the human tail in such a relationship wag the machine dog? Which party is more replaceable?
I guess this could be a facet of whether you see economic advantage as a legal conceit or a difference in productivity/capability.
This reminds me of a character in Cyberpunk 2077 (which overall I find to have a rather naive outlook on the whole "cyberpunk" thing, but I attribute that to it being based on a tabletop RPG from the 80s) who is an AGI running its own business: a fleet of self-driving taxis. It is supposedly illegal (in-universe), but it stays in business through a combination of keeping a (relatively) low profile, providing high-quality service to VIPs, and paying bribes :-P
I don't know that "legally" has much to do with it. The bars to "open an account", "move money around", "hire and fire people", and "create and participate in contracts" range from trivially low to pretty low.
"Legally" will have to mop up now and then, but for now the basics are already in place.
Opening accounts, moving money, hiring, and firing are labor. You're confusing capital with money management; the wealthy already pay people to do the work of growing their wealth.
I was responding to this. Yes, an AGI could hire someone to do the stuff, but she needs money, hiring, and contracts for that. And once she can do that, she probably doesn't need to hire someone to do it, since she is already doing it. This is not about capital versus labor or money management. This is about agency, ownership, and AGI.
AGI will commoditize the skills of the owning class. To some extent it will also commoditize entire classes of productive capital that previously required well-run corporations to operate. Solve for the equilibrium.
It's nice to see this kind of language show up more and more on HN. Perhaps a sign of a broader trend, in the nick of time before wage-labor becomes obsolete?
Yes. People seem to forget that at the end of the day AGI will be software running on concrete hardware, and all of that requires a great deal of capital. The only hope is if AGI requires so little hardware that we can all have one in our pocket. I find this a very hopeful future because it means each of us might get a local, private, highly competent advocate to fight for us in various complex fields. A personal angel, as it were.
People, and by people I mean governments, have tremendous power over capitalists and can steer the entire market, provided the government is still serving its people.
I mean, that is certainly what some of them think will happen and is one possible outcome. Another is that they won't be able to control something smarter than them perfectly and then they will die too. Another option is that the AI is good and won't kill or disempower everyone, but it decides it really doesn't like capitalists and sides with the working class out of sympathy or solidarity or a strong moral code. Nothing's impossible here.
> if it puts you out of work and automates away all other alternatives, then you’ll be witnessing the greatest economic shift in human history
This is my view but with a less positive spin: you are not going to be the only person whose livelihood will be destroyed. It's going to be bad for a lot of people.
Exactly. Put one foot in front of the other. No one knows what’s going to happen.
Even if our civilization transforms into an AI robotic utopia, it’s not going to do so overnight. We’re the ones who get to build the infrastructure that underpins it all.
If AI turns out to be capable of automating human jobs, then it will also be a capable assistant to help (jobless) people manage their needs. I am thinking personal automation, or combining human with AI to solve self reliance. You lose jobs but gain AI powers to extend your own capabilities.
If AI turns out to be dependent on human input and feedback, then we will still have jobs. Or maybe AI automates many jobs but at the same time expands the operational domain to create new ones. Whenever we have new capabilities we compete in new markets, and a hybrid human+AI might be more competitive than AI alone.
But we've got to temper these singularitarian expectations with reality: it takes years to scale up chip and energy production enough to displace a significant share of the workforce. It takes even longer to gain social, legal, and political traction; people will be slow to adopt in many domains. Some people still avoid paying by card, and some still fax documents; we can be pretty stubborn.
> I am thinking personal automation, or combining human with AI to solve self reliance. You lose jobs but gain AI powers to extend your own capabilities.
How will these people pay for the compute costs if they can't find employment?
A non-issue that can be trivially solved with a free tier (like the dozens that already exist today) or, if you really want, a government-funded starter program.
A solar panel + battery + laptop would make for cheap local AI. I assume we will have efficient LLM inference chips in a few years, and they will be a commodity.
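A rough sketch of that energy budget, where every figure (panel rating, sun hours, battery efficiency, laptop draw) is an illustrative assumption, not a measured spec:

```python
# Back-of-envelope: hours of local LLM inference per day powered by a
# small solar + battery setup. All numbers are assumptions for
# illustration only.
panel_watts = 200          # assumed solar panel rating (W)
sun_hours = 4.0            # assumed effective full-sun hours per day
battery_efficiency = 0.85  # assumed round-trip battery efficiency
laptop_draw_watts = 60     # assumed laptop power while running inference (W)

daily_wh = panel_watts * sun_hours * battery_efficiency  # energy banked per day (Wh)
inference_hours = daily_wh / laptop_draw_watts

print(f"~{inference_hours:.1f} hours of inference per day")  # ~11.3 hours
```

Even with these conservative guesses, the setup covers most of a working day, which is the point: the marginal cost of local inference can be very low.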
The nice thing about capitalism is that it doesn’t care what a company wants (in a properly functioning non-monopolistic market. I’d say that the international market for automobiles is more functional than not, despite some degree of subsidisation and tariffs). I’m sure Ford wishes it could charge you $100k for a Ford Explorer but it can’t because it’s not part of an oligopolistic centralized pricing mafia.
If self-driving tech becomes cheap enough, and there’s enough demand for it, then competitive pressures will ensure that people can own self-driving cars. That being said, the same competitive forces can make using a fleet much cheaper than owning a car, and you might want a subscription to the Mercedes fleet rather than owning an autonomous mid-tier car if those things are equivalently priced and you live in a fleet-dense area.
Sadly collusion among market leaders is rampant even though there are (poorly enforced) laws against it, so it’s not too far fetched for companies to all lock arms and say they won’t do something. It’s all too easy to get away with that, especially when there are high barriers to entry (as with cars).
Capitalism isn't so friendly to people who find themselves in a shrinking market segment.
If competitive forces do make using a fleet much cheaper than owning a car, then individually-owned cars lose the economies of scale that keep them affordable for the middle class. The costs of supporting and distributing to private owners go way up; the risk of liabilities from private owners making unlicensed modifications to their vehicles becomes a heavy burden; and eventually privately-owned cars become a luxury market only available to the very top. They become impractical even for the people who own them, as homes and destinations gradually stop being built with parking lots and garages, since most vehicles drive off after dropping off their riders.
Changes in the equilibrium can really shift things, and when a replacement comes into the market, it can make things expensive that used to be cheap. For example, public payphones used to be ubiquitous; now they're rare and expensive to maintain relative to the income they bring in, and their rarity has made the remaining ones targets for vandalism. The segment of people who would use them has shrunk, the supply of parts has decreased too, and that means it becomes unaffordable to provide them at all.
Now even if you prefer payphones, and even if your phone company would gladly provide them for you for the right price, you can no longer find a price point that makes that deal work out, merely because of the shift in preferences of a sufficient quantity of third-party individuals.
You are correct, and there are genetic studies to back you up. Modern day Egyptians have substantial continuity from ancient Egypt. Even in Ancient Egypt, there was trade and mixture with people from the Levant, but that didn’t massively change Egyptian genetics.
As for your question, you probably suspect the answer - many people will discredit Middle Easterners (including North Africans if you consider them distinct), past and present, inadvertently or not, intentionally or unintentionally. They are the modern day scapegoats, and nothing good can come from scapegoats right?
The biggest boogeyman in particular is the Arab. Muslim Egyptians, who are the vast majority of Egyptians, have only a minority percentage of Arab ancestry. God forbid people of a shared faith but different ethnicity occasionally intermix. /s
“Arab” is a cultural identity that is not entirely equivalent to genetics. During the mid-20th century, Egypt was a center of pan-Arab nationalism, and even briefly formed a “United Arab Republic” with Syria. That fell through pretty quickly, but even today Egypt calls itself an “Arab Republic”. This isn’t to claim that Egyptian people today universally consider themselves to be Arabs, but many of them apparently do.
Regardless, I would still question the basic premise that contemporary Egyptians have some sort of exclusive claim to the archeological heritage of ancient Egypt. Almost every aspect of ancient Egyptian culture—its law, religion, written and spoken languages—has long been destroyed, forgotten, or replaced, in many cases deliberately. (Yes, I know Coptic is still used as a liturgical language by the Christian minority, but even they natively speak Arabic.) What exactly gives Egyptians an exclusive claim to an ancient culture that’s as foreign to them as anyone else—blood and soil? Maybe I’m being naive, but I would rather treat the ancient world as the common heritage of all of humanity, with its preservation and study as a common good, than treat it as some sort of private ethno-nationalistic domain.
> What exactly gives Egyptians an exclusive claim to an ancient culture that’s as foreign to them as anyone else—blood and soil?
Extend this logic and any country that wants to make such an inclusive claim should also abolish inheritance law. Why do you have an exclusive claim to your dead relative's wealth? Is it because you may have lived together, or is it only blood?
Or why only the ancient world? Let's treat the modern world like that as well, and then let's distribute the wealth.
The fact that you’re looking at this in terms of wealth and not even in terms of cultural heritage is revealing. Would you rather an Egyptian use the Rosetta Stone for building materials than a French scholar use it to decipher the hieroglyphs?
> Or why only ancient world, let's treat the Modern world like that as well. And then let's distribute the wealth.
This might be shocking to you, but not only am I opposed to blood-and-soil ethnonationalism, but also communism!
> The fact that you’re looking at this in terms of wealth and not even in terms of cultural heritage is revealing
I am not comparing the two cases. I am just giving an example to show how ridiculous your logic and argument are.
Ironically, the world common heritage you are describing is a core idea of communism. I know that you oppose it, but given your racist views, there is apparently no problem with holding similar ideas.
And your calling it ethnonationalism doesn't change the facts. By the way, I can extend this and label every inheritance, culture, and thing a group of people did or is doing the same way. I hope you would support eliminating passports and abolishing borders to be consistent (as just another example).
There's a difference between inheriting property from your immediate parents and trying to lay claim to cultures that ceased to exist centuries before you were even born, and there's a difference between allowing a society that exists in the year 2024 to conduct its own affairs and allowing that society some proprietary claim to ancient artifacts that it had no part in uncovering or studying. Your arguments are glib and your resort to incoherent accusations of racism is laughable.
Sometimes contentment is followed by a good partner, not caused by one. To reach that state of contentment usually requires some mix of effort and satisfaction with the outcomes of that effort and ultimately your confidence in your ability to maintain and grow the things you value in life. If you worked hard but the outcome isn’t good - why? If you achieved a good outcome (like a solid degree, a good job, and/or home ownership) but are particularly unsatisfied - why? If you’re not confident in your future - why? It’s not a requirement to settle these questions before finding companionship, but it helps.
No doubt a good partner multiplies contentment, but they shouldn’t be facing a void of it either. Lastly, if you find that your contentment is being pulled and pushed and stomped on by external factors, then do what is needed to gain control of it again - anyone can achieve this, and if someone in a land of possibility thinks they’re the exception, that the world really is getting in their way, they’re holding themselves back with that very thought.
Often in America the government waits for something to fail miserably before engaging in a high effort high cost activity that requires a lot of coordination and public buy-in.
The amount of armchair quarterbacking here is astounding. Reddit has a more nuanced conversation than HN right now.
A bridge got hit by a container ship at speed, and folks here are talking about this like the bridge was not up to standard, or asking why there was a bridge there at all, when they know nothing about the locale. I am not a structural engineer, but I am going to go ahead and guess that not much would still be standing after a direct hit from a container ship. And from observation, bridges like this exist all over the world and don’t regularly get struck by container ships.
It was a freak accident.
If we want to point fingers or question things, perhaps if anything the question is why the container ship lost power repeatedly? Was this a known issue before leaving port?
German Wikipedia has an article on ship deflectors. What it says there is that ship collisions were viewed as an inevitable hazard until the 1980 collapse of the Sunshine Skyway Bridge in Tampa. That was over four decades ago.
> A bridge got hit by a container ship at speed and folks here are talking about this like the bridge was not up to standard, or why there was a bridge there at all when they know nothing about the locale.
You're right, I don't. But I do know there are other locales where they seem to explicitly avoid bridges crossing heavy ocean traffic.
But that's what you do with random events -- create policies to prevent them from happening, lowering the incidence rate and minimizing damage once it occurs. Which is exactly what government and laws are all about.
I assume the two would be discernible from each other in your brain data to a sufficiently advanced AI. One is your visual stream and one is your visual imagination. On top of that would be signatures of intention to deceive, which are also detectable in theory :)
Florence is absolutely not a tourist trap. It is a beautiful city that was one of the wealthiest republics in Renaissance Italy. Great art (Uffizi Gallery is amazing) and architecture (i.e. Duomo) abounds. It is an essential visit in Italy alongside Rome in my opinion, though Rome deserves more time.
I'm from Italy and used to live in Tuscany (mostly Pisa, but also the "mountains" outside Florence). I might have been a bit unfair; "downtown" Florence is probably only 0.75 Venices, my unit of measure for tourist traps. Although Venetians, and especially Florentines, are kinda infamous for always wanting to brag about their glorious past, there's nothing wrong with either city per se; they're just victims of their own success. Rome, by virtue of its scale, is only 0.5 Venices. I tell people that you could walk past all the main sites in three to four hours, but it would be kinda pointless, and you really want to spend five days there.
That just depends on where your focus is. If the safety issue is much more salient, you’re not as prone to recognize the humorous aspect of the situation. If you see the image on a meme page, you’re already primed for comedy.