I don't know. The company I work at is inviting candidates for interviews, and we have to make compromises because we can't get the exact profiles we are looking for. Something about your comment does not add up to me.
Locality. People want to work close to where they live and not all places are bustling with all kind of activity. I suspect you're hybrid or on site only, right?
not GP, but we're hybrid (though remote-first, 80% remote) and we have the same experience. Getting juniors is easy; getting seniors+ is very difficult.
The model I am mentioning matches with this. Speaking from my own personal experience as well, when you're junior and young, you can move anywhere, especially if you're ambitious. As you gain experience, you also settle down a bit in your life, you have a wife, kids, a house. Their jobs and schools. Moving then is a _big deal_.
Of course, there are other factors that make juniors more abundant on the current job market, namely, most companies don't want them.
That absolutely makes sense, but I'm not sure it is the reason. I mentioned we're remote first: we hire _everywhere_. I've been with this company for 7 years, and haven't traveled to HQ even once, and have worked from home or a spot of my choosing (but honestly, that spot is almost always home!) every day, that's how remote first we are - nobody has to uproot their life to work with us.
But it's still extremely hard to find senior+. I'm sure our tech stack plays a role, and naturally senior developers are much less common than juniors. But whenever I hear about the job market being super hard, I feel like I'm living in a parallel universe.
AI is not replacing anyone from my perspective, but AI might become our only hope at some point, because we're growing aggressively. I have to keep mediocre people because I can't even replace at that level easily - the only ones I'm pruning are the ones that are net-negative contributors.
Ah, sorry, I misunderstood your original post then. I interpreted "hybrid, remote first" as: you can be remote most days, but you _need_ to be in the office a couple of days. This just goes to show that the hybrid model has _a lot_ of variants.
Back to the point: I think I'm pretty senior, mostly embedded SW. Thankfully I still have work, but the job market seems to have cratered. I have friends who are pretty good and have been looking for jobs for about half a year now.
I'm incredibly curious now what your tech stack is, and how you guys view people looking to switch tech stacks.
We're very boring, our stack is PHP/postgres/mysql. A lot of Symfony, a lot of Symfony-style-code on top of Wordpress (mentioning that usually puts people off but it's all PHP in the end, and you can choose to write clean code on either).
Lots of people see PHP in general as a dead end career-wise and WP specifically as almost an insult, so there aren't many that advanced their skills and have continued to work with PHP (or Wordpress, but I believe that an experienced PHP developer has no trouble picking up WP).
We're generally very neutral on how someone arrived where they are, we don't require certificates or degrees, we focus on experience and skills. I wouldn't hire someone who isn't experienced with at least one side of our stack though (unless they're extremely good) because it takes time from other developers to upskill them and that's the one resource we don't have.
I won't disclose where I work though as that would dox myself and I much prefer anonymity.
There is already the EPC QR code, which contains all the data required to initiate a SEPA credit transfer. This code is supported by practically all banking apps (at least in Germany). The standard is public and free (see https://en.wikipedia.org/wiki/EPC_QR_code)
The merchant's system displays this code, you open your online banking app, scan the code, select "SEPA INST" (here's the usability catch!) to make the payment instantaneous, and confirm. Within 10 seconds, the money is transferred to the merchant's account.
Either the merchant's bank or a third-party Open Banking API immediately informs the merchant's system (e.g. by push notification or webhook), and a receipt is issued.
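For the curious, the payload behind such a QR code is just a few lines of plain text in a fixed field order (see the Wikipedia article above). A minimal sketch of building one - the shop name, IBAN, and amounts here are made up for illustration:

```python
# Sketch of an EPC QR ("BCD") payload for a SEPA Credit Transfer.
# Field order follows the EPC guideline; values below are illustrative.
def epc_payload(name: str, iban: str, amount_eur: str, remittance: str = "") -> str:
    lines = [
        "BCD",               # service tag
        "002",               # version (BIC is optional since version 002)
        "1",                 # character set: 1 = UTF-8
        "SCT",               # identification: SEPA Credit Transfer
        "",                  # BIC (may be left empty in version 002)
        name,                # beneficiary name
        iban,                # beneficiary IBAN
        f"EUR{amount_eur}",  # amount, e.g. EUR12.50
        "",                  # purpose code (optional)
        "",                  # structured remittance reference (optional)
        remittance,          # unstructured remittance text (optional)
    ]
    return "\n".join(lines)

print(epc_payload("Example Shop GmbH", "DE89370400440532013000", "12.50", "Order 1234"))
```

Feed that string into any QR encoder and a compliant banking app will pre-fill the transfer form from it; the user only has to pick SEPA INST and confirm.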
Everything is already here, but since this system would be virtually free to use, nobody really has an incentive to push it. It costs money to educate the public, and there is no money to be made. Instead, everyone gets paid handsomely by the card mafia.
In general I'm all for free and European systems, but SEPA payments imo still have pain points:
- you can send money to companies and individuals alike. It's easier to trick people into fake shop payments; a card payment provider requires at least a bit of verification/registration
- it's really hard to dispute/call back SEPA payments. The card companies often step in there afaik
The name of the recipient is displayed, and since last October it is also verified against the owner of the receiving bank account. The bank explicitly warns you if they differ. Also, you can't open a bank account anonymously, there is KYC.
You can't dispute or call back SEPA INST payments. But you can't dispute cash payments either. This is just fine for most day-to-day transactions, I don't need insurance when I buy groceries or pay the taxi driver.
This. Even online, most transactions I do are with known reputable businesses subject to strict consumer protection laws. I have never in my life had to do a credit card chargeback.
The root of this evil is the deal the card companies made with the EU some 10 years ago: a cap on interchange fees in exchange for a ban on card surcharges.
If the card processing fees could be added to the customer's bill, it would be in the customer's interest to support a cheaper/free alternative. But since card payments are "free" in the eyes of the consumer, why should they use anything but the most convenient option? And what is more convenient than just touching your card/phone to the terminal?
As long as this deal stands, EU merchants will be slaves to the card companies.
> The important thing is not what merchants want, but what customers want.
What many people in Germany want is a payment system that is as anonymous, and as hard for some untrusted entity (both government and banks are deeply distrusted) to control, as possible. That's basically cash.
Not without reason, in Germany there exists the well-known phrase "Bargeld ist gelebte Freiheit" ("cash is lived freedom").
Agreed. Customers benefit either from paying in cash - for the reasons you described - or from paying with cards, for fraud protection and the ability to make purchases online.
Any other payment method will not give customers any benefits over those methods. Unless banks are willing to take responsibility for fraud like with card purchases.
Well the cap is only on the interchange fee, there are several other fees to add to it... example: https://www.adyen.com/pricing
Processing a Mastercard card is "$0.13 + 'Interchange+' + 0.60%", where the "Interchange+" would be 0.30% for the EU. So more like €0.10 + 0.90%; for a €100.00 product, that's €1.00 in fees (1.00%). Much less than here in the US, but still not negligible for small businesses that run on thin margins (and 20% VAT).
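A quick sketch of that fee arithmetic, using the rough €0.10 fixed + 0.90% figures from above (illustrative ballpark numbers, not official Adyen rates). Note how the fixed component makes small tickets proportionally much more expensive:

```python
# "Interchange++"-style pricing: fixed per-transaction fee plus a
# percentage (interchange + scheme fee + acquirer markup combined).
# The 0.10 EUR / 0.90% figures are the rough ones quoted above.
def card_fee(amount_eur: float, fixed: float = 0.10, pct: float = 0.009) -> float:
    return round(fixed + amount_eur * pct, 2)

print(card_fee(100.00))  # 0.10 + 0.90 = 1.00 EUR, i.e. 1.00% of the sale
print(card_fee(10.00))   # 0.10 + 0.09 = 0.19 EUR, i.e. 1.9% - the fixed fee dominates small tickets
```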
And like good management, the solution is to define clear domain boundaries, quality requirements, and a process that enables iterative improvement both within and across domains.
like the other comment says, OpenAI can force itself onto the massive index funds like VOO/FXAIX etc. to make retail folks provide a liquidity exit for OpenAI investors.
So basically, Amazon is buying into the IPO at an early price. Maybe this is the time to divest from MSCI world. I don’t want to be the bag holder in the world’s largest pump and dump.
Both can be true at the same time: that AI is going to disrupt our world, and that OpenAI does not have a business model that supports its valuation.
yea, proving my point that index funds are maybe not the safest place if you want to invest in real value. And soon, Twitter/Grok/SpaceX might be doing an IPO
It's this kind of dynamic that makes me pull back on my otherwise pretty AI-forward stance. There's an entire community of people who passionately believe it's obvious and undeniable that Elon Musk has solved problems that he has not solved and his companies deliver things they don't deliver. Tesla is absolutely unambiguous in their marketing material (https://www.tesla.com/fsd) that they do not have autonomous driving, but you're far from the first person I've encountered who's been tricked into believing otherwise.
I don't think that's my relationship with AI, I'm hardly an uncritical booster. But would I know if it was?
You definitely would not know if you were; no one does. I think the healthy position is to assume you are wrong and try to find evidence of why you are wrong.
For instance, I'm very skeptical of AI and, from experience, do not think the current models are worth the cost, but I'm always on HN trying to find arguments/people that use AI successfully, to prove that I'm wrong.
I know multiple developers paying thousands a month for AI tooling.
I don't need to convince you it's worth it for you, but it's easy to see that other people have found a way to make it worth it for themselves. I would definitely not spend as much as I personally do if it wasn't worth it to me.
Did it ever occur to you that an entire generation of developers is going to retire in less than 20 years? They are betting that the software industry will be autonomous. Really, think of our industry like the autonomous-vehicle phenomenon: we're the drivers that are about to be shown the door. That's the bet.
The world will still need software, lots of it. Their valuation is based on an entirely developer-less future world (no labor costs).
Even the rise of high-level languages did not lead to a "developer-less future". What it did was improve productivity and make software cheaper by orders of magnitude; but compiler vendors did not benefit all that much from the shift.
OpenAI has all the name recognition (which is worth a couple billion in itself), but when it comes to actual business use cases in the here and now Anthropic seems ahead. Even more so if we are talking about software dev. But they are valued at less than half of OpenAI's valuation
What is somewhat justifying OpenAI's valuation is that they are still trying for AGI. They are not just working on models that work here and now, they are still approaching "simulating worlds" from all kinds of angles (vision, image generation, video generation, world generation), presumably in hopes that this will at some point coalesce in a model with much better understanding of our world and its agency in it. If this comes to pass OpenAI's value is near unlimited. If it doesn't, its value is at best half what it is today
> What is somewhat justifying OpenAI's valuation is that they are still trying for AGI.
And that's the dealbreaker for me, since they've been so adamant that scaling will take them there, while we're all seeing how it's been diminishing returns for a while.
I was worried a few years back with the overwhelming buzz, but my 2017 blog post is still holding strong. To be fair, it did point to ASI, where valuation is indeed unlimited; nowadays the definition of AGI is quite weakened in comparison... but does that then justify an unlimited valuation?
Obligatory reminder that today's so-called "AGI" has trouble figuring out whether I should walk or drive to the car wash in order to get my dirty car washed. It has to think through the scenario step by step, whereas any human instantly groks the right answer.
The idea/hope is that a video model would answer the car wash problem correctly. These are exactly the kinds of issues you have to solve to avoid teleporting objects around in a video, so whenever we manage more than a couple of seconds of coherent video, we will have something that understands the real world much better than text-based models do. Then we "just" have to somehow make a combined model that has this kind of understanding and can write text and make tool calls.
Yes, this is kind of like Tesla promising full self driving in 2016
I just don't know how to engage with these criticisms anymore. Do you not see how increasingly convoluted the "simple question LLMs can't answer" bar has gotten since 2022? Do the human beings you know not have occasional brain farts where they recommend dumb things that don't make much sense?
I should note for epistemic honesty that I expected I would be able to come up with an example of a mistake I made recently that was clearly equally dumb, and now I don't have a response to offer because I can't actually come up with that example.
That problem went viral weeks ago, so it is no longer a valid test. At the time it was consistently tripping up all the SOTA models at least 50% of the time (you also have to use a sample size > 1, given the huge variation between attempts even with the exact same wording).
The large hosted model providers always "fix" these issues as best they can after they become popular. It's a consistent pattern, repeated many times now, and they benefit from this exact scenario of the issue being seemingly "debunked" well after the fact. Often the original behavior can be reproduced by moving the wording/numbers/etc. sufficiently far from the original prompt.
For example, I just asked ChatGPT "The boat wash is 50 meters down the street. Should I drive, sail, or walk there to get my yacht detailed?" and it recommended walking. I'm sure with a tiny bit more effort, OpenAI could patch it to the point where it's a lot harder to confuse with this specific flavor of problem, but it doesn't alter the overall shape.
This question is obviously ambiguous. The context here on HN includes "questions LLMs are stupid about, I mention boat wash, clearly you should take the boat to the boat wash."
But this question posed to humans is plenty ambiguous because it doesn't specify whether you need to get to the boat or not, and whether or not the boat is at the wash already. ChatGPT Free Tier handles the ambiguity, note the finishing remark:
"If the boat wash is 50 meters down the street…
Drive? By the time you start the engine, you’re already there.
Sail? Unless there’s a canal running down your street, that’s going to be a very short and very awkward voyage.
Walk? You’ll be there in about 40 seconds.
The obvious winner is walk — unless this is a trick question and your yacht is currently parked in your living room.
If your yacht is already in the water and the wash is dock-accessible, then you’d idle it over. But if you’re just going there to arrange detailing, definitely walk."
You can make the argument that the boat variant is ambiguous (though it's a stretch), but it's really not relevant, since the point was to reveal that the underlying failure mode is unchanged, just concealed now.
The original car question is not ambiguous at all. And the specific responses to the car question weren't even concerned with ambiguity; the logic was borderline LLM psychosis in some examples, like you'd see in GPT 3.5, but papered over by the well-spoken "intelligence" of a modern SOTA model.
I don't understand what occasional hiccups prove. The models can pass college acceptance tests in advanced educational topics better than 99% of the human population, and because they occasionally have a shortcoming, they're somehow worse than humans? Those edge cases are quickly going from 1% to 0.01% too...
"any human can instantly grok the right answer."
When asking a human about general world knowledge, they don't have the generality to give good answers for 90% of it. Even on very basic questions like this, humans will trip up far more often than the frontier LLMs do.
> If this comes to pass OpenAI's value is near unlimited.
How?
If we have AGI, we have a scenario where human knowledge-based value creation as we know it is suddenly worthless. It's not a stretch to imagine that human labor-based value creation wouldn't be far behind. Altman himself has said that it would break capitalism.
This isn't a value proposition for a business, it's an end-of-value proposition for society. The only people who find real value in that are people who spend far too much time online doing things like arguing about Roko's Basilisk - which is just Pascal's Wager with GPUs - and people who are so wealthy that they've become disconnected from real-world consequences.
The only reason anyone sees value in this is because the second group of people think it'll serve their self-concept as the best and brightest humanity has ever had to offer. They're confusing ego with ability to create economic value.
"End of human-based value creation" is tantamount to post-scarcity. It "breaks" capitalism because it supposedly obviates the resource allocation problem that the free-market economy is the answer to. It's what Karl Marx actually pointed to as his utopian "fully realized communism". Most people would think of that as a pipe dream, but if you actually think it's viable, why wouldn't you want it?
a) AI is going to replace a Bazillion-Dollar Industry and that
b) being an AI model provider does not allow one to capture margins above 5% long-term
I am not saying that this is what will happen, but it's a plausible scenario. Without farmers we would all be dead, but that does not mean they capture monopoly rents on their assets.