I would imagine U.S. foreign policy, particularly the prolific use of sanctions, contributes to this wane as well. There was some discussion about this a while ago - effectively, as the U.S. continues to rely on a strategy of imposing sanctions against foreign adversaries, those adversaries increasingly reorient their economies towards non-U.S. economies such as Russia and China. The more the U.S. utilizes sanctions, the less effective they become.
You're still captive to a product. Which means that when CloudCo. increases their monthly GenAI price from $50/mo. to $500/mo., you either lose your service or you pay up. By participating in the build process you're giving yourself a fighting chance.
I will quickly forget the details about any given code base within a few months anyway. Having used AI to build a project at least leaves me with very concise and actionable documentation and, as the prompter, I will have a deep understanding of the high-level vision, requirements and functionality.
There's a subset of sleepaway camps for adults that are quite popular for musicians - they typically consist of sleeping in cabins at night and attending lessons and workshops throughout the day. I went to one a few years ago, and even though I was a full standard deviation off from the mean in terms of age I still had a good time. This seems like a cool idea too.
Probably depends on BO/stakeholder as well. B2B solution that has a low risk of killing anyone? Maybe fuck it, let the model have its way.
Software that keeps people alive, controls infrastructure, etc.? Uhhhh, I don't think so. I guess we're just waiting for the first news story of someone's pacemaker going haywire and shocking them to death because the monitoring code was vibed through to production.
B2B AI LLM vibe-SaaS that has a 10% chance of becoming profitable and a 10% chance of burning every dollar ever invested in the business while leaving the founders on the receiving end of 100 lawsuits.
> Isn't the sector for software that is life-critical really small?
I think it's large. Think about the software that goes into something like air travel - ATC, weather tracking, the actual control software for the aircraft... I'm aware that nothing is perfect, but I'd at least like to know that those things were written by a person who can be held accountable.
Well, I'd probably consider myself a materialist, but I'm not sure I'd agree. The evidence to me suggests that improvement can really only come from two places: additional compute or new breakthroughs in AI learning. Compute's coming, certainly, but it only has the potential to improve things if it's added in conjunction with a commensurate AI breakthrough. I think the trend in improvement for transformers could be logistic, not exponential, as a lot of the snake-oil salesmen like to claim. And while there's plenty of evidence for compute, there isn't much for the AI breakthrough that leads to an exponential jump - and if it does exist, it's a trade secret, so until we know, we don't know.
Okay so, I gave this a shot last week while studying for one of my finals for grad school. I fed it the course study guide and had it prompt me. I got the sense that it wasn't doing anything remarkable under the hood, that it was mostly system prompt engineering at the end of the day. I studied with it for about an hour and a half, having it feed me practice questions and flashcards. I believe that it really only pushed back on me on one answer, which made me feel like I had the thing in the bag. My actual result on the final was fairly bad - which was irritating, because I went in feeling probably a bit better than I should have. I don't know if I can lay that corpse at OpenAI's feet, but regardless I don't think there's enough there for me to keep using it. I could just write my own system prompt if I liked.
The way I think about this is that you split the money-making opportunities into two pools: one is rent-seeking/grifting/outright-scamming/exploiting the greater-fool fallacy, the other is learning some sort of skill or trade and building a career on it. At some point the perceived payoff of the first pool eclipsed the second, and now that's sort of where we're at.
Certain characters love to say things like "no one wants to work anymore." I think the rise of this scamminess in our culture actually flies in the face of that; people will work insanely hard at whatever their thing is, be it an MLM or a crypto-grift. But they work hard because _they think that's where they can get the most value._ What's the value in going to school for 4, 6, 8, or 10 years when you can make it big in the next big thing?
I'll defend that variable selection a little bit, as I feel that the measure they use to capture 'despair' is actually binary in reality. I'd place a handful of young men around my age in this category. What they seem to have in common is a consistent downtrodden-ness that doesn't fluctuate much from day to day; it's pervasive to their entire personality, it's who they are.
I imagine if you studied this with a less discrete, non-binomial method you'd see even sharper trends. I don't know a single person my age who feels the future has anything for them.