Yes, Opus could check the image to see if it matched the prompt, but I advised the model to stop and ask the human for a closer check and a description of what might be causing the corrupted image. Still, the fact that it could catch obvious regressions was good.
I've listened to a handful of podcasts with education academics and professionals talking about AI. They invariably come across as totally lost, like a hen inviting a fox in to help watch the eggs.
It's perhaps to be expected, as these education people are usually non-technical. But it's definitely concerning that (once again) a lack of technical and media literacy among these education types will lead to them letting (overall) unhelpful tech swarm the system.
>It's perhaps to be expected, as these education people are usually non-technical.
I don't think that's totally correct. I think it's because AI has come at everyone, equally, all at once. Educational academics didn't have years to study this because it was released on our kids at the same time.
> AI has come at everyone, equally, all at once

is not true. It's obvious that certain people and certain fields are technological laggards or technological early adopters.
Other computing and IT technologies also provided a good training ground for this stuff. LLMs have genuinely interesting new properties, but they also have familiar ones, and they arrive through decade-plus-old methods of distribution.
This stuff is difficult, sure. But we have long set a low bar for education management and the results—declining literacy and math in countries which have become stupidly wealthy—speak for themselves.
Ed tech has been like this for a while. Software companies just fleeced the crap out of our school. Why are we paying for G Suite when we have Office 365? Why am I getting a OneDrive account and a Google Drive account and also a Dropbox account, while the school rolls their own supercomputer? Why are we changing the website where the slides are posted every three years to a new system no one understands for the first semester or three after it's rolled out? LMS software will have 100 features but is just used as a dumping ground for slides and a clunky spreadsheet for grades 99% of the time.
All the administration knows how to do is spend money and try to buy what others are buying, without asking if it would actually be useful. Enterprise sales must be the easiest to land, I swear.
> But it's definitely concerning that (once again) a lack of technical and media literacy among these education types will lead to them letting (overall) unhelpful tech swarm the system.
I hate this kind of framing because it puts the burden on the teachers when the folks we should be scrutinizing are the administrators and other stakeholders responsible for introducing it.
AI companies sell this tech to administrators, who then tell their teachers to adopt it in the classroom. A ton of them are probably getting their orders from a supervisor to use AI in class. But it's so easy to condescend and ignore the conversations that took place among decision-makers long before a teacher introduced it to the classroom.
It's like being angry at doctors for how terrible the insurance system is in the US.
Look up some of Tressie McMillan Cottom's writing, podcast appearances, public lectures, etc etc. She's a MacArthur-certified Genius and a full professor at UNC, and she's a spectacular writer and public intellectual.
She wrote "Lower Ed", about for-profit colleges in America, and has identified places where more elite schools are copying that playbook.
I am pretty close to this because my spouse is a school board member and I do a lot of AI work for my job, and the problems of AI in education are completely intractable for public schools. The educators lack the technical background to use AI effectively, and moreover, they are completely out of the loop in terms of technology decisions, and the technology staff lacks enough knowledge in both education and AI for them to make competent decisions about it.
It’s a recipe for disaster, and you are going to see school systems set money on fire for years trying to do something with AI systems that never get rolled out, or worse, roll out AI systems that tell kids to kill themselves or make revenge porn of their classmates.
School boards' default answer to everything AI-related right now should be “no”.
I think a good question to ask is whether these schools would have paid for CliffsNotes for all their kids. The answer is of course “no,” not only due to the presumed expense but also because CliffsNotes are the easy way out of having to use your brain in English class. AI is no different. Kids are using it as an easy crutch to cram and avoid actually learning to study material, not as the research tool it is pitched as. It is arguably worse than Wikipedia, and somehow everyone in education had such strong feelings about Wikipedia but is rolling out the red carpet for ChatGPT school-wide subscriptions.
Thought this would be a thread about the crazy offers being handed out to join AI teams at Amazon, Meta, Google, OpenAI, Anthropic, Apple, or any of the massively capitalized 'neolabs'.
I wrote this as a first step in exposing our internal GPU reliability management, inspired in large part by SemiAnalysis' focus on industry best practices in the ClusterMax report and Lepton's `gpud`.
NVIDIA GPUs have 172 "Xid" error codes, and the active population grows with each new major driver release. Coming out of 2025, we have a good handle on which Xids (and SXids) are critical and can be automated away.
The interesting next frontier is the Xids that sit ambiguously between hardware issues and application issues. Xid 31, the GPU memory page fault, is the most dreaded code: ~95% of the time it's an application exception, but it's pretty tricky for users to debug and confirm.
Automating Xid 31 handling is my new GPU reliability holy grail.
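For anyone curious what the automatable part looks like in practice, here's a minimal sketch of log-based Xid triage. The `NVRM: Xid (...)` line format is what the NVIDIA driver actually emits to the kernel log; the severity buckets below are illustrative assumptions on my part, not anyone's production policy, so tune them against your own fleet's failure history.

```python
import re
import sys

# Severity buckets are illustrative assumptions; tune them to your fleet.
CRITICAL_XIDS = {48, 63, 64, 79, 94, 95}  # e.g. 79 = "GPU has fallen off the bus"
APP_LEVEL_XIDS = {13, 31, 43, 45}         # e.g. 31 = GPU memory page fault

# Matches the driver's kernel-log format, e.g.:
#   NVRM: Xid (PCI:0000:3b:00): 31, pid=1234, ...
XID_RE = re.compile(r"NVRM: Xid \((?P<dev>[^)]+)\): (?P<xid>\d+)")

def triage(line: str):
    """Classify one kernel-log line; return (device, xid, verdict) or None."""
    m = XID_RE.search(line)
    if m is None:
        return None
    xid = int(m.group("xid"))
    if xid in CRITICAL_XIDS:
        verdict = "critical: drain the node, file a hardware ticket"
    elif xid in APP_LEVEL_XIDS:
        verdict = "likely app fault: route to the job owner with context"
    else:
        verdict = "ambiguous: escalate for human triage"
    return m.group("dev"), xid, verdict

if __name__ == "__main__":
    # Usage: dmesg | python xid_triage.py
    for line in sys.stdin:
        hit = triage(line)
        if hit is not None:
            print(*hit, sep=" | ")
```

The hard part isn't the parsing, it's the middle bucket: for codes like Xid 31 you'd want the automation to attach job context (which process faulted, what it was running) rather than pick a verdict on its own.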
It is a business. Envato was a billion-dollar business in 2017. I agree that AI makes these kinds of businesses vulnerable, but it's overstepping to say that these things aren't businesses.
I never said Tailwind the company wasn't a business. When I said "a product is not a business," I meant that as advice to creators in general, not specifically about Tailwind; of course it is one, it made millions in revenue. What I meant was that even though a business may exist, a long-term, durable business model is not always attainable.
"selling premade software assets" is a business, and it's the business both Tailwind and Envato were in. Both businesses got hit hard by AI. Check out Envato's homepage now. It's unrecognizable from what it was in 2017, and completely genAI oriented.
I think you're just repeating the same point I'm making. The point is they're not good businesses, hence why Envato pivoted and Tailwind soon might need to as well.
You're shifting your argument; first you said it's not a business. Any business can be good or bad depending on the climate and the era. It was a business, and many businesses in the current era of AI will face such challenges. All businesses just need to constantly adapt over time, a.k.a. innovate.
You're misunderstanding what I'm saying. I was not talking about Tailwind Labs not being a business; I am saying that, in general, products are not businesses by default. In that case, my argument is the same as it has been, and I agree with your last three sentences.
This has been posted to HN about a half dozen times now and never taken off, so apologies but this needs an editorialized title.
"Dude comes out of nowhere, posts one of the most impressive tech demos/environments ever seen, refuses to elaborate and leaves"
"This is beyond insane. A lot of work must have went into making the underlying system, but the developer experience looks absolutely stellar. Taking deterministic execution to a whole new level."
Part of the problem of our time is that shared culture has significantly receded. There's little capacity to maintain "classics" as we understand them today. Take any massive artistic output (film, book, TV show) and it's nowadays either not seen/read/heard by more than 20% of the population or it's a flash-in-the-pan hit which will be forgotten in another year or so (e.g. Barbenheimer).
This article ends up seeming like an ad for some dubious derivations of the original novel.
- "Take, for instance, Michael Farris Smith's new novel, Nick. The title refers, of course, to Nick Carraway, the narrator of Gatsby, who here gets his own fully formed backstory."
- "Jane Crowther's newly published novel, Gatsby, updates the plot to the 21st Century, and flips
the genders to feature a female Jay Gatsby and a male Danny Buchanan."
- "And Claire Anderson-Wheeler's The Gatsby Gambit is a murder mystery which invents a younger sister for Fitzgerald's eponymous anti-hero: Greta Gatsby – get it?"
> Jane Crowther's newly published novel, Gatsby, updates the plot to the 21st Century, and flips the genders
I was tired of the idea of gender-flipping a story before the first story was ever gender-flipped, and I'm no less tired of it now.
I suspect that even people who think it's important to perform the rite of gender-flipping a story don't actually like the stories that result. Because it's the same story as before, but now you have the feeling the author is standing in front of you, waiting for an opportunity to remind you that the main character is a woman now and isn't that incredible?
Flip just one character and see if the romance changes in interesting ways, or whether you can use prejudice against the character, or something. That would be at least a little more interesting than just inverting a dynamic.
Well, the target audience is people who do enjoy it. So, not you. Maybe there's a whole dubious narrative about why... or maybe they just have tastes that seem simple and basic to you. Why not?