As a sidenote, Jim Waterson is doing amazing work at London Centric, single-handedly doing the kind of investigative journalism week after week via Substack funding that traditional media has abandoned. I highly recommend subscribing if you are in the London area.
> “Over the years I’ve found that when students read on paper they're more likely to read carefully, and less likely in a pinch to read on their phones or rely on chatbot summaries,” Shirkhani wrote to the News. “This improves the quality of class time by orders of magnitude.”
This is the key part. I'm doing a part-time graduate degree at a major university right now, and it's fascinating to watch the week-to-week pressure AI is putting on the education establishment. When your job as a student is to read case studies and think about them, but Google Drive says "here's an automatic summary of the key points" before you even open the file, it takes a very determined student to ignore that and actually read the material. And if no one reads the original material, the class discussion is a complete waste of time, with everyone bringing up the same trite points, and the whole exercise becomes a facade.
Schools are struggling to figure out how to let students use AI tools to be more productive while still learning how to think. The students (especially undergrads) are incredibly good at doing as little work as possible. And until you get to the end-of-PhD level, there's basically nothing you encounter in your learning journey that ChatGPT can't perfectly summarize and analyze in 1 second, removing the requirement for you to do anything.
This isn't even about AI being "good" or "bad". We still teach children how to add numbers before we give them calculators because it's a useful skill. But now these AI thinking-calculators are injecting themselves into every text box and screen, making them impossible to avoid. If the answer pops up in the sidebar before you even ask the question, what kind of masochist is going to bother learning how to read and think?
I had to take some literature classes in high school, and had a truly exceptional teacher who facilitated great and interesting discussions. Really opened up my mind and I only later realized how lucky I was.
Those summaries have always existed; in the past you could buy them as little books for most of the classic literature we read. The thing is, they contained the same trite points even back then.
Our teacher would see right through any BS, but never call it out directly. Instead there would be one precise, nicely phrased follow-up question, or even just a request for the student's opinion on a talking point. Not details, just a regular discussion question.
If someone hadn't read the book they'd stutter and grasp at straws at that point and everyone knew they hadn't actually read it.
On the other hand if you had read the book the answer was usually pretty easy, and often not what the common summaries contained as talking points.
So cheating not only didn't work, the few regular cheaters we had in our class (everybody knew who those were) actually suffered badly.
Only in hindsight did I realize that this is not the normal experience. From what I've heard from many others, most literature classes do in fact just focus on and repeat the same trite points.
It takes a great teacher to make cheating not "work" while keeping the class easy, intellectually stimulating, and refreshing at the same time.
My experience was the exact opposite. I loved reading as a child. But I learned very fast in school that my "own opinion" on books results in bad grades, while reading and reiterating the "official summary" results in OK or even good grades. Like you say, the summaries existed long before AI. It is what the teacher and students make of the class.
Last weekend I was arguing with a friend that physical guitar pedals are better for creativity and exploring the musical space than modelers. Even though modelers offer way more resources for a fraction of the cost, the physical aspect of knobs, cables, and everything else leads to something far more interactive and prone to "happy mistakes" than any digital interface can offer.
In my first year of college, my calculus teacher said something that stuck with me: "you learn calculus by getting cramps in your wrists." Yes, AI can help you remember things and accelerate learning, but if you don't put in the work to understand things, you'll always be behind people who know, at least from a bird's-eye view, what's happening.
> if you don't put in the work to understand things, you'll always be behind people who know, at least from a bird's-eye view, what's happening.
Depends. You might go quite far without ever opening the hood of a car, even if you drive it every day and depend on it for your livelihood.
If you're the kind who likes to argue for a good laugh, you might say "well, I don't need to know how my car works as long as the engineer who designed it does, or the mechanic who fixes it does". That's accurate, but it's also accurate that not everyone ended up being the engineer or the mechanic. And if it turned out to be extremely valuable for you to learn how the car worked, there's no reason you couldn't put in the effort and be very successful at it.
All this talk about "you should learn something deeply so you can bank on it when you will need it" seems to be a bit of a hoarding disorder.
Given the right materials, support and direction, most smart and motivated people can learn how to get competent at something that they had no clue about in the past.
When it comes to smart and motivated people, the best drop out of education because they find it unproductive and pedantic.
Yes, you can, and I know just enough about cars not to be scammed, though not how the whole engine works. I also don't think you should learn everything you possibly can; there's no time for that, which is why I made the bird's-eye-view comment.
My argument is that when you have at least a basic knowledge of how things work (be it as a musician, a mechanical engineer or a scientist) you are in a much better place to know what you want/need.
That said, smart and motivated people thrive when given the conditions to do so, and I believe physical interfaces have far less friction than digital ones: turning a knob is much less work than clicking through a bunch of menus to set up a slider.
If I were to summarize what I think about AI, it would be something like: "Let it help you. Do not let it think for you."
My issue is not with people using AI as a tool, but with people delegating to AI anything that would demand any kind of effort.
> And if no one reads the original material, the class discussion is a complete waste of time, with everyone bringing up the same trite points, and the whole exercise becomes a facade.
If reading an AI summary of readings is all it takes to make an exercise a facade, then the exercise was bad to begin with.
AI is certainly putting pressure on professors to develop better curricula and evaluations, and they don’t get enough support for this, imho.
That said, good instruction and evaluation techniques are not some dark art — they can be developed, implemented, and maintained with a modest amount of effort.
Kinda sad, because I had thought of a great, practical use of Cowork to demo to the company today, to show how useful it could be across a range of user types. Such is life.
Many offices (public sector, commercial, etc) are full of people who write reports that no one reads.
With AI, there is a strong incentive for the employee to churn out the report quickly and take Friday off, since no one is going to read it anyway. Maybe they still skim the report and make sure it looks reasonable, but they don't catch the smaller details that are wrong, and more importantly, they don't think deeply or critically about the issue at hand since the report looks reasonable at a glance.
"Show me the incentive, and I'll show you the outcome" - Charlie Munger
My guess is that reporting processes will change and adapt once everyone realizes how useless the old system is in an AI world. A long report used to signal "we thought about this, so rest easy". Now, a long report doesn't signal that. It no longer serves the function it once did.
I've used GIMP and Photoshop for a very long time (close to 30 years).
In ~1998, GIMP was not quite as good as Photoshop 5 and was more awkward to use, but you could see how it could close the gap. It had impressive underlying tech that could handle large images on the computers of the time. There was an expansive library of weird and neat plug-ins and scripts. It felt like we were at the start of a great shift in which OSS would "catch up" and eventually replace desktop power tools, just as Linux had done with web servers. It was... the year of Linux on the Desktop!
By ~2005, GIMP was starting to really catch up to where Photoshop was in 1998, but Photoshop had added lots of quality-of-life features like adjustment layers and layer effects, way better text rendering, and amazing new features like spot healing brushes, vanishing point warping, etc. The gap was widening. But GIMP still did all the core stuff, and Photoshop was annoying users by shoving Adobe Bridge down their throats, etc. So people were still hopeful for a replacement.
By ~2012, GIMP was adding... an awkward single-window mode? It lacked tons of by-now-basic features, which made it totally impractical for professional use. Photoshop, meanwhile, was adding amazing time-saving features like Content-Aware Patch and Move that seemed "magic" at the time. The tech gap was widening, but Adobe was also pushing subscriptions down users' throats, which was very unpopular, so GIMP still had a chance to make a comeback.
By ~2018, GIMP was finally adding... basic CMYK support for printing, something literally no one uses GIMP for professionally, and a dying need anyway? Meanwhile, Photoshop was demoing an AI object selection tool that could magically select objects without needing to trace them, which shipped in 2019. Using GIMP felt like using software from a decade earlier.
The last 5 years have been the worst for GIMP. Photoshop has been improving at an astonishing rate. Now it's literally what photo editing looked like in 90's movies - you just open an image, click "select object" and it perfectly selects it, and lets you move/drag/add elements with AI, etc. You can do edits in seconds now that used to take hours, and the results are really good.
None of this is a complaint about GIMP or all the people who contributed to it. It's impossible for a few volunteers to compete with infinite money and hundreds of full-time employees. But Photoshop and GIMP are no longer in the same league. And Adobe knows this, which is why it can get away with punitive subscription-only pricing.
This guy's other video where he covers Clowncore's 'Computers' on computers is one of the most impressive, incredibly niche things I've ever seen on YouTube. He's a serious talent.
The most interesting part of this video to me is the editing. It feels like someone just watched Breathless by Godard and really wanted to make a French New Wave movie, but had a day job making films for IBM.