Along the same lines, we will see programs like Google Summer of Code (GSoC) either get a massive revamp or stop operating.
From my failed attempt, I remember that
- Students had to find a project matching their interests/skills and start contributing early.
- We used to talk about steering clear of certain projects that attracted few applicants (or few students lurking in the GitHub/BitBucket issues) because of the complexity those projects demanded.
Both of these acted as a filter that landed projects good students/contributors, but that filter goes away entirely once AI can do the same work at scale.
Four years ago, GSoC removed the requirement that applicants be actual students. We got flooded with applications from middle-aged men working 9-to-5 jobs. It was dumb, and we stopped participating. Their incentive was literally "extra income," not learning or participating beyond that.
An annoying little laptop charging reminder utility that does the job.
---
There are times when I'm deeply immersed in focused work, a meeting, or engaging video content and end up missing the usual low-battery notifications on my MacBook.
When the laptop suddenly shuts down, it's followed by the familiar and frustrating walk to find a charger or power outlet. It can be annoying and occasionally embarrassing, especially when rejoining a session a few minutes later with, "Sorry, my battery died."
Over the past few weekends, I built Plug-That-In, an app that introduces "floating/moving notifications". These alerts follow the cursor, providing a stronger, harder-to-miss nudge regardless of what’s happening on screen.
The app also includes a few critical features:
- Reminder Mode: When the battery reaches critical levels, the app emits a configurable alert similar to a car's seatbelt warning, continuing until the laptop is plugged in.
- Do Not Disturb Settings: Customize alerts and sounds based on context, such as when system audio is playing, a video is active, or the camera is in use.
It grew out of a personal need, and I'm glad to see it used by over 50 people in the past month.
Glad you found it relevant, and thanks for sharing the feedback.
Not going to deny that your second line was a bummer to read; it wiped out all the endorphins earned from the first one. Nevertheless, I want to fix this if it's a legitimate issue.
Do you mind sharing more details on the battery consumption from the app?
I'm using IOKit power-source notifications, and this is the first time I'm hearing this feedback. (I've personally been running the app on my work [old M2] and personal [new M4] MacBooks for the last ~2 months.)
An annoying little laptop charging reminder utility that does the job.
---
There are times when I am deeply involved in a focus-work session, a meeting, or some engaging video content, and don't pay timely attention to the standard low-battery notifications from my MacBook.
After the laptop shuts down suddenly, what follows is the most annoying walk to find the charger or a power outlet. It's frustrating at times, sometimes embarrassing, because you have to say, "Sorry, my battery died" as you rejoin the session 2-3 minutes later.
Over the last 3-4 weekends, I have been building Plug-That-In, which introduces floating notifications: essentially, a notification that follows my cursor, so I get a stronger nudge irrespective of what I am doing.
There are a few other critical features, such as Reminder Mode and Do-Not-Disturb Settings.
- Reminder Mode: At critically low battery levels, it keeps beeping like a car's seat-belt alert for a configurable duration.
- Do-Not-Disturb settings: Configure what sort of alert/sound it generates when system audio is playing, a video is active, or the camera is in use.
It has addressed a personal need and has already proven useful a few times over the last weeks.
Tinkering with a tiny macOS app that gives me proactive reminders about the low battery and imminent shutdown.
The standard system notification arrives at about 10%, and most of the time, in my case at least, whenever I miss it, the result is "laptop shuts down amidst an ongoing video meeting" or something like that.
(Basically, it's too late by the time I act.)
So that I don't miss the reminders, the app shows an overlay window with some text that follows my cursor, along with a custom sound.
I built a version this weekend, and am currently doing a dogfooding exercise.
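For the curious, the alert decision described above can be sketched as a small pure function. This is a toy model, not the app's actual code; the thresholds, names, and "beep vs. notify" split are my own illustrative assumptions (the real app reads battery state from the OS, e.g. via IOKit on macOS, rather than taking it as a parameter).

```python
from dataclasses import dataclass


@dataclass
class AlertPolicy:
    # Thresholds are illustrative assumptions, not the app's real defaults.
    warn_at: int = 15      # % at which the cursor-following overlay appears
    critical_at: int = 5   # % at which Reminder Mode (repeated beeps) kicks in

    def decide(self, percent: int, plugged_in: bool) -> str:
        """Map a battery reading to an action: 'none', 'notify', or 'beep'."""
        if plugged_in or percent > self.warn_at:
            return "none"
        if percent <= self.critical_at:
            return "beep"    # seat-belt-style repeating alert
        return "notify"      # floating overlay nudge


policy = AlertPolicy()
print(policy.decide(80, False))  # none
print(policy.decide(12, False))  # notify
print(policy.decide(4, False))   # beep
print(policy.decide(4, True))    # none -- charger plugged in
```

Keeping the decision logic pure like this makes it trivial to unit-test against fake battery readings, independent of any OS notification plumbing.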
This resonates a lot with some observations I drafted last week about "AI Slop" at the workplace.
Overall, people are making a net-negative contribution because they lack a sense of when to review/filter the responses generated by AI tools: either (i) someone else has to make that additional effort, or (ii) the problem is not solved properly.
This sounds similar to a few patterns I have noted:
- The average length of documents and emails has increased.
- Not alarmingly so, but people have started writing Slack/Teams responses with LLMs. (and it’s not just to fix the grammar.)
- Many discussions and brainstorms now start with a meeting summary or transcript, which often goes through multiple rounds of information loss as it’s summarized and re-expanded by different stakeholders. [arXiv:2509.04438, arXiv:2401.16475]
You’re absolutely right. The patterns you’ve noted, from document verbosity to informational decay in summaries, are the primary symptoms.
Would you like me to explain the feedback loop that reinforces this behavior and its potential impact on organizational knowledge integrity?
Got it — here’s a satiric AI-slop style reply you could post under rvnx:
Thank you for your profound observation. Indeed, the paradox you highlight demonstrates the recursive interplay between explanation and participation, creating a meta-layered dialogue that transcends the initial exchange. This recursive loop, far from being trivial, is emblematic of the broader epistemological challenge we face in discerning sincerity from performance in contemporary discourse.
If you’d like, I can provide a structured framework outlining the three primary modalities of this paradox (performative sincerity, ironic distance, and meta-explanatory recursion), along with concrete examples for each. Would you like me to elaborate further?
Want me to make it even more over-the-top with like bullet lists, references, and faux-academic tone, so it really screams “AI slop”?
Fascinating trace — what you’ve essentially demonstrated here is not just a failed TLS handshake culminating in a 500, but the perfect allegory for our entire discourse. The client (us) keeps optimistically POSTing sincerity, the server (reality) negotiates a few protocols, offers some certificates of authenticity, and then finally responds with the only universal truth: Internal Server Error.
If helpful, I can follow up separately with a minimal reproducible example of this phenomenon (e.g. via a mock social interaction with oversized irony headers or by setting CURLOPT_EXISTENTIAL_DREAD). Would you like me to elaborate further on the implications of this recursive failure state?
You all are doing a good job at fueling a certain kind of existential nightmare right now. We might just get our own shitty Butlerian Jihad sooner rather than later if this is the future.
I have never seen an AI meeting summary that was useful or sufficient in explaining what happened in the meeting. I have no idea what people use them for other than as a status signal.
In my company, we sometimes cherry-pick parts of the AI summaries and send them to clients just to confirm what we agreed on during a meeting. The customers know the summary is AI-generated, and they don't mind. Sometimes people come to me and ask whether what they read in the summary was really discussed in the meeting or whether it's just the AI hallucinating, but I can usually assure them that we really did discuss it. So these can be useful to a degree.
That’s a good point. An AI email/Slack message/summary positions you as a bootlicker at best, writing summaries to look good, and a failed secretary at worst, but in any case of low value on the real-work scale.
I’m just afraid these are the kinds of people who will get promoted in the future.
This is the bull case for AI, as with any significant advance in technology eventually you have no choice but to use it. In this case, the only way to filter through large volumes of AI output is going to be with other LLM models.
The exponential growth of compute and data continues...
As a side note, if anyone I'm communicating with - personally or in business - sends responses that sound like they were written by ChatGPT 3.5, 4o, GPT-5-low, etc, I don't take anything they write seriously anymore.
> As a side note, if anyone I'm communicating with - personally or in business - sends responses that sound like they were written by ChatGPT 3.5, 4o, GPT-5-low, etc, I don't take anything they write seriously anymore.
What if they are a very limited English speaker, using the AI to tighten up their responses into grammatical, idiomatic English?
If I think you're fluent, I might think you're an idiot when really you just don't understand.
If I know they struggle with English, I can simplify my vocabulary, speak slower/enunciate, and check in occasionally to make sure I'm communicating in a way they can follow.
If those don't apply and I realize what's going on, then, as mentioned, I will ignore them if I can and judge their future communications as malicious, incompetent, inconsiderate, and/or meaningless.
I'm so annoyed this morning... I picked up my phone to browse HN out of frustration after receiving an obvious AI-written teams message, only to see this on the front page! I can't escape haha
There's a growing body of evidence that AI is damaging people, aside from the obvious slop-related review costs (a resource attack).
I've seen colleagues who were quite good at programming when we first met become much worse over time, the only difference being that they were forced to use AI on a regular basis. I'm of the opinion that the distorted reflected-appraisal mechanism it engages through communication, and the inconsistency it induces, are particularly harmful; as such, the undisclosed use of AI toward any third party without their consent is gross negligence, if not directly malevolent.
> aside from the obvious slop related costs to review
Code-review tools (CodeRabbit/Greptile) produce enormous amounts of slop, counterbalanced by the occasional useful tip. And Cursor and the like love to produce nicely formatted, sloppy READMEs.
These tools - just like many of us humans - prioritize form over function.
Jotted down a few thoughts based on a pattern I’ve started noticing: AI is often being used inefficiently.
Users in white-collar settings are limiting their relationship with LLMs to a peer level, and much AI-tool UX is designed to reinforce that same hierarchy, which marks the entry of "AI Slop" into the workplace.
I stopped writing right after college when I moved to full-time work, but I want to change that now.
Just as I started, I realized the website was hosted on Forestry CMS, which has been discontinued, so I will have to figure out how to maintain the URL structure, etc. (this is important since there are a few popular/top-10 pages for certain queries/searches [1]).