Then don't sit still for 30 minutes - try doing it for 3 minutes at first. If you feel like it, repeat the next day with either the same or a longer length. Or don't do it at all. If you do, think of it as a kind of meditation, without the extra steps. Some isolation from sensory stimulation is good for your brain - there is growing evidence we are all overexposed to the attention-robbing mechanisms of the digital world.
Not looking to dismiss the author's long tenure at a major tech company like Google, but the first point kind of stuck out like a sore thumb. If the Google culture was at all obsessed with helping users, I wonder why Google UX always sucked so much and, particularly in recent years, seems to be getting even worse. Every single one of their services is a pain to use, with unnecessary steps and clicks - basically everything you are trying to do needs a click of some sort. Recently I was writing an e-mail and noticed I had misspelled the recipient's address, which I rarely do. So I should just be able to click the address and edit it quickly, right? Wrong - now you get a popup menu, and inside it you have to search for the "edit e-mail" option. Most of the rest of his lessons, while valuable in their own right, are not something I would put under the headline of "after X years at <insert-major-tech-company>", as they do not seem that different from lessons you pick up at other companies. I'd be more interested to hear about how the culture was impacted when the bean-counters took over and started enshittifying the company for both the users and the employees.
> If the Google culture was at all obsessed with helping users, I wonder why Google UX always sucked so much and, particularly in recent years, seems to be getting even worse.
There was no bean-counter takeover, and it never was so obsessed. I worked there from 2006-2014 in engineering roles and found this statement particularly jarring: "User obsession means spending time in support tickets, talking to users, watching users struggle, asking “why” until you hit bedrock"
When I worked on user-facing stuff (Maps, Gmail, Accounts) I regularly read the public user support forums and ticket queues looking for complaints; sometimes I even took part in user threads to get more information. What I learned was:
• Almost nobody else in engineering did this.
• I was considered weird for doing it.
• It was viewed negatively by managers and promo committees.
• An engineer talking directly to users was considered especially weird and problematic.
• The products did always have serious bugs that had escaped QA and monitoring.
In theory there were staff paid to monitor these forums, but in practice the eng managers paid little attention to them - think "user voice" reports once a quarter, that sort of thing. Partly that's because they weren't technical and often struggled to work out whether a user complaint was just noise or due to a genuine bug in the product, something often obvious to an engineer, so stuff didn't get escalated properly.
This general disconnection from the outside world was pervasive. When I joined the abuse team in 2010 I was surprised to discover that despite it having existed for many years, only one engineer was bothering to read spammer forums where they talked to each other, and he was also brand new to the team. He gave me his logins and we quickly discovered spammers had found bugs in the accounts web servers they were using to blow past the antispam controls, without this being visible from any monitoring on our side. We learned many other useful things by doing this kind of "abuser research". But it was, again, very unusual. The team until that point had been dominated by ML-heads who just wanted to use it as a testing ground for model training.
Every previous job I've had has a similar pattern. The engineer is not supposed to engage directly with the customer.
I think there are multiple reasons for this, but they mostly overlap with preserving internal power structures.
PMs don't want anecdotal user evidence that their vision of the product is incomplete.
Engineering managers don't want user feedback to undermine perception of quality and derail "impactful" work that's already planned.
Customer relations (or the support team, user study, whatever team actually should listen to the user directly) doesn't want you doing their job better than they can (with your intimate engineering and product knowledge). And they don't want you to undermine the "themes" or "sentiment" that they present to leadership.
Legal doesn't want you admitting publicly that there could be any flaw in the product.
Edit: I should add that this happens even internally, for internal products. You, as a customer, are not allowed to talk to an engineer on the internal product. You have to file a bug report or a form and wait for their PMs to review and prioritize. It does keep you from disturbing their engineers, but this kind of process only exists on products that have a history of a high incoming bug rate.
Engineers have a perception that most other roles are lesser and if only they were allowed to be in charge things would go better. I certainly used to be this way. When I was an engineer I used to regularly engage directly with customers, and it was great to be able to talk with them one to one, address their specific issues and feel I was making a difference, particularly on a large product with many customers where you do not normally get to hear from customers much. Of course once these customers had my ear, the feature requests started to flow thick and fast, and I ended up spending way too much time on their specific issues. Which is just to say that I've changed my views over time.
In retrospect, the customers I helped were ones that had the most interesting problems to me, that I knew I could solve, but they were usually not the changes that would have the biggest impact across the whole customer base. By fixing a couple of customers' specific issues, I was making their lives better for sure, and that felt good, but that time could have been used more effectively for the overall customer base. PMs, managers etc should have a wider view of product needs, and it is their job to prioritize the work having that fuller context. Much as I felt at the time that those roles added little value, that was really not true.
Of course, I agree that all the points made above about PMs, managers, and support having their reasons to obstruct are true in some cases. But in a well-run company where those roles really do their job (and contrary to popular opinion, those companies do exist), things work better if engineers do not get too involved with individual customers. I guess Google might be a good example - if you have a billion customers, you probably don't want the engineers talking to them 1:1.
> Engineers have a perception that most other roles are lesser
Do they? I always felt I was at the bottom of the chain. "Moving up" means leaving engineering and going into management.
> and if only they were allowed to be in charge things would go better.
Could this be an oversimplification? Engineers understand how the product is built because they are the ones building it. And sometimes they are exposed to what other people (e.g. product people) have decided, and they know a better way.
As an engineer, I am always fine if a product person listens to my saying that "doing it this way would be superior from my point of view", somehow manage to prove to me that they understood my points, but tell me that they will still go a different direction because there are other constraints.
Now I have had many product people in my career who I found condescending: they would just dismiss my opinion by saying "you don't know because you don't have all the information I have, and I don't have time to convince you, so I will just go for what you see as an inferior way and leave you frustrated". Which I believe is wrong.
Overall, I don't make a hierarchy of roles: if I feel like someone is in my team, I play with them. If I feel like they are an adversary, I play against them. I don't feel like I am superior to bad managers or bad product people; I just feel like they are adversaries.
It’s oblique but this puts me in mind of an old adage I recently heard about war: Of 100 men, one should be a warrior, nine should be soldiers, and 90 shouldn't be there at all.
I think this is true of software developers too: only in companies, the 90% don’t really know they shouldn’t be there and they build a whole world of systems and projects that is parallel to what the company actually needs.
This reads like it was written by a PM. You lacked higher-level context and prioritization skills early in your career, so the takeaway is that it's best to cede agency to others?
There is a whole modern line of thinking that leaders should be providing the context and skills to give high performing teams MORE agency over their work streams.
I think he has a point. These power structures exist for some good reasons as well.
The opposite thing (engineers engaging directly with customers) can eventually lead to customer capture of your engineering org. You shouldn't have a small group of existing, noisy customers directly driving your engineering to the detriment of other existing or future customers.
Microsoft had customer capture institutionally: the existing big corporate customers were all that mattered. It led to rebooting Windows CE into Windows Mobile way too late to make a difference, for example. But it also meant that backwards compatibility and the desire to ship Windows XP forever were sacred cows.
There are also nasty games that can be played by soliciting negative feedback for political advantage.
Dysfunction can exist with any structure. It's probably best that there's some small amount of direct user feedback as well as the big formalized feedback systems, at least so that one is a check for the performance of the other. If the user engagement team says everything is good, but there are massive Reddit threads about how horrible the product is to work with and the engineers know it could be better, it's time for engineering to start addressing the issues alongside feedback to the user engagement teams.
There's not enough hours in the day for everyone to do everything.
> There is a whole modern line of thinking that leaders should be providing the context and skills to give high performing teams MORE agency over their work streams.
Yes, this is great for agency over implementation, because leaders do not have context to decide and dictate the What/How of implementing every single change or solution to a problem. And the implementers need to know the context to ensure they make decisions consistent with that context.
But "leaders providing the context" is very different from "everyone researching the context on their own." So where are leaders getting this context from? A not-very-differentiated pile of 1000 generalist engineers-who-also-talk-to-customers-frequently-and-manage-their-own-work-streams? Or do they build a team with specialists to avoid needing the majority of people to constantly context-switch in a quest to be all of context-gatherers, work-prioritizers, market-researchers, and implementation-builders?
There are many leaders that use information as a tool that serves their own needs.
They may have the context, but they are either too focused on their own job to share it, or actively manage dissemination so they can manipulate the organization.
In my experience, this is the typical operating mode, though I do not think it is sinister or malicious - just natural.
Agree that this can be an issue, but to clarify: I was finding bugs or missed outages, not gathering feature requests or trying to do product dev. Think "I clicked the button and got a 500 Server Error". I don't think random devs should try to decide what features to work on by reading user forums - having PMs decide that does make sense, as long as the PM is good. However, big tech PMs too often abstract the user base behind metrics and data, and can miss obvious/embarrassing bugs that don't show up in those feeds. The ground truth is still whether users are complaining. Eng can skip complaints about missing features/UI redesigns or whatever, but complaints about broken stuff in prod need their attention.
An org can always go too far in the opposite direction, but this is not an excuse to never talk to the customer. The latter is much more likely, so the warning to not get “into bed” with the customer falls flat.
This is a common pattern here. Alice says 0 degrees is too cold, I prefer 20C, Bob chimes in “100C is too hot, it’ll kill us.” Ok, well no one said or implied to crank it to one hundred.
If you have M customer complaints, and each one risks a differently-sized N customers... you had better triage that rather than just playing whack-a-mole with whatever comes to a random engineer first. I've certainly seen engineers plow through a bunch of zero-or-one-customers-would-actually-churn-over-this papercuts because it was easy and it felt good - the customer mentioned it! I fixed it! - while ignoring larger showstoppers that are major customer acquisition and retention barriers.
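To make that concrete, here is a minimal triage sketch in Python - the field names, weights, and numbers are all invented for illustration, not from any real system:

    # Hypothetical triage sketch: rank complaints by estimated blast
    # radius rather than by arrival order. All numbers and field names
    # here are invented for illustration.
    complaints = [
        {"id": "papercut-42", "affected_users": 1, "churn_risk": 0.01},
        {"id": "showstopper-7", "affected_users": 50_000, "churn_risk": 0.30},
        {"id": "papercut-13", "affected_users": 3, "churn_risk": 0.05},
    ]

    def impact(c):
        # Expected users lost if left unfixed (a crude proxy, not a real model).
        return c["affected_users"] * c["churn_risk"]

    for c in sorted(complaints, key=impact, reverse=True):
        print(f'{c["id"]}: expected loss ~{impact(c):.1f} users')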
Nothing is knowable, only in the same way that plans are useless but planning is essential.
> Every previous job I've had has a similar pattern. The engineer is not supposed to engage directly with the customer.
Chiming in to say I’ve experienced the same.
A coworker who became a good friend ended up on a PIP and was subsequently fired for “not performing” soon after he helped build a non-technical team a small tool that really helped them do their job quicker. He wasn’t doing exactly as he was told, and I guess that’s considered not performing.
Coincidentally the person who pushed for him to be fired was an ex-Google middle manager.
I’ve also commonly seen this weird stigma around engineers, as if we’re considered a bit unintelligent when it comes to what users want.
Maybe there is something to higher ups having some more knowledge of the business processes and the bigger picture, but I’m not convinced that it isn’t also largely because of insecurity and power issues.
If you do something successful that your manager didn’t think of and your manager is insecure about their own abilities, good chance they’ll feel threatened.
I worked on an internal tools team for a few years and we empowered engineers to fix user issues and do user support on internal support groups directly.
We also had PMs who helped drive long term vision and strategy who were also actively engaging directly with users.
We had a "User Research" team whose job it was to compile surveys and get broader trends, do user studies that went deep into specific areas (engineers were always invited to attend live and ask users more questions or watch raw recordings, or they could just consume the end reports).
Everyone was a team working together towards the same goal of making these tools the best for our internal audience.
It wasn't perfect, and it always broke down when people wanted to become gatekeepers of this or that, or were vying for control or power over our teams or product. Thankfully our leadership over the long term tended to weed those folks out one way or another, so we've had a decent core group of mid-level and senior eng who have stuck around as a result for a good 3 years (a long time to keep a core group engaged and retained working on the same thing), which is great for having good institutional knowledge about how everything works...
There's another thread on HN at the moment about legislation being written by industry and rubber-stamped by lawmakers. What hit me about this discussion and that one is that there's a lot of self-interest out there with very little scrutiny or auditing. It basically boils down to that. If we want to fix problems at the top, there needs to be independent auditing, reporting, and consequences for people who do the wrong thing. But we all know that's not going to happen, so buckle up and learn to live with broken laws and broken software.
Where I work we regularly bring in engineers to talk to clients directly. Clears up a lot of confusion when there’s something technical a PM wouldn’t understand. We still like to have a filter so a client isn’t trying to get the engineer to do free work. Having engineering isolated is pretty bad IMO.
There are very good less-cynical reasons. I've also seen companies with the opposite problem, where the engineers constantly shoot down real, important feedback brought by customer support in order to preserve the superiority of engineering over support.
If you have ten engineers and even just 100 customers, you have a very high number of conversational edges (10 × 100 = 1,000 of them). Good luck keeping things consistent and doing any sort of long-term planning if engineers are turning the output of those conversations directly into features. "Engineers talking to customers but not making any changes" would be more stable, but is still a very expensive/chaotic way to gather customer feedback.
Additionally, very few of those single engineers have a full knowledge of the roadmap and/or the ability to unilaterally decide direction based on some of the customer feedback or questions. "Will this get fixed in the next two weeks?" "Will you build X?" etc. You don't want your customers getting a bunch of inconsistent broken promises or wrong information.
The best-managed orgs I've seen have pretty heavy engineering and user experience in their product and support orgs. You need people in those roles with knowledge of both how it's built AND how it should be used, but you can't continually cram all that knowledge into every single engineer.
A startup should start with the builders talking directly to the customers. But at some point, if successful, you're going to have too many people to talk to, and you'll need to add some intermediaries to prevent all your engineering time going to random interrupts, plus some centralization of planning responsibilities to ensure someone is figuring out what's actually the most important feedback and that people are going to work on it.
On the contrary, the best products are typically built by the users of the products. If you are building a product you don't use, it will be worse than if you used it.
Users should be everywhere, in and out of engineering.
> User obsession means spending time in support tickets
That's really funny when Google's level of customer support is known to be non-existent unless you're popular on Twitter or HN and you can scream loudly enough to reach someone in a position to do something.
"10. In a large company, countless variables are outside your control - organizational changes, management decisions, market shifts, product pivots. Dwelling on these creates anxiety without agency.
The engineers who stay sane and effective zero in on their sphere of influence. You can’t control whether a reorg happens. You can control the quality of your work, how you respond, and what you learn. When faced with uncertainty, break problems into pieces and identify the specific actions available to you.
This isn’t passive acceptance but it is strategic focus. Energy spent on what you can’t change is energy stolen from what you can."
------------------------
Point 10 makes it sound like the culture at Google is to stay within your own bailiwick and not step on other people's toes. If management sets a course that is hostile to users and their interests, the "sane and effective" engineers stay in their own lane. In terms of a company providing services to users, is that really being effective?
User interests frequently cross multiple bailiwicks and bash heads with management direction. If the Google mindset is that engineers who listen to users are "weird" or not "sane"/"effective", that certainly explains a lot.
It is an almost universal fact that dealing with retail customers is something that is left to the lowest-paid, lowest-status workers, often outsourced, and now increasingly left to LLM chatbots.
While you obviously can't have highly paid engineers tied up dealing with user support tickets, there is a lot to be said for at least some exposure to the coal face.
> While you obviously can't have highly paid engineers tied up dealing with user support tickets,
You obviously can; that's one of the more visceral ways to make them aware of the pain they cause to real people with their work, which sticks better, or it simply serves as a reminder that there are humans on the other side. There are even examples of highly paid CEOs engaging - we can see some of that on social media.
I love reading these insights into a corp structure. Especially the sociological aspect of it (like "• It was viewed negatively by managers and promo committees."). Thanks a lot.
>• It was viewed negatively by managers and promo committees.
>• An engineer talking directly to users was considered especially weird and problematic.
>• The products did always have serious bugs that had escaped QA and monitoring
Sincerely, thank you for confirming my anecdotal but long-standing observations. My go-to joke about this is that Google employees are officially banned from even visiting user forums. Because otherwise, there is no other logical explanation why there are 10+ year old threads where users are reporting the same issue over and over again, etc.
Good engineering in big tech companies (I work for one, too) has evaporated and turned into Promotion Driven Development.
In my case: write shitty code, cut corners, accumulate tech debt, ship fast, get promo, move on.
> only one engineer was bothering to read spammer forums where they talked to each other, and he was also brand new to the team
This revelation is utterly shocking to me. That's like anti-abuse 101. You infiltrate their networks and then track their behavior using your own monitoring to find the holes in your observability. Even in 2010 that was anti-abuse 101. Or at least I think it was, maybe my team at eBay/PayPal was just way ahead of the curve.
Well, the 101 idiom comes from US education, it's a reference to the introductory course. Part of the problem with anti-abuse work is that there's no course you can take and precious little inter-firm job hopping. Anti-abuse is a cost of business so you don't see companies competing over employees with experience like you do in some other areas like AI research. So it's all learning-by-doing and when people leave, the experience usually leaves with them.
After leaving Google the anti-abuse teams at a few other tech companies did reach out. There was absolutely no consistency at all. Companies varied hugely in how much effort and skill they applied to the problem, even within the same markets. For payment fraud there is a lot of money at stake so I'd expect eBay would have had a good team, but most products at Google didn't lose money directly if there was abuse. It just led to a general worsening of the UX in ways that were hard to summarize in metrics.
I seem to recall sitting in weekly abuse team meetings where one of the metrics was the price of a google account on the black market. So at least some of these things were tracked and not just by one individual.
If an engineer talking to users is considered problematic, then it is safe to assume that Google is about as far away from any actually agile culture as possible. Does Google ever describe itself as such?
Having only ever worked for startups or consulting agencies, this is really weird to me. Across 6 different companies I almost always interfaced directly with the users of the apps I built to understand their pain points, bugs, etc. And I've always ever been an IC. I think it's a great way to build empathy for the users of your apps.
Of course, if you're a multi billion dollar conglomerate, empathy for users only exists as far as it benefits the bottom line.
Thanks for sharing your valuable insights. I am quite surprised to learn that talking to customers was frowned upon at Google (or your wider team at least). I find that the single most valuable addition to any project - complementary to actually building the product. I have a feeling a lot of the overall degradation of software quality has to do with a gradual creep in of non-technical people into development teams.
There are, and oftentimes they're stuck in a loop of presenting decks and status and writing proposals rather than doing this kind of research.
That said, interpreting user feedback is a multi-role job. PMs, UX, and Eng should be doing so. Everyone has their strengths.
One of the most interesting things I've had a chance to be a part of is watching UX studies. They take a mock (or an alpha version) and put it in front of an external volunteer and let them work through it. Usually PM, UX, and Eng are watching the stream and taking notes.
When you get to a company that's that big, the roles are much more finely specialized.
I forget the title now, but we had someone who interfaced with our team and did the whole "talk to customers" thing. Her feedback was then incorporated into our day-to-day roadmap through a complex series of people that ended with our team's product manager.
So people at Google do indeed do this, they just aren't engineers, usually aren't product managers, frequently are several layers removed from engineers, and as a consequence usually have all the problems GP described.
PM is a fake job where the majority have long learned that they can simply (1) appease leadership and (2) push down on engineering to advance their career. You will notice this does not actually involve understanding or learning about products.
It's why the GP got that confused reaction about reading user reports. Talk to someone outside big company who has no power? Why?
I've had the pleasant experience of having worked for PMs at several companies (not at Google) who were great at their jobs, and advocated for the devs. They also had no problem with devs talking directly with clients, and in fact they encouraged it since it was usually the fastest way to understand and solve a problem.
Almost every job in the US is primarily about pleasing leadership at the end of the day.
If companies didn’t want that sort of incentive structure to play out, then they would insulate employees from the whims of their bosses with things like contracts or golden parachutes that come out of their leadership’s budget.
They pretty much don’t, though, so you need to please your leadership first to get through the threat of at-will employment, before considering anything else.
If you’re lucky, what pleases your leadership is productive, and if you’re super lucky, what pleases them even pleases you.
Gotta suck it up and eat shit or quit if it doesn’t though
> If the Google culture was at all obsessed with helping users
It's worth noting that Osmani worked as a "developer evangelist" (at Google) for as long as I can remember, not as a developer working on a product shipped to users.
It might be useful to keep that in mind as you read through what his lessons are, because they're surely shaped by the positions he held in the company.
I was Addy's manager when he was on Developer Relations.
He moved to an engineering manager role on Chrome DevTools many years ago and has recently just moved on to a different team. I don't think it's fair at all to say he's not a developer working on a product shipped to users when he led one of our most used developer tools, as well as worked on many of our developer libraries prior to moving to the Engineering manager role.
Yeah, maybe I should have been more precise; I meant "end users like your mom" rather than "not real users". Developing for developers, in an engineering-heavy team, is obviously different from the typical product-development team.
I think it is more the point that the users for his job were external developers. The role is inherently user-facing and user-focused. I don’t think anyone was trying to say he wasn’t a developer, just that his job wasn’t to directly develop products.
Yeah, I guess I just wanted to add that because the way that quote was cut at the end made me believe that the person quoting me thought Osmani "isn't a developer".
I think that's more "this sounds great" than "our users are developers". Google's services also aren't aimed at developers, the APIs are often very bureaucratic and not very well done (there's no way to list the available google sheets documents in the sheets api, I need the drive API and a different set of permissions? please.)
It reads exactly like what you'd expect from a "I want to be considered a thought leader" person: nothing you haven't read a hundred times but it sounds nice so you can nod along.
> If the Google culture was at all obsessed with helping users, I wonder why Google UX always sucked so much
Ok, I mean this sincerely.
You must never have used Microsoft tools.
They managed to get their productivity suite into schools 30 years ago to cover UX issues, even now the biggest pain of moving away is the fact that users come out of school trained on it. That also happens to be their best UX.
Azure? Teams? PowerBI? It's a total joke compared to even the most gnarly of Google services (or FOSS tools, like Gerrit).
I do agree with you. Teams is a cancer, and the Azure UI sucks too. I do not use many MS products; since essentially Win7 I have mainly used Linux as my work environment. But one thing MS used to be good at, at least, was the documentation. If you are that old, you will remember each product came with extensive manuals AND there was actual customer support. With Google it's like... not even that.
With continuous delivery and access to preview and beta features, the documentation is fragmented and scattered, and half of it is technically for the previous version of the product with a different name, but still mostly works because Microsoft can't finish modernizing most software...
And the customer support is not great until you start really paying the big bucks for it.
> If you are that old, you will remember each product came with extensive manuals AND there was actual customer support.
But even then, contemporaries outclassed Microsoft by a lot.
It was the culture back then to provide printed user manuals; I still have some from Sun Microsystems because they were the best resource I found for learning how storage appliances should work and their technical trade-offs.
Fair enough, everyone delivered software in boxes and with 500 page manuals. I still maintain MS did invest a lot in the quality of their documentation and they cared about developers - otherwise the Petzold series would have never happened (or the MS Press for that matter).
Honestly your entire comment is almost exact polar opposite to how I feel.
GCP makes total sense if you know anything about systems administration. Google Docs is limited for things like custom fonts (i.e. not gonna happen), but it's simple at least, and I can give people a link to click and it's gonna look the same for them.
But, honestly, the Teams one is baffling. I can't think of a single thing Meet does worse than Teams.
Yeah that seriously whiplashed me too, I'm genuinely confused. Google Meets has always worked completely fine for me, good performance, works well on mobile, Firefox, etc. Nothing special but it works. Probably my favorite of all the meeting apps.
Teams meanwhile is absolutely my least favorite, takes forever to load, won't work in Firefox, nags me to download the app, confusing UI. I don't think I've ever heard anyone say they like teams.
MS Teams might have its issues (and let’s be clear, i agree there are a great many issues) but it has most, if not all, of the Enterprise features you need from a video conferencing suite.
Whereas Google Meets feels more like a cut down toy you’d give to your grandparents.
It’s the same thing with Google Docs. They’re technically impressive for the era they were launched in, but they’re stuck in the 2010s. Doing anything outside of the basics quickly becomes far, far more frustrating than using O365.
Microsoft might write a lot of terrible software with some questionable design choices, but they understand enterprise uses far better than Google.
Even Google Workspaces is severely limited once your business grows beyond 50 people.
I guess if you only work in startups then Google might seem like an easy win. But for any business that’s more established, you just constantly run into hurdles with Google’s suite of software.
As for GCP, I’ve been burned too many times by their support processes. 7 days to approve a GPU quota. Account managers literally trying to steal business secrets (when I worked for an AI startup and Google were stagnating in the AI space). And so on and so forth. Though I’ve not been hugely impressed with Azure either; they constantly break managed services and balls up scalability promises, and then refuse to admit it until we present them with empirical evidence. It really feels like the best cloud engineers have left Microsoft (or maybe never joined?).
I've used Meet a few times for video calls and I was amazed at how poorly it worked given the amount of resources Google has at their disposal. I've never had a good video call on Meet. I've had a few Meet calls where over time the resolution and bitrate would be reduced to such a low point I couldn't even see the other person at all (just a large blocky mess). Whereas Teams (for all its flaws) normally has no major issues with video quality. Teams isn't without its flaws and I do occasionally fall back to Zoom for larger group video calls, but at the end of the day Teams video calling sort of just works. Not great but not terrible either. YMMV of course.
I've had the complete opposite experience. Meet has been rock solid for me whilst Teams has been an absolute nightmare.
The thing is, though, both Meet and Teams use centralised server architectures (SFUs: Selective Forwarding Units for Google, "Transport Routers" for Teams), so your quality issues likely come down to network routing rather than the platforms themselves. The progressive quality degradation you're describing on Meet sounds like adaptive bitrate doing its job when your connection to Google's servers is struggling.
The reason Teams might work better for you is probably just dumb luck with how your ISP routes to Microsoft's network versus Google's. For me in Sweden, it's the opposite ... Teams routes my media through relays in France, which adds enough latency that people constantly interrupt each other accidentally. It's maddening. Meanwhile, Meet's routing has been flawless.
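If you want to sanity-check that routing theory yourself, here is a rough Python sketch - it measures TCP connect time to each provider's public front-end hostname as a stand-in, since the actual media relay hostnames aren't public and real media runs over UDP, so treat it only as a first-order approximation of routing distance, not call quality:

    # Rough sketch: compare TCP connect latency to each provider as a
    # proxy for how directly your ISP routes to their networks. The real
    # media relays are different hosts and use UDP, so this is only a
    # first-order approximation.
    import socket
    import time

    def connect_ms(host, port=443, samples=5):
        total = 0.0
        for _ in range(samples):
            start = time.perf_counter()
            with socket.create_connection((host, port), timeout=3):
                total += time.perf_counter() - start
        return 1000 * total / samples

    for host in ("meet.google.com", "teams.microsoft.com"):
        print(f"{host}: ~{connect_ms(host):.0f} ms average connect time")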
But even if Teams works for your particular network setup, let's not pretend it's a good piece of software. Teams is an absolute resource hog that treats my CPU like a space heater and my RAM like an all-you-can-eat buffet. The interface is cluttered rubbish, it takes ages to start up, and the only reason anyone tolerates it is because Microsoft bundled it with Office 365.
Your mileage definitely varies... sounds like you've got routing that favours Microsoft's infrastructure. Lucky you, I suppose, but that doesn't make Teams any less dogwater for those of us stuck with their poorly-placed European relays.
As someone who worked on Meet at Google, it seems it could have been networking to the datacenters the call is routed from, or some issues with UDP comms on your network which triggered a bad fallback to WebRTC over TCP. It could also have been issues with the browser version you used.
Since Teams is using the very old H264 codec and Meet is using VP8 or VP9 depending on the context, it's possible you also had some other issues with bad decoding (usually done in software, but occasionally by the hardware).
Overall, it shouldn't be representative of the experience on Meet that I've seen, even from all the bug reports I've read.
It's not just Google; the UX is degrading in... well, everything. I think it's because companies are in duopoly, monopoly, etc. positions.
They only do what the numbers tell them. Nothing else, and UX just does not matter anymore.
It's like those gacha games which make billions. Terrible games, almost zero depth, but people spend thousands in them. Not because they are good, but because they don't have much choice (no similar game without gacha), and part of the game loop is made for addiction and built around numbers.
To offer some additional causes for the degradation of UX:
1. An increasing part of industry profits started coming from entertainment (or worse, psychological exploitation) instead of selling the customer a useful tool. For example, good budgeting-software has to help the user understand and model and achieve a goal, while a "good" slot-machine may benefit from confusion and distraction and a giant pull-handle.
2. "Must work on a touchscreen that fits in a pocket" support drags certain things to a lowest common denominator.
3. UX as a switching-cost for customers has started happening more on a per-product rather than a per-OS basis. Instead of learning the Windows or Mac "way" of screens and shortcuts, individual programs--especially those dang Electron apps--make their own reinventions of the wheel.
To be fair, it reads precisely “1. The best engineers are obsessed with solving user problems”. This doesn’t say those engineers are working at Google, just that it’s something the author learned whilst they worked at Google.
“Some [of these lessons] would have saved me months of frustration”, to quote the preamble.
I was going to post exactly this! He was talking about those engineers that really exemplified, from his point of view, good engineering.
And dealing with engineering managers that didn't see much use in such activity might be part of "figur[ing] out how to navigate everything around the code: the people, the politics, the alignment, the ambiguity".
Addy's users have been developers and Google has been very responsive in the past. I was usually able to get a hold of someone from teams I needed from Chrome DevTools and they've assisted open source projects like Node.js where Google doesn't have a stake. He also has a blog, books and often attended conferences to speak to users directly when it aligned with his role. I agree about the general Google criticism but I believe it's unjustified in this particular (admittedly rare) case.
And Material UI is still the worst of all UIs. Had the pleasure of rolling out a production OAuth client ... jesus christ. The only thing worse is Microsoft's UX. You don't want me to use your services, do you?
I'm not sure how that got approved either, but at least we now know what would happen if a massive corporation created a UI/UX toolkit, driven only by quantitative analytics making every choice for how it should be, seemingly without any human oversight. Really is the peak of the "data-driven decisions above all" era.
I have an issue with the first point as well, but differently. Having worked on a user-facing product with millions of users, the challenge was not finding user problems, but finding frequent user problems. In a sufficiently complex product there are thousands of different issues that users encounter. But it's non-trivial to know what to prioritize.
I was also surprised to read this. I have terrible problems with all Google UIs. I can never find anything and it's an exercise in frustration to get anywhere.
I think your particular Gmail issue exists because they want mobile web and touch screen web users (there are dozens of us!) to be able to tap the recipient to show the user card, like hover does for mouse users. To support your usecase (click to directly edit recipient), touch, click, and hover need to have different actions, which may upset some other users. Unless you mean double click to edit, which I would support.
I save my energy for more heinous UX changes. For example, the YouTube comment chyron has spoiled so many videos for me and is just so generally obnoxious.
There is a lot of nuance to their point. They are saying, in the long run, career wise, focusing on the actual user matters and makes your projects better.
Google UX is decent, and the author was not trying to comment on UX as a thing at Google. More that, if you follow the user, what you are doing stays grounded, and it makes your project way more likely to succeed. I would even argue that in many cases it bucks the trend. The author even pointed out that, in essence, there is a graveyard of internal projects that failed to last because they seemed cool but did nothing for the user.
Read their point 1 carefully. They are saying, when you are building something or trying to solve a problem (for internal or external users) if you follow the user obsessively you will have a far better outcome that aligns with having impact and long term success. This does imply thinking about UX, but transitively, IMO.
I am not sure I follow - is he, or is he not, writing about his experiences from 14 years at Google? The title suggests he does, yet you suggest that he does not?
Oh, I have no doubt they are at Google. I was just trying to say that the author was not really making a commentary on UX directly. The author was trying to make the point that understanding what sort of products and problems users have is a valid long-term strategy for solving meaningful problems and attaching yourself to projects, within Google, that are more likely to yield good results. And if you, yourself, are doing this within Google, it benefits you directly. A lot of arguments win and die on data, so if you can make a data-driven argument about how users are using a system, or what the ground reality of usage in a particular system is, and pair that with anecdotal user feedback, it can take you a long way towards steering your own and your org's work towards things that align well with internal goals, or help reset and re-prioritize those goals.
His learnings from 14 years at Google. Surely we've all learned things working for employers or with engineers that don't do a thing well.
In 14 years he probably also experienced great engineers come and go and start other successful businesses that they very likely did not run exactly like Google.
The short answer is that the UI isn’t optimized for users like you.
I haven’t worked for Google specifically, but at this scale everything gets tested and optimized. I would guess they know power users like you are frustrated, but they know you’ll figure it out anyway. So the UX is optimized for a simpler target audience and possibly even for simpler help documents, not to ensure power users can get things done as quickly as possible.
I feel like you're giving too much credit here. I don't know if it was a leak or an urban legend, but I remember the awful Win 8 "flat boxes" UI being that way because it could be designed by managers in PowerPoint.
The specific feature in question... there is nothing "power" about it. It was a non-feature for decades, essentially; I don't recall ever not being able to simply change an e-mail address by moving the cursor and typing in something else. How on earth is this something tested and optimised, and for whom exactly?
This is almost certainly not the case. The larger the company, the more change is viewed as a negative. Yes, people may hold titles to do the things you describe, but none are empowered to make changes.
Google UI seemingly is optimized for happy-path cases. Search for the obvious word and click a relevant link on the screen which appears. Write a single response to a single email and abandon that conversation afterwards; always use new conversations for every new email. Click a recommended video thumbnail on the front page and then continue with autoplay. Put only short, well-defined text types in the cells of a spreadsheet, like date/number/text, etc. And so on with all of their products.
But as soon as a user tries to search for something not on the first page, or reply to a 10-20+ message thread with attachments in its history, or tries to use playlists or search in YT, or input slightly more complex data in the sheet cells - then all hell breaks loose.
Just the latest Google thing I've experienced - the default system Watch Later playlist is now hidden on Android. It's gone, no traces, no way to search for it. The only remnant of it is a 2-second popup after adding a new video to Watch Later; you can press "view" and then see it. Meanwhile it is still present as a separate item on PC. I'm writing this example because that was deliberate; it was no error or regression. Someone created a Jira ticket for that and someone resolved it.
This is definitely an edge case. Most UI/UX from Google is very consistent and just works. Otherwise they wouldn't be in this market.
The only UI/UX issue is that most experienced users don't want to adapt to change. It is like people always saying Windows 7 is the best. Don't keep reinventing.
Another one that irks me is that every UI/UX dev assumes people have 2 x 4K monitors, and menu items overflow.
> The only UI/UX issue is that most experienced users don't want to adapt to change
Users will not only adapt, but will even champion your changes if they make sense to said users. For example the web checkout, or to name a more drastic example, the iPhone and fingers as user interface devices. Once you start convincing the users that the interface is great but they are too resistant to change/dumb/uncreative to know how to use it... it's a different story, I'd reckon ;)
> Recently I was writing an e-mail and noticed I had misspelled the recipient's address, which I rarely do. So I should just be able to click the address and edit it quickly, right? Wrong - now you get a popup menu, and inside it you have to search for the "edit e-mail" option.
I just tested this out and I don't think that's a particularly good example of bad UI/UX. Clicking the email address brings up a menu with options for other actions, which presumably get used more often. If, instead, you right-click the email address, the option to edit it is right there (last item on the bottom, "Change email address"). I don't see this as a huge penalty given that, as you said, it's rarely used.
There's also the "X" to the right of the email address, which you can use to delete it entirely, no extra clicks required.
> I just tested this out and I don't think that's a particularly good example of bad UI/UX
Luckily for both you and me, we don't have to rely on our feelings about what is good UX or not. There are concrete UX methodologies such as Hierarchical Task Analysis (HTA) or Heuristic Evaluation. These allow us to evaluate concrete KPIs, such as the number of steps and levels of navigation required for an action, in order to evaluate just how good or bad (or, better said, how complicated) a UX design is.
Let's say we apply the HTA. Starting from the top navigation level when you want to execute the task, count the number of operations and levels of navigation you have to go through with the new design, compared to just clicking and correcting the e-mail address in place. How much time does it take you to write your e-mail in both cases? How many times do you have to switch back and forth between the main interface and the context menu Google kindly placed for us?
Now zoom out from your e-mail writing window and consider how many different actions you can execute in Google Workspace. Most of them are likely to have a few quirks like this. Multiply the estimated number of actions by the number of quirks and you will slowly start to see the immense cognitive load the average user has to face in using, or shall I rather say "combating", the Google products' UX. A toy version of that step count comparison is sketched below.
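To make the HTA-style counting concrete, here is a toy Python sketch - the step lists are my reconstruction of the two Gmail flows described above, for illustration only, not a formal task analysis:

    # Toy HTA-style comparison: count the interaction steps needed to fix
    # a misspelled recipient. The step lists are illustrative
    # reconstructions of the old and new flows, not authoritative.
    old_flow = [
        "click the recipient address",
        "edit the text in place",
    ]
    new_flow = [
        "click the recipient address",
        "wait for the popup menu to appear",
        "scan the menu for the right option",
        "click 'edit e-mail'",
        "edit the text",
        "dismiss the popup",
    ]
    for name, steps in (("old", old_flow), ("new", new_flow)):
        print(f"{name} flow: {len(steps)} steps")
        for i, step in enumerate(steps, 1):
            print(f"  {i}. {step}")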
https://bluecinema.ch (To buy movie tickets for a certain movie chain in Switzerland. I haven't used this in many years, but at first glance it looks like I remember it. Back then, this was a very smooth experience both on desktop and mobile. Just perfectly done.)
Any spreadsheet program (it's the spreadsheet itself, which I like, not necessarily how the UI is arranged around it)
Apple's Spotlight, GNOME's similar thing (don't know the name)
For all the necessary complexity and race-to-the-bottom features, I am a fan of JetBrains. I like using Uber, Twitch (wrote a plugin for it one weekend to integrate with Chrome), Netflix, Discord. There are plenty of companies that manage to be enjoyable to end users and expose APIs without the inscrutable abstractions and terminology I encounter using Google products. It feels the same as working with Oracle.
Netflix? The barely functional video player accessed via excessively bloated thumbnail gallery? About the only good thing to say about this is that all the other movie streaming platforms somehow are even worse.
It's not hating - just stating the facts. Most companies unfortunately don't have a nice UX these days, because common UX practices like not making the user think (i.e. not overcomplicating the UIs) and not blocking users (not showing annoying popups in the middle of UI workflows) somehow became a lost art. Some products are inherently easy to use, like draw.io for example. I really like the UX on Stripe, in particular their onboarding process. There is also a semi-famous e-commerce company in the furniture space. I forgot their name (something with W?), but I ordered something once and was really impressed by how smooth and uncomplicated the process from browsing the inventory to checkout and delivery itself was.
No one's. Everyone sucks. Find a product and you'll find a population collating complaints about it. Whining about interface design is like the cheapest form of shared currency in our subculture.
Fundamentally it's a bikeshed effect. Complaining about hard features like performance is likely to get you in trouble if you aren't actually doing the leg work to measure it and/or expert enough to shout down the people who show up to argue. But UI paradigms are inherently squishy and subjective, so you get to grouse without consequences.
As somebody who already does this, I wouldn't say the Thunderbird's UX is the real motivation.
I do it for autonomy and avoiding lock-in, but Thunderbird has some frustrating inconsistencies particularly in its mishmash of searching and filtering.
More seriously - open source software is resistant to enshittification. It's obviously not a panacea, but the possibility of forks (or just the user deciding not to update), combined with the difference in profit motive, tends to result in software that respects the user.
(Taken holistically, the UX of software does not just mean the UI, or the moments when you are using the software. It also includes the stability of the software over time, including whether or not you are able to reject new versions that you do not like.)
This. The only real risk with open source is that a (fairly niche) project is discontinued/abandoned and you can't find binaries for it anymore (and you don't have the skills to build it yourself). But this happens to proprietary software all the time (see killedbygoogle.com).
Omni Group. Wolfram. Parts of Apple. Rhino3D. Parts of Breville. Prusa (on device, not on desktop). Speed Queen (dial-based). Just from applications I currently have open and devices I can see from where I'm sitting.
I mean something that has a clear Google analog/equivalent that we can compare on. I personally think Wolfram Alpha (assuming that's what you're talking about) isn't any better than Google.
Never really used Alpha, was talking about Mathematica.
I don’t think the web is compatible with good UX, but that doesn’t mean good UX isn’t possible - it just means that the companies that are successful at UX build native applications, or physical objects, or both.
You are onto something there, if you mean the design roles being taken over by people who are not techies - like the POs. But if you just mean UX being designed for mobile devices, that is not an excuse for an even worse UX on mobile. If anything, I would have expected more effort put in there, given how many more issues the limited screen real estate can cause...
> wonder why Google UX always sucked so much and, particularly in recent years, seems to be getting even worse
UX? Google doesn't even bother helping folks locked out of their Gmail accounts. For people who use Android (some 3bn), that's like a digital death sentence, with real-world consequences.
It is almost comical that anyone would think Google is customer-focused, but one might if one were being paid handsomely to think otherwise, all the while drinking a lot of kool-aid.
The thing is that at scale your edge cases are still millions of people. Companies love the benefits that come from scale, like having a billion people use their service, but they never seem to be capable of handling the other parts that come with it :(
Google rakes in $100bn a quarter; that's $1bn every day.
That is a great point too. For a company which effectively has no customer service, how can they claim to be obsessing about helping users at all?
And how are they supposed to do it if users did not set up proper 2FA (and back up those recovery keys)?
Even banks are struggling to authenticate folks. For a long time in the EU, people with third-world passports could not create accounts easily.
Google cannot easily connect the identity of a person to an email address. Or would they need to create a customer service operation that can authenticate passports? Across hundreds of countries, with stolen IDs?
Nay.
> The thing is that at scale your edge cases are still millions of people
> never seem to be capable of handling the other parts that come with it
Same thing with govts. If you go to a driver's license, passport, or any other govt office, there will be one person with some strange issue.
When Google first launched its homepage, its emptiness (just a logo & search box) was a stark contrast to the then-popular portal pages, which were loaded with content.
Some thought the Google homepage "sucked" whereas others liked it. (I was in the latter camp.)
Likewise, the interface for Gmail. Or the interface for Google Maps. Or the interface for Chrome.
I remember when Google appeared and literally can't recall anyone who thought it sucked. There statistically have to be some people who hated it. But everyone I knew was either on dial-up or low bitrate leased line and it was impossible to dislike that design.
But not everyone was on dial-up. A lot were in dorms w/ (for the time) high speed connections or workplaces with it.
Remember, at the time it wasn't clear that search was going to be the dominant pattern for how people found information on the web. It seems crazy now, but in the early days of the web, the space was small enough that a directory-style approach worked pretty well. It was Yahoo's directory that made it initially popular, not its search.
And so there was a fair bit of debate on which was better -- something like a directory + search (a la Yahoo!) vs just search.
It took a bit of time before search proved that, if it was done really well, you didn't need a directory.
As a developer I took the writer's point to refer to "users" generically, so that even if you work on some internal tools or a backend layer, you still have users who have to use your app or consume your API and there is a lot of learning possible if you communicate and understand them better.
Probably the users he is talking about are not the end users like you and me. It is one team using the tools/software of the other team and so "users" for that other team are the members of the first team.
It will only get better at generating random slop and other crap. Maybe helping morons who are unable to eat and breathe without consulting the "helpful assistant".
> One can treat current LLMs as a layer of "cheese" for any software development or deployment pipeline
It's another interesting attempt at normalising the bullshit output by LLMs, but NO. Even with the enshittified Boeing, the aviation industry's safety and reliability records are far, far above deterministic software (known for a lot of unreliability itself), and deterministic B2C software is to LLMs what Boeing and Airbus software and hardware reliability are to B2C software... So you cannot even begin to apply aviation industry paradigms to the shit machines, please.
I understand the frustration, but factually it is not true.
Engines are reliable to about 1 anomaly per million flight hours or so; current flight software is more reliable, on the order of 1 fault per billion hours. In-flight engine shutdowns are fairly common, while major software anomalies are much rarer.
I used LLMs for coding and troubleshooting, and while they can definitely "hit" and "miss", they don't only "miss".
I was actually comparing aviation HW+SW vs. consumer software...and making the point that an old C++ invoices processing application, while being way less reliable than aviation HW or SW, is still orders of magnitude more reliable than LLMs. The LLMs don't always miss, true...but they miss far too often for the "hit" part to be relevant at all.
They miss but can self correct, this is the paradigm shift. You need a harness to unlock the potential and the harness is usually very buildable by LLMs, too.
Concrete examples are in your code just as they're in my employer's which I'm not at the liberty to share - but every little bit counts, starting from the simplest lints, typechecks, tests and going to more esoteric methods like model checkers. You're trying to get the probability of miss down with the initial context; then you want to minimize the probability of not catching a miss, then you want to maximize the probability of the model being able to fix a miss itself. Due to the multiplicative nature of the process the effect is that the pipeline rapidly jumps from 'doesn't work' to 'works well most of the time' and that is perceived as a step function by outsiders. Concrete examples are all over the place, they're just being laughed at (yesterday's post about 100% coverage was spot on even if it was an ad).
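To make the multiplicative argument above concrete, here is a minimal sketch of such a harness as a generate-check-fix loop. Everything in it is hypothetical: generatePatch, runChecks, and askModelToFix stand in for your model API and your existing lints/typechecks/tests. It illustrates the structure being described, not any particular tool:

    // Hypothetical adapters: wire these to a model API and to real checks
    // (lints, typechecks, tests, model checkers).
    declare function generatePatch(task: string): Promise<string>;
    declare function runChecks(patch: string): Promise<string[]>; // returns failure messages
    declare function askModelToFix(patch: string, failures: string[]): Promise<string>;

    // Each stage attacks one factor of the product:
    // P(bad patch ships) = P(miss) * P(miss not caught) * P(model can't fix it).
    async function harness(task: string, maxAttempts = 5): Promise<string> {
      let patch = await generatePatch(task);            // good initial context lowers P(miss)
      for (let i = 0; i < maxAttempts; i++) {
        const failures = await runChecks(patch);        // checks lower P(miss not caught)
        if (failures.length === 0) return patch;        // all gates passed
        patch = await askModelToFix(patch, failures);   // self-correction attacks the last factor
      }
      throw new Error("checks still failing; hand off to a human");
    }

Because the factors multiply, strengthening any one gate cuts the failure rate of the whole pipeline, which is why the jump from "doesn't work" to "works most of the time" can look like a step function from outside.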
> before coding I just ask the model "what are the best practices in this industry to solve this problem? what tools/libraries/approaches people use?
Just for the fun of it, and so you lose your "virginity" so to speak: next time the magic machine gives you the answer about "what it thinks", tell it it's wrong in strict language and scold it for misleading you. Tell it to give you the "real" best practices instead of what it spat out.
Then sit back and marvel at the machine saying you were right and that it had misled you, producing a completely, somewhat, or slightly different answer (you never know what you'll get on the slot machine).
AI boosters? Like people are planted by Sam Altman like the way they hire crowds for political events or something? Hey! Maybe I’m AI! You’re absolutely right!
In seriousness: I'm sure there are projects that are heavily powered by Claude. Myself and a lot of other people I know use Claude almost exclusively to write, and then leverage it as a tool when reviewing. Almost everyone I hear with this super negative, hostile attitude references some "promise" that has gone unfulfilled, but it's so silly: judge the product they are producing, and maybe, just maybe, consider the rate of progress to _guess_ where things are heading.
I never said "planted", that is your own assumption, albeit a wrong one. I do respect it though, as it is at least a product of a human mind. But you don't have to be "planted" to champion an idea, you are clearly championing it out of some kind of conviction, many seem to do. I was just giving you a bit of reality check.
If you want to show me how to "guess where things are heading": I am actually one of the early adopters of LLMs and have been engineering software professionally for almost half my life now. Why do you think I was an early adopter?
Because I was skeptical or afraid of that tech? No, I was genuinely excited. Yes you can produce mountains of code, even more so if you were already an experienced engineer, like myself for example.
Yes, you can even get it to produce somewhat acceptable outputs, with a lot of effort at prompting it and the fatigue that comes with it. But at the end of the day, as an experienced engineer, I am not more productive with it; I end up less productive because of all the sharp edges I have to take care of, all the sloppily produced code, unnecessary bloat, hallucinated or injected libraries, etc.
Maybe for folks who were not good at maths or had trouble understanding how computers work this looks like a brave new world of opportunities. Surely that app looks good to you, how bad can it be? Just so you and other such vibe-coders understand, here is a parallel.
It is actually fairly simple for a group of aviation enthusiasts to build a flying airplane. We just need to work out some basic mechanics and controls and attach engines. It can be done; I've seen a couple of documentaries too. However, those planes are shit. Why? Because my team of enthusiasts and I don't have the depth of knowledge of a team of aviation engineers to inform our decisions.
What is the tolerance for certain types of movements, what kind of materials do I need to pick, what should be my maintenance windows for various parts, etc.? These are things experts can decide on almost intuitively, yet with great precision, based on their many years of craft and that wonderful thing called human intelligence. So my team of enthusiasts puts together an airplane. Yeah, it flies. It can even be steered. It rolls, pitches, and yaws. It takes off and lands. But to me it's a black box, because I don't understand the many, many factors, forces, pressures, tensions, and effects acting on an airplane during its takeoff and flight. I am probably not even aware WHAT I should be aware of, because I don't have that deep education in mechanical engineering, materials, aerodynamics, etc. Neither does my team. So my plane, while impressive to me and my team, will never take off commercially, not unless a team of professionals takes it over and remakes it to professional standards. It will probably never even fly in a show. And if I or someone on my team dies flying it, you guessed it: our insurance sure as hell won't cover the costs.
So what you are doing with Claude and other tools, while it may look amazing to you, is not that impressive to the rest of us, because we can see the wheels beginning to fall off even before your first takeoff. Of course, before I can even tell that, I'd have to actually see your airplane, its design plans, etc. So perhaps first show us some of those "projects heavily powered by Claude" and their great success, especially commercial success (otherwise it's a toy project), before you talk about them.
The fact that you are clearly not an expert on the topic of software engineering should guide you here - unless you know what you are talking about, it's better to not say anything at all.
How would you know whether he is an expert on the topic of software engineering or not?
For all I know, he is more competent than you; he figured out how to utilize Claude Code in a productive way, which is a point for him.
I'd have to guess whether you are an expert working on software not well suited for AI, or just an average engineer with a stubborn attitude towards AI who potentially hasn't tried the latest generation of models and agentic harnesses.
I think it's worth framing things back to what we're reacting to. The top poster said:
> I really really want this to be true. I want to be relevant. I don’t know what to do if all those predictions are true and there is no need (or very little need) for programmers anymore.
The rest of the post is basically their declaration of obsolescence in the programming field. To which someone reacted by saying that this sounds like shilling. And indeed it does, for many professional developers, including those who supplement their craft with LLMs. Declaring that you feel inadequate because of LLMs only reveals something about you. Defending this position is a tell that puts anyone sharing that perspective in the same boat: you didn't know what you were doing in the first place. It's like when someone who couldn't solve the "invert a binary tree" problem gets offended because they believe they were tricked into an impossible task. No, you may be a smart person who understands enough of the rudiments of programming to hack together some interesting scripts, but that's actually a pretty easy problem, and failing to solve it does signal that you lack some fundamentals.
> Considering those views are shared by a number of high profile, skilled engineers, this is obviously no basis for doubting someone's expertise.
I've read Antirez, Simon Willison, Bryan Cantrill, and Armin Ronacher on how they work or want to work with AI. From none of them have I gotten this attitude that they're no longer needed as part of the process.
> Considering those views are shared by a number of high profile, skilled engineers, this is obviously no basis for doubting someone's expertise
Again, a lot of fluff, a lot of "a number ofs", "highly this, highly that". But very little concrete information. What happened to the pocket PhDs promised for this past summer? Where are the single-dude billion-dollar companies built with AI tools? Or even the multiple-dude billion-dollar companies? What are you talking about?
I've yet to see it from someone who isn't directly or indirectly affiliated with an organisation that would benefit from increased AI tool adoption. Not saying it's impossible, but...
Whereas there are what feels like endless examples of high profile, skilled engineers who are calling BS on the whole thing.
You can say the same about people saying the opposite. I haven't heard from a single person who says AI can't write code who does not have a financial interest, directly or indirectly, in humans writing code.
That seems rather disingenuous to me. I see many posts which clearly come from developers like you and me who are happy with the results they are getting.
Every time, people on here comment something about "shilling" or "boosters". It would seem to me that only in the rarest of cases does someone share their opinion to profit from it, while you act like that is super common.
Right: they disagree with me and so must not know what they’re talking about. Hey guess how I know neither of you are all as good as you think you are: your egos! You know what the brightest people at the top of their respective fields have in common? They tend not to think that new technologies they don’t understand how to use are dumb and they don’t think everyone who disagrees with them is dumb!
> you are clearly not an expert on the topic of software engineering should guide you here - unless you know what you are talking about, it's better to not say anything at all.
Yikes, pretty condescending. Also wrong!
IMO you are strawmanning pretty heavily here.
Believe it or not, using Claude to improve your productivity is pretty dissimilar to vibe coding a commercial airplane(?) which I would agree is probably not FAA approved.
I prefer not to toot my own horn, but to address an idea you seem to have that I don't know math or CS(?): I have a PhD in astrophysics and a decade of industry experience in tech and other domains, so I'm fairly certain I know how math and computers work. But maybe not!
I’m an expert in what I do. A professional, and few people can do what I do. I have to say you are wrong. AI is changing the game. What you’ve written here might’ve been more relevant about 9 months ago, but everything has changed.
Right I’m a bot made to promote AI like half the people on this thread.
I don’t know if you noticed a difference from other hype cycles but other ones were speculative. This one is also speculative but the greater divide is that the literal on the ground usefulness of AI is ALREADY going to change the world.
The speculation is that the AI will get better and will no longer need hand holding.
I'm having a lot of trouble understanding what you're trying to convey. You say there's a difference from previous "speculation" but also that it's still speculation. Then you go on to write "ALREADY going to" which is future tense (speculation), even clarifying what the speculation is.
So let me explain it more clearly. AI as it is now is already changing the game. It will reduce the demand for swes across every company as an eventuality, even if we hold technological progress fixed. There is no speculation here. This comes from on-the-ground evidence: what I see day to day, what I do, and my experience pair programming things from scratch with AI.
The speculation is this: if we follow the trendlines of AI improvement for the past decade and a half, the projection of past improvement indicates AI will only get better and better. It’s a reasonable speculation, but it is nonetheless speculative. I wouldn’t bet my life on continuous improvement of AI to the point of AGI but it’s now more than ever before a speculation that is not unrealistic.
Nice slop response. This is the same thing said about blockchain and NFTs, same schtick, different tech. The only thing "AI" has done is convince some people that it's a magical being that knows everything. Your comments seem to be somewhere on that spectrum. And, sure what if it isn't changing the world for the better, and actually makes things much worse? You're probably okay with that too, I guess, as long as your precious "AI" is doing the changing.
We've seen what social media and every-waking-hour access to tablets and the internet has done to kids - so much harm that some countries have banned social media for people under a certain age. I can see a future where "AI" will also be banned for minors to use, probably pretty soon too. The harms from "AI" being able to placate instead of create should be obvious, and children shouldn't be able to use it without adult supervision.
>The speculation is that the AI will get better and will no longer need hand holding.
This is nonsense. No AI is going to produce what someone wants without telling it exactly what to do and how to do it, so yes, it will always need hand holding, unless you like slurping up slop. I don't know you, if you aren't a bot, you might just be satisfied with slop? It's a race to the bottom, and it's not going to end up the way you think it will.
>This is nonsense. No AI is going to produce what someone wants without telling it exactly what to do and how to do it, so yes, it will always need hand holding, unless you like slurping up slop. I don't know you, if you aren't a bot, you might just be satisfied with slop? It's a race to the bottom, and it's not going to end up the way you think it will.
You're not thinking clearly. A couple of years ago we didn't even have AI that could do this. Then ChatGPT came out and we had AI that could barely do it. Then we had AI that could do simple tasks with a lot of hand holding. Now we have AI that can do complex human tasks with minimal hand holding. Where do you think the trendline is pointing?
Your hypothesis goes against all the evidence. It's wishful thinking and irrational. You call it a race to the bottom because you wish it to be a race to the bottom, and we both know the trendline is pointing in the opposite direction.
>We've seen what social media and every-waking-hour access to tablets and the internet has done to kids - so much harm that some countries have banned social media for people under a certain age. I can see a future where "AI" will also be banned for minors to use, probably pretty soon too. The harms from "AI" being able to placate instead of create should be obvious, and children shouldn't be able to use it without adult supervision.
I agree AI is bad for us. My claim is that it's going to change the world and it is already replacing human tasks. That's all. Whether that's good or bad for us is an ORTHOGONAL argument.
I use AI every day, and it's honestly crap. No, it isn't significantly improving; it's hitting a wall. Every new model release is less and less of an improvement, so no, the "trendline" is not going up as much as you seem to think it is. It's plateaued. The only way "AI" is going to change the world is if stupid people put it in places it really shouldn't be, thinking it will solve problems and not create even more problems.
Proof of what? Should you also have to prove you are not a bot sponsored by short-sellers? It’s all so so silly, anti-AI crowds on HN rehash so many of the same tired arguments it’s ridiculous:
- bad for the environment: how? Why?
- takes all creative output and doesn't credit it: Common Crawl has been around for decades and models have been training for decades; the difference is that now they're good. Regurgitating training data is a known issue for which there are mitigations, but welcome to the world of things not being as idealistic as some Stallman-esque hellscape everyone seems to want to live in
- it's bad, so no one should use it, and any professionals who do don't know what they're doing: I have been fortunate to personally know some of the brightest minds on this planet (astro departments, AI research labs), and the majority of them use AI for their jobs.
>Should you also have to prove you are not a bot sponsored by short-sellers?
On a 35 day-old account, yes. Anything "post-AI" is suspect now.
The rest of your comment reads like manufactured AI slop, replying to things I never even wrote in my one sentence comment. And no surprise coming from an account created 1 day ago.
I think it’s quite obvious I’m not writing AI slop.
The latest ChatGPT, for example, will produce comments that are now distinguishable from the real thing only because they're much better written. It's insane that the main visible marker right now is that the arguments and writing it crafts are superior to what your average Joe can write.
My shit writing can't hold a candle to it, and that's pretty obvious. AI slop is not accepted here, but I can post an example of what AI slop will now look like. If AI responded to you, it would look like this:
Fair to be skeptical of new accounts. But account age and “sounds like AI” are not workable filters for truth. Humans can write like bots, bots can write like humans, and both can be new. That standard selects for tenure, not correctness.
More importantly, you did not engage any claim. If the position is simply “post-AI content from new accounts is suspect,” say that as a moderation concern. But as an argument, suspicion alone does not refute anything.
Pick one concrete claim and say why it is wrong or what evidence would change your mind. Otherwise “this reads like slop” is just pattern matching. That is exactly the failure mode being complained about.
I accused another user of writing AI slop in this specific thread, and here you are inserting yourself as if you are replying to a comment I made to the other user. You certainly seem desperate to boost "AI" as much as you can. Your 37-day-old account is just as suspect as their 3-day-old account. I'm not engaging with you any more, so replying is kind of pointless.
Obviously not a troll; I know I'm bragging. But I have to emphasize that it is not some stupid "only domain experts know AI is shit; everyone else is too stupid to understand how bad it is." That is patently wrong.
Few people can do what I do and as a result I likely make more money than you. But now with AI… everyone can do what I do. It has leveled the playing field… what I was before now matters fuck all. Understand?
I still make money right now. But that’s unlikely to last very long. I fully expect it to disappear within the next decade.
You are wrong. People like yourself will likely be smart enough to stay well employed into the future. It's the folks who are arguing with you trying to say that AI is useless who will quickly lose their jobs. And they'll be all shocked Pikachu face when they get a pink slip while their role gets reassigned to an AI agent
> It's the folks who are arguing with you trying to say that AI is useless who will quickly lose their jobs.
Why is it that in every hype there are always the guys like you that want to punish the non-believers? It's not enough to be potentially proven correct, your anger requires the demise of the heretics. It was the same story for cryptocurrencies.
He/she is probably one of those poor souls working for an AI-wrapper-startup who received a ton of compensation in "equity", which will be worth nothing when their founders get acquihired, Windsurf style ;) But until then, they get to threaten us all with the impending doom, because hey, they are looking into the eye of the storm, writing Very Complex Queries against the AI API or whatever...
Isn't this the same type of emotional response he's being accused of? You're speculating that he will be "punished", just as he speculated about you.
There are emotions on both sides, and the goal is to call them out, throw them to the side, and cut through to the substance. The attitude should be "which one of us is actually right?" rather than the "I'm right and you're a fucking idiot" attitude I see everywhere.
Mate, I could not care less if he/she got "punished" or not. I was just guessing at what might be driving someone to go and try to answer each and every one of my posts with very low-quality comments, reeking of desperation and "elon-style" humour (cheap, cringe puns). You are assuming too much here.
Not too dissimilar to you. I wrote long rebuttals to your points and you just descended into put-downs, stalking, and false accusations. You essentially told me to fuck off from all of HN in one of your posts.
Bro, idk why you waste your time writing all this. No one cares that you were an early adopter; all that means is that you used the rudimentary LLM implementations available from 2022-2024, which are now completely obsolete. Whatever experience you think you have with AI tools is useless because you clearly haven't kept up with the times. AI platforms and tools have been changing quickly. Every six months the capabilities have massively improved.
Next time before you waste ten minutes typing out these self aggrandizing tirades maybe try asking the AI to just write it for you instead
Maybe he's already ahead of you by not using current models, 2026 models are going to make 2025 models completely obsolete, wasting time on them is dumb.
This is such a fantastic response. And outsiders should very well be made aware what kind of plane they are stepping into. No offence to the aviation enthusiasts in your example but I will do everything in my power to avoid getting on their plane, in the same way I will do everything in my power to avoid using AI coded software that does anything important or critical...
> but I will do everything in my power to avoid getting on their plane
Speaking of airplanes... considering how much LLM usage is being pushed top-down in many places, I wonder how long until news drops of some catastrophic one-liner that got through via LLM-generated code...
"Littered" is a great verb to use here. Also I did not ask for a deviated proxy non-measure, like how many people who are choking themselves to death in a meaningless bullshit job are now surviving by having LLMs generate their spreadsheets and presentations. I asked for solid proof of succesful, commercial products built up by dreaming them up through LLMs.
The proof is all around you. I am talking about software professionals, not some bullshit spreadsheet thing.
What I'm saying is this: from my POV, everyone is using LLMs to write code now. The overwhelming majority of software products in existence today are now being changed with LLM code.
The majority of software products being created from scratch are also mostly LLM code.
This is obvious to me. It's not speculation; where I live, where I'm from, and where I work, it's the obvious status quo. When I see someone like you, I think that because the change happened so fast, you're one of the people living in a bubble. Your company and the people around you haven't started using it because the culture hasn't caught up.
Wait until you have that one coworker who's going at 10x the speed of everyone else and you find out it's because of AI. That is what will slowly happen to these bubbles. To keep pace you will have to switch to AI to see the difference.
I also don't know how to offer you proof. Do you use Google? If so, you've used products that have been changed by LLM code. Is that proof? Do you use any products built by a startup in the last year? The majority of that code will have been written by an LLM.
> Your company and the people around you haven’t started using it because the culture hasn’t caught up.
We have been using LLMs since 2021, if I haven't repeated that enough in these threads. What culture do I have to catch up with? I have been paying for top-tier LLM models for my entire team since that became an option. Do you think you are proselytizing to the uninitiated here? That is a naive view at best. My issue is that the tools are at best a worse replacement for pre-2019 Google search, and at worst a huge danger in the hands of people who don't know what they are doing.
Doesn’t make sense to me. If it’s bad why pay for the tool?
Obviously your team disagrees that it's a worse replacement for Google, or else why demand it against your will?
> at worst a huge danger in the hands of people who don't know what they are doing.
I agree with this. But the upside negates this and I agree with your own team on that.
Btw, if you're paying top dollar for AI, your developers are unlikely to be using it as a Google search replacement. At top dollar, AI is used as an agent. What it ends up doing in this mode is extremely different from a Google search. That may be good or bad, but it is a distinctly different outcome than a Google search, which makes your Google analogy ill-fitted to what your team is actually using it for.
Have you had your head in the sand for the past two years?
At the recent AWS conference, they were showcasing Kiro extensively, with real-life products that have been built with it. And Amazon developers allege that they've all been using Kiro and other AI tools and agents heavily for the past year-plus to build AWS's own services. Google and Microsoft have reported similar internal efforts.
The platforms you interact with on a daily basis are now all being built with the help of AI tools and agents
If you think no one is building real commercial products with AI then you are either blind or an idiot, or both. Why don't you spend two seconds emailing your company's AWS ProServe folks and ask them; I'm sure they'll give you a laundry list of things they're using AI for internally, and sign you up for a Kiro demo as well.
Amazon, Google, and Microsoft are balls-deep invested in AI; a rational person should draw zero conclusions from them showcasing how productive they are with it.
I'd say it's more that the fear of their $50-billion-plus investments not paying off is creeping up on them.
It's OK to have this prior, but these are not speculative tools and capabilities; they exist today. If you remain unimpressed by them, that's fine. But real people (not bots!) and real companies get serious benefits _today_ (we measure lots of stuff; I've seen the data at a large MAANG and have used their internal and external tools), and we still have about 4 more orders of magnitude to scale _existing_ paradigms. The writing on the wall is so obvious. It's fine and reasonable to be skeptical, and there are so many serious societal risks and issues to worry about and champion, but if your position is akin to "this is all hype", it makes absolutely no sense to me.
I'm sure you're interacting with a ton of tools built via agents. Ironically, even in software engineering people are trying to human-wash AI code due to anti-AI bias from people who should know better (if you think 100% of LLM outputs are "slop", with no quality consideration factored in, you're hopelessly biased). "Commercialized" seems like an arbitrary and pointless bar; I've seen some hot garbage that's commercialized and some great code that's not.
> I'm sure you're interacting with a ton of tools built via agents, ironically even in software engineering people are trying to human-wash AI code due to anti-AI bias
Please, just for fun: reach out to, for example, Klarna support via their website, and tell me how much of your experience can be attributed to anti-AI bias and how much to the fact that the LLMs are complete shit for any important production use case.
My man here is reaching out to Klarna support; this tells a LOT about his life decision-making skills, which clearly shine through in his comments on the topic of AI as well.
> I’ve done things with Claude I never thought possible for myself to do,
That's the point, champ. They seem great to people when applied to some domain they are not competent in, because those people cannot evaluate the issues. So you've never programmed but can now scaffold a React application and a basic backend in a couple of hours? Good for you, but for the love of god have someone more experienced check it before you push to production. Once you apply them to any area where you have at least moderate competence, you will see all sorts of issues that you just cannot unsee. Security and performance are often a problem, not to mention the quality of the code...
This is remarkably dismissive and comes across as arrogant. In reality they assist many people with expert skills in a domain in getting things done in areas they are competent in, without getting bogged down in tedium.
They need a heavy policing hand to make sure they do the right thing. Garbage in, garbage out.
The smarter the hand of the person driving them, the better the output. You see a problem, you correct it. Or make them correct it. The stronger the foundation they're starting from, the better the production.
It's basically the opposite of what you're asserting here.
> So you've never programmed but can now scaffold a React application and basic backend in a couple of hours?
Ahaha, weren't you the guy who wrote an opus about planes? Is this your baseline for "stuff where LLMs break and real engineering comes into the room"? There's a harsh wake-up call for you around the corner.
What wake-up call, mate? I've been on board as an early adopter since the GH Copilot closed beta in 2021, around the time when you had not even heard about LLMs. I am just being realistic about the limits of the technology. In the 90s, we did not need to convince people about the Internet. It just worked. Also, what opus? Have the LLMs affected your attention span so much that you consider what a primary-school first-grader would typically read during their first class an "opus", no less? No wonder you are so easily impressed.
I expect it’s your “I’m an expert and everyone else is merely an idiot child” attitude that’s probably making it hard to take you seriously.
And don’t get me wrong - I totally understand this personality. There are a similar few I’ve worked with recently who are broadly quite skeptical of what seems to be an obvious fact to me - their roles will need to change and their skillsets will have to develop to take advantage of this new technology.
I am a bit tired of explaining, but I run my own company, so it's not like I have to fear my "roles and responsibilities" changing; I am designing them myself. Nor am I a general skeptic of the "YAGNI" type: my company and I have been early adopters of many trends, those that made sense, of course. We also tried to be early adopters of LLMs, all the way back in 2021. And I am sorry if this sounds arrogant to you, but anyone still working on and with them looks to me like the folks who were trying to build computers and TVs with vacuum tubes. With the difference that vacuum-tube computers were actually useful at the time.
95% of companies fail. Yours will too, don't worry. Amazon themselves have already been using in-house versions of this to build AWS for over a year (https://kiro.dev/). You can either continue adopting AI in your company or you can start filing your company's bankruptcy papers.
What would you need to see to change your mind? I can generate at mind-boggling scale. What’s your threshold for realizing you might not have explored every possible vector for AI capabilities?
What you wrote here was relevant about 9 months ago. It's now outdated. The pace and velocity of AI improvement can only be described as violent. It is so fast that there are many people like you who don't get it.
The last big release from OpenAI was a big giant billion-dollar flop. Its lackluster update was written about far and wide, even here on HN. But maybe you're living in an alternate reality?
My experience comes from the fact that after over a decade of working as a swe I no longer write code. It’s not some alternate reality thing or reading headlines. It’s my daily life that has changed.
Have you used AI before? Agentic systems are set up so they give you a diff before even committing to a change. Sounds like you haven't really used AI agentically yet.
Disrespect the trend line and get rolled over by the steamroller. Labs are cooking and what is available commercially is lobotomized for safety and alignment. If your baseline of current max capability is sonnet 4.5 released just this summer you’re going to be very surprised in the next few months.
Right, like I was steamrolled by the "Team of Pocket PhD Experts" announced earlier this year with ChatGPT 5? Remember that underwhelming experience? The Grok to which you could "paste your entire source code file"? The constantly debilitating Claude models? Satya Nadella desperately dropping down to a PO role and bypassing his executives to try to micro-manage Copilot product development, because the O365 Copilot experience is getting MASSIVE pushback globally from teams and companies forced to use it? Or is there another steamrolling coming? What is it this time? Zuckerberg implements 3D avatars in a metaverse, with legs, that can walk around and talk to us via LLMs? And then they sit down at virtual desks and type on virtual keyboards to produce software? Enlighten me please!
First examine your post. Can you create a 3D avatar with legs that can walk and talk?
If not, then for this area you've been steamrolled.
Anyway, the main point is: you're looking at the hype headlines, which are ludicrous. Where most optimists come from is that they are using it daily to code. To them it's right in front of their eyes.
I’m not sure what your experience is but my opinion on AI doesn’t come from speculation. It comes from on the ground experience on how AI currently has changed my job role completely. If I hold the technology to be fixed and to not improve into the future then my point still stands. I’m not speculating. Most AI optimists aren’t speculating.
The current on-the-ground performance is what's causing the divide. Some people have seen it fully; others have only had a rudimentary trial.
I have a hard time trusting the judgement of someone writing this:
> I no longer write code. I’ve been a swe for over a decade. AI writes all my code following my instructions. My code output is now expected to be 5x what it was before because we are now augmented by AI. All my coworkers use AI. We don’t use ChatGPT we use anthropic. If I didn’t use AI I would be fired for being too slow.
You should drop the prejudice and focus on being aware of the situation. This is happening all over the world; most people who have crossed this bridge just don't share it, just like they don't share that they brushed their teeth this morning.
People are sharing it. Look at this entire thread. It’s so conflicted.
We have half the thread saying it’s 5x and the other half saying they’re delusional and lack critical thinking.
I think it’s obvious who lacks critical thinking. If half the thread is saying on the ground AI has changed things and the other half just labels everyone as crazy without investigation… guess which one didn’t do any critical thinking?
Last week I built an app that cross-compiles into Tauri and Electron that's essentially a Google Earth clone for farms. It uses Mapbox and deck.gl, and you can play back GPS tracks of tractor movements, with the GPS traces changing color as the tractor moves, in actual real time. There's pausing, seeking, bookmarking, skipping. All happening in real time because it's optimized to use shader code and uniforms for these updates rather than redrawing the layers. There's also color grading for GPS fix values and satellite counts, which the user can switch to instantaneously with zero slowdown on tracks with thousands and thousands of points. It all interfaces with an API that scans GCP storage for GPS tracks and organizes them into a queryable API that works with our Firebase-based authentication. The backend is deployed by Terraform, written in strictly typed TypeScript, and automatically deployed and checked by GHA. Of course, the Electron and Tauri apps have GUI login interfaces that work fully correctly with the backend API, and it all looks professionally designed, like a movie player merged with Google Earth for farm orchards.
I have a rudimentary understanding of many of the technologies involved in the above. But I was able to write that whole internal tool in less than a week thanks to AI. I couldn't have pulled it off without that rudimentary understanding (some novice swe couldn't really have done it, because of the optimizations involved), but that's literally all I needed. I never wrote shader code for prod in my life, and left to its own devices the AI would have come up with an implementation that's too laggy to work properly.
That's all that's needed: some basic high-level understanding. AI did everything else, and now our company has an internal tool that is polished beyond anything that would've been given the effort before AI.
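For what it's worth, the "uniforms instead of redrawing" trick described above maps onto a well-known deck.gl pattern. The sketch below is only my guess at the shape of it, using deck.gl's TripsLayer; the data shape, prop values, and the tracks variable are all assumptions, since the actual code isn't shown. The point it illustrates: advancing playback only changes the currentTime uniform, while the GPS geometry stays on the GPU.

    import {Deck} from '@deck.gl/core';
    import {TripsLayer} from '@deck.gl/geo-layers';

    // Assumed track shape: a GPS path plus one timestamp per vertex.
    type Track = {path: [number, number][]; timestamps: number[]};
    declare const tracks: Track[]; // hypothetical: loaded from a storage-scanning API

    function trackLayer(currentTime: number) {
      return new TripsLayer({
        id: 'tractor-tracks',                       // stable id lets deck.gl reuse GPU buffers
        data: tracks,
        getPath: (d: Track) => d.path,
        getTimestamps: (d: Track) => d.timestamps,
        currentTime,                                // a shader uniform: cheap to change every frame
        trailLength: 120,                           // assumed: seconds of trail kept visible
        widthMinPixels: 3,
      });
    }

    const deck = new Deck({
      initialViewState: {longitude: -120.5, latitude: 37.3, zoom: 12}, // assumed location
      controller: true,
    });

    let t = 0;
    (function tick() {
      deck.setProps({layers: [trackLayer(t)]});     // same data + id => props diff only, no re-upload
      t += 1 / 60;
      requestAnimationFrame(tick);
    })();

A naive version would rebuild the layer data every frame, which is the "too laggy" implementation the poster says the AI produced when left to its own devices.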
I’m willing to bet you didn’t use AI agents in a meaningful way. Maybe copying and pasting some snippets of code into a chatbot and not liking the output. And then you do it every couple of weeks to have your finger on the pulse of AI.
Go deeper. Build an app with AI. Hand-hold it into building something you never built before. It's essentially a pair-programming endeavor. I'm willing to bet you haven't done this. Go in with the goal of building something polished and don't automatically dismiss it when the AI does something stupid (it inevitably will). Doing this is what actual "critical thinking" is.
> I think it’s obvious who lacks critical thinking.
My critical thinking is sharp enough to recognize that you're the recently banned ninetyninenine user [0]. Just as unbalanced and quarrelsome as before, I can see. It's probably better to draw some conclusions from a ban and adjust, or just leave.
Bro, no one said "5x now or you're fired"; that's your own imagination adding flavor to it.
It's obvious to anyone that if your output is 5x less than everyone else's, you will eventually be let go. There's no paradigm shift where the boss suddenly announces that. But the underlying, unsaid expectation is obvious given what everyone is doing.
What happened was this: a couple of new hires and some current employees started using AI. Their output was magnified; they were not only producing more but also deploying code outside their areas of expertise, doing dev ops, infra, backend, and frontend.
This spread, and within months everyone in the company was doing it. The boss can now throw a frontend job to a backend developer and expect completion in a day or less. This isn't every task, but for the majority of tasks such output is normal.
If you’re not meeting that norm it’s blindingly obvious. The boss doesn’t need to announce anything when everyone is faster. There was no deliberate culture shift where the boss announced it. The closest equivalent is the boss hiring a 10x engineer to work alongside you and you have to scramble to catch up. The difference is now we know exactly what is making each engineer 10x and we can use that tool to also operate at that level.
Critical thinking my ass. You’re just labeling and assuming things with your premeditated subconscious bias. If anything it’s your perspective that is religious.
Also can you please stop stalking me and just respond to my points instead of digging through my whole profile and attempting to do character assassinations based off of what I wrote in the past? Thanks.
Whether you agree with it or not is beside the point. The point is it's happening.
Your initial stance was disbelief. Now you’re just looking down at it as unprofessional.
Bro, I fucking agree. It’s unprofessional. But the entire point initially was that you didn’t believe it and my objective was to tell you that this is what’s happening in reality. Scoff at it all you want, as AI improves less and less “professional” people will be able to enter our field and operate at the same level as us.
I don't understand this idea that non-believers will be "steamrolled" by those who are currently adopting AI into their workflows. If their claims are validated and the new AI workflows end up achieving that claimed 10x productivity speedup, or even a 2x speedup, nobody is cursed to be steamrolled; they'll simply adopt those same workflows, same as everyone else. In the meantime, they aren't wasting their time trying to figure out the best way to coax and beg the LLMs into better performance.
That's actually what I'm arguing for; use tools where they are applicable. I'm against blind contrarianism and the 'nothing ever happens' attitude since that IME is being proven more wrong each week.
Seems fine, works, is fine, is better than if you had me go off and write it on my own. You realize you can check the results? You can use Claude to help you understand the changes as you read through them? I just don't get this weird "it makes mistakes and it's horrible if you understand the domain it is generating over". I mean, yes, definitely sometimes, and definitely not other times. What happens if I DON'T have someone more experienced to consult, or they ignore me because they are busy, or they're wrong because they are also imperfect and not focused? It's really hard to be convinced that this point of view is not just some knee-jerk reaction justified post hoc.
Yes, you can ask them "to check it for you". The only little problem is, as you said yourself, "they make mistakes"; therefore: YOU CANNOT TRUST THEM. Just because you tell them to "check it" does not mean they will get it right this time. Again, however "fine" it seems to you: please, please, please have a more senior person check that crap before you inflict serious damage somewhere.
Nope, you read their code, ask them to summarize changes to guide your reading, ask it why it made certain decisions you don’t understand and if you don’t like their explanations you change it (with the agent!). Own and be responsible for the code you commit. I am the “most senior”, and at large tech companies that track, higher level IC corresponds to more AI usage, hmm almost like it’s a useful tool.
Ok but you understand that the fundamental nature of LLMs amplifies errors, right? A hallucination is, by definition, a series of tokens which is plausible enough to be indistinguishable from fact to the model. If you ask an LLM to explain its own hallucinations to you, it will gladly do so, and do it in a way that makes them seem utterly natural. If you ask an LLM to explain its motivations for having done something, it will extemporize whichever motivation feels the most plausible in the moment you're asking it.
LLMs can be handy, but they're not trustworthy. "Own and be responsible for the code you commit" is an impossible ideal to uphold if you never actually sit down and internalize the code in your code base. No "summaries," no "explanations."
So your argument is that if people don't use the tool correctly they might get incorrect results? How is that relevant? If you Google search for the wrong query you'll similarly get incorrect results
I cannot stop thinking about the LLMs having this Midas touch in reverse, because everything they touch seems to get ruined or makes people want to avoid it, for example:
- Studio Ghibli-style graphics
- the infamous em-dashes and bullet points
- customer service (just try to use Klarna's "support" these days...)
- Oracle's share price ;) imagine being one of the world's most solid and unassailable tech companies, losing to your CEO's crazy commitment to the LLMs...
- Internet content: we now triple-check every Internet source we don't know to the core...
- and now also the chips?
Where does it stop? When we decide to drop all technology as it is?
I am not sure how, or even if, it does stop. I assume once the hot air from LLM company CEOs starts being treated as the flatulence that it is, things will wind down. The sentiment against generated content is not going away.
In previous eras there were many purists who considered photography not-art, sequencer and synthesizer made music not-music, other forms of (non-AI) digital art less legitimate than their more manual classical counterparts, etc. This is the same discourse all over again.
Is electronic music where the artist composes it on a screen and then hits 'play' music? I think it is, of course, but I have had experiences where I went to see a musician "live" and well... they brought the laptop with them. But I think it still counts. It was still fun.
AI slop is to AI art what point and shoot amateur photography is to artistic photography. The difference is how much artistic intent and actual work is present. AI art has yet to get people like Ansel Adams, but it will -- actual artists who use AI as a tool to make novel forms and styles of art.
Anti-photography discourse sounds exactly like anti-AI discourse to the point that you could search and replace terms and have the same rants.
Another thing I expect to see is novelists using AI to create at least passable live action versions of their stories. I don't think these will put real actors or actresses out of work for a long time, but I could see them serving as "sizzle reels" to sell a real production. If an author posts their AI-generated film of their novel and it gets popular, I could see a studio picking it up and making a real movie or TV show from it.
> Is electronic music where the artist composes it on a screen and then hits 'play' music?
If X composes something, X is an artist. The person playing a composed work is a performer. Some people have both the roles of artist and performer for a given work.
To say an AI composes something is anthropomorphizing a computer. If you enter a prompt to make a machine generate work based on existing artists' art, you're not composing (in the artistic sense) and neither is the computer. Math isn't art even if it's pretty or if mathematical concepts are used in art.
The term "director", instead of composer or artist, conveys a lot better what's happening when you tell machines to generate art via prompts.
I mostly agree with your sentiment, but saying "math is not art" is the same as saying "writing is not art". Calculation isn't art. But math isn't calculation. Math is a social activity shared between humans. Like writing, much of it is purely utilitarian. But there's always an aesthetic component, and some works explore that without regard to utility. It's a funny kind of art, accessible to few and beautiful to even fewer. But there is an art there.
Incorrect. Art is practice. It's literally what the word means historically. Put in "Etymology of the word 'art'" in your favorite search engine or LLM.
If someone is entering a prompt to generate an image in a model I have access to, I don't really need to pay them to do it, and I definitely don't need to pay them as much as I would an actual artist. So it is deceptive for them to represent themselves as someone who could actually draw or paint that. If the product is what counts, then truth in advertising is required so the market can work.
The vast majority of artists in all fields don't really have their own style and are just copying other people's. Doesn't matter whether we're talking about art, literature, music, film, whatever.
It takes a rare genius to make a new style, and they come along a few times a generation. And even they will often admit they built on top of existing styles and other artists.
I'm not a fan of AI work or anything, but we need to be honest about what human 'creativity' usually is, which for most artists is basically copying the trends of the time with at most a minor twist.
OTOH, I think when you start entering the fringes of AI work you really start seeing how much it's just stealing other people's work though. With more niche subjects, it will often produce copies of the few artists in that field with a few minor, often bad, changes.
Sure, you can say that AI is just "stealing like an artist", but that makes the AI the artist in this scenario, not the prompter.
It bothers me that all of the AI "artists" insist that they are just the same as any other artist, even though it was the AI that did all of the work. Even when a human artist is just copying the styles they've seen from other artists, they still had to put in the effort to develop their craft to make the art in the first place.
Keyboards have had functions that let them play music at the touch of button for decades.
Decades later we still don't consider anyone using that function a musician.
>actual artists who use AI as a tool to make novel forms and styles of art.
writing a prompt lol
We don't compare Usain Bolt to Lewis Hamilton when talking about fastest runners in the world.
But hey think about how much money you could save on a wedding photographer if you just generate a few images of what the wedding probably looked like!
The (wedding) photographer is likely going to use this AI themselves though. They used Photoshop way back in the day to touch up images. They're going to be doing the same with genAI. Content-aware fill is one of the most useful tools they have.
There is a "demo" button on synthesizers that plays a canned melody, therefore playing canned melodies is all synthesizers can do, therefore nobody that uses a synthesizer is a real musician.
I'm not against AI art per se, but at least so far, most “AI artists” I see online seem to care very little about the artistry of what they’re doing, and much much more about selling their stuff.
Among the traditional artists I follow, maybe 1 out of 10 posts is directly about selling something. With AI artists, it’s more like 9 out of 10.
It might take a while for all the grifters to realize that making a living from creative work is very hard; only then will more genuinely interesting AI art start to surface. I started following a few because I liked an image that showed up in my feed, but quickly unfollowed after being hit with a daily barrage of NFT promotions.
I don't believe that there is near enough room for creativity to shine through in the prompt-generation pipeline, and I find the mention of a talent like Ansel Adams in this context asinine. There is no control there, and without control over creation I don't believe that creativity CAN flourish, but I may be wrong.
Electronic music is analogous to digital art made by humans, not generated art.
How much room for creativity is there with a camera? Angle, lighting, F-stop, film type, film processing? I have a local image generator app called Draw Things that has many times more options than this.
Early synthesizers weren't that versatile either. Bands like Pink Floyd actually got into electronics and tore them apart and hacked them. Early techno and hip-hop artists did similar things and even figured out how to transform a simple record player into a musical instrument by hopping the needle around and scratching records back and forth with tremendous skill.
Serious AI artists will start tearing apart open models and changing how they work internally. They'll learn the math and how they work just like a serious photographer could tell you all about film emulsions and developing processes and how film reacts to light.
Art's never about what it does. It's about what it can do.
> How much room for creativity is there with a camera? Angle, lighting, F-stop, film type, film processing?
How many subjects exist in the world to be photographed? How many journeys might one take to find them? How many stories might each subject tell with the right treatment?
> Serious AI artists will start tearing apart open models and changing how they work internally. They'll learn the math and how they work just like a serious photographer could tell you all about film emulsions and developing processes and how film reacts to light.
I agree that "AI art" as it exists today is not serious.
"AI art" today is mostly play, which is usually the first thing you get with new artistic tools. People just fool around with them in an un-serious way. There's also some porn. Porn is always early. It was one of the first uses for moving pictures, for example.
"The early adopters of new technologies are usually porn and the military." Forget where I heard that but it's largely true.
I do not think that the things you say will happen will ever happen.
Also, photography has the added benefit of documenting the world as it is, but through the artist's lens. That added value does not exist when it comes to slop.
Defining art in this way is like defining intelligence as the possession of a degree from Stanford. It's just branding.
Art shouldn't make you feel comfortable and safe. It should provoke you, and in this sense AI art is doing the job better than traditional art at the moment.
Other than the technological aspect, there's nothing new under the sun here. And at its very worst, AI art is just Andy Warhol at hyperscale.
I think it's actually quite apt to look at all of "AI art" as a single piece, or suite, with a unified argument or theme. Maybe in that sense it is some kind of art, even if it wasn't intended that way by its creators.
Similarly, I'm not sure that argument is making the point those who deploy it intend to make.
I think the entire fear-of-AI schtick to farm engagement is little more than performance art for our FAANG overlords, personally. It behaves precisely like the right-wing manosphere, but with different daily talking points repeated ad nauseam. Bernie Sanders has smelled the opportunity here and really stepped up his game.
But TBF, performance art theatre is art as well.
The end game IMO will be the incorporation of AI art toolsets into commercial art workflows, and a higher value placed on 100% human art (however that ends up being defined). And then we'll find something new and equally idiotic to trigger us, or else we might run out of excuses and/or scapegoats for our malaise.
> incorporation of AI art toolsets into commercial art workflows and a higher value placed on 100% human art
I don't even really believe serious artists need to totally exclude themselves from using genAI as a tool, and I've heard the same from real working artists (generally those who have established careers doing it). Unfortunately, that point inhabits the boring ideological center and is drowned out by the screaming from both extremes.
They aren't, but some are already using pseudonyms to experiment with it to avoid the haters condemning them for doing so. And their work is predictably far superior from the get-go to asking Sora to ghiblify your dog.
> Art shouldn't make you feel comfortable and safe. It should provoke you and in this sense AI art is doing the job better than traditional art at the moment here.
Jumpscares and weapons being used on others aren't art.
I Ghiblified a photo of my dog when chatgpt 4 came out. I was utterly horrified by the results.
It's exciting being able to say that I am an artist, I always wondered what my life would have been had I gone into the arts, and now I can experience it! Thank you techmology.
If you really want to experience the struggles and persecution of an artist, you should empty your bank account and find a life partner to support you while you struggle with your angst and inner trauma that are the source of your creativity. But, to be fair, complaining about AI art is a great start down that path!
How else would you address the incessant ramblings of people who figuratively curse the sunset daily? After AI art has been integrated into the already existing suite of digital art applications (which themselves were once not considered art), whatever shall you complain about next?
Now if you wanted to define art to require 100% bodily fluids and solids 100% handcrafted to be the only real art, now that I'd understand.
You may check these videos by Oleg Kuvaev.
100% generated using AI.
Everything: text, music, characters, voices, editing -- all done via prompts, using multiple engines (I think he mentioned about a dozen services involved).
I would not call it "high art", but it's definitely not slop; it's an artist skillfully using AI as a tool.
While we're sharing AI generated videos, IGORRR's ADHD music video [0] is definitively art, zero question about it. I don't think typing a prompt in and taking the output as it comes is art -- good art, anyway (the point-and-shoot photography comparison is apt) -- but that doesn't mean AI can't be used to make truly new, creative and unique art too.
This is absolutely slop. Higher-quality slop, but slop nonetheless. Ask yourself: what does it say? What does it change in you? How does it make you feel?
Artists use their medium to communicate. More often than not, everything in a piece is deliberate. What is being communicated here? Who deliberated on the details?
Those videos are as much "art" as Marvel's endless slop is "art".
You know that you can give a drawing as input for image generation, right? I think there's a lot of creativity possible with AI image generation. Things like ControlNet and the various LoRAs, upscale methods, etc. all add a lot of choice.
> I don't believe there is nearly enough room for creativity to shine through in the prompt-generation pipeline
I mean, you are building a prompt and tweaking it. And even if you didn't do that, you could still argue that finding it is in itself a creative act, akin to found art [1].
I suppose. You're "finding" something that didn't exist and that nobody ever cared about. Something that you wrote, mashed against the tensors trained on real artist creations, and out came the thing that you "found".
I'm genuinely amazed at how some people perceive art.
To me, art has always been "an interesting idea". Decorative things that take skill are, to me, crafts. Sure, it's a watercolor of your garden, but what does it tell us about the human condition? Sure, it's skilled... but it's empty. Give me Jackson Pollock or Picasso. Give me a new way to see the world. Pure skill, to me, is about as impressive as cup-stacking.
Not saying you have to agree, but it is a distillation of how some portion of the world sees the world.
I don't believe there is nearly enough room for creativity to shine through in the prompt-generation pipeline.
You seem so sure that you'll always be able to tell what you're looking at, and whether it's the result of prompting or some unspecified but doubtlessly-noble act of "creativity."
> AI slop is to AI art what point and shoot amateur photography is to artistic photography.
Sorry... It's all slop, buddy. The medium is the message, and genAI's message is "I want it cheap and with low effort, and I don't care too much about how it looks"
It is more useful to think about it in terms of what that effort actually entails.
If you haven't ever written a novel, or even a short story, you cannot possibly imagine how much of your own weird self ends up in it, and that is a huge part of what will make it interesting for people to read. You can also express ideas as subtext, through the application of technique and structure. I have never reached this level with any form of visual art but I imagine it's largely the same.
A prompt, or even a series of prompts, simply cannot encode such a rich payload. Another thing artists understand is that ideas are cheap and execution is everything; in practice, everything people are getting out of these AI tools is founded on a cheap idea and built from an averaging of everything the AI was trained on. There is nothing interesting in there, nothing unique, nothing more than superficially personal; just more of the most generic version of what you think you want. And I think a lot of people are finding that that isn't, in fact, what they want.
At the very least, art usually contains effort signifiers. Yes, an artist could potentially employ gingerbread men cut from construction paper in a work, but no, construction paper gingerbread men are typically not in the same league as David.
For fun, I decided to try find-and-replace on this comment:
> Sorry... It's all slop, buddy. The medium is the message, and photography's message is "I want it cheap and with low effort, and I don't care too much about how it looks"
Hmm... it seems like you have failed to actually make an argument here.
The logical implication of your view is that if someone or something has a halo, they can shit in your mouth and it's "good." The medium is the message, after all.
This is the same pretentious art bullshit that regular people fucking hate, just repackaged to take advantage of public rage at tech bro billionaires.
Whatever, man, this guy isn't wrong. Look at the example he gave: how the camera made it so that anyone could do what only a few could. Novel art is just a candid shot now. It forced art to completely change its values. Much the same will happen now. The difference is that in the past we still needed artists to take advantage of the new tools, while now it can all be completely automated. It's disgusting, but I'm sure purists thought the same of every innovation.
I'm not sure people remember when PCs and inkjet printers became affordable, right around the time MS Word added clip art. Those black figurines with a light bulb above them, with some text written above or below in either Comic Sans or 3D "word art", were absolutely everywhere. Digital typesetting was bad when it started (see Donald Knuth's rant about it, which led to TeX), but you have to imagine the horror of normal people suddenly trying to lay out stuff in Word without a hint of competence. This is exactly what is happening right now with LLMs: some people will find the right amount of usage, others won't, and that's OK. The problem back then wasn't MS Word per se (bar some stupid defaults Microsoft had borked completely), and neither are LLMs inherently the problem right now. We are in a seemingly never-ending hype cycle, but even that will pass.
> Where does it stop? When we decide to drop all technology as it is?
Whenever you want.
Of course you can't directly control what other people do or how much they use technology. But you have lots of direct control over what you use, even if it's not complete control.
I stopped taking social media seriously in the early 2010s. I'm preparing for a world of restricted, boring, corporate, invasive Internet, and developing interests and hobbies that don't rely on tech. We've had mechanisms for networking people without communications tech for thousands of years; it's probably time to relearn those (the upper classes never stopped using them). The Internet will always be there, but I don't have to use it more than my workplace requires, and I can keep personal use of it to a minimum. Mostly that will mean using the Internet to coordinate events and meet people, and little else.
Ha, I can only wish. Maybe true if you live in NYC, SF, Berlin or London.
But most of these don't exist, or don't help with socializing and making new connections, where I live (a medium-sized European university city).
Everyone here only hangs out with their family and school/university mates and that's it. Any other available events are either for college students or for lonely retirees, with nothing in between.
> Everyone here only hangs out with their family and school/university mates and that's it.
If you can get a few people from 2 of these groups together more than once, you've started solving this problem. Of course keeping it going for a long time is a challenge, and you want to avoid always being in the situation where you are doing all the work and others aren't contributing, but it gets easier and better with experience.
Except that if you're not anyone's family and not in university anymore, you're shit out of luck: people in their 30s already have their social circles completed and don't have the space, time, or energy to add new strangers when they barely have free time to hang out with their existing clique.
There are also private group chats open only to selected elite and wealthy people. When you see several prominent people suddenly make similar public statements on a particular issue there's a good chance they used those group chats behind the scenes to coordinate messaging.
I agree with you, and I will also anecdotally note that I've been personally observing more and more of the younger generations (Z, and especially Gen Alpha) adopt these mechanisms en masse, viewing social media as the funhouse simulation of socialization that it always was and finding true social connection through other means.
> Where does it stop? When we decide to drop all technology as it is?
It doesn't stop, because it's not the "technology" driving AI. You already acknowledged the root cause: CEOs. AI could be great, but it's currently being propped up by sales hype and greed. Sam wants money and power. Satya and Sundar want money and power. Larry and Jensen also want to cash in on the facade that's been built.
Can LLMs be impactful? For sure. They are now. They're impacting energy consumption, water usage, and technology supply chains in detrimental ways. But that's because these people want to be the ones to sell it. They want to be the ones to cash in. Before they really even have anything significantly useful. FOMO in this C-suite should be punishable in some way. They're all charlatans to different degrees.
Blame the people propping up this mess: the billionaires.
The weird thing is that the AI companies themselves are hiring like there's no tomorrow, doing talent acquisitions, etc. Why would you do that if the purpose of your product is to reduce the necessary workforce?
Why isn't that the first question that comes to mind for a journalist covering the latest acquisition? It's like an open secret that nobody really talks about.
To answer your questions (I don't think it's what you wanted, but people will scratch their heads after reading them):
In reality, they are hiring because they have a lot of (investment) money. They need a lot of hardware, but they also need people to manage that hardware.
In an alternate reality where their products do what they claim, they would also hire, because their employees would be able to replace lots of people working in other jobs; their workers would be far more valuable than the average one, and everybody would want to buy what they create.
Journalists don't question it because, whichever reality they choose to believe in (or are paid to "believe" in), hiring is the natural way things happen.
Just to clarify: most AI companies don't own their hardware, with a select few exceptions. That's why a handful of hyperscaler stocks have rallied recently on letters of intent for large orders from AI companies; technically those are a handful of shell companies under the complete control of their parent companies, which can then take on credit without it being visible on the parent company's balance sheet.
But to address the specific question: it is still a valid one. If the product sold is a 10x developer force multiplier, you'd expect to see the company fully utilizing it. Productivity would be expected to increase rapidly as the product matures, independently of any acquisitions made at the same time.
Some people are dropping things in response to how things are being ruined. Many people are not.
I hope you're right, but I imagine that with more computing power used more efficiently, the big companies will hoard more and more of the total available human attention.
Absolutely, not to say "You are right!" :) The items touched by Midas were, if nothing else, shiny, and gold, after all, is a precious metal that preserves its material properties for a long time... Whereas the stuff produced by LLMs... yes, come to think of it, it quite resembles the key properties of feces: it is a simple mashup of whatever was digested over a timespan, it stinks, and it does not require special skills; anyone can produce one!
I think you explained it very well.
For now, all sorts of "creative finance" are being invented to give AI momentum.
At the same time, some of us who have to work with this monstrosity for 10 hours a day are nauseated.
The same feeling I had towards putrid technology now extends to generative technology. I would rather fight and lose my job than call this intelligence of any form. It is a generative thingy.
Was very enthusiastic in the Tabnine days. Used Copilot since the closed beta. Use it for 10 hours a day. I'd rather not use it, though.
I have to use C#. Would kill not to use this bullshit anymore. Would never, ever touch Microsoft without being paid. Feel the same about AI in general.
Betting on AI becoming lame would be the safest bet I ever made.
When I see someone worshiping generative technology, I just know what to expect, and then I leave.
On some level, opinions on generative technology are very similar to politics. Tell me how you interact with it and how you feel about it, and I won't ever need to ask a second question.
Now, I think this sentiment will inevitably reach the masses. Yeah, sure, I am fatigued and most people don't have to deal with generative tools for 44 hours a week, but it will slowly creep in.
Tell me again how excited everyone is to fiddle with SAP, Oracle, Microsoft, React components, Vercel. The most shilled convenience of our timeline will become cringe, as always.
> Was very enthusiastic in the Tabnine days. Used Copilot since the closed beta. Use it for 10 hours a day.
I sort of have a similar story. I was also one of the earliest GH Copilot users... but now I find it's just utter crap. The one thing that worries me, though, is that while most tech folks have grown disillusioned, for each engineer who now rejects LLMs there seem to be 20 "common" people who just absolutely love it for their ephemeral use cases, like planning their next trip or asking if it will rain tomorrow. And this sort of usage, I think, quietly underpins the drive. It's not just the CEOs; it's also the masses who absolutely love to use it, unfortunately.
> There are lots of online resources for outbound sales which will likely be better than advice you’ll find on a forum full of engineers (unless engineers are your target market)
I mean, the man asked here as a starting point and was probably looking to hear from other engineers who had already been in a similar situation. If you don't have something concrete to offer in the way of help, it's better to suppress the urge to sound smart by dropping general-sounding "pearls of wisdom". You offered a lot of "whats" and very little "hows", which is what I assume the OP was asking for in the first place.
No offense, but he asked specifically for resources to get better at outbound sales. You gave him a lot of generic advice he could find in his LinkedIn feed any time of the day, really. Actionable means applicable; "get better at building something people want" is not quite applicable, I'm afraid.
I completely agree with the main sentiment, which is: I want the browser to be a User Agent and nothing else. I don't need a crappy, unreliable intermediary between the already perfectly fine UA and the Internet.