Right now I paste screenshots of AWS/Azure/GCP into Claude and ask it questions about how to navigate around, what to do, and how to set things up. This seems like a much better experience, if only because I wouldn't have to deal with the clunky macOS screenshot UX.
Too fast. It's already coding too fast for us to follow, and from what I hear, it's doing incredible work in drug discovery. I don't see any barrier to it getting faster and faster, and with proper testing and tooling, getting more and more reliable, until the role that humans play in scientific advancement becomes at best akin to that of managers of sports teams.
I'm not sure this is true... we heavily use Gemini for text and image generation in constrained life simulation games and even then we've seen a pretty consistent ~10-15% rejection rate, typically on innocuous stuff like characters flirting, dying, doing science (images of mixing chemicals are particularly notorious!), touching grass (presumably because of the "touching" keyword...?), etc. For the more adult stuff we technically support (violence, closed-door hookups, etc) the rejection rate may as well be 100%.
Would be very happy to see a source proving otherwise though; this has been a struggle to solve!
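For context, the kind of workaround we've had to build looks roughly like this. A minimal sketch, assuming a lot: `generate` is a hypothetical stand-in for the real Gemini call (the actual SDK surface and block-reason values differ), and the `SOFTEN` paraphrase table is invented for illustration. The idea is just to classify the rejection and retry with reworded prompts before falling back to a stock asset:

```python
import random

# Hypothetical stand-in for a Gemini image/text call. The real SDK call and
# its block reasons differ -- this only simulates a ~10-15% safety rejection
# rate, plus reliable rejection of "chemicals" prompts, to exercise the retry.
def generate(prompt, _rng=random.Random(0)):
    if _rng.random() < 0.12 or "chemicals" in prompt:
        return {"blocked": True, "reason": "SAFETY"}
    return {"blocked": False, "content": f"image for: {prompt}"}

# Assumed paraphrases that soften phrases the filter tends to trip on.
SOFTEN = {
    "mixing chemicals": "working in a laboratory",
    "touching grass": "standing in a meadow",
}

def generate_with_retry(prompt, max_attempts=3):
    """Retry blocked generations, rewording known-problem phrases each time.

    Returns the generated content, or None so the caller can fall back to a
    stock asset instead of showing the player an error.
    """
    attempt_prompt = prompt
    for _ in range(max_attempts):
        result = generate(attempt_prompt)
        if not result["blocked"]:
            return result["content"]
        # On rejection, swap in softer phrasing before the next attempt.
        for bad, safe in SOFTEN.items():
            attempt_prompt = attempt_prompt.replace(bad, safe)
    return None
```

This doesn't solve the problem, obviously; it just papers over the rejection rate at the cost of extra calls and slightly-off imagery.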
Why would you buy a 5-year-old iPhone for the same price you can get a new Android with comparable specs, though? If I'm going to spend $200-300 on a phone, I'd like it to last at least a couple more years. Regardless of OS, you're more likely to get that from a new phone than from any phone 5+ years old.
If Apple's still selling it, they'll almost certainly support it at least as long as an above-average Android manufacturer.
The current iOS supports things back to iPhone 11 and the SE2, so you can expect the SE3 and iPhone 13 to get at least two more years of support (no real guarantees, but they're still selling new stock of both, and they have a reputation to protect).
> Solve enough problems relying on AI writing the code as a black box, and over time your grasp of coding will worsen, and you won't be understanding what the AI should be doing or what it is doing wrong - not even at the architectural level, except in broad strokes.
Using AI myself _and_ managing teams almost exclusively using AI has made this point clear: you shouldn't rely on it as a black box. You can rely on it to write the code, but (for now at least) you should still be deeply involved in the "problem solving" (that is, deciding _how_ to fix the problem).
A good rule of thumb that has worked well for me is to spend at least 20 minutes refining agent plans for every ~5 minutes of actual agent dev time. YMMV based on plan scope: obviously this doesn't apply to small fixes, and it applies even more so to larger scopes.
What I find most "enlightening", and also frightening, is that people I've worked with for quite some time, and whose knowledge and abilities I respected, have started spewing AI nonsense and switching off their brains.
It's one thing to use AI like you might use a junior dev that does your bidding, or like a rubber duck. It's a whole other ballgame if you just copy and paste whatever it says as truth.
And regarding the claim that this obviously doesn't apply to small fixes: oh yes it does! The AI has tried to "cheat" its way out of a situation so many times that it's not even funny any longer (compare yesterday's post about Anthropic's original take-home test, where they themselves warn you not to just use AI to solve it because it likes to cheat, e.g. by simply enabling more than one core). It's done this often enough that sometimes, when Claude gives an answer about something I don't yet fully understand myself, I dismiss a correct assessment as "yet another piece of AI BS".
These promises are worth nothing without a contract that a consumer can sue them for violating. And hell will freeze over before megacorps offer consumers contracts that bind themselves to that degree.
This isn't feasible for a huge swathe of the USA, often because of costs/insurance but sometimes literally just accessibility/availability. A few years ago it took me nearly 8 months to find a PCP in my city that was accepting new patients (and, whee, they dropped my insurance less than a year after).
Is there a proven and guaranteed way to do this? Because otherwise it sounds very idealistic, almost like "if everything were somehow better, then things would be less bad". Doctor time will always be scarce. It sounds like it delays helping people in the here and now in order to solve some very complicated system-wide problem.
LLMs might make doctors cheaper (and reduce their pay) by lowering demand for them. The law of supply and demand then implies that care will be cheaper. Do we not want cheaper care? Similarly, LLMs reduce the backlog, so patients who do need to see a doctor can be seen faster, and they don't need as many visits.
LLMs can also break the stranglehold of medical schools: it's easier to become an autodidact with an LLM, since an LLM can act like a personal tutor by answering questions about the medical field directly.
LLMs might be one of the most important technologies in medicine.
I think the "we" that can work on these systemic problems and actually improve them are a very different "we" than those of us who just need basic health care right now and will take anything "we" can get.
Maybe time to ask AI why you’re looking for a technical solution rather than addressing the gaslighting that has left you with such piss-poor medical care in the richest country on earth?
If it's not solved in the richest country, maybe it's not so easy to solve, unless you want to hand-wave away the difficult parts and just describe it as "rich people being greedy".
It's such a dysfunctional situation that the "rich people being greedy" is the most likely explanation. Either that or the U.S. citizenry are uniquely stupid amongst rich countries.