Amodei's work history indicates that his background as a software developer is a single part-time job that he held for a year and a half after college. As far as I'm concerned, he wouldn't even make it as a junior on my team. I'm not inclined to believe anything he says about what it takes to write production-ready code.
I think Graphene gets posted here yearly. Having tested a variety of ROMs dedicated to different elements of security, I can attest that Graphene allows the most "normal" phone usage compared to many others. The biggest factor is the sandboxed Google Play Services, which allow you to use a lot of apps that you wouldn't be able to otherwise.
I've used Lineage without microG, as a comparison, and that's becoming more and more unusable every day, as some lousy Android developer tethers their company's app to some feature exclusive to Play Services.
Google Pay is not available on any alternative OS due to Google blocking it. It's unfortunately their choice, rather than a lack of support on our end.
Depending on where you are in the world, there might be other NFC payment options for you.
In the EEA and UK, Curve Pay works. PayPal has made its own solution and is rolling it out, starting with Germany. Both work with GrapheneOS. Many banks also have their own solutions.
I'm a non-native English speaker who writes many work emails in English. My English is quite good, but still, it takes me longer to write email in English because it's not as natural. Sometimes I spend a few minutes wondering if I'm getting the tone right or maybe being too pushy, if I should add some formality or it would sound forced, etc., while in my native language these things are automatic. Why shouldn't I use an LLM to save those extra minutes (as long as I check the output before sending it)?
And being non-native with a good level of English is nothing compared to the challenges faced by people who might have autism, etc.
I'm a native English speaker, and I ask myself the same questions on most emails. You can use LLM outputs all you want, but if you're worried about the tone, LLM edits drive the tone to a level of generic that ranges from milquetoast, to patronizing, to outright condescending. I expect some people will even begin to favor pushy emails, because at least they feel human.
An LLM's output being a reflection of its input would imply determinism, which is the opposite of their value prop. "Garbage in, garbage out" is an adage born from traditional data pipelines. "Anything in, generic slop, possibly garbage, out" is the new status quo.
No, but at that point, why even leverage a stochastic text generator? Placing hard constraints on a generative algorithm is just regular programming with more steps and greater instability.
Edit: Also, one could just look to the world of decision tree and route-finding algorithms that could probably do this task better than a language model.
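To illustrate what I mean by a classical algorithm (a toy graph I made up, not any particular system): something like breadth-first search is deterministic and trivially verifiable, so the same input always yields the same, checkable answer.

```python
from collections import deque

def shortest_path(graph, start, goal):
    """Breadth-first search: returns a shortest path from start to goal,
    or None if goal is unreachable. Deterministic: same input, same output."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None

# Hypothetical waypoint graph, purely for illustration.
graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": ["E"],
}
print(shortest_path(graph, "A", "E"))  # ['A', 'B', 'D', 'E']
```

No temperature, no sampling: if the answer is wrong, it's a bug you can find and fix, not a roll of the dice.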
I don't know, I think some improved hardware would greatly improve the aesthetics of the Lost Woods, which severely drops in frame rate when docked. Handheld, the diminished fidelity at 720p buys back some frames.
I'd be inclined to agree about some older Zelda games, though, namely Wind Waker. I replayed it on GCN recently and can attest that the HD Wii U version really didn't add anything to the aesthetics.
When there are millions of doctors, not only are there going to be more mediocre doctors than anything else, but there has to be a bottom of the barrel as well.
It took me years to be diagnosed with PTSD, a problem I knew I had. Because I am not a vet, I had to go through every other diagnosis first -- schizo, bipolar, borderline -- each with a new set of pills to take. Some of the shrinks who diagnosed me wouldn't do anything but open my file, make some remarks, and fill out a prescription, with nary any eye contact.
Finally got a very expensive doctor who wasn't under the thumb of insurance companies. Her first question, upon hearing my issues, was "How is your sleep?" "I don't, really," was my reply. She screened me for PTSD and I clocked 76/80 pts. She set me up with the proper therapy, and within a year, I was screening at 30/80 pts. All it took was asking me one question that wasn't loaded towards the doctor's favorite diagnosis & prescription.
An LLM salesman assuring us that $1000/mo is a reasonable cost for LLMs feels a bit like a conflict of interest, especially when the article doesn't go into much detail about the code quality. If anything, their assertion that one should stick to boring tech and "have empathy for the model" just reaffirms that anybody doing anything remotely innovative or cutting-edge shouldn't bother too much with coding agents.