i'm very doubtful gmail messages are used to train the model by default: emails contain private data, and as soon as that private data shows up in model output, gmail is done.
"gmail being read by gemini" does NOT mean "gemini is trained on your private gmail correspondence". it can mean gemini loads your emails into a session context so it can answer questions about your mail, which is quite different.
True, but one definition of intelligence is the ability to deal with a novel situation. You can't get more experienced if you're "too stupid" to learn and adapt to the challenge.
after my father got an old work notebook without windows preinstalled, i suggested trying ubuntu, his first contact with linux. the installation went without problems, and a few days later i asked him whether everything was ok. he answered that everything was great, except for that "edgy desktop background of a skull" (he mentioned something about that being a typical linux hacker thing).
it was the "intrepid ibex" version and the "skull" was actually a stylized ibex.
But there's a real difference in how easy it is to write crappy code in a language. With Java that'd be, for example, nullability or mutability. Kotlin makes both explicit and eliminates some of those pain points. You'd have to go out of your way, actively making your code worse, for Kotlin code to end up on the same level as the equivalent Java.
And then there's a reason they're teaching the "functional core, imperative shell" pattern.
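To make the nullability point concrete, here's a minimal sketch (the `Map` lookup and names are just illustrative): in Java every reference type is implicitly nullable, so nothing in a signature warns the caller, whereas wrapping the result in `Optional` makes absence explicit, roughly the spirit of Kotlin's `String?`, which the compiler actually enforces.

```java
import java.util.Map;
import java.util.Optional;

public class Lookup {
    // Implicitly nullable: this compiles fine, and the signature gives the
    // caller no hint that null can come back. A forgotten check means an NPE.
    static String emailOf(Map<String, String> users, String name) {
        return users.get(name); // may return null
    }

    // Absence made explicit: the caller is forced to decide what "missing"
    // means. Kotlin's `String?` achieves this with compiler enforcement.
    static Optional<String> emailOfExplicit(Map<String, String> users, String name) {
        return Optional.ofNullable(users.get(name));
    }

    public static void main(String[] args) {
        Map<String, String> users = Map.of("ada", "ada@example.com");
        System.out.println(emailOfExplicit(users, "ada").orElse("<none>")); // ada@example.com
        System.out.println(emailOfExplicit(users, "bob").orElse("<none>")); // <none>
    }
}
```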
On the other hand, Java's tooling for correctly refactoring at scale is pretty impressive: using IntelliJ, it's pretty tractable to unwind quite a few messes using automatic tools in a way that's hard to match in many languages that are often considered better.
I agree with your point, and I want to second C# and JetBrains Rider here. Whatever refactoring you can do with Java in IntelliJ, you can do with C# in Rider. I have worked on multiple code bases in my career that ran to hundreds of thousands of lines of Java and/or C#, and having a great IDE experience was simply a miracle.
i'm not sure this is an easily solvable problem. i remember reading an article arguing that your cloud provider is part of your tech stack and it's close to impossible/a huge PITA to make a non-trivial service provider-agnostic. they'd have to run their own openstack in different datacenters, which would be costly and has its own points of failure.
I run non-trivial services on EC2, using that service as a VPS. My deploy script works just as well on provisioned Digital Ocean servers and on docker containers using docker-compose.
I do need a human to provision a few servers and configure e.g. load balancing and when to spin up additional servers under load. But that is far less of a PITA than having my systems tied to a specific provider or down whenever a cloud precipitates.
The moment you choose to use S3 instead of hosting your own object store, though, you either stay on AWS because S3 and IAM already have you, or you spend more time on the care and feeding of your storage system as opposed to actually doing the thing your customers are paying you to do.
It's not impossible, just complicated and difficult for any moderately complex architecture.
Even on non-AWS projects, I still use S3. I haven't really explored the other options, but if you have opinions or advice I'd love to hear them.
One very important thing is that I can authorise specific web clients (users) to access specific resources in S3, such as a document that one user can download while others holding the same link cannot.
The way I solved auth in my case was just proxying everything through my backend and having that do the auth. I have my own URL scheme and the users never see the URL for the file in S3.
Another way you can do it is generating pre-signed URLs in your backend on each request to download something... but the URL that is generated when you do that is only valid for some small time period, so not a stable URL at all.
In my use case, I needed stable URLs, so I went the proxy route.
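The expiry behaviour isn't magic: a pre-signed URL is essentially a MAC over the resource path plus an expiry timestamp, so the link self-invalidates. A minimal JDK-only sketch of that idea follows; real S3 pre-signed URLs use AWS Signature V4 and the SDK, and the secret and paths here are hypothetical, this is only to illustrate why such URLs can't be stable.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.time.Instant;
import java.util.HexFormat;

public class SignedUrl {
    // Hypothetical secret known only to the backend; not an AWS credential.
    static final byte[] SECRET = "change-me".getBytes(StandardCharsets.UTF_8);

    static String hmac(String data) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(SECRET, "HmacSHA256"));
            return HexFormat.of().formatHex(mac.doFinal(data.getBytes(StandardCharsets.UTF_8)));
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    // Sign "<path>|<expiry>" together so neither can be tampered with alone.
    static String sign(String path, long expiresEpochSec) {
        return path + "?expires=" + expiresEpochSec
                + "&sig=" + hmac(path + "|" + expiresEpochSec);
    }

    // Recompute the MAC and reject both forged and expired links.
    static boolean verify(String path, long expiresEpochSec, String sig, long nowEpochSec) {
        return nowEpochSec < expiresEpochSec
                && hmac(path + "|" + expiresEpochSec).equals(sig);
    }

    public static void main(String[] args) {
        long exp = Instant.now().getEpochSecond() + 300; // valid for 5 minutes
        System.out.println(sign("/files/report.pdf", exp));
    }
}
```

The proxy route trades this baked-in expiry for a per-request session check in the backend, which is exactly why the user-facing URL can stay stable.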
in the future everyone will have a personal AI assistant subscription. the better the subscription (i.e. the more expensive) is, the less it'll be influenced by corporate and political interests. the poor population with cheap or even free agents will be heavily influenced by ads and propaganda, while the one percent will have access to unmodified models.
"gmail being read by gemini" does NOT mean "gemini is trained on your private gmail correspondence". it can mean gemini loads your emails into a session context so it can answer questions about your mail, which is quite different.