I am an American, and I don't know anyone, not even a distant acquaintance, who has been killed in a crash; therefore I conclude the US is actually fine when it comes to traffic.
Why are you using an anecdote as your data here?
I didn't see whether there was a re:Invent talk about it, but is it actually Cassandra under the hood? It seemed like it might just be the Cassandra API, akin to Aurora being MySQL-compatible rather than actually MySQL.
It happens to be Cassandra, but that did make me think about the way Amazon brands the Postgres-compatible Aurora as "Aurora PostgreSQL".
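For what it's worth, you can poke at that from the client side: the managed service speaks the CQL wire protocol, so the stock Python driver connects unchanged. A rough sketch; the endpoint, port and service-specific credentials below are placeholders for whatever your region/account uses:

    # Hypothetical sketch: talking to the managed service with the standard
    # Cassandra driver. Endpoint and credentials are placeholders.
    from ssl import SSLContext, PROTOCOL_TLSv1_2, CERT_REQUIRED
    from cassandra.cluster import Cluster
    from cassandra.auth import PlainTextAuthProvider

    ssl_context = SSLContext(PROTOCOL_TLSv1_2)
    ssl_context.verify_mode = CERT_REQUIRED
    ssl_context.load_verify_locations("AmazonRootCA1.pem")  # Amazon's root CA

    auth = PlainTextAuthProvider(username="svc-user", password="svc-pass")
    cluster = Cluster(["cassandra.us-east-1.amazonaws.com"], port=9142,
                      ssl_context=ssl_context, auth_provider=auth)
    session = cluster.connect()
    print(session.execute("SELECT release_version FROM system.local").one())

Note that a purely wire-compatible implementation would still have to answer queries like that the way Cassandra does, so the client view alone doesn't settle the "under the hood" question.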
That's pretty lousy of them to take advantage of the name. I imagine the uptake would be lower if it weren't in the name, and they had to settle for just saying "Postgres Compatible" in the description.
I also imagine AWS would come after me if I launched "XYZ Fargate" or similar.
There are two separate offerings. AWS offers Aurora PostgreSQL, which is a fork of Postgres with Amazon's own code, and regular RDS for PostgreSQL, which is basically managed stock Postgres.
The storage backend isn't Postgres, and I assume the repeated use of the words "compatible" and "wire protocol" is on purpose, so they can continue to change it.
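You can see what the "wire protocol" framing buys them from the client side: the same driver code runs against either backend, and only the hostname changes. A sketch with psycopg2; both hostnames and credentials are made up:

    # Sketch: identical client code against stock Postgres and Aurora,
    # since both speak the Postgres wire protocol. Hosts are placeholders.
    import psycopg2

    hosts = ("vanilla-pg.internal",
             "mycluster.cluster-abc123.us-east-1.rds.amazonaws.com")
    for host in hosts:
        conn = psycopg2.connect(host=host, dbname="app",
                                user="app", password="secret")
        with conn, conn.cursor() as cur:
            cur.execute("SELECT version()")  # both report "PostgreSQL ..."
            print(host, "->", cur.fetchone()[0])
        conn.close()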
I realise the parent comment is being voted down, but I think it’s a good point. If you’re buying a managed service, why should it matter if Amazon twiddled with how it stores data? What matters is that it behaves exactly like PostgreSQL in every way—which I’m led to believe it does.
There may be marginal differences in the resulting performance characteristics, but they're unlikely to be significant or wildly non-linear. My understanding is that this isn't a storage engine rewrite, but a modification to the IO layer at the bottom of the storage engine.
Still, if you want “pure” anything, run it yourself.
Clients, no. But if you've had a Postgres DBA optimize your database to take advantage of known Postgres storage-backend behavior, you may be in for unexpected performance degradation under the assumption that "it's just Postgres".
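A concrete example of the kind of thing I mean: planner cost knobs like random_page_cost get tuned against the behavior of local disks, and a value that was right for the old hardware can steer the planner wrong on Aurora's remote storage. A hypothetical check; the table name and host are made up:

    # Sketch: inspect a DBA-tuned planner knob whose rationale assumed local
    # disks, and compare plans against the Postgres default (4.0).
    import psycopg2

    conn = psycopg2.connect(host="mycluster.example.rds.amazonaws.com",
                            dbname="app", user="app", password="secret")
    with conn, conn.cursor() as cur:
        cur.execute("SHOW random_page_cost")
        print("random_page_cost =", cur.fetchone()[0])
        # SET LOCAL only lasts for this transaction, so it's safe to test with.
        cur.execute("SET LOCAL random_page_cost = 4.0")
        cur.execute("EXPLAIN SELECT * FROM orders WHERE customer_id = 42")
        for (line,) in cur.fetchall():
            print(line)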
Well, you get the same problem if you have “network administrators” who took one AWS certification and call themselves “AWS Consultants”.
In both cases you end up with suboptimal solutions. The lesson is not that AWS shouldn't be making storage optimizations; it's that you shouldn't depend on a bunch of old-school net-ops "lift and shifters" who didn't take the time to learn the environment and who think the cloud is just an overpriced colo.
That's certainly the case at least for Aurora PostgreSQL, but then again, Aurora PostgreSQL lags Aurora MySQL significantly [1] in features; maybe that is related.
Seems like it is just email address harvesting. The countermeasures linked in the article are just ways to further anonymize data, but I'm not sure why we want that. They even suggest not using a username to identify a user...
Seems like this should be flagged for being clickbait / trying to induce fear.
Recommend taking a look at this blog post: https://www.justinobeirne.com/google-maps-moat (the author has updated it a couple of times). The changes are subtle and very much to your benefit. Not noticing the changes means they're getting better at easing you into new functionality, not that they're stagnating.
Funnily enough, at my company we did just this. Having local state is a requirement for our uptime and reliability goals, so we have a dumb service that pushes changes back to the services via DynamoDB streams, and also dumps the data into S3 in case that service is down.
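For the curious, the shape of that dumb service, stripped way down. Bucket and function names are made up, and the real thing has batching, retries and error handling:

    # Sketch: a consumer on the table's DynamoDB stream that pushes each
    # change out to the services and archives it to S3 as a safety net.
    import json
    import boto3

    s3 = boto3.client("s3")

    def handler(event, context):
        for record in event["Records"]:
            change = {
                "event": record["eventName"],          # INSERT / MODIFY / REMOVE
                "keys": record["dynamodb"]["Keys"],
                "new": record["dynamodb"].get("NewImage"),  # absent on deletes
            }
            push_to_services(change)                   # hypothetical fan-out
            s3.put_object(Bucket="state-archive",      # backup if push fails
                          Key=record["eventID"] + ".json",
                          Body=json.dumps(change).encode())

    def push_to_services(change):
        ...  # e.g. POST to each service's local-state refresh endpoint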
They just host the site; it is their wiki product, Confluence. It's the same way GitHub hosts pages at github.io but has nothing to do with the content that people host there.
Oh, I know, I administer Confluence, Jira and Bitbucket. Bitbucket aside, the hosted versions of the other two are woefully slow. Don't get me started about the Jira Calendar.
If they're blogging about some tech, maybe they need to dogfood it first... especially when the post is about performance.
As per a previous comment, the contents of the pages linked have as much to do with Atlassian as the contents of a page on github.io have to do with GitHub.
This is a stanford.edu project wiki page, hosted by Atlassian. Nothing more.
By the same logic, any performance improvement described by a random project hosted on GitHub should be "dogfooded" by GitHub in the same way.
It's not dogfooding when it's someone else's work.
Atlassian is not blogging about it. Someone else created a Confluence wiki site for RAMCloud using the shared *.atlassian.net domain. Anyone can create a site with a subdomain there.
I don't think it was GitHub's own memcached instances. It was other public instances that, via spoofed network requests, ended up sending traffic back towards GitHub's network.
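The mechanism: memcached listened on UDP by default, and UDP has no handshake, so the attackers spoofed GitHub's address as the source and every tiny "stats" request came back as a much larger payload aimed at GitHub. If you run memcached, a quick way to check whether your own server answers over UDP (only probe machines you operate):

    # Sketch: probe your OWN memcached for UDP exposure. The memcached UDP
    # protocol prepends an 8-byte frame header (request id, sequence number,
    # datagram count, reserved) to the plain ASCII command.
    import socket

    HOST = "127.0.0.1"
    request = b"\x00\x01\x00\x00\x00\x01\x00\x00" + b"stats\r\n"

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2)
    sock.sendto(request, (HOST, 11211))
    try:
        reply, _ = sock.recvfrom(65535)
        print(f"UDP answered: {len(reply)} bytes back for {len(request)} sent")
        print("Consider running memcached with UDP disabled (-U 0).")
    except socket.timeout:
        print("No UDP response; UDP looks disabled.")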