This seems to be sugarcoating things a bit:
"... we soon realized that our communication did not register with everyone and there were some users who were caught off guard and lost access to their data."
Not sure they've really helped themselves much with this one.
So I see a lot of what was suggested on the mega thread reflected here in the linked posting. Thanks for listening to everyone, Paul. Good luck. We all screw up, but it's rare that we actually listen.
The fact that InfluxDB has been around for 9 years at this point, and raised over 100 million dollars in the process, only makes this even more baffling.
I agree. This should be an indication to all current users that they should no longer trust InfluxData with their business.
The CTO seems to have been checked out for a long time (just look at how little developer engagement there is on here) and the CEO seems to have no idea how to run a DBaaS. The fact that nobody else from the company has stepped in to try and defuse this should terrify anyone who has data on InfluxData's cloud.
This is the beginning of the end. It seems like all of the good people have left the company, and being willing to destroy credibility to cut costs is a clear sign that the company is running on fumes.
So, now is the time - find your alternative, whether it's Timescale, QuestDB, VictoriaMetrics, ClickHouse, or just self-hosting.
It's the same "we 'tried'" message they have here. Even worse, this wasn't a regulatory shut-down; it was a lack-of-demand decision. They had 100% control over the timing and means of the shut-down. They didn't even keep backups! They just deleted everything.
Some highlights from the blog. It reads like a "cover my ass" memo to the board rather than an attempt to fix problems for customers.
* > Over the years, two of the regions did not get enough demand to justify the continuation of those regional services.
* In other words, they had no external pressure. They just shut this down entirely on their own accord.
* Immediately blames customers for not seeing the notifications, while explaining "how rigorous" their communication was.
* > via our community Slack channel, Support, and forums, we soon realized that our communication did not register with everyone
* In other words, "we didn't look at any metrics or usage data. How could we have possibly known people were still relying on this?"
* > Our engineering team is looking into whether they can restore the last 100 days of data for GCP Belgium. It appears at this time that for AWS Sydney users, the data is no longer available.
* That's literally unbelievable. They didn't even keep backups! They deleted those too! Even if a region is going down, I'd expect backups to be maintained per their SLA.
* Lastly, a waffling "what we could have done better" without any actual commitment to improvement. Insane.