I think that's a thing? It's been a while since I've looked. Even if not, multi-AZ replication with failover is a standard setup and it has the same issues. Suppose you have some frontend servers and an RDS primary instance in us-west-2a and other frontend servers and an RDS replica in us-west-2b. The link between AZs goes down. For simplicity, pretend a failover doesn't happen. (It wouldn't matter if it did. All that would change here is the names of the AZs.)
Do you accept writes in us-west-2a knowing the servers in 2b can't see them? Do you serve reads in 2b knowing they might be showing old information? Or do you shut down 2b altogether and limp along at half capacity in only 2a? What if the problem is that 2a becomes inaccessible from the Internet, so that you can no longer use a load balancer to route requests to it? What if the writes in 2a hadn't fully replicated to 2b before the partition?
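To make the staleness question concrete, here's a minimal sketch of the kind of check you end up writing yourself, assuming a self-managed Postgres standby queried via psycopg2. pg_last_xact_replay_timestamp() is real Postgres; the threshold and function names are mine, purely for illustration.

    import psycopg2

    MAX_STALENESS_SECONDS = 5.0  # a business decision, not a technical one

    def replica_lag_seconds(conn):
        # How far behind the primary this standby's replay is, in seconds.
        # Caveat: on an idle primary this grows even when there's no real lag.
        with conn.cursor() as cur:
            cur.execute(
                "SELECT EXTRACT(EPOCH FROM (now() - pg_last_xact_replay_timestamp()))"
            )
            (lag,) = cur.fetchone()
            return lag  # None if this server has never replayed any WAL

    def may_serve_reads(conn):
        lag = replica_lag_seconds(conn)
        # Past the threshold you have to pick a side: keep answering
        # (availability) or refuse (consistency). Postgres won't pick for you.
        return lag is not None and lag <= MAX_STALENESS_SECONDS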
You can probably answer those questions for any given business scenario, but the point is that you have to decide them and you can't outsource it to RDS. Some use cases prioritize availability above all else, using eventual consistency to work out the details once connectivity's restored. Others demand consistency above all else, and would rather be down than risk giving out wrong answers. No cloud host can decide what's right for you. The CAP theorem is extremely freaking relevant to anyone using more than one AZ or one region.
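If it helps to see "you have to decide" spelled out, here's a toy sketch of the policy choice itself. Every name in it is illustrative; nothing here is an RDS API.

    # Toy sketch of the decision; all names are made up for illustration.
    from enum import Enum, auto

    class PartitionPolicy(Enum):
        PREFER_AVAILABILITY = auto()  # AP-ish: answer with what you have, reconcile later
        PREFER_CONSISTENCY = auto()   # CP-ish: refuse rather than risk a wrong answer

    def handle_read(policy, partitioned):
        if not partitioned:
            return "serve normally"
        if policy is PartitionPolicy.PREFER_AVAILABILITY:
            return "serve possibly-stale data and reconcile after the partition heals"
        return "return 503 until connectivity is restored"

The point of the sketch is only that the branch exists at all, and something in your stack has to take it.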
(I know you weren't claiming otherwise, just taking the chance to say why cross-AZ still has the same issues as cross-region, as I have heard well-meaning people say they were somehow different. AWS does a great job of keeping things running, to the point that it's news when they don't. Things still happen, though. Same for Azure, GCP, and any other cloud offering. However flawless their design and execution, if a volcano erupts in the wrong place, there's gonna be a partition.)