They're using specialized hardware to accelerate their development feedback loop. Without a doubt, researchers and hackers will find ways to cut model size and complexity enough to run on consumer hardware soon. Take Stable Diffusion as an example: the whole model fits in about 4GB. Even if text models land around 16GB, that'd be great.
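The size figures above come down to simple arithmetic: parameter count times bits per parameter. Here's a rough illustrative sketch of that math; the parameter counts are assumptions for illustration, not figures from any specific model card.

```python
def weights_gb(num_params: float, bits_per_param: int) -> float:
    """Approximate size of the weights alone, in gigabytes."""
    return num_params * bits_per_param / 8 / 1e9

# Assumed ~1B params (roughly Stable Diffusion's scale) in fp32:
print(weights_gb(1e9, 32))   # 4.0 -- matches the ~4GB checkpoint

# A hypothetical 7B-parameter text model at various precisions:
print(weights_gb(7e9, 32))   # 28.0 GB in fp32
print(weights_gb(7e9, 16))   # 14.0 GB in fp16 -- near the 16GB figure
print(weights_gb(7e9, 4))    # 3.5 GB with 4-bit quantization
```

This is exactly the kind of cutting-down the comment anticipates: dropping from fp32 to fp16 or 4-bit quantization shrinks the same model by 2-8x, before any pruning or distillation.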