PHP's concurrency model is really nice: Spawn a process per request, and otherwise don't do concurrency because that's a mistake.
Yes, you can use CGI to get the same model in practically any language, but very few languages these days make this tradeoff.
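As a minimal sketch of that model, here's classic CGI in Go, assuming a web server configured to fork/exec this binary for each request (net/http/cgi is standard library):

```go
// Classic CGI: the web server spawns a fresh copy of this program for
// every request, so all state is request-scoped for free.
package main

import (
	"fmt"
	"log"
	"net/http"
	"net/http/cgi"
	"os"
)

func main() {
	// cgi.Serve reads the request from the environment variables and stdin
	// that the web server sets up, and writes the response to stdout.
	err := cgi.Serve(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// A different PID on every hit: one process per request.
		fmt.Fprintf(w, "handled by PID %d\n", os.Getpid())
	}))
	if err != nil {
		log.Fatal(err)
	}
}
```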
For some reason, people decided they wanted the performance gain of spawning threads per request instead of processes per request. Honestly, it's ridiculous: reasoning about isolated processes is far easier than reasoning about many threads. Sure, in PHP you can't share a pool of DB connections between requests; if you need to lock a filesystem resource you're stuck, because you can't share a mutex between requests either, so you have to improvise an ad-hoc lock some other way; and the overhead is something like 50MB per request, which is the difference between a server handling 1,000 concurrent requests in PHP and 60,000 in Go...
But being able to have processes that were all isolated and request-scoped was nice. The OS was your GC, so you didn't have to worry about freeing memory, and the OS scheduler did a damn good job of context-switching away from PHP processes blocked on IO. (The shared-pool tradeoff is sketched below.)
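For contrast, here's roughly what the persistent-server world buys you: one connection pool shared by every request in the process, which process-per-request PHP can't do. A sketch in Go; the driver import and DSN are placeholders, not a recommendation:

```go
// A persistent server process: one *sql.DB connection pool is shared by
// every request for the lifetime of the server.
package main

import (
	"database/sql"
	"log"
	"net/http"

	_ "github.com/lib/pq" // hypothetical driver choice
)

var db *sql.DB // lives as long as the server; shared across all requests

func main() {
	var err error
	db, err = sql.Open("postgres", "dbname=app sslmode=disable") // placeholder DSN
	if err != nil {
		log.Fatal(err)
	}
	db.SetMaxOpenConns(20) // the whole server shares these 20 connections

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		var one int
		// Each request borrows a pooled connection instead of opening its own.
		if err := db.QueryRow("SELECT 1").Scan(&one); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.Write([]byte("ok\n"))
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```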
PHP no longer works like this in practice, though. Under FastCGI/PHP-FPM the worker processes are persistent and reused across requests (each worker still handles one request at a time), and thread-per-request is pretty much the standard model at this point for platforms that don't have good async support.
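For reference, a PHP-FPM pool config looks something like this: long-lived workers that each serve many requests before being recycled. The directive names are real FPM settings; the numbers here are arbitrary:

```ini
; www.conf — a PHP-FPM pool: persistent worker processes, reused across requests
[www]
pm = dynamic            ; keep a pool of long-lived workers
pm.max_children = 20    ; at most 20 worker processes
pm.start_servers = 5
pm.min_spare_servers = 2
pm.max_spare_servers = 8
pm.max_requests = 500   ; recycle a worker after 500 requests (leak insurance)
```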