They are fallible, but they quite clearly exhibit superhuman smartness compared to the average human.
As a thought experiment, assume the average human may be able to translate text between two human languages, or write code in two or three programming languages. GPT-4 can perform those tasks across a much more diverse set of human _and_ programming languages. Is that not superhuman?
Yes, it makes mistakes. But take a hundred humans off the street and ask them to write an NGINX configuration or translate between Hindi and French - how many would be able to do that? How many would be able to do that without any mistakes?
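To make the NGINX example concrete, here is a minimal sketch of the kind of task being described: a basic reverse-proxy configuration. The server name and upstream port are hypothetical placeholders, not taken from any real setup.

```nginx
# Minimal reverse-proxy sketch; example.com and the upstream
# address are hypothetical placeholders.
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Nothing exotic, but there are enough moving parts that most people who have never touched a web server would not get it right on the first try.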