This was the kind of reasoning that led gun makers to develop safety mechanisms.
It isn't clear what a similar safety mechanism would look like for an ML algorithm. Modern ML algorithms are nothing more than generic pattern extractors, so what form would such a mechanism even take?
And led to government regulation of gun distribution.
Make government grants for ML research contingent on training with diverse datasets. Fund research into more diverse benchmarks and push them as the goal to beat.
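To make that concrete, here's a minimal sketch (all names and the toy data are hypothetical, not from any existing benchmark) of what a "more diverse" benchmark could report: per-subgroup accuracy instead of one aggregate number, with the worst-group score as the figure to beat.

```python
from collections import defaultdict

def per_group_accuracy(examples, predict):
    """examples: iterable of (features, label, group); predict: features -> predicted label.

    Returns accuracy broken down by group, so gaps across under-represented
    groups stay visible instead of being averaged away.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for features, label, group in examples:
        total[group] += 1
        if predict(features) == label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Toy usage: a trivial threshold "model" evaluated on two subgroups.
data = [
    (0.2, 0, "group_a"), (0.8, 1, "group_a"),
    (0.3, 1, "group_b"), (0.9, 1, "group_b"),
]
scores = per_group_accuracy(data, predict=lambda x: int(x > 0.5))
print(scores)                # {'group_a': 1.0, 'group_b': 0.5}
print(min(scores.values()))  # worst-group accuracy: the "goal to beat"
```

Reporting the minimum across groups rather than the overall mean is one simple way a benchmark could reward models that don't just fit the majority of the data.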