
We generally want ML to imitate us (or human-generated data), so we train it to do so. But there is also generative ML (like GANs) and simulation-based ML (like AlphaGo), which can be more creative. There is nothing stopping us from letting agents evolve in a complex environment and be creative; it's just not commercially useful to do that yet. Doctorow writes like he doesn't understand much of the math behind ML, yet has strong opinions on it.
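
To make the "letting agents evolve" point concrete, here is a minimal (1+1) evolution strategy sketch. The fitness function, genome size, and mutation scale are toy values standing in for a real environment, not any particular system:

    import random

    def fitness(genome):
        # Toy objective standing in for a complex environment:
        # higher is better, with a peak at all genes == 0.5.
        return -sum((g - 0.5) ** 2 for g in genome)

    genome = [random.random() for _ in range(8)]
    for step in range(1000):
        child = [g + random.gauss(0, 0.1) for g in genome]  # mutate
        if fitness(child) >= fitness(genome):               # select
            genome = child

Nothing in the loop imitates human data; random mutation plus selection is the whole search.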

Every time a random number is involved in the training process (stochastic selection of mini-batches, noise injection, epsilon-greedy action selection), a form of exploration is taking place: the process can stumble onto novel ways of solving the problem at hand. Mixing noise with data is paradoxically useful, even necessary, in learning.
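
For instance, epsilon-greedy action selection is just a few lines. This is a generic sketch, not any particular library's API:

    import random

    def epsilon_greedy(q_values, epsilon=0.1):
        # Explore a random action with probability epsilon,
        # otherwise exploit the current best value estimate.
        if random.random() < epsilon:
            return random.randrange(len(q_values))
        return max(range(len(q_values)), key=lambda a: q_values[a])

    action = epsilon_greedy([0.2, 0.9, 0.4])

The occasional random action is exactly the "noise" that lets the agent discover strategies no one showed it.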



> ... finding novel paths of solving the problem at hand ...

This only applies to reinforcement learning (and similar). Classification problems (and similar) are fundamentally conservative, because you feed them a finite set of preexisting training data that they must extrapolate from.
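
You can see the conservatism directly in a standard supervised loop. A minimal PyTorch sketch, with made-up toy data:

    import torch
    import torch.nn as nn

    model = nn.Linear(4, 3)                 # toy 3-class classifier
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()

    X = torch.randn(32, 4)                  # fixed, preexisting inputs
    y = torch.randint(0, 3, (32,))          # fixed, human-provided labels

    for _ in range(100):
        opt.zero_grad()
        loss = loss_fn(model(X), y)         # targets always come from the dataset;
        loss.backward()                     # nothing rewards outputs beyond it
        opt.step()

Every gradient pulls the model toward labels that already exist; there is no term that rewards novelty.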


Sure, but no matter what, you can't derive an "ought" from an "is". At the end of the day we merely tell these algorithms what is the case. No matter what goes on inside, they cannot output moral prescriptions.


>Sure, but no matter what, you can't derive an "ought" from an "is".

The same applies to humans, no?


No it doesn't. When a human gets wronged by what "is", they can likely feel or imagine a better "ought".


They can feel or imagine an "ought", but Hume's point is that they can't argue for it.


And yet humans do, all the time. It's similar to Hume's attack on causality: you can't show that A caused B, yet we all act as if it's the case when B always follows A. Kant's critique comes next.


You don't think machines can learn to classify situations as good or bad for them?


> It's just not commercially useful to do that yet.

Some people are working on it: https://news.ycombinator.com/item?id=21728776 (Dec 2019)



