
The thought experiment makes me think of the parallelizability of tasks. There are definitely kinds of tasks that the setup as described wouldn't be good at accomplishing. It would be better for tasks where you already know how to do each individual part without much coordination, and the limiting factor is just time. (Say you wanted to do detail work on every part of a large 3D world: each of yourselves could take on a specific region of a few square meters and only worry about collaborating with its immediate neighbors, roughly the pattern sketched below.)
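To make that concrete, here's a minimal sketch of the region-per-worker pattern in Python. Everything here is a hypothetical stand-in (the world data, the tile size, and the smoothing step that plays the role of "detail work"): each worker refines only its own tile, and the only coordination is a one-cell halo copied from each immediate neighbor.

    from concurrent.futures import ProcessPoolExecutor

    def refine(job):
        # Each worker sees its own tile plus a one-cell halo from each
        # neighbor; that halo is the only cross-region coordination.
        left, tile, right = job
        padded = left + tile + right
        # Stand-in for the real detail work: smooth each cell using
        # only itself and its adjacent cells.
        return [(padded[i - 1] + padded[i] + padded[i + 1]) / 3
                for i in range(1, len(padded) - 1)]

    def split(world, size):
        # Cut the world into tiles and attach each neighbor's edge cell
        # as a halo; tiles at the boundary just repeat their own edge.
        tiles = [world[i:i + size] for i in range(0, len(world), size)]
        jobs = []
        for i, t in enumerate(tiles):
            left = tiles[i - 1][-1:] if i > 0 else [t[0]]
            right = tiles[i + 1][:1] if i + 1 < len(tiles) else [t[-1]]
            jobs.append((left, t, right))
        return jobs

    if __name__ == "__main__":
        world = [float(x % 7) for x in range(40)]  # hypothetical world data
        with ProcessPoolExecutor() as pool:
            refined = [cell
                       for part in pool.map(refine, split(world, 8))
                       for cell in part]
        print(len(refined), refined[:4])

The point being: the number of workers scales with the number of tiles while each worker's communication stays constant, which is exactly the shape of task where throwing more copies at the problem actually helps.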

Though I think of this setup only as a first phase. Eventually, you could experiment with modifying your copies to be more focused on their tasks and to care less about the outside world, so that they don't need to be reset regularly and can instead be persistent. Ethical concerns start becoming a worry once you're talking about copies that have meaningfully diverged from the operator, but I think there are appropriate ways to accomplish it. (If regular humans have logical, if not physical, parts of their brains dedicated to specific tasks separate from the rest of their concerns, then in principle it should be possible to mold a software agent that acts like just that part of a brain without having the same moral weight as a full person. Nobody considers it a moral issue that your cerebellum is enslaved by the rest of your brain; I think you could create molded copies that have more in common with that scenario.)

I wonder if these sorts of ethical concerns would follow an "uncanny peak", where we get more and more concerned as the brains are modified in more and more ways, but eventually they become so unrecognizable that we get less concerned again. If we could distill our ethical concerns down to some simple principles (a big if), maybe the peak would disappear, and we'd see that it was just an artifact of how we "experience our ethics"? But then again, maybe not?