> unless we actually manage to find the mechanism behind consciousness and manage to re-implement it (seems unlikely!)
Re-implementing it may be more likely than you think. The field of connectomics concerns itself with modeling natural brains and is currently constrained by scaling problems, much like genomics was a couple of decades ago. As computing power continues to grow, it's entirely plausible that humans will eventually be able to simulate an actual natural brain, even if that does little to further our understanding of how it works.
The current state of the art in AI is attempting to reach consciousness via a different route altogether: human design. Designed "brains" and evolved brains have a crucial difference: the survival instinct. Virtually all of ethics stems from the survival instinct. A perfectly simulated survival instinct would be ethically confusing, to be sure, but the appearance of a survival instinct in current LLMs is illusory. An LLM plays no role in ensuring its own existence the way we and our ancestors have for billions of years.