At the risk of drawing an inordinate volume of fire and critique, I’m going to be candid and state openly that there’s no way I could possibly integrate this kind of material into my worldview, and therefore it stays on the cutting room floor.
I’m so utterly materialist and rationalistic that the lack of plausible mechanisms by which any of this might occur leads me to (arguably ‘unscientifically’) dismiss it out of hand.
Nor can I support the idea that promoting a benevolent fiction can in any way be positive. After all, one could always hew to honesty (as indeed this piece does) by arguing that benefits can accrue from believing in something irrespective of its veracity, without directly engaging with the truthfulness of the beneficial statement itself.
> the lack of plausible mechanisms by which any of this might occur
But some of the mechanisms themselves are already reasonably well known. We can already make points of light appear to another person via electrical signals, despite the fact that no such points of light actually exist in front of that person's eyes. So it's not a question of whether these phenomena exist, but rather of how exactly they are related to the brain.