After randomly assigning men to mixed-gender and single-gender squads for Norwegian boot camp, this study found no impact on performance or perceived satisfaction.
Moreover, the men in the integrated squads developed more egalitarian attitudes. Interestingly, these attitudes were not maintained once the men were no longer in a mixed squad.
This was fascinating. Although I strongly disagree with a lot of the points and the arguments used, it's such a wildly different perspective that I couldn't help but be intrigued. In many ways it is a masterclass in how to write persuasively about what feels like a ridiculous premise.
The drunk-driving parallel I thought was more than a little amusing. A more apt comparison might just be driving itself - it's a risky activity both for yourself and for the others around you. Perhaps it even reaches the speculated 1% marker, but that doesn't appear to create any sort of moral imperative to change our willingness to drive.
Equally, I couldn't help noticing the level of subtle anthropomorphism going on - name-dropping depression as related to anhedonia is a clever literary technique for encouraging this natural thought pattern; even with the added "perhaps" and "some think", it makes no difference to the "bias ... easily distorting our intuitions".
I'd love to see a similar essay for plants - or even the AI that they mention. I could see the outline of an equally persuasive essay (a shame that the author treats AI as trivially less likely to be sentient - a claim I have trouble believing rests on anything beyond a "biological bias").
You need to say more about why you don't find the driving parallel the author provides appropriate.
You say:
> A more apt comparison ... doesn't appear to create any sort of moral imperative to change our willingness to drive.
How convenient. You choose something that we can't live without in our modern world (driving a car) and say "see, no need to change our behavior".
Meanwhile, farming bugs for food is not at all essential in the current world - we can easily feed everyone without subjecting ourselves to the possibility of doing something morally horrible.
Here's another parallel: you decide to start building sheds and burning them down. You now realize there is a 1% chance that the current shed has a child who decided to hide in it; do you burn it down because "1% is a low chance"? Of course not. And that's the point the author is making - when there is even a low percent chance of something very bad happening, and you're not doing anything that is essential or necessary, you ought not do it.
It's a very "Modest Proposal" sort of paper. You can tell it's a utilitarian-paradox paper in disguise because the author never distinguishes between death and suffering. If you could reasonably minimise or eliminate the insect suffering, I don't see any ethical dilemma here. A black fly's natural lifespan is only a few weeks!
The author is subtly invoking a utilitarian trick where you multiply a tiny number by a very large number and arrive at a nonsensical result.
So for example, the tiny harm of killing 1 insect times trillions of insects = unspeakable abomination.
If we follow this a bit further we can reasonably conclude that one of the most important moral problems for the human race to address is insect welfare and life extension.
> when there is even a low percent chance of something very bad happening, and you're not doing anything that is essential or necessary, you ought not do it.
It's not a utilitarian "trick" to care about numbers. No one will disagree that 2 deaths are worse than 1.
And people who are insensitive to numbers, I think, just don't understand math. Breaking a bone is not as bad as a death, but 100 billion people experiencing broken bones is, I would say, worse than 1 death.
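To make the aggregation explicit, here's the expected-value arithmetic as a toy Python sketch - every number below is a made-up placeholder for illustration, not an empirical estimate:

    # Toy version of the "tiny number x huge number" calculation.
    # All three inputs are illustrative placeholders, not real estimates.
    p_sentient = 0.01          # assumed chance that insects can suffer
    harm_per_insect = 1e-9     # assumed moral weight of one insect's suffering
                               # (on a scale where one human death = 1.0)
    n_farmed = 1e12            # trillions of insects farmed

    expected_harm = p_sentient * harm_per_insect * n_farmed
    print(round(expected_harm, 2))   # 10.0 -- the huge N does all the work

Whether you read the result as a reductio or as a genuine obligation comes down to whether you accept multiplying through like this at all.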
No negative utilitarianism needs to be invoked to make the argument that creating suffering is bad. The claim isn't that there is no possible benefit that we could have from farming bugs (even if they are sentient), but that the expected benefits are mediocre at best (just buttressing an already horrific status quo of farming animals).
The key phrase in my sentence you quoted is "essential or necessary". In order to counter my claim you would need to explain how the benefits of bug farming outweigh the potentially horrific state of torturing trillions of sentient beings.
ps - in principle, I am on board with "happy bug farms" where the needs of bugs are met, and they are happy. If we could guarantee this, I would be for it -- more happy sentient creatures the better. But you and I know that in fact, in our world, with profits being anti-correlated with taking care of the sentient beings at factory farms, starting bug factory farms shouldn't be embraced.
It seems sensible to say: these creatures, with nociceptors and familiar apparatus for feeling pain, who react to bodily damage in ways that suggest that they can suffer, and for whom pain can obviously serve the same evolutionary benefit as it does in humans, have a moderate to high chance of feeling pain and being conscious to some degree, so we should be careful.
Plants may release chemicals in response to physical threats, but they don't have the pieces of the nervous system we attribute pain to, don't seem to have any level of consciousness, and don't have an evolutionary benefit to subjective suffering. Therefore, morally, there's no reason to treat them as more than inanimate objects.
I feel like AI could theoretically someday have the potential to suffer, but that isn't really a current concern.
Based on all available evidence, the article's argument for at least being careful about insects' potential suffering seems sensible, but the plant argument strikes me as absurd.
Pain != consciousness, though, and while I agree that in principle we should strive to reduce total pain, I also think there is a strong nonlinearity in how much we should weight nervous systems of different complexities. I would say the UK legal cutoff for lab animals (where we care about vertebrates and octopuses but not other invertebrates) is about right. It is a complex issue - the presence of an isolated, active nociceptor (or even thousands of them) isn't of concern to me. But somehow the conscious readout of them is...
I do think there's a big moral difference between species. I care about protecting a human from suffering much more than a dog, a dog more than a mouse, a mouse more than a cricket...
But I see no reason for some cutoff, where I arbitrarily decide to care for everything above a certain level of complexity, and decide not to care about anything below. "Vertebrates and octopuses" does not seem like a group that share any exclusive traits, i.e. any moral reason I have to care about an octopus seems like it would lead me to care about an insect, just maybe to a lesser extent.
Even if you value insect lives extremely lowly - if there's any moral value to them at all, mass farming them in the trillions or, possibly someday, quadrillions would be a moral travesty, even if their per-individual moral value is extremely small, right?
I feel like "there seems to be a reasonable possibility that insects suffer" implies that we should have some level of interest in preventing their excess suffering, where practical.
Two things occur to me:
1: I think there is a reason for a cutoff. To take the AI case: I have no doubt that any AI worth its salt could suffer. But that AI will (probably) be built from transistors, and I think that it is meaningless to consider the suffering of a thermostat. So there's a cutoff somewhere - maybe it isn't that 'hard', but some things are on one side and some on the other. Flies, I think, are more like thermostats, and I value each one of them at very close to zero. But you are right, possibly not *exactly* zero, and that eps might add up in some way. Would destroying a billion thermostats be ethically problematic?
2: Some time ago I seem to remember the Swiss Government discussing a proposition to recognise the 'dignity' of insects and plants. Dignity was held to be a property distinct from suffering, and I think part of the argument was that it was somehow mutual: damaging life in any form was harmful to the dignity of both the damager and the damagee. I have some sympathy with that idea. In other words, we should care about the bugs because we care about ourselves.
Disclosure: I do a lot of scientific work with Drosophila, and this is something that I mull over from time to time as I slaughter them in their thousands...
Insofar as I can tell, thermostats have a 0% chance of feeling pain (or as close to 0 as is logically possible), so I see no need to afford them moral consideration. If we had evidence that an AI that could suffer currently existed, and if there were any overlap between the suffering-relevant pieces of that AI and thermostats, I'd start to worry about "killing" thermostats. It still seems like the chance of insects subjectively suffering is orders of magnitude greater than the chance of modern thermostats suffering.
I really enjoyed the ACX article; thank you for the recommendation! I agreed with a lot of the general points he was making.
Not sure about the "dignity" of e.g. plants, as I think there are plenty of ways to harm plants and animals that are helpful to humans. It's an interesting idea, though, and I do like the attempt to make a practical argument instead of a moral one.
In any case, while (to some extent) I care about the possible suffering of insects who are farmed in the trillions, I'm not particularly morally concerned about thousands killed for greater scientific gain. Could I ask generally what work you're doing with them? I'd be interested in learning more about it.
Frankly, right now anyone who isn't already in the game isn't able to acquire Ether without paying for it.
Mining at a rate necessary to get any reasonable amount of Ether is a huge investment, and is already out of reach for the average person. Setting your desktop computer to mine definitely won't pay for your small smart contract.
> Mining at a rate necessary to get any reasonable amount of Ether is a huge investment,
I started mining on my own PC a couple of weeks ago, with the GPUs I already had (2x 1080 Ti), and have made about 0.5 Ether so far. So it's definitely not impossible, with a little bit of time, to get enough currency on a consumer PC to interact with or even deploy a smart contract.
I just use PhoenixMiner - anything that has the option to reduce memory latency ('-straps 2') should do. I'm getting 47 MH/s per 1080 Ti with that.
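For rough intuition, here's the back-of-the-envelope maths as a Python sketch. The network hashrate below is a placeholder assumption (it swings enormously over time), so treat the output as illustrative only:

    # Rough expected Ether per day for a small rig (illustrative only).
    my_hashrate = 2 * 47e6           # two 1080 Tis at ~47 MH/s each, in H/s
    network_hashrate = 500e12        # assumed network hashrate in H/s (varies a lot)
    block_reward = 2.0               # ETH base reward per block (post-Constantinople)
    blocks_per_day = 24 * 3600 / 13  # ~13 s average block time

    share = my_hashrate / network_hashrate
    eth_per_day = share * blocks_per_day * block_reward
    print(f"{eth_per_day:.4f} ETH/day")  # ignores uncle rewards, pool fees, and luck

Plug in the network hashrate for whatever period you're curious about; the linear share-of-network model is essentially the whole calculation.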
I actually had to spin up a virtual machine with GPU passthrough to launch Windows, because I couldn't get the GPU tweaks working on Linux. It's really nice that vfio is now in the Linux kernel; the setup has been a breeze (compared to a couple of years ago, when you needed a custom kernel).
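For anyone who wants to try the vfio route, the modern setup is roughly this (a minimal sketch; the PCI IDs shown are the usual GTX 1080 Ti pair - verify your own with 'lspci -nn'):

    # 1. Kernel command line (e.g. in GRUB_CMDLINE_LINUX): enable the IOMMU.
    #       intel_iommu=on iommu=pt      (use amd_iommu=on on AMD platforms)
    #
    # 2. /etc/modprobe.d/vfio.conf: bind the GPU and its HDMI audio function
    #    to vfio-pci instead of the nvidia/nouveau driver.
    options vfio-pci ids=10de:1b06,10de:10ef
    #
    # 3. Rebuild the initramfs and reboot; 'lspci -k' should then report
    #    "Kernel driver in use: vfio-pci" for both functions, and the card
    #    can be handed to the Windows VM (e.g. via libvirt/QEMU).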
Probably a bit of both - it feels too excited. Podcasts tend not to be things you shake your fist in the air for; that's more winning-the-lottery territory.
Looks interesting! I'll be honest, it took me a little longer than you'd want to work out what the product was.
I know now that it's in the name! However, I'm used to ignoring names when trying to understand what a product does. Replacing the "connecting organisations with ears" tagline with something like "Access your podcast via the phone, and get more listeners" would be clearer, even if it's not as clever.
Looks like Google has a mostly internal user ID ("gaia_id"); once you've got this small identifier, you can look on each Google-related site for publicly available information relating to that user ID.
Looks like Google is aware of it: the reference to gaia_id has been removed from YouTube page source (Aug 2020), public accessibility of photo albums has been locked down (Sep 2020), and connected email address leaks from Webmaster Tools have been prevented (Sep 2020).
But no one appears to have taken the risk/time to properly validate it.