so he was mostly unresponsive but could still feel pain. they were doing stuff with his tubes with no pain meds for almost two decades. that has to be one of the most horrifying things i've ever imagined.
That was really surprising to me. I'm not a doctor, but it seems easy enough to assume that patients in a vegetative state can feel pain and medicate them accordingly. The risk of over-medicating in that situation seems to be outweighed by the risk of someone having to suffer in silence.
The assumption that someone who can’t communicate pain is unable to feel pain seems misguided at best.
This type of thing is standard in healthcare. Doctors only began acknowledging that babies feel pain in the late 1980s, and only then because a mom made a big fuss in the media after discovering by accident that her baby had received open-heart surgery without anesthesia.
All of a sudden, the decades of research that had been interpreted as showing babies didn't feel pain were overturned, and new research showed they did.
Medical research really is mostly a series of fads.
I can imagine that there are legitimate reasons not to use anesthesia on babies who show no signs of remembering the pain when they've become able to communicate. Anesthesia is not without risk, probably doubly so for a developing brain.
You are in an accident and have to be operated on, but using anesthesia would drop your probability of survival from 90% to 10%. Instead, they can give you a drug that will prevent you from remembering the pain and has no influence on the probability of a successful operation.
What would you choose?
I'm not saying that this is the exact case with infants. Just trying to illustrate that there might be valid reasons to inflict temporary pain in order to protect someone's health.
Technically we don't know whether general anesthesia actually prevents you from feeling pain, or whether it just prevents you from forming memories. As far as I know at least.
General anesthesia is usually a combination of multiple drugs. We know exactly what each drug does; medical science just isn't so good at explaining why the drugs have these actions. Anesthetists combine these drugs to suit the requirements of the procedure and the patient.
Anesthesia usually includes drugs with distinct, separate actions, like an analgesic (to block pain receptors), muscle relaxants (so patients can be moved and organs accessed), CNS depressants (to prevent unexpected spasms disturbing the surgeon), coma inducers (so patients don't chat with the staff and freak them out after practicing on cadavers for years), and amnesia inducers (because surgery is kind of gross).
We know enough about them to use them safely, but most anaesthetic drugs are somewhere on the spectrum from "not fully understood" to "a complete mystery". Science knows how anaesthesia works in the same sense that I know how a car works - I can push the pedals and turn the wheels with sufficient proficiency to get to the grocery store in one piece, but I'll be damned if I can explain what's happening under the hood.
I guess that's technically correct, in that it's difficult-to-impossible to "know" about someone else's subjective experiences.
On the other hand, painful stimuli tend to (e.g.) increase a person's heart rate and blood pressure, and the proper anesthetic plane blocks those responses, so it's clearly doing something in the moment. What we don't know very well is the mechanism of action for general anesthetics.
I would imagine that we must know that by now. We should be able to just see on a brain scan whether the pain centers are lighting up in response to pain.
It seems like a case of motivated reasoning. Doctors needed to operate on babies; anesthetizing them would be hard. So they decided that babies can't feel pain and never investigated methods of anesthesia on babies.
More like anesthetizing them would be risky and they're not gonna remember it so it's not worth the increased risk to their life since infants are already fairly fragile and saving their life is the primary goal.
It's not just "it's hard and we're lazy, so let's make up an excuse" like you seem to be implying.
Edit: Since apparently it wasn't clear, we're talking about the timeline when this issue was hashed out, so the 1980s. Anesthesia carried substantially more risk then because back then we didn't understand it nearly as well.
We anesthetize infants fairly often now[1]. It's still a bit risky, but it's fairly common. It's gotten less risky because we do it a lot and have gotten quite good at it. If we'd never revised the science on infant pain, we never would have even bothered to try. Turns out, it's possible to do it fairly safely.
It's one thing to say we're not going to use anesthesia because it's too much of a risk; it's another to say they don't feel pain at all, so let's not worry about it.
We're better at anesthesia now, but would we have ever gotten good at anesthetizing babies if we kept believing that they didn't feel pain in the first place? Why bother working out how to do it if we think they don't feel pain?
i'm very glad to find a person who shares my feelings. every time i open my feeds and see a headline about some kind of machine learning or ai breakthrough, i feel physically uncomfortable. every time i open one of those links there is a chance that it will change the equation of life.
the other day i opened one of those links and it was GPT-2. besides all the insane implications of GPT-2, what bothers me is that i am no longer able to assume that any internet comment is written by a human, no matter how convincing. there are still comments that GPT-2 could not write, but anyone who points that out is pretty short-sighted, because it won't be long before there are vanishingly few comments that could not have been generated. i kind of liked knowing that a person was typing out (almost) all those comments.
one of the biggest realizations i've had recently is that technology does not cut equally in both directions. everyone in my generation has thought of technology as a neutral entity: for every benefit of a given technology, one can point out a corresponding disadvantage. on the surface it seems like the scale dips neither toward the societal disadvantages nor toward the societal benefits. this is a very fundamental belief, and it's wrong. it's funny how people put so much faith in such fuzzy logic.
the implications of that realization are difficult to swallow. it means that with every new technology introduced into the world, there is the potential for it to harm people's quality of life. or improve it. but there is no regulation of technology, so it's a crapshoot. we've been rolling the dice for a long time, and we didn't even know it. and i think we've been winning. but i think that high-level automation is not going to be a win for us.
besides all of that, there is absolutely no debate that these advancements in ai are to our generation what personal computers were to the baby boomer generation. without close attention, we will fall behind, and our kids will have fluency in the new world of automation while we cling to very old and outdated patterns. in other words, it makes me feel very old.
> one of the biggest realizations i've had recently is that technology does not cut equally in both directions. everyone in my generation has thought of technology as a neutral entity: for every benefit of a given technology, one can point out a corresponding disadvantage.
Maybe that's just because technological innovation is slowing down.
I'm not sure I agree that it's slowing down. I wish it would for a bit so everyone could catch their breath. Socially, we're just catching up with the implications of social media, and there's so much we haven't come to terms with, like CRISPR. It seems like what we've accomplished in the last 10-15 years would previously have happened over several generations. We really aren't ready for the changes that are baked in now.
> I'm not sure I agree that it's slowing down. I wish it would for a bit so everyone could catch their breath. Socially, we're just catching up with the implications of social media, and there's so much we haven't come to terms with, like CRISPR.
Societal changes lag technological ones by at least 5-10 years. So the changes we're feeling now were largely the result of technological changes in the early 2010s. But I do think technology today is slowing down. Individual processor speed certainly has, which has far reaching implications. Cloud computing and GPUs have given general purpose processes another step in "perceived" performance, but those are pretty much one-trick ponies.
If individual processor performance doesn't increase, the economies of scale that a large data center gives you eventually hit diminishing marginal returns, and you're again limited by individual processor speed. GPUs similarly give a speed-up for applications that can be optimized for them, but eventually they will run into the same performance walls that general-purpose chips run into.
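The diminishing-returns point can be made concrete with Amdahl's law: if only a fraction p of a workload can run in parallel, the speedup from n processors is 1 / ((1 - p) + p / n), which is capped at 1 / (1 - p) no matter how large n gets. A minimal sketch (the 90%-parallel workload is a made-up number for illustration):

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Speedup of a workload where fraction p runs in parallel on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

# Hypothetical workload that is 90% parallelizable: the cap is 1 / (1 - 0.9) = 10x,
# so piling on more processors yields rapidly diminishing returns.
for n in (2, 16, 256, 65536):
    print(f"{n:>6} processors: {amdahl_speedup(0.9, n):.2f}x")
```

Even with 65,536 processors, the 10% serial portion keeps the speedup just under 10x, which is the "again limited by individual processor speed" point above.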
Other technologies, like machine learning and much of genetics, rely heavily on exponential improvements in the underlying hardware.
If the death of Moore's law is really happening, it will have far-reaching implications in all computation-based industries.
people in here keep saying that uber is not making a profit. where is the source for that? i remember people saying the same thing about tesla. complete dogma. nobody seemed to understand that tesla was investing huge amounts of money into developing other cars and expanding its factories. so what are uber's expenses? it does not pass the smell test. what is the expense that is killing them?
and people in here also don't seem to appreciate that uber can change its prices. it can't right now, but it will be able to soon. all the investor money floating around means that their competition may be able to operate in the red for extended periods of time. when the investor money dries up and everyone is surviving on profit, prices can go up. and they will go up, because rideshare is the most efficient and cheapest way to do taxis -- nobody is going to come in and disrupt uber. except for driverless cars. but driverless cars aren't going to happen. not anytime soon.
edit: i just looked at the chart in the document, and as far as i can tell they are 3B in the red. not really sure what the units are in that chart. ok, well, there are a lot of expenses where i can't tell exactly what they are, but their marketing expenses were 3B. 3 fucking billion dollars -- am i reading that correctly? that's the same amount by which they are in the red. i also see some very high numbers for management. all uber has to do is cut the fat and it will be making a nice profit.
maybe it's silly, but i think california will always attract lots of great people, because the weather everywhere else fucking sucks. i did a huge road trip across the US last year, and the biggest lesson i drew from it was that california is paradise compared to the rest of the country. i never traveled as a kid, so i assumed that things were nice in other places too. seriously, i don't understand why anyone chooses to live somewhere else. other places are cheap, but they also suck massively. and people who live in NY? it's just as expensive over there, with even more restrictive gun laws (you can't even carry a fucking taser), and the weather SUCKS. why someone would know about both places and choose NY over CA is a mystery to me.
i don't know why people think there is a paradox. the way that life springs from barren rock is not known. if we don't know how that works, then we can't assign a probability to it happening on a given planet. people just assume that the probability is very high. it could be next to nothing for all we know, small enough that even the entire universe only produces one instance. the paradox is all based on huge assumptions. there is no paradox until we prove that the probability is high.
well, that's not the whole story, because you need life but you also need intelligence. and again, everyone assumes that if you have life it will eventually become intelligent, and that if life is intelligent it will eventually build space shuttles. it's all a huge, huge assumption. look at all the animals that qualify as intelligent. some birds and monkeys are hugely intelligent, but they don't build space shuttles. this shows that intelligence doesn't equal space shuttles, and that even when life springs up, and even when it becomes intelligent, it could still be super unlikely that it will build space shuttles.
hell, there are even humans that might never have built space shuttles. there are indigenous communities all over the planet that never developed technology and probably never would have. when you live in a warm climate and food is abundant, there may never be a reason to.
it is unproven that it is likely at all for space-shuttle-level intelligence to spring up from bare earth. there is no paradox. it's probably just really unlikely.
> We are aware that some researchers have the technical capacity to reproduce and open source our results. We believe our release strategy limits the initial set of organizations who may choose to do this, and gives the AI community more time to have a discussion about the implications of such systems.
i wish people would stop pretending that there is some good way to bring this technology into existence. yes, it's nice to try and let the good guys use it first, but that's just irrelevant in the long term. ultimately the result is going to be total proliferation of this technology in every area where it has utility, and it will be used to the maximum extent in every application it is suitable for, including the really bad ones. the roll-out will make the transition smoother, but it won't change what's actually important: the end result on the lives of our grandchildren.
growing up around rapidly advancing technology, i thought of technology as a double-edged sword: it cuts equally in both directions. but after thinking about it for a long time, i now believe that, in relation to human well-being, the presence of a given technology or combination of technologies can be a net positive or a net negative, as well as neither. we need to think more carefully before letting these genies out of their bottles.
this is not an example that i think will be very negative, but it's very powerful and unexpected, for me at least. the next powerful and unexpected thing may not be benign. banning development of these kinds of technologies should not be off the table.
after reading this: https://blog.openai.com/better-language-models/#sample8 and browsing reddit for a while, i have realized that from now on i cannot assume human origin for 90% of the comments i read on reddit. this is insane.
>i have realized that from now on i cannot trust 90% of the comments i read on reddit. this is insane.
I hate to be cynical here, but I'm glad this has made you realize something that's been true since the Internet started: you shouldn't trust what's written on any forum! Be skeptical.
Hmm, video evidence may be trustworthy - e.g. video from a CCTV system. Perhaps it could be written onto some write-once, tamper-resistant format? Not sure how that would look.
I suppose one place to start thinking about this would be photos. Are photos admissible evidence or do courts only allow negatives? Photos have been modified for a very long time. This is probably the most famous example: https://amp.businessinsider.com/images/52af668569bedd3b2643d...
But yes, I am far more worried about how much more effective fake news will be once it starts coming with actual video.
How long? That happened several years ago. Videos have been faked forever. There have been all sorts of optical illusions, forced perspective, and special effects for 100 years.
You shouldn't trust any single source. Only a preponderance. Even then, be open to skeptics.
Someone's skepticism, knowledge, and careful assessment might lead them to think that a forum post has an X% chance of being machine generated (as one example scenario). There are big differences between values of 0.1%, 1%, 10%, 50%, 90%, etc., and the resulting impact on the people who are involved in that system.
Because of this, it isn't helpful to say, "Oh, you should always be skeptical! It doesn't matter if things have changed significantly such that we have more reason to be skeptical now."
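To see why those values matter so much, here's a toy expected-value calculation (the daily reading volume is a made-up number): the count of machine-generated posts a reader can expect to encounter scales linearly with that per-post probability, so 0.1% and 50% describe wildly different experiences.

```python
# Toy illustration: expected machine-generated posts per day, assuming each
# post is independently machine generated with probability p.
comments_read_per_day = 200  # hypothetical reading volume, not a measured figure

for p in (0.001, 0.01, 0.10, 0.50, 0.90):
    expected = p * comments_read_per_day
    print(f"p = {p:>5.1%} -> ~{expected:g} bot comments/day")
```

At p = 0.1% that's a curiosity you almost never meet; at p = 50% it's half of everything you read.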
> i have realized that from now on i cannot assume human origin for 90% of the comments i read on reddit. this is insane.
Ever since Photoshop got good (20+ years now?) we haven't been able to assume that images are "real" either and things turned out fine. We'll have to learn to be skeptical.
Anyway, Reddit already has dedicated bots (account names ending in "SS") posting and commenting on their own content, mostly hilarious but sometimes fairly "real". Check out /r/SubredditSimMeta.
I personally have huge concerns regarding the public global distribution of what is clearly a weapons grade technology.
Authoritarian countries are already heavily invested in utilising these technologies to suppress the will of their people.
However, there is nothing that will stop them from further developing these technologies even without access to the research from more liberal nations.
To halt development is to drop out of an arms race that we cannot afford to lose.
>and browsing reddit for a while, i have realized that from now on i cannot assume human origin for 90% of the comments i read on reddit. this is insane.
I wonder if eventually we'll have sites like Reddit, or forums, that require you to demonstrate who you are before joining, e.g. by requiring a photo of you and your passport. The site wouldn't use that information for anything, but this would reasonably guarantee that there's a real identity behind every poster.
i never understood people who think there will be water wars. RO plants like these are all you need. there was some scare-mongering about brine cakes, and i never understood how anyone fell for that, because the amount of salt produced by these plants compared to the volume of the ocean is beyond minuscule. lo and behold, this article says the brine problem was found to be a false alarm after being looked at again. slap a solar field on this baby and you've got sustainable, mostly disaster-proof water. exciting times.
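The scale claim is easy to sanity-check with a back-of-envelope calculation. The figures below are rough assumptions for illustration (an assumed output for one large RO plant and the commonly cited total ocean volume), not numbers from the article:

```python
# Back-of-envelope: fresh water removed by reverse-osmosis desalination
# versus the ocean's total volume. All inputs are rough assumptions.
plant_output_m3_per_day = 5e5   # assumed output of one large RO plant (m^3/day)
ocean_volume_m3 = 1.3e18        # roughly 1.3 billion km^3 of ocean

yearly_fraction = plant_output_m3_per_day * 365 / ocean_volume_m3
print(f"one plant, one year:      {yearly_fraction:.1e} of the ocean")
print(f"10,000 plants, 100 years: {yearly_fraction * 10_000 * 100:.1e} of the ocean")
```

Even under these generous assumptions, the water involved is a vanishing fraction of the ocean globally; the concern in the brine debate was local concentration near outfalls, which is a separate question from global salinity.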
technologies having inherent political traits is a consequence of a much deeper and more important aspect of technology -- that it has inherent traits of human economics. if you take a set of technological realities that might be imposed on some society, it leads to that society eventually reaching exactly one stable state.
a very simple example is the technology of guns. this technology leads inevitably to a state of the world characterized by the presence of gun-utilizing nations. this is because the world is a kind of market, and when guns exist, the only entities that are competitive are those that use guns.
right now, market economies dominate the world. even china utilizes markets for its own internal economic affairs. when AI comes, this will be turned on its head -- market economies will no longer be competitive, and centralized ones will replace them. this will be a pretty shocking change.
also, rather soon, humans will stop being present. this is because they will no longer be competitive; their existence will be vestigial and therefore fragile, vulnerable to the slightest perturbation. it will be similar to endangered animals in the present -- no longer competitive, their existence no longer perpetuates itself, and it is therefore terminated for any old reason, such as condominium developments or pollution.
> market economies will no longer be competitive and centralized ones will replace them
Assuming that the AIs have accurate and timely information. I suspect that one of the (many) reasons why modern economies can be dysfunctional is that the information feedback loop is often inaccurate, incomplete, or lagging badly. Solve that problem and you're gold.
your comment is rude. the emotional nature of your comment reflects the fact that you find something in my comment troubling but have no way of proving it wrong. what you do instead is attack the character of the person who said it. do you seriously think that you are able to see into the mind of a person based on a terse and straightforward account of the economics of AI? can you not recognize that this is impossible? if you have any actual, substantive counter-argument to what i have said, i will gladly receive it. otherwise, i must say that it is you who should keep your comments to yourself.
doesn't this make you think that we shouldn't give websites the ability to control so much stuff? why can't we have a browser with basic video, photo, text, forum, etc. functionality baked in, with no nonsense like a turing-complete programming language and all the complexity and exploits that come with it?
i don't mean to be negative, but ML and even conventional computing are starting to make me tired. i'm always wondering what ML will be able to do better than humans next. what is next up for automation? will this be the one to send a shock-wave through an industry or the economy? i feel like i need to constantly watch and keep track of the progress that's being made. and i'm starting to get tired of having to re-think life again and again.
for example, google has published voice synthesis samples, voices generated from text, that are indistinguishable from real human speech. it hasn't been perfected yet, but i think most people would agree that we basically now live in a world where voice recordings can't automatically be trusted the way they used to be. it completely changes the way you think about and navigate the world. it will open up a universe of new schemes, methods of fraud, etc., that we will have to adapt to.
then there are deepfakes. there are limitations, and the results aren't perfect, but it's very early days. again i would say that the consensus among us is that we now live in a world where video evidence is basically no longer intrinsically trustworthy in the way that it used to be.
i practically grew up inside a computer. but i am now sensing that as ML fills in, it's going to be a very uncomfortable ride for me personally -- and i don't understand how it couldn't be for anyone else. and what about when AGI comes? just curious to see if anyone else shares my experience with this.