Even though I had a very good understanding of the anatomy of the eye and largely how vision works, my internal concept of vision was still kind of like what is described in the title.
Then I realized your retinas are more like a specialized tongue covered with lightbuds, and my sense of vision changed dramatically. I suddenly had a sense of blindness, that I could see only as far as the tips of the rods and cones in my eye, and everything beyond that was just lining up the photons so they made sense when they arrived.
I think what you did was roll back a layer of abstraction your brain created for you. Sure, your eyeballs are high-sampling-frequency tongues swimming in a sea of light. But the brain processes those raw sensory readings, fuses them with a bunch of other things, and presents it as a view of the world, and then, to simplify things, it tells itself that this is the world.
I'm actually super-interested in ways of building those layers of abstraction. Consider tool use. If you've used any tool a lot, you know the feeling of it becoming an extension of yourself. Proficient drivers don't think about working the pedals or turning the wheel, they just sort of are the car and will themselves to move where they want to go. As I type this comment, I don't think about my fingers pounding on the keyboard, I just will the words to appear on screen. The brain is good at papering over the mechanics of interacting with the world, and I think it's entirely feasible to start incorporating new senses if our devices can be designed around this purpose, instead of relying on explicitly displaying data for our eyes.
I remember reading about pilots placing an electrode on the tongue and eventually learning to intuitively feel what the external sensor was telling them.
Or another one about a person wearing a belt which always vibrated in the spot facing north. This helped them navigate cities more efficiently after getting accustomed to the vibrations.
Anecdotally, I participated, as a non-blind control, in a study assessing cross-modal perception in blind folks (how they use their visual cortex to analyze haptic stimuli).
The meat of the experiment was evaluating pattern-recognition performance with an array of electrodes placed on the tongue.
Before and after training, all subjects underwent evoked potential recordings of their tongue (recording the EEG following electric stimuli on the tongue).
Being aware of the protocol in advance, during the "before" evoked potential session, I concentrated on my mouth, visualizing it as a place I was sitting in.
I ended up experiencing blue phosphenes (light flashes) synced with the electric pulses, and my visual cortex was lighting up along with my sensory cortex. That was my first and only proper synesthetic experience (I used to associate vowels and colors as a kid, each vowel clearly mapped to a color, but it was conceptual, not perceptual).
Since I was an outlier (similar evoked potentials were only seen in blind subjects after training), I was excluded from the rest of the study, so I could not experience the complex patterns on the "tongue display unit".
This is discussed in the book Incognito by David Eagleman. I remember reading about someone learning to "see" with their tongue. Along the same line, his team developed a vest which translates information into vibration patterns on the torso: https://www.smithsonianmag.com/innovation/could-this-futuris...
> Or another one about a person wearing a belt which always vibrated in the spot facing north. This helped them navigate cities more efficiently after getting accustomed to the vibrations.
IIRC, the participants also felt incredibly lost after they had to give the belt back.
This is a strong area of personal interest for me. I think BCI will eventually begin to incorporate those ideas of turning external stimuli such as haptics into representations of data our brain can understand.
ML is popular because it's a set of learning algorithms we can feed numeric data into and get learned predictions out of. Once we learn how to feed this kind of numeric data to the brain (similar to the tongue sensor or a haptic vest), we'll have the most efficient learning algorithm consuming and making sense of this data!
I'm very excited for BCI, more so than for virtual reality or augmented reality, if only because it'll eventually allow me to remove any physical barrier to interacting with a computer.
I suspect that, due to the abstraction process you've noted, it won't be long until the computer is a direct extension of will. Once people start realizing this, I think society is going to upend, and a small number of extremely high-output people are going to remake the world.
The Singularity is real, and I think simply obsoleting the mouse and keyboard is going to kick it off. We could even replace screens, but I see it as less of a factor.
His position is less about speed and more about creation. If you didn't have to shift cursor focus or “delete 3 chars and replace with these other 3 chars”, would you represent things directly to the machine like you would to yourself, unbridled by the necessity to serialize these thoughts via the keyboard?
It may be the difference between “Refactor > Extract to parameter” and “Find all uses of function. Replace signature with modified signature, setting new parameter to value desired, using parameter from signature instead of local variable”.
When you are a child, you express yourself by constructing words of letters, then sentences of words. What if we're at the sentence stage right now, and the future is expressing ideas as paragraphs, all at once?
That would be a shift akin to macros or HKT in programming languages.
That’s what he’s selling. Not the typing. The typing just hurts immersion a little bit.
I don't know if it's typing speed or not, but dictating a narrative and typing it feel completely different. I type at about 80-90wpm and probably speak at 150-180. That impedance mismatch is enough to block free-flowing thought and cause me to repeat/rewind sentences. I don't know if it's limiting, as the output of typing is usually less verbose, but it takes me longer to get it down.
It's not so much about typing speed for me, more about the physical act of sitting at a computer and coding. If I could think code into existence, then I can code literally anywhere. Laptops are way more portable than desktops, but not nearly portable enough.
I think it's really interesting how, when we learn new things (such as this lesson about how the eye works), we often only learn them "halfway." E.g., obviously you're not blind, and you see as far as you like, but your old sense of how sight works involved contact, and your new concept didn't fully rid you of the notion that contact was a limiting factor. Only because these two contradictory notions could coexist in your mind did you feel "blind."
Similarly, I've had people tell me that reading about the "microbiome" is really distressing, since afterwards they feel like they have some alien infestation in their bodies, as though they were full of bugs. On one level they learned a new thing, that healthy human bodies contain lots of various microorganisms, but some part of their psyche still maintained that any such creatures must be pathological. Their obvious health doesn't make them feel like microbiome theory is false, nor does it fully convince them that their microbiome is benign.
There are probably a lot of subtler examples of this, and it probably goes a long way toward explaining why so many people find science distressing or depressing.
> I've had people tell me that reading about the "microbiome" is really distressing, since afterwards they feel like they have some alien infestation in their bodies, as though they were full of bugs. On one level they learned a new thing, that healthy human bodies contain lots of various microorganisms, but some part of their psyche still maintained that any such creatures must be pathological.
We also all maintain the perception of the "wholeness" of our body. But at the level of the microbiome, it's just a bunch of cells mixing with another bunch of cells with different DNA. It's all one large bag of smart sand. It's another realization that may be hard for some people to handle.
> But at the level of the microbiome, it's just a bunch of cells mixing with another bunch of cells with different DNA. It's all one large bag of smart sand.
Well, in terms of biomass, not quite, and our gut flora are also topologically still outside our bodies. But I agree with the overall sentiment; whether we call it a bag of smart sand or a portable, sealed-off, climate-controlled micro-ecology is just details.
Interesting. My model of a pathogen (mostly shaped by my biology teacher) was one of a symbiont not completely adapted to the host body (yet). This was decades ago, but when I later read up on the microbiome, the image of an alien infestation did not really cross my mind. What caused me more concern was the (halfway) realization that this symbiosis goes as far as actually forming the human body.
Classic example of using the logical mind to become less effective as a human being: bringing the intellect into an area that already works well subconsciously, and trying to do its job for it using thinking. :)
The solution is to just trust your eyes and vision (and by extension, trust the universe at large, if you extend this into the spiritual realm): whatever the exact mechanism of the eyes, they obviously work well for you. It doesn't make sense to take one small aspect of them (the low number of lightbuds) and read too much into it, when their shortcomings are easily abstracted away, for example by innate intelligence that scans the environment using those few buds to produce a high-megapixel internal representation that works well enough. Look at the big picture.
I remain skeptical. The effect size here is really small.
They presented subjects with an image of a rectangle sitting on a table with a human face either looking at it or not looking at it, from either the left or the right, and then asked them to work out what angle the rectangle would need to be tilted at to be perfectly balanced on one point. The idea being that if people model vision as a force beam emitted from the eyes, then people will assume that that force needs to be countered by leaning the rectangle slightly towards the eyes.
The difference between a face looking at it and not looking at it was less than 1 degree of rotation [0]. They also don't say how many pixels tall the rectangle was. For all we know, it might amount to only a pixel's difference in the position of the top edge.
If the images in the article are to scale, the height of the rectangle is 1/8 of the width of the image, so if we assume it is as wide as possible on a screen of ~1200 pixels, then the height of the rectangle is ~150 pixels.
Also, 1 degree = pi/180 ~= 1/60 radians. The difference they are measuring is approximately 1/2 degree, so with a height of 150 pixels, a 1/2-degree tilt shifts the top edge by roughly 1-2 pixels. [In case it is not obvious, I'm making a lot of assumptions and approximations.]
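A quick sketch of that back-of-the-envelope arithmetic (the screen width, the 1/8 height ratio, and the 1/2-degree effect size are all assumptions carried over from above):

    import math

    # All numbers are assumptions from eyeballing the article's figure.
    image_width_px = 1200                  # assumed full-width display
    rect_height_px = image_width_px / 8    # rectangle height ~1/8 of image width

    tilt_deg = 0.5                         # approximate effect size being measured
    shift_px = rect_height_px * math.sin(math.radians(tilt_deg))

    print(f"top-edge shift for a {tilt_deg} degree tilt: {shift_px:.2f} px")  # ~1.3 px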
The text explains the sizes of the groups: 175 (online), 25, 15, 15. [I'm not sure if these numbers are from before or after eliminating the subjects who didn't understand the task.]
Looking at the graph, my objection is that the three results where the "face" was looking at the rectangle are too consistent. They are .64±.23, .67±.30 and .63±.26. The difference between the three averages is only .04, but the standard deviation is about .25 in each case. And with 15 subjects you don't reduce the deviation of the average very much, so these values look fishy.
The other five results look more natural: the standard deviations are around .20-.30, and the means are scattered randomly over a range of similar width.
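A rough sanity check of that "too consistent" intuition, under my own assumptions (n = 15 per group, a per-subject SD of .25, a common true mean of .65, and normally distributed scores):

    import math
    import random

    n, sd = 15, 0.25
    se = sd / math.sqrt(n)           # standard error of each group mean
    print(f"standard error of a group mean: {se:.3f}")   # ~0.065

    # How often would three independent group means land within a .04-wide
    # range, as the three "looking" conditions did (.63, .64, .67)?
    random.seed(0)
    trials = 100_000
    hits = 0
    for _ in range(trials):
        means = [random.gauss(0.65, se) for _ in range(3)]
        if max(means) - min(means) <= 0.04:
            hits += 1
    print(f"P(range of 3 means <= .04) ~ {hits / trials:.2f}")  # ~0.1

So agreement that tight should happen only about one time in ten by chance under these assumptions; suggestive, though hardly damning.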
Are they reporting all the experiments they ran, or did they run a lot of experiments and cherry-pick four of them?
Also, in the figure, "The asterisk indicates D significantly greater than 0, P < 0.05", not that the difference between the two measurements is statistically significant. In particular, in the third graph the error bars are too close to each other.
I always used to find it difficult to walk through a crowd of oncoming people. I'd inevitably get caught in a "dance" with somebody as we tried to figure out which way the other person was trying to go.
My problems were greatly alleviated once I figured out this one weird trick for navigating through pedestrians: stare a path through the crowd. People will subconsciously move out of your way. It's as if they pay more attention to your eyes than the actual direction your body is moving.
A relative of mine asked their psychologist about their feeling that they kept dodging and bumping into people while walking in crowded areas. They were advised to look up at a point above the oncoming people, in the direction they were heading; after trying it, people were suddenly getting out of their way as they walked.
The reason this works is that the oncoming person thinks you have not seen them. If someone is coming directly towards you and you think you have seen them before they have seen you, it makes sense that you should move out of their way rather than wait for them to eventually notice you. Also, when you do move, you are guaranteed that they will not move in the same direction (because they would need a little extra time to spot you and perform the evasive maneuver).
I don't think this observation necessarily supports the extramission theory of vision described in the study, though. You could achieve exactly the same result by looking at your feet, covering your eyes, or wearing a blindfold; anyone coming towards you is still going to get out of your way.
Except that this technique has worked for me after I look at a person too. We make eye contact, then I look to their right and they automatically pass on my left. I think it's more subconscious than you're making it out to be.
I used this technique to cross the street in Vietnam, oncoming motorcycles look where they’re going, and they trust that you will too. Looking at an oncoming motorcycle just results in both of you coming to a stop. Guess the ski instructors were right: you go where you look.
You look where you want to go -- same with driving a car or bike. If you look elsewhere too long you'll run into something/someone.
What is really noticeable is that some people look at the floor in front of them. They stand out in an oncoming crowd because they never flow around obstacles in their path; they walk straight at them and turn 0-4 m before collision.
This. There is a term for failing to look where you want to go, the result of which is going where you are looking: target fixation. It's a frequent cause of motorcycle crashes, and I see it quite a bit at the track. A rider gets scared, and starts to look at something off track - the grass, a rock, etc. Panic sets in, they are unable to avert their gaze and are drawn, as if by magnetism, into the very thing they would most like to avoid.
Another variation of "look where you want to go" is "look through the crash". If a couple of people get tangled up in front of you, don't watch the crash! You'll likely join it. Look through the crash, to where you want to go.
I can't upvote this enough. I learned that trick on this very site long ago and it works wonders. I just stare straight ahead past the crowd and they part like the Red Sea.
The last leg of my commute has me walking against the massive flow of people from a rail station so it's served me very well.
> In the fifth century BC, Empedocles postulated that everything was composed of four elements: fire, air, earth and water. He believed that Aphrodite made the human eye out of the four elements and that she lit the fire in the eye which shone out from the eye making sight possible. If this were true, then one could see during the night just as well as during the day, so Empedocles postulated an interaction between rays from the eyes and rays from a source such as the sun.
P.S.: I really find it fascinating that, provided we could bring these ancient thinkers to our times and teach them the discoveries science has made since they died, they would still hold their initial views in some way. This is obvious with what modern physics calls the 'atom' (etymologically: 'not divisible'), and I could see Empedocles saying, "At least I was not totally off track when it comes to what sight is, psychologically."
I saw a study a few years back (not necessarily a rigorous one) in which a surprisingly high proportion of the public believed this was how the eye worked. But, IIRC, they had no response to "so why can't we see as well in the dark?"
There's also the following from the Gospel of Thomas (verse 26):
Jesus said, "You see the sliver in your friend's eye, but you don't see the timber in your own eye. When you take the timber out of your own eye, then you will see well enough to remove the sliver from your friend's eye."
You're misreading "sliver" as "silver" here. Some people have also translated these as "mote" and "beam".
The idea is something like that people are judging others with harsh standards that they can't apply to themselves, so they will notice or point out a very tiny problem related to another person, but not notice or point out a big problem related to themselves. It's a metaphor about having objects of different sizes stuck in one's eye and then a different activation threshold, so to speak, for acting on it depending on whose eye it is. This is akin to several kinds of cognitive bias.
The previous commenter mentioned that this saying appears in the Gospel of Thomas but it's much better known in this form from the Gospel of Matthew, which is considered canonical in Christianity.
I’d say this is because the concept of a “stare down” is instinctive to us primates (or even mammals in general). That is, asserting dominance by staring at others is something we’re born with, and for good reason: our ancestors’ survival hinged on it when confronted with someone bigger or stronger. That’s why you look down when you’re anxious or ashamed. Canines and felines do the same.
So it’s just natural that people subconsciously transfer this dominance-via-staring to inanimate objects as well, because it comes from a very ancient mammalian instinct.
> That is, asserting dominance via staring at others is something we’re born with, and for good reason: our ancestors’ survival hinged on this when being confronted with someone bigger/stronger.
This explanation is circular. Trying to stare down an enemy who doesn't already care about staring-as-dominance is a losing strategy; they'll just attack you. It doesn't explain how staring-as-dominance came about.
Staring is a precursor to attacks; animals which responded to other animals getting ready to attack them by posturing as if they were ready to counterattack avoided many fights.
Over time, this interaction was offloaded into specialized hardware analyzing gaze -- the one more interested in fighting was the one who stared longer, and that interaction became associated with dominance in social groups.
This was accelerated as the system became more advanced, and more gestures could be recognized as implicit confrontations (and their resolutions).
If you're about to get in a fight, it's important to use the center of your field of vision, because that's the only part that's worth a damn.
For a quadruped, you can see the head and forelimbs. You get a lot more information from the head.
I was taught, when sparring, to 'loosen' my vision and look a little lower, roughly between the pectoral muscles. But that's bipeds; there's more to track. The stare-down pre-fight behavior was already well-established, and still matters for quadrupeds, many of which are dangerous, so there's been no evolutionary pressure to change it.
>Even though eye beams do not exist in reality, and even though most people do not intellectually believe in them, they may exist as a part of the rich, implicit social model that we naturally apply to seeing agents.
It makes sense that people would attempt to reconstruct a model or theory of mind from scant cues based on others' gazes. In many contexts, such as encountering strangers, you don't have anything else to work with, so the emphasis is on leveraging what you can.
It's also the case that even if eye beams or extramissions have no empirical reality, the concept of them can still be utilized in representations of others' minds. It reminds me of Zizek's interest in the "reality of the virtual": even things that aren't empirically real but are merely conceived can have real-world impacts.
The linked paper actually refutes this number based on the experiments described. It suggests the number is closer to 5% of adults who knowingly believe in some sort of emission theory, directly arguing against the Winer et al. finding.
Raytracing demonstrates that, although in our physics and physiology the rays travel into the eye rather than out of it, the 'visual ray' theory could be correct in some other hypothetical world: tracing rays outward from the eye produces exactly the same images.
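As a toy illustration (a minimal sketch of my own, not any particular renderer): a raytracer literally implements extramission, firing "visual rays" out of the eye and asking what each one touches.

    import math

    def ray_hits_sphere(origin, direction, center, radius):
        # Solve |origin + t*direction - center|^2 = radius^2 for t > 0.
        oc = [o - c for o, c in zip(origin, center)]
        b = 2 * sum(d * o for d, o in zip(direction, oc))
        c = sum(o * o for o in oc) - radius * radius
        disc = b * b - 4 * c             # a = 1 since direction is unit-length
        return disc >= 0 and (-b - math.sqrt(disc)) / 2 > 0

    eye = (0.0, 0.0, 0.0)                  # the "extramitting" eye
    center, radius = (0.0, 0.0, 5.0), 1.5  # something to look at

    for y in range(5, -6, -1):
        row = ""
        for x in range(-5, 6):
            d = (x / 5, y / 5, 1.0)        # one visual ray per "pixel"
            norm = math.sqrt(sum(v * v for v in d))
            d = tuple(v / norm for v in d)
            row += "#" if ray_hits_sphere(eye, d, center, radius) else "."
        print(row)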
The ancient Chinese knew this phenomenon as one of the manifestations of 'qi', the life force; specifically, the form that best aligns with what can be described as 'intent'.
Which brings up the interesting idea that qi may be phenomenologically "real", insomuch as it's something our brains perceive, even if it has no basis in physical "reality".
Alright, I'm generally sorta amused by those ideas popping up in the brain that minuscule interactions are somehow vaguely important, like “don't step on cracks”, though I keep wondering if they're just remnants of a child's imagination or more of a symptom of some peculiarity.
But this one takes the cake.
Until now, Piotr Kamler's surrealistic, otherworldly cartoons were the closest thing (for me) to these ideas of the brain that the world functions by different laws. Animations like ‘Chronopolis’ and ‘The Ephemeral Mission’: https://www.youtube.com/watch?v=me5UUK37Vc4
I'm not sure what you mean, I don't stop seeing when I stare at something.
Yes, the sharpness obviously drops away from your central vision, yet you can still see with a FOV well over 180°. (Not sure if it's actually 190°, so I edited my question.)
The sharpness drops off really fast, so you can reasonably imagine the actually-focused FOV as a narrow beam when projected. Also, you don't really just "stare" at something unless it's far away. For objects of interest larger than 1-2 degrees of your FOV, your eyes have to move a lot, really fast, to continuously scan them. You don't notice your own eyes moving, though. See https://en.wikipedia.org/wiki/Saccade.
(Also, what you "see" is in large parts made up by fusing the "real" visual input with knowledge and expectations you have in your head.)
It indeed seems I have much better peripheral vision than most people, so you might be right. As I wrote in the other post, I have tested that I can definitely see color in my peripheral vision.
Random color generator. The precision goes down in the far periphery (the colors usually look more saturated than they are), but the basic color is right. In the near-to-mid periphery (~45°), the colors seem normal.
Good work. Have you included randomised brightness, to avoid accidentally learning the difference between what the computer thinks is constant brightness and what your eyes perceive as constant? (Such as “green seems brighter than red”.)
If it had been HSV(rand, 1, 1), which you didn’t do, I would anticipate accidentally learning to map subjective brightness to hue. But you avoided that entirely, so no matter :)
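For anyone wanting to replicate the test, here's a minimal sketch (my own construction, not the parent poster's actual tool), with both hue and brightness randomized per the suggestion above:

    import colorsys
    import random
    import tkinter as tk

    # Fixate the central cross, press any key for a new patch, and try to name
    # the patch's color without moving your eyes. Brightness is randomized too,
    # so you can't infer hue from perceived brightness alone.
    root = tk.Tk()
    canvas = tk.Canvas(root, width=800, height=400, bg="black")
    canvas.pack()
    canvas.create_text(400, 200, text="+", fill="white", font=("Arial", 24))

    def new_patch(event=None):
        canvas.delete("patch")
        hue = random.random()                   # random hue
        val = random.uniform(0.4, 1.0)          # random brightness
        r, g, b = (int(255 * c) for c in colorsys.hsv_to_rgb(hue, 1.0, val))
        x = random.choice([60, 740])            # far left or far right of fixation
        canvas.create_oval(x - 40, 160, x + 40, 240,
                           fill=f"#{r:02x}{g:02x}{b:02x}", outline="", tags="patch")

    root.bind("<Key>", new_patch)
    new_patch()
    root.mainloop()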
Focus the center of your visual attention on something for a few minutes, and you will indeed stop seeing. In reality you think you are seeing continuously because your brain timeshifts your consciousness during your eyeball's saccades.
Try it. Stare at some spot on the wall. If you do it for a few minutes, you'll notice that the rest of your field of vision fades to black. Not to total black, but you'll see worse and worse.
the most noticeable effect of doing this (at least in my experience) is that you can't pick out the shades of colours in your periphery after a while. You'll see a dull or pastel colour, turn your eyes to look directly at the object, and find it's actually brightly coloured.
The brain fills in what it expects in your peripheral vision. For example, you can see colors in your peripheral vision, but it is only an illusion made in the brain.
You can see in your peripheral vision, but not clearly, and not colors. This is because the cones in the eye are all in the center, and they are responsible for most of your daytime vision. Around the cones are rods; they are used for night vision and motion detection.
The colors you see in your peripheral vision are not real, but an illusion made by the brain.
I know it's less clear, but there are colors; I have tested that with a random color generator. The colors are imprecise in the far periphery, but the basic colors can still be seen.
I guess my vision may indeed be abnormal, as somebody else above suggested, since I don't get the blind spot in my central vision in the dark either.
As for the sound, I think you just learn to hear the sine waves as speech. I tried converting part of an audiobook (one I hadn't listened to before) and I can understand it a little bit.
I used praat with this script (it doesn't seem to like long sounds): http://www.mrc-cbu.cam.ac.uk/people/matt.davis/sine-wave-spe...
pretty sure that is exactly how magic (tricks) works.
you do not see it all. you do see / are aware of the model of the FOV you have constructed and are maintaining piece by piece; sleight-of-hand tricks seem to work by holding your focus on updating one part of the model while altering another part.
You’re 100% right, and it's why many magicians paint a single nail, use an extraneous prop, or have other little tricks to draw attention. In addition to an element of theatricality, it’s a necessary part of anything involving sleight of hand.
You don't stop seeing but your visual attention typically goes hand in hand with what you focus on, except in a few surprise cases of change picked up by your periphery, or your conscious effort to look straight ahead but focus on something peripheral.
Humans are extremely good at analysing the focus of human-like eyes (where a person/animal/robot is looking). This says nothing about any internal theory of how vision works -- a 2-year-old can tell where one is looking, yet has no explicit theory of vision (and mind). It's mechanistic, not perceptual.
While, due to the overall reproducibility crisis in psychology, I wouldn't put one dollar on this, it would make some sense.
A person (or a dog) could at any moment start pushing/punching/biting in the direction it is looking, so that's important to keep track of. In a sense, visual attention precedes physical force.
I think it's more likely to be a component of bootstrapping intelligence when you're a baby, by having built-in algorithms telling you what those around you are looking at.
I've seen that people are quite good at tracking the gaze of others for mating purposes. Who is looking at whom and how they are looking is all quite interesting and informative.
it's probably very, very important, evolutionarily, to know if the lion is looking at you or at the gazelle 5 meters to your left. (and also very frustration-inducing for people who have strabismus)
how so? if you are talking about males, it doesn't matter who the females are looking at; the strongest male will fight or dominate the other males and get the females anyway. if it is females, well, the females don't really need to compete for male attention. a look from a potential predator, on the other hand, is an immediate and lethal threat.
that's probably a more logical explanation than "people envision an imaginary beam of force". first, they'd have to control for that by including people who have no cultural knowledge of laser beams.
The history of theories of vision contains repeated cases of "extramission" hypotheses, even if you don't believe that 50% of adults now subscribe to a folk "emission" theory of vision.
The article actually claims that "One of the most common extramission beliefs is that people can “feel” someone else’s gaze as a pressure or heat on the skin." so I don't think your original point (that people were more likely to be tracking gaze to anticipate action) corresponds to the facts. Now, the claim I've quoted could be false, but it's not presented as a hypothesis, but as an explanandum.
...So, if we're going to put on our evolutionary biology (AKA bullshit) hats, this allows your brain to abuse its 3d modeling circuitry (which is incredibly advanced) to figure out what people are looking at. I bet that "let's put a tiny force in and see what it pushes on" was way easier and higher-quality than some special thing dedicated solely to tracking people's attention. It's an amazing hack.
Would you mind explaining why you think explanations in the style of evolutionary biology are bullshit?
I find your comment very enlightening, and I sense some kind of self-irony that tickles my curiosity about your views. I bet you have some insightful thoughts about explanations coming from evolutionary biology and how one should take them, and I'd be very interested in hearing them.
I think that I wrote that comment a bit late at night and was thinking about evolutionary psychology, so I'll offer an apology to the evolutionary biologists in the audience. Sorry. :/
The majority of evolutionary biology is good hard science. The knowledge we gain from studying historical trajectories through genome-space has immense real-world value. As a concrete example, inspecting historical speciation events allows us to tighten the bounds we have on various constants in the dynamics that govern genetic change when implemented using Earth-pattern biochemistry (CGAT DNA etc), and that knowledge in turn helps us reduce bias and disentangle causes from correlations when we're trying to nail down genetic causes of disease.
There is a segment of evolutionary biology which is... not such good science. This is where you get attempts to explain specific high-level features of biology or human behavior using equally specific features in historical environments. Things like "Women are better at seeing colors because it was their job to look for fruit when we were hunter-gatherers." Works in this domain, if they pretend to have an underlying theory at all, will offer something vague and nonspecific that reveals no bigger picture or more general principle, or will offer an underlying theory that totally fails to generalize. Falsifiable hypotheses or useful predictions can be few and far between. You can't use it to reduce error in any experiment nor to improve the efficiency of any tool or product. It is, in short, bullshit.
I put my comment above closer to the bullshit end of the spectrum basically because I took some inappropriate shortcuts, I think because I was trying not to be bored or boring. Most of the issue was that I anthropomorphized evolution to assign intent, and thereby attempted to present a "why" that implies that alternatives were considered or chosen between. That kind of thinking, in the context of evolutionary biology, is playing with fire. I'll hand an explanation of that off to an essay that I think can explain it better than I possibly could: https://www.lesswrong.com/posts/pLRogvJLPPg6Mrvg4/an-alien-g...
You are definitely right. But remember that 3D modeling circuitry evolved a long time ago, because it arises from non-visual senses as well, such as the localization of auditory events in space. So say these "beams" are already coming out of apes' (not even people's yet) ears, listening for something, like a noise in the jungle. Then, when those apes develop something key to detecting the human voice, like the superior temporal sulcus, suddenly all our interest is in what sounds come out of other people's mouths, so our ear beams go to their mouths instead of, say, scanning the habitat, the room, the jungle, whatever it is, for something dangerous that will eat us. I think it's plausible.
Given how frequently people can tell when someone is looking at them from behind or at a distance, the book on attention as a material force is anything but closed.
The linked paper says that the empirical accuracy of this feeling was disproven over 100 years ago:
> One of the most common extramission beliefs is that people can “feel” someone else’s gaze as a pressure or heat on the skin. In the late nineteenth and early twentieth century, Titchener (14) and Coover (15) showed that, although people may believe in eye beams, no such beams exist; people cannot actually detect the gaze of another in the absence of specific sensory information.
> 14 Titchener EB (1898) The feeling of being stared at. Science 8:895–897.
> 15 Coover JE (1913) The feeling of being stared at. Am J Psychol 24:570–575.
Have you come across later studies that suggest it does have a statistically verifiable reality?
I'm more inclined to believe that our senses are especially on the lookout for attacks from behind, so if we catch subtle cues that someone or something is behind us that we weren't previously aware of (subtle sounds, air disturbances, the diversion of other people's eyes to look at it) we're more likely to turn around to see what it is. Some of the time, the person we sensed behind us is looking at us, but we disproportionately remember those times because it feels more weird and memorable.
Ever noticed how, especially in group or public settings, when attention is diverted away from them, people often seem to start shifting their pose or moving their fingers, hands, or legs? I wonder how deliberate those movements and actions are.
At around the time of the "Walking around NYC" video, there was a Feminist Frequency video about the "Male Gaze." Basically: men looking at women harms them, and we should avert our eyes, like peasants in the presence of the Emperor's entourage in Imperial China.
That’s an impressively warped hodgepodge, and as far as I can tell totally divorced from reality. The whole “Male Gaze” concept is drawn from cinematic gaze theory and the bowels of psychoanalysis, and it doesn’t remotely mean what you take it to mean. It also exists in the context of the feminine gaze, oppositional gaze and many others. It’s an overwrought construction, but that’s also a reflection of how old it is and its roots in even older ideas. There are some interesting points to be made, especially in cinematic analysis using gaze theory, but that’s about all.
One thing is for sure though, it has nothing to do with averting your eyes or harming people by looking at them. Please don’t use what amounts to intellectual clickbait to springboard utterly milquetoast rants about “PC culture.”
While I’m sure that out of billions of people it’s trivially easy to find a handful of nuts taking almost any position, it’s dishonest to tar everyone else with that brush.
Not just a few out of billions, but throngs of people on social media, including acquaintances and friends of friends all willing to jump down your throat, as if asking questions or any nuance is an immediate sign of deplorability.
The intellectually honest response to being confronted with the reality of just how far off the mark you were
There is some kind of dramatic disconnect here. There was plenty of weird toxicity on the other side which you seem oblivious to. Its volume is quite a bit louder than your voice and your message, and it's very much the message many received.
> Basically, men looking at women harms them, and we should avert our eyes
This is not what "male gaze" means. The idea of a male gaze, broadly speaking, is that the way women are portrayed in visual media (films, painting, photography, video games) caters to a male, heterosexual audience over other audiences, even at the expense of other aspects of the art. I think Lindsay Ellis does a good job of explaining this with examples [1]; particularly striking is the way Megan Fox as Mikaela Banes in Transformers is shot to maximise her sex appeal (to a straight male audience), so that even though the script ostensibly establishes Mikaela as savvy and competent, all that most viewers remember is that she's eye candy [2].
Heck, just pick up any book about portrait photography and you will see the same thing: male poses are typically steadfast; female poses tend towards vulnerability (geometrically weak, camera looking down at the subject, shoulders not square with the frame) and sideboob.
Male bust portraits tend to cut off at the shoulders, female bust portraits cut off at the cleavage.
It’s just the way it’s done and nobody questions it.
> Since the eyes obviously do not really extrude a beam of energy, this view is typically dismissed by science.
If you start with an assumption, the end result will probably be compatible with the assumption.
Rupert Sheldrake wrote a book titled The Sense of Being Stared At [0]. I recently found a copy at a thrift store (it's on my stack of books I've yet to read). Dr. Sheldrake does not adopt materialist science's assumptions.
Perhaps the eyes do not exactly emit invisible energy, but something is probably happening that deserves to be investigated.
Here's a quote from Rupert Sheldrake's book (pg. 125):
> [anecdote of a girl's experience of catching a creeper following her]
> In frightening situations such as this, the sense of being stared at is particularly memorable. But most people have experienced it for themselves, usually in less dramatic circumstances. In my own surveys of adults in Europe and in the United States, 70 to 90 percent said they had sensed when they were being looked at from behind. Surveys by other researchers have given similar results. Gerald Winer and his colleagues in the Psychology Department of Ohio State University have found that these experiences are even commoner among children than adults. Ninety-four percent of sixth-grade schoolchildren (aged eleven and twelve) answered yes to the question, "Do you ever feel that someone is staring at you without actually seeing them look at you?" So did 89 percent of college students.
> The sense of being stared at is often alluded to in short stories and novels. "His eyes bored into the back of her neck" is a cliché of popular fiction. Here is an example from Sir Arthur Conan Doyle, the creator of Sherlock Holmes: [...]
> We proposed (9) that a simplified model of vision may be related to a belief that is extraordinarily persistent across human cultures: the belief that the eyes emit an invisible energy. (emphasis added)
[edit: perhaps the belief is persistent because there's something to it.] Simplified models are one thing; simplistic models are unhelpful.
Discussion question: In my experience, I've noticed a few times when I've been stared at, and I've stared at people who've spontaneously looked right at me. Do you have any memorable experiences of this phenomenon?
Perhaps people are most likely to notice they're being stared at when the starer is the opposite gender?
I guess I'm going to have to read Dr. Sheldrake's book.
How do you control for just happening to look when someone else happens to look, or worse, for attracting looks when you check for someone looking?
Here's a common interaction, which would create that feeling without anything at all underlying it:
1. A person is sitting at a table, with me sitting at a table behind them -- both of us facing the same way, ie, me facing their back.
2. I was staring a few feet to the side of the person -- zoning out in space, and not looking at anything besides the wall pattern.
3. That person has a transient feeling someone is watching, and turns to look.
4. The motion of their head turning causes me to leap to alert, as there's suddenly motion in my peripheral vision -- I move my eyes to focus on the motion, and resolve what it is.
5. We make eye contact, validating their initial belief.
6. The other person has taken a step to condition themselves, similar to Skinner's dancing pigeons. Repeating this interaction over decades leaves them believing in a sense they don't actually possess.
Add in some bias in what people remember, eg the times they make eye contact instead of the many times no one was looking, and it's a powerful way to end up confused about the world.
I'm not sure how you'd suss this one out without an extra interaction which necessarily confounds the results (eg, an extra observer -- but since we're testing observer effects, that's messy and a half).
> How do you control for just happening to look when someone else happens to look, or worse, for attracting looks when you check for someone looking?
I'm sure you could figure out an experimental protocol. The book I referenced, The Sense of Being Stared At, presumably applies mathematics and statistics to the experiments. But I haven't read it, and I don't care enough at the present time to really dig into the text and respond to your whole comment.
Chapter 11 is titled "Experiments on the sense of being stared at". Page 168 begins the discussion of Titchener and Coover's papers. The present PNAS paper's authors say these papers supposedly "showed that [...] people cannot actually detect the gaze of another in the absence of specific sensory information" [quote from the submission].
Sheldrake says, "Titchener's paper was very influential, and was widely cited by skeptics for more than 100 years, even though he said nothing about the actual experiments except that they were negative." Coover's paper was discussed a bit, then "By reinforcing Titchener's negative conclusions, Coover's work seemed to put an end to the matter from a scientific point of view. His paper was published in 1913."
Someone supposedly re-did Coover's experiment in 1939, and found the results statistically significant.
If you're actually interested in the subject, the ebook version of Sheldrake's book is $12 or so. Otherwise you're just fighting with a straw man [1].
People can feel "stared at" over a telephone call, so I'm not sure what that's supposed to show about eye beams.
But importantly, it's not an assumption that eyes don't emit a force of a hundredth of a newton. We have tested it, and you can test it yourself by glancing at a piece of paper. That nonexistent pushing force is what the paper is about, not feelings of being stared at.
The paper is an exercise in rationalism (aka materialism) to "explain away" people's actual experiences.
> We have tested it,
Please specify who this 'we' refers to.
People feel stared at all the time. There may be no physical "pushing force", but "you all" need to come up with a better explanation than the rationalists' standard trope of "this phenomenon we all experience all the time? well, it's all in our heads". Quantum entanglement, whatever: anything is a better explanation than what's offered by this paper.
Millions of people have looked at things that are dangling in the air, and they haven't moved.
> There may be no physical "pushing force", but "you all" need to come up with a better explanation
I don't need to explain the sense of being stared at if I'm only interested in the perceived pushing force.
> People feel stared at all the time.
A feeling isn't proof unless you can show that the feeling correlates with actual staring. It's absolutely normal to have false feelings sometimes. Sometimes people feel hot or cold when the temperature hasn't changed. Sometimes people perceive crowds differently based on what mood they're in. We're social creatures and the idea of being looked at can be creepy even when nobody else is in the room, or entire building.
Obviously this doesn't disprove anything. But it demonstrates that the mere existence of a feeling isn't proof that something outside your body caused it.