Things we didn't know about ourselves (kk.org)
64 points by kaycebasques 9 months ago | 51 comments



Hmmmm...

I don't think the invention of social media shows people prefer attention to privacy. This is a false dichotomy. Privacy doesn't mean not getting attention, it means not having aspects of your life that you want to keep private revealed.


Yeah, how is he counting all the people who aren't exposing themselves for attention over privacy? Seems like he's looking for his keys under the streetlight.

Maybe what this tells us is that even deep thinkers aren't great at grappling with large numbers.

edit: Not to mention a lot of social media presence is 100% performance. It's not really an invasion of privacy, as it has very little to do with the private life of the creator.


One thing I find surprising about the whole Apple Vision phenomenon:

There has been a ton of hype around "you can work in VR and it feels like a real place" and "you can have tons of screens!".

But that being said, the below article about using Meta Oculus VR goggles to "work from space" came out over TWO years ago: https://medium.com/immersedteam/working-from-orbit-39bf95a6d...

Plus, that software even works on Linux!

What is going on now that it seems like the rest of the world suddenly discovered a feature of the previous generation VR goggles?

Is it just "because Apple"?

Is it better marketing? (e.g. are more influencers involved)

I would be curious to hear HN's take on this.


> What is going on now that it seems like the rest of the world suddenly discovered a feature of the previous generation VR goggles? Is it just "because Apple"? Is it better marketing?

It takes more attention to notice that you can combine a niche gaming device with a random 3rd party application to do work in VR, than the attention it takes to notice that Apple did a thing. Everyone, even people that aren't typically Apple consumers, would notice that Apple released a new product.

It sorta reminds me of the PS3. It came out at the same time as Blu-ray players and was capable of playing Blu-ray movies as a bonus feature. While the PS3 cost $600, the Blu-ray players cost $1,000+, and people still purchased the players. For people that were looking for a device to play movie discs, it took too much attention to notice that a gaming device offers that and much more for much less.


The Apple Vision is 3.5k - way too much, but I'm watching the videos of it. I can see how it's showing me the future.

I never saw a useful demo from Oculus of virtual screens fixed in real 3D space, i.e. both fixed relative to my head position and anchored in the space of other rooms, with realtime content... What I saw from the Oculus marketing was VR games. I don't care about those (although I have coworkers who do).

More monitors of arbitrary size help me as I've gotten older. Years ago, I worked in an office instead of at home. Remote-first attitudes have shifted how I work and how I'm able to conceptualize the things I want to get done.

The Oculus has weird hand grippers. I don't want to carry those. Give me my mouse and keyboard for interaction and build for that.

I noticed because it was marketed better and at the right time for my age group.


What does that add over just having screens hanging around?


I don't have to carry screens or hang them; I can put them on a wall. Putting a virtual shopping list on a traditional fridge has my wife talking about possibilities. I can add art, virtual pictures, etc. These virtual digital devices don't break randomly (only the single headset can break). Being able to resize screens when your eyes start to go is one thing, but having screens encircle you, without the hardware (and the expertise to set it up) that it would normally require, is something I've always wanted... it's like tabs but better.

I can probably think of more in another 5 min. That being said, I still won't pay $3,500 for a helmet, even when it enables these things. We will wait.


> What is going on now that it seems like the rest of the world suddenly discovered a feature of the previous generation VR goggles?

> Is it just "because Apple"?

To some extent yes - Jobs built a company that was less about innovation and more about polishing and packaging existing tech for the masses. This isn't bad - Apple has made some great stuff, and they have innovated some, but a lot of the innovation is in UX not tech.

Things apple "invented" that had been around for a while:

* the mouse

* the GUI

* MP3 players

* the smart phone

* the tablet

* virtual desktops

* video calls

And plenty more I've since forgotten. We used to joke about Steve Jobs' Reality Distortion Field in reference to this phenomenon.


Hard disagree with mouse, GUI, virtual desktops and video calls.

The mouse hasn't changed in 40 years.

Windows 3.1's GUI beat the pants off Apple's.

Virtual desktops - maybe, but I still don't know anyone who uses this and not really sure most people know it exists.

Video calls - No idea what Apple has to do with this space at all unless you mean FaceTime? I don't think that was really a game changer.


Hence me saying: apple "invented" (rather than saying apple invented). People credit apple with inventing all those things. They invented none of them.

Also worth noting - the mouse was invented over 50 years ago and has basically been the same since; the biggest change was consumer devices moving from a ball to a laser. It was 41 years ago that apple first released a product with a mouse. From the wikipedia page on the mouse:

> However, the mouse remained relatively obscure until the appearance of the Macintosh 128K (which included an updated version of the single-button[54] Lisa Mouse) in 1984


> Windows 3.1's GUI beat the pants off Apple's.

It's been a really long time since I used Windows 3.1 or System 7, but I'm not seeing it; in what way was Windows better?


I dunno, the Mac Plus was there several years before Windows 3.1, and a lot more consistent. You must be confused...


Oh, I looked into that. The conclusion at the time was that the resolution was far too low to compete with the 27" 4K display on my desk. (I'd love to be able to travel and work remotely without lugging my big display, but my productivity takes a blow when restricted to just my laptop display.)

From what I'm reading about the Apple Vision Pro, it's not there yet either.


The avg consumer doesn't even know what Linux is.

The avg consumer distrusts Facebook more than Apple.

So yes, the most popular tech company + marketing.


I haven't followed the AV much, but I did mess around with using Oculus for work a fair amount, including trying that app and others. tl;dr, it's not super great.

A few random reasons:

- The Oculus goggles become uncomfortable and sweaty fairly quickly

- They use Fresnel lenses and (I think...) foveated rendering, which means you only get a sharp view straight in front of you: not where your eyes are looking, but where your head is pointing. So looking at the other screen, or even scanning text, means moving your head.

- The awareness of your surroundings is basically zero, so it's easy to "lose" your mouse. Or, in my case, I was using a wireless keyboard and could misplace it. There is a passthrough mode, but the resolution is garbage and it's annoying to get in and out of (in theory you can tap the headset; in practice it works maybe 50% of the time).

Probably some other stuff I'm forgetting...

Anyway, all that said I think there's potential that AV could do it significantly better. No idea if they actually did do it better, though.


This KK (Kevin Kelly) guy seems super cool. I just browsed the rest of the site. Turns out he wrote that "New Rules For The New Economy" book back in the 90s. Pretty much every post on The Technium that I've read has been insightful. He's even got a section of the site just dedicated to showcasing how people "reappropriate" i.e. actually use technology: https://kk.org/streetuse/


He's also one of the founders of Wired, and all of his books are at least intriguing, if not always persuasive (to me). And his long-running Cool Tools blog is worth reading as well.


He's a pretty prolific writer. He also started Wired, maybe has a podcast too? At the least he has been a guest on what seems like a million of them. Just running out the door so I can't fact check myself sorry. Thought you might like some factoids though. Have a swell day!


Also involved with the https://en.wikipedia.org/wiki/Whole_Earth_Catalog back in the day...


I think you might enjoy his 90s book Out of Control: https://kk.org/books/out-of-control/


I think it was Aral Balkan who said we're like cyborgs now with our smartphones. I remember him saying that in a talk, and the idea resonated with me. The key difference is that it's not a chip in our brain, but a device we hold and treat like a pacifier. I think we need a new term for smartphones though. I don't know the stats, but who makes plain telephone calls anymore when we have WhatsApp etc? A smartphone is really just a small tablet that happens to have a baseband (that people rarely use apart from cellular Internet).


They are kind of like pacifiers for adults, aren't they? The term "fondleslab" captures that; I believe I first read that moniker years ago in some article on the Register.


It is shocking how fast we’ve become totally dependent on smartphones. I remember getting lost on my college campus, having to print out maps and stuff. And I’m not really that old.


If you had no idea what a smartphone was but were observing human behaviour from afar, we would seem freakishly coordinated, as if we had telepathy or a hive mind.

Imagine watching from space as drivers detour around an accident ahead almost like they had foreknowledge of conditions they can’t have seen yet…


> who makes plain telephone calls anymore when we have WhatsApp etc?

Um... I do? And almost everyone I know?


It's an interesting point of view, working backwards along the "chain of causality" instead of the usual "technology changes us": saying that "technology makes us discover new things about ourselves", i.e. we didn't change, it was always there. It's unclear to me which one is more correct, or to what extent both would be correct.


This seems to be written by someone not very familiar with history. Though in a different form factor, we have had small TV screens since TV was invented. People called for a Video iPod before we had iPhones because we wanted to watch video on a small screen on the go.

We have had stereoscopic viewers for a century or more, way before any VR existed. I also remember the Virtual Boy from the mid-90s, and many people could see the potential for that technology back then.

We have had drama queens/kings for as long as there have been people who preferred attention over privacy. Social media might make it worse, but it isn't new.


My iPhone at a comfortable viewing distance is about equivalent to my 42" monitor at 8 feet. Details will change depending upon the size of your phone, but that's better than a lot of people's TV setup.
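A rough back-of-the-envelope check of that claim (a sketch in Python; the ~5.5" landscape phone width, ~14" viewing distance, and ~36.6" width for a 42" 16:9 monitor are my assumed numbers, not the parent's):

  import math

  def angular_width_deg(screen_width_in, distance_in):
      # visual angle subtended by a flat screen viewed head-on
      return math.degrees(2 * math.atan(screen_width_in / (2 * distance_in)))

  # assumed: a ~6" phone is ~5.5" wide in landscape, held ~14" away;
  # a 42" 16:9 monitor is ~36.6" wide, viewed from 8 ft (96")
  print(angular_width_deg(5.5, 14))   # phone   -> ~22.2 degrees
  print(angular_width_deg(36.6, 96))  # monitor -> ~21.6 degrees

With those assumptions the two screens subtend roughly the same visual angle (about 22 degrees), which is consistent with the comparison above.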


42” at 8’ is smaller than a typical TV setup in the US, I think? So his point still stands in strict size terms.

But also, the experience of watching something close up is qualitatively different: yes, focus and accommodation, but also relative motion if it's handheld, and just the perception of size. (Same FOV, but you "know" it's small and close rather than big and farther away.)

I think Kevin Kelly's larger point is that our needs are either more plastic than we thought or there are wrinkles in the sorts of things we're willing to accept, and that's sort of interesting and fun to think about.


> 42” at 8’ is smaller than a typical TV setup in the US, I think? So his point still stands in strict size terms.

I'm guessing that, in general, the people who are using their phones as TVs are not the ones with 42" 4K televisions and a 8' living room or, if they are, they're kids getting the privacy to watch what they want in their own rooms.


What if humans had chromatophores like cephalopods?

Body language has been around for a good long time

Detailed spoken language has been around for a less long time

If we had chromatophores what would language be like?

We train children to speak, and speech is very important. Some people learn to draw, paint, etc.

We train children to read and write, and convert written words to speech.

---

I apologize if my words are not clear. I have to unfocus my mind a little bit to see my point myself.

Being able to record and send pictures and video in near real time is very new to humanity.


Ambiguous language is important in dating and politics and other endeavours. Would a chromatophore-communicating species use textures at different scales to appear as if it were saying one thing to others far away, while actually saying a second thing to those who were nearby?


Cuttlefish already do that! Well, not at different distances, but they can send conflicting messages with different sides of their body.

In particular, scientists have observed male cuttlefish displaying "I'm a male, let's mate" signals at a female, while simultaneously sending a false "I'm a female, court me" coloration on the opposite side to distract a rival.


  Hey baby,              I'm soooo
  gotta light?    /\     / drunk!
              \--ᔦꙬᔨ---/
                 /||\


"I am pretty sure that we did not know that we humans much prefer personal attention to personal privacy."

I think it's possible that many people want both, each in different areas of life. It's not necessarily an either/or.


His site reminds me of the thing I occasionally mention to restaurant managers: "Your customers probably can't make out the text size on your menus."


> I am pretty sure that we did not know that we humans much prefer personal attention to personal privacy. Until we invented the technology of social media, we thought we naturally favored privacy over attention, but we were also wrong about that.

I don't know that the classical assumption was that people preferred privacy as a whole. People who lived as hermits have generally been seen as eccentric and abnormal, though sometimes admirable.

But with the internet pushing the possibility of living a private life to its extremes, the unsuitability of it for most people becomes more obvious.


As I understand it, cultures that were poor enough that animals and people all lived together (and, in cold climates, shared heat) in https://en.wikipedia.org/wiki/Longhouse structures had almost no privacy (hence the popularity of "rolls in the hay" in English or "going for a walk" in Russian?). The people who built later structures seem to have nearly universally preferred individual rooms, with open-dormitory-style arrangements reserved for those who had little choice: military, hospitals, etc.


"But the smartphone — a small pocketable screen – was not at all expected. It was a complete surprise because no one thought it would be possible to engage with such a tiny screen. It was a shock to everyone (including me) that a screen smaller than my palm would be enough to watch a movie, or read a book, or get your news."

Fictional depictions of a tablet form factor appear to go back to Asimov's original Foundation in 1951.

"Isaac Asimov described a Calculator Pad in his novel Foundation (1951); Stanisław Lem described the Opton in his novel Return from the Stars (1961)."

https://en.wikipedia.org/wiki/Tablet_computer#History


This is something I think we as a society have not fully grappled with:

> “I am pretty sure that we did not know that we humans much prefer personal attention to personal privacy. Until we invented the technology of social media, we thought we naturally favored privacy over attention, but we were also wrong about that. We found out that when given a choice people prefer to reveal themselves to get personal attention rather than the obscurity of privacy.”

I’ve been interested to read some contrarian thinking by Venkatesh Rao (Against Waldenponding <https://studio.ribbonfarm.com/p/against-waldenponding>) and others, about the ways this species-wide preference could/does benefit us, and how we might work with it rather than try to fight it.


I think it's easy to misread the data in this case. People fall into two camps: those who draw attention to themselves and those who prefer not to. The latter camp is under-represented online precisely because representation would mean drawing attention to themselves. So, counting just the online denizens introduces significant bias in the data.


> A kind of selfish free-riding/tragedy of the commons: not learning to handle your share of the increased attention-management load required to keep the Global Social Computer in the Cloud (GSCITC) running effectively.

It's much more sinister than that. In what I consider my best moments, I want to destroy the damn thing.


You want to destroy...the Global Social Computer in the Cloud?

If I understand you correctly, then just to clarify: the author disagrees with you, and considers you to be the selfish free-rider in that sentence.


Yes. I do not want the Global Social Computer in the Cloud to exist. I think it makes humans worse at being human.


>The fact that everyone alive on our planet is now connected electronically is not a surprise.

A shockingly inaccurate statement right from the start. I stopped reading after this sentence.


It's not shockingly inaccurate. It's not quite literally true -- of course there are some people who are not connected -- but it's very close to being true. Even in places of extreme poverty, in extremely underprivileged or remote areas, most people have access to a cellphone, even if they may not own it themselves. Indeed, in many parts of the developing world, even in extreme poverty, a cellphone is often the number one way people pay for goods.


> but it's very close to being true.

[citation needed]

I brought my own.

https://en.wikipedia.org/wiki/Internet_access#Digital_divide

https://en.wikipedia.org/wiki/Digital_divide


Is there a citation there that's less than a decade old? In the Digital Divide section in the first link, there was one citation from 2015, but most citations were 2007-2011... Really not relevant to the discussion.

Also, the meaning of "unconnected" varies widely. Many of the citations in the second link aren't referring to basic cell phones. E.g., the 2017 (which, again, is seven years old) Wireless Broadband Alliance paper defines unconnected as "an unconnected individual was defined as an individual who does not have access to or cannot afford broadband connectivity."

The huge proportion of the population that I know in, say, Kenya, who use WhatsApp and use cell phones to pay for goods, do not have broadband. But the quote in question was "connected electronically." These people are definitely connected electronically.



That page cites ITU [1], and the ITU page doesn't cite their sources at all. It may be their own numbers, but I don't see any link to a real study or methodology.

They also don't define connectivity, but the section is titled "connectivity/broadband" so I assume that, again, they're only referring to broadband connectivity.

1. https://www.itu.int/en/mediacentre/Pages/PR-2023-09-12-unive...


It is inaccurate; the degree to which it is inaccurate does not really need debating, given that the opening statement of the post calls it a fact.

And no, there are many remote areas where this is still untrue; many people are still disconnected from what we are talking about.

It is very odd that this point is actually being argued.



