Yeah, this came out during the last few weeks of my time in high school. It is a major reason I got into computer science and became a programmer. Good Times!
Wow, thanks for sharing, that really takes me back. I was so hyped when I saw this as a kid, my dad and I made a mount for my glasses with two IR LEDs and a battery. I remember that I was super impressed with the effect.
I also went to Maplin (the UK's RadioShack) and bought some infrared LEDs to hack together something to achieve the same effect. In the end I just taped the Wii sensor bar to my glasses!
After watching his videos, I went out and bought an IR pen so I could mimic his digital whiteboard. I think over the years the Bluetooth stack changed, so I could no longer pair it with Windows.
I tried implementing this with face detection-based head tracking after that demo (or maybe before; I can't remember). I got it working but the effect was very underwhelming. It looks great in that video, but it kind of sucks in real life.
I think the problem is in real life you have an enormous number of other visual cues that tell you that you're not really seeing something 3D - focus, stereoscopy (not for me though sadly), the fact that you know you're looking at the screen, inevitable lag from cameras, etc.
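For anyone curious how the face-detection version works, the core of the head-coupled-perspective trick is just pinhole-camera geometry: the face's apparent size in the webcam frame gives distance, and its offset from the frame centre gives lateral position. A minimal sketch of that mapping (every calibration constant here is a made-up assumption for illustration, not a value from the demo; the bounding box would come from any face detector, e.g. OpenCV's Haar cascades):

```python
FACE_WIDTH_CM = 15.0  # assumed real-world face width (calibration guess)

def head_position(face_x, face_y, face_w,
                  frame_w=640, frame_h=480,
                  ref_face_w=100.0, ref_dist=60.0):
    """Map a face bounding box (pixels) to an approximate head position
    (x, y, z) in cm relative to the camera axis. ref_face_w is the face
    width in pixels observed at ref_dist cm -- a one-off calibration."""
    # Effective focal length in pixels, derived from the calibration pair.
    f_px = ref_face_w * ref_dist / FACE_WIDTH_CM
    # Apparent size is inversely proportional to distance.
    z = f_px * FACE_WIDTH_CM / face_w
    # Lateral offsets: pixel offset from frame centre, back-projected.
    x = (face_x + face_w / 2 - frame_w / 2) * z / f_px
    y = (face_y + face_w / 2 - frame_h / 2) * z / f_px  # assumes square box
    return x, y, z
```

The (x, y, z) output would then drive an off-axis projection matrix. The lag problem mentioned above lives entirely outside this math: any camera pipeline delay shows up directly as the scene "swimming" behind your head movement.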
I can't view the videos because of their stupid cookie screen, but I wouldn't be too excited about this. The camera lag especially is probably impossible to solve.
This (and several of his ideas) is the reason I value simple solutions so much in my work, along with optimising for low cost. "If Johnny Lee can do this crazy thing on the cheap, I can think of something creative too."
Thanks for posting, I was sure I recalled something like this from a long time ago. I also built myself a FreeTrack headset (https://en.wikipedia.org/wiki/FreeTrack) around the same time to play the Arma / Operation Flashpoint games, using IR LEDs attached to a hat that my webcam would track.
Came here to say the same. I remember playing around with this back in the day, but using two candles instead of the sensor bar. Yes, it works. No, it’s not a good idea to hold candles that close to your hair.
You're correct, but when the worst the ChatGPTisms get is turns of phrase like "LeetCode youth finally paid off: turns out all those "rebalance a binary search tree" problems were preparing me for salami, not FAANG interviews." or "Designing software for things that rot means optimising for variance, memory, and timing, not perfection. It turns out the hardest part of software isn't keeping things alive. It's knowing when to let them age.", then I'm inclined to forgive it compared to the far more egregious offenders at the top of HN these days. This is a rather mild use of ChatGPT for copyediting, and at least I feel like I can trust OP to fact-check everything and not put in any confabulations.
If you were talking about some essays I wrote in the early 2000s, you’d be buttering your Stetson. It’s hilarious to me that several of my blog posts from 20 years ago have been called out as AI generated lol.
I agree. I've written like this too, but these days when you see it it's more likely to be AI.
I actually think if I were writing blog posts these days I'd deliberately avoid these kinds of cliches for that reason. I'd try to write something no LLM is likely to spit out, even if it ends up weird.
There's some hair splitting here: one may have to reverse engineer the mappings of the CAN bus for your car if you want to port the Comma to your platform, but I do believe the second clause is correct: it's a MITM system and does not monkey with the onboard ECM at all. Unplug the device, give the car to the dealer for service or updates, plug the device back in.
True in my experience. Back in 2019 I got a model-year-2020 Honda Civic hatchback that was not yet supported. So as not to break the car, I purchased an OEM steering motor for it, dumped the firmware, and then found the data necessary to support the car. I've been using Comma since then. Other comments are correct about it being an assistant: you are the captain, now thinking more strategically about the road and vehicles ahead, while the Comma handles the tactical work of keeping between the lines. The brain relief from all the mostly automatic constant correction is huge on long road trips. It's got a good driver-attention model as well, so keep your eyes on the road. I will say a pair of glasses frames is sufficient to fool it, though.
This either
1) assumes a homeomorphism between rationality and ethics,
or
2) is technically true but missing the point. Akin to saying: "Human deaths via a tsunami aren't a 'bad thing'; they're a natural phenomenon."
I looove this site. This helped me plan a Trans-Siberian/Mongolian train trip almost 15 years ago. So many people I met on the trains used Seat 61, too. An absolute classic.
> Minimum height in the stairway isn't part of the specification
Uh, yes it is? It's in most building codes, as well as in the IBC (International Building Code).
Architects don't deal with structural soundness; that's a misconception. That's the domain of the structural engineer. The architect IS the generalist.
It's 2030 and deepfakes run rampant. This is a problem: political deepfakes, fake celebrity porn, CSAM, fake revenge porn, etc.
Apple builds on their Spatial Photo feature, and their newest smartphones support POR, or Proof of Reality.
Proof of Reality: data from multiple cameras and a LIDAR sensor is stitched together to generate a 3D depth map plus colour map that can validate that a photo or video was shot without post-production manipulation. This processing is done on-chip, in a Secure Enclave-like chip, and the result is cryptographically embedded in the resulting photo. Each raw capture starts as the first block of a blockchain; further images or videos created from this raw data are understood to be downstream of this first block.
A piece of Proof-of-Reality media, edited together from multiple clips or images, can be cryptographically verified as being composed of individual Proof-of-Reality media. Like a Merkle tree.
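To make the Merkle-tree idea concrete: each source clip would carry its own POR hash, and an edited video's manifest would commit to the ordered list of clip hashes via a single Merkle root, so tampering with any clip breaks verification. A minimal sketch (the clip bytes and manifest format are hypothetical, not any real Apple API):

```python
import hashlib

def h(data: bytes) -> bytes:
    """SHA-256 as the tree's hash function."""
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Fold a list of leaf hashes up to a single root hash."""
    if not leaves:
        raise ValueError("empty tree")
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# A composed video verifies only if every clip hash checks out AND the
# recomputed root matches the root signed into the manifest.
clips = [b"clip-A-raw-bytes", b"clip-B-raw-bytes", b"clip-C-raw-bytes"]
leaves = [h(c) for c in clips]
manifest_root = merkle_root(leaves)
```

The appeal of the tree over a flat hash list is that a verifier can check one clip's membership with only O(log n) sibling hashes, which matters if a long video is stitched from many POR captures.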
Apple pioneers the first fully Deepfake-proof media workflow. Consumers can watch news media or social media while being cryptographically assured that it wasn't AI-generated.
2031: Proof of Reality (POR) starts to catch on with the public. Samsung gets on the bandwagon and develops its own version (or joins a POR consortium). Soon 40% of media is POR-validated, following the usual smartphone and OS update statistics.
2032: A particularly egregious deepfake scandal from a non-POR source drives the rush towards POR standardization. Apple and/or the POR consortium partners begin to produce more professional-level POR camera equipment. Content blockers that block non-POR media are developed.
2033: Certain social media websites begin to place warning labels on non-POR media. POR media makes up 70% of all news and social media.
2034: News media companies fully switch over to a POR workflow. Browsers start adopting non-POR labels for content, like Twitter's 'Community Notes'.
2035: Deepfakes as we know them are mostly hidden from the public eye, but continue to evolve and change in unexpected ways...
I love it! Though I am not an expert, I can at least see this.
However, I hate to be a nitpicker, but I think this is solving a separate issue. I don't think the issue is about authenticating the legitimacy of deepfake porn. Rather, the mere existence of it is the issue.
That is, people don't care that it's fake. People won't buy Apple's Proof of Reality because they don't want reality.
2036: Due to increasing amounts of deepfake CSAM, the US Congress passes a law against nonconsensual deepfake porn, requiring "websites of a sexual nature" to be POR-compliant or be shut down. Porn web companies, ISPs/hosting providers, and credit card processors alike are legally liable.
Pornhub welcomes this change with cheeky 'PORnhub' branding, but the reality is that the change is necessary, or they will be sued into oblivion.
Prosumer platforms like OnlyFans welcome POR validation with open arms, because it bolsters their image of authenticity. Exploiting the ban on deepfake porn, "softfake porn", where celebrity look-alikes create porn, becomes mildly popular.
2037: Eventually, it's the ISPs, hosting providers, and credit card processors that instigate the change. Much like SESTA/FOSTA's impact on sex workers in the early 2020s, payment processors and ISPs refuse to work with POR-unvalidated porn sites. Eventually, porn sites shift towards POR compliance and create new niches.
Of course, underground deepfake porn still exists, if you know where to look. But by now its association with CSAM makes it very inaccessible and disdained.
Gone are the days of rampant deepfakes in the late 2020s and early 2030s. Mainstream media and politics call this a success, but a minority are angry, saying that deepfakes are a creative act, and that the effective ban on POR-noncompliant material is a further restriction on creative liberties...
2030: Deepfakes are rampant, causing significant issues in politics, entertainment, and personal privacy. However, instead of technological solutions, there is a growing trend of regulatory capture. Large corporations and governments begin to argue that deepfakes are an inevitable part of the digital landscape. The cost-effectiveness of creating deepfake content compared to traditional media production becomes a significant talking point.
2031: As deepfake technology becomes more sophisticated and cheaper, it starts to replace traditional media production methods. Major studios and media companies lobby for and receive regulatory approval to use deepfakes as a legitimate form of content creation. This shift is justified by the reduced cost and logistical ease of using AI-generated characters instead of real actors.
2032: A scandal arises involving a particularly damaging deepfake, but instead of driving a push towards authenticity verification technologies, it leads to further normalization of deepfakes. The argument is made that since distinguishing between real and fake content is increasingly difficult, society should adapt to accepting deepfake content as a new norm.
2033: Social media platforms and news outlets begin to openly embrace deepfake technology, citing cost reduction and the ability to generate more engaging content. Traditional media actors and creators are increasingly marginalized, with deepfake creators dominating the market.
2034: Regulatory bodies, heavily influenced by big tech and media conglomerates, begin to actively promote deepfake content. New regulations make it easier for deepfake content to be produced and disseminated, while traditional media production is bogged down by increased costs and regulatory hurdles.
2035: The public gradually accepts deepfakes as the primary form of digital content. Traditional media, with real actors and genuine locations, becomes a niche market due to its higher production costs and complexity. Deepfakes evolve in unexpected ways, permeating every aspect of digital media and blurring the line between reality and AI-generated content.