Epilepsy Protection for VRChat (github.com/lessevr)
40 points by Kye on Sept 1, 2024 | 11 comments


For those who don't know, epilepsy just means you've been diagnosed with a condition that causes seizures. Photosensitive epilepsy (which is what this project seems to be aimed at) accounts for ~3% of all epilepsy diagnoses.

https://www.epilepsy.com/what-is-epilepsy/seizure-triggers/p...


Humanity has barely figured out how to run safe text-only online spaces. Spaces with images and video are far harder. Spaces with fully interactive, user-generated 3D content are nowhere near safe, and that's not even counting that things like VRChat and Roblox are scriptable.

Each escalation of content type brings a significant increase in the complexity of moderation, and things like this show that it's not just moderation to protect people from being offended; it's moderation to keep people physically safe. We need to take this a lot more seriously as an industry than we currently do.


Then there's stuff like this: https://buttplug.io



I've heard a lot of similar concerns from people who don't have much experience with VR. The general assumption is that the display format is so immersive that depictions of offensive content will somehow leave a greater impression than usual.

In my experience, this doesn't happen. I find the content equally disturbing (or not), regardless of whether it's on a phone, desktop or headset.


Isn't this epilepsy protection a counterpoint to that? An epileptic response isn't a risk of reading a blog post, and even watching a video online is a much smaller and more controllable exposure; if the video is pre-recorded, it's probably straightforward to detect flashing automatically.


I read OP's comment as more generally focused on disturbing content:

> ...to protect people from being offended...

Obviously, if you have an impairment that's directly affected by how light enters your eyes, it's a different story.


Anyone know of any algorithms to detect and quantify temporal light changes (as trigger for photosensitive epilepsy) with minimal latency?


Apple have this built into their OS (https://developer.apple.com/documentation/mediaaccessibility...) and publish the implementation (https://github.com/apple/VideoFlashingReduction).

There are a few other implementations out there but this is about as low latency as I’ve found, and takes into account several factors.


I was thinking about this... it's not my area of expertise, but need it be more complicated than sampling a population of random pixels, converting from RGB to a colour space with a lightness measure (like HSL), and computing the difference at every frame?
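
Roughly what I mean, as a naive sketch in Python. The sample count, the HSL lightness formula, and the threshold are all made-up illustrative numbers, and frames are assumed to arrive as HxWx3 uint8 RGB numpy arrays; this is the idea, not a vetted detector:

    import numpy as np

    N_SAMPLES = 1024        # random pixels to probe per frame (assumption)
    FLASH_THRESHOLD = 0.20  # mean |delta lightness| (0..1) to flag (assumption)

    rng = np.random.default_rng(0)

    def lightness(rgb):
        # HSL lightness: L = (max(R,G,B) + min(R,G,B)) / 2, scaled to 0..1
        rgb = rgb.astype(np.float32) / 255.0
        return (rgb.max(axis=-1) + rgb.min(axis=-1)) / 2.0

    def frame_delta(prev_frame, frame):
        # Probe the same random pixel positions in both frames and
        # average the absolute lightness change between them.
        h, w, _ = frame.shape
        ys = rng.integers(0, h, N_SAMPLES)
        xs = rng.integers(0, w, N_SAMPLES)
        return np.abs(lightness(frame[ys, xs])
                      - lightness(prev_frame[ys, xs])).mean()

    # usage: flag any adjacent pair of frames that jumps too much
    # if frame_delta(prev, cur) > FLASH_THRESHOLD: dim_the_scene()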


It's more than just a frame-to-frame brightness difference. AIUI, epileptic triggers depend more on the frequency of change over some period of time, as well as on the absolute brightness difference.

You'd have to measure changes over a longer window and possibly predict into the future.

Maybe you could get away with detecting single-frame brightness changes over a threshold and sort of smearing the change out over a period of time, similar to those awful temporal shaders so many games use now.
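
As a rough sketch of the frequency-over-a-window idea (Python): count bright/dark reversals of the per-frame mean luminance over a rolling one-second window. The swing threshold and the bookkeeping are illustrative assumptions; the "no more than three flashes in any one-second period" limit is the WCAG 2.3.1 guideline, which counts a pair of opposing changes as one flash:

    from collections import deque

    FPS = 90      # e.g. a VR headset's refresh rate (assumption)
    SWING = 0.10  # min luminance change (0..1) to count a reversal (assumption)

    class FlashCounter:
        def __init__(self):
            self.lums = deque(maxlen=FPS)  # one second of history

        def update(self, mean_lum):
            """Feed one frame's mean luminance; True means unsafe."""
            self.lums.append(mean_lum)
            reversals, direction, ref = 0, 0, self.lums[0]
            for lum in self.lums:
                if direction >= 0 and ref - lum > SWING:
                    reversals, direction, ref = reversals + 1, -1, lum
                elif direction <= 0 and lum - ref > SWING:
                    reversals, direction, ref = reversals + 1, +1, lum
                elif (direction > 0 and lum > ref) or (direction < 0 and lum < ref):
                    ref = lum  # extend the running extreme
            return reversals // 2 > 3  # two reversals = one WCAG flash

    # usage sketch: if counter.update(frame_mean_lum): fade_to_grey()

In a real renderer you'd presumably compute the per-frame mean luminance on the GPU (downsample the frame rather than read pixels back at 90 Hz). Predicting forward is the hard part; a counter like this only reacts once the window already contains the flashing.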



