Simula: A VR window manager for Linux (github.com/simulavr)
372 points by georgewsinger on April 9, 2020 | 176 comments



In case people are curious what it's like to work in VR, here's a demo of me using Simula: https://youtu.be/FWLuwG91HnI

I would argue that working in VR is fundamentally superior to working on PCs/laptops. It's basically 10x the screens, focus, and work immersion. Simula's text quality is also very good (getting around the eye fatigue present in older VR Desktops).


Well, looking at the video, I mean: I can have dozens of workspaces with hundreds of windows (if I need to) and navigate them by moving just my fingers, whereas you are apparently breaking your neck to navigate more than 3 windows. I think what you're doing is cool and has to be done for the sake of exploration, but I would really, really focus on navigation. Also, how are you supposed to focus on text when reading or writing if it's wiggling all the time? It seems to me that VR really is for games, but for working with text I think humans are better off with stationary screens.

On second thought: if you made it easier to "fix" a window in place, so that one can focus on it and work (as when you are "placing" a new window in your video), that could make it a workable solution.


I haven't used VR devices much, but I believe the wiggling you see in VR demos is not present when actually using the device. The device is simply adjusting the perspective to match your head motion, so the actual user doesn't see any wiggling; it only appears that way to someone watching the demo.

But I agree that the wiggling is off-putting. I think VR demos should low-pass filter the output to some degree to give a more realistic perception of what VR is like.


I wonder if a separate camera for the pancake view with a larger FOV would help, with the translation (e: and rotation) matrix low-pass filtered from the user's orientation.


Not sure what you mean by pancake.

But I feel it's mostly rotational motion that is perceptible in VR demos rather than translational motion. I suspect that's because a small rotational motion results in a large shift in perspective for objects in the world that are some distance away from the camera, while small translational movements don't really result in a significant perspective shift.

Would be interested in seeing what research into human head motion while sitting/standing/walking/etc. shows though.


Pancake is the 2D view observers see when they're looking at the monitor (i.e. what gets recorded).

Yeah, both rotation and translation would be filtered in the idea I had, so that small wiggling motions aren't as visible.
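
A minimal sketch of the filter I have in mind: an exponential moving average over the spectator pose, applied per frame. The Pose record and Euler angles here are just for illustration (a real version would slerp quaternions, since a naive angle lerp breaks at the ±180° wrap):

    -- Smoothed spectator pose; alpha in (0,1]: smaller = heavier
    -- smoothing (less wiggle, more lag behind the wearer's head).
    data Pose = Pose { posX, posY, posZ, yaw, pitch :: Float } deriving Show

    smooth :: Float -> Pose -> Pose -> Pose
    smooth alpha prev cur = Pose
      { posX  = lerp (posX prev)  (posX cur)
      , posY  = lerp (posY prev)  (posY cur)
      , posZ  = lerp (posZ prev)  (posZ cur)
      , yaw   = lerp (yaw prev)   (yaw cur)    -- caveat: ignores angle wrap
      , pitch = lerp (pitch prev) (pitch cur)
      }
      where lerp a b = a + alpha * (b - a)

    -- Fold the raw per-frame headset poses into a smoothed spectator stream.
    spectatorPoses :: Float -> [Pose] -> [Pose]
    spectatorPoses alpha (p:ps) = scanl (smooth alpha) p ps
    spectatorPoses _     []     = []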


I've seen "motion smoothing" options for VR spectators in some games - not entirely sure that's what they do, but I assume it is.


> It seems to me that VR really is for games, but for working with text I think humans are better off with stationary screens.

Have you actually tried VR? The fact that you think there's wiggling makes me think you haven't. Yes, videos make it look that way but I can guarantee you that's not reflected in the actual experience. VR is notoriously bad to watch on video and doesn't capture the experience at all.


There is no wiggling. The wiggling you see in videos is your head movements not being in sync with the wearer’s head movements. You’re doing it right now; your brain is just anticipating and compensating for it.


Actually, the problem is the low resolution and FOV of VR headsets. VR requires at least a 72Hz framerate at 4K-ish resolution, which works out to roughly 2K per eye at 72Hz over a ~90° FOV in stereo, and that’s hard already even for moderate gaming PCs.

As for pinned windows: a fixed HUD is a no-no in VR games because it burns into the same spot of your vision. When a HUD is desired, players are usually given a virtual or symbolic helmet that allows the display to lag behind head movement.


Navigation is one of the things we definitely need to improve upon.

I think the wiggling is actually kind of natural -- you don't notice your head or eye movements when you do them automatically, but it might be exacerbated in VR.

The fixing is an interesting idea, we'll try it out. I'm a bit worried it might feel weird in VR.


Yes, there is a thing called corollary discharge in the human visual system that keeps the world steady as one glances around (in the real world). Changing the direction one is looking causes the image to shift rapidly on the retina, yet perceptually we see a world that is not wiggling.

Corollary discharge[1] causes the brain to “subtract” out the motion of the image on the retina. It happens when our nervous system instructs the eyes to look in another direction. You can have someone else move your eyes or even do it yourself by pressing gently on the side of the eyeball with a fingertip on top of the lid. Wiggling one eye this way is not supported by the hardware of the eye and produces a very noticeable wiggly view of the real world.

I mentioned the subtracting-out of motion going on in the visual system. Experiments suggest that something like this is happening: if a muscle-paralyzing drug is injected into the eye’s muscles, it causes a wiggly world, because the intent to glance somewhere issues an ineffective command to move the eye, producing no real movement of the image on the retina, while the triggered corollary discharge still subtracts the intended motion from the unmoving image, causing a perceived motion in the opposite direction.

[1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2807735/


YouTube has support for VR videos. If you could record & render such that people with headsets could watch a session, that might help with interest and adoption.

I think navigation needs eye tracking, which sadly no headset currently supports. Focus-follows-gaze would be a game changer.


Uploading a YouTube VR video of a work session is a potentially fantastic idea. Thanks for the suggestion.

Right now, Simula uses "dumb" eye tracking, in the sense that windows receive keyboard and cursor focus when the user's forward eye gaze intersects a window. We also have it so that users can control the cursor focus with their forward gaze (presently bound to `Super + Apostrophe`); similarly, users can drag windows around by holding `Super + Alt` and looking around. The experience adds up to something quite productive once you learn all the keyboard shortcuts (your fingers don't need to leave the keyboard).
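
For the curious, the geometry behind the gaze focus is plain ray-rectangle intersection. Here's a toy sketch of it (this is not our actual code, and the Window record is simplified):

    import Data.List (minimumBy)
    import Data.Ord (comparing)

    type V3 = (Float, Float, Float)

    sub, add :: V3 -> V3 -> V3
    sub (a, b, c) (x, y, z) = (a - x, b - y, c - z)
    add (a, b, c) (x, y, z) = (a + x, b + y, c + z)

    dot :: V3 -> V3 -> Float
    dot (a, b, c) (x, y, z) = a * x + b * y + c * z

    scale :: Float -> V3 -> V3
    scale k (x, y, z) = (k * x, k * y, k * z)

    -- A window: center, unit normal, in-plane unit axes, and half-extents.
    data Window = Window
      { center, normal, axisU, axisV :: V3
      , halfW, halfH :: Float
      }

    -- Distance along the gaze ray to a window, if the ray hits its front face.
    gazeHit :: V3 -> V3 -> Window -> Maybe Float
    gazeHit eye dir w
      | denom >= 0      = Nothing   -- window is edge-on or facing away
      | t <= 0          = Nothing   -- window is behind the viewer
      | abs u > halfW w = Nothing   -- hit point is outside the rectangle
      | abs v > halfH w = Nothing
      | otherwise       = Just t
      where
        denom = dot dir (normal w)
        t     = dot (sub (center w) eye) (normal w) / denom
        local = sub (add eye (scale t dir)) (center w)
        u     = dot local (axisU w)
        v     = dot local (axisV w)

    -- The focused window is the nearest one the gaze ray intersects.
    focusedWindow :: V3 -> V3 -> [Window] -> Maybe Window
    focusedWindow eye dir ws =
      case [ (t, w) | w <- ws, Just t <- [gazeHit eye dir w] ] of
        []   -> Nothing
        hits -> Just (snd (minimumBy (comparing fst) hits))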


I bought a Vive Pro Eye specifically to try and develop eye tracking navigation for a system like this. I haven’t done any actual digging yet bc it turns out the NDAs for their SDK are...unfortunate. Definitely an opportunity for a free/open alternative.


Virtual Desktop already has many customizable features that allow you to work the way you're describing. I've worked in it before, and it was very immersive, though there was of course some fatigue.

You can have just one window at a time, and with a good headset, the wiggle is not significant. You can even adjust it so that your workspace follows you when your head turns.


RE eyestrain: here are a couple of links showing off Simula's text quality:

https://github.com/SimulaVR/Simula/blob/gdwlroots-xwayland/d... https://github.com/SimulaVR/Simula/blob/gdwlroots-xwayland/d...

I haven't used Windows' Virtual Desktop since the early days, but I suspect our text enhancements have improved upon the situation dramatically (when keeping hardware constant).


An interesting potential for VR would be alternative portable form factors. The laptop form factor is mostly dictated by the shape and size of the screen; since with VR the screen is moved to the HMD, this would no longer be a restriction.

The webcam view is also interesting; if it were "inverted" so that, instead of being confined to a window, it was outside of all windows (that is, the background "wallpaper"), it would work even better as a laptop replacement (that is, the screen doesn't cover your whole view, allowing you to keep aware of your surroundings).


We do actually plan to design standalone portable devices as a longer-term business plan.

The AR idea is really interesting. We'll try replacing the background with "reality" as seen from the HMD cameras.


If I could contribute an idea: I wonder if one could make the camera watch your surroundings with a people recognizer that would notify you if someone is approaching with the intent to interact with you, and maybe show a notification or an approaching avatar. It would be neat in an office scenario. The next step would be a screen in the real world telling your visitor whether you're busy or interruptible, and a button the visitor can push to "knock on your door".

So that's the future, maybe: cubicle dwellers get their own VR offices...


This seems like the job for another program that you run simultaneously, not the window manager. Unless there are problems with having multiple programs make use of the camera or something.


Yeah, this should be a separate tool. There are some issues with multiple access, but I bet they're solvable.


I love the idea of a VR workspace. There are so many limitations in our current workflows and tools that mainly just arise from being restricted to a small flat surface. Even just having your perfect setup anywhere you go would be amazing. All you need is a headset and a small portable cube computer - basically a laptop without a screen.

---

But after watching your video, I feel like this really needs some tiling window manager (i3 etc) inspired way of easily creating workspaces with tiles, and effortlessly navigating between them. With relatively minimal head movement.

Also, does Simula support curved displays? It would seem very natural to arrange different curved screens around you instead of flat ones, which also potentially waste quite a bit of useful space.


Yes, tiling functionality is something we're considering as a future feature. In fact, the shortcut system is inspired by it.

I'll put curved windows onto our todo list. Not sure if they're a good idea for all applications but at least some would benefit.


After using curved ultrawide monitors like the Philips 499P9H, I think a gentle curve is not an issue for most applications, and feels very natural.

I won't get to try out Simula until Sunday, but based on the video, my intuition says the apps being curved would look more natural / immersive.


I use a curved monitor (Samsung CRG9) myself as my daily driver, and it appears curved when I use desktop view in SteamVR, so I definitely see the appeal. Just not sure how nice it looks on smaller windows.


I would not be able to work like that; it would lead to neck strain or wrist pain. If you think about it, it comes naturally to reposition your body when you are looking in a new direction, and there is a reason for this. This hurts me because the keyboard locks the shoulders and it's only my heavy head that moves; this is not ergonomic.

So being able to fetch a window so it's perfectly centered above the keyboard would make it easier for me to use.


Maybe there should be a harness to suspend us by one ankle while we work


How is the eye fatigue in general? How long can you go in a VR work session?

For VR gaming (Oculus Rift), I start to experience mild discomfort after about 30min, which grows severe by 60min. It's hard to envision using VR for a full work day. (My eyes are fairly light-sensitive from laser-eye surgery; not sure how much is saccading, vs. bright light directly in front of the eyes.)


RE eye strain, here are a couple of links showing off Simula's text quality:

https://github.com/SimulaVR/Simula/blob/gdwlroots-xwayland/d... https://github.com/SimulaVR/Simula/blob/gdwlroots-xwayland/d...

Many early VR Desktops (and VR games) didn't go to special effort to optimize text quality, and I think this has given people a lower than warranted impression of what is possible in VR today. There's no doubt that VR hardware is going to get exponentially lighter/sharper over time, and that these improvements will be welcome, but things are good enough now to put in 1hr+ long sessions without eye strain (at least in my experience as a daily user).


It really depends on the task - I (currently) have an OG Vive, and it's absolutely fine for games with good anti-aliasing (no discomfort for me for basically any length of time, although I mainly play Beat Saber and you don't really focus on the blocks beyond blinks of light and muscle memory), but it's hopeless if you want to watch a movie, let alone program. The kind of programming I tend to do is generally fiddly enough that I've got a book open or some kind of datasheet etc. (I think even good VR would be a hindrance here.)

I think the Index might be OK but I couldn't comment.


I can't speak to working with text but I could easily play games all day on a Vive with no eye strain.


Productivity use for VR is way more compelling to me than any game I’ve ever seen. Maybe that’s why it’s never taken off. We probably need to have an email client people want to use in VR and an IDE before we worry about gamers.


Totally agreed. Our project's "secret": most people think that the future of VR is in games & entertainment, but it's actually in office work.

I never would have even purchased a VR headset had it not been for the promise to start working on a productivity environment. But before we can start building the killer VR office apps, we have to get the basic 2D apps working crisply/perfectly, IMO.


I came here to say this. Never in my life have I wanted a VR headset. I've used others' to play games (Beat Saber in particular is really fun) but I don't have time in my daily life to allow for a regular gaming habit, or the inclination to spend that kind of money on one.

In 2017, I had a conversation with a co-worker, where we talked about how badly we wanted a VR desktop environment with infinite screen-space. There's VRDesktop, but that isn't this. And then again the same conversation last year with another friend.

After seeing the demo video, I was compelled to go look at Vive prices (~$1,000, ouch).

I feel like my productivity would skyrocket with this, mostly due to lack of background distractions. Imagine wearing noise-cancelling headphones. I want this so bad.


Is it possible to have a giant curved "screen"?

I feel we are just scratching the surface of what's possible. Yes, multiple (arbitrarily sized) windows are neat. But we can probably do more.

Even without any changes though, I just wish my VR headset was lighter. I can go more than 8h straight in VR... playing games. Not sure about working.

Will try this :)


> Is it possible to have a giant curved "screen"?

SteamVR does this, but I've found it less useful than movable/resizable windows.

> I feel we are just scratching the surface of what's possible.

Definitely. This is one of the first viable iterations, but there's lots and lots of improvement potential everywhere.

> Even without any changes though, I just wish my VR headset was lighter.

Yeah, agreed. But I think future hardware will improve on that.


Non-Euclidean UI


I love the little joke that you ran "top" in the window that you had to look toward the ceiling to see. :)


Have you considered using a mounted Leap Motion controller for interacting with the UI?

Would allow you to spin the windows around without having to spin yourself, for prolonged periods of "switching over to the right" for instance.

Could use a different gesture to move around the 3D space too, allowing you to navigate to different pockets of work.


Good idea! You could do that with a spare controller or a Vive tracker too.


It’d be awesome if you could do this natively on the Quest (sideloaded). Native gesture control, inside-out tracking, wireless, almost twice the resolution, and wouldn’t even need to be connected to a PC at all so could be used anywhere (with Bluetooth keyboard and mouse).

Have you thought about an Android/Quest version?


IIRC the Quest is too different, but our medium to long-term goal is to design a portable computer in HMD format, if that's feasible.

I'll take a look at Quest docs and see if it's doable, but I don't have high hopes for it.


Quest would be amazing because it would truly demonstrate a completely untethered, self contained portable full desktop system that you could take anywhere.

Having said that, getting even a standard desktop setup on Android to work well is iffy so doing it in VR does seem like a pretty immense challenge. But it would be a breakthrough in what it could demonstrate in terms of the concept.


Very good demo.

Now you need something like a knob or dial to rotate the background, so you don't need to move your chair or turn 360.


> It's basically 10x the screens,

Holy crap, I never thought of it that way!

Anyone know if this is possible with macOS?


There is [ImmersedVR](https://immersedvr.com/), but it's expensive and only compatible with the Oculus Go/Quest.

It can create extra displays, but you have to buy an Elite license for that.


I saw that you were hacking on... Haskell..? In that video?

Where does Haskell fit in with Godot and Simula?


SimulaVR (unrelated to the Simula language) is written mostly in Haskell, with Godot as the engine.


Cool demo. But wouldn't it be better (for your neck) to move the screen instead of moving your head? Or both? I guess sooner or later it's going to hurt the way it is implemented now.


Been thinking about VR workspaces since sometime in the '80s. Broadly, they suck. It seems so cool but in practice VR adds pointless overhead to efficient UI. Windows with affine transformations suck at their one job. Very few people use Second Life as an IDE.

The big win, as far as I can tell, would be to engage the user's spatial memory. (There is a small but non-zero niche for 3D visualization of complex systems: weather, large molecules, etc.) You're going to want to combine "memory palace" with "zooming" UI in a kind of pseudo-3D (I think of it as 2.618...D, but the exact fractal dimension isn't important, I don't think). Then infuse with Brenda Laurel's "Computers as Theatre"...

https://en.wikipedia.org/wiki/Second_Life

https://en.wikipedia.org/wiki/Method_of_loci

https://en.wikipedia.org/wiki/Zooming_user_interface

https://en.wikipedia.org/wiki/Brenda_Laurel - https://www.goodreads.com/book/show/239018.Computers_as_Thea...


They might suck right now, but this is a relatively nascent application of the technology.

Wait until resolution improves and we break out of the "desktop" paradigm. We could have a collection of unlimited windows and tabs that exist in a continuum around us, and we could use gestures to organize and surface the contextually relevant ones.

We won't need a bulky multi-monitor setup, and we could work remotely nearly anywhere. Imagine carrying your workspace with you.

> The big win, as far as I can tell, would be to engage the user's spatial memory.

Absolutely! Physical workspaces and work benches are incredibly functional because we are spatial animals. Breaking out of the limitations of using a screen could unlock more of our senses for use in problem solving.

I'm extremely excited about this technology. It will be great for software engineers, creatives (2d and 3d artists), mechanical engineering, CAD, ... you name it.

I really hope this keeps getting pushed forward. While I'm using all of my spare cycles on a tangentially-related problem domain, I'd be more than happy to donate money and ideas. This technology will be a dream come true if it continues to mature.


@echelon: This is exactly along the dimension we were thinking: VR as working environment for problem solving & creativity (via killer VR office apps that have yet to be invented). If you're curious, we have mapped out our long-term ideas in writing/deck format elsewhere. If you email me, I can send it over to you: george.w.singer [at] gmail.com.


> Wait until resolution improves and we break out of the "desktop" paradigm. We could have a collection of unlimited windows and tabs that exist in a continuum around us, and we could use gestures to organize and surface the contextually relevant ones.

You can get 90% there by just using multiple desktops IMHO, at least that's my experience.


I think in the mid-term, the big win would be the ability to have 4-10 large monitors, but with a cheaper, mobile, and compact solution, with the eyes focused further away.

Headsets have improved almost ~2x in resolution, have halved in price, and some have become wireless with better optics. A long way from 4-5 years ago; but add another doubling of resolution (or maybe more), an increase in wearing comfort (lighter, more compact), an improvement in wireless latency, and maybe a reduction from $400 to $300, and you’re looking at something that would be useful just as a replacement for multiple monitors.

Plus, probably, improvements in automatically registering where your laptop and mouse are. In principle that could be done with a software update to the inside-out tracking software.

Additionally, some improvement is possible at similar-to-current resolution with improved subpixel rendering and an RGB pixel layout.

Seeing what has been done with the Oculus Quest since I last checked out VR like 3 years ago has left me pretty impressed. A lot of this stuff with multiple windows in this demo could be done natively and wirelessly with the Quest (which runs a kind of Linux). The inside-out tracking is impressively good. If combined with an insert you can put your tracked controllers on, plus a Bluetooth mouse and keyboard (so the Quest can register their positions in 3D space and render them properly in-headset), it could give you a high-productivity workstation experience just about anywhere with WiFi (which could be through a phone). Hand-tracking (which works already) could even allow gestures, although I’m not sure how important that is. Can the Quest do subpixel text rendering like ClearType, but in 3D?


This. I can't see moving my head and making gestures with my hands ever beating tiled windows and good keyboard shortcuts. I don't want a VR enabled workspace (ie I don't want my windows to move when my head moves), but I might prefer a sufficiently high resolution headset to multiple monitors if the software support was reasonably seamless.


> and we could use gestures to organize and surface the contextually relevant ones.

Eight hours+ of using gestures a day sounds like hell.


Not if they're super minimal. I can imagine something less than moving your wrist while using your mouse.


I used to love tinkering with the kinds of interfaces showcased on Nooface, but I wouldn't ever stick with one for actual proper use.

https://web.archive.org/web/20111102002100/http://nooface.ne...


This is a fantastic list of old projects. Many of them might be re-envisioned in the VR era.


> Windows with affine transformations

There's no reason you can't make the windows always perpendicular to the line of sight while still fixing their centers in space.
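
Concretely, that's just billboarding: keep each window's center fixed and re-derive its normal from the viewer's position every frame. A toy sketch:

    type V3 = (Float, Float, Float)

    -- Point the window's facing normal back at the eye; the center stays put.
    billboardNormal :: V3 -> V3 -> V3
    billboardNormal eye center = normalize (sub eye center)
      where
        sub (a, b, c) (x, y, z) = (a - x, b - y, c - z)
        normalize (x, y, z) =
          let m = sqrt (x * x + y * y + z * z)
          in if m == 0 then (0, 0, 1) else (x / m, y / m, z / m)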


This actually used to be the default behavior in Simula a few months ago, and worked really well. There are a few use cases where it imposes a tradeoff though (specifically: it makes it hard to hold windows in a specific orientation in space when you might accidentally look at them), so we now use a key binding to do this instead.


> it makes it hard to hold windows in a specific orientation in space

I don't know if you're still watching this thread, but it would be interesting to hear an example of when one might care to lock a window's orientation.


I just added Computers as Theatre to my reading list.


Great ideas! The Method of Loci is a very powerful concept that takes excellent advantage of how human memory works, works nicely with zooming user interfaces, and is a great way to support user-defined editable pie menus that you can easily navigate with gestures.

I've experimented with combining the kinesthetic advantages of pie menus and gesture with the method of loci and zooming interfaces, including a desktop app called MediaGraph for arranging and navigating music, and an iPhone app called iLoci for arranging notes and links and interactive web based applets.

https://medium.com/@donhopkins/mediagraph-demo-a7534add63e5

>MediaGraph Music Navigation with Pie Menus. A prototype developed for Will Wright’s Stupid Fun Club.

>This is a demo of a user interface research prototype that I developed for Will Wright at the Stupid Fun Club. It includes pie menus, an editable map of music interconnected with roads, and cellular automata. It uses one kind of nested hierarchical pie menu to build and edit another kind of geographic networked pie menu.

MediaGraph Demo Video:

https://www.youtube.com/watch?v=2KfeHNIXYUc

https://medium.com/@donhopkins/iphone-app-iloci-by-don-hopki...

>iPhone iLoci Memory Palace App, by Don Hopkins @ Mobile Dev Camp. A talk about iLoci, an iPhone app and server based on the Method of Loci for constructing a Memory Palace, by Don Hopkins, presented at Mobile Dev Camp in Amsterdam, on November 28, 2008.

iLoci Demo Video:

https://www.youtube.com/watch?v=03ddG3jWF98

Here's some more discussion about window managers, the Method of Loci, MediaGraph and iLoci, and pie menus:

https://news.ycombinator.com/item?id=22089694

DonHopkins 81 days ago | parent | favorite | on: Nototo – Build a unified mental map of notes

>Great idea, I totally get it! Your graphics are beautiful, and the layering and gridding look helpful. It reminds me of some experimental user interfaces with pie menus I designed for creating and editing memory palaces: "iLoci" on the iPhone for notes and pictures and links and web browser integration in 2008, and "MediaGraph" on Unity3D for organizing and playing music in 2012, both of which I hope will inspire you for ideas to implement (like pie menus, and kissing!) or ways to explain what you've already created.

>A memory map editor can not only benefit from pie menus for editing and changing properties (like simultaneously picking a font with direction, and pulling out the font size with distance, for example), but it's also a great way for users to create their own custom bi-directionally gesture navigable pie menus by dragging and dropping and "kissing" islands together against each other to create and break links (like bridges between islands). (See the gesture navigation example at the end of the MediaGraph demo, and imagine that on an iPad or phone!) [...]

>I like the idea of moving away from hierarchical menu navigation, towards spatial map navigation. It elegantly addresses the problem of personalized user created menus, by making linking and unlinking locations as easy as dragging and dropping objects around and bumping them together to connect and disconnect them. (Compare that to the complexity of a tree or outline editor, which doesn't make the directions explicit.) And it eliminates the need for a special command to move back up in the menu hierarchy, by guaranteeing that every navigation is obviously reversible by moving in the opposite direction. I believe maps are a lot more natural and easier for people to remember than hierarchies, and the interface naturally exploits "mouse ahead" (or "swipe ahead") and is obviously self revealing.


This looks very nice! I've been thinking about something like this for quite a while. May I suggest two additional features that I would love to see in such a window manager:

1. A way to neatly arrange all the windows on a virtual sphere that surrounds the user, possibly arranging them automatically in a similar manner as a tiling window manager.

2. A way to rotate the before-mentioned sphere around you without forcing the user to rotate their head. This would avoid much of the neck strain. It could be done by, for example, holding a button on the keyboard while moving the mouse, or by a simple keyboard shortcut to rotate the sphere by X degrees in any direction.

This concept could also be extended to virtual desktops where each desktop is a sphere around the next, like an onion, with the ability to "zoom" in to the next desktop.
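
To make suggestion 1 concrete, here's a hypothetical layout routine; the radius, the per-row structure, and the ±45° elevation band are arbitrary choices:

    type V3 = (Float, Float, Float)

    -- Tile window centers on a sphere of radius r around the user: one row
    -- per entry of rowCounts (e.g. [3,5,3]), rows spread across ±45° of
    -- elevation, windows spread evenly in azimuth within each row.
    sphericalLayout :: Float -> [Int] -> [V3]
    sphericalLayout r rowCounts =
      [ toCartesian r (rowPitch row) (2 * pi * fromIntegral k / fromIntegral n)
      | (row, n) <- zip [0 ..] rowCounts
      , k <- [0 .. n - 1]
      ]
      where
        rows = length rowCounts
        rowPitch row
          | rows <= 1 = 0
          | otherwise = (fromIntegral row / fromIntegral (rows - 1) - 0.5) * (pi / 2)

    -- Spherical to Cartesian: yaw 0 faces +z, pitch raises toward +y.
    toCartesian :: Float -> Float -> Float -> V3
    toCartesian r pitch yaw =
      (r * cos pitch * sin yaw, r * sin pitch, r * cos pitch * cos yaw)

Each window would then be oriented to face the center of the sphere.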


> 2. A way to rotate the before-mentioned sphere around you without forcing the user to rotate their head. This would avoid much of the neck strain. It could be done by, for example, holding a button on the keyboard while moving the mouse, or by a simple keyboard shortcut to rotate the sphere by X degrees in any direction.

One more thing for the author to play around with (I don't have the hardware to try it myself) is to experiment with speed/acceleration -- the mouse pointer has speed and acceleration parameters that affect how it moves, so that if you want, you only have to move your mouse very little in real dimensions to move it thousands of pixels.

It might cause motion sickness, but maybe you can get away with pitching/yawing 5 degrees for every 1 degree of head movement, so to look "straight up" you only have to tilt your head up 18 degrees. Hopefully you'd still have the illusion of being oriented in a space.
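
The remap itself is trivial; the open question is whether your vestibular system tolerates it. Something like this sketch (angles in radians, relative to a calibrated straight-ahead pose):

    -- With gain 5, looking "straight up" (pi/2) needs only an 18° physical tilt.
    amplify :: Float -> (Float, Float) -> (Float, Float)
    amplify gain (yaw, pitch) = (gain * yaw, clamp (gain * pitch))
      where clamp = max (-pi / 2) . min (pi / 2)  -- don't render past zenith/nadir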


> It might cause motion sickness, but maybe you can get away with pitching/yawing 5 degrees for every 1 degree of head movement, so to look "straight up" you only have to tilt your head up 18 degrees.

That would cause extreme motion sickness.


Great ideas. I've started a Github project to keep track of usability improvements and added them there.


There's something a little humorous about a new VR system whose primary feature is how well it can display VT100 emulators.

I don't know anyone who used MS Windows when their software was all still MS-DOS programs. Windows really took off when programmers started writing programs designed to take advantage of the native GUI paradigm.

Likewise, VR is never going to be accepted as a practical user interface until it moves beyond the concept of "windows". I fully believe it's possible to make a great VR user interface, but I can't believe it's going to look anything like some rectangles floating in space.

This could be a VR-fvwm. What we really need is a VR-GTK+.


Programmers work with text and will continue to do so for the foreseeable future. Terminal based editors are powerful, so properly displaying terminal is a great goal.

What would a non-rectangular UI for manipulating text look like?


I'm not sure I agree with the parent, but to give an example for a non-rectangular UI for manipulating (programming) text: Combine structural editing[1] overlaid on some kind of 3d graph that shows how your entire codebase fits together.

[1] https://www.youtube.com/watch?v=CnbVCNIh1NA


> Programmers work with text and will continue to do so for the foreseeable future.

Programmers work with 2D overlapping windows and will continue to do so for the foreseeable future. I'm giving a criterion for when that's going to change. They're connected. You can't beat a 2D display for displaying 2D data.

> What would a non-rectangular UI for manipulating text look like?

Looked at a DOM Inspector recently (HTML)? Or Computed Styles (CSS)? Or Network Activity (HTTP)? All of the most common text formats I use, I view through a non-text interface. These aren't inherently 2D data streams -- that's just what we do because they're being put on a 2D display. All of them would be even more useful in 3D.

That's not even counting the biggest classical use for an extra dimension: time, e.g., version control history, animation state, or database transactions or migrations.


I don't really do much frontend web programming, so those examples don't resonate much with me. I see your point, though, with your last examples.

It's still very difficult for me to imagine how I would translate interfaces I'm used to into another dimension. Even if I picture something floating in front of me, I only perceive a 2D projection of it. Perhaps with clever transparency or by rotating it around I could receive more information than I would in 2D.

I feel like a character in Flatland, stuck in my own dimension.


I love using terminals and I've always wanted a VR window manager so I've put some thought into how it could work.

I've always envisioned an infinite 2D plane. I would love a terminal that stretches off forever above me, like the Star Wars opening text. Or a spreadsheet that goes on forever in 2 dimensions.

You could also embed your 2D non-Euclidean spaces in 3D. Want to enforce 80-character line wrapping? Write on the surface of a cylinder that literally wraps around after 80 characters.
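
As a toy version of the cylinder idea (the cell sizes are made up; this is just the mapping):

    type V3 = (Float, Float, Float)

    -- Map a character cell (column, line) onto a cylinder whose circumference
    -- is exactly 80 columns, so column 80 wraps back around to column 0.
    cellOnCylinder :: Float -> Float -> Int -> Int -> V3
    cellOnCylinder radius lineHeight col line =
      (radius * sin theta, negate (lineHeight * fromIntegral line), radius * cos theta)
      where theta = 2 * pi * fromIntegral (col `mod` 80) / 80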

Another idea could be to use depth instead of colours for syntax highlighting. Comments could pop out of the page.

I think there are a lot of possibilities.


While the demo is impressive, I'm a bit unhappy with the naming. Mind that Simula [0] is a historically enormously important programming language. (The language, which introduced object-oriented programming concepts, was developed by Ole-Johan Dahl and Kristen Nygaard, 1962-1987; Dahl and Nygaard were awarded the IEEE John von Neumann Medal and the A. M. Turing Award for this.)

[0] https://en.wikipedia.org/wiki/Simula


Looks cool and I will try it out. That said, these things always feel like an old paradigm stuffed into new technology. A bit like when film was new and people kept using it to record theater: it took years to come up with editing, not having to start a scene with people entering the stage, etc. Now that we have unlimited spatial space to manipulate, the thing we come up with is to fill it with 2D screens.


There aren't a lot of jobs that can really use the 'space' provided by VR. I think the adoption here would have to drastically outpace multiple monitors and OS workspaces and keyboard shortcuts. I can easily get to 10-20 tabs in VS Code or a browser, but my attention is only really ever on one at a time. If I have to search through them, I'll have to do that on a monitor or in VR. I might be able to organize better in 3D: HTML files on the left, CSS files on the right, JS in the middle, backend reference above, tools and console below (like 5 monitors in a plus pattern).

I'd need probably a dozen or so channels of information that I had to cycle between quickly to convince me, given I've gotten used to multi-monitors, workspaces, and keyboard shortcuts. Maybe some futuristic high level analyst job that needs to look through lots of different kinds of information (geographical, text, videos, etc) at once.


I can imagine a web editor with some "3d space" flair. You have your HTML/JS/CSS editor in front of you, but imagine strands connecting CSS rules to the HTML elements in the preview window, which is floating slightly to your right. Or the page being previewed can even be the size of a 5-storey building which seems to be standing across the street.

For another interesting use case, how about a 3d debugger that lets you follow a program flow over multiple classes and stacks...


I end up needing a lot of screen real estate whenever I work on some EE project --- datasheets, schematic, PCB editor, calculator, notes. Additionally, being able to e.g. manipulate the enclosure and PCB to do a test fit and immediately edit things would be super cool.


Sounds like this would work nicely with vim³[0]

[0] https://github.com/oakes/vim_cubed


Nothing improves productivity more than vim³.


Indeed. vim³ is great, but it's hard to compete with nothing.


It seems to me like VR could get some great usage of vector assets. Having icons and especially text render perfectly regardless of how close you get seems ideal.


Yes! In fact, games use another method that could also be very beneficial in VR: signed distance-field rendering. As far as I know it's mainly used for text, but there is nothing specific stopping it from working with other kinds of images.

Essentially you get your vector data, and map it into a texture of distance from the boundary, with positive being outside the boundary, and negative inside. Then you apply a shader to that resulting texture to make it presentable for the user. This means crisp contours/outlines at all but the most extreme proximity, and easy implementation of other common vector effects (such as bevels, shadows, etc).
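
The sampling end of it is remarkably small. A sketch of the thresholding step (the names are mine, not any particular engine's API):

    -- dist is the (bilinearly interpolated) signed distance sampled at this
    -- pixel: negative inside the glyph, positive outside. softness is the
    -- half-width of the antialiased edge, in the field's distance units.
    coverage :: Float -> Float -> Float
    coverage softness dist = smoothstep (negate softness) softness (negate dist)

    smoothstep :: Float -> Float -> Float -> Float
    smoothstep lo hi x = t * t * (3 - 2 * t)
      where t = max 0 (min 1 ((x - lo) / (hi - lo)))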


Wow, it is written in Haskell.


Yeah, Haskell was an interesting choice. I started working on it _because_ it was a Haskell project.

Overall it was a pretty rough road, but someone has to go first and I'd really like for Haskell graphics to take off.

I ended up writing bindings for Godot so we didn't have to deal with graphics stuff from scratch (like in the first few iterations).

While this lost us some of Haskell's benefits and it's not super functional-y, it's still nice to work with and most of the issues we encounter are related to the bindings being unpolished.

If you have any questions about doing this kind of project in Haskell, hit me up.


I was also impressed to see Nix package manager given first-class treatment.


Packaging this was an absolute nightmare due to various distributions shipping incompatible library versions and so on.

Nix, despite issues with everything OpenGL related, was the only thing that actually worked. And Cachix made installation really fast.

PS: if it starts building for some reason during the install process, let us know. We've tested it on various machines but due to some peculiarities specific configurations might require a rebuild even with Cachix. We can add that to the repo so that future installs and updates are faster.


Yes! Nix + Cachix allowed us to squeeze installation time to less than a minute.

We initially tried to get a static binary with AppImage and even nix-bundle, but those ended up being failed/painful paths.


That’s weird, wasn’t Simula 2 the first OO language? This Simula doesn’t seem to be related at all. I get name reuse, but this Simula is one of those huge milestones in computer science.


Yes, it was introduced in 1967 as a successor to Simula 1 and the language was called Simula 67. It influenced subsequent OO languages like CLU, C++, Java, and Smalltalk.

Barbara Liskov lent me a book that described it in 1973 and recommended that I read about Simula 67 before starting to work on a small part of the CLU compiler design. I believe the book was Structured Programming by Dijkstra, Hoare, and Dahl. It had a chapter by Dahl describing Simula 67.


Unfortunate name collision, and we never got around to renaming ourselves. I'll bring it up with George.


What is the reason you went with the name in the first place? "Simula" is the first part of "simulation" (which Simula was specialized in expressing), but I'm not seeing a simulation aspect here. Maybe an inside design inspiration relating to VR and simulating reality? Or maybe named after a village in Estonia?


"Simula" reminded me of the word "Simulation" or "Simulate", and had a good Matrixy feel to it. We're open-minded to a name reboot as we get further off the ground.

Right now we're working on perfecting Linux VR Desktop (for 2D apps), but the long-term goal is to bring forth killer office VR apps (for productivity, not gaming/entertainment) that wouldn't otherwise be possible in normal 2D Linux. These apps will likely take advantage of VR as an amazing "Simula[ting]" environment.


There was a similar product for Windows in 2016: https://techcrunch.com/2016/06/28/office-in-vr/

Disclaimer: I worked at that company at the time.


Looks like a nightmare to me, but I also prefer a single monitor with workspaces where most of my coworkers seem to feel the more monitors the better.


I think that in 1-2 generations of VR headsets, we will no longer need monitors. For now my HTC vive isn't comparable to my 4k screens.


Agreed that VR isn't comparable to 4K screens (yet), but for what it's worth I regularly put in 1hr+ long SimulaVR sessions with the good ol' 2016 HTC Vive.

Here are a couple of links showing off Simula's text quality:

https://github.com/SimulaVR/Simula/blob/gdwlroots-xwayland/d... https://github.com/SimulaVR/Simula/blob/gdwlroots-xwayland/d...

Many early VR Desktops (and VR games) didn't go to special effort to optimize text quality, and I think this has given people a lower than warranted impression of what is possible in VR today (even on old headsets).


I think VR desktops look really cool, but I'm not sure I'd want to have to wear a VR headset every time I used my computer. Monitors will surely stick around in parallel.


Until the VR headset comes integrated into contact lenses (my personal dream since I started needing reading glasses after passing 40).


Possibly a stupid question, but is it possible to try this out on an Oculus Quest? E.g. with a link cable?


Since Oculus Quest doesn't support SteamVR on Linux (AFAIK; they might have added support), not directly.

We'd have to implement a Godot VR interface for the Quest. If they have Linux support at all, this should be doable.


I have a Rift, not a Quest, so I don't know if they're the same, but you can pull your desktop and every window in your desktop up in VR on the default Rift OS, even in the middle of running another VR app. I had a browser open in the middle of playing Half-Life: Alyx. Did the same in No Man's Sky VR.

Someone asked if I could watch all the Star Wars movies at the same time. I got 6 trailers running at once in 6 separate virtual monitors with the default built in OS feature.


The Quest runs Android and is a stand-alone device, it's not just a display unit for your PC.


Yes, but there's the PC link option for the Quest, so if you have the link, do you get the same experience as a Rift, including being able to pull all your Windows desktop windows individually into VR?


Quest link requires the Oculus app which only runs on windows. ALVR/VirtualDesktop are also Windows only.

A cross-platform possibility would be if it served up a WebVR page that could be accessed with Firefox Reality.


This looks absolutely awesome, but I think there's a bit more to be done RE neck strain and general ergonomics for the vast majority of people.

If you are at a desk and/or without a swivel chair, you're pretty limited to a grid of ~9 screens directly in front of you, and even then it looks like you're turning your head a TON to see the outer layer.

Some ideas that might help:

1) To use an analogy, could you increase the "mouse speed" of moving your head? Right now it's 1:1 with the virtual space and makes geometric "sense", but it might be nice to be able to focus on screens on the left/right by moving your head less and having your focus move just as fast. For example, moving your head 50% of what you do now to zoom over to the same place would reduce neck strain when looking at those windows, but also open up a bit more window space if, for example, you could still move your neck 100% of what you're doing and be able to "look over" far enough to fit another set of screens even further out. I don't know if this disconnect in pan speed would be disorienting, but it's worth trying. :) c.f. https://i.imgur.com/0KCo9hG.png

2) Keyboard gestures to "re-center" your focus might also go a long way. Perhaps instead of craning your head to look at a window that isn't "straight ahead" for a long time (as long as you're working in that window), you could do a quick glance at that window and hit a keyboard shortcut that would recenter wherever you're looking to be your "straight ahead", so you could then straighten your neck back out and continue working in that window while in a more comfortable position. c.f. https://i.imgur.com/u6SI284.png

3) Likewise, we don't all have amazing headset resolutions and there's a lot of potentially wasted space whenever focusing on a particular window that's not "full-screened". Perhaps a keyboard shortcut to expand whatever screen you're looking at to the full size of your view would be helpful for eye strain. Even something that could temporarily "pin" a window to your view (full screen and moving with your gaze) might go a long way toward minimizing unused focus/pixels. c.f. https://i.imgur.com/yA1twKf.png

My immediate thought watching this was to mount a keyboard on a swivel chair, but I think requiring end-users to get customized hardware to take full advantage of your environment might be the wrong way to go. :)
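
For what it's worth, the re-centering in idea 2 would amount to a single yaw rotation of the workspace about the user. A hypothetical sketch:

    type V3 = (Float, Float, Float)

    -- Rotate every window position about the vertical axis through the user
    -- so the gazed-at window's center lands straight ahead (bearing 0 = +z).
    -- Yaw only, so the floor stays level.
    recenter :: V3 -> [V3] -> [V3]
    recenter (tx, _, tz) = map (rotateY (negate (atan2 tx tz)))

    rotateY :: Float -> V3 -> V3
    rotateY a (x, y, z) = (c * x + s * z, y, c * z - s * x)
      where (c, s) = (cos a, sin a)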


Having spent some time in VR in the past: remapping inputs like "the mouse speed of your head" can be super disorienting when you return to the "real world," perhaps dangerously so if you spend the majority of your workday in VR.

Your brain adapts, and starts to believe at a primal level that "things fall this speed when I drop them," "I move this fast when I walk", "when I point over there, it's at this angle," etc.

If anybody remembers playing GTA for too long, and later walking outside with the feeling that you ought to be able to jump in the nearest car and drive it away, you have an inkling of how your brain gets remapped.

Remarkably quickly, moving around in the real-world can feel "off" to the same degree that it felt "off" when you first entered the VR world when you start messing with these primal feedback loops. Get ready to bump into things, and don't operate any heavy machinery if you've been in a non-1:1 VR mapping.


After playing Half-Life: Alyx for a few hours straight the other day, my mind kept wanting to hit the teleport button to move around my house instead of just walking. It was a very strong reaction.


As another option, perhaps having it passively reorient whenever you look at the same location for a long time. If you stare at a window, then it slowly adjusts that window to be straight ahead. If done slowly enough, you might be able to trick the wearer's brain into thinking that it hasn't actually moved.
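
The adjustment could be as simple as an exponential decay of the remaining offset, e.g.:

    -- Hypothetical dwell-based drift: each frame, close a small fraction of
    -- the yaw offset between the stared-at window and straight ahead.
    -- With rate = 0.05 per second, about 5% of the offset closes each second.
    driftStep :: Float -> Float -> Float -> Float
    driftStep rate dt yawOffset = yawOffset * (1 - rate * dt)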


1) I think that'll be super disorienting, but we might add a quicksnap feature like in HL Alyx to rotate your head quickly.

2) I believe we already have that (via Super-')

3) Good idea. Being able to instantly fullsize and "pin" a window sounds good, maybe even as a toggle.


Well, now I have something to use my SteamVR setup for after I finish Half-Life: Alyx.


If you're stuck I can name a few dozen other cool things to do.

It's not like Alyx is the only reason to own a VR headset.


Beat Saber is really fun as well!


Please do share, I am new to this world as I just got a headset.


That's a bit like saying "Just got a monitor. What's good to look at on it?" ;-)

The world of VR is so diverse it's tricky to recommend things without knowing you.

Personally I'm not much of a gamer so I tend to be interested in narrative stuff, abstract visualisation and geometry stuff, or just exploring interesting environments.

And the games I do like tend to have one or more of the above qualities.

You might be an RPG fan, a strategy gamer, an art aficionado, looking for productivity or educational tools, or any number of things.


If you like car racing sims, Project Cars 2 with a good wheel/pedal set (G29/G920 etc.) is sublime, like really good fun.

PavlovVR is CS:GO in VR and done really well, hilariously good fun.


Get Beat Saber, git gud, lose weight because suddenly you're doing aerobic exercise for fun


This has been the main thing that I have wanted out of VR, and want to try whenever it is that I get one. I'm glad to see that it exists, and that it is usable for my primary goal of coding.


I really hope we soon get UI elements which are not flat, with form and function.

Volumetric video, spatial website navigation design guidelines, and augmented reality are high on my wish-list.


Can you elaborate what you mean by volumetric video and spatial navigation?


Video with z-depth, maybe computed from stereoscopic video. The complexity added would then be similar to an alpha channel. This would break video out of the rectangles we see in all UI design.
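
Concretely, depth turns each video pixel into a 3D point via the usual pinhole model (the focal length f in pixels and principal point (cx, cy) here are assumptions of that model):

    type V3 = (Float, Float, Float)

    -- Unproject a pixel and its depth into camera space, turning the frame
    -- into a point cloud instead of a flat rectangle.
    unproject :: Float -> (Float, Float) -> (Int, Int) -> Float -> V3
    unproject f (cx, cy) (px, py) depth =
      ( (fromIntegral px - cx) * depth / f
      , (fromIntegral py - cy) * depth / f
      , depth )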

Wikipedia articles follow a format which makes it easy to find information. Perhaps site maps for a VR web could be arranged according to a convention so that users can easily figure out where to look for specific information. The early VR web tried to do this by arranging content like rooms in a building, and streets, but that is cumbersome to use in practice. I think the wiki format suggests a much better approach can be created.


Seems like this doesn't have any relation to xrdesktop[1], which doesn't have mouse support! Really glad to see they have it here, any VR workspace is completely unusable if I have to use motion controls to simulate a mouse. Will definitely be trying this out tonight.

[1] https://gitlab.freedesktop.org/xrdesktop/xrdesktop


xrdesktop has a very talented team of hackers working on it (so I have no doubt they'll fix the mouse issue!).

Our approach with mouse cursors has been to assign every window its own mouse cursor. This is possible in Simula since the active window is the one that the user is currently gazing at, so that -- for example -- if you gaze at one window and move its cursor, it won't affect the other windows' cursors. We also allow users to control mouse cursors with their eye gaze (presently bound to `Super + Apostrophe`), which is a good productivity boost.

In fact, you can do everything in Simula with just the keyboard (no mouse or VR controllers needed): move windows, move the mouse cursor, click the mouse cursor, etc. Once you learn the shortcuts, it's very quick.


I play a decent amount of flight sims in VR, and my eyes simply wouldn't be able to handle any more of this than they already do for the sake of game immersion. Unless there are some new headsets out there that fix this problem, I'm not sure this will work for me yet.

I do await the day my whole system is a AR/VR hybrid, but I suspect it'll be a while still.


The tech is still getting better, so it isn't like VR headsets are stagnant technology-wise. I can see resolutions getting higher, and maybe using AMOLEDs for less backlighting hitting your eyes all the time.

I've been spending an hour and a half in VR everyday as part of my quarantine workout, and it seems fine, though I'm not really reading anything in that environment.


Does this work with Samsung Gear VR[0]? I got one for free but never really used it much.

[0]: https://www.samsung.com/us/mobile/virtual-reality/gear-vr/ge...


Very clever idea actually, major props. Unfortunately I get really sick from VR, so it's not going to be for me, but I can definitely see how someone could take advantage of it. In the WFH era, for people who have tiny laptops, this could do the job. I personally hook mine up to the TV as a second screen, but still... Very clever!


Or in cramped environments where you don't always have space for a multi-monitor setup.


Do you get sick in all VR? It was my understanding that VR generally only gets people sick when there is motion. So any stationary 6 DOF VR experience should generally not make anyone sick.


How does it get the image of an application inside of SimulaVR? Is it something weird like grabbing a graphics buffer? Or is it launching the applications as a child process so it can access its viewport? Or is it asking the windows manager? I'm not sure how this part works and am curious.


We implement a Wayland compositor interface via `wlroots`. Windows can launch like usual in our environment (i.e. via DISPLAY or WAYLAND_DISPLAY).

X11 apps are supported via Xwayland.


I remember seeing something like this during one of the early MagicLeap demos and was intrigued. If the headset was lightweight enough this would rock. In that video the screen remained in 'front' of the viewer and he could swipe left or right to switch screens.


I can easily see this in a day-trading application (Forex/Bloomberg/etc.): you could run a live-feed 4x2-type dashboard matrix with a "click"-through drill-down experience or similar. So many possibilities. Nice work, with big upside potential.


Fantastic! Now I just need a SinoLogic 16 with Sogo-7 data gloves and Thomson eyephones. :)


Very cool work; particularly important are the text quality improvements. Could you talk about what the approach to improving text quality is? Are you implementing projected font hinting by any chance?


The core issue with surface (text) quality is essentially that classic texture filtering fails for surfaces here.

So we set a higher DPI than usual, and then use a supersampling algorithm with a suitable kernel as a lowpass filter. This allows us to avoid artifacts while maintaining sharpness.
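
Schematically (a sketch of the idea, not our actual code), the downsample is a kernel-weighted average over each output pixel's k×k footprint in the oversampled texture:

    -- Hypothetical luminance lookup into the k-times-oversampled texture.
    type Image = Int -> Int -> Float

    downsample :: Int -> [[Float]] -> Image -> Image
    downsample k kernel hi x y =
      sum [ (kernel !! j !! i) * hi (k * x + i) (k * y + j)
          | j <- [0 .. k - 1], i <- [0 .. k - 1] ]

    -- k = 2 box filter; a tent or Gaussian kernel trades a little sharpness
    -- for fewer shimmering artifacts.
    box2 :: [[Float]]
    box2 = [[0.25, 0.25], [0.25, 0.25]]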


About eye strain: how is focusing in a VR headset? Is it always a fixed focal point, given it's really just flat screens, or does one have to focus differently when things are "far away"?


> how is focusing in a VR headset

From a practical point of view, at least for me, "it just works like normal".

As I understand the biology of it, there are really two different relevant mechanisms for "focus".

The lens in your eye changes shape to focus on things closer/farther away in real life. In VR this doesn't happen; there's a fixed focal plane a few meters away. I know this, but I don't notice it at all. Everything is always in focus, and my brain doesn't seem to have any trouble dealing with it.

There is also the problem of "vergence": your eyes point at the thing you are looking at, which means they point "closer to each other" when looking at something close, and along parallel lines when looking at something at infinite distance. VR headsets implement this reasonably well. If you hold a finger a few inches in front of your nose and focus on something a few meters away, you should be able to see two copies of your finger (or at least I do; I'm told that other people have to try to see this). This works in VR as well.
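
The vergence angle itself is just trig on the interpupillary distance; back-of-the-envelope:

    -- Angle between the two eyes' gaze lines for a target at distance d,
    -- given an interpupillary distance ipd (both in meters).
    vergenceDeg :: Float -> Float -> Float
    vergenceDeg ipd d = 2 * atan ((ipd / 2) / d) * 180 / pi
    -- vergenceDeg 0.063 2.0 ~ 1.8; vergenceDeg 0.063 0.3 ~ 12

So for mid-room content the eyes only converge on the order of a degree or two.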

As an aside: trying to play shooters in VR has been surprisingly hard for me because of the vergence issue. I'm not used to aiming a real weapon, and it turns out I have trouble figuring out which near-field image corresponds to the eye I'm trying to aim with while I focus on the target in the distance. I'm not sure if I'd find it easier in real life.


No idea actually. I haven't noticed any strain or discomfort though so whatever it is is fairly compatible with how your brain works.


To add to your questions: what about old farts with reading glasses, like me? Do they just work with VR, or are they even needed?


I wear glasses and VR is fine for me. I know some people who can't use it with glasses, but I think that's for clearance reasons.

There are prescription lenses for headsets, but they're fairly expensive and IMO only worth it if you use VR a lot.


Also worth noting the foam guard on your headset can be adjusted or replaced for a better/different fit. I have three foam guards I alternate between (because people get sweaty when I have them over for Beat Saber and it's easy to swap in a clean one for the next person) and one of them didn't fit my glasses until I cut out some clearance for them. Now all three work well, and I wear glasses in VR 100% of the time.


Thanks, good to know. I'll be sure to bring my reading glasses when I go try on headsets. Something I would have likely neglected to do otherwise.


I’m pretty sure you never focus on anything up close in VR. It’s all “far away”; as someone who is nearsighted, I absolutely have to use glasses in VR. I suspect, but am not certain, that some farsighted people might not need correction, as something may seem close up but really it’s just “big and far away”.

Try it!


> It’s all “far away”; as someone who is nearsighted, I absolutely have to use glasses in VR

Yeah, me too. As far as my eye is concerned, most things are "far away". Interestingly though, I can read things in VR "farther" than I can in real life. I am not exactly sure what the convergence looks like.


The focal plane is typically a few meters away (it depends on the headset)... so I guess if you don't need glasses at that distance you would be fine.

*IANA optician


That is for presbyopia, right? Unless it is severe (or combined with other conditions like myopia), you may not even need them. Give it a try!


Very cool demo but I don’t want to read/write (code or otherwise) at an angle. It’s tiresome and straining.

Whatever I’m editing should be frontmost and dead center, IMO.


We have a key binding that automatically orients the active window towards the user's gaze. If you're interested, I could make a launch flag that leaves this feature on by default, so that any window you look at automatically orients itself toward your gaze. This actually used to be the default behavior in Simula a few iterations ago, and worked nicely in most contexts.


Not being able to see the keyboard would push my touch typing skills to the limit! I hope it's doable, because this seems incredibly useful.


When touch typing fails, Simula has a mouse & keyboard view: https://www.youtube.com/watch?v=D5c3Hfp8Hcw Right now it's bound to `Super + w`.


Amazing! I had no idea this could be done in the current generation of VR devices.


cribbing your music picks :)


One word: neck strain. If carpal tunnel syndrome is not enough, we now get even more fun trying to hurt our necks.


I imagine the neck strain after a full work day. Ergonomics is very important. I like the idea, though.


Hmm, I think we have a common interest.

You could reach me at: http://tbf-rnd.life/contact/

Or if you share some way of contacting you, I'd be very happy to reach out.


A tip for finding out the email of a GitHub user: choose any of their commits, open it on GitHub, and add ".patch" at the end of the URL, like this [1]. There's a good chance they used a real email to sign their commits.

[1] https://github.com/SimulaVR/Simula/commit/19cf46894cae1962e9...
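
If you'd rather script that, here's a quick sketch using only the Python standard library (the `<full-sha>` is a placeholder for a real commit hash, since the URL above is truncated):

    # Fetch a GitHub commit as a patch and print the author line.
    import re
    import urllib.request

    url = "https://github.com/SimulaVR/Simula/commit/<full-sha>.patch"

    with urllib.request.urlopen(url) as resp:
        patch = resp.read().decode("utf-8", errors="replace")

    # git's patch format carries the author in a "From:" header.
    match = re.search(r"^From: (.+)$", patch, re.MULTILINE)
    print(match.group(1) if match else "no From: header found")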


Is this for Xorg or Wayland? I couldn't find the answer in the readme.


It's based on wlroots, but native Wayland support currently lags behind the X compatibility layer via Xwayland. That's fixable, though.


Developing a VR desktop on top of X-Windows seems like driving very fast down a dead-end dirt road with a cliff at the end.


We don't actually use X as the framework. The underlying code is Wayland, and then Xwayland as a compatibility layer. It's just that most apps we test against are X11.


It's a Wayland-based compositor (using Drew DeVault's wlroots) that supports X apps.


The name is familiar. :-)


How much change would it require to run on MacOS?


It would require Apple to care about VR, gaming, and VR-capable GPUs. Except for a Mac Pro, no Apple devices are powerful enough for VR. Even the top-end MacBook Pro at $19000 still has an underpowered GPU, below the lowest VR specs.


That top-end MBP has four Thunderbolt 3 ports; have you tried any of the eGPU setups? (I haven't, but I've been thinking about it for my 5th-gen X1 Carbon with Thunderbolt 3.)


The highest possible price for a MacBook Pro is US $6099. How do you get $19000? The iMac Pro can get to $14,299.


Sorry, bad memory. Still, that's beside the point. The point is that Apple so far is neither game- nor VR-friendly. I hope they change that stance. I'd prefer to be on Mac 100% of the time, but I have a Windows machine as well because that's basically where all VR is happening.


Major hurdle would be porting Wayland/wlroots to MacOS I think.


One step closer to having an Ono-Sendai.


now make the windows wobbly and make it a spinning cube.


Soon after the invention of the movie camera, there was a "genre" of films that consisted of nothing but pointing a movie camera at a stage and filming a play in one shot.

That's the classic example of using a new technology to emulate an old technology, without taking advantage of the unique advantages of the new technology, before the grammar and language of film had been invented.

https://en.wikipedia.org/wiki/Film_grammar

https://en.wikipedia.org/wiki/History_of_film

>The first decade of motion picture saw film moving from a novelty to an established mass entertainment industry. The earliest films were in black and white, under a minute long, without recorded sound and consisted of a single shot from a steady camera.

>Conventions toward a general cinematic language developed over the years with editing, camera movements and other cinematic techniques contributing specific roles in the narrative of films. [...]

>In the 1890s, films were seen mostly via temporary storefront spaces and traveling exhibitors or as acts in vaudeville programs. A film could be under a minute long and would usually present a single scene, authentic or staged, of everyday life, a public event, a sporting event or slapstick. There was little to no cinematic technique, the film was usually black and white and it was without sound. [...]

>Within eleven years of motion pictures, the films moved from a novelty show to an established large-scale entertainment industry. Films moved from a single shot, completely made by one person with a few assistants, towards films several minutes long consisting of several shots, which were made by large companies in something like industrial conditions.

>By 1900, the first motion pictures that can be considered "films" emerged, and film-makers began to introduce basic editing techniques and film narrative.

Simply projecting desktop user interfaces designed for flat 2D screens and mice into VR is still in the "novelty show" age, like filming staged plays written for a theater, without any editing, shots, or film grammar.

VR window managers are just a stop-gap backwards-compatibility bridge, while people work on inventing a grammar and language of interactive VR and AR user interfaces, and re-implement all the desktop and mobile applications from the ground up so they're not merely usable but actually enjoyable and aesthetically pleasing to use in VR.

The current definition of "window manager", especially as it applies to X-Windows desktops, tightly constrains how we think and what we expect of user interface and application design. We need something much more flexible and extensible. Unfortunately, X-Windows decades ago rejected the crucially important ideas behind NeWS and AJAX: that the window manager should be open-ended and dynamically extensible with downloadable code, which is the key to making efficient, deeply integrated user interfaces.

For example, the "Dragon Naturally Speaking" speech synthesis and recognition system has "dragonfly", a Python-based "speech manager" that is capable of hooking into existing unmodified desktop applications, and scripting custom speech based user interfaces.

https://github.com/t4ngo/dragonfly

Another, more ambitious example is Morgan Dixon's work on Prefab, which screen-scrapes the pixels of desktop apps and uses pattern recognition and composition to remix and modify them. This is like cinematographers finally discovering they can edit film: cut and splice shots together, overlay text, graphics, pictures-in-picture, and adjacent frames. But Prefab isn't built around a scripting language like dragonfly, NeWS, or AJAX.
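
To make that concrete: the kernel of the Prefab idea is pattern matching on raw pixels. A toy sketch with numpy (exact matching only; real Prefab uses far more robust models of widget appearance):

    # Toy pixel matching: find where a small template (say, a button's
    # pixels) occurs inside a screenshot.
    import numpy as np

    def find_template(screen, template):
        th, tw = template.shape
        hits = []
        for r in range(screen.shape[0] - th + 1):
            for c in range(screen.shape[1] - tw + 1):
                if np.array_equal(screen[r:r+th, c:c+tw], template):
                    hits.append((r, c))
        return hits

    screen = np.zeros((8, 8), dtype=np.uint8)
    screen[2:4, 5:7] = 255  # a 2x2 "widget" somewhere on screen
    template = np.full((2, 2), 255, dtype=np.uint8)
    print(find_template(screen, template))  # [(2, 5)]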

Here's some stuff I've written about the direction user interfaces should take to move beyond the antique notion of "window managers" and enable much deeper integration, accessibility, and alternative input and output methods.

https://news.ycombinator.com/item?id=14182061

>Glad to see people are still making better window managers! [...] I think extensibility and accessibility are extremely important for window managers. [...] I'd like to take that idea a lot further, so I wrote up some ideas about programming window management, accessibility, screen scraping, pattern recognition and automation in JavaScript. [...] Check out Morgan Dixon's and James Fogarty's amazing work on user interface customization with Prefab, about which they've published several excellent CHI papers: [...]

>Imagine if every interface was open source. Any of us could modify the software we use every day. Unfortunately, we don't have the source.

>Prefab realizes this vision using only the pixels of everyday interfaces. This video shows the use of Prefab to add new functionality to Adobe Photoshop, Apple iTunes, and Microsoft Windows Media Player. Prefab represents a new approach to deploying HCI research in everyday software, and is also the first step toward a future where anybody can modify any interface.

https://news.ycombinator.com/item?id=18797818

>Here are some other interesting things related to scriptable window management and accessibility to check out: aQuery -- Like jQuery for Accessibility

https://donhopkins.com/mediawiki/index.php/AQuery

>It would also be great to flesh out the accessibility and speech recognition APIs, and make it possible to write all kinds of intelligent application automation and integration scripts, bots, with nice HTML user interfaces in JavaScript. Take a look at what Dragon Naturally Speaking has done with Python:

https://github.com/t4ngo/dragonfly

>Morgan Dixon's work with Prefab is brilliant.

>I would like to discuss how we could integrate Prefab with a Javascriptable, extensible API like aQuery, so you could write "selectors" that used prefab's pattern recognition techniques, bind those to JavaScript event handlers, and write high level widgets on top of that in JavaScript, and implement the graphical overlays and gui enhancements in HTML/Canvas/etc like I've done with Slate and the WebView overlay.


htop at top, my neck.


What about an underwater setup where the VR headset is merged into a scuba mask?

Eventually we could move on to having tubes implanted for feeding and waste. You could stay in there indefinitely, forget you ever even existed outside the simulation...



