Remote workstations for discerning artists (netflixtechblog.com)
132 points by el_duderino on March 8, 2021 | 68 comments



I was a little disappointed to see that there wasn't a discussion of how Netflix provides (or doesn't!) a high-fidelity visual experience. I imagine certain artists require fluid motion and previews without visual artifacting, so I was hoping there was some discussion about how they provide that. I've used a handful of remote desktop systems, and I haven't found anything that's able to provide even a passable video watching experience. The issue I've run into is that the technology designed for low-latency video heavily utilizes compression, which falls apart when looking at text.

I really wish Microsoft would just throw a ton of money at making RDP an absolutely delightful experience. Last time I checked, RDP wasn't able to use your GPU at all unless you had a very specific, no-longer-maintained version of Windows 10 Pro (the Workstation variant).


I work on Jump Desktop (https://jumpdesktop.com), which is being used for remote video editing at 60fps on up to 3 monitors. It supports very low latency remote desktop and even works in environments that have high latency. For example, here's Jump connected from the east coast to a physical server in Paris with 150ms ping times between the two: https://twitter.com/jumpdesktop/status/1356679263806763013?s...

It works on Macs and Windows. It works really well with text as well as full motion video.


In your excitement, you also forgot to explain how this is achieved.


Currently we use a modified WebRTC stack with VP8/VP9 to encode the video stream. This is similar to what quite a few game streaming services are doing, like Stadia and even Amazon's new Luna(?) service. There are lots of platform-specific optimizations up and down the WebRTC stack to make sure latency is kept to a minimum while preserving quality: things like only encoding pixels that change, progressively encoding over multiple frames to sharpen quality, and keeping jitter buffers to a minimum.
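To make the "only encode pixels that change" part concrete, here's a rough sketch of dirty-tile detection; the tile size and hashing are illustrative assumptions, not our actual pipeline:

```python
# Toy dirty-tile detection: hash fixed-size tiles of each frame and re-encode
# only the tiles whose contents changed since the previous frame.
import hashlib
import math

import numpy as np

TILE = 64  # tile size in pixels (illustrative assumption)

def changed_tiles(prev: np.ndarray, cur: np.ndarray):
    """Yield (x, y, tile) for each tile of `cur` that differs from `prev`."""
    h, w, _ = cur.shape
    for y in range(0, h, TILE):
        for x in range(0, w, TILE):
            a = prev[y:y + TILE, x:x + TILE].tobytes()
            b = cur[y:y + TILE, x:x + TILE]
            if hashlib.blake2b(a).digest() != hashlib.blake2b(b.tobytes()).digest():
                yield x, y, b

# A 1080p frame where only a small region (say, a repainted widget) changed.
prev = np.zeros((1080, 1920, 3), dtype=np.uint8)
cur = prev.copy()
cur[100:150, 200:300] = 255

dirty = list(changed_tiles(prev, cur))
total = math.ceil(1080 / TILE) * math.ceil(1920 / TILE)
print(f"{len(dirty)} of {total} tiles need re-encoding this frame")
```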


Recent advancements in hardware video encoding have made things like this significantly easier to achieve. If you're jumping into a device that has the power for video editing tasks, then it most certainly also has great video encoding capabilities.


Buy it? :)


What’s the actual latency you get between pressing a key and seeing the result on the screen? And how is it if the connection is good and the server is close?


This is a good question. I guess what you're actually interested in is the latency between the physical display updating vs the local display updating, right? I just did a quick test under good circumstances using this video (machines on the same lan): https://www.youtube.com/watch?v=OLxY0HDakRk with the physical and remote display side by side. I noticed an approximately 3 frame lag between the physical display and the local / client display update.


Well what I really meant was the time between pressing a key and seeing a character on the screen. It’s easy to measure (except for deciding when the key is pressed. I go for hitting the key fast and timing from when it stops going down) by using some high speed camera (or medium speed, see https://apps.apple.com/gb/app/is-it-snappy/id1219667593 ) and it’s the latency that actually affects how usable something is for typing. The problem is that it’s only really good for comparing things as lots of devices in the middle add latency, eg keyboards can have 15-50ms of latency[1], typical deferred rendering can give you something like a frame of latency, monitors may add latency or just have slower switching times (it takes longer to switch a pixel to an accurate colour), and some pixels at the bottom of the screen will make it out of the computer 16ms (or whatever a frame is) later than those at the top.

For comparison with my (not particularly optimized) set up going over a fast internet connection, I get something like 150ms between pressing a key and the character starting to show up in emacs (it takes a few ms for the pixels to finish switching). 10 frames feels like a lot to me. My best guess is that without changing anything drastic or eg reducing resolution, I might be able to get that down to 100ms, which still looks like a pretty big number (eg a round trip between London and New York is something like 70ms).
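Assuming a 60Hz display, here's the quick arithmetic behind those numbers (the keyboard figures are from [1]):

```python
# Frame <-> millisecond conversions for the latencies discussed above (60 Hz).
FRAME_MS = 1000 / 60                       # ~16.7 ms per frame

print(f"3 frames of lag     = {3 * FRAME_MS:.0f} ms")         # the LAN test above
print(f"150 ms key-to-glyph = {150 / FRAME_MS:.1f} frames")   # the remote emacs case

# Latency that exists before the network is even involved:
keyboard_lo, keyboard_hi = 15, 50          # keyboard scan/debounce latency [1]
render = FRAME_MS                          # ~1 frame of deferred rendering
print(f"local floor ~ {keyboard_lo + render:.0f}-{keyboard_hi + render:.0f} ms "
      f"before display switching time")
```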

Anyway, thank you for investigating this. A few frames from the network seems pretty good, especially with a reasonably large delta between frames in that video.

[1] https://danluu.com/keyboard-latency/


I think the gold standard here is what Chromecast does with YouTube— tell the cast device directly what video to show instead of re-encoding and passing the whole thing over the network.

But of course, doing that requires application level support, or at least low-level hooks into the desktop rendering system to extract video streams and treat them differently from everything else. And if the video is being generated on the fly or is an uncompressed local preview, then you may be looking at an intermediate compression step regardless, but at least it could be compression suited for video instead of for text.


Well, that works because there is a pre-rendered video that one can just point the Chromecast to most of the time. If you cast a tab or whole desktop to display anything that's not pre-rendered (which is what workstations would fall into) then Chromecast runs into the same problems as anything else.


> But of course, doing that requires application level support, or at least low-level hooks into the desktop rendering system to extract video streams and treat them differently from everything else.

That's exactly what RDP does on Windows! I don't know the extent to which it supports rendering controls on the client, but the concept is that RDP is aware of things like text, buttons, title bars, etc., and is able to tell the client things like "draw a button at (20, 140)" instead of sending the raw pixel data. With enough engineering effort (and some standardization on the application UI side) I totally think Microsoft could make RDP the protocol of choice for virtual desktops.
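As a back-of-the-envelope illustration of why primitives beat pixels (the field names here are made up, this is not the actual RDP wire format):

```python
# A "draw a button" message is a few dozen bytes; the same button rasterized
# as raw 24-bit RGB is tens of kilobytes, before any compression.
import json

button = {
    "op": "draw_button",
    "x": 20, "y": 140,
    "width": 120, "height": 32,
    "label": "OK",
    "style": "default",
}

primitive_bytes = len(json.dumps(button).encode())
raw_pixel_bytes = button["width"] * button["height"] * 3

print(f"primitive message: {primitive_bytes} bytes")
print(f"raw pixels:        {raw_pixel_bytes} bytes "
      f"(~{raw_pixel_bytes // primitive_bytes}x larger)")
```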


It makes total sense; OTOH I'm kind of cracking up here because isn't this what a web browser has become? A portal for viewing a remote application state, that can be serialized into a blob of semi-structured XML and JSON?

It's hard to imagine that retrofitting such a serialization onto the total wild west that is desktop applications could be any more successful or consistent than what the web has already got.


> but the concept is that RDP is aware of things like text, buttons, title bars, etc

The fun bit, at least as I understand it: with the move to more and more Electron-based apps, this is somewhat moot, since those apps draw their own thing. There are still performance improvements RDP brings over a simple "blit everything" protocol, but they're smaller and smaller.

My biggest complaint with RDP in general is how many Windows apps react poorly to changing resolutions and DPI. I'm practically an expert now in rescuing windows that have decided "I was over here on the right side of your ultra-wide monitor, so I'm still over here in this no man's land at this tiny RDP resolution."


They switched to h264 video years ago.


When I worked at a TV station, all the editors had capture and output hardware in their systems. The output only displayed what was in the video preview window on the screen, and it sync'd with what was being displayed in the editor. Most video editors are likely already set up to do this.


Teradici clients for realtime. For smooth playback they export to an mp4. For real-time smooth playback, they do SDI to a streaming box that provides low latency.


Hey, we launched Renderro - cloud computers for filmmakers, graphic designers and animators - in October 2020, and since then we've been helping creatives all around the globe. Check us out here: https://renderro.com/

We support up to 4 monitors, each up to 4K res, and the quality we offer (60 Hz and low latency) allows you to do real-time editing, e.g. when using Premiere Pro or Avid MC running on our Cloud Computers. We are also used a lot for 3D design and rendering with Blender.

Independent review: https://www.youtube.com/watch?v=LxM4mC5hwpo

Tutorial showing how to use Renderro: https://www.youtube.com/watch?v=Uw44El8kMxM

If you have any questions, don't hesitate to contact us at info@renderro.com

Please note that with us you don't need any additional tools to connect to our Cloud Computers (no need for Teradici or Parsec) - we've got VDI built into Renderro. We are focusing on delivering Renderro as a super easy to use, out-of-the-box solution with clear pricing and superb quality, so I hope that you will enjoy it ;)

Cheers!


> Last time I checked, RDP wasn't able to use your GPU at all unless you had a very specific, no-longer-maintained version of Windows 10 Pro

Not sure where you got that idea. RDP has been able to use the GPU for both encoding and the apps since the Windows 7 era, and both certainly still work on the current version of Windows 10. The main limitation I've run into is that it's still capped at 30 FPS.


It's my understanding that RemoteFX is only available in Windows Server and Windows 10 Pro for Workstations.


During the Windows 7 era it required Enterprise or Server, during the Windows 10 era it requires Pro, Enterprise, or Server (or that Pro for Workstations version you mention). Remember you have to actually enable it though, it's not configured by default.

Perhaps you're thinking of the deprecation of the RemoteFX guest vGPU for the Hyper-V role?


> I really wish Microsoft would just throw a ton of money at making RDP an absolutely delightful experience.

I suspect they will make a VSCode RDP plugin and call it next-generation remote desktop.


This has been a quest of mine for a while now, “native” (or close enough that it does not matter) display and input over a data network.

You can get very close today, but you need both very low network latency (<8ms, or half a 16ms frame at a minimum) and a hardware + software stack that can quickly and efficiently dump its display framebuffer to the network.

Getting all that to work at the same time is a real trick. Getting it to all work consistently has so far not proven worth the effort vs some other solution like just using another device.


You should probably check out Teradici. They even finally published Linux clients.


i'm not really in the workstation provisioning industry so i can't comment on the article, but i've used salt, puppet, etc in provisioning servers and have found that declarative configuration management is surprisingly complex, with a lot of pitfalls deceptively hidden behind the "look how simple it is to install a package" demos. declarative languages for config management imo were a mistake, because in order for it to be truly declarative, every part of the system (including preinstalled software) needs to be managed by the config management tool. otherwise stuff like "i pushed a diff to remove the block where i asserted this package to be installed but the package is still installed on the hosts" will continue to surprise newish users. stuff like cdk works because, as stated earlier, it asserts the desired state of every piece of infrastructure it manages

a better alternative would instead be something akin to Dockerfiles where you assert what packages you want to be installed on top of a base system, and if you want anything changed you update the dockerfile and rebuild the whole shebang.
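to make that pitfall concrete, here's a toy model (nothing to do with salt or docker internals, just the shape of the problem): an in-place converge never removes what you stop declaring, while a rebuild from a base image does.

```python
# In-place convergence vs. image rebuild, reduced to sets of package names.
BASE = {"openssh", "curl"}

def converge(installed: set[str], declared: set[str]) -> set[str]:
    """Salt/Puppet-style apply: install whatever is declared but missing.
    Nothing here removes a package once its declaration is deleted."""
    return installed | declared

def rebuild(declared: set[str]) -> set[str]:
    """Dockerfile-style: throw the old image away, start from the base."""
    return BASE | declared

host = converge(set(BASE), {"ffmpeg", "blender"})
host = converge(host, {"blender"})            # ffmpeg dropped from the config...
print("converged host:", sorted(host))        # ...but it's still installed

image = rebuild({"blender"})
print("rebuilt image: ", sorted(image))       # ffmpeg is gone
```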

i'm imagining a system where the local filesystem is more or less immutable and work is saved to a separate filesystem. but having worked from computer labs in schools, i know how less than ideal that is when i need to use a program that isn't preinstalled to do my work, so maybe this all doesn't work and saltstack is a necessary evil


> a better alternative would instead be something akin to Dockerfiles where you assert what packages you want to be installed on top of a base system, and if you want anything changed you update the dockerfile and rebuild the whole shebang.

Something like Packer?

You are right that there are hidden pitfalls. But they shouldn't be a big deal if servers are cattle. A lot of corner cases are related to trying to manage servers that live forever. This shouldn't really be done in 2021 except in rare cases.

> i'm imagining a system where the local filesystem is more or less immutable and work is saved to separate filesystem

CoreOS worked like that. Immutable filesystem for the system itself. It also had a secondary "mirror" of this filesystem that was used for upgrades (and as a fallback in case boot failed). New stuff would either come when the instance first booted up (cloud-init), or via system upgrades.

They were acquired and so my experience with them ended there.

There's also https://nixos.org/


I'm an artist and I do fine with a 13" laptop and a Wacom tablet that fits next to it in my laptop bag, to be honest. I consider myself to be pretty discerning about my tools.

If I was doing a lot of 3d stuff I might want more horsepower but I'm pretty happy to be able to go sit out in the park and draw with all the Internet off to conserve power.

YMMV, obviously.


It's honestly just unpleasant working over a laptop when your work is video or 3D. It's the difference between working over a hot jet engine and a gently humming tower.

Not to mention some 3D/video workflows are just unfeasible with the lack of power available in laptops.


actually more horsepower isn't necessarily helpful - I was reading earlier today about a point cloud architecture application that preallocates larger jobs by guessing the capabilities of the available multiprocessor. This isn't a low end application but a banner product for Leica Geosystems. The reviewer could only get it to slice the job into 8 sections despite many more cores being available.

in filmmaking, personal equipment rental companies like sendgrid have raised the level of everything: now I can buy my own Red and lenses, learn my tools intimately, and pay for them (when I'm not paid to bring them on set) by renting to customers who no longer have to deposit the replacement value with traditional rental houses, since that risk is now insured. nvidia should sell a licence to use the 3090 professionally and let you rent it out, like you can with the cards formerly known as Quadro, now RTX A*, starting with the RTX A6000 (48GB VRAM at under $3K list is incredible if you can get one; 96GB at 1TB/s+ when linked..), if they really want to take the visual graphics industry for good. (equally, AMD could grab this business forever just as easily with a turn-on-a-penny's worth of engineering)

Architecture and geographical systems are strongly and culturally averse to parallel programming. The complexity involved has only recently been significantly helped by multiple cores - the workstation on my desk in 2005 was a dual dual-core 3.9GHz Xeon, 32GB RAMBUS RAM, early solid-state swap (and four FC drops from a shared SAN array with enough RAM cache to hold geometry models).

and laptops can be made to run single-threaded code very quickly on a single core, satisfying the (self-imposed limits of the) potential of most "graphics heavy" programs. But it isn't at all a good situation.

[ed: removed confusingly placed sentence]


>in filmmaking, personal equipment rental companies like sendgrid

Sendgrid rents camera equipment now? The email company?


I think they meant to say ShareGrid.

https://www.sharegrid.com/


ahm, yep! I can't stop making the same mistake, unfortunately. I've attributed it to the similarity between the worry I feel about my payload when sending six figures of hardware to a stranger and the worry I've felt when encountering clients using the mta service for contractual correspondence ...


Keyshot and Redshift have put PIXAR level rendering within reach of anyone with a modern GPU and can make great use of 64 core AMD now too.


As an ex VFX sysadmin, this is a little underwhelming.

It's trivial to provide a machine on demand; we used to do it all the time. Because the versions of software are known in advance[1], it's super simple to make sure they are there and ready for the user. Our machine images were like 120 gigs, with 3/4 of the software on NFS shares (still much faster than running in docker containers).

What they don't cover is the actual important bits:

o keeping the session colour correct

o making it responsive.

Remote workstations have been around for many years, mainly because the big workstations were too noisy to use with clients, so they used Teradici cards to allow dual 2K screens.

The third thing they don't really cover (though I suspect that's out of scope) is how they sync assets. VFX is still a file-based system, which translates poorly to object storage, and even worse to object storage that doesn't support seek().

[1] vfx pipelines used fixed versions of software, because otherwise everything turns to shit.


I'm very curious what remote display tech they're using, as I've had trouble with latency even over a local network. I've tried VNC and NX, mostly with Linux servers and Mac clients.


As stated in a sister comment, it looks like Netflix is using NICE DCV for their VMs.

For the vast majority of this industry, the two larger players are Teradici with their PCoIP protocol (also licensed to VMware) and HP Remote Graphics Software (now known as ZCentral Remote Boost). Teradici is definitely the most widely used, but I know a couple of shops using RGS. NICE is an interesting project coming up in AWS, particularly since it's free for systems within the AWS cloud, and available outside of the cloud for a price.

I don't have any HP RGS experience myself, but Teradici currently supports Windows and Linux hosts, with software clients for Windows, Linux, and macOS. They also have hardware solutions (zero clients) available so end-user operating systems aren't needed. Those clients are provided by third-party OEMs (e.g. 10Zig, Dell, etc) using the Teradici-supplied chips[0]. macOS host support is coming later this year[1]. The host support is split into two categories: hardware, with the Remote Workstation Cards which have a hardware encoder on them (uses a PCIe slot) that you plug display output into, and their software agents (standard and graphics) that don't require hardware and encode on the CPU and/or GPU depending on the agent you're using.

Teradici has also been fairly clear that software is the future of the product line, so the host cards are not a good buy for long-term support. However, they will be continuing the hardware clients, with a "next-gen" zero client coming out this summer with a boatload of new features over their older clients, though it's not really a zero client in the truest definition of the word.

[0] https://www.teradici.com/resource-center/product-service-fin...

[1] https://connect.teradici.com/blog/macos-release


HP Remote Graphics is very popular and widely deployed in post production and artist graphics suites.

Azure has been taking high end graphics workstation seats in the GFX and compositing and film production industry at a clip for close to a decade now.

the hourly cost of running £30,000 worth of hardware is below £5/hour, which pays for the hardware running 24/7/365 over 3 years, but is clearly a saving when serving employees who are active 8 hours a day, 5 days out of 7. [ed] and that's before factoring in floating-licence economies.


Judging from the GIF, they're using AWS NICE DCV: https://docs.aws.amazon.com/dcv/latest/adminguide/what-is-dc...


At an internship we heavily used Windows RDP over our local network - worked phenomenally well, basically 1:1 when connecting to our "home desk" desktop. I think RDP + very high bandwidth network is the best way to do it still.


I was thinking the same thing as I was reading. Seems like feedback lag while drawing brush strokes would be frustrating for an artist. But then again, we have examples of surprisingly good response time in the game streaming world.


Like anything it's going to heavily depend on latency to the workstation in question. With cloud providers that have reasonable region coverage, it's not really that bad. In our tests, ~40ms latency is the upper bound of where you want to be before artists start really complaining, and Teradici themselves suggest 25ms for optimal results. However, depending on the task said artist is doing, you can venture above that, though for Cintiq/pen displays it becomes pretty painful and a "live with it" type deal.


I don't know anything about NX, but VNC is terribly slow and outdated. Pretty much anything else will be faster.


For people who want to roll their own system, I suggest following https://www.reddit.com/r/cloudygamer/. That's focused on commercial game streaming services, but there are many threads about people who roll their own game streaming system.

The low-latency, high-quality requirements for game streaming are very similar to the requirements outlined by Netflix in this post.


I don't know who's actually using these kinds of game streaming solutions, or what games they're playing with them. The PSN version of the idea is basically unusable for FPS games, and I have a good broadband connection. Even Steam over the LAN introduces a noticeable-enough lag that I just don't bother. But things like Stadia make it seem like enough people are getting mileage out of these solutions to make it profitable to run, so I guess I'm not the target demo.


There are enough people that just aren't that sensitive to latency. For reference, look at any console player that doesn't go out of their way to enable PC/game mode on their TV (HDMI 2.1 with Automatic Low Latency Mode is thankfully changing this): they're suffering upwards of 100ms of input latency[1]!

[1] https://www.rtings.com/tv/tests/inputs/input-lag


I haven't tried the cloudygamer options recently, but I had no problem playing Destiny and The Division 2 multiplayer on Stadia. PSN might just be unoptimized.


We (Paperspace, YCW15) also give you the ability to run a high-performance workstation in the cloud (and soon on any remote machine!). Check it out https://www.paperspace.com

Disclaimer: I am the CEO/co-founder


I've tried to use Paperspace in the past, but the leap in pricing (for Core) from $60/month for "2D applications" to $120/month for "3D applications" is too high. I have a low intensity, always-on (24/365) 3D application that doesn't require a powerful GPU, but nonetheless requires compute that allows for graphical workloads. I would love to use Paperspace for our case, but I currently can't. :/


I tried Paperspace and Blender would just crash the instance when rendering the splash screen demo files.

Would really like a service that works well for casual Blender usage. Vagon didn’t crash, but was too slow, even though the ping was in the low tens of ms.


How's your customer support? Looks like someone is waiting for your team to reply after he/she gave them the requested id info.

https://www.reddit.com/r/cloudygamer/comments/m0kqpa/does_an...


This is really cool! I would love to demo this- I regularly used Maya via Google Remote Desktop at a previous job and it worked a lot better than I expected. We're headed back toward thin clients :D

I've not used Salt very much and went all in on Ansible years ago. I assume they're running it in master-minion mode but the article doesn't say... Maybe not and that's why they need a custom agent as well?


What I don’t understand is how we’re able to livestream out video in 4K but can’t share our desktop at the same speed and resolution.

Am I missing something?


Those 4k livestreams have a latency measured in seconds. We have made massive advances in recent years and if you have good bandwidth you can now have 4k livestreams with maybe 2-4 seconds of latency. But for moving a mouse that's still one to two orders of magnitude too much.

If you allow less latency you can't do as good a job at video encoding, and can't allow for any buffering (client side or encoder side) to smooth out bandwidth fluctuations or allow for more dynamic bitrates. That drastically lowers the achievable quality.


Not that it isn't an underdeveloped area, but the needs of the streaming cases are different. One key difference is if there is no feedback/interaction on the livestream, then there could easily be a relatively long time buffer that would not be noticed. Vs moving a mouse pointer around on a desktop where it would get annoying at a few hundred milliseconds. It also leads to differences in how the encoders and decoders need to be applied.


the video is pre-cached by your ISP


Sounds like a nightmare for the artist compared to the status quo. Specifically, I could imagine that the thought of your workstation being ephemeral does not sit well with somebody who is not used to the sophisticated version control tools available to software engineers.


As mentioned in a few of the comments here, the gold standards for desktop streaming are Teradici, HP RGS (now ZCentral Remote Boost), and NICE DCV (acquired by Amazon). All are limited to Windows or Linux (sorry macOS, you'll have to use something like Jump, although Teradici did just announce they are working on a macOS product for release this year: https://connect.teradici.com/blog/macos-release). Parsec recently came out with a Teams offering that is pretty slick.

There are also some interesting application streaming services out there like fra.me, cameyo, aws appstream, etc..

Fundamentally remote workstations come down to interactivity. If you assume a 60Hz monitor (60 cycles a second), whenever I move my mouse or interact with an application I need all the computation and draw calls to happen on the computer and return/display the modified pixels in 16ms. So that's all sorts of latency but when you are talking about a remote computing environment the largest component will be your network latency. Now humans are incredibly adaptable and so you can go beyond that but the further you get from 16ms the more your brain will have to compensate for the delay. Data can travel around 122 miles per millisecond (2/3rds C). But with all the network switches and things in between it is not a "google maps" type calculation.
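To put rough numbers on that budget (the per-stage costs below are illustrative guesses, not measurements):

```python
# How much of a 60 Hz frame budget is left for the network, and roughly how far
# away the workstation can be at ~2/3 c propagation in fiber.
FRAME_BUDGET_MS = 1000 / 60      # ~16.7 ms
MILES_PER_MS = 122               # one-way propagation at ~2/3 of c

# Illustrative per-frame costs on the host and client:
encode_ms, decode_ms, scanout_ms = 4, 2, 3

network_rtt_ms = FRAME_BUDGET_MS - (encode_ms + decode_ms + scanout_ms)
one_way_miles = (network_rtt_ms / 2) * MILES_PER_MS

print(f"frame budget:               {FRAME_BUDGET_MS:.1f} ms")
print(f"left for the network:       {network_rtt_ms:.1f} ms round trip")
print(f"max straight-line distance: ~{one_way_miles:.0f} miles "
      f"(real routes and switching eat into this)")
```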

Remote workstations have a few advantages and disadvantages.

Advantages include easy provisioning, secure access to data and assets, centralized backups, access to high speed networking for large data sets, easy system config updates (add storage, add gpus, etc..), opex not capex, global systems deployment without going through customs, immutable desktops, etc..

Cons include cost ($1000/month for a Windows workstation; Linux is much cheaper, but Adobe and Avid make Linux hard for some creative workflows), being limited to the hardware provided by the cloud provider (Quadros, not GeForce; slower clock speeds affect single-threaded applications; no ability to fit other add-on cards like BlackMagic / AJA), limitations for working with HDR, high resolution (beyond 4K), multichannel audio, multiple monitors, and VR headsets, the need for a stable and consistent connection, etc.

For certain creative applications the flexibility of remote workstations is really great. For example if you need to access 100TB of raw video footage and 3d assets. For others (for example where you are painting on a canvas, need to hook up a peripheral like a wacom tablet, or need to hook up a VR headset), they can be non optimal depending on the user, video drivers, application, etc..

There is also an emerging hybrid coming out which leverages browser native applications so you get the local interactivity but offload the processing and assets to a backend in the cloud. Things like Storyboarder, Photopea, or the Blackbird Editor.

Teradici got their start over a decade ago, primarily shipping ASICs that would connect to the video out of your GPU via a physical cable, encode, and then stream over IP to a zero client (Wyse, eVGA, 10zig) that had another ASIC to decode. Popular in the military and other areas where you want to provide a terminal. They excelled over other stuff like NX-based tools or RDP for a couple of reasons: 1) Full screen video playback 2) An optimized protocol for crisp text gives it an edge over the others that are x264/x265 streaming. Teradici will actually dynamically modify their stream based on what you are doing. So working in a DCC you get their crisp text goodness, but if you switch to full screen video they will switch over to x264/x265 3) USB bridging for peripheral support 4) Local termination of the mouse cursor. So while draw calls may be delayed (think a trailing line when drawing), the mouse cursor is snappy, which helps with latency adaptation.

The most recent version has gone all-software and is actually what powers the AWS WorkSpaces product. They also now support 4K and 10-bit color with their Ultra product. Audio is still limited to stereo.

NICE DCV was acquired by Amazon a while back, and while it is "free" for EC2 instances and does support 7.1 audio on Windows, there are some things it doesn't do as well as the others.

Parsec was originally focused on the remote gaming space, and I remember reading about it on reddit years ago where people were using it with Paperspace to run games remotely. Their newish Teams product is pretty cool; it moves them from a single-user use case to something definitely worth a look for enterprise deployments.


Great write-up. I was just watching a small presentation for their cloud access tech which you can also use to expose on-premise physical and virtual desktops. Big limitation mentioned was that physical Linux-based desktops are not supported (for both their standard agent and the graphical one).


They should rent this out, I have 3 computers rendering frames right now, and I can't find more graphics cards anywhere.


(I'm not super informed in this space)

AWS WorkSpaces purports to have machines with meaty GPUs; are they not powerful enough?


The problem with AWS Workspaces for us was the price was just too damn high.

Their Graphics Pro instance was okay spec: 16 vCPU, 122 GB memory, 8 GB VRAM. But they charge $1000/month for it.

Our $3000 workstations beat it in every benchmark, sometimes by 3-5x, and have a useful lifetime of at least three years.
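The back-of-the-envelope math over that three-year lifetime:

```python
# Three-year cost of the AWS Graphics Pro bundle vs. a one-time $3000 workstation.
workspace_per_month = 1000
workstation_once = 3000
months = 36

cloud_total = workspace_per_month * months
print(f"cloud:   ${cloud_total:,}")                       # $36,000
print(f"on-prem: ${workstation_once:,}")                  # $3,000
print(f"ratio:   {cloud_total / workstation_once:.0f}x")  # 12x, before power/admin costs
```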

I'm sure for some people, that is a valid trade off, but for us, it was just too much.


Are you rendering 24/7....?


Most cloud GPUs are just thin-sliced datacenter GPUs which only have one redeeming quality: high RAM per PCIe slot.

The $100,000 GPU can be beaten by a $5,000 consumer-space GPU for raw compute, but has basically 3 to 4 times the amount of RAM.


There's a niche in the space for workspaces that support "low intensity 3D applications" with 99.99% uptime. Right now, all the major cloud vendors (AWS/Azure/GCP) are prohibitive with respect to pricing. I can spend $600 to buy a dedicated machine to run my GPU-bound workload 24 hours a day, or spend $600/month to have a dedicated GPU instance. Things like Paperspace don't cater to this particular workload, still. I've honestly thought about creating a new PaaS provider that attacks this niche. Seems doable.


I'm sure they are, I just need to make the leap. :)


Have you looked at RNDR? Sounds like it could be a perfect match for what you are looking for. It's designed for the Octane GPU renderer, but I believe support for more render engines is coming soon.

https://medium.com/render-token


Why not just use Citrix HDX Pro? The tech has been around for ages and works well for accessing a virtual desktop, and for providing a virtual desktop with a big GPU and IO since it is in a central facility. We were doing this 10+ years ago. What am I missing?


I took a Citrix training class on thin clients back in I think 1999. I spent the entire class wondering if I'd be able to use it to play Starcraft. I never got to try it.



