Instead of having the party ssh into a VM installed on the user's machine, potentially exposing a large portion of the user's codebase, have you considered spinning up temporary containers on your back-end and having contributors install something like remote CUDA or remote OpenCL? That way only the GPU kernels are transferred to the contributor, whose client software polls a network queue to check which kernel should be run and where the results should be sent.
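For concreteness, a rough sketch of what the contributor-side loop might look like (Python; the queue endpoints, payload fields, and kernel-launch step are all made up for illustration - a real rCUDA/remote-OpenCL setup handles kernel transfer and launch itself):

    # Hypothetical contributor-side client: poll a queue, run the
    # kernel locally, post the results back. Endpoints and payload
    # fields are illustrative assumptions, not a real API.
    import time
    import requests

    QUEUE_URL = "https://example.com/api/kernel-queue"  # hypothetical

    def run_kernel(kernel_src, args):
        # Placeholder: compile and launch the kernel on the local GPU
        # (e.g. via PyCUDA/PyOpenCL) and return the output buffers.
        raise NotImplementedError

    def poll_forever():
        while True:
            resp = requests.get(f"{QUEUE_URL}/next", timeout=30)
            if resp.status_code == 204:      # nothing queued yet
                time.sleep(5)
                continue
            job = resp.json()                # {"id", "kernel", "args", "result_url"}
            result = run_kernel(job["kernel"], job["args"])
            requests.post(job["result_url"],
                          json={"id": job["id"], "result": result})

    if __name__ == "__main__":
        poll_forever()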

Remote CUDA seems incredibly useful - this is an excellent idea - I'll look into it more tonight.

Good idea from the perspective of not exposing the code base. However, technologies such as remote CUDA/OpenCL, which rely on remote execution of compute kernels, generally require high-bandwidth, low-latency connectivity. This is especially true for deep learning / AI workloads, though less so for applications with a higher compute-to-data-transfer/synchronization ratio. The latency of a typical internet connection will likely stall the GPUs on the remote system, yielding little compute benefit.
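To put rough numbers on that: a deep-learning kernel often finishes in about a millisecond, while a WAN round trip is tens of milliseconds, so a naive launch-and-wait loop leaves the GPU idle almost all the time. A back-of-envelope sketch (all figures are illustrative assumptions, not measurements):

    # Fraction of wall-clock time the GPU spends computing if every
    # kernel launch waits one round trip (no pipelining or batching).
    kernel_ms = 1.0       # assumed time for one kernel on the GPU
    pcie_rtt_ms = 0.01    # assumed local PCIe round trip
    wan_rtt_ms = 50.0     # assumed internet round trip

    def utilization(rtt_ms):
        return kernel_ms / (kernel_ms + rtt_ms)

    print(f"local : {utilization(pcie_rtt_ms):.1%}")  # ~99.0%
    print(f"remote: {utilization(wan_rtt_ms):.1%}")   # ~2.0%

Batching many kernels per round trip would help, but only for workloads without tight inter-kernel data dependencies.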