Instead of having the party SSH into a VM installed on the user's machine, potentially exposing most of the user's codebase, have you considered spinning up temporary containers on your back-end and having contributors install something like remote CUDA or remote OpenCL? That way only the GPU kernels are transferred to the contributor, whose client software polls a network queue to see which kernel should be run and where the results should be sent.
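Roughly, the contributor-side loop could look something like the sketch below. This is just to make the idea concrete; the queue URL, job schema, and `run_kernel_locally` helper are hypothetical placeholders, not part of any existing remote-CUDA/OpenCL client.

```python
import time
import requests  # assumed HTTP transport; a real system might use any queue protocol

QUEUE_URL = "https://example.org/kernel-queue"  # hypothetical endpoint

def run_kernel_locally(kernel, args):
    # Placeholder for the actual GPU dispatch (e.g. via PyCUDA / PyOpenCL).
    raise NotImplementedError

def poll_and_run():
    """Toy loop: fetch a pending kernel job, run it on the local GPU,
    and post the results back to wherever the job says they should go.
    A real remote-CUDA/OpenCL client would ship compiled kernels and
    device buffers, not JSON."""
    while True:
        resp = requests.get(f"{QUEUE_URL}/next", timeout=30)
        if resp.status_code == 204:  # nothing queued right now
            time.sleep(1)
            continue
        job = resp.json()  # e.g. {"id": ..., "kernel": ..., "args": ..., "result_url": ...}
        result = run_kernel_locally(job["kernel"], job["args"])
        requests.post(job["result_url"], json={"job_id": job["id"], "result": result})
```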
Good idea from the perspective of not exposing the code base. However, technologies such as remote CUDA/OpenCL that rely on remote execution of compute kernels generally require high-bandwidth, low-latency connectivity. This is especially true for deep learning / AI workloads, though less so for applications with a higher compute-to-data-transfer/synchronization ratio. The latency of a typical internet connection will likely stall the GPUs on the remote system, yielding little compute benefit.
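As a rough back-of-the-envelope illustration (all numbers here are assumptions for the sake of argument, not measurements): if each kernel launch is gated on a synchronous round trip, GPU utilization over a WAN collapses compared to a local PCIe link.

```python
# Illustrative figures only -- actual latencies and kernel times vary widely.
internet_rtt_s = 0.050     # ~50 ms typical WAN round trip (assumption)
pcie_rtt_s     = 0.000010  # ~10 us host<->device latency over local PCIe (assumption)
kernel_time_s  = 0.002     # ~2 ms per kernel, in the ballpark of a DL layer (assumption)

def gpu_utilization(rtt_s, kernel_s):
    """Fraction of wall-clock time the GPU spends computing when every
    kernel launch waits on a synchronous round trip for inputs/outputs."""
    return kernel_s / (kernel_s + rtt_s)

print(f"local PCIe: {gpu_utilization(pcie_rtt_s, kernel_time_s):.1%}")     # ~99.5%
print(f"over WAN  : {gpu_utilization(internet_rtt_s, kernel_time_s):.1%}")  # ~3.8%
```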