
Adding backends for TensorRT, ONNX, JAX, etc. is on our TODO list (and we'd love to see PRs adding support for these and others)!

We actually do use TensorRT with several of our models, but our approach is generally to do all TRT-related processing before the Neuropod export step. For example, we might do something like

    TF model -> TF-TRT optimization -> Neuropod export
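As a sketch of that TF path, the TF-TRT conversion step uses TensorFlow's published `TrtGraphConverterV2` API, and the export step uses Neuropod's TensorFlow packager. The helper name, paths, model name, and the input/output specs below are illustrative assumptions, and the exact packager keyword arguments should be checked against the Neuropod version in use:

```python
def export_tf_trt_neuropod(saved_model_dir, trt_model_dir, neuropod_path):
    """Sketch of: TF model -> TF-TRT optimization -> Neuropod export.

    Imports are deferred into the function because TensorFlow (with
    TensorRT support) and neuropod are heavy, optional dependencies.
    """
    # Step 1: TF-TRT optimization of a SavedModel (TF 2.x API).
    from tensorflow.python.compiler.tensorrt import trt_convert as trt

    converter = trt.TrtGraphConverterV2(input_saved_model_dir=saved_model_dir)
    converter.convert()
    converter.save(trt_model_dir)

    # Step 2: package the TRT-optimized model as a Neuropod.
    # The spec dicts describe the model's tensor interface; names,
    # dtypes, and shapes here are placeholders.
    from neuropod.packagers import create_tensorflow_neuropod

    create_tensorflow_neuropod(
        neuropod_path=neuropod_path,
        model_name="trt_optimized_model",
        node_name_mapping={
            "x": "input_node:0",    # placeholder graph node names
            "out": "output_node:0",
        },
        input_spec=[{"name": "x", "dtype": "float32", "shape": ("batch", 3)}],
        output_spec=[{"name": "out", "dtype": "float32", "shape": ("batch", 1)}],
    )
```

Because all TRT work happens before packaging, the resulting neuropod is loaded and run like any other TensorFlow neuropod.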
or

    PyTorch model
    -> (convert subset of model to a TRT engine)
    -> PyTorch model + custom op to run TRT engine
    -> TorchScript model + custom op to run TRT engine
    -> Neuropod export
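The last two steps of the PyTorch path might look like the sketch below: compile the TRT-wrapping model to TorchScript, then package it with Neuropod's TorchScript packager. The custom op (`torch.ops.trt.run_engine`) referenced in the comments is hypothetical; in practice it would be registered from a C++ extension that owns the TensorRT engine. Helper and model names and the specs are illustrative:

```python
def export_torchscript_trt_neuropod(model, neuropod_path):
    """Sketch of: PyTorch model + custom TRT op -> TorchScript -> Neuropod.

    ``model`` is assumed to be an nn.Module whose forward() invokes a
    custom op (e.g. a hypothetical torch.ops.trt.run_engine) for the
    TRT-converted subgraph. Imports are deferred because torch and
    neuropod are heavy, optional dependencies.
    """
    import torch
    from neuropod.packagers import create_torchscript_neuropod

    # PyTorch model + custom op -> TorchScript model + custom op.
    # The custom op library must already be loaded so scripting can
    # resolve the op schema.
    scripted = torch.jit.script(model)

    # TorchScript model -> Neuropod export. Since Neuropod wraps the
    # model including its custom ops, the TRT engine travels with it.
    create_torchscript_neuropod(
        neuropod_path=neuropod_path,
        model_name="trt_wrapped_model",  # placeholder name
        module=scripted,
        input_spec=[{"name": "x", "dtype": "float32", "shape": ("batch", 3)}],
        output_spec=[{"name": "out", "dtype": "float32", "shape": ("batch", 1)}],
    )
```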

Since Neuropod wraps the underlying model (including custom ops), this approach works well for us.


