> Looking forward to your feedback as you try it out.

Thanks Rajat. We use typical Cortex-A9/A7 SoCs running plain Linux rather than Android. We would use it for inference.

1. Platform choice

Why make TFL Android/iOS only? TF already runs on plain Linux, and since TFL is built with the NDK, it would appear the inference part could work on plain Linux too.

2. Performance

I could not find any information on TensorFlow Lite's performance; I am mainly interested in inference. The tagline "low-latency inference" catches my eye: just how low is low latency here? Milliseconds?


1. The code is standard C/C++ with minimal dependencies, so it should be buildable even on non-standard platforms. Linux is easy.

2. The interpreter is optimized for low overhead, and the kernels are better optimized, especially for ARM CPUs currently. Performance varies by model, but we have seen significant improvements on most models going from TensorFlow to TensorFlow Lite. We'll share benchmarks soon.
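
To make the "Linux is easy" and latency points above concrete, here is a minimal sketch (not official sample code) of loading a converted model, running one inference, and timing it with the C++ interpreter on plain Linux. The header paths assume the tensorflow/lite/ source layout (at launch this code sat under tensorflow/contrib/lite/), and "model.tflite" is a placeholder for your own converted model:

    // Minimal sketch: load a .tflite model, run one inference, time Invoke().
    // "model.tflite" is a placeholder; header paths may differ by TF version.
    #include <chrono>
    #include <cstdio>
    #include <memory>

    #include "tensorflow/lite/interpreter.h"
    #include "tensorflow/lite/kernels/register.h"
    #include "tensorflow/lite/model.h"

    int main() {
      // Memory-map the FlatBuffer model from disk.
      auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
      if (!model) { std::fprintf(stderr, "failed to load model\n"); return 1; }

      // Wire the builtin kernels to the graph and build the interpreter.
      tflite::ops::builtin::BuiltinOpResolver resolver;
      std::unique_ptr<tflite::Interpreter> interpreter;
      tflite::InterpreterBuilder(*model, resolver)(&interpreter);
      if (!interpreter || interpreter->AllocateTensors() != kTfLiteOk) return 1;

      // Input tensor 0, assumed float32 here.
      float* input = interpreter->typed_input_tensor<float>(0);
      (void)input;  // ... copy your preprocessed data into `input` ...

      // Time a single Invoke().
      auto t0 = std::chrono::steady_clock::now();
      if (interpreter->Invoke() != kTfLiteOk) return 1;
      auto t1 = std::chrono::steady_clock::now();
      long long us = std::chrono::duration_cast<std::chrono::microseconds>(
                         t1 - t0).count();
      std::printf("Invoke() took %lld us\n", us);
      return 0;
    }

Timing around Invoke() like this is also the quickest way to answer the "how low is low latency" question for a specific model on a specific SoC; a real benchmark should discard the first run and average warm runs.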


> The code is standard C/C++ with minimal dependencies, so it should be buildable even on non-standard platforms. Linux is easy.

Glad to hear that, Rajat. Since it is easy, as you say, I look forward to an upcoming release with Linux supported as standard. :-)


Also interested in answers to these two questions, as well as OpenCL performance on vanilla Linux (i.MX6 and above).