Hacker News
malnourish | 3 months ago | on: Qwen3-Omni: Native Omni AI model for text, image a...
The sales case for LLMs at the edge is running inference everywhere, on everything. Video games won't go to the cloud for every AI call, but they will use on-device models running on the next iteration of hardware.