This post covers how to wire up the components to run a real-time voice agent that you can actually talk to via an ESP32 client, while keeping the heavy lifting on your local machine.
This post summarizes WasmEdge's recent talks at OSSummit Korea and KubeCon NA 2025, focusing on its new role as a premier AI runtime. Key technical highlights include expanded multi-GPU support (NVIDIA, AMD, Apple), multi-backend inference (TensorRT, OpenVINO), and the use of Embedded Rust/Wasm for building real-time voice AI agents (EchoKit project).
The talk explores the idea that while human-centric languages like Python are popular for their ease of use, they are suboptimal for AI-driven development. Rust, with its strict compiler and strong type system, provides an excellent reward function and a tight feedback loop for LLMs, forcing them to generate correct, efficient code. This could make Rust the de facto language in a future where most code is written by AI. The video demonstrates this with a voice AI agent built with EchoKit and Rust. The original talk is here: https://www.youtube.com/watch?v=bbq0b_FpYEY
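The "reward function" claim can be illustrated with a minimal sketch (all names here are hypothetical, not from EchoKit): a Rust function that returns a `Result` forces every caller, human or LLM, to handle the failure case explicitly, so incorrect code is rejected at compile time rather than failing silently at runtime.

```rust
// Illustrative sketch: Rust's type system as a feedback loop for
// generated code. The `transcribe` name and error type are invented
// for this example, not part of any real API.

#[derive(Debug, PartialEq)]
enum TranscribeError {
    EmptyAudio,
}

// A toy stand-in for a voice-agent transcription step. Returning a
// Result makes the failure path part of the function's signature.
fn transcribe(samples: &[i16]) -> Result<String, TranscribeError> {
    if samples.is_empty() {
        return Err(TranscribeError::EmptyAudio);
    }
    Ok(format!("{} samples transcribed", samples.len()))
}

fn main() {
    // The compiler forces an explicit decision at every call site:
    // match, `?`, or unwrap -- silently ignoring the error does not compile
    // without a warning, and pattern matches must cover every variant.
    match transcribe(&[0, 1, 2]) {
        Ok(text) => println!("{text}"),
        Err(e) => eprintln!("transcription failed: {e:?}"),
    }
}
```

A Python equivalent would happily let a caller ignore the error path; here, the compiler itself supplies the tight, deterministic pass/fail signal the talk argues LLMs benefit from.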