"... A comprehensive evaluation was performed on a suite of models, including Qwen3-Omni-30B-A3B-Instruct, Qwen3-Omni-30B-A3B-Thinking, and two in-house variants, designated Qwen3-Omni-Flash-Instruct and Qwen3-Omni-Flash-Thinking. The “Flash” models were designed to improve both computational efficiency and task performance, and they add new functionality, notably support for multiple dialects. ..."