Obviously not op, but these days LLMs can act as fuzzy functions with reliably structured output, and they're multi-modal.
Think about the implications of that. I bet you can come up with some pretty cool use cases that don't involve you talking to something over chat.
One example:
I think we'll be seeing a lot of "general detectors" soon. Without training or predefined categories, get pinged when (whatever you specify) happens. Whether the input is a security camera feed, web search results, event data, etc.
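A minimal sketch of what such a detector loop could look like. Note the LLM call here is a placeholder: `llm_judge` just does a keyword check so the example runs standalone; in practice it would be an API request asking a model to answer in structured JSON (e.g. `{"match": true, "reason": "..."}`) given a natural-language condition and an event description.

```python
# Sketch of a "general detector": describe what you want in plain language,
# feed in events (frames, log lines, search results), get pinged on matches.

def llm_judge(condition: str, event: str) -> dict:
    # Placeholder for a real multi-modal LLM call with structured output.
    # Here: trivial keyword overlap on words longer than 3 characters,
    # just so the sketch is runnable without a model.
    keywords = [w for w in condition.lower().split() if len(w) > 3]
    matched = any(w in event.lower() for w in keywords)
    return {"match": matched, "reason": "keyword overlap" if matched else "no overlap"}

def detector(condition: str, events: list[str]) -> list[str]:
    """Return every event the judge says satisfies the condition."""
    return [e for e in events if llm_judge(condition, e)["match"]]

events = [
    "cat walked across the driveway",
    "delivery truck stopped at the gate",
    "nothing visible in frame",
]
print(detector("truck near the gate", events))
```

The point is that the "category" is just a sentence the user writes; swapping the stub for a real model call gives detection over inputs nobody pre-labeled.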