Your question turns into "have you worked with Avro, S3, Apache Druid...", which will filter out competent people who don't have experience with that specific list of tools.
The keywords there were "streaming" and "also ad-hoc analysis".
Answering those questions requires no knowledge of Avro or Druid; it requires understanding how to build streaming pipelines and make the data available to a query engine.
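To make that concrete, here is a deliberately minimal sketch of the pattern, none of it vendor-specific: consume a stream, batch records, and land them in object storage where a query engine can scan them. The topic name, bucket name, and the choice of kafka-python and boto3 are all my own assumptions for illustration.

```python
import json
import time

import boto3
from kafka import KafkaConsumer  # assumes kafka-python is installed

# Hypothetical topic and bucket names, purely for illustration.
consumer = KafkaConsumer(
    "events",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw),
)
s3 = boto3.client("s3")

batch = []
for message in consumer:
    batch.append(message.value)
    # Periodically flush a batch to object storage; a query engine
    # (Athena, Presto, Druid, ...) can then scan these files for
    # ad-hoc analysis.
    if len(batch) >= 1000:
        key = f"events/dt={time.strftime('%Y-%m-%d')}/{int(time.time())}.json"
        body = "\n".join(json.dumps(r) for r in batch).encode()
        s3.put_object(Bucket="my-data-lake", Key=key, Body=body)
        batch = []
```

A real pipeline would use a columnar or schema-based format, proper checkpointing, and likely a framework like Spark or Flink, but the shape of the problem is the same, and that shape is what the question is testing.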
But that said, if you're in data engineering and are unaware of what Avro is, you're probably not at the level of knowledge I'd expect, and I'm happy to filter you out.
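For anyone at that stage: Avro is a schema-based binary serialization format that shows up constantly in streaming pipelines. A rough sketch using the fastavro library (the schema and field names here are made up for illustration):

```python
from fastavro import parse_schema, reader, writer

# A minimal Avro schema: a named record with typed fields.
schema = parse_schema({
    "name": "Event",
    "type": "record",
    "fields": [
        {"name": "user_id", "type": "string"},
        {"name": "ts", "type": "long"},
    ],
})

records = [{"user_id": "u1", "ts": 1700000000}]

# Avro files embed the schema, so any reader can decode them
# without out-of-band coordination.
with open("events.avro", "wb") as f:
    writer(f, schema, records)

with open("events.avro", "rb") as f:
    for record in reader(f):
        print(record)  # {'user_id': 'u1', 'ts': 1700000000}
```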
Likewise, if you're unfamiliar with S3, why are you even applying for a data engineering role?
But how can I be familiar with S3 without any data engineering work? Or maybe I should put it like this: what exactly does "familiar with S3" mean? Does it mean that I've used it to store something (even a non-programmer can do that), that I can read and write objects using Python with Boto (again, that can be done from a tutorial; see the sketch below), or something else?
I believe that "something else" usually implies some previous data engineering work, though, so it's a chicken-and-egg problem.
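For reference, the "tutorial-level" Boto usage mentioned above is roughly this (boto3 is the current Boto; the bucket and key names are invented):

```python
import boto3

s3 = boto3.client("s3")

# "Tutorial-level" S3: write an object...
s3.put_object(
    Bucket="my-example-bucket",
    Key="raw/events.json",
    Body=b'{"user_id": "u1"}',
)

# ...and read it back.
obj = s3.get_object(Bucket="my-example-bucket", Key="raw/events.json")
print(obj["Body"].read())
```

The gap between this and real familiarity is everything around it: bucket layout and partitioning, IAM permissions, multipart uploads, lifecycle policies, and making the data cheap for a query engine to scan.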