At trivago, we rely on the ELK stack to support our logging pipeline. We use a mix of filebeat and Gollum (github.com/trivago/gollum) to write logs into Kafka, usually encoding them with Protocol Buffers. These messages are later read by Logstash and written into our ELK cluster.
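The Kafka-to-Logstash leg of such a pipeline could look roughly like the sketch below. This is an illustrative Logstash configuration, not trivago's actual one: the broker addresses, topic name, and protobuf class are hypothetical placeholders, and it assumes the community logstash-codec-protobuf plugin is installed for decoding.

```
# Hypothetical Logstash pipeline: consume protobuf-encoded logs
# from Kafka and index them into Elasticsearch.
input {
  kafka {
    bootstrap_servers => "kafka-1:9092,kafka-2:9092"   # placeholder brokers
    topics            => ["app-logs"]                  # placeholder topic
    codec => protobuf {
      # Compiled protobuf definition; class name is an assumption.
      class_name   => "AccessLog"
      include_path => ["/etc/logstash/protobuf/access_log.pb.rb"]
    }
  }
}

output {
  elasticsearch {
    hosts => ["https://es-1:9200"]                     # placeholder cluster
    index => "logs-%{+YYYY.MM.dd}"                     # daily indices
  }
}
```

Decoding protobuf at the Logstash stage keeps the Kafka payloads compact, at the cost of needing the compiled schema available on every Logstash node.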
For us, ELK has scaled quite well, both in terms of queries/s and writes/s, and we ingest ~1.5TB of logs daily in our on-premise infrastructure alone. The performance/monitoring team (where I work) consists of only four people, and although we take care of our ELK cluster, it is not our only job (nor a component that requires constant attention).