Recently, many neural network models, especially in NLP (like GPT-3, BERT, etc.), use "attention", which is basically a mechanism that lets a neural network focus on a particular subset of its input (the network can direct its "attention" to the parts of the input that matter most for the current prediction). "Explanations" just refers to methods for explaining the predictions of neural networks.
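To make "focusing on a subset of the input" concrete, here's a minimal NumPy sketch of scaled dot-product attention, the variant used inside Transformer models like GPT-3 and BERT (the function name and the toy inputs are illustrative, not from any particular library):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Sketch of scaled dot-product attention.

    Q, K, V: arrays of shape (seq_len, d).
    Returns the attended values and the attention weights.
    """
    d = Q.shape[-1]
    # How similar is each query to each key?
    scores = Q @ K.T / np.sqrt(d)
    # Softmax turns scores into weights: each row sums to 1,
    # so each position "spends" its attention over the input.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Output is a weighted average of the values.
    return weights @ V, weights

# Toy self-attention over 3 tokens with 4-dim embeddings
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out, attn = scaled_dot_product_attention(x, x, x)
print(attn)  # rows sum to 1; large entries = where each token "looks"
```

The attention weights (`attn`) are exactly what many explanation methods visualize: a large weight at position j means the model was "paying attention" to token j when producing the output for that position.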