
Recently, a lot of neural network models, especially in NLP (GPT-3, BERT, etc.), use "attention", which is basically a way for a neural network to focus on a certain subset of its input (the network can direct its "attention" to a particular part of the input). "Explanations" just refers to methods for explaining the predictions of neural networks.
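For intuition, here's a minimal sketch (in NumPy) of scaled dot-product attention, the variant used in Transformers like GPT-3 and BERT; the function name and toy shapes are just for illustration:

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        # Q, K: (seq_len, d_k); V: (seq_len, d_v)
        d_k = Q.shape[-1]
        # Similarity of each query with every key, scaled to keep the softmax stable.
        scores = Q @ K.T / np.sqrt(d_k)
        # Softmax over keys: each row is a distribution saying how much
        # "attention" a position pays to each part of the input.
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights = weights / weights.sum(axis=-1, keepdims=True)
        # Output is a weighted average of the values.
        return weights @ V, weights

    # Toy example: 3 tokens with 4-dim embeddings attending to themselves.
    x = np.random.randn(3, 4)
    out, w = scaled_dot_product_attention(x, x, x)  # self-attention

The attention weights `w` are what explanation methods often inspect: they show, for each position, which parts of the input the model focused on.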


