Hacker News
Explain Yourself! Leveraging Language Models for Commonsense Reasoning (arxiv.org)
51 points by sel1 on June 9, 2019 | hide | past | favorite | 2 comments


The model is trained end-to-end to answer common-sense-reasoning questions after generating explanations for its answers, using sample human explanations as part of the training data.

This results in improved performance on the question-answering task.
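The training setup described above can be sketched as a combined objective: a language-modeling loss on the human-written explanations plus a classification loss on the answer, conditioned on the generated explanation. The helper names, toy probabilities, and the `alpha` weighting below are illustrative assumptions for this sketch, not the paper's actual implementation:

```python
# Conceptual sketch of "explain-then-predict" training (assumed structure,
# not the paper's code): the model is penalized both for failing to
# reproduce the human explanation and for answering incorrectly.
import math

def explanation_loss(p_explanation_tokens):
    # Language-modeling loss over the human explanation tokens:
    # negative mean log-likelihood of each token the model assigned.
    return -sum(math.log(p) for p in p_explanation_tokens) / len(p_explanation_tokens)

def answer_loss(p_correct_answer):
    # Classification loss over the answer choices, conditioned on the
    # question *and* the generated explanation.
    return -math.log(p_correct_answer)

def total_loss(p_explanation_tokens, p_correct_answer, alpha=0.5):
    # End-to-end objective: a weighted sum of the two losses.
    # The weight alpha is a hypothetical hyperparameter for this sketch.
    return (alpha * explanation_loss(p_explanation_tokens)
            + (1 - alpha) * answer_loss(p_correct_answer))

# Example: model assigns these probabilities to the explanation tokens
# and to the correct answer for one training item.
loss = total_loss([0.9, 0.8, 0.95], 0.7)
```

The intuition is that the explanation term forces the model's internal representations to encode the commonsense facts humans cite, which then makes the answer term easier to minimize.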

This is fascinating, although in hindsight, not entirely surprising: inducing a machine to learn to model human explanations helps the machine perform better in testing.

A natural question follows:

Can we find ways to induce much larger models to learn to generate human explanations about a growing number of subjects of increasing complexity?


Reminds me of the concept of "Social Stories": https://en.wikipedia.org/wiki/Social_Stories

> Social Stories are a concept devised by Carol Gray in 1991 to improve the social skills of people with autism spectrum disorders (ASD). The objective is to share information, which is often through a description of the events occurring around the subject and also why.

I'm wondering whether this kind of "common sense" interaction could be leveraged to train models?

Here's a concrete example, "Being Angry and Safe": https://youtu.be/R8c_Br8I_Tc?t=28



