Security Mindset and Ordinary Paranoia (intelligence.org)
4 points by rsaarelm on Nov 26, 2017 | past

MIRI got awarded $3.75M from the Open Philanthropy Project (intelligence.org)
1 point by lyavin on Nov 9, 2017 | past

New Paper: “Functional Decision Theory” (intelligence.org)
4 points by JoshTriplett on Oct 22, 2017 | past

New MIRI Paper: “Functional Decision Theory” (intelligence.org)
5 points by ASipos on Oct 22, 2017 | past

AlphaGo Zero and the Foom Debate (intelligence.org)
1 point by JoshTriplett on Oct 21, 2017 | past

There's No Fire Alarm for Artificial General Intelligence (intelligence.org)
219 points by MBlume on Oct 14, 2017 | past | 206 comments

Why so many very smart people are worried about AI [pdf] (intelligence.org)
3 points by chakalakasp on July 16, 2017 | past

A.I. Alignment: Why It's Hard, and Where to Start (intelligence.org)
2 points by davedx on May 10, 2017 | past

Vingean Reflection: Reliable Reasoning for Self-Improving Agents [pdf] (intelligence.org)
1 point by cl42 on May 8, 2017 | past

Coalescing Minds [pdf] (intelligence.org)
1 point by tvural on April 18, 2017 | past

Ensuring smarter-than-human intelligence has a positive outcome (intelligence.org)
2 points by rbanffy on April 13, 2017 | past

Ensuring smarter-than-human intelligence has a positive outcome (intelligence.org)
2 points by apsec112 on April 12, 2017 | past

Cheating Death in Damascus – The escaping death dilemma (intelligence.org)
3 points by titusblair on April 6, 2017 | past

2016 in Review – Machine Intelligence Research Institute (intelligence.org)
4 points by JoshTriplett on March 29, 2017 | past

Cheating Death in Damascus [pdf] (intelligence.org)
2 points by apsec112 on March 19, 2017 | past

Response to Ceglowski on superintelligence (intelligence.org)
11 points by apsec112 on Jan 15, 2017 | past

AI Alignment: Why It’s Hard, and Where to Start (intelligence.org)
5 points by apsec112 on Dec 28, 2016 | past

Reducing Long-Term Catastrophic Risks from Artificial Intelligence (intelligence.org)
1 point by davedx on Dec 23, 2016 | past

Logical Induction (intelligence.org)
157 points by apsec112 on Sept 13, 2016 | past | 65 comments

Safely Interruptible Agents [pdf] (intelligence.org)
28 points by wallflower on July 10, 2016 | past

Algorithmic Progress in Six Domains [pdf] (intelligence.org)
1 point by apsec112 on July 6, 2016 | past

A formal solution to the grain of truth problem (intelligence.org)
93 points by ikeboy on July 1, 2016 | past | 15 comments

New paper: “A formal solution to the grain of truth problem” (intelligence.org)
2 points by JoshTriplett on July 1, 2016 | past

Safely Interruptible Agents – Accessible Paper on Google's AI 'Kill Switch' [pdf] (intelligence.org)
3 points by hunglee2 on June 12, 2016 | past

DeepMind's “Safely Interruptible Agents” [pdf] (intelligence.org)
4 points by vonnik on June 9, 2016 | past

Interruptibility, AI and the big red button [pdf] (intelligence.org)
1 point by pilooch on June 8, 2016 | past

A 'Big Red Button' for AI to interrupt its harmful sequence of action [pdf] (intelligence.org)
3 points by auza on June 6, 2016 | past | 3 comments

Safely Interruptible Agents [pdf] (intelligence.org)
3 points by neverminder on June 4, 2016 | past

A new MIRI research program with a machine learning focus (intelligence.org)
1 point by convexfunction on May 7, 2016 | past

How We’re Predicting AI – Or Failing to [pdf] (intelligence.org)
2 points by ernesto95 on April 25, 2016 | past