Hacker News

Thank you for your hard work in this area! I'm just starting to learn about relational programming, but I'm interested in it from the perspective of "computer aided thinking", i.e. a deduction assistant.

When I'm learning a new system, especially in an unfamiliar domain, it sometimes takes a while for that "click" to happen in my brain. Before the click, I'm just building mental projections of the system internals, adding primitives and new relations on the fly based on whatever reading material I have at my disposal. After the click, I have a solid understanding of the core of the system and all other knowledge fits neatly on top of it.

This is especially important for me when designing new systems from my mental projections, as opposed to deducing systems that I know already exist. Sometimes there are inconsistencies hidden behind several layers of indirection. They're hard to spot when you're not an expert in the domain, because the domain keeps changing as you're brainstorming.

I've been experimenting with JetBrains MPS for this exact purpose. It's good, but it doesn't "guide you" if that makes sense. Providing autocompletion for custom concepts is as far as it goes AFAIK. Barliman looks very useful for this kind of guided approach.

I'd love to hear any thoughts you may have on the topic! Do you think about things in a similar manner? Is there anything that I should check out?




Thanks for the question!

I've heard of MPS but I don't know much about it. I need to watch the demos/screencasts!

I'm very much interested in "tools for thought," which could include deduction assistants, but also other tools to augment human intellect. Barliman is the first program I've worked on with this tools-for-thought mindset, which I've found very stimulating. I've been influenced by Engelbart, Vannevar Bush, Licklider, Alan Kay, Alan Borning, Bret Victor, Alex Warth, Michael Nielsen, among others. I can point you to lots of resources on this topic, if you are interested.

Michael Ballantyne and I started a mailing list/set of Google Hangouts called "As We May Thunk" to explore some of these ideas. You might find some of the hangouts interesting:

http://webyrd.net/thunk.html

https://groups.google.com/forum/#!forum/as-we-may-thunk

I'm not sure I fully understand your question though. It sounds like you may have a specific tool or feature in mind. Can you elaborate a little more?

I'm always up for email or a hangout if this isn't the best venue.


Maybe I just need to learn Prolog/Aleph


Inductive Logic Programming is really interesting. I'd like to explore the similarities between ILP and the synthesis used in Barliman; I suspect Barliman's example-based synthesis could get a real boost from ILP.
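(Barliman itself synthesizes Scheme code with miniKanren's relational interpreter, which works very differently from what follows. Purely as an illustration of the "example-based" part, here is a toy enumerative synthesizer in Python: enumerate a tiny expression grammar and keep the first candidate consistent with every input/output example. All names and the grammar are made up for this sketch.)

```python
# Toy example-based synthesis: search a tiny expression grammar for a
# program consistent with all given input/output examples.
# Illustrative sketch only -- not how Barliman works internally.

def candidates(max_const=3):
    """Enumerate candidate expressions over one variable x as (name, fn) pairs."""
    for c in range(max_const + 1):
        yield (f"x + {c}", lambda x, c=c: x + c)
        yield (f"x * {c}", lambda x, c=c: x * c)
        yield (f"{c} - x", lambda x, c=c: c - x)

def synthesize(examples):
    """Return the name of the first candidate matching every (input, output) pair."""
    for name, fn in candidates():
        if all(fn(inp) == out for inp, out in examples):
            return name
    return None

# Two examples are enough to pin down "x * 2" within this grammar.
print(synthesize([(1, 2), (3, 6)]))  # -> x * 2
```

An ILP system would search a space of logic programs (clauses) rather than arithmetic expressions, and would use background knowledge to prune the search, but the shape of the problem -- find a program consistent with positive examples -- is the same.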

One book I highly recommend, and which I've found very accessible, is Ivan Bratko's 'Prolog Programming for Artificial Intelligence'. The 4th edition has a section on ILP.

If you are interested in ILP and also in Barliman, maybe this is a topic we could explore together.


A good resource for relational learning in general, including ILP, is this:

http://www.springer.com/gp/book/9783540200406

And this is a good introduction to the statistical side of things, a.k.a. statistical relational learning:

http://www.cs.umd.edu/srl-book/

Also, if you're considering adding stochastic search, this might be a good pointer:

https://dtai.cs.kuleuven.be/problog/

ProbLog is a probabilistic Prolog. The link above is the Python implementation, but there are a few Prolog versions floating around; unfortunately, the ones I tried did not seem to work out of the box.
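(For a feel of what "probabilistic Prolog" means: each probabilistic fact is independently true with its stated probability, and a query's probability is the total weight of the possible worlds in which it holds. The brute-force Python sketch below illustrates that semantics on a two-coin example; ProbLog itself uses knowledge compilation, not world enumeration, and these names are invented for the sketch.)

```python
from itertools import product

# Toy ProbLog-style inference by exhaustive world enumeration.
# Probabilistic facts, e.g.  0.6::heads1.  0.5::heads2.
facts = {"heads1": 0.6, "heads2": 0.5}

def query(holds):
    """P(holds) summed over all truth assignments to the probabilistic facts."""
    names = list(facts)
    total = 0.0
    for world in product([True, False], repeat=len(names)):
        w = dict(zip(names, world))
        weight = 1.0
        for n, truth in w.items():
            weight *= facts[n] if truth else 1 - facts[n]
        if holds(w):
            total += weight
    return total

# someHeads :- heads1.  someHeads :- heads2.
p = query(lambda w: w["heads1"] or w["heads2"])
print(round(p, 3))  # 1 - 0.4 * 0.5 = 0.8
```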


Thanks for the great links!

I really enjoyed the Statistical Relational Learning book.

Rob Zinkov and I have worked on two prototypes of probKanren (https://github.com/webyrd/probKanren), a probabilistic version of miniKanren inspired partly by the Hakaru language (http://indiana.edu/~ppaml/HakaruTutorial.html, https://github.com/hakaru-dev/hakaru). We learned a lot from our two prototypes of probKanren, but neither version is ready for real use (and neither version is documented!).

Rob and I have taken a step back, and are now working with Evan Donahue on just adding stochastic search to miniKanren. If you are interested in joining us, please let me know! :)
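(To make "stochastic search" concrete: miniKanren normally interleaves branches of a disjunction in a fixed, fair order, and one simple stochastic variant is to randomize the order in which disjuncts are explored. The Python sketch below shows that idea on a plain nested-list search tree; it is my own illustration, not the design Will's group is pursuing.)

```python
import random

# Toy illustration of stochastic search: visit the disjuncts of a
# search tree in randomized order rather than strictly left-to-right.
# Completeness is preserved here because every branch is still visited.

def solutions(tree, rng):
    """Yield the leaf answers of a nested-list search tree in random order."""
    if not isinstance(tree, list):   # leaf: a candidate answer
        yield tree
        return
    branches = tree[:]
    rng.shuffle(branches)            # randomize which disjunct to try first
    for branch in branches:
        yield from solutions(branch, rng)

tree = [["a", "b"], ["c", ["d", "e"]]]
found = list(solutions(tree, random.Random(0)))
print(sorted(found))  # -> ['a', 'b', 'c', 'd', 'e']
```

With infinite branches (as in a real miniKanren search) you would instead sample which stream to pull from at each step, trading the completeness guarantee for the chance of reaching deep answers sooner.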



