
That doesn't help at all: although Datalog is a syntactic fragment of Prolog, its dynamic semantics are very different, so the operations defined in the Prolog standard are suggestive at best and completely nonsensical at worst.

For example, the program `p(X) :- p(X)` with the query `?- p(X)` is an infinite loop in Prolog, but in Datalog it terminates trivially and evaluates to the empty set of facts for `p` in the absence of any other facts.
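To make the difference concrete, here is a minimal sketch (in Python, with made-up names, not a real engine) of naive bottom-up evaluation, which is one standard way to realise Datalog's fixpoint semantics: start from the given facts and apply every rule until nothing new is derived. With no facts for `p`, the rule `p(X) :- p(X)` derives nothing on the first pass, so evaluation stops immediately, whereas Prolog's top-down resolution recurses forever.

```python
# Minimal sketch of naive bottom-up Datalog evaluation (illustrative only).
# Facts are tuples like ("p", x); the single rule is p(X) :- p(X).

def step(facts):
    """One application of p(X) :- p(X): derive p(x) for every known p(x)."""
    return {("p", x) for (pred, x) in facts if pred == "p"}

def fixpoint(facts):
    """Apply the rule until no new facts appear (always terminates for Datalog)."""
    known = set(facts)
    while True:
        derived = step(known)
        if derived <= known:   # nothing new: least fixpoint reached
            return known
        known |= derived

print(fixpoint(set()))  # set(): p stays empty and evaluation terminates immediately
```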


I think it's more cultural than anything else. SQL, despite being declarative itself, has a more imperative feel to it for some people (recursion being at the fringes of its standard), and people seem to believe that more declarative means slower and more difficult to comprehend. The former is true only because of the implementation effort that went into SQL engines, and the latter is plain false.

It probably doesn't help that the standard presentation of Datalog looks very much like Prolog, which most people think is an esoteric and failed programming language.

As for its limitations, it's disciplined in a number of ways, but those are features rather than shortcomings. As you can see in the post, it employs syntactic restrictions to ensure domain independence, and it disallows function symbols to guarantee termination. It requires predicates that are not purely logical (extralogical) to satisfy certain conditions for safe execution. These make sure that your queries are well-behaved even before you start running them. There is research on relaxing these restrictions (cough my work cough) without compromising on the nice properties that come along with them.
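To give a flavour of what such a syntactic check looks like, here is a rough Python sketch (the rule representation is made up purely for illustration) of the usual range-restriction test: every variable in a rule's head must occur in some positive body atom, which is one common way of ensuring domain independence.

```python
# Sketch of a range-restriction (safety) check for Datalog rules.
# Atoms are (predicate, args) pairs; variables are uppercase strings.
# The representation is purely illustrative.

def variables(atom):
    _, args = atom
    return {a for a in args if isinstance(a, str) and a[:1].isupper()}

def is_range_restricted(head, body):
    """Every head variable must occur in at least one positive body atom."""
    body_vars = set().union(*(variables(a) for a in body)) if body else set()
    return variables(head) <= body_vars

# Safe: X is bound by person(X) in the body.
print(is_range_restricted(("adult", ("X",)), [("person", ("X",))]))       # True

# Unsafe: Y never occurs in the body, so its answers depend on the domain.
print(is_range_restricted(("likes", ("X", "Y")), [("person", ("X",))]))   # False
```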

The nice thing is of course that if you want to do the same for SQL, you start with Pandora's box already open and try to contain the chaos, while with Datalog you're carefully trying to make the most of it without unleashing madness. The latter is a much nicer foundation.


OP here. Happy to answer any questions.


That's not always an option. Your career very much depends on consistently publishing in "A" venues (conferences/journals), so it depends on your subcommunity's opinion on open access. For example, programming language people as a community are super keen on it, so most top venues are becoming open access. If that's not the case for your field, insisting on open access is basically academic suicide unless you already have an amazing reputation.


You see, I read that clause in particular as "The difference between research and tinkering is whether you belong to an organisation that's willing to spend 30k+ a year on journal access".

And fuck that. If that's the attitude that academia is going to take, clearly we need to kill it off, tinker for a while, and start fresh.

Maybe we can refresh some standards for academic writing while we're at it.


I've heard some prominent academics speak to the same end, or farther, in the rare access I've had. (Yet displaying no less passion for their chosen subjects. Who would have guessed!)


Freeman Dyson is actually quite vocal about his opposition to the current structure of academia, so I'm not surprised that you'd be able to find other academics who were equally self-critical.


I probably shouldn't be so surprised, but I can count the esteemed academics I've had the chance to encounter on one hand, and many of them seem to share that same opinion.

The number of pretentious title-holders (even at the lowest levels) I've met seems unending. I could get into that one... but I won't.

Instead you've just reminded me of yet more subjects to stack on my reading list.


This is all true. However, the large publishers (IEEE, ACM, Springer) these days all allow you to self-archive on your own webpage and typically also on ArXiv. So you can (and should) make your papers freely available. I have done that since the start of my career. It's just a matter of looking.


It is a breath of fresh air to see someone who, after earning some success and wealth, still remembers the values he started out with.

Going a step further and being a good leader would, I suppose, mean steering the technology towards a tool that can create the values he desires, e.g. advocating for a proof of work that folds proteins.


Ah, that's a problem with many facets.

It is not that we can't come up with a logic language that is more declarative, it is just that telling the program everything about the universe is so damn dull. For example, the ordering of your body predicates will often be based on what you intuitively think the sizes of their solution sets are going to be, e.g. `plus(A,B,C), solution(C,X).` You know that plus is injective, so you put it first, but from a logical perspective putting the solution predicate first is just as sensible. If the language allowed you to express the relative sizes of the predicates, this could be done automatically, but it would involve encoding the order of predicates somewhere else in the program!
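As a toy illustration (in Python, with made-up finite relations and a deliberately naive search, not real Prolog), the two conjunct orders below compute the same answers but enumerate very different numbers of tuples; those relative sizes are exactly the information a more declarative language would need in order to pick the order itself.

```python
# Toy illustration: conjunct order changes the work done, not the answers.
# Pretend A and B are already bound (A=2, B=3); the relations are small stand-ins.

plus = {(a, b, a + b) for a in range(10) for b in range(10)}   # plus(A, B, C)
solution = {(c, c * c) for c in range(100)}                    # solution(C, X)

def plus_then_solution(a, b):
    """plus(A,B,C), solution(C,X): plus binds C to a single value first."""
    enumerated, answers = 0, set()
    for (a2, b2, c) in plus:
        enumerated += 1
        if (a2, b2) == (a, b):
            for (c2, x) in solution:
                enumerated += 1
                if c2 == c:
                    answers.add((c, x))
    return enumerated, answers

def solution_then_plus(a, b):
    """solution(C,X), plus(A,B,C): every solution tuple is retried against plus."""
    enumerated, answers = 0, set()
    for (c, x) in solution:
        enumerated += 1
        for (a2, b2, c2) in plus:
            enumerated += 1
            if (a2, b2, c2) == (a, b, c):
                answers.add((c, x))
    return enumerated, answers

n1, ans1 = plus_then_solution(2, 3)
n2, ans2 = solution_then_plus(2, 3)
print(ans1 == ans2)  # True: the order does not change the answers
print(n1, n2)        # but the amount of enumeration differs a lot (200 vs 10100 here)
```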

You also need to get rid of cuts in Prolog if you want to be closer to truly declarative programming. You might then like to try Datalog (alas, not Turing complete).

Another approach is to separate concerns more clearly. The part of Prolog that encodes and solves a logical problem is often different from the part dealing with IO, for example. Ideally, you want to be more declarative in the former domain and more imperative in the latter. This can be done (and probably has been done) with a monadic approach.
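Not a monadic treatment, but here is a rough Python sketch of the same separation of concerns (the example problem and names are made up): the solving part is a pure function over data, and all IO stays in a thin imperative shell around it.

```python
# Rough sketch: keep the "logic" core pure and push IO to the edges.
# solve() is a pure function from a problem description to an answer; main() does IO.

def solve(edges, start, goal):
    """Pure core: is there a path from start to goal in the given edge relation?"""
    seen, frontier = set(), {start}
    while frontier:
        if goal in frontier:
            return True
        seen |= frontier
        frontier = {b for (a, b) in edges if a in frontier} - seen
    return False

def main():
    # Imperative shell: assemble input, call the pure core, print the result.
    edges = {("a", "b"), ("b", "c"), ("c", "d")}
    print(solve(edges, "a", "d"))   # True
    print(solve(edges, "d", "a"))   # False

if __name__ == "__main__":
    main()
```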


I made an online Datalog IDE with Lua extensions (Turing complete): http://ysangkok.github.io/mitre-datalog.js/wrapper.html


I do not get what you mean by plus being injective.

`plus(1, 2, C). C = 3.`

`plus(0, 3, C). C = 3.`

Isn't that a counterexample, showing how it is not injective?


Yes, I was thinking total with respect to _intuitive_ arguments, but wrote injective. Sorry.


I'm a grad student and I spend a lot of my time reading CS papers.

It is amazing that you want to start reading CS papers and I highly encourage it. However, if you don't have a CS degree, I think papers might be remarkably off-putting (they usually are, even to early grad students). I suggest you start with textbooks instead. Papers suffer from not being rewritten once they are published, so from a pedagogical point of view the ideas are usually not explained as well as they could be after being digested for many, many years. Another problem is that most papers contain multiple ideas, some of which rise and shine while others die (often for good reasons). It is not easy to spot which is which without knowing the wider context, which you naturally lack as a beginner.

If you insist on reading papers, however, at least don't read them in linear order. A good section order is abstract -> intro -> conclusion -> related work (because it makes comparisons, which help) -> background -> evaluation (if it exists!) -> technical sections (those are usually best read in linear order). If there are proofs, read them only if you must! If a proof is presented in the paper, that is a good indication that it contains multiple subtle points which are tough to understand even for an experienced researcher.


Could you suggest some textbooks? And if you have the time, a word on why particularly book X and not book Y in the same subject?

There’s so much noise online now about these topics that it’s very hard to figure out where to focus and which resources are worth investing time into.

Thanks in advance!


Sure. Which subjects are you interested in?


We got caught up doing science of it all.


Nope. Strange loops are by definition not intentionally circular (while a circular graph in a file system, as in this case, is). He makes that clear, I think, in the final chapter of GEB.


He likes computers, he even likes AI. He doesn't buy into Ray Kurzweil's ideas about the singularity [0,1]. He is also (as I understand it) in the Chomsky camp on statistical learning, as opposed to the Peter Norvig (or Google) camp [1,2]. Those two are highly unpopular stances to have these days, so I can see how that can be confused with not liking computers/AI.

If you read GEB, you can see in different chapters that he is a big fan of computers, simulations, attempts at AI, and the like.

[0] https://www.youtube.com/watch?v=Nhj6fDDnckE

[1] http://www.americanscientist.org/bookshelf/pub/douglas-r-hof...

[2] http://norvig.com/chomsky.html


I mean, the source is that I had dinner with the guy for a school dealio, and he said he didn't really like computers and had grad students do all the programming. GEB is full of mathematical content, but it isn't hung up on computers as machines and concrete things.


I'm not seeing him expressing a position on language that goes as far as Chomsky's (as described by Norvig) in that interview. Has he written more about this somewhere that you know of?

