Do you see a good way to include backtracking in an imperative programming language?
I can imagine how unification would work, since the ubiquitous "pattern matching" is a special case of Prolog's unification. But I've never seen how backtracking could be useful...
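For concreteness, the closest imperative encoding I can picture is a generator-based search, where falling out of a loop is the "fail and try the next choice" step. A rough sketch in Python (the example and names are mine):

    # N-queens via recursive generators: place one queen per row and
    # "backtrack" by simply letting the loop try the next column when a
    # placement conflicts with an earlier row.
    def queens(n, placed=()):
        row = len(placed)
        if row == n:
            yield placed                      # a complete solution
            return
        for col in range(n):                  # choice point for this row
            if all(col != c and abs(col - c) != row - r
                   for r, c in enumerate(placed)):
                yield from queens(n, placed + (col,))

    print(next(queens(8)))  # first solution found: a tuple of column indices per row

That gives chronological backtracking on demand (ask the generator for more answers), but it's hand-rolled per problem rather than a language feature the way it is in Prolog.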
Backtracking is perilous in general; logic programming languages have really nice support for it, but I don't know how to avoid pathological inefficiency.
Re-evaluation of a tabled predicate is avoided by memoizing the answers. This can realise huge performance enhancements, as illustrated in section 7.1. It also comes with two downsides: the memoized answers are not automatically updated or invalidated if the world (the set of predicates on which the answers depend) changes, and the answer tables must be stored (in memory).
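In imperative terms, tabling behaves roughly like memoizing a pure function; a quick Python analogy (my sketch, not how a Prolog engine implements it) shows both downsides:

    from functools import lru_cache

    @lru_cache(maxsize=None)   # the "answer table": held in memory, never invalidated
    def fib(n):
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    print(fib(200))            # fast, because re-evaluation is avoided

    # If the inputs fib depended on could change, the cached answers would go
    # stale unless we explicitly called fib.cache_clear().

The same trade-off applies to tabled predicates: the tables cost memory, and nothing invalidates them when the predicates they depend on change.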
Known to the Prolog community since about the 1980s, if I got my references right.
Yes :) We make software that helps sell complex products. (If your product has a million options and takes up a whole factory floor, you can’t just have a series of dropdowns.)
Thanks for the insight, Görkem! I always thought CPQs were a really good use case. We had so many problems with performance (not with your product); CPQ is becoming standard software for pricing contracts.
> Up until about GPT 2, EURISKO was arguably the most interesting achievement in AI.
I'm really baffled by such a statement and genuinely curious.
How come studying GOFAI as an undergraduate and graduate student at many European universities, doing a PhD, and working in the field for several years _never_ exposed me to EURISKO until last week (thanks to HN)?
I heard about Cyc and many formalisms and algorithms related to EURISKO, but I never heard its name.
It was featured in a BBC radio series on AI made by Colin Blakemore [1] around 1980, and the papers on AM and EURISKO were in the library of the UK university that I attended.
So? There's a real possibility DART has still saved its customers more money over its lifetime than GPT has, and odds are basically 100% that your yoga teacher and IT colleagues haven't heard a thing about it either. The general public has all sorts of wrong impressions and unknown unknowns, so I don't see why their awareness should ever be used as a technology-industry benchmark by anyone not working in the UI department of a smartphone vendor.
From time to time, I read articles on the boundary between neural nets and knowledge graphs, like a recent one [1]. Sadly, no mention of Cyc.
My bet, judging mostly from my failed attempts at playing with OpenCyc around 2009, is that Cyc has always been too closed and too complex to tinker with. That doesn't play nicely with academic work. When people finish their PhDs and start working for OpenAI, they simply don't have Cyc in their toolbox.
The cochlea actually supports the point the article makes: while it does transform to the frequency domain, it doesn't do (or even approximate) a Fourier transform. The time->frequency domain transform it "implements" is more like a wavelet transform.
Edit: To expand on this, to interpret the cochlea as a Fourier transform is to make the same mistake as thinking eyes have cone cells that respond only to red, green, or blue light. The reality is that each cell has a varying response to a range of frequencies. Cone cells have a range that peaks in the low, medium, or high frequency area and tails off at the sides. Cochlear hair cells have a more wavelet-like response curve, with secondary peaks at harmonics of their peak response frequency.
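For reference, the standard way to state the contrast (textbook definitions, my notation): a Fourier analysis correlates the signal with fixed-frequency sinusoids, while a wavelet analysis correlates it with shifted and scaled copies of one localized mother wavelet \psi,

    \hat{f}(\omega) = \int f(t)\, e^{-i\omega t}\, dt
    W_f(a, b) = \frac{1}{\sqrt{a}} \int f(t)\, \psi^*\!\left(\frac{t - b}{a}\right) dt

Because the wavelet's analysis window shrinks as frequency goes up, it gives roughly constant-Q, logarithmically spaced frequency resolution, which is the sense in which the cochlea looks more wavelet-like than Fourier-like.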
Caveat: I'm not an expert in this, only an enthusiastic amateur, so I eagerly await someone well-akshuallying my well-akshually.
Any kind of discrete Fourier transform, and also any device that generates the Fourier series of a periodic signal, even when done in an ideal way, must have outputs that are generated by a set of filters that have "a varying response to a range of frequencies".
Only a full Fourier transform, which has an infinity of outputs, could have (an infinite number of) filters with infinitely narrow bandwidth, but those would also need an infinite time to produce their output.
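To make that concrete (my arithmetic): feed a pure tone e^{i\omega t} into the analyzer for the k-th coefficient over a finite window of length T and you get

    c_k(\omega) = \frac{1}{T} \int_0^T e^{i\omega t}\, e^{-i\omega_k t}\, dt
                = e^{i(\omega - \omega_k) T / 2}\,
                  \frac{\sin\!\big((\omega - \omega_k) T / 2\big)}{(\omega - \omega_k) T / 2}

so each output responds to a band of width on the order of 2\pi/T around its \omega_k; the filters become infinitely narrow only in the limit T \to \infty.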
So what you have said does not show that the eye cone cells do not perform a Fourier transform (more correctly, a partial expansion in a Fourier series of the light, which is periodic in time at time scales comparable to its period).
The right explanation is that the sensitivity curves of the eye cone cells are a rather poor approximation of the optimal sensitivity curves of a set of filters for analyzing the spectral distribution of the incoming light (animals other than mammals have better sensitivity curves; mammals have lost some of them, and the ancestors of humans re-developed two filters, for red and green, from a single inherited filter, so there has not been enough time to do as good a job as in our distant ancestors).
Sure, but the article asks the question about the frequency domain generally and then constrains itself to Fourier transforms. Fourier has a lot of baggage from making large assumptions. Transforms like wavelet and Laplace are closer to the "real world" because they make fewer non-physical assumptions and have actual physical implementations. It doesn't get much more real than seeing it with your own eyes.
I'm not certain the secondary peaks would matter very much though? It seems to me that maybe the most useful model would be not a wavelet transform but some form of DCT?
At any rate, the point is that the frequency domain matters a lot, since our brain essentially receives sound data converted to the frequency domain in the first place...
It's easy to forget how grounded in physics biology is. When I was in college, we had an issue where our stem cell lines were differentiating into bone. Turns out, the hardness of the environment is a signal stem cells can transduce, and the hard dish was telling them they were supposed to be bone cells.