Regarding the conceptual and theoretical background of the OpenCog approach, please see the two-volume work "Engineering General Intelligence," published by Atlantis Press (distributed by Springer) a few months ago. There is a lot more subtlety to it than you suggest.
AGI is mainstream science these days. The keynote of the 2012 AAAI conference (the major mainstream AI research conference each year), delivered by the President of AAAI, was largely about how the time has come for the AI field to refocus on human-level AI. He didn't use the term "AGI," but that was the crux of it.
The "AI winter" is over. Maybe another will come, but I doubt it.
What's different from 20 years ago? Hardware is way better. The Internet is way richer in data, and faster. Software libraries are way better. Our understanding of cognitive and neural science is way stronger. These factors conspire to make now a much better time to approach the AGI problem.
As for my own AGI research lacking anything new, IMO you think this because you are looking for the wrong sort of new thing. You're looking for some funky new algorithm or knowledge structure or something like that. But what's most novel in OpenCog is the mode of organization and interaction of the components, and the emergent structures associated with them. I realize it's a stretch for most folks to accept that the novel ingredients needed to make AGI lie in the domain of systemic organizational principles and emergent networks rather than novel algorithms, data structures or circuits -- but so it goes. It wouldn't be the first time that the mass of people were looking for the wrong kind of innovation, hmm?
Regarding tachyons in videos of AGI conferences, could you provide a reference? AGI conference talks are all based on refereed papers published by major scientific publishers. Some papers are stronger than others, but there's no quackery there.... (There have been "Future of AGI" workshops associated with the AGI conferences, which have had some freer-ranging speculative discussions in them; could you be referring to a comment an audience participant made in a discussion there?)
I wish you luck (well, sort of -- with great power would come great responsibility and all that).
I wasn't making up the tachyon guy. If I have time, I'll dig up the video (it'd be a little hard since the hplus website was reorganized). He was a presenter, not an audience member, and had at least one paper at one of these conferences. I can easily believe the AGI conferences have gotten better.
I would stick to the point that AGI needs to make clear how it will overcome previous problems. Making that clear to mainstream science is useful for funding, but making it clear to yourselves, so that you have ways to proceed, is most important.
I don't necessarily agree exactly with Hubert Dreyfus' critique, but I think that at a minimum a counter-critique of his critique is needed to clarify how an AGI could work.
I mean, I have worked in computer vision (not that much, even). There's no shortage of algorithms that solve problem X, but nothing in particular weds them together. Confronted with a new vision problem Y, you are forced to choose one of these thousand algorithms and modify it manually; you get no benefit from the other 999, as the sketch below illustrates.
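To make that concrete, here's a toy sketch of the situation (entirely hypothetical -- the detector names and one-line "implementations" are made up for illustration, not drawn from any real vision library):

```python
# A hypothetical registry of special-purpose vision algorithms, each one
# hand-tuned to its own problem X. The names and rules are invented.
from typing import Callable, Dict
import numpy as np

DETECTORS: Dict[str, Callable[[np.ndarray], bool]] = {
    "edge_detector_v3": lambda img: img.std() > 20.0,    # stand-in for a real algorithm
    "blob_detector_v7": lambda img: img.mean() > 128.0,  # stand-in for a real algorithm
    # ... 998 more, each solving its own narrow problem ...
}

def solve_problem_y(img: np.ndarray, chosen: str) -> bool:
    """Faced with a new problem Y, you pick ONE algorithm and adapt it by
    hand; the other 999 entries in the registry contribute nothing."""
    return DETECTORS[chosen](img)

image = np.random.randint(0, 256, size=(64, 64)).astype(float)
print(solve_problem_y(image, "edge_detector_v3"))
```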
As for open source methodologies solving the AGI question, I've followed multiple open source projects. While certain things might indeed work well when developed in the "bazaar" style, I haven't seen something as exacting as a computer language come out of such a process -- languages tend to require an individual designer working rather exactly, with helpers certainly, but in many, many situations almost alone (look at Ruby, Perl, Python, etc.). I would claim AGI would be at least as exacting as a computer language, possibly more so.

Further, just consider how the "software crisis" -- the limitations involved in producing large software with large numbers of people -- expresses the absence of AGI. Essentially, to create AGI, you would need to solve something like a bootstrapping problem, so that the intentions of the fifty or five thousand people working together add up to more than what fifty or five thousand intentions normally add up to in ordinary software engineering. I suppose I believe some progress on a very basic level is needed to address this.
This is Ben Goertzel, chief founder of the AGI conference series.
You are correct that the AGI conferences have a higher ratio of "speculative ideas" to "technical results" than ICML. This is intentional and I believe appropriate -- because AGI is at an earlier stage of development than machine learning, and because it's qualitatively different in character from machine learning.
Machine learning (in the sense the term is now typically used, i.e. supervised classification, clustering, data mining, etc.) can be approached mainly via a narrowly disciplinary approach. Some cross-disciplinary ideas have proved valuable, e.g. GAs and neural nets, but the cross-disciplinary ideas there have quickly been "computer-science-ized"...
OTOH, I think AGI is inherently more complex and multifarious than ML as currently conceived, and hence requires more "out of the box" and freely multi-disciplinary thinking.
I think that in 10-15 years, when the AGI field is much more mature, the conferences will seem a bit more like ML conferences in terms of the percentage of papers reporting strong technical results. BUT, they will never seem as narrowly disciplinary as ML conferences, because AGI is a different sort of pursuit...
Thanks for the kind reply. I said ICML, but NIPS would have been a better point of reference, since it was originally conceived as a cross-disciplinary enterprise. The NIPS table of contents indicates it's possible to have a selection of papers both technically sharp and interdisciplinary. We should all be so lucky to attract such a set of papers.
Hi, this is Ben Goertzel, the chief founder of the OpenCog AGI-focused software project and of the AGI conference series.
Comparing Google Search and IBM Watson to OpenCog and other early-stage research efforts is silly. Google Search and IBM Watson have taken fairly mature technologies, pioneered by others over decades of research, and productized them fantastically. OpenCog is a research project and is aimed at breaking fundamentally new research ground, not at productizing and scaling-up technologies already basically described in the academic literature.
Lecturing is a very small percentage of what those of us involved with OpenCog do. We are building complex software and developing associated theory. Indeed parts of our approach are speculative, and founded in intuition alongside math and empirics. That's how early-stage research often goes.
Of course you can trash all early-stage research as not having results yet. And the majority of early-stage research will fail, probably making you feel vindicated and high and mighty in your skepticism ;p .... But then, a certain percentage of early-stage research will succeed, because of researchers having the guts to follow their intuitions in spite of the ceaseless tedious sniping of folks like you ;p ...
On the contrary, there IS a detailed, well-thought-out defense of the OpenCog design, but it's sufficiently complicated that the casual observer may not be able to appreciate it. Wait for the book "Building Better Minds" (http://opencog.org/wiki/Building_Better_Minds), to be available later this year, for a detailed treatment of the theoretical foundations of OpenCog. Or for a few clues see http://goertzel.org/dynapsyc/2009/general_ai.htm and the references therein. For a crisp write-up of many of the ideas, and a comparison to other AGI approaches, see the proposal http://www.goertzel.org/CogBot_public.pdf -- Ben Goertzel, leader of OpenCog ...
Looking forward to it, Ben. And while I respectfully disagree on principles, I greatly appreciate the work you do to advance thought and discussion on these issues.
A bunch of classifiers is not an artificial general intelligence. A human-level mind (or beyond) requires a systematic cognitive architecture; it's not going to emerge out of a heterogeneous, quasi-random soup of mind-components. This is a naive theory of cognitive science, I would argue.
For a careful, systematic design for an advanced AI system, see http://opencog.org/wiki/OpenCogPrime, which is associated with the open-source AI project opencog.org
Anyway, I'm probably one of the biggest optimists in the AI research community, but in my view the idea in this post represents "way over the top" optimism based on a naive view of how mind works.
Given a mature version of an AI system built according to an overall AI architecture like OpenCog, then maybe people writing little "mind modules" as you suggest could make sense. But the Internet just is not a "mind operating system."
I wrote about what would need to be done to turn the net into more of a mind-OS in my 2001 book "Creating Internet Intelligence" (Plenum Press). At the high level I don't think this is a bad idea. But again, brains are not just soups of heterogeneous processes -- the right high-level cognitive architecture is required.
-- Ben Goertzel
novamente.net
agiri.org
singinst.org
goertzel.org
I'm trying to draw a parallel between how content on the web is being created and how an artificial mind might be created. If you sat down and tried to engineer a giant encyclopedia within one company/community, you would end up with Britannica (within one company) or Wikipedia (within a community of enthusiasts). However, the web at large is much larger than both of these because of the economic incentives for people to create articles. Many people try and fail (in effect working for free), but the progress is rapid, as in evolution.
In the same way, incentivizing people to produce classifiers would have a similar effect: rapid progress.
If you think of intelligence as the ability to predict accurately, then having a giant web of classifiers that predict accurately could be construed as a form of artificial general intelligence, or a human-type artificial intelligence. Having enough data about the world, because you have created many classifiers that can recognize semantic events in video and speech, would allow you to make all the same types of recognitions and predictions that a human would make. A toy sketch of the idea follows below.
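As a toy illustration of what such a "web of classifiers" might look like structurally (my own invented sketch, not anything from an actual system -- the node names and decision rules are placeholders):

```python
# A tiny "web of classifiers": each node predicts from the raw input plus
# the outputs of nodes evaluated before it. All names/rules are invented.
from typing import Callable, List, Tuple

Node = Callable[[dict], float]

def is_outdoors(ctx: dict) -> float:
    # Leaf classifier: reads a raw feature.
    return 1.0 if ctx["brightness"] > 0.6 else 0.0

def is_crowd(ctx: dict) -> float:
    return 1.0 if ctx["num_faces"] > 5 else 0.0

def is_public_event(ctx: dict) -> float:
    # Higher-level node: predicts from other classifiers' outputs.
    return 1.0 if ctx["is_outdoors"] > 0.5 and ctx["is_crowd"] > 0.5 else 0.0

# Evaluation order respects the dependency structure.
WEB: List[Tuple[str, Node]] = [
    ("is_outdoors", is_outdoors),
    ("is_crowd", is_crowd),
    ("is_public_event", is_public_event),
]

def run_web(features: dict) -> dict:
    ctx = dict(features)
    for name, node in WEB:
        ctx[name] = node(ctx)
    return ctx

print(run_web({"brightness": 0.8, "num_faces": 12}))
```

Scaled up to thousands of nodes over video and speech features, this is the kind of structure the paragraph above envisions.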
Regarding the instructions for building and compiling OpenCog, it's a lot easier now than last summer. Try the Dockerfile, for example ... https://github.com/opencog/opencog/blob/master/Dockerfile
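For reference, the usual Docker workflow would be something along these lines (a sketch only -- the image tag is my own choice, and the exact steps may change as the Dockerfile evolves):

```
git clone https://github.com/opencog/opencog.git
cd opencog
docker build -t opencog .     # build from the Dockerfile at the repo root
docker run -it opencog bash   # open a shell inside the resulting image
```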