You remind me of stock market speculators who fail to account for regime change: building a trading system on past data incurs model risk, namely that the future may simply never look like the past again. Just because things worked a certain way in the past doesn't mean they will in the future, absent a good reason (the sun will rise tomorrow because that's how gravity works, not merely because it happened yesterday and the day before that).
If you can give a solid reason, with actual logic to it rather than appeals to past events, I'd be delighted. Just posting quotes and pointing to totally different technological developments that many said were impossible suggests you have nothing: you don't know any better than I do. Blind optimism is as unreasonable as blind pessimism.
I have some reasons: a lot of people who could benefit from machine learning and other AI-related fields don't even care about them; and research, even in a hot field like machine learning, moves slowly, so it takes years to determine in retrospect that a given body of work made a significant, noteworthy impact. Papers go through months of review and revision and are published months after that; by the time you read a journal paper, the original work behind it may have been done two or three years earlier. Things move on the scale of _years_, and I don't see that changing anytime soon. That doesn't sound very singularity-ish to me.
Now, can you give anything based on actual reason/logic/evidence, or are you just going to give me another unrelated quote?
"When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong."
-- Arthur C. Clarke
If you're looking at academic papers, no wonder you don't believe it.
If we are only talking about creating an artificial general intelligence, this should do: nature created a general intelligence through natural selection, which is a less powerful process than what humans wield (if only because humans can implement natural selection on computers and also have intelligent tricks that evolution lacks). Hence, if the human brain follows the laws of physics, it should be possible to find out how it works and implement it without all the messy parts that are bound to have accumulated over evolutionary time.
If you believe the estimates of the computing power of the human brain, we only need a few more years of growth in the FLOPS-per-dollar ratio before the required computational horsepower becomes available to smaller organizations. In fact, supercomputers being designed today already touch the lowest estimates of the brain's computational capacity. So if these estimates hold, all we lack is the knowledge of how to replicate the process artificially. Which certainly doesn't mean it is right around the corner, but not looking into it would be stupid.
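To put rough numbers on that "few more years" claim, here is a back-of-the-envelope sketch. Every figure in it is an assumption picked for illustration (the budget, today's FLOPS per dollar, the doubling time, and the brain estimates themselves are all disputed), so treat the output as an order-of-magnitude guess rather than a prediction:

```python
# Back-of-the-envelope: years of price/performance growth needed before a
# given budget buys "brain-scale" compute. All numbers are assumptions.
import math

budget_dollars = 1e6            # assumed budget of a smaller organization
flops_per_dollar_today = 1e9    # assumed current price/performance (illustrative)
doubling_time_years = 1.5       # assumed doubling period for FLOPS per dollar

# Commonly quoted (and widely disputed) brain-equivalent compute estimates.
brain_flops_estimates = {"low": 1e15, "mid": 1e17, "high": 1e19}

for label, brain_flops in brain_flops_estimates.items():
    affordable_now = budget_dollars * flops_per_dollar_today
    if affordable_now >= brain_flops:
        years = 0.0
    else:
        doublings = math.log2(brain_flops / affordable_now)
        years = doublings * doubling_time_years
    print(f"{label} estimate ({brain_flops:.0e} FLOPS): ~{years:.0f} more years")
```

Under these made-up numbers the low estimate is already affordable and the high one is a couple of decades out, which is roughly the spread of the argument: plausible soon under some estimates, comfortably far off under others.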
Then, of course, the usual pro-singularity refrain goes like this: once we have created one machine that faithfully copies what the human brain does best, the exponential growth in processor capability will ensure that we soon have an army of intelligent machines. That sentiment is easy to attack. But even if processors stopped getting better overnight (a grossly conservative assumption), we could still improve our algorithms to make our hypothetical artificial intelligences better. In nature, brain mass doesn't equate to capability. (That last sentence is anecdotal; I won't bother digging up the papers to see whether it holds.)
You cite the sorry state of machine learning and artificial intelligence to show that we aren't anywhere near what is required to implement strong AI. I agree that cognitive science and artificial intelligence are in a sorry state and barely make any effort at copying what biological brains do. But didn't you just warn against extrapolating future change from history? It is obvious to any young, ambitious scientist that our parents have been banging away at the wrong problems for the last 30 years.
This isn't a pro-singularity post, because I believe that what Ray Kurzweil and his cronies preach resembles a cult too closely. They may or may not have ulterior motives, but as you say, things always look better on paper than they do in the real world. What I am trying to say is that it would be incredibly stupid not to investigate these things more closely (not merely to _try_, but to throw our full weight at the problem), because the implications of successfully pulling this off would be profound.
Wait - first you say that there's no basis for AI because of the historical lead-up to today ... then you say in this post that people (stock speculators) who act on systems built from past variables don't think clearly, because they fail to take into account new developments (regime change) that loosen the deterministic grip the past has on the future. So which is it - there is no AI soon because nothing in recent history indicates that there will be, or ... just because recent history fails to indicate a basis for it doesn't mean it isn't just around the corner? A bit contradictory??