Granted.

But the academic activity is focused around the kind of activities that Kuhn calls "Normal Science".

That is, ML researchers mainly do competitions on the same data sets, trying to put up better numbers.

In some sense that keeps people honest, and it lowers the cost of creating training data, but it only teaches people how to do the same data set over and over again, not how to tackle a fresh one.

So a lot of this activity is meaningful in terms of the field, but maybe not meaningful in terms of practical use.

I saw this happen in text retrieval. When I was trying to get my head around why Google was better than prior search engines, I learned very little from looking at TREC; in fact, people in the open literature were having a hard time getting PageRank to improve the performance of a search engine.
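
For context, PageRank itself is simple to compute: it's the stationary distribution of a "random surfer" over the link graph, found by power iteration. A toy sketch with a made-up graph (illustrative only):

    import numpy as np

    links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}   # toy web graph: page -> out-links
    n, d = 4, 0.85                                # number of pages, damping factor

    # Column-stochastic matrix: M[j, i] = probability of following a link i -> j
    M = np.zeros((n, n))
    for i, outs in links.items():
        for j in outs:
            M[j, i] = 1.0 / len(outs)

    r = np.full(n, 1.0 / n)                       # start from a uniform distribution
    for _ in range(100):                          # power iteration to the fixed point
        r = (1 - d) / n + d * M @ r

    print(r)                                      # query-independent "importance" prior

Computing that score was never the hard part; the difficulty reported in the literature was blending a query-independent prior like this into per-query relevance ranking so that it actually helped.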

A big part of the problem was that the pre-Google (and a few years into the Google age) TREC tasks wouldn't recognize that Google was a better search engine, because Google was not optimized around the TREC tasks; it was optimized around something different. If you are optimizing for something different, what you are optimizing for may matter more than the specific technology you are using.

Later on I realized that TREC biases were leading to "artificial stupidity" in search engines. IBM Watson was famous for returning a probability score for Jeopardy answers, but linking the score of a search result to a probability is iffy at best with conventional search engines.
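
To make "linking a score to a probability" concrete: the standard trick is Platt-style calibration, fitting a logistic regression from raw retrieval scores to human relevance labels. A minimal sketch with made-up data:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Made-up (raw score, relevance judgment) pairs; a real system fits on many queries
    scores = np.array([[12.1], [9.4], [7.7], [3.2], [1.5], [0.8]])
    labels = np.array([1, 1, 0, 1, 0, 0])

    calib = LogisticRegression().fit(scores, labels)
    print(calib.predict_proba([[10.0]])[0, 1])  # estimated P(relevant | score = 10.0)

One reason this stays iffy with conventional engines is that raw scores like BM25's aren't comparable across queries, so a single global mapping like this tends to be poorly calibrated.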

It turns out the TREC tasks were specifically designed not to reward search engines that "know what they don't know": the organizers would rather people build engines that can dig deep for hard-to-find results than ones that put their hand up really high when the answer is dead easy.
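
"Knowing what you don't know" is easy to state as selective answering, which is roughly what Watson did with its buzz threshold: only attempt a question when the calibrated confidence clears a bar, and report precision on what you attempted. A sketch with made-up numbers:

    # (confidence, was_correct) pairs for a run of questions; values are made up
    preds = [(0.95, 1), (0.90, 1), (0.60, 0), (0.40, 1), (0.20, 0)]

    def selective_answering(preds, threshold):
        attempted = [correct for conf, correct in preds if conf >= threshold]
        if not attempted:
            return 0.0, 0.0
        precision = sum(attempted) / len(attempted)  # accuracy on attempted questions
        coverage = len(attempted) / len(preds)       # fraction of questions attempted
        return precision, coverage

    print(selective_answering(preds, 0.8))  # raise the bar: higher precision, lower coverage

Classic TREC metrics score a ranked list for every topic and give no credit for abstaining, which is exactly the bias described above.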



> But the academic activity is focused around the kind of activities that Kuhn calls "Normal Science".

True, but even Kuhn would note that most paradigm shifts still come from within the field. You don't need complete outsiders, and as far as I know, cases of outsiders revolutionizing a field are quite rare.

You need someone (a) who can think outside the box, but you also need (b) someone who has all of the relevant background to not just reinvent some ancient discarded bad idea. Outsiders are naturals at (a) but are at a distinct disadvantage for (b).

I think what's really happening in this thread is:

1. Carmack is a deservedly beloved genius in his field.

2. He's also a coder, so "one of us".

3. Thus we want him to be a successful genius in some other field because that indirectly makes us feel better about ourselves. "Look what this brilliant coder like me did!"

But the odds of him making some big leap in AGI are very slim. That's not to say he shouldn't give it a try! Society progresses on the back of risky bets that pay off.


> But the odds of him making some big leap in AGI are very slim.

That's probably true. I look at this as Carmack running his own PhD program. I expect he will expand what we know about computation and the AGI problem before he's done.


> ML researchers mainly do competitions on the same data sets, trying to put up better numbers.

There are surely a lot of researchers doing that, but do you really think anyone who has a plausible claim to being one of the top 100 researchers in the field in the entire world is doing that? Even if there are only 100 people doing truly novel research, that's still 100 times as many people as are going to be working on Carmack's research.


How many people were working on physics before Einstein came along?

I don't think you understand the desired outcome here. We want eureka moments, and we're hopeful for some. That doesn't mean we expect them to happen. Stop being such a pessimist.



