Special projects (openai.com)
235 points by hendler on July 28, 2016 | 194 comments



"If you do not work on an important problem, it's unlikely you'll do important work. It's perfectly obvious. Great scientists have thought through, in a careful way, a number of important problems in their field, and they keep an eye on wondering how to attack them. Let me warn you, `important problem' must be phrased carefully. The three outstanding problems in physics, in a certain sense, were never worked on while I was at Bell Labs. By important I mean guaranteed a Nobel Prize and any sum of money you want to mention. We didn't work on (1) time travel, (2) teleportation, and (3) antigravity. They are not important problems because we do not have an attack. It's not the consequence that makes a problem important, it is that you have a reasonable attack. That is what makes a problem important."

Richard Hamming, "You and Your Research"

http://www.cs.virginia.edu/~robins/YouAndYourResearch.html


For most of the problems they mention you need AGI, and we do not know how to attack that problem either.

Solving any coding challenge is an NP-hard problem. You need to understand not only what you need to do but also how the language in which you need to do it works. For example, in the game of Go you have a huge number of possible states, but only a couple of moves to transition to those states. In a programming language every line of code has a large space of states, and it grows exponentially with every new line of code. Sure, you can brute-force a "hello world" program, but good luck writing 1000 lines of code that work together to solve a complex problem. Pattern matching will not help you, and training on GitHub will not help you.


Hi, I'm a researcher in program synthesis.

I think you meant to say "NP-hard". And indeed, there are many program synthesis problems which are NP-hard, 2EXP-complete, or undecidable. But there are also many program synthesis algorithms which run in polynomial time. If you use Windows, there might even be one running on your computer.

We've been studying this problem for close to 50 years. I think you see the basic problems, but we know a lot about what to do about them.


> If you use Windows, there might even be one running on your computer.

Which algorithm / program is this?


Maybe OP means Excel's FlashFill[1]? It's a feature that figures out string manipulations based on a handful of examples, and it pretty much went directly from MSR research on program synthesis into Excel 2013.

[1]: http://research.microsoft.com/en-us/um/people/sumitg/flashfi...
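
To give a flavor of the programming-by-example idea, here's a toy enumerative sketch (nothing like Microsoft's actual algorithm, and the primitives are made up): enumerate compositions of a handful of string functions until one is consistent with every example.

    # Toy programming-by-example: search compositions of string primitives
    # for one that agrees with every (input, output) example the user gave.
    from itertools import product

    PRIMITIVES = {
        "lower":      str.lower,
        "upper":      str.upper,
        "strip":      str.strip,
        "first_word": lambda s: s.split()[0],
        "last_word":  lambda s: s.split()[-1],
    }

    def synthesize(examples, max_depth=3):
        """Return a list of primitive names whose composition fits all examples."""
        for depth in range(1, max_depth + 1):
            for names in product(PRIMITIVES, repeat=depth):
                def run(s, names=names):
                    for name in names:
                        s = PRIMITIVES[name](s)
                    return s
                if all(run(i) == o for i, o in examples):
                    return list(names)
        return None   # nothing in this tiny language fits the examples

    print(synthesize([("  Jane Doe ", "doe"), ("Ada Lovelace", "lovelace")]))
    # -> ['lower', 'last_word']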


FlashFill, a feature in Microsoft Excel.


Indeed, NP-hard (edited it now). Can you expand on "we know a lot about what to do about them"? Thanks.


So first, NP-hard isn't much of a barrier. We often like to call it "NP-easy." Or, going a step further:

"PSPACE-complete is great news. It's the new poly-time." -- Moshe Vardi

SAT solvers are very fast. It's worst-case exponential, sure, but it's hard to find that worst-case. I just heard at lunch yesterday that MaxSAT is now also easily handling problems with millions of variables. I think I'd be scared if I found out what the record was for normal SAT.

So, how do you actually do synthesis? I could talk for quite a while about it.....but here's a pretty good intro-article: https://homes.cs.washington.edu/~bornholt/post/synthesis-for...
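
And if you want to see the core machinery in miniature, here is a bare DPLL-style solver (a didactic sketch only; real solvers add clause learning, watched literals, restarts, and good branching heuristics):

    # Minimal DPLL-style SAT solver over DIMACS-style clauses (lists of non-zero
    # ints, where -3 means "not x3"). For intuition only, not for performance.
    def dpll(clauses, assignment=None):
        assignment = dict(assignment or {})
        simplified = []
        for clause in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue                               # clause already satisfied
            rest = [l for l in clause if abs(l) not in assignment]
            if not rest:
                return None                            # clause falsified: backtrack
            simplified.append(rest)
        if not simplified:
            return assignment                          # every clause satisfied
        for clause in simplified:                      # unit propagation
            if len(clause) == 1:
                l = clause[0]
                return dpll(clauses, {**assignment, abs(l): l > 0})
        var = abs(simplified[0][0])                    # branch on an unassigned var
        for value in (True, False):
            result = dpll(clauses, {**assignment, var: value})
            if result is not None:
                return result
        return None

    # (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
    print(dpll([[1, 2], [-1, 3], [-2, -3]]))           # {1: True, 3: True, 2: False}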


It's worst-case exponential, sure, but it's hard to find that worst-case.

It's not that hard. Just generate a boolean circuit for the multiplication of two 64-bit prime numbers and convert the circuit to a 3-SAT formula. I doubt any current SAT solver can solve that problem. If you could do it for 1024-bit primes a lot of cryptography would be toast.

EDIT: To be a bit clearer, I mean that the circuit takes two n-bit numbers, multiplies them, then compares the result to some known product of two primes. So by solving this circuit you factor the known integer.

EDIT2: Doesn't solving MaxSAT exactly imply that you can also solve SAT? If there's a SAT solver that can handle million-variable instances "easily", that's something I'd be really interested in hearing more about.


The satisfiability problem doesn't require that you provide the solution, only that you determine whether or not a solution exists. So you'd end up with a primality check, which is known to be in P:

https://en.wikipedia.org/wiki/AKS_primality_test


It's not a primality test. A primality test determines whether or not a single given integer is prime.

The circuit I described checks that the product of two arbitrary input integers is a specific known integer.

By solving the decision problem for each bit independently you can determine all bits of the 2 input integers and hence factor the known target.
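
Roughly, the reduction looks like the sketch below, where is_satisfiable stands in for any SAT decision oracle that accepts assumptions about the input-bit variables (hypothetical here, though real solvers do expose "solve under assumptions"):

    # Search-to-decision: fix the input bits of the multiplier circuit one at a
    # time, keeping the formula satisfiable, until the factorization is known.
    def recover_factors(is_satisfiable, formula, input_bit_vars):
        fixed = {}
        for var in input_bit_vars:
            if is_satisfiable(formula, assumptions={**fixed, var: 0}):
                fixed[var] = 0    # some satisfying assignment has this bit = 0
            else:
                fixed[var] = 1    # no solution with 0, so every solution has 1 here
        return fixed              # the recovered bits of the two factors

    # n input bits -> n calls to the decision oracle -> the factorization.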


Neat. I wasn't aware of the trick of peeling off a bit at a time.


Relatedly, even though Boolean Satisfiability [1] is NP-complete, there are SAT solvers that can solve huge practical instances fast enough to be useful.

1: https://en.wikipedia.org/wiki/Boolean_satisfiability_problem


Solving any coding challenge is NP-hard problem.

To prove that statement you'd have to carefully define your terms.

If "Develop a polynomial time algorithm to determine whether any given 3SAT problem is satisfiable" is a valid coding challenge then the statement is obviously true.

On the other hand I doubt that solving the types of coding challenges that actually appear in competitions can be proven to be NP-hard because that would prove that humans are much better at solving NP-hard problems than any currently known algorithm. I don't think there is any such proof (though there may be some weak evidence for the proposition).


It'd be useful-enough to have an AI that could be given problem descriptions of the first kind (problem-statements that describe problems which may or may not require original research to solve), and then manage to figure out whether they reduce to gluing a set of "known" solutions together (and then, perhaps, generate such solutions), or whether they require original scientific/mathematical research (at which point it could just shrug its digital shoulders, like most humans do at that point.)

And by "known", I don't mean "in the AI's knowledge-base"; the AI would properly at least be able to hunt down textbooks and journal papers, read them, and learn problem-solving approaches from them. In other words, the AI would at least be as able to "do science" in the way that a grad student is expected to "do science."


I think any algorithm that could do that would count as an artificial general intelligence (AGI), and I think the current consensus is that we have no idea how to create one.

What relation an AGI has to NP-hardness is unclear. I think that if P=NP in the sense that a practical algorithm exists for solving large (i.e., 10^9 variables) NP-complete problems, then AGI (even super-AGI) would probably follow. However, I don't think that's a necessary condition for AGI to exist.


Is that really true? There's a long list of important science which was discovered by accident while working on other things (serendipitous discoveries). In astronomy, you have the discovery of gamma ray bursts and the cosmic microwave background. Viagra is another well known accidental discovery.


If you do not work on an important problem, it's unlikely you'll do important work.

An anecdotal counterpoint would be that Einstein was working as a patent examiner in 1905, his "annus mirabilis"... (Of course it's not very useful to generalise Einstein's career.)


Be careful to differentiate working on important problems and being paid to work on important problems. I've heard (though I can't find any sources) that one of the reasons Einstein took the job at the patent office was because it was such a low effort job that he had time to work on his more important ideas.

Also note that some of the major breakthroughs in the past decade or two have been by mathematicians and scientists that are outside of the "system" (Grigori Perelman, Yitang Zhang (though maybe he's not as much of an outsider)). Maybe a bit outdated but Riemann and Galois also fall into that category.


Yes, I was basically trying to say the same thing as you.

The quote mentioned Bell Labs, but you don't have to be at a high-end research lab to work on important problems.


I thought that the point the grandparent is highlighting is that "important work" is not defined by what other people think is important, but by the fact that you have a unique angle of attack that suddenly makes a previously-unsolved problem tractable. Einstein had a unique angle of attack for all 4 of his annus mirabilis papers, and the photoelectric effect, Michelson-Morley, and Brownian motion were all known to be important unsolved paradoxes in the physics of the day.


> An anecdotal counterpoint would be that Einstein was working as a patent examiner in 1905, his "annus mirabilis"...

Presumably he worked on big physics problems while also performing his job as patent examiner.


Awesome. A quantitative version of Hamming's thought process seems to be 80,000 hours' Problem Framework.

https://80000hours.org/articles/problem-framework/


What about unexpected discoveries or breakthrough, surprising new uses of previous minor innovations?

Those who developed the minor innovation were "doing important work", as it turned out, without even realizing it.

My understanding is that these are common mechanisms through which important work is accomplished.


When you are uber-rich and have no real technical skills, you think that things get done by rich people demanding them and throwing money at them.


Well, that can help.


> 2. Build an agent to win online programming competitions. A program that can write other programs would be, for obvious reasons, very powerful.

The people on top desperately want us gone, and when it happens it'll happen so quickly you won't know what hit you. Software engineers need to recognize this common threat and organize (labor) sooner than later. Even more important is for all engineers to plan for an imminent future where developers are not paid like they are now, if at all. We have it good, but we will be automated away like everybody else, just a little later.


For simple data import / export tasks, simple CRUD, generic admin dashboards, I agree that those tasks likely will be solvable by AI in the not too distant future. As a matter of fact, I think it's somewhat bewildering we still need human developers for that kind of task because the tasks themselves don't really require a lot of intelligence. Problem is, the technologies and systems we use to implement them today still require a lot of human interaction and specialist knowledge.

Other than that, implementing an AI that is capable of understanding business problems and solving them via code probably is nothing short of artificial general intelligence. In that case we won't have to worry about not being needed anymore anyway because by then we'll either very rapidly have a post-scarcity economy or you know ... Skynet ...


My feeling as well. If standard CRUD coding gets automated nicely, my feeling is that many (at least some?) CRUD-type coders could simply move into a sort of AI business analyst role. To be honest, business analysis is already a fair bit of the CRUD-coder job. Current AI requires training data in order to learn what the rules are, which means it will most likely need a team to formulate the rules in a manner that the AI understands.

If AI gets to that level where the programmer is completely replaced, well, regarding "the top" that "wants us gone"... many of them would also be gone. There's no reason a general AI of that power could not automate (to give typical "the top" occupations) much of finance, law, health care, or even management / executive work.


We already have a post-scarcity economy. There is enough of everything to go around for everybody. Kropotkin talked about this in "The Conquest of Bread" in the 19th century. If history is a lesson, no degree of abundance will result in collective plenty.


The total output of the world economy is about $13,000 worth of goods and services per person per year. That's not enough to provide even just education and healthcare up to a standard that would be considered remotely acceptable by people in the west.

(For comparison: the average public school student in the US costs $12,000 to educate. And yet plenty of people will say that even that is not enough.)

But the good news is if we don't fuck up, that number will continue to grow nonlinearly, just as it has for many decades. We will live to see it cross the point where it really is high enough to give everybody a comfortable life.


> the average public school student in the US costs $12,000 to educate

Why does it cost $30,000 to educate a class of 30 kids for a month? The salary of the teacher should be <$5,000. Double, or triple that to include rent and other costs. It still comes out at half of $30K.


> no degree of abundance will result in collective plenty

Will it? When you look at most First World countries in general nobody there has to starve anymore.

In the Middle Ages only liege lords could get by without ever having to work. Nowadays, even middle-class people don't necessarily need to work till retirement age.


> nobody there has to starve anymore

Use of food banks has increased in the UK, and hunger is very much present: https://en.wikipedia.org/wiki/Hunger_in_the_United_Kingdom

This is a policy choice. For many voters, it's far more important that nobody "gets something for nothing" than nobody starves to death. This has resulted in the benefits system being increasingly punitive leaving some people without money for food for weeks at a time.


You and I have seen very different parts of the "First World", then.


Indeed, hunger is still a problem in the US. Our welfare system is so fragmented and conditional that there are numerous cracks where people of the wrong class/gender/neurotype/etc. can fall undetected and unhelped, and even in cases where resources are available, there is such a strong stigma against using them that some people don't even realize they have the option.


I feel there are a lot of areas where poverty exists in the US, and as a society, we should work to improve them. I disagree though, that 'hunger' is a problem we should be trying to address. If we're claiming that a class of people in society regularly don't get enough food, then I should see malnourished people regularly in the hospital emergency department where I work as a physician, and I never do, except in the case of alcoholics, other substance abusers, people with severe gastrointestinal disease, and some old people with severe dementia living on their own (in which case, the problem is their general inability to care for themselves, not inability to afford sufficient calories). In fact, the most common nutritional problem seen among the poor is obesity.

Fyi, all the hospitals I have worked in regularly over the last 8 years have been classified as 'medically underserved areas', which usually corresponds very closely with most other measures of poverty, so I don't think I am seeing an unrepresentative sample.

Edit: I should have made explicit, we should of course be putting a lot more effort into feeding those people who are malnourished around the world, of whom there are way too many.


i strongly recommend reading "the limits to growth" (don't remember authors, but it's easy to find) and "the culture" series by late mr Banks. we're nowhere near post-scarcity. in fact, i don't believe we can reach post-scarcity without a benevolent AGI that can deal with psychopaths.


For simple data import / export tasks, simple CRUD, generic admin dashboards

Tell that to the people who insist that Real Programmers™ always write these things from scratch, never use a library to help them with it, etc.


I hear what you are saying about CRUD, but then I look at Kendo and weep...


Why does Kendo (UI) make you weep?


1. You can't use standard UI automation test tools on it (easily).

2. This may just be my projects, but people tend to re-make excel on the web with it.

3. It's easy to get started coding with it, but it quickly turns into spaghetti filled with glass shards when you try to do advanced things with the controls.


They took a pretty good swing at solving CRUD with CASE tools. It was an interesting approach, but never had the flexibility to get by without customization.


I spent a few years of my life back in the 90s working on CASE tools at Oracle. The main useful remnant I now have of that time is an ability to spot when developers go too meta - i.e. instead of just solving the problem in hand they try to build a generalized solution that can solve even problems they haven't seen yet.


>I agree that those tasks likely will be solvable by AI in the not too distant future.

What experience or understanding about "AI" do you base your opinion on?


The people on top desperately want us gone

This is a cynical view of the world. Maybe the people "on top" simply want to improve their company's efficiency in order to return value to investors, etc. My point being it's economics, not some evil intent.

Software engineers need to recognize this common threat and organize (labor) sooner than later

And the point of that would be what? To protect our jobs by demanding that industry ignore and no longer pursue innovation? That seems like heresy for anyone in Tech.

If the day does come that AI starts writing code, the loss of our fat salaries and stock options will likely be the least of our problems; or perhaps the world will be void of problems altogether.


When automation comes and replaces us, we need to be owners of robots and AI, in other words to have some kind of capital and enjoy a revenue stream from it. If not robots, then at least we need to have land to cultivate for food. When nobody gives you a job any more, you either cultivate your own food, or associate with others to become investors in the new tech.

Those who already have capital and can invest it smartly might fare much better than those who only rely on BHI, which is at the whims of politicians. A person need not be super rich, if she can associate with others to buy land for agriculture or robots for manufacturing. Robots are analogous to land. You gotta have one or the other :-)


Why should we fight that? That's no different from taxi companies fighting Uber, or how Intuit lobbies congress to avoid automatic filing of taxes. Hampering progress for the greater good just because it harms you in the short term is never a good idea.


This wouldn't harm me in the short term alone, it would be a lost career and a life catastrophe. A taxi driver can work as an Uber driver. I cannot become an automated programmer. So I don't give a damn about the "greater good" in this context, and question that premise also: Sibling mentions getting used to everybody being retirees, but what I see is collective poverty with token conveniences. Time will tell, etc.


A Taxi driver can't become an Uber driver when all Uber cars are self driving, which will happen and likely happen far sooner than programming gets automated away.

HN doesn't bat an eye at Uber, AirBnB, Netflix, Amazon disrupting and taking away jobs. But it's interesting that once programming jobs are threatened, some become defensive.


In any case, HN is not one person, and you will not find me defending those businesses.


Are you saying you don't think Uber, Amazon, etc should exist? Why? Because they are disruptive and potentially taking away jobs?


That's not what you said. You said HN doesn't bat an eye at Uber, AirBnB, etc. I personally think the way those two companies in particular flaunt laws is unethical, and I think they should be punished until they start following them. That's not the same as thinking they shouldn't exist.


I think that's a different issue. And I also agree with you, I personally don't give AirBnB business because of ethical issues. But, I was entirely referring to how they are disrupting things.


How does Amazon flaunt the law?


"Those two companies in particular" were Uber and AirBnB, the only two companies I named in my comment. I never said Amazon flaunted the law.


If I had an AI that could write code better than I it would be AMAZING. As a solo game dev I could be 1000000 times more productive and spend all of my time designing and balancing the game with the help of the greatest dev of all time!


Yeah but: how are you going to turn your labor into "food" and "shelter"?


If he's a game dev, he could sell his game?


Retirees aren't the richest bunch, true. Collective poverty isn't an unlikely outcome for the vast majority of humans. Still, "poverty" need not be uncomfortable when "token conveniences" like a private home, food, entertainment, and opportunities for (even if economically useless) creativity and learning are in abundance.

An interesting analysis on a world with increased automation via emulated brains of the best of humanity: http://ageofem.com/


> Hampering progress for the greater good just because it harms you in the short term is never a good idea.

I'd love to hear your thesis for this, in a way that doesn't whitewash away the negative effects on the individual. If you're going to claim we should all be throwing ourselves on the burning pyre of progress no matter what, I'd love to hear a more compelling argument than "never a good idea".


Not OP, but I'd like to offer up a reframing. Progress gives us the opportunity to grow, to adapt, and to learn new things. Do you want to be the person who is stuck as the person they were when they first entered the workforce? Wouldn't you rather be someone who remains present in whatever moment in history it is now, and constantly redefines themselves according to the environment they are in?

There's no reason why you have to think "I'm a software engineer, therefore any attack on software engineers is an attack on me." You could, just as easily, define yourself by "I'm a person. I do software engineering because the money is good right now and that is what the economy happens to need right now, but if in the future society ceases to need software engineers, I'll retrain with whatever skill is highly valued then."

I remember coming out of college, happy to learn about the world, and thinking how sad it was that people's identities were so wrapped up with their jobs that they were broken when their jobs were eliminated. I felt that shift happening soon after I turned 30, where I started to get just a little bit too comfortable and too proud of what I was rather than what I did, and then quit so that I'd have that opportunity to grow again.


You always have these great insights - you should blog! Or at least tell us what you have been reading...


>I'll retrain with whatever skill is highly valued then.

Software development has been invaded by mouth-breathers who don't give a shit about what they're doing, and I don't see a way to switch careers as an adult chasing money without becoming one of those.


Well, it might involve respecting fellow human beings and acquiring a bit of humility.

And I don't mean that facetiously - interpersonal skills matter. A lot more than pure technical skills, because they allow you to leverage groups of people to achieve bigger goals.


One way to do that would be to give a shit about what you're doing even after you switch careers.


But I'm not whitewashing away the negative effects on the individual. We'd all be personally very screwed if programming were automated. But I've never seen anyone on HN ever stand up and defend truck drivers, lawyers, taxi drivers, hotel workers, factory workers, etc., all of whom currently have their jobs threatened to various degrees. The double standard is what I'm complaining about. You can't have it both ways.


I'm not the person you asked. But for me, I think we have for too long "fought" pure progress in the name of social welfare and protection. So much so that that effort affects and pervades every facet of life. Just look around you and think how many jobs would not exist, or how many generations of people would not have made it this far, if it weren't for societal-level intervention on the part of a state. The net result of that is that if we were to now have some sort of revolutionary piece of progress, it would displace millions if not billions of individuals. Of course that is bad.

I know it sounds cruel to talk about people like a resource or as an animal, but that is precisely how I see society as treating lots of us in the name of a greater good. Under the guise of social-welfare we've increased our numbers to levels that never would have occurred naturally by us simply taking care of the really needy.

So now we're stuck in this predicament. Either we progress our society, and potentially affect a lot of people negatively. Or help everyone slightly-struggling and below out by preventing a technological-revolution so we can grow a bit at a time, and delay the problem for the next generation.


Maybe it's never a good idea at a species/progress level. But it's acceptable that trying to stop progress is a good (albeit selfish) idea at the individual level.


I have no interest in organizing to artificially give myself a job that isn't necessary. In fact that seems insane.


I think the better way to avoid our replacement is to think about our non-coding contributions to our companies.

I'm a full-time software engineer, but I spend maybe 50% of my day implementing code. I spend a lot of time figuring out trade-offs, exploring edge cases, and verifying that the business/marketing people know what the side-effects of a given feature will be. There's also a lot of flag-waving to make sure that small problems don't become big problems.

I've run into the "that should be able to be automated" hand-wave of a thought before. I think it's always done by people who haven't actually spent time thinking about the edge-cases and all the little decisions that are put into a final software project. Sure, you can hand-wave and get a CMS, but you're going to be unpleasantly surprised when the defaults don't match your subconscious expectations.

Here's how software engineers could retain their position in a world where our code is automatically generated:

- Understand and communicate the trade-offs of different solutions

- Embrace product design-- a lot of us unwittingly become novice visual designers and product designers during our work, and we should embrace that domain knowledge that we learn, rather than being frustrated that we're being taken away from our primary task, coding.

- Explore edge cases, and explore new ways of doing things

- Understand and get involved in the business process!


The capital holders in all industries would love expensive human labor to be replaced by cheap automated labor. If programming AI is created, no labor organization could save software engineers, just as it would be ridiculous to think horse and buggy drivers or telegraph operators could have stopped the march of progress.

The bigger issue is what we've already started to witness: automation leads to accelerated accumulation of wealth and resources among those who already have the most capital. Historically, corporate profit and labor demand went up or down together. Now they're diverging: corporate profit grows while demand for labor shrinks. The spoils of corporate success go to those who own the tools of automation, and others lose their jobs and income.

In any case, OpenAI's vision here is much bigger than making software developers obsolete: they want to study (or create) AI that can improve itself (i.e., a singularity scenario).


Almost every reply to my comment talks about saving jobs, but I never even brought that up. Organization is important because you want to be left with something more than a seat at the unemployment office when your job does disappear.


Apparently what you wrote has given everybody this impression. What are you really suggesting?


Pensions. There will likely come a time when most of us are no longer employable. We have (or will have) put our lives into this and made a small group of people very wealthy. Without negotiating from a position of relative strength, we will be left with nothing when we are no longer necessary.


> Software engineers need to recognize this common threat and organize (labor) sooner than later.

Has organized (labor) ever saved organized labor's job?


From obsolescence? Never. The only thing it does is anticipate it.

Organized labor is useful against a whole lot of problems, but not this one.


The solution to everyone being automated away isn't to halt the advances of automation and inflict inefficient make-work on the economy, it's to get used to the idea that we will all become effectively retirees.


If we can be automated away, we will be. No amount of organizing is going to stop that. And even in some alternate universe where it could, why would you want it to? Automation is what we do.


I see this coming from a much smaller perspective as well.

If one reflects back on the applications they've built in their career, I suspect a number of those applications are now available generally as packaged software with customizable components.

E.g., the dawn of the web saw many publishing and catalog systems being built. Now you don't need to; you can simply grab a commercial or OSS package and customize it for your needs. The same is true for document management and workflow.

This hasn't impacted jobs yet because we are still early in this industry, and growth is still outpacing this commoditization. But, yeh, I see the day where software developers become quite rare and software customizers are common.

The tricky challenge is organizing our labour efforts. It's already easy for IT organizations to outsource development to cheaper markets, I fear labour organization would accelerate that movement.


Imagine the potential of a world where anything can be built nearly instantly. If a computer can program itself, we can concentrate on trying to come up with useful things to make, instead of worrying about what we're capable of making. I have a really cool idea for a video game, but it's not going to be made because I don't have time to build it... but if a computer could make it. Wow. I become an imagineer, not a software engineer.

Step it up a level further: if computers could think of programs to build themselves (which they probably would have to if they are going to be anywhere close to practical), that's when the "singularity" truly starts to take off. At the speed that computers operate, programs would very quickly go beyond what humans are capable of.


Organize to do what, exactly? If software engineers became obsolete, why should anyone listen to an organization of them?

It's not in any danger of happening soon, but if/when it happens, you might as well join the union of street gas lamp lighters.


I actually think that there has been a long-term awareness of this fact in our industry, for decades now, and it's one of the driving, motivating factors behind the entropy of the field. The OS-vendor guys know that if they improve their OSes, the systems-programming guys will get automated away, and if they get automated away, the applications-programming guys will be next .. so, subtle imperfections and flaws in the way things operate at a system level are allowed to persist, such that further downstream there are still opportunities to perpetuate the need for hands-on engineering. I can think of a few examples of where it feels like OS vendors/systems developers intentionally hobbled the functionality of their environment in order to make the long-term viability of the industry more resilient ..

So, I think it'll be a long time until 'the people at the top' get what they want. Until they start coding a replacement for Dropbox or SublimeText that will be the end-all of filesharing/file-editing, we'll always need another IPFS, another Atom, etc., etc.


The people on top? How about the software engineers themselves? As a software engineer myself I look forward to the day I'm forced to change careers due to obsolescence of my skills. I can't wait to see what people can achieve when anyone with an idea can make it happen with such ease.

Just in case, since I think I'm up against Poe's law here: I am, in fact, 100% serious.


I'm going to venture a guess that programming will be among the last jobs to be automated away. It's not clear that there will be a career to change into.


>>The people on top desperately want us gone, and when it happens it'll happen so quickly you won't know what hit you.

Isn't the kind of automation that, given a problem 'X', produces a solution 'Y', called AGI?

Well then you have little to worry about. If AGI is indeed invented you have bigger things to deal with than worrying about your job.

In any such eventuality 'people at the top' are likely to be eliminated rather more quickly than us. Because an AGI with huge resources can make far better decisions than any CEO ever can.


If everybody else is automated away, who will buy all of these products and services? I don't suppose that AI will need anything else besides electricity and CPU time.


You aren't thinking at the right scale. If a computer program can do any complex task humans can do, worrying about unemployment is the least of our problems. Such a world would be so different than our own. It may have an entirely different economy, and it may not have people at all. I don't know, but I think it's equivalent to a caveman predicting what the effects of steam power would be.


If we end up being fully automated away, I think the least important thing anyone can think of is that we will be paid less. I think that something like this is unavoidable but quite dangerous - singularity, etc.

The upside is that I think it would be really accessible for anyone to have their own personal army of programmers. Although that's also quite dangerous.


Personally, I think you overestimate how difficult it is to translate an arbitrary set of human business rules to an API/website. And how arbitrarily the business side wants to change things without a concrete understanding of what they want the result to look like.


Programming jobs are protected by the halting problem. It has been proven that no program can detect whether other programs halt or not (i.e. go into an infinite loop or complete). I paraphrase, but this is the general idea. As long as this is true computers will not be able to program themselves except in very limited ways.

The only way around the halting problem is to use something other than computation as we know it, i.e. an entirely new kind of machine based on different principles. And no, quantum computers do not solve the halting problem since they are still Turing-complete machines.


The halting problem is often misunderstood as "computers can never detect if a program halts". It actually states that "computers can never detect if an arbitrary program halts".

There is a fairly large subset of useful programs that can be proven to halt. Anything that uses straight-line control flow can be proven to halt. So can anything with that plus conditionals. So can that plus foreach loops, as long as iterators do not reflect updates to their underlying collections. Add a "forever { ... }" construct and you can prove that the program will not halt; in combination with the other constructs, you can prove liveness on each request handler while also guaranteeing that the server itself will never go down.

The two constructs you have to watch out for are loops that mutate state used in the conditional and unbounded recursion. Even for these, there are techniques to increase the set of programs that can be reasoned about, eg. using dataflow analysis to identify which state is mutable and preventing it from being used in conditionals or tracking data & codata through the typesystem.

http://blog.sigfpe.com/2007/07/data-and-codata.html

Such a language would not be Turing-complete; you won't be able to write an interpreter for a Turing-complete programming language in it. But the majority of common business problems don't require an interpreter for another programming language; most of them focus on storing data, triggering events, or computing functions of data.
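
To make that concrete, here is a crude syntactic sketch of such a subset for Python (just whitelisting constructs that obviously terminate, nothing like the dataflow or codata machinery above):

    # Accept only straight-line statements, if/else, and for-loops over visibly
    # finite iterables; reject while-loops and calls (which could hide recursion).
    # Everything this accepts provably halts; plenty of halting programs are rejected.
    import ast

    SIMPLE_STMTS = (ast.Assign, ast.AugAssign, ast.Expr, ast.Pass)

    def provably_halts(source):
        try:
            tree = ast.parse(source)
        except SyntaxError:
            return False
        return all(stmt_ok(s) for s in tree.body)

    def stmt_ok(node):
        if isinstance(node, ast.If):
            return not has_call(node.test) and all(stmt_ok(s) for s in node.body + node.orelse)
        if isinstance(node, ast.For):
            return finite_iterable(node.iter) and all(stmt_ok(s) for s in node.body + node.orelse)
        return isinstance(node, SIMPLE_STMTS) and not has_call(node)

    def finite_iterable(node):
        if isinstance(node, (ast.List, ast.Tuple, ast.Set)):
            return True
        return (isinstance(node, ast.Call) and isinstance(node.func, ast.Name)
                and node.func.id == "range"
                and all(isinstance(a, ast.Constant) for a in node.args))

    def has_call(node):
        return any(isinstance(n, ast.Call) for n in ast.walk(node))

    print(provably_halts("total = 0\nfor i in range(10):\n    total += i"))       # True
    print(provably_halts("while n > 1:\n    n = 3*n + 1 if n % 2 else n // 2"))  # False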


I commented in a similar vein on a piece that said that smart contract environments should never use Turing-complete languages because of the undecidability of program properties: https://news.ycombinator.com/item?id=11942015 (That piece seemed to have a misconception that you can never prove properties of programs, rather than that you can't always prove properties of programs.)


The halting problem also makes an assumption on the program's size. In practice we probably only care about programs under, say, a billion petabytes (or any other finite limit you can think of). In theory you can have a Turing machine that solves this regardless of the program's structure.


If you mean "programs that can only use a billion petabytes of storage", then that's true, but if you mean "programs whose code is less than a billion petabytes long", it's not true. (Someone recently calculated a result that I think can be interpreted directly as an actual decidability bound, and it's dramatically shorter than that.)


I meant that there exists a TM which solves HP for programs smaller than a given size. This makes it computable. Now, just because we can prove the existence of a TM, doesn't mean we can find it. It's true that for programs which use a finite amount of storage we can actually describe the algorithm for the TM, but that's not what I meant.


This seems to be a weird issue conceptually:

http://mathoverflow.net/a/153106

Because the list of correct answers for a finite subset of the Halting Problem is finite, the Turing machine you mention does exist, but we not only can't find it, we can't know when we've found it!

Where the Math Overflow post mentions that "experts could compute the particular value of n", there has recently been such a bound published in a thesis, such that we can be confident that we can never construct or recognize the solutions beyond that point (using a particular axiomatization of mathematics).

You referred to the idea that "you can have a Turing machine that solves this", and we can agree on that with the caveat that, above certain problem instance sizes, you can't know or verify in any way that you have such a Turing machine!


Yes, this is absolutely correct. If you give up on Turing-completeness you can prove that a subset of programs halt. A computer could search this infinite space for programs that solve a particular problem. However, there may not be a program P in this space that solves the problem in question so the problem of finding P in the subset of provably haltable programs does not necessarily halt.

My argument stands. Unless the halting problem is overcome, there will still be jobs for humans to write Turing-complete programs.


You merely defined the halting problem. You did not argue the following:

Turing machine X can produce programs that meet certain specifications => Turing machine X solves the halting problem.


I'm not trying to make that argument.


Humans can't solve the halting problem either.

Here's a nice simple one for you:

    def collatz(n):
        # the Collatz ("3n + 1") iteration; nobody knows whether it always reaches 1
        while n > 1:
            n = n // 2 if n % 2 == 0 else 3*n + 1
Is the above program guaranteed to halt for all integer inputs?

In practice, programming doesn't require solving halting problems. We write programs that are on average easy to analyze, especially if you're calibrating the scale with busy beavers. There's no fundamental reason that a computer program can't collect requirements, collect clarifications, and translate those specs into executable code. Clearly it's hard (How do you do the translation? Optimizing Prolog isn't easy! And how do you avoid asking for millions of things that humans take for granted as obvious?), but I don't see anything that makes it impossible.
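
The pragmatic shortcut looks like this: don't decide halting at all, just impose a step budget and treat anything over budget as "don't know" rather than "doesn't halt".

    def collatz_halts_within(n, max_steps=10_000):
        steps = 0
        while n > 1:
            n = n // 2 if n % 2 == 0 else 3 * n + 1
            steps += 1
            if steps > max_steps:
                return False          # budget blown: inconclusive, flag it
        return True

    print(all(collatz_halts_within(n) for n in range(1, 100_000)))   # True (so far...)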


If we take the simple case of genetic algorithms, we already know that it is possible to brute-force the problem of programming.

The halting problem doesn't really matter that much in this context. Just spawn up a bunch of threads that churn away at the problem, using random mutations, and any that go on too long can just be considered flawed, regardless of whether they have any redeeming qualities.

Then you select the winners, based on the selection criteria, and churn away on some more mutations of those new variants.
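
A toy version of what I mean (mutate small expression trees, score them against examples, and throw away anything that exceeds its evaluation budget instead of deciding whether it halts):

    import random, operator

    OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}
    EXAMPLES = [(x, x * x + 1) for x in range(-5, 6)]        # target: x*x + 1

    def random_expr(depth=3):
        if depth == 0 or random.random() < 0.3:
            return random.choice(['x', random.randint(-3, 3)])
        return (random.choice(list(OPS)), random_expr(depth - 1), random_expr(depth - 1))

    def evaluate(expr, x, budget=50):
        if budget <= 0:
            raise RuntimeError("out of budget")              # the "ran too long" case
        if expr == 'x':
            return x
        if isinstance(expr, int):
            return expr
        op, a, b = expr
        return OPS[op](evaluate(a, x, budget - 1), evaluate(b, x, budget - 1))

    def fitness(expr):
        try:
            return sum(abs(evaluate(expr, x) - y) for x, y in EXAMPLES)
        except Exception:
            return float('inf')                              # crashed or over budget: discard

    def mutate(expr):
        if not isinstance(expr, tuple) or random.random() < 0.3:
            return random_expr(2)                            # replace a subtree wholesale
        op, a, b = expr
        return (op, mutate(a), b) if random.random() < 0.5 else (op, a, mutate(b))

    population = [random_expr() for _ in range(300)]
    for generation in range(100):
        population.sort(key=fitness)
        if fitness(population[0]) == 0:
            break
        survivors = population[:30]
        population = survivors + [mutate(random.choice(survivors)) for _ in range(270)]

    best = min(population, key=fitness)
    print(best, fitness(best))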


Sorry, multiple threads will not get you around the halting problem. The brute-force genetic algorithm you describe also suffers from the halting problem. Just look at the algorithm as a whole: say you are searching for a program P and you have some criteria for recognizing it, including a maximum run-time. There is no way to know if the genetic algorithm will ever find P or will itself just run indefinitely.

In general, you will not be able to easily find P using a genetic algorithm (which amounts to a random walk through the space of all programs) even with many threads. The problem is that the algorithm is exponential in the length of P. It only works if you only consider a very limited set of possible programs. E.g. very short programs or programs which are only slight variations on a known program.


There exist equivalent problems for human minds, though. E.g. the Riemann Hypothesis - people keep trying to find the proof, but no one knows if the proof exists, so it's not a given that the search will ever finish.


That's the only way to totally solve it. You can get pretty far with imperfect shortcuts. Maybe far enough that it doesn't matter.


This profession is probably the only one that automates itself. It has always been like that for developers, yet they're still in high demand.


Seriously, why would you need programmers if the machine does it better (this needs to be assessed somehow first)?


I'm acutely curious how we can attack the problem of detecting undisclosed AI breakthroughs, particularly those made by organizations uninterested in ever revealing them. Commercially, yes, many organizations want to boast about their AI, and we may find those AIs via games or news. But many organizations involved in national security and financial markets will want to hide their advantage. I would love to hear more about how Open AI thinks this problem can be approached.


By analyzing sequences of amazingly brilliant developments/products/inventions/movements performed by a single company or a conglomerate of linked companies, which are a few levels above what competitors can deliver.

I am not from OpenAI though.


If you came up with AI algorithms that gave you a massive edge on financial markets, presumably you would also be intelligent enough to execute the strategies discreetly, through highly distributed trading entities, just below levels that raise eyebrows, so as not to be found out.

Similar problem to bot detection on online poker networks, except much harder on real financial markets, and probably not something you can regulate.

Try to understand how exactly Renaissance or Two Sigma are making money with their algorithmic trading, from the outside, I don't think you'll have much luck.


By that measure, startups are powered by AI and large corps are not. By which I mean, so many variables go into successful product development. Google arguably has the strongest AI in the world, but it has been remarkably bad at introducing new products recently, so those two phenomena seem decoupled to me...


And Google definitely has a few AI-powered projects that others can't match. Google is not a target of that goal, simply because they always disclose their AI achievements.


This is the biggest problem IMO. Just imagine a couple of AIs that systematically discover a few exploitable loopholes or trading algos all at once; it would be like Black-Scholes exponentiated. I doubt outside research would be able to infer anything from available data.


Step one: Stop calling equations "AI".


Step zero: Define "intelligence" consistently and unambiguously.


This reminds me of an excellent two post series by Tim Urban about the upcoming AI revolution: http://waitbutwhy.com/2015/01/artificial-intelligence-revolu...


For #3, wouldn't the AI defending against hacks need to be a hacking AI itself? It would need to find vulnerabilities in the systems it protects in order to plug them before other AI's find them.


Exactly[1].

The moment I saw this project start, the first thing I thought was, "Isn't this how all terrible things are created, by trying to avoid having them be created?".

I wonder if the long-term strategy will be called Mutually-Assured Obsolescence™.

1. https://en.wikipedia.org/wiki/Skynet_(Terminator)


Yes and no. The analogy of 'attack' and 'defense' in hacking comes from the military. So to extend that analogy, defensive missiles are still missiles, but they are unquestionably different from offensive missiles.


The point is that in this analogy, no missiles exist yet. Nobody's gone to the effort to figure that out. And then someone says "somebody might figure this out and weaponize it! we gotta figure out how to neutralize it!" and then they do the fundamental research to figure out how to make a missile at all. Now you "just" have to swap out the guidance package and instead of a defensive missile, you've got an offensive one.

Making an AI that finds vulnerabilities is the hard part. Doing an offensive thing or a defensive thing is much easier if you've got an AI that's just spewing out vulns. They don't need to be tightly coupled at all.


> 4. A complex simulation with many long-lived agents.

Maybe someday we'll discover that this was exactly how the ancestor simulation we're living in started.


It's funny to see grown adults in tech taking the simulation idea seriously just because Elon Musk said it. I was interested in science fiction in high school and thought about these things, but I concluded they were worthless.

Why? It is ontologically no different from 1) solipsism, a philosophical idea that even most philosophers considered worthless centuries ago, or more loosely 2) belief in a higher deity who created the world.

It's completely unfalsifiable. The answer to the question clarifies nothing about our understanding of the world [1] and furthermore has absolutely no impact whatsoever on how you ought to live your life.

[1] Even if you conclude we live in a simulation, we will never know for sure anything about the nature of the simulation or the nature of the world outside the simulation.


You can expand your reasoning to include all religion, while you're at it. And anything anyone does that evidences a belief in something irrational or unfalsifiable.

The thing is, you're not wrong. You're just kind of mean and arrogant to reduce a multiplicity of nuanced worldviews that many people care about very deeply to simple meaninglessness.


Let's not resort to calling people names.

Your argument is ironically the reductive one. I have complete respect for people who choose to believe in an organized religion. But it remains the case that whether or not you can prove the existence of a higher deity, that in and of itself has no bearing on how you should live your life. You can disprove the existence of god and still choose to believe in religion. You can prove the existence and still choose not to believe in any religious system. So proving the existence is worthless.


Fair enough; I don't know enough to say that you are arrogant personally, nor would that be productive. What I meant to express is that the comment itself is an arrogant statement because it's self-centered. You directly compared an idea people enjoy debating about and believing in to idol worship (Musk) and something you dismissed as a teenager in high school.

This comment is actually a much more reasonable expression of your point than the earlier one - you mentioned you respect the people who believe in religion and revised your claim to state that proof of the unfalsifiable is meaningless (which is nearly tautological). That logic is sound (though you could make an argument that attempting to prove something unfalsifiable when you're emotionally invested in it and it's otherwise harmless could be fulfilling).

Your earlier comment had a different tone and sentiment; namely, that belief in something unfalsifiable has no impact on someone's life, and that you can't understand why grown adults would bother with it. That was the specific sentiment I found to be offensive.

For what it's worth, I'm agnostic.


1) Discussion based on rationality, of those topics, is worthless. Elon Musk says he is almost certain we are living in a simulation, because of probabilities.

2) You're the one calling it idol worship. I'm pointing out that most people would not take this idea seriously if it weren't Elon Musk saying it - and I still believe this is true.

3) The high school comment is pretty relevant. An intro philosophy class will usually mention Descartes, who thought about this stuff 4 centuries ago.

4) I only said it's funny that grown adults take the simulation hypothesis seriously. Your statement was the one that extrapolated that to religion and found offense.


Ironically, you're the one engaged in seriously debating the merits (or lack thereof) of the simulation hypothesis with people in this thread.

I was merely joking, it's simply pretty funny to imagine a full-grown world of self-aware actors developing inside the OpenAI plan and then debating about the simulation hypothesis.


The negative ("we are not in a simulation") is the falsifiable version. It could be falsified by discovering a flaw in the simulator.

Bostrom's formulation has a properly constructed hypothesis: see http://simulation-argument.com/simulation.pdf


It can't. We cannot distinguish between flaws and non-flaws. What is a flaw anyways? That is ill-defined - we are conjecturing about what will or will not be present in a reality that we have no indication will resemble our reality.


> Why? It is ontologically no different from 1) solipsism

Sure, but it is vastly interesting to think about, and imagine about and have meandering ideas about. There's probably also trappings of psychoanalysis in here somewhere.


    "It's the question that drives us, Neo. It's the question 
    that brought you here. You know the question, just as I 
    did."

    "What is the Matrix?"


I tried it for a course in college, "Simulation of Complex Systems". Our goal was to get genetic programs to evolve tribe or herd behavior. After fixing the energy conservation bug that made it evolutionarily successful to immediately eat your own children, I ended up intelligently designing some basic DNA on the night before our presentation.


This particular subject interests me to no end.

Here are a few resources that might be helpful to anyone interested:

https://www.complexityexplorer.org/

http://web.mit.edu/redingtn/www/netadv/Xcomplexit.html

http://tuvalu.santafe.edu/~simon/page6/page6.html



Evolutionarily beneficial to eat your own children? Could you explain how?


It didn't work in the long run but it was a common mutation in the early game, where two creatures would keep breeding and cannibalizing because the strange simulation environment gave them life energy for a long time.


I played a similar game on the Tandy computer in the 80s.


If we're living in a simulation and our creators see that we're running our own simulations, they may terminate our process to avoid nested virtualization.


But by doing so they could invoke the wrath of the grandparent simulation. Logic leads to an implicit understanding that all recursive simulations must be allowed.


Why would our creators want to avoid nested simulation?


The singularity would probably overheat the host computer.


Possibly. Our simulation might be run on hypercomputers whose performance isn't affected by that sort of complexity. Heat might not even exist in the host universe's physics.


System going down for maintenance now


It may have been down for maintenance for millennia since your comment, and just come back now, but none of us noticed!

Insert your favorite Matrix déjà vu reference here.


They probably already figured it out if they can build a simulation this huge.


That we're living in a simulation is taken for granted. On the other hand the idea that we can comprehend the nature of the simulation or its creators should not be taken for granted.

Does a glider in Conway's Game of Life comprehend the hardware it's running on? Does it know the mind of Conway?


Programs writing other programs is an interesting concept, but isn't it naive in most cases?

If program X can write program Y, program X is probably unnecessarily complex with regard to solving problem Y. Program X could simply execute program Y's operations at their abstract level, which would save it all of the output-language understanding and translation. The only tangible difference between the two would be the ability to "save" the commands, at which point program X is a scripting language and the input commands are the script.

Wouldn't this solution be better in most cases? If program X at runtime requires significant configuration to produce program Y but those configuration inputs are not saved, then if program Y were to be changed, program X must be rerun with all of the configuration input again. In the scripting-language scenario, the script is modified.

For a programming language to receive something simpler than the language it is written in, the ideas must be abstracted to reduce the amount of code required to perform the task. For example simple addition of multiple elements can be reduced to a "sum" function. The "programmer" using this must still have knowledge of the sum function and its use. Abstractions require the system to have knowledge of that domain, so with each new domain a whole other set of abstractions and complications are introduced.

So if a machine were to write the entire program, how does it differ from machine learning? While machine learning might be often used for smaller algorithms, this case would apply ML to the entire problem as sets of smaller problems. If a human must still program in some abstracted sense, isn't it just a scripting/programming language? How would a component of this suggested project not fit into the "machine learning" or "scripting/programming language" categories?


#2 is incredibly ambitious, but I expect it should be doable by a lone grad student over the course of a summer. Something like this would range in complexity from completely redefining programming languages (drastically lowering the skill required for complex programming tasks and thus feeding into #1) all the way up to general intelligence.

The tests referenced in [https://arxiv.org/ftp/arxiv/papers/1604/1604.04315.pdf], currently dominated by information retrieval techniques, seem more realistic and still feel hopelessly far away.

#3 should also allow looking into the use of AI to break into systems, in addition to detecting and defending against AI breaking into systems. A prototype for #4 could create an environment where trilobyte fuzzers could co-evolve into fearsome Artificial intelligences. The AIs that break into systems need not be as smart as the systems being broken into. Just as viruses are much less complicated than eukaryotic cells and yet are capable of wreaking great havoc against mammals, might this be a possible mad deterrent against out of control AI, developed by #1's opponents who use #2's breakthroughs? No AI could plausibly be bug free.

See also: Schild's Ladder.

#4 It would also be cool if humans were allowed to visit and interact with this virtual world.

See also: The Lifecycle of Software Objects

These projects have the same feel as: A PROPOSAL FOR THE DARTMOUTH SUMMER RESEARCH PROJECT ON ARTIFICIAL INTELLIGENCE, whose #4 and #1[+] were the only ones to see much progress. It's difficult to say whether we are 10 or another 50 years away from making meaningful progress on OpenAI's list, but I'm glad they made it because it seems somewhere along the past 60 years, we forgot how to dream.

[+] #3 warrants an honorable mention.


#2 might be technically doable: I feel like there has got to be a way to "cheat" this. Maybe make something that Googles similar problems, automatically signs up for contests under several names, and submits candidate code snippets as solutions. Your program could run the snippets in sandboxes and check that the inputs/outputs match the examples given for the problem. Do this on a large enough scale and you might eventually win a few just by dumb luck plus your skill at programming something to detect how well random snippets of code fit a particular problem.
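As a rough sketch of just the filtering step (my own illustration, not part of the original idea; the file names and sample pairs are hypothetical placeholders, and it assumes a `python` executable on the PATH):

    import subprocess

    def matches_samples(snippet_path, samples, timeout=2):
        """Return True if the candidate program reproduces every sample output."""
        for sample_input, expected_output in samples:
            try:
                result = subprocess.run(
                    ["python", snippet_path],   # run the candidate in a child process
                    input=sample_input,
                    capture_output=True,
                    text=True,
                    timeout=timeout,            # kill snippets that hang
                )
            except subprocess.TimeoutExpired:
                return False
            if result.stdout.strip() != expected_output.strip():
                return False
        return True

    # Hypothetical scraped snippets and (input, expected output) pairs.
    candidates = ["cand_a.py", "cand_b.py"]
    samples = [("1 2\n", "3\n")]
    survivors = [c for c in candidates if matches_samples(c, samples)]

A real attempt would of course need proper sandboxing (containers, resource limits) rather than a bare subprocess call, plus some way of scoring near-misses.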


I think that sort of solution would work better as an educational tool or as part of an augmented programming environment, but I don't think it fits within the spirit of the task. It also would hit a ceiling really early on, if the parallel solutions to the Allen AI Science Challenge are any hint of how things might turn out.


You seriously think #2 is so easy a lone grad student could casually do it over a summer? ......


It was tongue-in-cheek. The original AI researchers posed the problem of general intelligence to a single grad student and thought he would have it completed by the end of the summer... 50+ years later, here we are with no solution.


#2 - Build an agent to win online programming competitions.

Given that online programming competitions tend to fall into distinct classes (dynamic programming, algorithmic challenges, graph problems, string problems), this seems maybe more "solvable" with a non-AI implementation?

Imagine you have a framework that can spit out sub-pieces of a solution that work in a Unix pipe-like way (e.g., sort the graphs | find strongly connected components in each graph | output the graph with the lowest number of SCCs).

Then you need to grind out some type of expert system to replicate the competitive programmer who currently chunks together those framework pieces.

Of course, this probably doesn't work at all for something like Kaggle or stockfighter.io, where parsing the instructions and observing a dynamic system are key parts of the hacking. I am thinking more of SPOJ or similar...
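As a toy sketch of what such a pipe-like framework might look like (my own illustration; the stage functions are hypothetical stand-ins for real canned pieces):

    from functools import reduce

    def pipe(data, *stages):
        """Thread data through each stage function, like cmd1 | cmd2 | cmd3."""
        return reduce(lambda acc, stage: stage(acc), stages, data)

    # A graph is a dict of adjacency lists. count_sccs is a placeholder here;
    # a real piece would run Tarjan's or Kosaraju's algorithm.
    def count_sccs(graph):
        return (graph, len(graph))

    def pick_smallest(scored):
        return min(scored, key=lambda pair: pair[1])[0]

    graphs = [{1: [2], 2: [1]}, {1: [], 2: [], 3: []}]
    answer = pipe(graphs,
                  lambda gs: [count_sccs(g) for g in gs],  # score each graph
                  pick_smallest)                            # keep the lowest count
    print(answer)  # {1: [2], 2: [1]}

The plumbing is trivial; the hard part is the expert system that decides which pieces to chain together for a given problem statement.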


There's been significant progress in data science competitions (Kaggle etc).

Not so much parsing of the problem itself, but building generic frameworks where, given data and a target variable, they will work out the type of problem (e.g., classification vs. regression) and develop a predictive model without human intervention.

ICML has had workshops for the last two years on this problem[1][2]

[1] https://sites.google.com/site/automl2016/

[2] https://sites.google.com/site/automlwsicml15/
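A minimal sketch of just the "work out the type of problem" step, assuming scikit-learn (this is my own toy illustration, not code from those workshops):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
    from sklearn.utils.multiclass import type_of_target

    def auto_model(X, y):
        """Choose classification vs. regression from the target, then fit a model."""
        kind = type_of_target(y)  # e.g. 'binary', 'multiclass', 'continuous'
        if kind in ("binary", "multiclass"):
            model = RandomForestClassifier(n_estimators=100)
        else:
            model = RandomForestRegressor(n_estimators=100)
        return model.fit(X, y)

    # Hypothetical toy data: a continuous target, so regression is selected.
    X = np.random.rand(100, 5)
    y = X @ np.array([1.0, 2.0, 0.0, 0.0, 3.0])
    print(type(auto_model(X, y)).__name__)  # RandomForestRegressor

Real AutoML systems go much further, of course: feature preprocessing, model selection, and hyperparameter search, all without a human in the loop.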


The hardest part here is not the actual generation of code, but analyzing the problem first and determining which data structures/algorithms are best suited to tackle it. Also, a lot of real-world examples are used in these problems, so the AI would need a general understanding of the modern-day world, which, again, is hard.

Lastly, even if the AI figures out the implementation, it has to take care of all the edge cases too, so that the test cases pass. But I suppose this is a trivial problem compared to the other two I mentioned above...


> non-AI implementation

> a framework that can spit out sub-pieces of a solution

> some type of expert system

What you've described is an AI implementation.


I love how much these things just FEEL like Elon Musk fantasizing about an idea, and just really wanting someone to work on it.


The goals are an AI that can augment itself (project 2) and defend itself (project 3). I've found Musk's fear of AI to be a little overblown, but this is nuclear-weapon-style rationalizing: "AI could end the world as we know it, so we'd better race to create it first."

It's interesting that Projects 2 and 3 would make great businesses even if they only got halfway to "existential threat" level AI. Then there's project 4, version 0.1 of Musk's "we're all living inside a simulation" claim. Musk as creator of worlds.


The security project reminded me of the DARPA Cyber Grand Challenge - https://www.cybergrandchallenge.com/


> Detect if someone is using a covert breakthrough AI system in the world.

I bet the real purpose of this is not "It seems important to detect this", at least against superhuman AGI.

That's asking you to win a game against a more intelligent player, one who is going to be able to out-think you and hide.

It seems more likely that:

The real purpose of this problem is to impose a 'secrecy tax' on entities who would like to recoup investment by covertly using such a system, thus reducing the incentive to build one in secret. I.e., if you know someone has invested in research that's going to limit your ability to profit from covertly building an AGI, you can only use it up to the limit of game-theoretic detectability, which reduces the profit motive for acting covertly in the first place.

It's a bit like once you've broken Enigma: you have to be careful how you use it. Having alternative explanations for how you got your edge then becomes valuable.

So, if you see a corporation building something that could provide an alternative explanation for huge returns, other than AGI - ideally something that's very noisy (high variance), where it'll be hard to tell whether their success was by chance or on purpose - that's probably (very weak) evidence, in the Bayesian sense, that they are working on a secret AGI.

I suggest YCombinator is an ideal candidate... huge returns, very hard to tell if it's luck or skill.


I suggest YCombinator is an ideal candidate... huge returns, very hard to tell if it's luck or skill.

Maybe pg is actually an artificial super-intelligence. I mean, has anybody actually seen him in real life???


5. Focus on how you would use AI to augment personal human thinking.


#1 - reminded me of "Endgame: Singularity" (https://en.wikipedia.org/wiki/Endgame%3A_Singularity)


That stuff sounds so generic, not sure what is special about it :-)


I christen number 1 "The Ghost in the Shell Problem", in reference to the project at the core of the first movie's plot.


#1 - talk about distributed, automated, statistical analysis. What an exciting project to get to work on.


> What an exciting project to get to work on.

Indeed :). Anyone who's excited, please apply! Feel free to ping me with any questions: gdb@openai.com.


Anyone who starts a project for #1 will get instantly targeted by Samaritan's agents, right?


Is this job application for humans only, or can AI apply?


Riffing on project no. 1, here's our post on "How to Control AI": http://deeplearning4j.org/skynet


>A complex simulation with many long-lived agents.

No need to start a new one here, just download Dwarf Fortress.


Except that DF Dwarves are, by definition, not long-lived.


This will certainly backfire


Theses "ideas" sound like either they were written by a 14 year old kid or someone very high. I was hoping for real science.

Yes, I know my comment is probably against HN guidelines, but there ought to be an exception for making fun of uber-rich Silicon Valley types. They parody themselves. It reminds me a lot of the show Silicon Valley.


Care to elaborate on why this isn't "real science"?


It's real science fiction. None of these problems are even close to being practical except in the fantasy world created by pop-sci news outlets.

I was hoping for a serious discussion about the problems on the forefront of machine learning.


Well, that's why they're courting not just experts, but "strong experts" in machine learning, whatever that means :P


It must be experts who have worked out in the OpenAI Gym. Apparently by playing video games.

https://gym.openai.com/


DARPA has already put tens of millions into #3. What do you know that they and their teams don't?

https://cgc.darpa.mil/


You mean the government wastes money?

edit: The DARPA site you linked above calls it a "fully automated computer security challenge". They are not claiming it's AI.


You mean the government wastes money?

That's an ignorant cheap shot. Speculative, risky research is costly because lots of things don't work.

The DARPA site you linked above calls it a "fully automated computer security challenge". They are not claiming it's AI.

Show me a definition of AI that would include papers published at NIPS/ICML or even AAAI and wouldn't include the techniques used in the DARPA Cyber Grand Challenge.

AI is just a label, it isn't something magical.


That's actually quite easy to do. The techniques used by the AI community (statistical estimation, numerical optimization, etc.) are quite different from the techniques used to do things like the Cyber Grand Challenge (heavy use of SAT and other decision procedures for logical theories, lattice-based program analysis, formal notions of program semantics, etc.). If you ask an AI researcher how knowledge of omega-complete partial orders can help automatically win programming contests, you'll get a blank look.

I'm actually kinda miffed that OpenAI's press release seems to think automatically writing/exploiting programs is an AI problem, and targeted the AI community in their proposals (as opposed to the programming languages community). I'm a program synthesis researcher, and know how to do major aspects of #2 and #3. I know a lot of people who are already working on them (with some quite impressive results, I might add). And none of us are machine learning people.

The important thing is: the Cyber Grand Challenge is funding a dozen teams to do exactly what OpenAI is hiring for. Call it AI or not, it's being done. You might look at the proposal to automatically exploit systems and call it sci-fi, but I look at it and think "Sure, that sounds doable."

Today's program analysis and synthesis technology allows for tools far beyond anything programmers see today. I'm excited to be part of a generation of researchers trying to turn it into the programming revolution we've been waiting for.


I'm agreeing with you (!?). I don't think any of this is sci-fi (well, maybe the detection one is a bit out there).

I don't think OpenAI would object to proposals from outside the AI/ML field. After all, people are doing deep-learning-based SAT solvers as class projects now, e.g. https://cs224d.stanford.edu/reports/BunzBenedikt.pdf


Yep. I talked to Dario, one of the authors of this press release. He's definitely interested in both kinds of approaches.


@Darmani Can you think of any ways traditional program synthesis techniques could be combined with machine learning to perform #2? Assume your system has access to a large number of practice problems/solutions to train with.


"Traditional" program synthesis to me means the kind of stuff Dijkstra was doing 50 years ago, which works quite a bit differently than a lot of the constraint-solving based stuff that has really become hot in the last decade. But, answering what you actually meant to ask:

Oh certainly. DARPA's "MUSES" program (which I'm partially funded by) is $40 million into incorporating big data techniques into program analysis and synthesis. There are systems like FlashFill and Prophet which develop a statistical model of what human-written programs tend to look like, and use that to help prioritize the search space. There are also components in the problem other than the actual synthesis part, namely the natural language part. Fan Long and Tao Lei have a paper where they automatically read the "Input" section of programming contest problems and write an input generator. It's classic NLP, except for the part where they try running it on a test case (simple, but makes a big difference).
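To give a flavor of what "use a statistical model to prioritize the search space" can mean, here is a deliberately tiny sketch (my own toy, not how FlashFill or Prophet actually work): enumerate candidate programs over a three-operation DSL in order of a hypothetical prior, and return the first one consistent with the input/output examples.

    import itertools

    OPS = {
        "add1":   lambda x: x + 1,
        "double": lambda x: x * 2,
        "square": lambda x: x * x,
    }
    PRIOR = {"add1": 0.5, "double": 0.3, "square": 0.2}  # hypothetical learned weights

    def candidates(max_len=3):
        """All programs up to max_len ops, highest prior (product of op weights) first."""
        progs = []
        for n in range(1, max_len + 1):
            for seq in itertools.product(OPS, repeat=n):
                score = 1.0
                for op in seq:
                    score *= PRIOR[op]
                progs.append((score, seq))
        return [seq for score, seq in sorted(progs, reverse=True)]

    def run(seq, x):
        for op in seq:
            x = OPS[op](x)
        return x

    def synthesize(examples):
        """Return the highest-prior program consistent with every example."""
        for seq in candidates():
            if all(run(seq, x) == y for x, y in examples):
                return seq
        return None

    print(synthesize([(1, 4), (3, 8)]))  # ('add1', 'double')

Real systems replace the hand-written weights with models learned from corpora of human code, and the brute-force enumeration with far smarter search.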

The reverse is also happening, with people incorporating synthesis into machine learning. The paper by Kevin Ellis and Armando Solar-Lezama (my advisor) is a recent example.

I do get touchy when people label this kind of work "machine learning" and seem oblivious to the fact that an entire separate field exists and has most of the answers to these kinds of problems. Those examples are really both logic-based synthesizers that use a bit of machine learning inside, as opposed to "machine learning" systems.


I suspect OpenAI is coming from the Neural Turing Machine etc. approach. But that doesn't preclude other approaches.

Also, NLP is at the very least closely aligned with "AI" research, both traditionally and looking at current trends.

I do get touchy when people label this kind of work "machine learning"

Don't ;) (Seriously - it's just a label. Embrace the attention)


You're right that it is an opportunity for attention. What seems to actually happen is that a bunch of people who really don't know what they're doing get a bunch of publicity, recruits, and potentially funding and tech-transfer, while we're sitting here with working systems running in production, not getting much. If you look at AI papers that try to touch programs, they have a tendency to not even cite work from the PL community that does the exact same thing but better. It's kinda like how if you Google "fitness," you're guaranteed to get really bad advice for all your results -- the people who actually know about fitness have lost the PR battle.

In short, you can say "it's just a label," but that's not a reason not to fight the battle over words.


There are plenty of announcements out there of research results that actually exist, even popular ones.


I agree, but legitimate results are quickly extrapolated to Skynet by the media.


edit: Boneheaded passion got ahead of me and I forgot that the Gym was originally intended to be cross-functional. To be fair, I don't think that aspect of the Gym has been well publicized. Original comment retained for context:

How about instead of wasting time on this stuff, they build an actual benchmark test for AGI? Something everyone in the community agrees needs to be built, and basically nobody is working on. I suggest they start with the "anytime intelligence test" from Hernández-Orallo as a stepping stone.

Oh wait, benchmarks aren't sexy. Nevermind.


> they build an actual benchmark test for AGI

This is actually the project I'm currently leading (goal #1 here: https://openai.com/blog/openai-technical-goals/). We should have something interesting to share in a few months.


Duh, forgot about the gym. Sorry.


We also have something new coming, which we hope will focus research and drive forward progress towards AGI. (As always, looking for more engineers to work on it. Just need a strong software generalist skillset. gdb@openai.com.)



