
> One of the things I like about competitive programming and the like is just getting to implement a clearly articulated problem

English versions of Codeforces problems may be well defined, but they are often badly articulated and easy for a human reader to misunderstand. I still can't understand how they got an AI to generate plausible solutions from these problem statements.



They used the tests. The specification being very approximate is fine, because they had a prebuilt way to "check" whether their result was good.
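
A rough sketch of that kind of check, under the simplest assumptions: sample many candidate programs and keep only the ones that reproduce the available input/output pairs. The candidate file names and example data below are made up for illustration, not AlphaCode's actual pipeline.

    import subprocess

    # (stdin, expected stdout) pairs for the problem, e.g. the examples in the statement
    examples = [("3\n1 2 3\n", "6\n")]

    def passes_examples(source_path, examples):
        # Run one candidate program on every example and compare its output.
        for stdin_text, expected in examples:
            try:
                result = subprocess.run(
                    ["python3", source_path],
                    input=stdin_text,
                    capture_output=True,
                    text=True,
                    timeout=2,
                )
            except subprocess.TimeoutExpired:
                return False
            if result.stdout.strip() != expected.strip():
                return False
        return True

    # Keep only the sampled candidates that agree with the tests.
    candidates = ["cand_0.py", "cand_1.py", "cand_2.py"]
    survivors = [c for c in candidates if passes_examples(c, examples)]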


Wait, what? They cheated to get this result? Only the pretests are available to competitors before submitting. If they had access to the full test suite, then they had a HUGE advantage over actual competitors, and this result is way less impressive than claimed. Can you provide a source for this claim? I don't want to read the full paper.


If AlphaCode had access to the full test suite, then the result is not surprising at all.

You can fit anything given enough parameters.

https://fermatslibrary.com/s/drawing-an-elephant-with-four-c...
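
A toy illustration of the overfitting point (not the elephant construction from the linked paper, just a polynomial fit with as many coefficients as data points):

    import numpy as np

    # Six arbitrary y-values with no structure at all.
    rng = np.random.default_rng(0)
    x = np.linspace(-1.0, 1.0, 6)
    y = rng.normal(size=6)

    # A degree-5 polynomial has six coefficients: one parameter per point.
    coeffs = np.polyfit(x, y, deg=5)
    fitted = np.polyval(coeffs, x)

    print(np.allclose(fitted, y))  # True: a perfect "fit" that explains nothing

The linked paper makes the same point more memorably, tracing an elephant's outline with four complex parameters.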



