Hacker News

reproducibility should be something that's baked into an experiment's design.

so, if their experiment was designed such that reproduction is inherently difficult, they should have designed it better, using a toolset that wouldn't run into that problem.

a non-reproducible experiment isn't necessarily without value, but everyone should look askance at it until it proves its worth.

(apologies if my comments don't apply to this experiment and it is in fact reproducible -- i didn't have time to read through the OP, but i thought this was still a worthwhile response to its parent comment)




No, that's absolutely a fair point; my comment was aimed more at the RNG aspect. I haven't looked into this specific one either, but normally people would hopefully not publish their best randomly achieved run if the system cannot reproduce it or similar results.
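To make the RNG aspect concrete, a minimal sketch (the function name and seed values are hypothetical, not taken from the paper) of the usual fix: derive all randomness from one explicit seed, so any "random" run can be re-executed exactly.

```python
import random

def run_experiment(seed: int) -> list[float]:
    # All randomness flows from a single explicit seed, so rerunning
    # with the same seed reproduces the same "random" run exactly.
    rng = random.Random(seed)
    return [rng.random() for _ in range(3)]

# Two runs with the same seed are identical; a published "best run"
# should come with its seed so others can regenerate it.
assert run_experiment(42) == run_experiment(42)
```

With that discipline, a lucky run isn't unreproducible luck: anyone with the seed gets the same numbers.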

That being said, the paper in question doesn't seem to reference open-source code anyway, so I guess my point was kind of moot; apologies.



