> Ok, but how would the researchers communicate their evaluation to non-experts?

Conferences, journals, and papers are not for non-experts; they are explicitly for experts to communicate with other experts. The truth is that papers have never been validated and likely never will be. Code often isn't uploaded alongside papers, and when it is, I know only a handful of people who look at it (myself included) and only one who actually executes it (and not often). Validation only happens through reproduction (i.e., grad students re-implementing work as they learn), and funding doesn't encourage that. Even with open-source code, a lot of ML is still difficult to reproduce, if it can be reproduced at all.

We also use normal channels like Twitter, HN, Reddit, email, etc., but there's a lot of noise there (as you note). We speak a different language, though, so you can often tell who the experts are.

Frankly, a lot of us are not concerned with explaining our work to laymen. It's a lot of work, especially the more complex the subject, and we're already under high pressure to keep researching. It's never good enough; there's no clear "done with work" time in jobs like this. You're always working, so you have to allocate your energy carefully (I'm venting and mentally fatigued right now). I used to be passionate about teaching laymen, but I'm tired of arguing with armchair experts. I'm still happy and passionate about teaching my students and doing research, so that's where I'll spend most of my energy: in the classroom or in blogs. Ironically, the more popular a subject is, the more likely those arguments are to happen.

Communication should come from the news, university departments, and specialty science communicators, but that system has broken down. Honestly, I just think it's a tough time for laymen to get accurate information. There's a lot of good information out there for you (we researchers learn from publicly available materials), but expertise is being able to distinguish signal from noise, and the greater a topic's popularity, the greater the noise. This isn't just true for ML; we see it in climate, nuclear, covid, gender/sexuality, and other hot topics. The only thing you can do is use a common strategy from researchers: maintain high doubt and look for consistent patterns across research groups.




Personally, I relish many of the third-string papers people post on arXiv about run-of-the-mill text-analysis projects, because they give me more insight into the results I'll get and the challenges I'll face when I do my own text-analysis projects.

If you go to a computer science conference, you might talk about the headliners later, but you actually learn a lot from talking to less famous people at the back of the room, scanning large numbers of poster papers, sharing a bottle of wine at dinner with four people and having one of them get way too drunk and talk trash about academics you distantly know, etc.

Lower-quality papers on arXiv give me a bit of that same feel.



