
> But we know that any person who uses AI is likely to improve at what they do.

Do we?


I could have sworn there was research stating that the more you use these tools, the quicker your skills degrade, which honestly feels accurate to me and is why I've started reading more technical books again.

> I've started reading more technical books again

How's that working out for you in the context of working with AI tools? Do you feel like it's helping you make better use of them? Or keeping your mind sharp?

I've been considering getting some books on core topics I haven't (re)visited in a long time to see if not having to write as much code anymore instead gives me time to (re)learn more and accelerate.


I just don't understand how someone can have these models at their disposal and not learn anything.

The general lack of intellectual curiosity is just mind blowing to me.


Not until large-N research is done without sponsorship, support, or veiled threats from AI companies.

At which point, if the evidence turns out to be negative, it will be considered invalid because no model less recent than November 2027 is worth using for anything. If the evidence turns out to be slightly positive, it will be hailed as the next educational paradigm shift and AI training will be part of unemployment settlements.


I would even say it's likely the opposite. My output as a programmer is now much higher than before, but I am losing my programming skills with each use of Claude Code.

Let me add a single data point.

> is likely to improve at what they do

personally, my skills are not improving.

professionally, my output is increased


My software development skillset has improved. I’m learning and stress testing new patterns that would have taken far longer pre-AI. I’m also working in new domains and tech stacks that would have taken me much longer to get up to speed on.

People who use AI mindfully and actively can possibly improve.

The old model of building skills and competencies is largely dying or dead now that those skills and competencies change faster than training for them was ever designed to keep up with.


If things change fast, learning becomes even more important. And learning about the principles that don't change becomes most important of all.

Yup, continuous learning. The principles that don't change are partly identified and partly still coming to the forefront.

We DEEPLY do not.

That's not, IMO, a "skills go down" position. It's respecting that this is a bigger maybe than anyone in living memory has encountered.


Clearly this means Anthropic believes this but would be nice to have a footnote pointing to research backing this claim.

It is also not very convincing: while the UI of Claude is not bad, it is not exactly stellar either.



As a student, I constantly worry about this. But everyone in my class is producing output at a pace I can't compete with without AI assistance.

what class are you in that "producing output at a [rapid] pace" is relevant to the grade?

pick any cs class

I have a minor in CS, and no: producing the assignment by the deadline matters, but grades are not based on quantity of code relative to classmates.

I mean, maybe things have changed (I finished college about 20 years ago), but I don't remember producing large volumes of stuff as being a particularly important part of a CS degree.

Between a challenging job market, ever-expanding frontiers to learn (AI, MLOps, parallel hardware), and an average mind like mine, a tool that increases throughput is likely to be adopted by the masses whether you like it or not. Quality is not a concern for most; passing and getting an A is (most of my professors actively encourage using LLMs for reports, code generation, and presentations).

It will be a very interesting experiment when your generation of computer science graduates enters the job market, to put it mildly.

Individuals believe they act freely, but they are constrained and directed by historical forces beyond their awareness - Leo Tolstoy

Historical forces beyond your awareness cannot force you to submit mountains of slop.

slop is not a thing anymore, stop living in a fantasy world

Remember, you chose this. You chose not to learn, to offload your thinking in the name of competition.

What are you talking about? Slop existed long before AI and it will exist long after.

The last one already killed unique web designs, killed Flash, and gave us soulless flat design and Electron bloat.

They'll have to work pretty hard to outdo that!


That was never a worry in any of my CS classes.

Copying AI slop isn’t producing output! It’s also not conducive to learning.

As if you are just a such a genius the models are of no use to you.

How can you not think that makes you sound like a complete moron?


I would urge you to leverage some critical thinking, re-read what I stated, and identify where I said that the models are of no use to me. If the ability to think for yourself without AI assistance hasn't fully atrophied on your end you may be able to see that you are the moron in this thread.

I guess I really am just that much smarter than you.

Yah and this seems to be supported by preliminary evidence on the impact of AI on things like retention and cognitive ability.

Not even just skills, motivation too.

Interesting post. The First Proof experiment really showed us the near future of AI/math interaction: some impressive successes, but also lots of extremely hard-to-verify text, misformulated Lean "proofs", etc. Still, progress on AI doing math has indeed been impressive.


Obviously not



This is a very interesting contribution to the AI/math space. I hope it can be seen by nonmathematicians interested in this. The mathematicians involved are quite well known (Martin Hairer is a Fields medalist). See https://www.reddit.com/r/math/comments/1qx77l7/a_new_ai_math... for some discussions.


Peer review should be disrupted, but doing peer review via social media is not the way to go.


It's a peer-review platform built on atproto (as I understand the vision), not meant to be social media, though I would not be surprised if it has elements of that.

Peer review goes beyond the formal process and into real life. Social media is one place where people talk about new research and share their evaluations and insights, and where good work gets used and cited more.

Arxiv has been invaluable in starting to change the process, but we need more.


It has a bit of a leg up in that, if it's only academics commenting, it would probably be far more usable than typical social media, maybe even outright good.


This can also be observed with more advanced math proofs. ChatGPT 5.2 Pro is the best public model at math at the moment, but if pushed out of its comfort zone it will make simple (and hard-to-spot) errors, like stating an inequality and then applying it in a later step with the direction reversed, without justification.
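As a purely schematic (hypothetical) illustration of that failure mode, the model might state a bound in one direction and then, several steps later, quietly use it the other way around:

```latex
% Step k: the model correctly states the inequality
\[ a \le b \]
% Step k+3: it then "applies" the same inequality as
\[ b \le a \qquad \text{(direction silently reversed, not justified)} \]
```

Because the reversed step is syntactically plausible and buried mid-derivation, it is easy to skim past unless you check each application of the inequality against its original statement.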


My favorite early ChatGPT math problem was "prove there exists infinitely many even primes". Easy! Take a finite set of even primes, multiply them, and add one to get a number with a new even prime factor.

Of course, it's gotten a bit better than this.
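The flaw is easy to check concretely. Here's a quick illustrative sketch in Python (mine, not from the thread): 2 is the only even prime, and Euclid's construction applied to the set {2} yields 3, which is odd, so the inserted "even" has nowhere to go.

```python
def is_prime(n: int) -> bool:
    """Trial division; fine for small n."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

# 2 is the only even prime below any bound you care to check.
even_primes = [n for n in range(2, 10_000) if n % 2 == 0 and is_prime(n)]
print(even_primes)  # [2]

# Euclid's construction applied to the full set of even primes {2}:
# 2 + 1 = 3, which is odd, so it has no even prime factor at all.
candidate = 2 + 1
print(candidate % 2)  # 1, i.e. odd -- the "even" version of the proof collapses here
```

The model's answer looked like a proof precisely because the underlying Euclid argument is correct; only the inserted word "even" makes every step after the construction unjustified.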


IIRC, that is actually the standard proof that there are infinitely many primes[1] or maybe this variation on it[2].

[1]: https://en.wikipedia.org/wiki/Euclid%27s_theorem#Euclid's_pr...

[2]: https://en.wikipedia.org/wiki/Euclid%27s_theorem#Proof_using...


Yes this is the standard proof of infinitely many primes but note that my prompt asked for infinitely many even primes. The point is that GPT would take the correct proof and insert "even" at sensible places to get something that looks like a proof but is totally wrong.

Of course it's much better now, but with more pressure to prove something hard the models still just insert nonsense steps.


I think a more realistic answer is that professional mathematicians have tried to get LLMs to solve their problems and the LLMs have not been able to make any progress.


I think it's a bit early to tell whether GPT 5.2 has helped research mathematicians substantially given its recency. The models move so fast that even if all previous models were completely useless I wouldn't be sure this one would be. Let's wait a year and see? (it takes time to write papers)


It's helped, but it's not correct that mathematicians are scoring major results by just feeding their problems to GPT 5.2 Pro, so the OP's claim that mathematicians are just passing off AI output as their own is silly. Here I'm talking about serious mathematical work, not people posting unattributed AI slop to the arXiv.

I assume OP was mostly joking, but we need to take care about letting AI companies hype up their impressive progress at the expense of mathematics. This needs to be discussed responsibly.


I think "pretty soon" is a serious overstatement. This does not take into account the difficulty in formalizing definitions and theorem statements. This cannot be done autonomously (or, it can, but there will be serious errors) since there is no way to formalize the "text to lean" process.

What's more, there is almost surely a large amount of human-generated mathematics that is "basically" correct, in the sense that a formal proof exists that morally fits the arc of the human proof, but which relies on informal or vague reasoning (e.g. diagram arguments) that is hard to truly formalize, yet which an expert can apply consistently without making a mistake. This will take a long time to formalize, and I expect it will require a large amount of both human and AI effort.


It's all up for debate, but personally I feel you're being too pessimistic there. The advances being made are faster than I had expected. The area is one where success will build upon and accelerate success, so I expect the rate of advance to increase and continue increasing.

This particular field seems ideal for AI, since verification enables identification of failure at all levels. If the definitions are wrong the theorems won't work and applications elsewhere won't work.

