
There's a neat trick when you encounter jargon.

1. Identify the jargon terms you don't understand

2. Look up papers that introduce the jargon terms

3. Skim-read the paper to get the gist of the jargon

If you don't want to do this, then you don't have to feel uneducated. You can simply choose to feel like your time is more important than skimming a dozen AI papers a week.

But for example, here's what I did to understand the parent comment:

1. I had no idea what LoRA is or how it relates to Alpaca.

2. I looked up https://github.com/tloen/alpaca-lora

3. I read the abstract of the LoRA paper: https://arxiv.org/pdf/2106.09685.pdf

4. Now I know that LoRA is just a way of using low-rank matrices to cut the number of trainable parameters in fine-tuning by a factor of like 10,000 or something ridiculous

5. Since I don't actually care about /how/ LoRA does this, that's all I need to know.

6. TL;DR: LoRA is a way to fine-tune models like LLaMA while only touching a small fraction of the weights.
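To make step 4 concrete, here's a minimal sketch of the low-rank idea (my own illustration, not code from the paper): freeze the pretrained weight matrix and learn only a small low-rank correction.

```python
import numpy as np

# Illustration of the low-rank trick: instead of updating the full weight
# matrix W (d x k), LoRA learns a low-rank update delta_W = B @ A, with
# B (d x r) and A (r x k), where r << d, k.
d, k, r = 1024, 1024, 8

rng = np.random.default_rng(0)
W = rng.standard_normal((d, k))          # frozen pretrained weights
A = rng.standard_normal((r, k)) * 0.01   # trainable
B = np.zeros((d, r))                     # trainable; zero init, so delta_W starts at 0

def forward(x):
    # original path plus the low-rank correction
    return x @ W.T + x @ (B @ A).T

full_params = d * k            # 1,048,576 weights in W
lora_params = r * (d + k)      # 16,384 weights in A and B
print(full_params / lora_params)  # → 64.0: far fewer trainable parameters
```

At the paper's real scales (GPT-3-sized matrices, tiny r), that ratio is where the ~10,000x savings comes from.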

You can do this with any jargon term at all. Sure, I introduced more jargon in step 4 - low-rank matrices. But if you need to, you can use the same trick again to learn about those. Eventually you'll ground yourself in basic college-level linear algebra, which, if you don't know it, you should also learn.

The sooner you evolve this "dejargonizing" instinct rather than blocking yourself when you see new jargon, the less overwhelmed and uneducated you will feel.


> 3. Skim-read the paper to get the gist of the jargon

Or, you know, you could ask ChatGPT to explain it to you... Granted, that depends on the term being in the training data (LoRA was coined in 2021). Even if it isn't, as long as the paper is less than 32k tokens... 0.6c for the answer doesn't seem all that steep.
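The "does the paper fit in 32k tokens" question can be estimated without any API call. A rough sketch (my own heuristic: ~4 characters per token for English text; a real count needs the model's tokenizer, e.g. the tiktoken library for OpenAI models):

```python
# Back-of-envelope token check: English text runs roughly ~4 characters
# per token. This is an approximation only; exact counts depend on the
# tokenizer used by the model.
def estimate_tokens(text: str) -> int:
    return len(text) // 4

paper = "word " * 20_000          # stand-in for ~100k characters of paper text
tokens = estimate_tokens(paper)
print(tokens, tokens <= 32_000)   # → 25000 True: fits a 32k context
```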

edit: grammar


This actually works!

It works astoundingly well with poorly written technical manuals. Looking at you, CMake reference manual O_O. It also helps translate unix man pages from Neckbeardese into clean and modern speech.

With science papers it's a bit more work: you have to copy them in section by section, even with GPT-4's increased token limit.

But sure. Here's how it can work:

1. Copy relevant sections of the paper

2. Ask questions about the jargon:

"Explain ____ like I'm 5. What is ____ useful for? Why do we even need it?"

"Ah, now I understand _____. But I'm still confused about _____. What do you mean when you say _____?"

"I'm starting to get it. One final question. What does it mean when ______?"

"I am now enlightened. Please lay down a sick beat and perform the Understanding Dance with me." *dances*

This actually works surprisingly well.
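The section-by-section workflow above can be sketched roughly like this (`ask_llm` is a hypothetical stand-in for whatever chat API you use; the character budget is a made-up number, not any model's real limit):

```python
# Split a paper into chunks that fit a context budget, then ask the same
# jargon question about each chunk.
def chunk(text: str, max_chars: int = 12_000) -> list[str]:
    paragraphs = text.split("\n\n")
    out, buf, size = [], [], 0
    for p in paragraphs:
        if buf and size + len(p) > max_chars:
            out.append("\n\n".join(buf))
            buf, size = [], 0
        buf.append(p)
        size += len(p) + 2  # account for the "\n\n" separator
    if buf:
        out.append("\n\n".join(buf))
    return out

def explain_jargon(paper: str, term: str, ask_llm) -> list[str]:
    # One answer per section; ask_llm is any callable taking a prompt string.
    answers = []
    for section in chunk(paper):
        prompt = f"Explain {term} like I'm 5, using only this excerpt:\n{section}"
        answers.append(ask_llm(prompt))
    return answers
```

You'd then skim the per-section answers the same way you'd skim the paper itself.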


Yeah, I think education is a great use case here. Sure, the knowledge that's built into the model might be inaccurate or wrong but you can feed the model the knowledge you want to learn/processed.

What you get is a teacher that never tires, is infinitely patient, has infinite time, doesn't limit questions, doesn't judge you, really listens, and has broad, multidisciplinary knowledge that is correct-ish (for when it's needed). I've recently read somewhere that Stanford (?) has almost as many admin workers as they do students. Seems to me that this is a really bad time to be that bloated. Makes you wonder what you really spend your money on, whether it's worth it (yeah, I know, it's not just education that you get in return), and whether you can get the same-ish effect for a lot cheaper and on your own timetable.

Not that the models, or the field, are currently in a state that would produce a good teaching experience. I can, however, imagine a not-so-distant future where this would be possible. Recently, on a whim, I asked it to produce an options trading curriculum for me. It did a wonderful job. I wouldn't trust it if I didn't already know a little bit about the subject myself, but I came away really impressed.


No need to pay for it yourself. I uploaded https://arxiv.org/pdf/2106.09685.pdf to SciSummary:

This text discusses various studies and advancements in the field of natural language processing (NLP) and machine learning. One study focuses on parameter-efficient transfer learning, and another examines the efficiency of adapter layers in NLP models. Further studies evaluate specific datasets for evaluating NLP models. The article proposes a method called LoRA (low rank adaptation) for adapting pre-trained neural network models to new tasks with fewer trainable parameters. LoRA allows for partial fine-tuning of pre-trained parameters and reduces VRAM usage. The article provides experimental evidence to support the claims that changing the rank of Delta W can affect the performance of models, and that LoRA outperforms other adaptation methods across different datasets. The authors propose LoRA as a more parameter-efficient approach to adapt pre-trained language models to multiple downstream applications.


I feel like LeCun did a poor job of situating the technique relative to any philosophy, or really explaining what it does for a nontechnical audience.

Saying he gives talks to philosophers, or saying this pushes philosophy to its limits, doesn't fix the problem that LeCun does a poor job - in this presentation - of philosophically motivating the proposal.

Perhaps I am wrong, and you can point out exactly how LeCun explicates the philosophy in the presentation - perhaps it's really embedded in the maths, which I have not appreciated.

Appealing to LeCun's authority won't fix the opacity of the presentation. But interpreting it can help! Are you up for it?


I haven't even given it a deep read, so I unfortunately can't help shed any light. From my quick read, it didn't seem to me that laying out a digestible or rigorous philosophical perspective was really the point here. It seemed more directly biology inspired than philosophy.

I also don't seek to valorize LeCun. But he was a very early figure working on these technologies, and from the beginning there was a neuroscience-inspired impetus to machine learning.

My point was sort of the opposite: that assuming LeCun doesn't clear the bar of having "exposure to the prior work on philosophy of mind and philosophy of language" seems like a weak bet.

edit: For clarity, I'm making the assumption that lifelong AI researchers who put time into learning from neuroscience... would also gravitate towards and seek to learn from the nearest relevant branches of philosophy.


It would be better to help zero people than to develop the bitter attitude you've developed - because that attitude could prevent others from helping!

I feel you've helped too many people, with too little reward, and for that matter with too little progress in your ability to help.

That's not the fault of the people needing help, though, now is it? So why are you characterizing them as vampires?

IMO, rather than embittering people against helping others, you should retire from helping people who aren't paying for your help. That way, you'll stop embittering and misguiding volunteer helpers, whose jobs literally ARE to help people who don't seem to "get" the docs.


You're denying a person's lived experience and telling them to stop doing what they love. That's not very nice.

And I know what they're talking about. I'll put it a bit more politely: You can't help people who aren't willing to help themselves.

It's a thing. When you start helping out people in forums or on irc or what have you, sooner or later you'll encounter people who either can't be helped or just require far too much energy. No matter how much you'd like to help everyone, you just can't. It happens.

I believe parent poster has already calmed down, but I can understand their feelings. It's frustrating sometimes!

The many people you are able to help make up for it though.


Like I said, it's better to not help people at all than to evolve the attitude that some people don't want to help themselves. People typically want to help themselves but don't know how, and might have self-defeating habits - that's not a moral failing, though; that's just how people are, and it's what educators must overcome.

When a helper fails to advocate for these people, it's wrong to conclude that they didn't want the help - a moral judgement. The help being offered might not have been good enough, or the person might not have been ready for help.

In the end, whenever we do get to help someone, we should be happy (since that improves the work). But when we fail someone - or we think someone has failed themselves - we should not become bitter, but become better at finding those we can help, and at improving what we offer them.

I don't think you would doubt any of what I said. Instead, I think you're here to defend a morally negative view of students and others who "do not want to work" for the help. And I think that's wrong.


In roughly 90-99% of all cases, we are in agreement. And if you only encounter a few difficult students a year, it can definitely be worth the effort.

On the other hand, if you're in a situation where you're encountering random people on the internet at a rate of say 10-100 (or more) per day (possibly never to return); then the way you deal with it is you look for signs that the person has been trying. If they have: you go out of your way to help them, you absolutely do. But if they haven't, sometimes all you can do is hope that one day they'll run across someone like you in real life, who can teach them kindness and respect and humility.

I'm curious now, in what kind of environment have you worked? Are you a teacher?

see also: https://en.wiktionary.org/wiki/help_vampire


I agree with the other commenter, though maybe with less profanity.

Think of it this way, helping people is good, right? So you should help the people who are trying to help you.

> this is because your attitude could prevent people from helping!

Likewise, the attitude of people who put in minimal effort and expect lots of effort from volunteers causes those volunteers to burn out and makes it less likely that people will help you. Heck, even if they don't quit outright, they'll end up with other people telling them to stop helping.


Basically the thesis is that we can make needy and hapless students less needy and more self-reliant by publicly roasting the most needy and hapless among them. So basically, let's bully the students into being more competent, because we're sad that our naive approach didn't help them.

That's an ethically bankrupt position, and I reject it.

You're simply wrong here. Any teacher who starts to be paranoid about the ineptitude of their students will be MISERABLE. Instead, teachers must establish better boundaries, and better materials and methods for their students.

For instance, I've /been/ the hapless student, and you probably have been too. I've asked silly X-Y questions. I've refused to read the docs before asking questions that have been asked millions of times before. And what helped me was people linking me to articles on how to ask good questions and how to get good answers and generally how to help and be helped.

Throwing in the towel and saying "man, these students are just too lazy and expect too much" is not going to help the community, and it's not going to make the students magically ask more insightful and considerate questions. In my experience working with tutors, professors, mentors, and students in professional, open-source, and academic contexts, no skilled and happy teacher thinks or talks this way about their students.

So if you want to relate to students this way, just realize that it's no better than haplessly trying to help everyone in the first place:

- It still burns you out and makes you miserable (in fact, it's one of the end stages of burnout)

- It still fails to make your students more self-reliant

If that's what you want, please continue to denigrate people who, after all, simply want to be helped and don't know how to be helped.


No one has put forward the thesis that one should roast needy and hapless students!

Rather the opposite. Though this document is written for people who ask questions, see also the section on how to help.

http://www.catb.org/~esr/faqs/smart-questions.html#idm667 (How To Answer Questions in a Helpful Way )

(Of course: this is ESR. He's ... opinionated. But does seem to have his heart in the right place in the end)


You're the one who wants to bully everyone who disagrees with you.


Tell the employer to close up shop, because they are too incompetent to design a business model that involves serving all customers, and doing any less is just fraud :shrug:


Often you can guilt trip sales into acting as your advocate to other departments of the org.

"I know it's not your job, but can you please make sure the person you'll transfer me to can help, uwu? I'm just a poor helpless customer, owo!"


This is definitely one of the drivers of anti-capitalist extremism, and as an anti-capitalist extremist and recruiter, I can only say, "thanks!"

Remember capitalism's job is to provide zero value for nonzero cost. Therefore, we must work to end capitalism at any cost! <3


How does it approach these issues?


By and large the UCC codified the common law as it developed in the context of commercial contracts, but streamlined and simplified some rules (in a few cases inverting them), especially in areas which were deemed to result in overly complex litigation with unpredictable outcomes. But irony of ironies, the irreducible complexity didn't magically disappear, so now instead of 1 problem you have 2, both of which law students are forced to wrestle with, though in the abstract the common law rules are the simpler of the two.


It's interesting, because I have had similar breakthroughs by using Bing to help organize and simplify my thoughts.

I have Pure-O - a mental variant of obsessive-compulsive disorder where I ruminate compulsively about things that concern me. But often I can't converge on a solution.

Writing has often helped in the past, but sometimes it doesn't. In those times, I just write and write and write and never reach a solution - in the ruminative state I seem to struggle to extract larger insights from the ruminations.

That's where AI can help - not just in boilerplate production, but in editorial input, and in looking up and brainstorming potential solutions.

When I put my writings together with the help of AI, I can do a number of useful things:

1. Bing can extract and merge key ideas that come up frequently in my rumination

2. Bing can help me identify and describe the values my ideas represent

3. Bing can summarize and help me understand the nature of the conflict

4. Bing can help me brainstorm and organize resolutions

5. Bing can even look up methods for understanding and solving the problem

Once I have the simplified, abstracted, and actionable version of my ruminations, I notice several things:

1. The pressure to ruminate about the conflict evaporates

2. My thoughts on the issue become clear, calm, and reflective

3. The reflection moves to my values and options

4. There is no more hesitation and splitting of hairs

5. I can finally pick a resolution to the conflict

I encourage you to experiment with AI as more than a boilerplate generator.

I encourage you to see whether it can help you:

1. write out complex and painful thoughts

2. engage them in ways you're currently struggling to do

3. refine and simplify them into clear and powerful values and facts

4. brainstorm nuanced solutions to conflicting emotions

I consider this similar to writing with a skilled and caring helper or counselor. Moreover, there is evidence (both scientific and personal) that the AI guidance teaches and renews the skills I need to perform the writing process on my own.

I see it as a powerful extension of the writing technique, not a replacement.


Reading this conversation is like watching Casey skillfully and lovingly jailbreak Uncle Bob's GPT personality prompt, yielding a new and exciting performance focused alter ego which I admiringly dub "SPEEDY BOB".


The claim isn't that writing disposes you more positively - it doesn't really do that, now does it? Writing is miserable work. No. The claim is that writing causes you to think more clearly and powerfully than not writing.

The issue with this argument is that co-writing with an AI might have that same benefit. If writers have some sort of elitism going on about the process of bashing their own heads against a text editor until gold issues forth, that's fine.

But that may have more to do with masochism than with refining one's thoughts. One can think and write deeply with the help of AI - in many cases more deeply than one can do on their own. Furthermore, co-writing with an AI helps bad writers improve [1]. These emerging facts are surprising, valuable, and should not be dismissed.

[1] https://economics.mit.edu/sites/default/files/inline-files/N...

