> We do understand how they work; we did build them. The mathematical foundations of these models are sound. The statistics behind them are well understood.

We don't understand how they work in the sense that we can't extract the algorithms they're using to accomplish the interesting/valuable "intellectual" labor they're doing. i.e. we cannot take GPT-4 and write human-legible code that faithfully represents the "heavy lifting" GPT-4 does when it writes code (or pick any other task you might ask it to do).

That inability makes it difficult to reliably predict when they'll fail, how to improve them in specific ways, etc.

The only way in which we "understand" them is that we understand the training process which created them (and even that's limited to reproducible open-source models), which is about as accurate as saying that we "understand" human cognition because we know about evolution. In reality, we understand very little about human cognition, certainly not enough to reliably reproduce it in silico or intervene on it without a bunch of very expensive (and failure-prone) trial-and-error.




> We don't understand how they work in the sense that we can't extract the algorithms they're using to accomplish the interesting/valuable "intellectual" labor they're doing. i.e. we cannot take GPT-4 and write human-legible code that faithfully represents the "heavy lifting" GPT-4 does when it writes code (or pick any other task you might ask it to do).

I think English is being a little clumsy here. At least I’m finding it hard to express what we do and don’t know.

We know why these models work. We know precisely how, physically, they come to their conclusions (it’s just processor instructions, as with all software).

We don’t know precisely how to describe what they do in a formalized general way.

That is still very different from, say, an organic brain, where we barely even know how it works physically.

My opinions:

I don’t think they are doing much mental “labor.” My intuition likens them to search.

They seem to excel at retrieving information encoded in their weights through training and in the context.

They are not good at generalizing.

They also, obviously, are able to accurately predict tokens such that the resulting text is very readable.

Larger models have a larger pool of information, and that information is stored at a higher resolution, so to speak, since the larger, better-performing models have more parameters.

I think much of this talk of “consciousness” or “AGI” is very much a product of human imagination, personification bias, and marketing.


>We know why these models work. We know precisely how, physically, they come to their conclusions (it’s just processor instructions, as with all software).

I don't know why you would classify this as knowing much of anything. Processor instructions? Really?

If the average user is given unfettered access to the entire source code of his/her favorite app, does he suddenly understand it? That seems like a ridiculous assertion.

In reality, it's even worse. We can't pinpoint which weights are contributing, how, or in what instances, even for basic things like whether a word should be preceded by 'the' or 'a', and it only gets more intractable as models get bigger.

Sure, you could probably say we understand these NNs better than brains, but not by much at all.


> If the average user is given unfettered access to the entire source code of his/her favorite app, does he suddenly understand it? That seems like a ridiculous assertion.

And one that I didn’t make.

I don’t think when we say “we understand” we’re talking about your average Joe.

I mean “we” as in all of human knowledge.

> We can't pinpoint which weights are contributing, how, or in what instances, even for basic things like whether a word should be preceded by 'the' or 'a', and it only gets more intractable as models get bigger.

There is research coming out on this subject. I read a paper recently about how Llama’s weights seemed to be grouped by concepts like “president” or “actors.”
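
For a flavor of what that kind of interpretability work can look like in practice, here is a rough sketch of one common technique (a linear probe on hidden activations, with a placeholder model name; not necessarily the method from that paper):

```python
# Hypothetical sketch: fit a linear probe to check whether a concept like
# "president" is linearly readable from a model's hidden activations.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

name = "some-small-llm"          # placeholder model name, not a real checkpoint
tok = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name).eval()

def pooled_hidden(text: str) -> torch.Tensor:
    """Mean-pool the final layer's hidden states for one input text."""
    inputs = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    return out.hidden_states[-1].mean(dim=1).squeeze(0)

texts = ["Lincoln was a US president.", "The actor won an award.",
         "Washington led the country.", "She starred in the film."]
labels = [1, 0, 1, 0]            # 1 = "president" concept present

X = torch.stack([pooled_hidden(t) for t in texts]).numpy()
probe = LogisticRegression(max_iter=1000).fit(X, labels)
print(probe.score(X, labels))    # how linearly separable the concept is here
```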

But just the fact that we know that the information encoded in the weights affects outcomes, and that we know the underlying mechanisms involved in creating those weights and executing the model, shows that we know much more about how they work than we do about an organic brain.

The whole organic brain thing is kind of a tangent anyway.

My point is that it’s not correct to say that we don’t know how these systems work. We do. It’s not voodoo.

We just don’t have a high level understanding of the form in which information is encoded in the weights of any given model.


> If the average user is given unfettered access to the entire source code of his/her favorite app, does he suddenly understand it? That seems like a ridiculous assertion. And one that I didn’t make. I don’t think when we say “we understand” we’re talking about your average Joe. I mean “we” as in all of human knowledge.

It's an analogy. When it comes to understanding the weights, even the best researchers are basically the untrained average Joe with the source code.

>There is research coming out on this subject. I read a paper recently about how Llama’s weights seemed to be grouped by concepts like “president” or “actors.”

>But just the fact that we know that the information encoded in the weights affects outcomes, and that we know the underlying mechanisms involved in creating those weights and executing the model, shows that we know much more about how they work than we do about an organic brain.

I guess I just don't see how "information is encoded in the weights" is some great understanding? It's as vague and un-actionable as you can get.

For training, the whole revolution of back-propagation and NNs in general is that we found a way to reinforce the right connections without knowing anything about how to form them or even what they actually are.

We no longer needed to understand how eyes detect objects to build an object-detecting model. None of that knowledge suddenly poofed into our heads. Back-propagation is basically "nudge every weight in whatever direction makes the output a bit less wrong". Extremely powerful, but useless for understanding.
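
To make that concrete, here is a toy sketch of that loop (plain PyTorch, nothing specific to any large model): the code only says how to adjust the weights to reduce the error; nothing in it describes what is being learned.

```python
# Minimal sketch of the "nudge weights to reduce the error" training loop.
import torch

X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])               # XOR targets

net = torch.nn.Sequential(
    torch.nn.Linear(2, 8), torch.nn.Tanh(), torch.nn.Linear(8, 1))
opt = torch.optim.SGD(net.parameters(), lr=0.5)
loss_fn = torch.nn.MSELoss()

for _ in range(2000):
    opt.zero_grad()
    loss = loss_fn(net(X), y)
    loss.backward()    # gradients: how each weight affects the error
    opt.step()         # nudge each weight in the direction that lowers it

print(net(X).detach().round())   # roughly [[0], [1], [1], [0]]; the weights stay opaque
```

The trained network solves the task, but nothing in this code (or in the resulting weights) amounts to a human-legible description of how.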

Knowing the Transformer architecture unfortunately tells you very little about what a model is actually learning during training, or about what a trained model has actually learnt.

"Information is encoded in a brain's neurons and this affects our actions". Literally nothing useful you can do with this information. That's why models need to be trained to fix even little issues.

If you want to say we understand models better than the brain then sure but you are severely overestimating how much that "better" is.


> It's as vague and un-actionable as you can get.

But it isn’t. Knowing that information is encoded in the weights gives us a route to deduce what a given model is doing.

And we are. Research is being done there.

> "Information is encoded in a brain's neurons and this affects our actions". Literally nothing useful you can do with this.

Entirely different. We don’t even know how to conceptualize how data is stored in the brain at all.

With a machine, we know everything about the substrate. The data is stored as floating-point numbers in a known binary format.

We also know what information should be present.

We can and are using this knowledge to reverse engineer what a given model is doing.

That is not something we can do with a brain because we don’t know how a brain works. The best we can do is see that there’s more blood flow in one area during certain tasks.

With these statistical models, we can carve out entire chunks of their weights and see what happens (interestingly, not much: apparently most weights don’t contribute significantly to any given token and can be pruned away with little performance loss).
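
As a rough illustration of that kind of intervention (a generic magnitude-pruning sketch in PyTorch, not any particular study’s method): every weight is a plain float we can read, zero out, and then measure the effect of.

```python
# Hypothetical sketch: inspect a layer's raw weights and prune the smallest ones.
import torch

layer = torch.nn.Linear(512, 512)        # stand-in for one weight matrix in a model
W = layer.weight.data                    # plain float32 values, fully inspectable
print(W.dtype, W[0, :5])                 # every number is readable

threshold = W.abs().quantile(0.5)        # drop the smallest 50% by magnitude
layer.weight.data = W * (W.abs() >= threshold)

x = torch.randn(1, 512)
print(layer(x).shape)                    # the pruned layer still runs; the quality
                                         # impact is then measured on real tasks
```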

We can do that with these transformer models because we do know how they work.

Just because we don’t understand every aspect of every single model doesn’t mean we don’t know how they work.

I think we’re starting to run in circles and maybe splitting hairs over what “know how something works” means.

I don’t think we’re going to get much more constructive than this.

I highly recommend looking into LoRAs. We can make LoRAs because we know how these models work.

We can’t do that for organic brains.
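
For reference, the core LoRA idea fits in a few lines (a generic sketch, not any particular library’s implementation): freeze the original weight matrix and learn a small low-rank update on top of it.

```python
# Minimal sketch of a LoRA-style layer: frozen base weights plus a trainable
# low-rank update, roughly W·x + (B·A)·x. Generic illustration only.
import torch

class LoRALinear(torch.nn.Module):
    def __init__(self, base: torch.nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                        # freeze the original layer
        out_f, in_f = base.weight.shape
        self.A = torch.nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.B = torch.nn.Parameter(torch.zeros(out_f, rank))  # starts as a no-op
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(torch.nn.Linear(512, 512))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)   # only the small A and B matrices are trained
```

Because B starts at zero, the adapted layer initially behaves exactly like the base layer, and only the tiny A/B matrices get updated during fine-tuning.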



