> The paper you linked is topically - and perhaps a bit ironically - co-authored with an Israel funded thinktank.
It's also made by a number of respected academics and academic institutions, and all you've done is ignore the content in favor of attacking a respectable source.
> Attributing this to people you disagree with on the issue is a telltale sign of bad faith
I've been in rooms in New York with very smart people who argued exactly this: that the founding of Israel was a mistake and that we should poof it away. (Something about decolonization usually meanders its way in there.)
They're a minority. Like the people who want to relocate Palestinians from Gaza are, to my knowledge, a minority. But they both exist.
> Like the people who want to relocate Palestinians from Gaza are, to my knowledge, a minority.
Not sure what "minority" you are talking about, because this is actually unfolding right in front of our eyes over the past 18 months with the backing of the most powerful country on the planet. Talking about this as if it's some fringe idea is disingenuous.
> this is actually unfolding right in front of our eyes
One, it's not currently happening. It's being actively discussed. But to my knowledge, mass expulsion has not (yet) happened.
Two, many things happen despite being only popular with a minority. I haven't seen polling around Abrego Garcia's de facto kidnapping and illegal detention in El Salvador, but I doubt more than the baseline 20% or so of nutters (a) understand what's going on and (b) support it.
Sorry, I was (unclearly, admittedly) referring to Americans in the pro- and anti- camps. I’m not sure how eradicating Israel would poll in Gaza, but I’d wager it’s north of 50% there, too.
The resistors at your local store have to go through logistics as well. And doing the last mile yourself is a lot less efficient than a postal service doing it.
Transport is also quite a small fraction of most products' environmental costs.
> If it's so easy, then why do people die from having lesions misdiagnosed as benign?
You're confusing False Negatives with True Negatives. For Non-Benign (Positive) vs. Benign (Negative) classification:
* True Positive Rate (TPR): non-benign classified as non-benign.
* False Positive Rate (FPR): benign misclassified as non-benign.
* True Negative Rate (TNR): benign classified as benign.
* False Negative Rate (FNR): non-benign misclassified as benign.
> It's quite easy to correctly classify 100% of benign cases as benign.
You can engineer a 100% TNR if you just classify everything as the "benign" negative class. The FNR is going to be 100% too, but that doesn't matter -- you correctly classified 100% of benign cases as benign.
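To make the degenerate case concrete, here's a minimal Python sketch (the data and labels are made up purely for illustration): classify everything as benign and the TNR is a perfect 100%, while the FNR is also 100%.

```python
# Labels (hypothetical): 1 = non-benign (positive), 0 = benign (negative).

def rates(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return {
        "TPR": tp / (tp + fn),  # non-benign caught
        "FNR": fn / (tp + fn),  # non-benign missed (the deadly one)
        "TNR": tn / (tn + fp),  # benign correctly cleared
        "FPR": fp / (tn + fp),  # benign flagged as non-benign
    }

y_true = [1, 0, 0, 1, 0, 0, 0, 1]   # ground truth: 3 non-benign, 5 benign
all_benign = [0] * len(y_true)      # the degenerate "everything is benign" classifier

print(rates(y_true, all_benign))
# {'TPR': 0.0, 'FNR': 1.0, 'TNR': 1.0, 'FPR': 0.0}
```

A perfect TNR tells you nothing on its own; you have to look at FNR (and TPR) alongside it.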
> why do people die from having lesions misdiagnosed as benign?
Because the FNR is not 0%. FNR is important. You probably want a decent TPR in there as well. And FPR can be very important too, depending on how life-changing/painful/invasive the treatment for a positive case is!
How much of cognition, especially "higher-level" cognition like language, is encoded genetically is highly controversial, and the thinking/pendulum in the last decade or two has shifted substantially towards only general mechanisms being innate. E.g. the cortex may be in an essentially "random" state prior to getting input.
That's why I qualified all of my statements with "may" and "might". Still, I think it's extraordinarily unlikely that human brains could turn out, for example, to have no special bias for learning language. The training algorithm in our brains would have to be so many orders of magnitude better than the state of the art in ANNs that it would boggle the mind.
Consider the comparison with LLM training. A state-of-the-art LLM that is, say, only an order of magnitude better than an average 4-year-old child at language use is trained on ~all of the human text ever produced, consuming many megawatt-hours of energy in the process. And it's helped by plenty of pre-processing of that text, and receives virtually no noise.
In contrast, a human child that is not deaf acquires language from a noisy environment full of auditory stimuli, in which they first have to even work out which sounds are language at all. To be able to communicate, and thus receive meaningful feedback on their learning, they also have to learn to control a very complex set of organs (tongue, lips, larynx, chest muscles), all with many degrees of freedom and the precise timing needed to produce any sound whatsoever.
And yet virtually all human children learn all of this in a matter of 12-24 months, and then spend another 2-3 years learning more language without struggling as much with the basics of word recognition and pronunciation. And they do all this while consuming a total of some 5 kWh, which includes many bodily processes that are not directly related to language acquisition, and a lot of direct physical activity too.
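For a rough sense of scale, here's a back-of-the-envelope ratio. The training-run figure is my own assumption, good for order of magnitude at best; the 5 kWh is the figure used above.

```python
# Both numbers are rough assumptions for illustration, not sourced measurements.
llm_training_kwh = 10_000_000   # assume ~10 GWh for a frontier-scale training run
child_kwh = 5                   # total energy attributed to the child above

print(f"ratio: ~{llm_training_kwh / child_kwh:.0e}")  # ratio: ~2e+06
```

Even if the LLM figure is off by an order of magnitude either way, the gap is still measured in factors of hundreds of thousands to millions.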
So, either we are missing something extremely fundamental, or the initial state of the brain is very, very far from random, and much of this was actually trained over tens or hundreds of thousands of years of hominid evolution.
Language capability is a bit difficult to quantify, but LLMs know dozens of languages, many of them better, grammar- and vocabulary-wise, than the vast majority of native speakers. They also encode orders of magnitude more fact-type knowledge than any human being. My take is that language isn't that hard, but humans just kinda suck at it, like we suck at arithmetic and chess.
There sure is some "inductive bias" in the anatomy of the brain to develop things like language but it could be closer to how transformer architectures differ from pure MLPs.
For decades the argument was that no generic system could learn language from input alone. That turned out to be flat wrong.
Seems quite comprehensive. I wonder if there’s a way to “escape” from the system. It looks nice for getting started but in practice at some point you’ll want programmatic access for e.g. advanced API features.
E.g. a well-behaved, system-independent pipeline serialization could work. I didn't find anything about this with a quick glance at the docs.
Edit: on a slightly longer look, there is a plugin system that allows adding custom code quite easily.
One of the maintainers here. Yes, this is a big part of our plans. In addition to our plugin system, which allows arbitrary Python scripts, we will soon publish how to add decorators to any existing script so it can be run externally but logged into Transformer Lab. So you could do training anywhere but trigger evals in the app, for example.