A key challenge with Alzheimer’s is that there is no good mouse model for the disease. While some approximate the phenotype, it’s not clear that the disease model as commonly studied in mice matches the mechanisms of the human disease well. There’s some thinking in the field that this could be a key reason why so many treatments have appeared very promising in mice and haven’t panned out in humans.
As a neuroscientist, my biggest disagreement with the piece is the author’s argument for compositionality over emergence. The former makes me think of Prolog and Lisp, while the latter is a much better description of a brain. I think emergence is a much more promising direction for AGI than compositionality.
Author here. So what! I am not talking about promising directions for AGI, I am talking about having computer systems that we can have confidence in. Sure, AGI if it ever happens will look more like emergence than compositionality, and I'm sure it won't feel a need to explain to us fallible humans why its decisions are correct. In the meantime, I'd like computer systems to be manageable, reliable, transparent, and accountable.
100% agree. When we explicitly segment and compose AI components, we are removing the ability for them to learn their own pathways between the components. The bitter lesson[1] has been proven to us time and time again: throwing a ton of data and compute at a model yields better results than what we could come up with ourselves.
That said, we can still isolate and modify parts of a network, and combine models trained for different tasks. But you need to break things down into components after the fact, instead of beforehand, in order to get the benefits of learning via scale of data + compute.
This is a well known phenomenon. It accounts, for example, for the flash perceived when someone inadvertently looks at an infrared class 5 laser and is blinded.
I don't mean to discount the cool imaging-related reconstruction of a point spread function, but rather to say that ultrasound attenuation through the skull and soft tissue has already been well characterized and it's not a surprise that it is viable to pass through.
Correct me if I’m wrong - but the novel thing is not that it’s possible for ultrasound to pass through the skull, but that it’s possible for it to pass through the skull and back in a way that an image can be reconstructed.
> OpenWater's Transcranial Focused Ultrasound Platform. open-LIFU is an ultrasound platform designed to help researchers transmit focused ultrasound beams into subject’s brains, so that those researchers can learn more about how different types of ultrasound beams interact with the neurons in the brain. Unlike other focused ultrasound systems which are aimed only by their placement on the head, open-LIFU uses an array to precisely steer the ultrasound focus to the target location, while its wearable small size allows transmission through the forehead into a precise spot location in the brain even while the patient is moving.
FWIU NIRS is sufficient for most non-therapeutic diagnostics though. (Non-optogenetically, infrared light stimulates neuronal growth, and blue and green lights inhibit neuronal growth)
A commercial medical ultrasound imaging device in doppler mode can pick up and map onto the image plane some of the vessels in the brain through the skull, but mostly just through the temporal bones (where the skull is only 1-2mm thick). (The commercial machines run doppler at a lower frequency than the imaging signal, so you get no structural image this way, only the color doppler map, unless you find a place in the skull where an emissary vein passes through the bone table and the imaging signal can ride through.)
Through the temporal bone of most people you can catch some sparse doppler signals with average hospital gear.
The fontanelles enable good ultrasound imaging on an entirely different level: a high-res greyscale image vs. a few sparse blobs of doppler from major vessels.
I know the exercise was to p-hack, but instead I decided to one-shot my attempt at the most reasonable model from first principles:
- given that we are looking at a national scale, use only national politicians
- use the components from Macroeconomics 101: exclude inflation as that’s on the Fed, exclude stocks as too conflated with FX and international investing alternatives
- don’t needlessly withhold data
Tried one hypothesis, so a p-value of 0.04 is accurate. Still OK to explore further if you Bonferroni-correct the p-values afterwards.
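To make the correction concrete, here's a minimal sketch in plain Python (the alpha of 0.05 and the example p-values are just illustrative assumptions, not anything from the article) of how a Bonferroni adjustment scales with the number of hypotheses you actually tried:

    def bonferroni_adjust(p_values, alpha=0.05):
        # Bonferroni: multiply each raw p-value by the number of tests
        # (capped at 1.0) and judge it against the original alpha.
        m = len(p_values)
        adjusted = [min(p * m, 1.0) for p in p_values]
        return [(p, adj, adj < alpha) for p, adj in zip(p_values, adjusted)]

    # One pre-registered hypothesis: p = 0.04 stands on its own.
    print(bonferroni_adjust([0.04]))        # [(0.04, 0.04, True)]

    # The same 0.04 after trying ten specifications no longer clears alpha.
    print(bonferroni_adjust([0.04] * 10))   # adjusted p = 0.4, not significant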
Fabric8Labs can print 100% density, whereas Desktop Metal is highly porous. Also Fabric8Labs can directly print pure copper, which has historically been very difficult. The process is also more energy efficient and better suited for small complex parts. Desktop Metal serves a different market in terms of material and size.
disclaimer: I'm a GP at Asimov Ventures and invested in Fabric8labs' pre-seed round.
> "directly print pure copper, which has historically been very difficult"
SLM [1] has been able to 3D print copper with precision down to the size of a mechanical pencil's lead for a long time already. In what way is ECAM better? Is it more precision + no need to handle powder + no need for a laser source and containment, traded against ECAM being slower, or am I missing some crucial feature?
The high thermal conductivity of copper makes it difficult to maintain needed temperatures during SLM. Also, copper is prone to oxidation at high temperatures, further complicating (thermal based) laser melting 3D printing techniques. It’s more typical to print copper alloys than pure copper.
SLM machines typically use an Argon gas chamber. DED machines use an Argon gas shield.
> It’s more typical to print copper alloys than pure copper.
In the context of modern SLM, it depends on your definition of "pure" and "alloy". During the process, a bit of resin is mixed into the powder and heat-treated in a final step to get to 99.9% pure copper.
edit: Just fixed up my knowledge. Indeed alloys are typically used (99% copper with things like chromium added, depending on use-case), though pure copper can be used with higher laser power.
Any references for 99.9% density with SLM copper? My understanding is that pure copper SLM printing is less frequently done as it doesn't work well with the infrared lasers on most machines, requires high heat and speed, and has more porosity than other alloys. It's also hard to print so that it's strong, conductive, and heat stable.
Sorry, I wasn't talking about density but about the copper content of a powder which is printable. Googling a bit, I found this presentation from 2022 showing that a density of 99.5% for pure copper is possible, although at half the productivity of a copper alloy: https://www.coppercouncil.org/wp-content/uploads/2022/02/TS2...
The copper use-case is what kicked off an industry-wide race towards offering blue lasers as an option. There is more than just wavelength that goes into printing good copper results, but that is a major factor.
This is a super cool device. Note that the decoding is highly limited: they decode into one of five different sentences. This is easier than, say, five words, as there is more information to distinguish between.
Unfortunately the media is blowing this way out of proportion, as the larynx alone does not contain sufficient information to decode silent speech.
If you also sense the lips, tongue articulators, and jaw, then general English decoding becomes possible with high accuracy (e.g. see our recent work here: https://x.com/tbenst/status/1767952614157848859). It's not in the preprint, but I've done experiments with only the larynx recorded, and performance is pretty abysmal on even a 10-word vocabulary, hence why they did a five-sentence task.
Why can't the muscles of the larynx, and perhaps the chest/diaphragm, be monitored and mapped to vocal cord noises rather than full speech? Just put the noise in the throat and let the rest of the body make it work.
> If you also sense the lips, tongue articulators, and jaw, then general English decoding becomes possible with high accuracy
A bit OT but I see this frequently and I'm curious. Why do you English speakers (or just a US phenomenon?) tend to use the word "English" instead of "language", "linguistic" or one of its related words to refer to a general concept?
Not OP, but as a native English speaker and former scientist (though not in this area), I would interpret "x does y on English tasks" to mean "we tested this in English and don't know if the effect generalizes to other languages".
In this case we do know if the effect generalizes to other languages. It cannot fail to; the larynx, lips, tongue, and jaw are almost all there is. For example, vowels are conventionally defined by jaw position ("height"), tongue position ("frontness"), and lip configuration ("rounded" or not).
You might miss some things like creaky voice or ejectives, you'll probably miss aspiration, but all that does is give you a worst-case scenario analogous to a native speaker trying to understand someone with a foreign accent. Extremely high accuracy will be possible.
Sure, in the same sense that it would be "unscientific" to conclude that someone's amputated leg didn't regenerate by chance, because the sample size is only 1.
If you know how you're recognizing English, and you know that other languages do not differ from English in relevant ways, then you know you can recognize those other languages. Pretending you don't know something you do know is not scientific.
This seems like damned-either-way. If they had only tested English and asserted that it was universally applicable to all languages, it’s likely you (or someone else) would rightfully object that it’s annoying when English speakers assume that’s all there is.
That's not a similar claim. Anyone can be annoyed by anything; the idea that it's "unscientific" to state that a method of recognizing English by measuring the positions of the lips, tongue, and jaw alongside the activity of the larynx will apply to every other spoken language in the world is ludicrous on its face. It will, because those measurements capture nearly every dimension of phonetic variation that exists. No one could believe otherwise, except apparently for metabagel.
You don't know, though. You have a good working hypothesis and you can make reasoned predictions, but it remains untested. The core principle of science is that we test our hypotheses.
Well, no, they're minor elements everywhere. You don't need to be able to capture every phonemic distinction in a language to get a near-perfect transcription, as witnessed by the fact that people understand foreign accents without difficulty. The much larger problem in understanding foreign speech is the odd word choices and lack of grammaticality, but those problems don't arise when you're transcribing native speech.
For some comparisons, think about the fact that Semitic languages are traditionally written without bothering to indicate the vowels, or that while modern English has a phonemic distinction between voiced and unvoiced fricatives, this has a very uneven correspondence to the same distinction as it exists in the writing system. In the case of the interdental fricatives, the writing system does not even contemplate a distinction. And there's nothing particularly problematic about this; if you delete all the voicing information from a stretch of English speech, it stays about as intelligible as it was before. (A voicing difference in stops is not even audible to English speakers. It's audible in fricatives, but no one is going to be confused.)
> For some comparisons, think about the fact that Semitic languages are traditionally written without bothering to indicate the vowels, or that while modern English has a phonemic distinction between voiced and unvoiced fricatives, this has a very uneven correspondence to the same distinction as it exists in the writing system.
And there's a very uneven correspondence between vowels as they exist in speech, and as they exist in the English writing system. Thought dissent mannequin swipe them or bite roar a lie.
You're right that usually, in English, you can understand a sentence with aspiration information stripped out. But just because it's not (usually) significant in English, that doesn't mean that's universal across all languages! Wikipedia has a short list of languages where aspiration makes a difference. https://en.wikipedia.org/wiki/Aspirated_consonant#Phonemic
> In many languages, such as Armenian, Korean, Lakota, Thai, Indo-Aryan languages, Dravidian languages, Icelandic, Faroese, Ancient Greek, and the varieties of Chinese, tenuis and aspirated consonants are phonemic. Unaspirated consonants like [p˭ s˭] and aspirated consonants like [pʰ ʰp sʰ] are separate phonemes, and words are distinguished by whether they have one or the other.
x1798DE captured my intent well. For example, tonal languages like Mandarin or Cantonese may be more difficult to decode if vocal cords aren’t vibrating, and languages with more phonemes that have both a voiced and unvoiced version might be more difficult. I still think decoding will be possible for general language, but that’s a hypothesis whereas I know it’s true for English.
> and languages with more phonemes that have both a voiced and unvoiced version might be more difficult.
I had the understanding that English is unusually rich in phonemes that occur in both a voiced and unvoiced version. But as I've mentioned sidethread, this just isn't very significant as far as transcribing English goes.
English has an almost full series of stop and fricative phonemes that exhibit voicing contrasts:
- Bilabial, alveolar, and velar stops /p, b, t, d, k, g/, though the distinction between /t/ and /d/ disappears intervocalically in American English. [In practice, English speakers differentiate these phonemes more by the contrast of aspiration than by the contrast of voicing.]
- Interdental, labiodental, alveolar, palatal, but generally not velar, fricatives /θ, ð, f, v, s, z, ʃ, ʒ/, along with palatal affricates /tʃ, dʒ/.
- Nasals and approximants are always voiced.
Compare a language like Mandarin Chinese, where there are between zero and one pairs of phonemes that contrast by voicing (the sound represented by pinyin "r" may be a voiced fricative otherwise equivalent to "sh", or it may be an approximant; there is no contrasting voiceless approximant), or Spanish, where only the stops feature this contrast.
What are the languages that have more voicing contrasts than English does? It would almost be necessary for such a language to distinguish between voiced and unvoiced vowels. (Some quick research suggests that Icelandic at least has a comparable number of voicing contrasts, but it is not obviously more than English and appears to be actively shrinking.)
> tonal languages like Mandarin or Cantonese may be more difficult to decode if vocal cords aren’t vibrating
More difficult, yes, but in the sense that decoding may take more computation, not that the error rate will go up.
Again, we can already observe that e.g. Mandarin speakers do not have trouble understanding text that carries no information about tone, nor do they have trouble understanding songs, where lexical tone is overridden by the melody of the song.
(What happens here depends what you mean. If you want to decode speech into pinyin with tone marks omitted, the lack of ability to measure tones will fail to be a problem by definition. If you want to decode into Chinese characters, you'll need a robust model of the language, at which point lack of tones will also fail to be a problem - the language model will cover for it. If you want to decode into pinyin with tone marks, you won't be able to do that without using a language model.)
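To make that last point concrete, here's a toy sketch in Python (the lexicon and bigram scores below are invented purely for illustration, not real linguistic data or anything from the paper) of how a decoder that never observes tone can still pick plausible characters by letting a tiny stand-in "language model" score the candidates:

    from itertools import product

    # Hypothetical lexicon: toneless pinyin syllable -> candidate characters.
    CANDIDATES = {
        "ma": ["妈", "马", "骂"],
        "shang": ["上", "商"],
        "ban": ["班", "办"],
    }

    # Hypothetical bigram scores standing in for a trained language model.
    BIGRAM_SCORE = {
        ("马", "上"): 0.9, ("妈", "上"): 0.1,   # "马上" (right away) is common
        ("上", "班"): 0.9, ("上", "办"): 0.1,   # "上班" (go to work) is common
        ("商", "班"): 0.2, ("商", "办"): 0.3,
    }

    def decode(syllables):
        # Score every candidate character sequence and keep the best one.
        best_seq, best_score = None, float("-inf")
        for seq in product(*(CANDIDATES[s] for s in syllables)):
            score = sum(BIGRAM_SCORE.get(pair, 0.01) for pair in zip(seq, seq[1:]))
            if score > best_score:
                best_seq, best_score = seq, score
        return "".join(best_seq)

    print(decode(["ma", "shang", "shang", "ban"]))  # -> 马上上班 ("off to work right away")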
I'd speculate English speakers are used to being part of a society where non-English speakers are present and politically important. It is polite not to assume that English = language. Even on the British Isles English isn't a universal thing. Let alone somewhere like America where it isn't even native.
"Language" just doesn't mean "English". In Australia if someone is talking about "language" on its own I'd assume they're Aboriginal advocates.
In the instances where a person says "English" in this kind of context, it catches your attention and you infer that the person is an English-speaker, and possibly American.
But when a person uses the generic word "language", you don't notice it.
This leads you to believe that English speakers "tend to use the word English," when that's not necessarily the case.
I don't know what this perceptual fallacy is called, but there's probably a word. In English :-)
There are about 6000 spoken languages around the world with an extreme variety in how they produce meaning. How could you make sweeping statements about all of them?