baanist's comments | Hacker News

I doubt you are going to change the mind of anyone who is already convinced that computers can think and reason, and that all that's required is the right sequence of numbers. Moreover, the internet is awash with bots and AIs working on behalf of governments to spread political and economic propaganda. HN has moderation to counter this, but the moderators are human, so they can't foil every attempt or notice every AI and bot.


Most of what you've said is true, but what exactly does it mean to "break barriers"? We cannot escape the laws of chemistry, physics, and thermodynamics, because we live on a compact manifold with finite resources that must eventually be recycled by the surrounding ecology. This is why plastics are now found in all newborns: the chemicals produced by our factories are recycled back into the ecology and our internal biomes.


Totally. I just meant that those are less direct. Species usually don't reach a point where they change the climate because they are stopped by other mechanisms long before. Our cycle is slower, which gives us time to destroy more stuff before we "get regulated", I suppose.


Like most things in software, the use cases are limited only by one's imagination. The browser has always been a Turing-complete development environment, so this is just another demonstration.




What does this have to do with ants evolving to eat plastic?


Maybe he means the mass extinction has already started (due to plastic pollution).


It's easier to take drugs even if it costs a lot of money.


Neural networks are Turing complete, i.e. there is a universal neural network that can compute any effectively computable function¹. Incidentally, when this is combined with Rice's theorem², it means that safety research is essentially an unsolvable problem: no non-trivial behavioral property of a sufficiently expressive neural network, e.g. one that can simulate a Turing machine, can be decided with finite computation. (A toy illustration is sketched below the references.)

1: https://www.sciencedirect.com/science/article/pii/0893965991...

2: https://en.wikipedia.org/wiki/Rice%27s_theorem
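
To make the universality claim slightly concrete, here is a toy sketch (my own illustration, not from either reference) of a recurrent threshold network simulating a finite automaton, the parity machine; the constructions in the literature scale this kind of gadget up to full Turing machines:

    import numpy as np

    def step(x):
        return (x > 0).astype(float)

    # Two-layer threshold network computing XOR(state, input).
    # Run recurrently, the state tracks the parity of the bit stream.
    W1 = np.array([[1.0, 1.0],   # hidden unit 1: OR(state, input)
                   [1.0, 1.0]])  # hidden unit 2: AND(state, input)
    b1 = np.array([-0.5, -1.5])
    W2 = np.array([1.0, -1.0])   # XOR = OR AND NOT AND
    b2 = -0.5

    def xor(s, x):
        h = step(W1 @ np.array([s, x]) + b1)
        return float(step(W2 @ h + b2))

    state = 0.0
    for bit in [1, 0, 1, 1, 0, 1]:
        state = xor(state, bit)
    print(state)  # 0.0: the stream contains an even number of 1s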


Super interesting, and I'd not seen either reference. Thanks very much.


Why aren't AI researchers automating the search for efficient architectures?



There has been some work, but the problem is that it's such a massive search space. Philosophically speaking, if you look at how humans came into existence, you could argue that the process of evolution from basic lifeforms amounts to one giant compute step per minute across all of Earth, where genetic selection happens and computation proceeds to the next minute. That's a fuckload of compute.

In more practical terms, you would imagine that an advanced model contains some semblance of a CPU to be able to truly reason. Given that CPUs can be built entirely from NAND gates (which take two neurons to represent) and are structured in a recurrent way, you fundamentally have to rethink how to train such a network, because backprop obviously won't work to capture things like binary decision points.
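
For concreteness, NAND as a neuron is tiny; with a hard step activation a single unit already suffices (the two-neuron figure presumably comes from a particular encoding), so this numpy sketch is just an illustration of the building block:

    import numpy as np

    def step(x):
        return int(x > 0)

    # A single threshold neuron computing NAND: fires unless both inputs are 1.
    w = [-2.0, -2.0]
    b = 3.0

    for a in (0, 1):
        for c in (0, 1):
            print(a, c, step(np.dot(w, [a, c]) + b))  # NAND truth table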


I thought the whole point of neural networks was that they were good at searching through these spaces. I'm pretty sure OpenAI is pruning their models behind the scenes to reduce their costs, because that's the only way they can keep reducing the cost per token. So their secret sauce at this point is whatever pruning AI they're using to whittle the large computation graphs into more cost-efficient consumer products.
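
For what it's worth, the simplest form of pruning is magnitude-based weight removal; a toy numpy sketch (nobody outside OpenAI knows what they actually do, so this is purely illustrative):

    import numpy as np

    def magnitude_prune(weights, sparsity=0.5):
        # Zero out the smallest-magnitude fraction of the weights.
        threshold = np.quantile(np.abs(weights), sparsity)
        return np.where(np.abs(weights) < threshold, 0.0, weights)

    w = np.random.randn(4, 4)
    print(magnitude_prune(w, sparsity=0.75))  # roughly 75% of entries zeroed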


When you train a neural network, it is not search; it is descent along a curve.

If you were to search for billions of parameters by brute force, you literally could not do it in the lifespan of the universe.

A neural network is differentiable, meaning you can take its derivative. You train the parameters by finding the gradient with respect to each parameter and moving in the opposite direction. Hence the name of the popular algorithm, gradient descent.
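
The whole idea fits in a few lines; a toy numpy sketch fitting a single parameter:

    import numpy as np

    # Fit w in y = w * x by gradient descent on mean squared error.
    x = np.array([1.0, 2.0, 3.0])
    y = 2.0 * x               # the true w is 2
    w, lr = 0.0, 0.05         # initial guess and learning rate

    for _ in range(200):
        grad = np.mean(2 * (w * x - y) * x)  # d/dw of mean squared error
        w -= lr * grad                       # step opposite the gradient
    print(w)  # converges to ~2.0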


A biological neural network is certainly not differentiable. If the thing we want to build is not realizable with this technique, why can't we move on from it?

Gradient descent isn't the only way to do this. Evolutionary techniques can explore impossibly large, non-linear problem spaces.

Being able to define any kind of fitness function you want is sort of like a superpower. You don't have to think in such constrained ways down this path.
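
Even a minimal (1+1) evolution strategy shows the flavor; a toy numpy sketch with an arbitrary fitness function (no gradients anywhere):

    import numpy as np

    def fitness(w):
        # Any scoring function you like; here, closeness to the vector of 3s.
        return -np.sum((w - 3.0) ** 2)

    rng = np.random.default_rng(0)
    w = rng.standard_normal(5)

    # (1+1) evolution strategy: mutate, keep the child only if it scores better.
    for _ in range(2000):
        child = w + 0.1 * rng.standard_normal(5)
        if fitness(child) > fitness(w):
            w = child
    print(w)  # close to [3, 3, 3, 3, 3]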


The issue is that it's still a massive search space.

You can try this yourself: go play nandgame and beat it, at which point you should be able to make a CPU out of NAND gates. Then set up an RNN with as many layers as the total depth of your NAND circuit and as wide as all the inputs, with every output fed back into the first input. Then run PSO or a GA on all the weights and see how long it takes you to get a fully functioning CPU.
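
Back-of-the-envelope, the scale is absurd even for a toy CPU (all sizes below are hypothetical, just to show the order of magnitude):

    import math

    # Hypothetical toy CPU: 5,000 NAND gates, 2 neurons each, 64-bit-wide RNN.
    gates, neurons_per_gate, width = 5000, 2, 64
    params = gates * neurons_per_gate * width  # ~640,000 weights

    # Even restricting each weight to only 2 possible values:
    print(f"~10^{int(params * math.log10(2))} candidate configurations")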


>A biological neural network is certainly not differentiable

Biology is biology and has its constraints. Doesn't necessarily mean a biologically plausible optimizer would be the most efficient or correct way in silicon.

>If the thing we want to build is not realizable with this technique, why can't we move on from it?

All the biologically plausible optimizers we've fiddled with (and we've fiddled with quite a lot) just work (results-wise) like gradient descent, but worse. We've not "moved on" because gradient descent is and continues to be better.

>Evolutionary techniques can explore impossibly large, non-linear problem spaces.

Sure, with billions of years (and millions of concurrent experiments) on the table.


Program synthesis is a generalization of this. I’m not sure that many ML researchers have thought about the connections yet.


The search space is far too wide and difficult to parameterize, and there is a wide gap between effective and ineffective architectures, i.e. a very small change can make a network effectively DOA.


Notably, architecture search was popular for small vision nets, where the cost of many training runs was low enough. I suspect some of the train-then-prune approaches will come back, but even then only from the best-funded teams.


Are all fruit fly brains the same? Does anyone know what has actually been mapped and why it would generalize from one fruit fly to the next?


I don't think that drosophila are eutelic (https://en.wikipedia.org/wiki/Eutely), so no two flies have precisely the same cells at precisely the same locations (that is true for C. elegans, whose connectome is probably the best studied).

The large-scale architecture will be roughly the same between any two individuals. You would likely need some sort of mapping (like an embedding) to generalize. It's definitely an active area of research.


The article describes it as slicing the fly brain into very thin slices, which are imaged by an electron microscope.

Then you analyze the slice images and determine the neurons and their connections. This is the hard part, and the breakthrough is an AI-based method.

Pretty sure they've only mapped one brain so far.


Fortunately, the whole chain of slicing, imaging, and analysis is now at least partially automated, so in theory you can repeat the process with nothing more than some time on the equipment and a bit of compute.

In practice, I suspect there's a fair bit of grad student manual labor that keeps the pipeline flowing...


They crowdsourced three million manual corrections to the AI output, yeah.


That sounds like a great training set then.


Yes, they are apparently exactly the same, with exactly the same neurons and connections!

I happened to go for a walk with the corresponding author and made her repeat this fact for me.


I don't think that's correct; the Nature news article about the paper (https://www.nature.com/articles/d41586-024-03190-y) says they don't, and drosophila are not eutelic (although I see that some insects do have "partial constancy"). Could you ask the author to clarify?

Looking in the paper more closely they say: """After matching, Schlegel et al.12 also compared our wiring diagram with the hemibrain where they overlap and showed that cell-type counts and strong connections were largely in agreement. This means that the combined effects of natural variability across individuals and ‘noise’ due to imperfect reconstruction tend to be modest, so our wiring diagram of a single brain should be useful for studying any wild-type Drosophila melanogaster individual. However, there are known differences between the brains of male and female flies46. In addition, principal neurons of the mushroom body, a brain structure required for olfactory learning and memory, show high variability12. Some mushroom body connectivity patterns have even been found to be near random47, although deviations from randomness have since been identified48. In short, Drosophila wiring diagrams are useful because of their stereotypy, yet also open the door to studies of connectome variation."""

I would expect the overall architecture to be the same, but not the cell identities or the connections. But as always, I'm happy to be shown wrong with facts.


No need to get angry and sarcastic.


highly stereotyped, definitely not identical


These algorithms are not capable of symbolic reasoning and abstract interpretation. The most obvious demonstration of this is that no model on the market can currently solve sudoku puzzles, even though billions of dollars have been spent "training" them on logic and reasoning.

