Kind of a false dichotomy. A great example is debuggers vs print statements. Some people get by just fine with print statements, others lean heavily on debuggers. Another example is IDE vs plain Vim.
Becoming obsolete is a fear of people who are not willing or able to learn arbitrary problem domains in a short amount of time. In that case learning to use a particular tool will only get you so far. The real skill is being able to learn quickly (enthusiasm helps).
So, "useless or dangerous tools" is not a self-contradictory phrase.
Gas-powered pogo sticks, shoe-fitting X-ray machines, radium-flavored chocolates, the Apollo LLTV, table saws, Flex Seal for joining two halves of boats together, exorbitantly parallelized x86 CPUs, the rackable Mac Pro with an M1 SoC, image generation AI, etc.
AI tools fall into the category of just-in-time learning. No even semi-competent software engineer is going to become obsolete because they don't know the newest, most hyped AI tool. And anyone stupid enough to hire on that basis isn't worth working for.
How processors work, how caches and memory work, how the browser works, data structures and algorithms, even design patterns: these are all important pieces of foundational knowledge. How to tell an AI to shit out some code or answer a question definitely isn't.
Symbolic processing was obviously a bad approach to building a thinking machine. Well, obvious now; 40 years ago, probably not as much, but there were strong hints back then, too.
"AI agent" roughly just means invoking the system repeatedly in a while loop and giving the system a degree of control over when to stop the loop. That's not a particularly novel or breakthrough idea, so similarities are not surprising.
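A minimal sketch of that while-loop structure. The `call_model` stub below is hypothetical and stands in for any LLM API call; it's stubbed out so the example runs offline:

```python
def call_model(history):
    # Hypothetical stand-in for an LLM API call. This stub simply
    # "decides" to stop after two working turns.
    if len(history) >= 2:
        return {"action": "stop", "output": "done"}
    return {"action": "continue", "output": f"step {len(history) + 1}"}

def run_agent(task, max_turns=10):
    history = []
    for _ in range(max_turns):          # outer loop: invoke the model repeatedly
        reply = call_model(history)
        history.append(reply["output"])
        if reply["action"] == "stop":   # the model controls when the loop ends
            break
    return history

print(run_agent("demo"))  # → ['step 1', 'step 2', 'done']
```

The `max_turns` cap is the usual safety valve: the model gets control over stopping, but not unbounded control.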
I'm not convinced that symbolic processing doesn't still have a place in AI though. My feeling about language models is that, while they can be eerily good at solving problems, they're still not as capable of maintaining logical consistency as a symbolic program would be.
Sure, we obviously weren't going to get to this point with only symbolic processing, but it doesn't have to be either/or. I think combining neural nets with symbolic approaches could lead to some interesting results (and indeed I see some people are trying this, e.g. https://arxiv.org/abs/2409.11589)
I agree that symbolic processing still has a role - but I think it's the same role it has for us: formal reasoning. I.e. a specialized tool.
"Logical consistency" is exactly the kind of red herring that kept us stuck with the symbolic approach longer than it should have. Humans aren't logically consistent either - except in some special situations, such as solving logic problems in school.
Nothing in how we think, how we perceive the world, categorize it and communicate about it has any sharp boundaries. Everything gets fuzzy or ill-defined if you focus on it. That's not by accident. It should've been apparent even then that we think stochastically, not via formal logic. Or maybe the Bayesian interpretation of probabilities was too new back then?
A related blind alley we got stuck in for way longer than we should've (many people are still stuck there) is trying to model natural language using formal grammars, or worse, arguing that our minds must be processing language this way. It's not how language works. LLMs are arguably conclusive empirical proof of that.
Yeah, I agree logic and symbolic reasoning have to be _applications_ of intelligence, not the actual substrate. My gut feel is that intelligence is almost definitionally chaotic and opaque. If one thing prevents superhuman AGI, I suspect it will be that targeted improvements in intelligence are almost impossible, and it will come down to the energy we can throw at the problem and the experiments we're able to run and evaluate.
What’s interesting to me is the rise of agentic approaches which are effectively “build a plethora of tools and heuristics” with an outer loop that combines, mutates and assigns values to these components. Where before that process was more rigid, we now have access to much more fluid intelligence but the structure feels similar - let the AI prod at the world and make experiments, then look at what worked and think of some plausible enhancements. At a certain point you’re enhancing the code that enhances the enhancer and all bets are off.
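A tiny, purely illustrative sketch of that outer loop pattern: mutate candidate components, assign them values, and keep what worked. The numeric "component" and the `mutate`/`score` functions below are all made up for the example:

```python
import random

random.seed(0)  # deterministic for the sake of the example

def mutate(component):
    # Toy mutation: nudge a numeric parameter. A real system would
    # mutate tools, prompts, or heuristics instead.
    return component + random.uniform(-1, 1)

def score(component):
    # Toy value assignment: closer to 3.0 is better.
    return -abs(component - 3.0)

pool = [0.0]
for _ in range(200):                                  # outer loop: prod, evaluate
    candidate = mutate(random.choice(pool))
    pool.append(candidate)
    pool = sorted(pool, key=score, reverse=True)[:5]  # select: keep what worked

best = pool[0]
```

The fluid-intelligence version described above swaps the fixed `mutate` and `score` for model-generated variations and model-assigned judgments, but the skeleton is the same.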
The problem is intensity/power. As discussed previously, photon-photon interactions are weak, so you need very high intensities to get a reasonable nonlinear response. The issue is that optical matrix operations work by spreading the light over many parallel paths, i.e. reducing the intensity in each path. There might be some clever ways to overcome this, but so far everyone has avoided that problem. They said they did "optical deep learning"; what they really did was an optical matrix multiplication, but saying that would not have resulted in a Nature publication.
The real issue is trying to backpropagate those nonlinear optics. You need a second nonlinear optical component that matches the derivative of the first nonlinear optical component. In the paper above, they approximate the derivative by slightly changing the parameters, but that means the training time scales linearly with the number of parameters in each layer.
Note: the authors claim it takes O(sqrt(N)) time, but they're forgetting that the learning rate mu = o(1/sqrt(N)) if you want to converge to a minimum:
Loss(theta + dtheta) = Loss(theta) + dtheta * dLoss(theta) + O(dtheta^2)
                     = Loss(theta) + mu * sqrt(N) * C   (assuming dLoss is Lipschitz continuous)
==> min(Loss) ~ mu * sqrt(N) * C / 2
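The linear-in-N training cost of the derivative approximation described above can be sketched directly. This is a generic central-difference estimate, not the paper's exact scheme; the toy quadratic `loss` stands in for the optical forward pass:

```python
def loss(theta):
    # Toy quadratic loss standing in for the optical forward pass.
    return sum(t * t for t in theta)

def fd_gradient(theta, eps=1e-5):
    # Central finite differences: each of the N parameters needs its own
    # pair of forward passes, so cost per training step is O(N).
    grad = []
    for i in range(len(theta)):
        up = list(theta);   up[i] += eps
        down = list(theta); down[i] -= eps
        grad.append((loss(up) - loss(down)) / (2 * eps))
    return grad

theta = [1.0, -2.0, 0.5]
print(fd_gradient(theta))  # analytic gradient of sum(t^2) is 2*theta = [2.0, -4.0, 1.0]
```

Backpropagation would get the whole gradient from one backward pass; here the 2N forward evaluations are the price of not having a matched derivative component in hardware.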
Not sure I agree with housing insurance as a public service… we do want to expose people to some risk to drive behavioral changes. People really shouldn’t be building houses in low-lying areas near the shoreline in Florida. But if there’s no risk because it’s covered by other taxpayers, then they will.