mafribe's comments (Hacker News)

Neuromorphic computing has been an ongoing failure (for general-purpose processors or even AI accelerators) ever since Carver Mead introduced, and quickly abandoned, it nearly half a century ago. Bill Dally (Nvidia's chief scientist) concurs: "I keep getting those calls from those people who claim they are doing neuromorphic computing and they claim there is something magical about it because it's the way that the brain works ... but it's truly more like building an airplane by putting feathers on it and flapping with the wings!" From: Hardware for Deep Learning, Hot Chips 2023 keynote.

We have NO idea how the brain produces intelligence, and as long as that doesn't change, "neuromorphic" is merely a marketing term, like Neurotypical, Neurodivergent, Neurodiverse, Neuroethics, Neuroeconomics, Neuromarketing, Neurolaw, Neurosecurity, Neurotheology, Neuro-Linguistic Programming: the "neuro-" prefix suggests a deep scientific insight in order to fool the audience. There is no hope of us cracking the question of how the human brain produces high-level intelligence in the next decade or so.

Neuromorphic does work for some special purpose applications.


I like the feather analogy. Early on, everything humans knew about flight came from biology (watching birds fly), but trying to make a flying machine modeled after a bird would never have worked. We can fly today, but plane designs are nothing like biological flying machines. In the same way, everything we know about intelligence comes from biology, and trying to invent an AGI modeled on biological intelligence may be just as impossible as a plane designed around how birds fly.

/way out of my area of expertise here


And it's only now, having built our own different kind of flying machine, that we understand the principles of avian flight well enough to build our own ornithopters. (We don't use ornithopters because they're not practical, but we've known how to build them since the 1960s.) We would have never gotten here had we just continued to try to blindly copy birds.


The paper goes out of its way not to compare the 42% figure with anything. Is "42% within the top 5 suggestions" good or bad?

How would an experienced engineer score on the same task?


This is wrong!

The term hyperplane already assumes that the hypothesis space your learning algorithm searches has some kind of dimension and is some variant of a Euclidean / vector space (or its generalisations). This is not the case for many forms of ML, for example grammar induction (where the hypothesis space consists of Chomsky-style grammars), inductive logic programming (where the hypothesis space consists of Prolog or similar programs), or, more generally, program synthesis (where programs form the hypothesis space).
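To make the point concrete, here is a toy sketch (hypothetical, in Python) of a grammar-induction hypothesis space: each hypothesis is a set of production rules, and candidates are compared only by whether they derive the positive examples. There is no dimension, inner product, or hyperplane anywhere in sight.

```python
# Toy grammar-induction search: hypotheses are rule sets, not vectors.

def derives(rules, symbol, target, depth=6):
    """Can `symbol` derive the string `target` using `rules`?"""
    if depth == 0:
        return False
    if symbol == target:
        return True
    for lhs, rhs in rules:
        if lhs == symbol:
            # try replacing the symbol by the rule's right-hand side
            if rhs == target or derives(rules, rhs, target, depth - 1):
                return True
    return False

# Two candidate hypotheses (grammars) over start symbol "S"
h1 = [("S", "ab")]
h2 = [("S", "ab"), ("S", "cd")]

positives = ["ab", "cd"]
consistent = [h for h in (h1, h2)
              if all(derives(h, "S", p) for p in positives)]
# only h2 derives every positive example
```

The "search" here is discrete enumeration over structured objects; nothing like a separating hyperplane is definable on this space.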


It can also just be some sort of partitioning. I would be really surprised if there was no partitioning of some space.


Note that "some sort of partitioning" isn't a hyperplane. A partition is a set-theoretic concept. A hyperplane is (a generalisation of) a geometric concept, and so has much more structure.
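A minimal illustration of the difference (hypothetical sketch in Python; the `partition_by` helper is made up for this example): a partition only needs disjoint blocks that cover the set, with no geometry at all.

```python
# A partition needs no geometry: group strings by their first letter.
# The blocks are disjoint and cover the set, but there is no notion
# of dimension, distance, or separating hyperplane here.
from collections import defaultdict

def partition_by(xs, key):
    blocks = defaultdict(list)
    for x in xs:
        blocks[key(x)].append(x)
    return dict(blocks)

words = ["apple", "ant", "bear", "cat", "cow"]
p = partition_by(words, key=lambda w: w[0])
```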


Alright how about coalgebra.


Multi-paradigm languages (and there are several popular ones, including Scala) need strong software-architecture enforcement. The software architects need to decide which style to use, e.g. purely functional and monadic vs nicer-looking Java-style OO.


What a day to be alive, when Java-style OO is described as a nicer looking option!


What 3.0 debacle?

I've just started a new project and we are using Scala 3. It's wonderful. (Admittedly, it's blank-slate, so no dependencies on previous Scala, e.g. the old Scala approach to meta-programming.)

> expressing how I think.

What you are saying is basically that you like ML-family languages, which also include Haskell, OCaml, Rust, and Kotlin. Increasingly, other languages are taking the ML-family lessons from 1973 on board!


Hard drug addicts prefer using their limited income for drugs rather than rent. (Note that as of Jun 2023, hard drug addiction has no known cure.)


Without wishing to engage in a discussion of the merits of the respective positions, it might be interesting to know that Maturana rejected Luhmann's use of autopoiesis in social systems.


How? Where?


If my understanding is correct, because Maturana was relatively strict about using autopoiesis only for biological systems with material and energy flow.


> Throughout the history of the US

Is the US the world's only country?

Throughout the history of the world, who counts as XYZ has been redefined many times to suit the political goals of the people at the time.

I recall when we stopped being Yugoslavian and started shooting at each other as Serbs or Croats. The Russian guy sitting next to me married a Ukrainian woman a decade ago. Recently Russians and Ukrainians started shooting at each other. There's a great deal to read on this topic if you're genuinely interested.

“The thing that hath been, it is that which shall be; and that which is done is that which shall be done: and there is no new thing under the sun.”


Of course not, but the context is a conference in the US.


And you got it wrong. The Irish were not considered non-white. The ethnic divisions were (and are) most definitely not just White vs PoC. It used to be extremely important whether a person was a Catholic or a Protestant, for example, all other kinds of Christianity being conveniently ignored.

I liked JeanHeyd's blog because he is a good writer (but a bit on the chatty side). I also love what he does for the C standardization process. His skin colour is completely irrelevant.


> The Irish were not considered non-white.

They absolutely were, and you can find writing by Ben Franklin, in his own words, saying so explicitly.


> money are kept by the people in banks and not released

The banks will reinvest this money. (Cf Fractional-reserve banking).


> The banks will reinvest this money. (Cf Fractional-reserve banking).

Only if there is demand for the money at a high interest rate. The credit boom is dependent on the borrower as well.
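The textbook fractional-reserve arithmetic behind both points can be sketched as follows (the 10% reserve ratio is an illustrative assumption, not a claim about any real banking system). Each deposit is partly re-lent, so an initial deposit expands total deposits geometrically toward `initial / r`, but only if borrowers actually take the loans at each round, which is the commenter's point.

```python
# Money-multiplier sketch: an initial deposit, reserve ratio r; each
# round the bank keeps the fraction r and re-lends (1 - r), which is
# assumed to be redeposited in full.

def total_deposits(initial, reserve_ratio, rounds=1000):
    total, deposit = 0.0, initial
    for _ in range(rounds):
        total += deposit
        deposit *= (1 - reserve_ratio)  # amount re-lent and redeposited
    return total

# With r = 0.10, the geometric series converges to initial / r
approx = total_deposits(100.0, 0.10)
limit = 100.0 / 0.10  # = 1000.0
```

If loan demand dries up at any round, the series truncates and the multiplier never reaches its theoretical limit.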


Since there is often a misconception that formally verified software must be absolutely free from problems, from the conclusion of the paper: "Extensive testing on formally verified software is necessary for at least two reasons. First, the formal specification may not guarantee all the properties expected by users, but only critical ones (e.g. no miscompilation for CompCert). Second, critical bugs may still remain, because the formal specification might not exactly fit reality. However, in this case, critical bugs are in the—much smaller and simpler—[trusted computing base]"


An excellent clarification to make.

"Formally verified software" means that some portion of the software's expected behavior has been rendered as a proposition, and that proposition has been proven correct using some kind of proof assistant or theorem-proving software that we assume correctly validates the proof of the proposition. Bugs can exist in three places, then:

1. Within the theorem-proving software.

2. Within the proposition itself.

3. Within the portions of the code that are not formally verified.

We generally have good reason to believe that bugs in (1) are almost nonexistent. Bugs in (2) and (3) are relatively common, but (3) is no different from the bugs present in software with no formal verification.

A lot of modern theorem-proving research lies in helping people to address bugs in (2) by making propositions easier to write and verify, and in (3) through the same mechanism (because then we can formally verify a greater portion of the software).
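For concreteness, "rendering expected behavior as a proposition" might look like this in Lean 4 (a minimal sketch; the property chosen, that reversing a list twice is the identity, is purely illustrative):

```lean
-- The expected behavior "reversing a list twice gives it back",
-- written down as a proposition and discharged by a standard-library lemma.
theorem rev_involutive (l : List Nat) : l.reverse.reverse = l :=
  List.reverse_reverse l
```

A bug of kind (2) would be stating the wrong proposition here, e.g. one too weak to rule out the failures users actually care about.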


> "Formally verified software" means that some portion of the software's expected behavior has been rendered as a proposition, and that proposition has been proven correct using some kind of proof assistant or theorem-proving software that we assume correctly validates the proof of the proposition.

All programs that don't contain undefined behavior satisfy this (in an annoying tautological way) because all programs are proofs of themselves. So the thing you're getting paid for, if you are, is to ask the right questions of your theorem prover (which may be a Python interpreter).


A compiler is a theorem prover, sure, but compilers vary in the expressiveness of the propositions they are capable of proving (a Python interpreter would be a very weak theorem prover ;) )

Consider: is there an equivalent concept of Turing Completeness for compilers with respect to computational propositions?


I know what you're saying, but there's a difference between "the expected behavior has been rendered as a proposition" (what I said) and "the expected behavior corresponds with a proposition" (what you're saying).

To "render something as a proposition" means for it to be literally written down in the terms of formal logic. Writing a program in Python doesn't generally constitute this form of expression, but writing a proof of a Python program would.

