Indeed, what bqmjjx0kac proposes is orthogonal to fuzzing (which is an algorithmic technique to drive test coverage): it is an instance of defining, and then dynamically checking, a property.
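To make "defining and dynamically checking a property" concrete for software folks, here is a minimal sketch using Haskell's QuickCheck library (my example, not bqmjjx0kac's): you state the property once, and the framework drives it with random inputs.

    import Test.QuickCheck

    -- The property under test: reversing a list twice is the identity.
    prop_revRev :: [Int] -> Bool
    prop_revRev xs = reverse (reverse xs) == xs

    -- QuickCheck generates random inputs and checks the property for
    -- each one; any counterexample found is shrunk and reported.
    main :: IO ()
    main = quickCheck prop_revRev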
Fuzzing and constrained random testing, while both based on randomisation, are not the same thing.
A big problem with fuzzers, from the point of view of hardware validation, is that it's unclear what coverage guarantees they give. Would you tape out your processor design once your fuzzer no longer finds bugs, if you had no idea about test coverage? OTOH, fuzzing has been very effective in software, so it is natural to ask whether those gains can also be had for hardware.
The grandparent post is incorrect. H/w silicon validation with constrained random testing hasn't been the norm for 10 years, it's at least 20, which is when I first got into that industry.
And yes, we had coverage-driven verification back in 2005 as well. No, we didn't "tape out" our CPUs until we'd hit our testing plan, which was defined by coverage metrics.
P.S. Pre-si verification had testing pipelines way back then, before they became the norm in s/w.
Coverage in hardware is different from coverage in software. A solid part of SystemVerilog deals with that, as does part of UVM. Also, coverage from simulation and from property checking (aka formal) can be used together.
> "We note in particular that in the sequence induction IQ tasks, our system achieves human-level performance. This is notable because the Apperception Engine was not designed to solve these IQ tasks; it is not a bespoke hand-engineered solution to this particular domain. – Rather, it is a general purpose system that attempts to make sense of any sensory sequence, that just happens to be able to solve these IQ tasks “out of the box”.
Worth repeating:
> just happens to be able to solve these IQ tasks “out of the box”.
Honest question: what could anyone get out of this? What is the point of interpreting Kant if the interpreter of Kant can't even perceive what humans perceive? Kant doesn't seem to have much value outside the context of human perception.
We share a universe of perception with that thing. If it generates the same output as humans given the same input, we learn that we have learned nothing, and this too is knowledge of reality.
This is being worked on with the Sail model [1]. In order for a RISC-V extension to be ratified, it ought to be implemented in Sail. The understanding is also that the RISC-V ISA manual should be built with code snippets from the Sail model (similar to how the Arm ARM is built from the ASL definition). The main issue is a lack of people willing and able to write Sail for RISC-V. But that is beginning to change, since RISC-V member companies are increasingly using Sail. As an example, the RISC-V exception type is defined in [2]. Is that precise enough for you?
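(Not the actual Sail source, which is what [2] points at, but to give a flavour of what "defined formally" means, here is a hypothetical Haskell analogue: a closed sum type of exception causes plus a total encoding into the standard RISC-V mcause codes, so the set of causes and their numbering are fixed by construction. The Haskell names are mine; the cause codes follow the RISC-V privileged spec.)

    -- Hypothetical Haskell analogue of a formal exception-type definition.
    data ExceptionType
      = InstrAddrMisaligned | InstrAccessFault | IllegalInstr | Breakpoint
      | LoadAddrMisaligned  | LoadAccessFault
      | StoreAMOAddrMisaligned | StoreAMOAccessFault
      | EnvCallFromU | EnvCallFromS | EnvCallFromM
      | InstrPageFault | LoadPageFault | StoreAMOPageFault
      deriving (Show, Eq)

    -- Total encoding into the standard mcause values; no cause can be
    -- forgotten or mis-numbered without the type checker complaining.
    causeCode :: ExceptionType -> Int
    causeCode e = case e of
      InstrAddrMisaligned    -> 0
      InstrAccessFault       -> 1
      IllegalInstr           -> 2
      Breakpoint             -> 3
      LoadAddrMisaligned     -> 4
      LoadAccessFault        -> 5
      StoreAMOAddrMisaligned -> 6
      StoreAMOAccessFault    -> 7
      EnvCallFromU           -> 8
      EnvCallFromS           -> 9
      EnvCallFromM           -> 11
      InstrPageFault         -> 12
      LoadPageFault          -> 13
      StoreAMOPageFault      -> 15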
The formal RISC-V ISA specification is not finished; you are welcome to make a PR to clarify things.
This is a much older phenomenon, the railway town, and it is so frequent that it has its own Wikipedia page [1]. At a smaller scale this also happens in cities where new underground lines are built. It's a well-established way of paying for public transport. See [2] for a discussion.
Category theory has been extremely influential in computer science, primarily because a (maybe the) prototypical idealised programming language, the simply typed lambda-calculus, is at the same time a logic and a convenient syntax for cartesian closed categories (basically the nicest, most well-behaved kind of category, and in turn a foundation of topoi, a generalisation of the essence of all logics).
This triad gives extreme scope for technology transfer between mathematics, logic and programming.
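To make the correspondence concrete (my sketch, not part of the original comment): the defining isomorphism of a cartesian closed category, Hom(A × B, C) ≅ Hom(A, C^B), is just currying, and read as logic it is the equivalence of (A ∧ B) → C and A → (B → C).

    -- In a cartesian closed category, morphisms A×B → C correspond
    -- one-for-one with morphisms A → C^B. In Haskell, read as a syntax
    -- for such a category, the correspondence is witnessed by curry/uncurry:

    curry' :: ((a, b) -> c) -> (a -> (b -> c))
    curry' f = \a b -> f (a, b)

    uncurry' :: (a -> (b -> c)) -> ((a, b) -> c)
    uncurry' g = \(a, b) -> g a b

    -- Read through Curry-Howard, the same isomorphism is the logical
    -- equivalence of (A ∧ B) → C and A → (B → C).
    main :: IO ()
    main = print (curry' fst (1 :: Int) "x")  -- prints 1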
Any source for that? I don't think that category theory has had even a minuscule influence on computer science.
Type theory? Sure. Logic and model theory as well. Set theory? Number theory? Heck, even geometry is used for dozens of algorithms not related to geometry (convex optimisation in an n-dimensional space, Hamming distance, abstract convex geometry, etc.). But category theory? Do you have any influential papers or books in mind?
I would argue that the last 30 years (if you arbitrarily begin at Moggi's paper on monadic computation) have shown ample impact of category theory on programming language theory, and on type theory more generally. Additionally, there is earlier work on the semantics of programming languages, dating to the early 80s, with strong category theory roots. There is a large body of work (I'm on mobile and don't have links) that uses category theory to backstop the development of algorithms and generic code for all kinds of algebraic data types and container types, for instance.
There is a thriving 'school' of computer science that views category theory as the third leg of the category theory, type theory, proof theory triangle that forms its basis for all of computer science. This is very evident in the CS department at Carnegie Mellon. If you are interested, I'd recommend checking the backlog of lecture videos and presentations from the Oregon Programming Languages Summer School (OPLSS).
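A small, hedged illustration of the "generic code for all container types" point above (my example, in Haskell): write against the categorical interface (Functor, Foldable) once, and the same code works for every container.

    {-# LANGUAGE DeriveFunctor, DeriveFoldable #-}

    -- A user-defined container; deriving gives us the generic structure
    -- (Functor = mappable, Foldable = collapsible) for free.
    data Tree a = Leaf | Node (Tree a) a (Tree a)
      deriving (Functor, Foldable, Show)

    -- Written once against the interface, works for lists, Maybe, Tree, ...
    total :: (Foldable f, Num a) => f a -> a
    total = sum

    main :: IO ()
    main = do
      print (total [1, 2, 3])                         -- 6
      print (total (Node (Node Leaf 1 Leaf) 2 Leaf))  -- 3
      print (fmap (* 10) (Node Leaf 4 Leaf))          -- Node Leaf 40 Leaf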
Turing award winner Dana Scott already mentions categories in his work on models of the untyped lambda-calculus, for example in his 1971 monograph on Continuous Lattices, which set the foundations of domain theory. Moggi's monads came about, in part, as a way of understanding domain-theoretic constructions better.
The Haskell community was, quite clearly, heavily influenced by category theory. Monads (as a programming tool) fell out of that, and I suspect this directly influenced the treatment of async in JavaScript (and elsewhere).
I do think "extremely influential" is overstating it. Outside of that, plus some niches of niches of academic CS, I can't really think of any other places where category theory is particularly important.
I am not sure that it influenced async in JavaScript. But the earliest monad paper I found in ~2 minutes of googling [0] even uses natural transformations, which in my opinion counts as using CT.
Async/await comes originally from F#, the brainchild of the Cambridge MSR lab that also maintain(s|ed) GHC. Async/await is just a limited monad syntax, and its originators were well aware of that fact. Insofar as monads are a categorical vocabulary for effects, then, async/await in JavaScript, and in all the other languages that now implement it, ultimately stems from category theory :)
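A minimal sketch of the "limited monad syntax" claim (in Haskell, with IO standing in for a promise/future type; the function names are mine): async/await and do-notation desugar to the same bind-plus-continuation shape.

    fetchUser :: IO String          -- stand-in for an async operation
    fetchUser = pure "alice"

    fetchScore :: String -> IO Int  -- another "async" step
    fetchScore _ = pure 42

    -- The "async function": do-notation, where each <- plays await.
    report :: IO String
    report = do
      user  <- fetchUser
      score <- fetchScore user
      pure (user ++ ": " ++ show score)

    -- The same program with the sugar removed: explicit >>= chains,
    -- which is the shape await compiles down to (cf. .then in JavaScript).
    report' :: IO String
    report' =
      fetchUser >>= \user ->
      fetchScore user >>= \score ->
      pure (user ++ ": " ++ show score)

    main :: IO ()
    main = report >>= putStrLn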
More vaguely but also more sweepingly: I think the general approach, now standard in language design, of taking a pure language as the base and then adding effects to it was established thanks to Moggi's work on monadic effects, which makes essentially all modern programming languages heavily influenced by CT (at a couple of steps' removal).
You could look at the papers being published in conferences like POPL, LICS, PLDI and ICFP.
The theory of (Moggi) monads and monad transformers has heavily influenced modern programming (and library design): all of Haskell, Scala's ZIO vs Cats, Rust's approach to returning errors. Most modern programming language research engages in some form or other with linear types (and their relatives, like affine types), and these come from Girard's linear logic. Both (Moggi) monads and linear logic are heavily influenced by their inventors' knowledge of category theory. So I'd say: whenever you program in a modern language or use a modern library design, you (indirectly) stand on the shoulders of many giants. Some of those giants were category theorists.
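For instance, the error-handling pattern referred to above, as a hedged Haskell sketch (my example); Rust's Result and the ? operator mirror it almost exactly.

    -- Errors as values, sequenced monadically: each step may fail, and
    -- do-notation (like Rust's `?`) short-circuits on the first Left.
    data ParseError = NotANumber String | DivByZero
      deriving Show

    parseInt :: String -> Either ParseError Int
    parseInt s = case reads s of
      [(n, "")] -> Right n
      _         -> Left (NotANumber s)

    safeDiv :: Int -> Int -> Either ParseError Int
    safeDiv _ 0 = Left DivByZero
    safeDiv x y = Right (x `div` y)

    -- Compare Rust: let x = parse(a)?; let y = parse(b)?; div(x, y)
    ratio :: String -> String -> Either ParseError Int
    ratio a b = do
      x <- parseInt a
      y <- parseInt b
      safeDiv x y

    main :: IO ()
    main = do
      print (ratio "10" "2")   -- Right 5
      print (ratio "10" "0")   -- Left DivByZero
      print (ratio "ten" "2")  -- Left (NotANumber "ten")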
Interestingly, what I'm beginning to detect is an influence of computer science on category theory, if only because we want to verify abstract maths in automated tooling.
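E.g. the starting point of any such formalisation is pinning down the definition of a category itself. A minimal Lean 4 sketch (no Mathlib; the names are mine): the laws are fields, so every claimed category comes with machine-checked proofs.

    universe u v

    structure Category where
      Obj  : Type u
      Hom  : Obj → Obj → Type v
      id   : (a : Obj) → Hom a a
      comp : {a b c : Obj} → Hom a b → Hom b c → Hom a c
      -- The laws are part of the structure: no lawless "categories".
      id_comp : ∀ {a b : Obj} (f : Hom a b), comp (id a) f = f
      comp_id : ∀ {a b : Obj} (f : Hom a b), comp f (id b) = f
      assoc   : ∀ {a b c d : Obj} (f : Hom a b) (g : Hom b c) (h : Hom c d),
                  comp (comp f g) h = comp f (comp g h)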
While technically there's a relationship between category theory and substructural logics (of course there is, category theory is general enough that there's a relationship between it and everything), they don't really strike me as having that much to do with it directly - they're a fairly straightforward extension (restriction) of typical presentations of constructive logic. And I wasn't aware that e.g. Rust picked up its type system via exposure through category theory. Is there a source for that narrative?
Girard mentions the connection with categories all the time.
For example, in "Proofs and Types" he proves various theorems along the lines of: the sub-category of coherence spaces and stable maps (one of the main models that LL was developed for) induced by some LL fragment is cartesian closed (IIRC). I think he developed LL in part by fine-tuning it until all the categories induced as models of fragments of LL have nice categorical properties (to the extent that is possible).
When I was a PhD student, my supervisor suggested that I learn category theory to understand linear logic. (Not sure, in retrospect, that that was the best course of action, but it was my trajectory.)
Rust's types evolved over many years; Rust used to have "typestate", for example. I had discussions with Graydon Hoare around 2011-ish about session types (which are linear), and it struck me that Hoare knew exactly what I meant by the term. More generally, linear typing was just "in the air" in the early 2000s: you could not be taken seriously in programming language design without being aware of linearity, which was all over the research literature. Hoare was clearly very knowledgeable about programming language research.
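For anyone who hasn't met the term: "linear" means a value must be consumed exactly once, which is visibly an ancestor of Rust's ownership discipline. A minimal sketch with GHC's LinearTypes extension (my illustration, not from the thread):

    {-# LANGUAGE LinearTypes #-}

    -- `%1 ->` is a linear arrow: the argument must be used exactly once.
    swap :: (a, b) %1 -> (b, a)
    swap (x, y) = (y, x)      -- ok: x and y are each consumed once

    -- Both of these are rejected by the type checker:
    -- dup :: a %1 -> (a, a)
    -- dup x = (x, x)         -- error: x used twice
    --
    -- discard :: a %1 -> ()
    -- discard x = ()         -- error: x never used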
Like every theorem prover, it has a logic that you use for stating propositions and proving theorems. The logic is a specific one, closely related to HoTT.
Apropos: https://blog.regehr.org/archives/1687