Huh. I've been using a Jabra Evolve 75 (in particular, the Jabra Evolve 7599-838-199) with my 2018 MacBook Pro and it works just fine with the built-in Bluetooth. I can even have it simultaneously connected to my iPad and the laptop and get audio from both. I wonder what explains the variability in quality.
As someone who has done both professionally, I think you're severely underestimating how hard teaching is. I think it's much harder to teach well than to code well.
Don't forget, too, that good teachers are experts in the subjects they teach. Teaching computer science or math well requires the same facility with "abstract symbol manipulation" that working as a software engineer does.
But it isn't really required that teachers teach well. If I were to give an honest assessment of the teachers I've had (specifically primary school, both public and private), I would say maybe 5% were good teachers, 5% were genuinely bad, and the rest just coasted by having students do rote memorization.
I didn't mean to single out teachers specifically (the parent was just using them as an example). My point was that, in general, professions with more human interaction are more desirable to the majority of the human population.
If you have done both professionally, then by definition, you can code. I think your parent poster's point is that too few people can. They are not saying that teaching is easy.
Good question. Disclaimer: I’m in the lab that made Gen & was on the paper, so not impartial :)
Anglican is implemented in Clojure, and can be extended (by writing new Clojure code) to support new general-purpose inference engines. Creating those extensions requires an understanding of both the statistics and the PL concepts used in Anglican’s backend; you are essentially writing a new interpreter for arbitrary Anglican code.
Gen provides high level abstractions for writing custom inference algorithms for _specific models/problems_ (not entire general-purpose inference engines). Those abstractions don’t require reasoning about PL concepts like continuation-passing style, nor do they require the user to do any math by hand. Of course, since Gen is just Julia code, you can still reach in and implement new inference engines (just as in Anglican/Clojure) if you’re an expert. But I wouldn’t expect people who are not probabilistic programming researchers to do this (in either Anglican or Gen).
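For a rough sense of what that looks like in practice, here's a minimal sketch (the toy model and data are made up for illustration; it follows Gen's documented modeling/inference constructs, but check the Gen docs for the authoritative version):

```julia
using Gen

# A small generative model: Bayesian linear regression over the given xs.
@gen function line_model(xs::Vector{Float64})
    slope ~ normal(0, 1)
    intercept ~ normal(0, 2)
    for (i, x) in enumerate(xs)
        {(:y, i)} ~ normal(slope * x + intercept, 0.5)
    end
end

# Custom inference for *this* model: constrain the observed ys, then run
# Metropolis-Hastings moves over just the latent addresses we care about.
function infer_line(xs, ys; iters = 1000)
    observations = choicemap()
    for (i, y) in enumerate(ys)
        observations[(:y, i)] = y
    end
    trace, _ = generate(line_model, (xs,), observations)
    for _ in 1:iters
        trace, _ = mh(trace, select(:slope, :intercept))
    end
    return trace[:slope], trace[:intercept]
end

xs = collect(0.0:9.0)
ys = 2.0 .* xs .+ 1.0 .+ 0.3 .* randn(length(xs))
infer_line(xs, ys)   # roughly (2.0, 1.0)
```

The point is that the user-level pieces (traces, choice maps, selections) are the whole interface here: no hand-derived math and no touching the internals of an inference engine.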
This is really cool! And awesome that (anecdotally) it seems to have been an effective tool for teaching high school students about logic and proof. Excited to dig in and learn more at some point about how this relates to formal proof assistants.
Hmm, I'm not sure I see the difference. Why is it not "symbolic"? The symbols that construct the neural network are what encodes translation invariance -- not some vector of reals.
Symbolic as in "symbolic algebra systems", "symbolic AI", etc [1]. Not as in having some symbols in the code for a NN.
A NN doesn't work with the domain objects directly and abstractly (e.g. considering a face, facial features, smiles, etc. as first-class things and doing some kind of symbolic manipulation at that level).
It crunches numbers that encode patterns capturing those things, but its logic is all about numbers, links between one layer and another, and so on -- it's not a program dealing with high-level abstract entities.
To put it another way, it's the difference between writing, say, a Prolog program to identify some concept and training a NN to do the same.
E.g. from the link "The most successful form of symbolic AI is expert systems, which use a network of production rules. Production rules connect symbols in a relationship similar to an If-Then statement. The expert system processes the rules to make deductions and to determine what additional information it needs, i.e. what questions to ask, using human-readable symbols."
A NN does nothing like that (not in any immediate, first-class way, where the rules are expressed as plain rules given by the programmer, like "foo is X", "bar has the Y property", etc.).
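To make the "production rules" idea concrete, here's a toy, hand-rolled sketch (not a real expert-system shell like the ones the link describes): the rules are explicit, human-readable if-then statements over symbols, and the reasoning is just repeatedly applying them.

```julia
# Toy forward chaining over explicit if-then production rules.
# Facts and rules are human-readable symbols, not learned weights.
function forward_chain(facts::Set{Symbol}, rules)
    changed = true
    while changed
        changed = false
        for rule in rules
            if all(p -> p in facts, rule.premises) && rule.conclusion ∉ facts
                push!(facts, rule.conclusion)   # an inspectable deduction
                changed = true
            end
        end
    end
    return facts
end

rules = [
    (premises = [:has_feathers, :lays_eggs], conclusion = :is_bird),
    (premises = [:is_bird, :can_fly],        conclusion = :can_migrate),
]

forward_chain(Set([:has_feathers, :lays_eggs, :can_fly]), rules)
# => Set([:has_feathers, :lays_eggs, :can_fly, :is_bird, :can_migrate])
```

Every conclusion can be traced back to a rule the programmer wrote; there's nothing analogous to that trace inside a trained NN's weights.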
Here's another way to see it: compare how you'd solve a linear equation with regular algebra (the steps, the transformations, etc.) with how a NN would encode the same task.
A symbolic algebra system will let you express an equation in symbolic form (more or less like a mathematician would write it), and even show you all the intermediate steps you'd take until the solution.
A NN trained to solve the same type of equations doesn't do that (and can't). It just tells you the answer (or an approximation thereof).
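A toy sketch of that contrast (hand-rolled, not a real computer algebra system): the "symbolic" solver manipulates the equation a*x + b = c itself and can show each rewrite step, whereas a trained NN would just be an opaque numeric map from (a, b, c) to an approximate x.

```julia
# Symbolic-style solver for a*x + b = c that shows its intermediate steps
# and returns an exact answer.
struct LinearEq   # represents a*x + b = c
    a::Int
    b::Int
    c::Int
end

function solve_symbolically(eq::LinearEq)
    println("$(eq.a)x + $(eq.b) = $(eq.c)")
    rhs = eq.c - eq.b
    println("$(eq.a)x = $rhs        (subtracted $(eq.b) from both sides)")
    x = rhs // eq.a
    println("x = $x        (divided both sides by $(eq.a))")
    return x
end

solve_symbolically(LinearEq(2, 3, 7))   # prints each rewrite step, returns 2//1

# A trained NN, by contrast, is just a numeric black box, e.g.
# nn(2.0, 3.0, 7.0) ≈ 1.97 -- an approximate answer with no derivation.
```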
We need a digital representation of real numbers. One possible choice is: a real number r is represented as the source code of a function which takes a natural number n to the nth digit of r. This seems no more like “cheating” (and to me, it is less like cheating) than to represent a real number r by a line segment of length r.
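As a concrete instance of that representation (a minimal sketch; nth_digit_of_sqrt2 is just an illustrative name), here is √2 as a program from n to its nth decimal digit, computed exactly with integer arithmetic:

```julia
# √2 represented as a digit function: n = 0 gives the integer part,
# n = 1, 2, ... give successive decimal digits. Uses isqrt on big integers,
# so no floating-point rounding is involved.
nth_digit_of_sqrt2(n::Integer) = isqrt(big(2) * big(10)^(2n)) % 10

[nth_digit_of_sqrt2(n) for n in 0:9]   # [1, 4, 1, 4, 2, 1, 3, 5, 6, 2]
```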
Or maybe more analogous to “differentiable programming” or “logic programming”: a style of programming that supports a broader range of operations than “running.”
Also, it’s not quite that you are using a DSL to encode Bayesian models traditionally represented in other ways; you are using a full-featured programming language to express a set of models larger than the set that is easy to express using existing formalisms. (That said, there are also some models that are easier to express using those other formalisms; you can think of them as being DSLs in a Turing-complete probabilistic programming language, which make it easier to express certain limited classes of programs.)