I love any talk that Bryan gives. The gist here seems to be that, after the Node.js ecosystem favored what he called approachability over his wish to pursue reliability and expressiveness, he had a long think and realized that each is a worthy goal, but for different audiences, and that there are always trade-offs across values.
This perspective, that programming languages come with cultural trade-offs and that you should make sure you're happy with those trade-offs, seems obvious, but it isn't something I've heard anyone state before.
It also makes for an interesting way to examine the space of programming languages in general. It's a richer way to compare than the standard OO vs. FP, systems programming vs. scripting, etc. Rust wants to be approachable, but its top-level value is safety, so it will never be as approachable as JavaScript.
This is one of my all-time favorite talks, for all the reasons you state. I think it's such an interesting framing that I gave a talk about how I see Rust through this lens at QCon London recently.
It seems that this decision was made without thinking about actual development and implementation. To the people who quietly believe this is only an EU matter: a lot of EU copyright rules have later been adopted in the US, so if they find a way to monetise it, or if the EU finds a way to enforce it accurately, you'll see how it becomes a problem not only for the EU but for the rest of the world too. Every customer-oriented business that helps people create something online and operates in the EU market will have to add filters to its software, website, platform, you name it, which will cost money; otherwise, you might lose a lot of traffic. I think this concerns every country in the world that does any business with, or gets any traffic from, EU countries. Maybe they won't push this through in the first place; it would be a very unpopular decision before the election. Also, the initiatives of the brands mentioned in this article are crucial, and I am proud to be a customer; it's an excellent example for everyone.
> Structure is really just a decision-acceleration process
This is exactly why holacracy, teal, and other so-called "flat" leadership models are so inefficient. The power structure is there whether it's visible or not. Ignore it at your own peril.
Both 3.7 and 3.8 bring plenty of performance improvements. But still, what people like most about Python is code readability. Performance isn't as important as it was in the past, because of how hardware performance has scaled.
Very true, but it does let me build things faster than my co-workers who are using C++, and their "well, Python is so slow" argument from ten years ago no longer holds up.
Not CPU-wise. But the number of cores available has increased, and that's where Python has problems because of the GIL. Fortunately, the multiprocessing library seems like a good workaround for the GIL issue.
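To make that concrete, here's a minimal sketch of the workaround (the CPU-bound toy task is my own invention, not anything from the thread): each worker is a separate process with its own interpreter and its own GIL, so the work can actually use multiple cores.

    # Minimal sketch: multiprocessing sidesteps the GIL by running CPU-bound
    # work in separate processes, each with its own interpreter and GIL.
    from multiprocessing import Pool

    def cpu_bound(n):
        # Hypothetical CPU-heavy task: sum of squares up to n.
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        with Pool(processes=4) as pool:
            # Each input runs in its own process, so four cores can be busy at
            # once; threads would serialize on the GIL for pure-Python work.
            results = pool.map(cpu_bound, [10_000_000] * 4)
        print(results)

Threads are still the better fit for I/O-bound work, where the GIL is released while waiting anyway.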
I guess we can argue the semantics of "much", but comparing my current Ryzen system to my Core 2 Duo system from 10 years ago, the performance difference is massive.
I think what you literally need is a network that is "narrow" but also has many layers. This is because what PR offers over simple linear regression is interaction terms, and multiple layers are an alternate way to get those. This also hints at one problem with true PR: in a multivariate context, the number of coefficients you'll need to fit scales very badly with the number of variables and the degree of your polynomial. NN layers have natural dimensionality-reduction properties that make them more flexible.
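To put rough numbers on that scaling, here's a quick back-of-the-envelope sketch (the MLP width and depth are hypothetical choices of mine, just for comparison): a full degree-d polynomial in n variables has C(n+d, d) monomials, while a fixed-width MLP grows only linearly in n.

    # Rough comparison: polynomial-regression coefficient count vs. parameter
    # count of a narrow-but-deep MLP. Width/depth here are arbitrary choices.
    from math import comb

    def poly_terms(n_vars, degree):
        # Number of monomials of total degree <= degree in n_vars variables.
        return comb(n_vars + degree, degree)

    def narrow_mlp_params(n_vars, width=16, depth=4):
        # Input layer, (depth - 1) hidden layers, scalar output; weights + biases.
        params = n_vars * width + width
        params += (depth - 1) * (width * width + width)
        params += width + 1
        return params

    for n in (5, 20, 100):
        print(n, poly_terms(n, 4), narrow_mlp_params(n))

At 100 variables and degree 4 that's roughly 4.6 million polynomial coefficients versus about 2,400 MLP parameters, which is the dimensionality-reduction point in a nutshell.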
Yup, you can define distinctness of constructive real numbers as two algorithms returning rationals that differ by more than the approximation error ε they were given as input, for some arbitrarily small ε. But equality involves proving that two algorithms will never be apart in this way, and this cannot be done in general; as you say, even comparison with zero is problematic. So instead of equivalence relations, you define the apartness relation: the notion that a supposed equivalence between two such numerical algorithms can be constructively refuted. This is nothing new - it's a well-known part of constructive math, and is also the sensible way to formalize numerical analysis.
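For concreteness, here's one standard Bishop-style way to write this down (my notation; the exact constant depends on the approximation contract you pick). Model a constructive real as an algorithm x that, given a rational ε > 0, returns a rational x(ε), with the consistency condition |x(ε) - x(ε')| ≤ ε + ε' for all ε, ε':

    % Apartness is positive, witnessed data; equality is its negation.
    x \mathrel{\#} y \iff \exists\, \varepsilon > 0.\ |x(\varepsilon) - y(\varepsilon)| > 2\varepsilon
    \qquad
    x = y \iff \neg\,(x \mathrel{\#} y)

The existential in the apartness relation is constructive content: a concrete ε at which the two algorithms visibly disagree, which is exactly what a refutation of the supposed equivalence has to produce.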
The whole point of "special education" is that it does things that don't scale to everyone, so it involves guessing who's going to get the most benefit from it, and those are the students you regard as "gifted" (although in practice commitment and "grit" are a lot more important than supposed aptitude). Individualized teaching for each student, with a "mastery learning" model (i.e. no fixed time-based curriculum; you keep working near your current level in any given subfield until you achieve mastery at that level, then move on), is what gives the best results, but it can only be achieved practically via computer assistance.