Legacy V-cycles (needs - spec - code - test - integration - product) meant everything was written down and planned in advance for months or years. So if the customer had made an error, or their needs had changed... you were basically screwed.
Agile advocates short V-cycles with frequent user feedback. But it's still a V-cycle:
- PO speaks to the customer = gather needs
- PO writes tickets, UX designs something = specification
- And then it follows the classical cycle: develop, test, integrate, deliver.
What remains around agile (the ceremonies and so on) feels much more like bullshit to me, and people follow it religiously without understanding the core idea of agile, to the point that they think "V-cycle" is an insult.
The topic is really interesting, and this book had a big impact on my understanding of the world, yet I found it pretty annoying to read. The grudge J. Pearl bears against the statistics community that rejected his ideas is far too present in the book IMHO. He's almost like “I was right all along you fuckers, who's the boss now!” on every single page, and I feel it really does his ideas a disservice.
That was what put me off about that book too. I was really excited to learn about his math, but the continuous hard sell, combined with the attacks on traditional statistical methods (which still have a lot of use), was pretty off-putting. I will probably pick it back up, but it was not a great way to hook readers or bring them around to your way of thinking.
I think "Probabilistic Reasoning in Intelligent Systems" is a better starting point for Pearl. Not much easier but more familiar ground if you're coming from today's ML mindset.
As an aside, once while reading through Wasserman's "All of Statistics", he somehow hypnotized me into seeing the title of Ch.16 as "Casual Inference", so anyone who knows me knows that I can't help making a dad-joke about casual inference when the topic comes up.
That is unlikely to happen. Crystal has no corporate backing, it has no distinguishing features (in comparison to other, more popular languages), and most of all, it has no Rails.
Crystal has Lucky and Amber — both are reasonable for small fun projects. They're still missing what makes Rails Rails, though. Rails is much more than MVC. It's the Ruby ecosystem and the tooling that really make Rails what it is: hyper-productive. You don't spend time reinventing the wheel, you spend time working on domain-specific problems.
Sure, chicken vs egg, but while everyone is hopping from language to language, framework to framework, packaging system to packaging system because they saw it on Reddit or HN, I'll be over here shipping Ruby and contributing to the Crystal ecosystem in my spare time.
Do you mean Rails as in the framework people will choose Crystal for? (It doesn't have one.) Or a Rails-like web framework in general? (It has https://amberframework.org/.)
The release notes of Crystal 0.34 say "Having as much as possible portable code is part of the goal of the std-lib. One of the areas that were in need of polishing was how Errno and WinError were handled. The Errno and WinError exceptions are now gone, and were replaced by a new hierarchy of exceptions."
So I have modified
    rescue ex : Errno
      raise ErrorException.new(ex.message, NONE) if ex.errno == Errno::EPIPE
to
    rescue ex : IO::Error
      raise ErrorException.new(ex.message, NONE) if ex.os_error == Errno::EPIPE
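For context, here's a minimal sketch of what the updated rescue clause can look like inside a complete method on Crystal >= 0.34. ErrorException and NONE come from the snippet above and are assumed to be defined elsewhere in the codebase; the write_all method name is purely illustrative.

    # Illustrative wrapper only; ErrorException and NONE are assumed to exist
    # in the surrounding codebase, as in the snippet above.
    def write_all(io : IO, data : String)
      io.print(data)
      io.flush
    rescue ex : IO::Error
      # Errno exceptions are gone in 0.34; the underlying OS error is
      # available on the exception via #os_error.
      raise ErrorException.new(ex.message, NONE) if ex.os_error == Errno::EPIPE
      raise ex
    end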
I get that these bullet points answer "what" instead of "why", but for those that are more readily discernible, like "In a year-and-a-half, the time required to train a large image classification system on cloud infrastructure has fallen from about three hours in October 2017 to about 88 seconds", what's causing this? Are models getting smaller without a loss in accuracy? Is training distributed over a greater number of cheaper machines? Personally, I'd be more excited about the former than the latter. We can't all afford MegatronLM-type experiments - https://nv-adlr.github.io/MegatronLM.
Both. Companies are certainly building bigger and bigger clusters for training.
At the same time though, consumer GPUs have gotten significantly faster (compare e.g. an Nvidia 2080TI to a 980TI), and learning algorithms keep improving / better learning algorithms become more widely used (e.g. Adam instead of stochastic gradient descent).
Also, architecture search has allowed neural networks to use more efficient building blocks, with far fewer parameters, achieving the same accuracy with smaller models (and lowering training cost).
The improvements in the report are mainly from improvements in cloud infrastructure, but that's not to say there haven't been improvements in developing small, efficient models as well. One notable model that was introduced in 2017 was MobileNet, which aimed to create a model that could function on a mobile device without much loss in accuracy. There have been many more attempts to shrink models for use on devices with limited resources since 2017. These smaller models tend to have lower training times as well.