
Which inductive bias?


Presumably that the output at step (n) is conditioned only on the output of step (n-1).
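In other words, an autoregressive/Markov-style assumption. A toy sketch of that conditioning structure, where the transition function is a made-up stand-in and not anything from the article:

    # Hypothetical illustration: each output depends only on its predecessor.
    def next_output(prev : Float64) : Float64
      0.9 * prev + 0.1 # stand-in transition; a real model's would be learned
    end

    outputs = [1.0]
    5.times { outputs << next_output(outputs.last) }
    p outputs # step n is a function of step n-1 alone, nothing earlier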


Hai davai! (roughly, "Come on, let's go!")


hahahha... thank you for this one


Keep it up


Agile, an empirical survey of whatever the hell it actually is.


Agile is a short-lived V cycle repeated forever.

Legacy V-cycles (needs - spec - code - test - integration - product) were such that everything was written down and planned in advance for months/years. So, if the customer had made an error, or his/her needs had changed... you were basically screwed.

Agile advocates for short V cycles while getting frequent user feedback. But it's still a V cycle.

- PO speaks to the customer = get needs
- PO writes tickets, UX designs something = specification
- Then it follows the classical cycle: develop, test, integrate, deliver.

What remains around agile (ceremonies & co.) feels much more like bullshit to me, and people follow it religiously without understanding the core idea of agile, since they think "V cycle" is an insult.


The Book of Why by Judea Pearl may be a good starting point for anyone interested in this.


The topic is really interesting, and this book had a big impact on my understanding of the world, yet I found it pretty annoying to read. The grudge borne by J. Pearl against the statistics community that rejected his ideas is way too present in the book, IMHO. He's almost like “I was right all along, you fuckers, who's the boss now!” on every single page, and I feel it really does his ideas a disservice.


That was what put me off about that book too. I was really excited to learn about his math, but the continuous hard sell, combined with his attacks on traditional statistical methods (which still have a lot of use), was pretty off-putting. I will probably pick it back up, but it was not a great way to hook readers or bring them around to your way of thinking.


I only made it halfway but my impression was, "maybe your critics are correct because I can't tell if any of this makes sense."


I think "Probabilistic Reasoning in Intelligent Systems" is a better starting point for Pearl. Not much easier but more familiar ground if you're coming from today's ML mindset.

As an aside, once while reading through Wasserman's "All of Statistics", he somehow hypnotized me into seeing the title of Ch.16 as "Casual Inference", so anyone who knows me knows that I can't help making a dad-joke about casual inference when the topic comes up.


Looks like you can request to beta test it here - https://docs.google.com/forms/d/1wOal6PSRxXcMmmzXXHygDC5_rGR...


I hope someday Crystal will supplant Java.


That is unlikely to happen. Crystal has no corporate backing, it has no distinguishing features (in comparison to other, more popular languages), and most of all, it has no Rails.


Crystal has Lucky and Amber — both are reasonable for small fun projects. They're still missing what makes Rails Rails, though. Rails is much more than MVC. It's the Ruby ecosystem and the tooling that really make Rails what it is: hyper-productive. You don't spend time reinventing the wheel; you spend time working on domain-specific problems.

Sure, chicken vs egg, but while everyone is hopping from language to language, framework to framework, packaging system to packaging system because they saw it on Reddit or HN, I'll be over here shipping Ruby and contributing to the Crystal ecosystem in my spare time.


Do you mean Rails as in the framework people will choose Crystal for? (It doesn't have one.) Or a Rails-like web framework in general? (It has https://amberframework.org/ )


It would be nice if we could use the C FFI to allow seamless interaction between MRI Ruby and Crystal.

Allowing us to profile and easily port parts of an existing Rails or Ruby app would be really useful.
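There's nothing official for this as far as I know, but Crystal's top-level `fun` keyword already exports functions with the C calling convention, so you could hand-roll a small bridge today. A rough sketch (the file name and build setup are my own assumptions, exact shared-library flags vary, and anything that allocates would need the Crystal runtime initialized first):

    # ffi_bridge.cr -- hypothetical sketch, not an official workflow.
    # A top-level `fun` is exported with the C ABI, so MRI's ffi gem
    # (or Fiddle) could attach to it once this is built as a shared library.
    fun crystal_fib(n : Int32) : Int64
      return n.to_i64 if n < 2
      a, b = 0_i64, 1_i64
      (n - 1).times { a, b = b, a + b }
      b
    end

On the Ruby side you'd then point the ffi gem's ffi_lib at the resulting library and attach_function :crystal_fib, which is roughly the profile-then-port-the-hot-path workflow you're describing.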


Does Java have a Rails?



Why Play instead of Spring?


Is that closer to a "Rails for Java" than Play is?


Crystal is comparable to Java in performance. I am impressed with it. See https://github.com/nukata/little-scheme#performance and https://github.com/nukata/little-scheme#performance-on-the-t...


I have just updated my Scheme interpreter in Crystal (https://github.com/nukata/little-scheme-in-crystal), which I used in the benchmark above, to keep up with Crystal 0.34.

The release notes of Crystal 0.34 say "Having as much as possible portable code is part of the goal of the std-lib. One of the areas that were in need of polishing was how Errno and WinError were handled. The Errno and WinError exceptions are now gone, and were replaced by a new hierarchy of exceptions." So I have modified

    rescue ex: Errno
      raise ErrorException.new(ex.message, NONE) if ex.errno == Errno::EPIPE
to

    rescue ex: IO::Error
      raise ErrorException.new(ex.message, NONE) if ex.os_error == Errno::EPIPE
though it is still dependent on POSIX. >_<


Where is there use-case overlap between the JVM ecosystem and Crystal?


I suppose the obvious answer is web back-ends, though that is true of practically every language.


I guess AOT-compiled JRuby/TruffleRuby.


First they need to support Windows as well as Java 1.0 did back in 1996, then they have 25 years to catch up.

Plus the JVMs already have JRuby and TruffleRuby, including AOT support.


Thank god I now have an easy way to communicate with all my satellites.


You joke. But maybe you’ve never considered what satellite data applications you could have written because it wasn’t easy to do in the past?


True enough, for me at least. But now that I have thought about it, I still remain of GP's opinion.


I get that these bullet points are answering "what" instead of "why", but for those that are more readily discernible, like "In a year-and-a-half, the time required to train a large image classification system on cloud infrastructure has fallen from about three hours in October 2017 to about 88 seconds", what's causing this? Are models getting smaller without a loss in accuracy? Is training distributed over a greater number of cheaper machines? Personally, I'd be more excited about the former rather than the latter. We can't all afford MegatronLM-type experiments - https://nv-adlr.github.io/MegatronLM.


Both. Companies are certainly building bigger and bigger clusters for training.

At the same time though, consumer GPUs have gotten significantly faster (compare e.g. an Nvidia 2080TI to a 980TI), and learning algorithms keep improving / better learning algorithms become more widely used (e.g. Adam instead of stochastic gradient descent).


Also, architecture search allowed neural networks to use more efficient building blocks with many fewer parameters, achieving the same accuracy with smaller models (and lowering training cost).


The improvements in the report are mainly from improvements in cloud infrastructure, but that's not to say there haven't been improvements in developing small, efficient models as well. One notable model that was introduced in 2017 was MobileNet, which aimed to create a model that could function on a mobile device without much loss in accuracy. There have been many more attempts to shrink models for use on devices with limited resources since 2017. These smaller models tend to have lower training times as well.


Read the actual report instead of just the bullet points. The speed improvement is a function of cost on cloud hardware.


Is it ironic that I'm going to quote this passage in my grad school admissions essay?

