...whose reference #14 lists Sen as an author of another article (can't find a direct link), but here's #14 itself:
Sen, C. K. & Ghatak, S. miRNA control of tissue repair and regeneration. Am. J. Pathol. 185, 2629–2640 (2015).
I had played various Infocom games as a kid, simply amazing. Then recently "discovered" that there is this exciting corner of the Multiverse known as Interactive Fiction, which has been flourishing for a long time. Just finished "Beautiful Dreamer" by Woodson, enjoyed that very much.
> It's offloading the cost of hiring on to prospective candidates.
Why is that a bad thing? Prospective candidates aren't entitled to anything; the burden should be on them to prove they have the capabilities for the job.
> What is the employer putting up? Generally nothing.
Totally incorrect. Have you ever had to hire somebody? The initial phone screens alone, to weed out the 99% of complete incompetents, take many hours or days. Even getting to the list of phone screens involves many hours of sifting through resumes, each one cleverly designed to hide the fact that the candidate is incompetent.
I'm quite impressed with the HoloLens, but keep in mind that it's totally a first-gen developer product, so it was better than my expectations. Yes, the field of view is a tad narrow, and yes, the gestures still don't pick up as much as they should. But the holograms look stellar, and the position tracking is excellent: objects that are placed stay placed very well while you move about. The floating planets that my kids stuck on the living room ceiling stay floating there until somebody later comes by and moves or resizes them. I think this device has a bright future.
I agree about the future, and when I step back and put my engineering hat on, I also realize how impressive what they have pulled off actually is. It's just that after hearing about this amazing HoloLens for a year and then finally getting to try one, my expectations were really high, and I have to admit the reality fell far short of what I was hoping for.
71 device-years on a p2.16xlarge instance. I think the NSA could certainly come up with something way better in a shorter timeframe, assuming they haven't already done so.
> The time needed for a homogeneous cluster to produce the collision would then have been of 114 K20-years, 95 K40-years or 71 K80-years
If I'm reading that correctly, 852 (71 * 12) K80 cards get that down to a month, which sounds well within the reach of the NSA et al.
Even getting it down to a day (71 * 12 * 30 = 25,560 cards) seems feasible. Assuming $10k per card ($5k launch price, doubled to account for supporting hardware), the upfront investment is around $0.25 billion, a figure that sounds plausible given, e.g., that the Utah data centre is budgeted at around $2 billion.
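For anyone who wants to sanity-check those figures, here's the arithmetic as a quick Python sketch (the 12-months-per-year and 30-days-per-month approximations are the ones used above):

```python
# Back-of-envelope check of the card counts and cost quoted above.
k80_years = 71                       # K80 GPU-years reported for the collision
cards_for_one_month = k80_years * 12 # 1 month ~= 1/12 of a year
cards_for_one_day = cards_for_one_month * 30  # 1 day ~= 1/30 of a month

cost_per_card = 10_000               # $5k launch price, doubled for supporting hardware
upfront_cost = cards_for_one_day * cost_per_card

print(cards_for_one_month)  # 852
print(cards_for_one_day)    # 25560
print(upfront_cost)         # 255600000, i.e. roughly $0.25 billion
```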
Edit: formatting fix. Also, this is of course assuming custom hardware designed for a specific hash function isn't employed.
I was interested in whether just using standard hardware was feasible, given the flexibility it provides (the same hardware could, e.g., forge SHA-1 collisions, brute-force other hash functions, or run speech-to-text / train ML models on a pretty much unbounded dataset), though I agree anything that is needed frequently and/or quickly would probably be moved to specialised hardware.
I've been doing concurrent programming for 17 years, and it's been pretty much the same game, at least with Java/C/C#/C++, which is my experience. The abstractions are helpful, but clearly no panacea. The most common problems I see today are with Tasks w/lambdas and Parallel.ForEach, where synchronization is either completely missing, misused, or unnecessary (i.e., a better design would have been to remove all shared state to begin with). The next main problem I run into is that folks tend to sprinkle in concurrent code in an ad-hoc fashion, even when using the understandable abstractions. That works fine, until it doesn't, with the "not working" state being rather difficult to detect.
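To make the "remove all shared state" point concrete, here's a minimal sketch (in Python rather than C#, and the function names are mine, but the same idea applies to Parallel.ForEach): instead of having every task mutate a shared accumulator under a lock, each task works on its own input and returns a result, and the combining happens once, sequentially, so there's nothing left to synchronize.

```python
from concurrent.futures import ThreadPoolExecutor

def work(n):
    # Pure function: takes its own input, touches no shared state,
    # so no locks are needed anywhere in the parallel section.
    return n * n

items = range(1000)

with ThreadPoolExecutor(max_workers=8) as pool:
    # Each task produces an independent result; the pool collects them.
    results = list(pool.map(work, items))

# Combining happens in one place, on one thread, after the parallelism ends.
total = sum(results)
print(total)  # 332833500, same as the sequential sum(n * n for n in range(1000))
```

The buggy pattern this replaces is a bunch of tasks doing `total += work(n)` on a shared variable, which races unless every increment is synchronized.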
- The occasionally hilarious chapter on concurrency (especially the "Concurrentgate" section) from Andrei Alexandrescu's "The D Programming Language", available in its entirety here: http://www.informit.com/articles/article.aspx?p=1609144
I think MS made great progress with the TPL and kickstarted an industry-wide movement with async/await. But certain aspects of C# still drive me nuts and will allow a junior dev to blow a leg off (I'd give my left arm for C++-style const references and/or compiler-enforced immutability). At one point I went so far as to play around with a Roslyn analyzer to tackle the problem (https://github.com/markwaterman/CondensedDotNet/blob/master/...), but gave up after realizing anything more than a token effort would be a huge undertaking.
Other languages are nibbling away at the edges of the concurrency problem with language-level support for CSP (golang), actors, etc., but, outside of the functional world (Erlang), I don't see anyone working to address concurrency from the ground up.