Hacker News | programnature's comments

The faster people realize this the better.

What does the technology look like for achieving these kinds of goals?


I would say it's actually fairly simple from a technology perspective: good, well-documented APIs, as many SaaS apps as reasonable, and CTOs in government who get it.


I think the first two are, indeed, fairly simple from a technology perspective. CTOs in government (or whatever other role is responsible for making the data in question available) isn't simple or technology-related.

From my perspective -- both working with groups mandated to make data available and researchers consuming public datasets -- those responsible for making the data available DON'T get it in the vast majority of instances. It's tough to tell if it's obtuseness, incompetence, or... call it what you will, but if you are mandated to make information available that can directly assess your performance or the performance of the organization you lead, you might not have the right incentives.

A recent experience: the federal (US) government releases data regarding clinical trials conducted by drug companies and universities for download in a format that they basically made up. OK, no problem, I've written lots of parsers. Ingest the data from the source files, but wait! There's no data dictionary or even a vague description of the relationships between the contents of the (many) files they publish. You can make pretty good guesses, but it definitely doesn't follow a well-documented API (or schema, whatever). Just a recent gripe that's stuck in my craw, but it's not an isolated case in my experience. I have come across a few that are very good and follow the best practices you note, but most I've worked with do not. I would guess that the former have your third characteristic; the latter likely do not.


If you have a bunch of tiny apps, how do their data models relate? Who runs them, and who owns the data? There's a lot of assumptions baked into the status quo described in the article that technology has been developed around. The sweet spot for building this stuff might be a little different.


You are the target audience of this book. The Wolfram Language has gotten huge, making it harder for beginners to find their way around the basics via reference docs.

That said, WL is in fact based on pretty simple primitives. If you are familiar with functional programming, things like NestList are familiar and you'll start learning by finding the analogues of your favorite FP constructs.
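For readers coming from other FP languages, here's a rough Python sketch of what NestList does (the WL original is NestList[f, x, n]; the Python function name here is my own, purely illustrative):

```python
def nest_list(f, x, n):
    """Rough Python analogue of Wolfram Language's NestList:
    returns [x, f(x), f(f(x)), ..., f^n(x)] -- n+1 elements."""
    results = [x]
    for _ in range(n):
        x = f(x)          # apply f repeatedly, keeping every intermediate
        results.append(x)
    return results

print(nest_list(lambda k: 2 * k, 1, 5))  # [1, 2, 4, 8, 16, 32]
```

It's essentially an iterated-application version of the scan/accumulate idiom found in most FP toolkits.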

WL also has a lot of functions that you might call "optional": those that could be restated as simple combinations of other functions. The rationale there is pretty simple: if there is a well-defined, commonly used chunk of computation, it should be given a name (and design). The fact that a fair amount of work is put into both the name and design of these things gives a higher sense of coherence (and predictability) than you might expect from their number.
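To make the "optional function" idea concrete, here's a small Python sketch (function names are my own, for illustration): a mean could always be restated as a total divided by a length, but giving the common combination its own well-chosen name buys readability and predictability.

```python
def total(xs):
    """Named stand-in for summing a list."""
    return sum(xs)

def mean(xs):
    """'Optional' in the sense above: just a named, well-designed
    combination of simpler pieces (total / length)."""
    return total(xs) / len(xs)

print(mean([1, 2, 3, 4]))  # 2.5
```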


In the announcement they called this the "reference implementation".


tl;dr: lol.

The author is claiming Relay is a scheme to kill the open web. A simpler theory is just that Relay solves a problem Facebook (and many other people it seems) are having. The idea that REST has shortcomings for Facebook's use case is an unthinkable thought for the author.

No argument is given for why we should consider REST to be a foundation of the open web. REST != HTTP.

The post is full of logic-defying assertions like "Facebook wants to replace REST with Relay for the same reason they want to replace DOM engines with React Native."


Topological data analysis is amazing, too bad all the hype around DL is leaving it in relative obscurity.


Clojure/ClojureScript already has this. It's called transients.


Fukuyama's recent book Political Order and Political Decay goes into some detail on the forest service, from its origins as a model of a modern effective government agency in the Progressive era to one captured by special interests and turned ineffective by conflicting mandates from congress.


What's the rationale? Who are the target users? I go to the site and I have no idea what exactly this does that's better than the alternatives. It sounds like it's trying to do everything.


The target users are academics who collaboratively use mathematical software like SageMath (http://sagemath.org), Octave, Cython, R, IPython, etc., in their teaching and research, but don't want to have to wrestle with installation problems and coordination with collaborators (say via Git). Numerically, most users are students taking courses from such academics. I started the project because I was teaching courses on SageMath, Cython, and LaTeX to students, and the installation burden for the students was a major problem. Also, I was frustrated by how difficult SageMath is for people to install on their own computers... even after 8 years of development (it only seems to get harder over the years, not easier!).


This may be less interesting to you, but SMC seems like an ideal platform for data-science and software related job interviews. A few years ago I interviewed with Enthought. We used Google Docs for the real time coding!


Thanks for suggesting that. I think SMC could in fact work well for those applications, though I don't have any real insight into them or know how to get into those markets myself.


At least two forces come to mind: 1. Humans want to believe they are special in some way. 2. The theory of evolution is under "selective pressure" to explain complexity, ever since the watchmaker analogy.


Interesting that Wolfram predicted this in 2002 and was mocked by Cosma Shalizi as having "absolutely no understanding of evolution".

