I've been looking for examples of applications built with Scala and Akka that I can learn from. I've built small, toy-ish stuff, but I feel I learn a lot by looking at examples. Anyone have any recommendations?
... not exactly. He also examines the number of defects turned in by pairs of programmers compared with single ones, and looks at some of the financial costs of pair programming. It's a better read than you make it out to be.
I'm only saying this because your entire point is about being pedantic with the language: one of the definitions of steal[0] is "to appropriate (ideas, credit, words, etc.) without right or acknowledgment." In this case, whether it's a copy or not is immaterial. I've seen a bunch of people parroting the line that if you're not taking a tangible thing, you're not stealing. This is plainly false by both connotation and denotation.
It is not the responsibility of the purchaser to know information which is purposefully withheld from them. From the article:
> Scott Sweet's multi-billion dollar hedge fund client flipped the stock at $42. His subsequent short made his firm its "largest profit of the year," Sweet said. There's "no way" a retail investor could have known about the lowered projections, unless he or she "had a friend at a multi-billion dollar institution," he added.
Please explain to me, when information is withheld from the purchasers and only specific clients are notified of material, meaningful changes to the state of the offering, how anyone could ever "know what you're doing"? In fact, Morgan Stanley was actively misleading investors by continuing to adjust the specifications of the offering to make it look better.
Analogy: if an automaker produced a new car which was secretly designed to become worthless (the engine would fuse together) after 3 months, and only told a few rich people not to buy it, would that be fine? What if their comeback was "you could always open the hood and see our computer components which execute after 3 months; it's not our fault you don't know what you're doing"?
My understanding is that the information was not withheld from retail investors; rather, it wasn't actively disseminated to them, nor was its significance impressed upon them.
The reduced revenue estimates were public information; I recall reading about them myself before the IPO took place. If someone had put me in charge of billions of dollars and told me to take a position on the Facebook IPO, I would have shorted the stock, as many others did.
The crux of the issue is that wealthy institutional investors had analysts at their disposal to point out the revised revenue estimates, and retail investors like Swaminathan didn't. Retail investors were ill-prepared for the IPO, and they got burned.
They're ill-prepared to consume the information in the format it's delivered in (if it is delivered at all). Notice that the revised S-1 doesn't state the amount of the adjustment; you had to receive that from Facebook via another channel. So it doesn't really have to do with special analyst job knowledge; it has to do with special treatment.
I mean, "actively disseminated" seems a bit generous. They call 3 institutions to let them know meanwhile individual (notice i'm intentionally not saying "retail" because thats become some sort of in-crowd, brow beating, bullshit term for shaming regular, non-hedge fund investors) are left to read smoke signals. Its not that the institutional investors "had analysts at their disposal" its that the systems is built to make sure institutional investors and hedge funds get information others don't.
You can say what you want, but the quote from the hedge fund manager seems much clearer than your opinion, and he's a domain expert who took part in the situation.
>"There's "no way" a retail investor could have known about the lowered projections, unless he or she "had a friend at a multi-billion dollar institution," he added."
She shouldn't have taken such a large position in a single stock. For a retail investor, that is stupidity.
As a retail investor, it is your job to find someone who knows what they are doing and can either teach you how to invest or manage your investments for you. After that, if you are investing over 10% of your portfolio in one stock, you are asking to go broke.
Your car analogy doesn't work because there is really no way to diversify your car, whereas there are plenty of ways to avoid this kind of bad decision with your investments.
The part you quote was not the only warning sign to stay away from the IPO. Just a quick search brought up a whole page (published before IPO day) of reasons to stay away: http://www.zdnet.com/blog/feeds/facebook-ipo-risk-factors-an.... Regardless, big IPOs like Facebook's are driven by hype, not sound financials. I'd be willing to bet that Facebook could have used 24-point type on their login page stating "we're losing money hand over fist" and there would still be people lining up to pay $42/share when that bell rang.
Were insiders withholding information? I'm not going to argue one way or another. How do you "know what you're doing"? Start by knowing that insiders would stick it to their own grandmothers if they could get ten cents more per share. But when your whole strategy is to hope for a first-day "pop", that's playing a lottery ticket, not investing (exhibit: Zynga). For one, if you buy after the opening bell, you're not going to profit from the pop; you are the pop.
You can. For Java, if you have a source directory, you do New Project -> Java -> Create from existing source code. If there isn't an option for that for PHP, it's because nobody has written it, not because it's not possible.
I have been in discussions about this with one of my friends working in academic materials research. It's amazing the amount of work done today by scientists at universities writing code without very basic software development tools.
I'm talking about opening their code in Notepad, 'versioning' files by sending around zip files with numbers manually added to the end of the file name, etc.
This doesn't even begin to scratch the surface of the 'reproducible results' problem. Oftentimes the software I've seen is 'rough', to be kind. Most times it's not even possible to get the software running (it's missing some really specific library, or depends on changes to a dependency which haven't been distributed), or it's built for a hyper-specific environment and makes huge assumptions about what can be taken for granted on the system. This same software produces results which end up being published in journals.
If any of these places had money to spend, I think there could be a valuable business in teaching science types how to better manage their software. It's really unfortunate that outside of a few core libraries (numpy, etc.) the default method is for each researcher to rebuild the components they need.
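Even something as simple as shipping an environment dump alongside the code would go a long way toward the 'missing library' problem above. A rough sketch of what I mean, in Python (the output file name is just an example):

    import json, platform, subprocess, sys

    def environment_report():
        # Capture enough of the runtime environment that someone else
        # can at least attempt to rebuild it: interpreter version, OS,
        # and the output of `pip freeze`.
        freeze = subprocess.check_output(
            [sys.executable, "-m", "pip", "freeze"], text=True)
        return {
            "python": sys.version,
            "platform": platform.platform(),
            "packages": freeze.splitlines(),
        }

    if __name__ == "__main__":
        # "environment.json" is a hypothetical file name
        with open("environment.json", "w") as f:
            json.dump(environment_report(), f, indent=2)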
I'm surprised about only 11% of results being reproducible. It seems lower than I'd expect. I agree we don't want to optimize for reproducibility, but obviously there is some problem here that needs to be addressed.
> It's amazing the amount of work done today by scientists at universities writing code without very basic software development tools.
I agree 100%. I recently quit my PhD, so I still know a lot of people on the front lines of science. One of these friends recently asked me to help them with a coding issue, so they gave me an SSH login to the group's server. I logged in and started reading the source.
It was all Fortran, with comments throughout like "C A major bug was present in all versions of this program dated prior to 1993." What bug, and of what significance for past results? Unknowable. As far as I can tell from the comments, the software has been hacked on intermittently by various people of various skill since at least 1985 without ever using source control or even starting a basic CHANGELOG describing the program's evolution. The README is a copy/paste of some old emails about the project. There are no tests.
So even though computer modeling projects should, in theory, be highly reproducible... it often seems like researchers are not taking the necessary steps to know what state their codebase was in at the time certain results were obtained.
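The fix doesn't have to be heavyweight, either. If the code lives in a git repo, stamping every output with the commit it came from is only a few lines. A minimal sketch, assuming a Python analysis script run from inside the repo (the result values and file names here are made up):

    import datetime, json, subprocess

    def current_commit():
        # The exact commit the results were produced with; assumes
        # the script is run from inside a git working tree.
        return subprocess.check_output(
            ["git", "rev-parse", "HEAD"], text=True).strip()

    results = {"binding_energy": -42.0}  # hypothetical output
    meta = {
        "commit": current_commit(),
        "timestamp": datetime.datetime.utcnow().isoformat(),
    }
    with open("results.json", "w") as f:
        json.dump({"results": results, "meta": meta}, f, indent=2)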
This is an entirely different issue than code; code mostly does the same thing when you run it twice. There's no such guarantee in biology. A cancer cell line growing in one lab may behave differently than descendants of those cells in a different lab. This may be due to slight differences in the timings between feeding the cells and the experiments, stochastic responses built into the biology, slight variations between batches of input materials for the cells, mutations in the genomes as the cell line grows, or even mistaking one cell line for another.
Reproducibility of software is a truly trivial problem in comparison.
Also, sometimes, doing the experiment is extremely hard. I know a guy who only slightly jokingly claims he got his Ph.D. on one brain cell. He spent a couple of years building a setup to measure the electrical activity of neurons, and 'had' one cell for half an hour or so (you stick an electrode in a cell, hope it doesn't die in the process, and then hope your subject animal remains perfectly subdued and that there are no external vibrations that make your electrode move, thus losing contact with the cell or killing it).
Reproducible? Many people could do it, if they made the effort, but how long it would take is anybody's guess.
Experiments like that require a lot of Fingerspitzengefühl from those performing them. Worse, that doesn't readily translate between labs. For example, an experimental setup in a small lab might force an experimenter into a body posture that makes his hand vibrate less when doing the experiment. If he isn't aware of that advantage, he will not be able to repeat his experiment in a better lab. (I also know guys who jokingly stated they got their best results with a slight hangover; there might have been some truth to that.)
Oh, I agree. Biological experiment reproducibility is an incredibly hard problem. You are probably right that software reproducibility is 'trivial' by comparison, in the same way that landing on Mars is trivial compared to landing on Alpha Centauri.
"Generally, academic software is stapled together on a tight deadline; an expert user has to coerce it into running; and it's not pretty code. Academic code is about "proof of concept." These rough edges make academics reluctant to release their software. But, that doesn't mean they shouldn't.
Most open source licenses (1) require source and modifications to be shared with binaries, and (2) absolve authors of legal liability.
An open source license for academics has additional needs: (1) it should require that source and modifications used to validate scientific claims be released with those claims; and (2) more importantly, it should absolve authors of shame, embarrassment and ridicule for ugly code."
I think that's what the folks at Software Carpentry [0] are trying to do. I went on one of their courses, where you're taught the basics of writing good software, version control, and databases (SQLite). I've frequently recommended it to fellow scientists.
That article says "Data are ideal for managing with Git."
I once tried using git to manage my data. The problem is, I frequently have thousands of files and gigabytes of data, and git just does not handle that well.[1]
One time, I even tried building a git repo that just held the history of PDB snapshots. The PDB frequently has updates, and I have run into many cases where an analysis of a structure was done in a paper 3 years ago, but the structure has been updated and changed since then, making the paper make no sense until I thought to look at the history of changes to the structure. Unfortunately, git could not handle this at all when I tried it: it took days to construct the repo, and then the repo was unbearably slow when I tried to use it.
Git would probably work well for storing the data used by most bench scientists, but for a computational chemist puking up gigabytes of data weekly on a single project, it is sadly horrible for handling the history of your data.
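One workaround (and roughly the idea behind tools like git-annex) is to keep the big blobs out of git entirely and version a small manifest of content hashes instead. A bare-bones sketch:

    import hashlib, json, os, sys

    def manifest(data_dir):
        # Hash every file so the (tiny) manifest can live in git
        # while the multi-gigabyte data files stay outside the repo.
        entries = {}
        for root, _, files in os.walk(data_dir):
            for name in files:
                path = os.path.join(root, name)
                h = hashlib.sha256()
                with open(path, "rb") as f:
                    for chunk in iter(lambda: f.read(1 << 20), b""):
                        h.update(chunk)
                entries[os.path.relpath(path, data_dir)] = h.hexdigest()
        return entries

    if __name__ == "__main__":
        # Usage: python manifest.py /path/to/data > manifest.json
        print(json.dumps(manifest(sys.argv[1]), indent=2, sort_keys=True))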
As someone who, fresh out of high school, coded for a quite published astrophysicist at a major government research institution, I can confirm that I had no idea what I was doing.
I realize that by replying I'm probably encouraging you to continue posting this nonsense. I agree with Joachim below; this is barely related to the article.
I think the overlooked part of this, once we step back from the natural desire to pick 'the best', is that people who care about these platforms are providing a vast set of starting examples for people looking to get started on each one. It's easy to do a side-by-side comparison of similar tasks across languages, which is something very valuable and, in my experience, relatively novel. Thanks for all your amazing work!
It's the stages of grief. Something you've invested a considerable amount of your life in was revealed, in the last 24 hours, to be not what you believed it to be. That causes grief, and grief causes denial.