
tl;dr by Laura Norén (@digitalFlaneuse on Twitter):

Stanford professor Ilya Strebulaev and Will Gornall of the University of British Columbia recalculated the valuation of 100+ companies known as unicorns (startups valued at $1bn+) and showed many aren't worth nearly as much as they claim. Why? Because math. Startups typically issue different classes of stock in each fundraising round, but their valuations are oversimplified by applying the price of the most recent round to all outstanding shares. Every company they looked at was overvalued, 53 lost their $1bn unicorn status, and 13 were overvalued by more than 100 percent. ... "Some unicorns have made such generous promises to their preferred shareholders that their common shares are nearly worthless," the two professors wrote.

In my opinion, this is an example of two things: 1) lots of people cannot apply their math skillz and 2) the ethos of finance contains much magical thinking. The entire industry is obsessed with unicorns. According to Scottish myth, unicorns were ruthlessly hounded by clamoring hordes, simultaneously scapegoated for being the aberrant creatures they are and loved to death (e.g. abused, fatally) for their magical powers. Lesson: it's clear that many in finance are not good at applying their history and culture skillz, either.
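
The arithmetic is easy to sketch in Python (a toy example; all numbers below are invented, not taken from the paper):

    # Headline "post-money" valuation: last round's preferred price x all shares.
    preferred_shares = 20_000_000   # sold in the latest round, with special rights
    common_shares = 80_000_000      # earlier investors, founders, employees
    price_per_preferred = 10.0      # dollars paid in the latest round

    naive_valuation = price_per_preferred * (preferred_shares + common_shares)
    print(f"${naive_valuation:,.0f}")  # $1,000,000,000 -- a "unicorn"

    # But preferred shares carry liquidation preferences, ratchets, etc.,
    # so a common share is worth strictly less than a preferred one.
    common_discount = 0.40  # hypothetical 40% discount on common stock
    fair_valuation = price_per_preferred * (
        preferred_shares + (1 - common_discount) * common_shares)
    print(f"${fair_valuation:,.0f}")   # $680,000,000 -- unicorn status gone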


My only disagreement is with: >Human processes just add additional bias

Bias with respect to what? As you say, there is already bias baked into the data collection and the algorithmic choices.

The bias that human editors introduce is different, but not necessarily larger, however you measure it. There are also myriad human choices behind the design and deployment of any algorithm.

An important plus for human editors is greater interpretability and greater transparency regarding the biases the system ends up showing.


It has been reproduced across many experiments that humans add bias that harms accuracy when making decisions. E.g., if x[6] represents race, humans will systematically misweight x[6].

Machines simply don't do this.
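
A toy sketch of that claim in a simple linear setting (the data and the "human override" below are invented for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 5000, 8
    X = rng.normal(size=(n, d))
    true_w = np.array([1.0, -0.5, 0.3, 0.0, 0.0, 0.0, 0.0, 0.2])  # x[6] is irrelevant
    y = X @ true_w + rng.normal(scale=0.1, size=n)

    # A least-squares fit gives x[6] a near-zero weight: the machine weights
    # the feature only as much as the data supports.
    w_fit = np.linalg.lstsq(X, y, rcond=None)[0]
    print(round(w_fit[6], 3))  # ~0.0

    # A human who "knows" x[6] matters overrides that weight, and accuracy
    # drops on the very same data:
    w_human = w_fit.copy()
    w_human[6] = 0.5  # systematic misweighting
    print(np.mean((X @ w_fit - y) ** 2) < np.mean((X @ w_human - y) ** 2))  # True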

>As you say, there is already bias baked into the data collection and the algorithmic choices.

That's not what I said.

What I said is that you can't have collective equality (e.g. same rate of false positives, lack of disparate impact) and also accuracy (getting the right answer) except in trivial/unrealistic cases.
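
A toy simulation of that tension, with invented risk distributions (in the spirit of the Kleinberg et al. / Chouldechova impossibility results):

    import numpy as np

    rng = np.random.default_rng(0)

    def evaluate(draw_scores, n=200_000):
        s = draw_scores(n)        # calibrated risk score: P(y=1 | s) = s
        y = rng.random(n) < s
        pred = s > 0.5            # the accuracy-maximizing rule for a calibrated score
        return y.mean(), pred[~y].mean()  # base rate, false positive rate

    # Two groups that differ only in their risk distribution (invented):
    for name, draw in [("A", lambda n: rng.random(n)),       # base rate ~0.50
                       ("B", lambda n: rng.beta(1, 3, n))]:  # base rate ~0.25
        base, fpr = evaluate(draw)
        print(name, f"base_rate={base:.2f}", f"FPR={fpr:.2f}")
    # Same rule, right answers in both groups, yet FPR ~0.25 vs ~0.06:
    # equalizing the error rates would mean giving up accuracy somewhere.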

Human editors are fundamentally less interpretable and transparent than machines. You can easily interrogate machines and test for bias; how do you do that to humans?

Or, to take a historical example, why did colleges switch from algorithms to humans when the Supreme Court said that transparent racial bias is forbidden?


Weird that the filter didn't catch it - it seems to be exactly the same URL in both cases.


The URLs were encoded differently.
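
For example (hypothetical URLs), two spellings of the same address compare unequal as raw strings:

    from urllib.parse import unquote

    a = "https://example.com/story?id=42"
    b = "https://example.com/story?%69d=42"  # '%69' is just 'i', percent-encoded

    print(a == b)                    # False: a naive string filter sees two URLs
    print(unquote(a) == unquote(b))  # True once both are decoded

A robust dupe filter would canonicalize first (decode, lowercase the host, strip tracking params, and so on) before comparing.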


I wonder at what point using fewer words becomes detrimental to understanding.


I think it depends entirely on the audience. If a word is appropriate, and your audience is comfortable with it, then not using it is detrimental.

What if only some of your audience knows the word? Then it's a judgement call.


Debt: The First 5,000 Years by David Graeber.

I hesitate to say it changed my life, but it definitely changed my view of many things. I'm more aware of the ubiquity and power of debt, and I can no longer take either for granted.

It's an extremely interesting read with broad intellectual appeal, elucidating the roots of money and morality, and the roles that markets, nations, and friends play in both.


I definitely think of that book as life-changing - it changed how I understand the very concept of money.

What I really liked was that he put the development of money (and markets, morality, nations, etc.) in a rich historical context, so you can see why things happened the way they did.


I second that. A CS PhD without student funding (at least a few tens of thousands of dollars a year) is suspect.


Hmmm... I'll ask the professor!!!


I'm surprised by how little machine learning research they have, relatively speaking. Microsoft, Google, IBM and Yahoo seem much better represented at the core ML conferences like ICML and NIPS.


But why would you equate "lazy" with unethical?

That's already an assumption that could be challenged. We no longer deem someone who's unwilling to work from sunup to sundown in the rice fields lazy or unethical, but 300 years ago we might have. Why? Because now a few people grow our food so efficiently that most of us can afford to be "lazy" and work 40 hours a week in an office job, or at least an air-conditioned one.


But the rest of us pay the efficient farmers for that food. That's the difference.


What is unethical is to work 20 hours a week when you are perfectly capable of 40, collect 40 hours' worth of pay, and let people who work harder take up the slack. That is patently unfair to the ones who work the full week.

Fairness matters.


I'm a PhD student working in machine learning, on its border with mathematical optimization. I also have a research project that uses ML tools to map influence and innovation in the history of contemporary music.

I heard about HN from my brother, who's a psychiatrist. I wonder how he got here though.


http://developer.rovicorp.com/docs does some of this stuff, albeit a bit cumbersomely. I'm still using them since some of the metadata they offer is unique.

(these are the guys behind http://www.allmusic.com)
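
For reference, the calls look roughly like this; the endpoint path, parameter names, and auth details below are assumptions for illustration only -- check their docs:

    import requests

    API_KEY = "your-api-key"  # hypothetical credential

    # Hypothetical album-lookup endpoint, for illustration only:
    resp = requests.get(
        "http://api.rovicorp.com/data/v1.1/album/info",
        params={"apikey": API_KEY, "album": "Kind of Blue", "format": "json"},
    )
    resp.raise_for_status()
    print(resp.json())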

