Hacker News | fuscy's comments

I have no idea what an i-shirt is, but it's likely that if a user puts copyrighted material on his i-shirt, you'll have to pay because it's your platform. Bonus points if you don't use a censorship filter to try to prevent this.


No UGC will be allowed and the text you add will not be displayed anywhere :)


Ask him again about censorship in the EU when his most popular shirt is "Hitler did nothing wrong" or some other such nonsense.


This isn't related to business, but there's a horror story from Romania (an EU member) where authorities asked a news organization to hand over information about its informants related to some corruption leaks.

The information was requested by the national GDPR enforcement authority, so the request bypasses the protections for journalism written into the GDPR.

There's now a court case over this, which has blocked any further publication of that information until it's resolved. It's easy to see how the GDPR can be weaponized.


Isn't that just straight abuse of the law? AFAIK the GDPR only protects your own personal information; it can't be used to request someone else's personal information (if anything, you could argue that the GDPR prevents you from giving out another person's info).


This isn't the police or the parliament asking for the information. It's the regulatory body that inspects companies to see whether they comply with the GDPR.

So the pretext they're using is that they want to see the information to make sure the news organisation is not selling it or otherwise mishandling it with third parties. In the process, they'll be able to get the information, and maybe it will end up with the people involved in the corruption charges (one of whom is the head of one chamber of Parliament).


Wouldn't any other regulatory body be able to ask for that data to check, for example, whether they are doing _anything illegal_ with it?

For example, can't you request all the data to verify that the business isn't doing anything with sanctioned individuals or countries? (think OFAC)

I don't think GDPR allows anything more than any other law.


Potential for abuse is one of the concerns people have about laws.


They aren't actually following the letter of the law, so to me it's unclear how much they actually abuse the law rather than simply pasting the GDPR logo in one corner in a sort of legal phishing attempt.


It's a concern people have about governments.


The subjects of the injunction can likely refuse and appeal to the European Court of Justice, which exists precisely to sort out these situations.


The idea is not to hold people to their actions for all time. Even people who have been in jail are considered reformed. It's like someone holding you to what you said when you were 4 about wanting to be an astronaut (maybe).


Well, I'd be worried if there were a power with no oversight that could request anything from personal nudes to the Coca-Cola recipe and Area 51 secrets, and face no repercussions if it were made public.


TL;DR required. I didn't quite get it from the article. Is Facebook accused of doing something illegal under the laws at the time of the action?

If I were to sue them for anything, do I have any grounds?


China doesn’t have a good track record with sweeping policies: the pest-control campaigns, the one-child policy. One could also argue that their economy is heading toward a crisis.

If this system is unstable and has side effects, it will probably screw up at least two generations.

The main issue I see is that people with low scores can “infect” other people’s scores. Treating this as a viral phenomenon, the impact will spread starting with family, then friends, colleagues, strangers. I can’t see a solution except going the old route of “killing the nine family relations”.

I can see some kind of ghetto of low social score people doing barter and what not.

There are some contradictions, like donating blood giving a good score: but what if it's done for someone with a low score? There goes empathy, if it ends up punished.

Or exploits, like colluding to create cartels that inflate social scores artificially.


The message being sent seems to be that shining a light on government doings does not a hero make.

I'm not going to compare Assange with Superman, but remember that Superman was Clark Kent, a reporter, and he too would have shown the world if someone did nasty things while publicly wearing a halo of virtue (the government).


Besides chat, WeChat supports a lot of things, from hailing cabs to ordering food to making payments; the sky is the limit, because businesses can integrate with it.

On the other hand, I look at all the Western social apps and cringe: Facebook made a big deal of launching Instant Games for Messenger, Twitch of allowing a few extra monetization options, and YouTube is actually reducing opportunities for people to make money.

Look at China with their WeChat, Weibo, bilibili, QQ and many more.

I wish I could open Facebook, start a live stream, and have people send me money (red packets, rockets, cute cats, auspicious objects, etc.) for doing it, right there.

Meanwhile, there's the EU, where if you receive €5 you have the tax authorities of 28 countries breathing down your neck, asking for their cut, fining you, smacking you with a GDPR notice because why not, and passing legislation to "enable" competitive startups similar to the US (/semi-sarcastic).


I never understood why colonizing Mars is needed.

Even if an asteroid hits Earth or Yellowstone erupts, Earth would be more "hospitable" to life than any other planet in our solar system because Earth has a lot of resources that are right under our noses: a breathable atmosphere, radiation shielding, easily mined metals, and organic matter.

It's much easier to build and maintain a bio-dome in the Sahara desert, Arctic region, Cheyenne mountains or underwater than on Mars.

Earth at its worst is much better than anywhere else in our solar system.


>I never understood why colonizing Mars is needed.

Personally I see quite a few positive things behind colonising Mars.

I think it's not about having a "backup", and it's not about resources (or not for a really long time); it's about the challenge.

If we put our best engineers on solving how to reuse and recycle water, how to grow crops in extreme conditions, how to control the O2/CO2 cycle (at a larger scale than on the space station), and how to engineer a spacecraft that survives such harsh conditions, we will end up with:

- technology to help our crops on earth

- technology to help with our water crisis

- technology to build sturdier structures

- international collaboration, which usually keeps engineers from working on mass weaponry (the US/Russian space programs were actually motivated by exactly that: keeping rocket scientists busy instead of having them work on ICBMs)

Mars just happens to be a goal silly enough that we'll get interesting discoveries and advances that I can't even foresee.

Another way to keep engineers busy and force international collaboration is to have a common threat. For that, I believe the asteroid threat is both real enough and a good subject to work on collaboratively.


It's kind of like learning how to program when you don't have any task to complete. Especially for a data store of some kind. If you've ever tried to learn SQL, even if you've got an example database, you'll find it's incredibly difficult not because the syntax is all that difficult or the logic is particularly daunting. No, you'll find it's difficult because you don't have any questions to answer. You'll feel, "Okay, now what?" When you try to make up a question to answer, it's difficult to tell if you're answering the question correctly. You need the focus that a real problem gives you. You need the guidance that understanding what the data means gives you (or someone else) to know if your answer is right or wrong.

It's easy to see what a tool is designed to do. It's very difficult to see what a tool can be used for and why you might want to use it that way, let alone when you might want to deviate from that or find alternatives due to limitations or new requirements. Or when you might need entirely different tools.

We learn best when we're working on a problem. Just like going to the moon required solving a lot of problems, which led to major advancements in the 20th century, going to Mars, colonizing Mars, and colonizing the moon pose even more challenges.

Goals give research and development a clear purpose beyond, "I dunno, make something people want that we can sell."


I'm saving this answer. Love it.


Not to mention that Mars gives humanity an opportunity to improve skills essential to expanding to other solar systems.

Besides the obvious - large-scale terraforming - a "practice" settlement of Mars gives us an opportunity to innovate and refine practically every engineering skill, form of social organization, or general skillful endeavor in humanity's repertoire.

Metallurgy, genetics, geology, farming, medicine, psychology - all of these fields are bound to discover new phenomena and methodologies under the constraints and conditions of an alien planet.

Not to be glib, but necessity is the mother of invention. And there is no necessity as powerful as the drive for survival.


There's an old sci-fi idea, which if I recall correctly Carl Sagan agreed with, that we should be working to colonize Venus now (where "now" meant the 1970s at the time), because in the worst case, if the greenhouse effect spirals out of control unstoppably (as climate change predictions have feared since the 1970s), surviving on Earth is going to be the same problem as colonizing Venus.


Others already mentioned benefits for us currently living humans.

Long term it is also interesting from an evolutionary perspective. Human communities on Earth are less and less isolated from each other, making it nearly impossible to evolve in different directions. That increases the risk of getting stuck in a bad local optimum.

Gravity wells are an isolating factor. Living on Mars will require a highly isolated economy producing essential goods. Once that proof of concept exists, it can be reproduced everywhere else in the solar system (or beyond).

Those communities would not be subject to many of the tragedy-of-the-commons situations we have here on Earth with our shared ecosystem. No climate change. No plastic in the water. No gene manipulation. Hell, you can even build a space station for only white people, if you are into that. Down the line, we'll see what works best.

Earth being so highly interconnected and interdependent makes it more peaceful than ever; isolated colonies would lose some of that, so war might become a problem again. Communication would still be somewhat easy, though, so working together would likely still be beneficial.


> Human communities on Earth are less and less isolated from each other, making it nearly impossible to evolve in different directions. That increases the risk of getting stuck in a bad local optimum.

That is not how evolution works. You want more mixing to increase fitness, not less. Small, isolated populations are notorious for harboring deleterious genetic variants that decrease overall fitness.


> That increases the risk of getting stuck in a bad local optimum.

This is an important concern, but I don't think a space colony really improves on it. To make the colony habitable, the gene pool would be limited by its size and, worse, by a heavily controlled environment.


I don't think you can really "justify" extraterrestrial colonization on critical, legible, practical grounds. There is no ROI.

The real, immediate reasons are inspiration related. Some of that gets very practical, though it's always at least speculative. Technological spillover, for example, can be justification in itself. Earthen solidarity is maybe another, and I do actually think that the existence of a few of us elsewhere helps form the concept of "us." Both of these are "on mission" in the sense that they might be key to human survival.

Anyway... Mars colonization is a horizon goal. It's something to focus our minds. Practical activities are dictated by minor goals (e.g. visit Mars, generate energy locally, etc.). A self-sustainable Mars colony that could survive Earth's Vogon destruction is so far off that it's more of a symbol than anything.

We didn't have truly practical reasons for going to the moon, or for the ISS either. The main reason is (imo) that spacefaring is a human mission, for its own sake.

No matter what, though, I don't think spacefaring is something you could sell to your conservative money manager. It's imagination-dependent (and inspiring)... a job for da Vinci, not the Medici.


I can't see any path to colonising the galaxy that doesn't start with colonising the solar system. It's always going to be a long term project, there are always going to be things on earth that look like more immediate priorities, but I want humanity to spread among the stars, and if we don't start now then when?


Personally I think colonizing planets is a waste of time. There's plenty of minerals and water in asteroids and other space junk, and they don't have large gravity wells to deal with. When it comes time to think about colonizing other star systems, it's going to take a really, really long time to get to them and we'll have little idea what their worlds are like. Seems to me it'd be better to work on colonizing space itself, because you'll pretty much have to anyways to make interstellar travel viable, and once you've done it there really isn't a good reason to live on a planet instead unless it happens to be a lot like earth.


Isn't a planet just a big spaceship with things built in? You can dig inside it for resources, you don't need to capture asteroids, planets can travel through space and have a protective shield - the atmosphere, and Earth is already traveling at huge speeds, at the very least on the level of the fastest spaceships we can currently make. Sure, steering a planet may be harder than steering a spaceship made specifically for that purpose, but you get all the benefits of having resources right underneath, and planets are proven spaceships, with billions of years of testing; no man-made spaceship can match that for billions of years.


> planets can travel through space and have a protective shield - the atmosphere

Most worlds in our system don't have a significant atmosphere, most of those that do have too much of it. Like gravity, atmosphere also poses a problem for getting back to space.

> no man-made spaceship can match that for billions of years.

Not unmaintained, no, but if it was unmaintained that probably means everyone who lives there is dead anyway.


> Personally I think colonizing planets is a waste of time. There's plenty of minerals and water in asteroids and other space junk, and they don't have large gravity wells to deal with.

I more-or-less agree with this much, but the most suitable place to start learning to colonize space rocks is probably Phobos.


Humans are terribly designed for space exploration: we die quickly, require sustenance, can't handle high G-forces, and can't handle radiation.

I've always believed that if humanity is to colonize anything, it will be through robots that do all the exploring and mining for us.


Humans were terribly designed to cross the Atlantic by swimming as well, but we designed boats so eventually we did.

Yes, robots are a cheaper and more reliable way to explore and extract resources, but I doubt mankind will be happy with just that. We like to explore, and expand, and face challenges, so I'd bet that no matter the setbacks or price tag, we will eventually prefer to do these things in person.

As an alternative: we could even redesign ourselves for space exploration if need be.


Humans can physically swim from Eurasia to the Americas. https://en.wikipedia.org/wiki/Lynne_Cox

As to your wider point, the ability to breathe oxygen and carry long-term fat stores makes an Atlantic crossing very easy. We moved from one place with humans to another place with humans, hardly a massive feat of engineering. Meanwhile, people in far more primitive craft ended up living in Hawaii (2,200+ miles from the nearest land mass) of all places.


Astronauts have traveled further on the moon on the three missions that included the Lunar Roving Vehicle than all our mars rovers managed combined.

If you give a Mars rover a command, it takes an average of 12 minutes for that command to reach Mars, and another 12 minutes for the confirmation to come back. That makes remote control very hard, and we aren't all that good with autonomous robots.

So until we figure that whole artificial intelligence thing out, humans are a much better bet for getting significant amounts of science and mining done than robots.


The rovers we've sent to Mars weren't designed to travel great distances. Autonomous rovers built for maximum travel distance would be pretty easy except for supplying them with enough energy and protecting their delicate parts from the elements. Both problems are harder for humans, even on a one-way mission.


What's wrong with low G tolerance for interstellar travel? If the speed of light is ~3×10^8 m/s and 1G ≈ 10 m/s^2, you would need (ignoring relativistic effects) 3×10^7 seconds to reach the speed of light. There are 86,400 seconds in a day, so you would need ~347 days to reach the speed of light if you were accelerating at 1G, were it not for relativistic effects.
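
As a sanity check on that arithmetic (my own restatement, same assumptions: constant 1G, no relativity):

    t = \frac{c}{a} \approx \frac{3\times10^{8}\,\mathrm{m/s}}{10\,\mathrm{m/s^2}} = 3\times10^{7}\,\mathrm{s} \approx \frac{3\times10^{7}}{86\,400}\,\mathrm{days} \approx 347\,\mathrm{days}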


Except relativistic effects are at play and iirc we don’t have “permanent” acceleration tech.


> iirc we don’t have “permanent” acceleration tech

Which is exactly why low G-tolerance doesn't matter. You have a limited amount of delta-V, whether you apply it all in one go or spread out over a year makes little difference given the timescales that are already involved in interstellar travel.


Humans may also be the best at handling everything you listed. You're comparing us to robots, but if you say humans were "designed", then as far as our collective knowledge goes there is only one such design at present. So we're both the best and the worst.


Humans build tools though.


More and more, it sounds like the raw materials we need are available in the places we'd like to colonize, except energy. I'm starting to wonder whether, once we have moved past fossil fuels, colonization beyond Earth becomes dramatically easier.


You're assuming colonization is needed, when in reality we are just another species and are nothing to the universe on the cosmic scale. And yet, relative to the pointless wars and countless trillions spent primate-posturing, putting humans on a second floating rock is just as, if not more, sensible.

For me, it's something inspiring, a grand adventure, and it may allow us to confirm the presence of life outside our own globe. That alone would be worth the price of the tickets: shattering the illusion that our globe is the only one endowed with life by a creator is an outcome devoutly to be wished by rational beings.


What about an impact like the one that formed the moon? If the entire crust liquefies, I have a hard time believing it would be easier to survive on lava-Earth than on Mars.


You might enjoy this 4 minute video of Dr. Robert Zubrin giving three reasons why we should go to Mars.

I find it very inspirational/motivational to re-energize my focus on my own work, even though I'm not working on anything Mars related:

https://www.youtube.com/watch?v=plTRdGF-ycs


I concur. It's an interesting endeavour, though, one that meshes well with humanity's impulse to progress and push the boundaries. Not everybody cares about that, but there are enough dreamers who marvel at the idea of humanity becoming a spacefaring civilization. The demand for science fiction is proof of that.

However, I personally don't understand why everyone wants to land on a dirtball that has a fraction of Earth's gravity. It will be crippling for human physiology and pretty much precludes travel back to Earth after a few generations of adaptation.

I'd much prefer to build rotating orbital space stations that provide 1G centrifugal force. Easier on the body, better view out the window and no need to enter/escape gravity wells all the time. The dirtballs can be colonized by robots that don't care about gravity that much and can harvest resources for the stations.

Not that I have a say about this at all.


It is a very good point, but consider Mars as a stepping stone with very different conditions from the ones you can find in the desert: gravity, for example, plus the isolation, communication difficulties, and technological difficulties that we need to overcome if we are to colonize other parts of the solar system.


I am surprised that as travel to Mars is becoming more of a reality, there are not a lot more folks trying to reach Mars for commercial gain.

I am comparing this to when the Europeans were going around the globe colonising every land mass they could land on. Why should Mars be any different?


As Elon Musk said, you could have crates of cocaine on the surface of Mars ready to be transported back to Earth and it would still make more sense to buy it locally. Transport is just too expensive right now. Future spacecraft might change that, but as with everything else this will come with scale, and right now there aren't enough customers to justify that scale.


Bringing a ton of gold to Europe from South America is waaaaaay cheaper than bringing a ton of anything to Europe from Mars.


>Earth at its worst is much better than anywhere else in our solar system.

Today, yes. But the first step to terraforming a dead planet is to colonize a dead planet. If we take the first steps and commit to our colony there, maybe someday we can have a second Earth.


Indeed, but you are forgetting that Earth also has the biggest threat of them all... Humans


> but you are forgetting that Earth also has the biggest threat of them all... Humans

So will any place we colonize.


Well in the absence of government regulations you could populate the new place with clones of yourself and institute a personal theocracy.


A Little Prince fantasy?


I don't think anyone is contending that Mars would be a better place to live than Earth under any circumstance.

Rather, I think it is the power of the process of a global collaboration in what would be the most exciting adventure for humanity in a generation. Us embarking on such an adventure would lead (I think) the majority of people to consider the importance of a "global" perspective, would probably encourage people to treat others better, and would make countries less likely to kill each other over seemingly "trivial" matters.


If we colonize Mars and reach the point where we have a full-fledged society there, it can act as a means of quickly recovering from a catastrophe on Earth. A fully functioning city/nation sending resources to assist a ravaged Earth would shorten Earth's recovery time if the worst were to happen.

Secondly, it acts as a separate bed of innovation. On Earth we solve for Earth problems; on Mars they would solve for different problems, potentially leading to scientific and tech breakthroughs we would otherwise overlook.


On the time frames we ought to be planning for, I see Mars as a scientific research base and a waystation on journeys further out. In the long run, population growth is going to start back up again as the genes of people who desire kids in the modern world prosper, so it'd be nice as a source of living space, though O'Neill cylinders are the real solution there.


Much like the space program [1], colonizing Mars will be a focal point for developing technologies for improving life on Earth.

[1] https://en.wikipedia.org/wiki/NASA_spinoff_technologies


http://www.basicknowledge101.com/subjects/space.html has interesting details about Earth, e.g. an Earth day was only 6 hours long 4.5 billion years ago.


The same reason you do hello world before proceeding to more complex software engineering.


Sort of agree - BUT I would think that colonizing the moon would be a better bet than Mars (for now). Less travel, easier to resupply, and we can still prove out stations, growing food, etc.


> Earth would be more "hospitable" to life than any other planet in our solar system because

That's a false statement. A large enough meteor could cause a sterilization event.


> I never understood why colonizing Mars is needed.

For the same reason in tech we have backup servers, HA clustering, disaster recovery sites, etc. A bit of redundancy helps in case something goes wrong.

> Earth would be more "hospitable" to life than any other planet in our solar system

Currently, sure. But other planets, especially Mars, could be terraformed and made habitable one day (it would take hundreds or thousands of years).

> It's much easier to build and maintain a bio-dome in the Sahara desert, Arctic region, Cheyenne mountains or underwater than on Mars.

Sure, but a terraformed Mars could eventually support more life.

> Earth at its worst is much better than anywhere else in our solar system.

The same goes for Siberia, Alaska, Canada, etc. There are far more hospitable places to live on Earth than those. But people still explored, migrated and settled. It's human nature.

Edit: Not sure why I'm being downvoted, but if anyone is interested in a talk on Mars colonization, here is an interesting TED talk.

https://www.youtube.com/watch?v=t9c7aheZxls


>I never understood why colonizing Mars is needed.

Correct, but short-sighted.

However much Earth has, we'll quickly gobble it all up.

To quote Bartlett : "The greatest shortcoming of the human race is our inability to understand the exponential function."

(https://en.wikipedia.org/wiki/Albert_Allen_Bartlett)

The only way out is up, and the first stepping stone is the Moon, next is Mars.


Never-ending exponential growth is not possible through space colonization. We can expand at most as fast as a sphere growing at the speed of light, realistically much slower; that's polynomial growth, not exponential.
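
A back-of-the-envelope way to state that bound (my own sketch, not from the comment above): after time t the settled region is at most a ball of radius ct, so

    V(t) = \tfrac{4}{3}\pi (c t)^{3}, \qquad N(t) \le \rho_{\max}\, V(t) \propto t^{3}

Whether you count the total settled volume (proportional to t^3) or just the expanding frontier (proportional to t^2), the bound is polynomial, and any exponential e^{kt} eventually outgrows it.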


Over a sufficiently long term, that's not entirely correct, because colonized planets will eventually start sending out colonizers themselves.


That doesn't work. They'll only reach places already colonized.


"The greatest shortcoming of the human race is our inability to understand the exponential function."

In which case the second greatest must be to assume that all growth is exponential.


The eye opening thing here is not that the AI failed, but why it failed.

At the start, the AI is like a baby: it doesn't know anything and has no opinions. By teaching it with a set of data, in this case a set of resumes and their outcomes, it can form an opinion.

The AI becoming biased suggests that the "teacher" was biased too. So Amazon's recruiting process actually seems to be a mess, with the technical skills on the resume amounting to zilch, and gender and the aggressiveness of the resume's language being the most important factors (because that's how the human recruiters actually hired people when someone submitted a resume).

The number of women and men in the data set shouldn't matter (algorithms learn that even if there was 1 woman, if she was hired then it will be positive about future woman candidates). What matters is the rejection rate, which it learned from the data. The hiring process is inherently biased against women.

Technically, one could say that the AI was successful because it emulated Amazon's current hiring practices.


> The number of women and men in the data set shouldn't matter (algorithms learn that even if there was 1 woman, if she was hired then it will be positive about future woman candidates).

This is incorrect. The key thing to keep in mind is that they are not just predicting who is a good candidate, they are also ranking by the certainty of their prediction.

Lower numbers of female candidates could plausibly lead to lower certainty for the prediction model, as it would have less data on those people. I've never trained a model on resumes, but I definitely often see this "lower certainty on minorities" effect in models I do train.

The lower certainty would in turn lead to lower rankings for women even without any bias in the data.

Now, I'm not saying that Amazon's data isn't biased. I would not be surprised if it were. I'm just saying we should be careful in understanding what is evidence of bias and what is not.


It's wrong even if their model doesn't output a certainty (not all classifiers do). Almost all ML algorithms optimize the expected classification error under the training distribution. So if the training data contains 90% men, it's better to classify those men at 100% accuracy and women at 0% accuracy than it is to classify both at 89.9% accuracy. Any unsophisticated model will do this.

gp: "The number of women and men in the data set shouldn't matter (algorithms learn that even if there was 1 woman, if she was hired then it will be positive about future woman candidates)."

This is false for typical models.
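
A tiny numeric sketch of that point (my own illustration with made-up numbers, nothing from Amazon's system): average training error weights each group's error by its share of the data, so a model that fails the smaller group entirely can still score better.

    # Hypothetical 90/10 split in the training data.
    p_men, p_women = 0.9, 0.1

    def overall_error(err_men, err_women):
        # Expected classification error under the training distribution.
        return p_men * err_men + p_women * err_women

    # Model A: perfect on men, wrong on every woman.
    print(overall_error(err_men=0.0, err_women=1.0))      # 0.10

    # Model B: 89.9% accurate on both groups.
    print(overall_error(err_men=0.101, err_women=0.101))   # ~0.101

    # Model A has the lower average error, so a learner that only minimizes
    # average error will prefer it, despite failing the minority group completely.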


> The lower certainty would in turn lead to lower rankings for women even without any bias in the data.

This is not true.

Probabilistically speaking, if we are computing P(hiring | gender), lower certainty means there is high variance in the prior over women. However, over a large dataset, the "score" would almost certainly be close to the mean of the distribution, and independent of the variance.

In simpler words, if there were a frequency diagram of scores for each gender (most likely bell curves), then only the peak of each bell curve would matter. The flatness or thinness of the curve would be completely irrelevant to the final score. The peak is the mean, and the flatness is the uncertainty. Only the mean matters.
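
A quick simulation of the point being made here (my own sketch, with invented numbers): given enough samples, a group's average score tracks its true mean regardless of how spread out the individual scores are.

    import numpy as np

    rng = np.random.default_rng(0)

    true_mean = 0.6  # assume the same underlying quality for both groups (for illustration)
    scores_large_group = rng.normal(true_mean, 0.05, size=90_000)  # low variance
    scores_small_group = rng.normal(true_mean, 0.30, size=10_000)  # high variance

    print(scores_large_group.mean())  # ~0.60
    print(scores_small_group.mean())  # also ~0.60, despite the much higher variance

    # Whether this carries over to a real ranking system depends on whether the
    # ranker scores by the mean alone or also penalizes uncertainty (see the replies).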


There's not enough information about how their ML algorithm works, nor how large their dataset was for any of the above reasoning to be justified. Fwiw, many ranking functions do indeed take certainty into account, penalizing populations with few data points.


If they were using any sort of neural-network approach with stochastic gradient descent, the network would have to spend some "gradient juice" to cut a divot that recognizes and penalizes women's colleges and the like. It wouldn't do this just because there were fewer women in the batches, rather it would just not assign any weight to those factors.

Unless they presented lots of unqualified resumes of people not in tech as part of the training, which seems like something someone might think reasonable. Then the model would (correctly) determine that very few people coming from women's colleges are CS majors, and penalize them. However, I'd still expect a well-built model to adjust so that if someone was a CS major, it would get rid of any default penalty for attending a particular college.

If the whole thing was hand-engineered, then of course all bets are off. It's hard to deal well with unbalanced classes, and as you mentioned, without knowing what their data looks like we can only speculate on what really happened.

But I will say this: this is not a general failure of ML, these sorts of problems can be avoided if you know what you're doing, unless your data is garbage.


> It wouldn't do this just because there were fewer women in the batches, rather it would just not assign any weight to those factors.

That's exactly the issue we are talking about here. Women's colleges would have less training data, so their features would get updated less. For many classes of models (such as neural networks with weight decay or common initialization schemes) this would encourage the model to be more "neutral" about women and assign predictions closer to 0.5 for them. This might not affect the overall accuracy for women (as it might not change whether they land above or below 0.5), but it would cause the predictions for women to be less confident and thus rank lower (closer to the middle of the pack as opposed to the top).
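
To make the "rarely updated features drift toward neutral" point concrete, here is a minimal, self-contained sketch with scikit-learn; the feature names, counts, and labels are all invented and have nothing to do with Amazon's actual model.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    n = 1000
    X = np.zeros((n, 2))
    X[:300, 0] = 1     # "common" feature: appears in 300 training rows
    X[300:310, 1] = 1  # "rare" feature: appears in only 10 training rows
    y = (X.sum(axis=1) > 0).astype(int)  # both features perfectly predict a positive label

    clf = LogisticRegression(C=1.0).fit(X, y)  # default L2 regularization
    print(clf.coef_)  # the rare feature gets a noticeably smaller weight

    # A candidate carrying only the rare feature gets a less confident score,
    # closer to the middle of the pack, even though the feature was just as
    # predictive in the rows where it appeared.
    print(clf.predict_proba([[1, 0]])[0, 1])  # high (confident)
    print(clf.predict_proba([[0, 1]])[0, 1])  # noticeably lower (less confident)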


I don't think I'm with you. A neural net cannot do this: picking apart male and female tokens requires a signal in the gradients that forces the two classes apart. If there's no gradient, then something like weight decay will just zero out the weights for the "gender" feature, even if it's there to begin with. Confidence wouldn't enter into it, because the feature is irrelevant to the loss function.

A class imbalance doesn't change that: if there's no gradient to follow, then the class in question will be strictly ignored unless you've somehow forced the model to pay attention to it in the architecture (which is possible, but would take some specific effort).

What I'm suggesting is that it's likely that they did (perhaps accidentally?) let a loss gradient between the classes slip into their data, because they had a whole bunch of female resumes that were from people not in tech. That would explain the difference, whereas at least with NNs, simply having imbalanced classes would not.


Supposing "waiter" and "waitress" are both equally qualifying for a job, and most applicants are men, won't the AI score "waiter" as more valuable than "waitress"?


Not generally. The entire point being made is that whether one feature is deemed to be more valuable than another feature depends not just on the data fed into the system but also on the training method used.

Specifically, the gp is pointing out that typical approaches will not pay attention to a feature that doesn't have many data points associated with it. In other words, if it hasn't seen very much of something then it won't "form an opinion" about it and thus the other features will be the ones determining the output value.

Additionally, the gp also points out that if you were to accidentally do something (say, feed in non-tech resumes) that exposed your model to an otherwise missing feature (say, predominantly female hobbies or women's colleges or whatever) in a negative light, then you will have (inadvertently) directly trained your model to treat those features as negatives.

Of course, another (hacky) hypothetical (noted elsewhere in this thread) would be to use "resume + hire/pass" as your data set. In that case, your model would simply try to emulate your current hiring practices. If your current practices exhibit a notable bias towards a given feature, then your model presumably will too.


How did you control for these things? Wondering what patterns there are that people use to prevent social discrimination.

Seems challenging since much of AI, especially classification, is essentially a discrimination algorithm.


There are a few ways you can tackle this issue: 1) have the same algorithm for each group, but train separately (so in the end you have two different sets of weights); 2) over-sample the group underrepresented in the data; 3) make the penalty for guessing wrongly more severe on female than on male applicants during training; 4) apply weights to the gender encoding; 5) use more than just resumes as data.

This isn't an insurmountable problem, but it does require more work than just "encode, throw it in and see what happens" (a rough sketch of options 2 and 3 is below).

Amazon only scrapped the original team, but formed a new one in which diversity is a goal for the output.
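
A rough sketch of what options 2) and 3) above might look like in code, using scikit-learn and purely synthetic stand-in data; none of the features, labels, or weights reflect Amazon's actual pipeline.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.utils import resample

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))                # made-up resume features
    group = (rng.random(1000) < 0.1).astype(int)  # ~10% belong to the underrepresented group
    y = rng.integers(0, 2, size=1000)             # made-up hire / no-hire labels

    # Option 2: over-sample the underrepresented group until both groups are
    # equally represented in the training set.
    minority_idx = np.where(group == 1)[0]
    extra = resample(minority_idx, replace=True,
                     n_samples=(group == 0).sum() - len(minority_idx),
                     random_state=0)
    idx = np.concatenate([np.arange(len(y)), extra])
    clf_oversampled = LogisticRegression().fit(X[idx], y[idx])

    # Option 3: keep the data as-is, but penalize mistakes on the underrepresented
    # group more heavily via per-sample weights during training.
    sample_weight = np.where(group == 1, 9.0, 1.0)  # arbitrary illustrative ratio
    clf_weighted = LogisticRegression().fit(X, y, sample_weight=sample_weight)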


Or: don't include gender in the training data.


They didn’t. It was discovered through other signals (mention of membership in “women’s” clubs, etc.).


So they did. It should be obvious that if you don't want to include gender, then you have to sanitize gender-related data.


That's not as easy as one might think.

Machine learning generally doesn't have any prior opinions about things and will learn any possible correlation in the data.

It could for example discover that certain words or sentence structures used in the resume are more likely associated with bad candidates. Later you find out that <protected class> has a huge amount of people that use these certain words/structures while most other people don't.

And now the AI discriminates against them.

ML will pick up on any possible signal including noise.


More than that, though. Graduates of all-women colleges were also caught. If you're using school as a data point, that's extremely hard to sanitize.


Then what is the purpose of this? At some point you want this thing to "discriminate" (or "select", if that's a better word) between people based on what they have done in life, which is not negative per se.


But you don't want it to select based on gender.


Would it, though? A school name is essentially just that; there's no gender information there, even with the "women" prefix. If you discriminate between other schools, you can do the same with those. FWIW, there could be a difference in performance which the ML finds.


It would. Just because it's not explicitly looking for a "W" in the gender field doesn't mean it's not able to determine gender and discriminate based on that. The article and the discussion is all about how these things, despite not explicitly being told to discriminate based on gender, or race, or any number of factors, can still end up gathering a set of factors that are more prevalent among those groups, and discriminate against those people all the same.


>despite not explicitly being told to discriminate based on gender, or race, or any number of factors

Then this is completely useless. You want this "AI" to discriminate based on a number of things. That's the whole point. You want to find people that can work for you. If a specific school or title is a bad indicator (based on whom you've hired so far), then it just is.


> The lower certainty would in turn lead to lower rankings for women even without any bias in the data.

I don't think that's true. "No bias" means that gender is irrelevant (i.e. its correlation with outcome is 0%). Therefore the system shouldn't even take it into account - it would evaluate both men and women just by other criteria (experience, technical skills, etc), and it would have equal amounts of data for both (because it wouldn't even see them as different).

You need bias to even separate the dataset into distinct categories.


> "No bias" means that gender is irrelevant

False. If we're talking about the technical statistical definition, bias means systematic deviation from the underlying truth in the data -- see this article by Chris Stucchio with some images for clarification:

https://jacobitemag.com/2017/08/29/a-i-bias-doesnt-mean-what...

"In statistics, a “bias” is defined as a statistical predictor which makes errors that all have the same direction. A separate term — “variance” — is used to describe errors without any particular direction.

It’s important to distinguish bias (making errors with a common direction) from variance which is simply inaccuracy with no particular direction."
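
For reference, the standard statistical definitions behind that quote (my own summary, not taken from the linked article): for an estimator \hat{\theta} of a quantity \theta,

    \operatorname{Bias}(\hat{\theta}) = \mathbb{E}[\hat{\theta}] - \theta,
    \qquad
    \operatorname{Var}(\hat{\theta}) = \mathbb{E}\big[(\hat{\theta} - \mathbb{E}[\hat{\theta}])^{2}\big]

Bias measures errors that all point the same way; variance measures spread with no preferred direction.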


I think the comments I replied to mean bias as in “sexist bias”.


Bias as in racism, sexism, etc, has multiple definitions, some of which are mutually exclusive.


Well, it was clear that _you_ think so.

My point was that you should consider the meaning of the word under which the post you're replying to is correct, especially given that the author was claiming specific domain experience.


The original was:

> The lower certainty would in turn lead to lower rankings for women even without any bias in the data.

your post said:

> If we're talking about the technical statistical definition, bias means systematic deviation from the underlying truth in the data

So I think my interpretation is correct, even though it's not "the technically statistically correct usage". You were referring to the bias of the algorithm (i.e. the mean divergence from the mean in the data), whereas we were referring to the "hiring bias" evident in the data. In fact, your "bias" was mentioned as "lower rankings for women" - i.e. "the algorithm would have (statistical) bias even without (sexist) bias in the data" and I was replying that I think that's false.


Question: So technically, the AI is not biased against women per se, but against a set of characteristics/properties that are more common among women?

I'm not trying to split hairs (or argue), as much as further clarify the difference between (the common definition of) human bias and that of statistical bias.


Correct.

Computers are very bad at deliberately discriminating against people; what they do is pick up whatever bias exists in a statistical dataset (i.e., <protected class> uses a certain sentence structure and is statistically less likely to get or keep the job).

Sometimes computers also pick up on statistical truths that we don't like. For example, you set an ML model to classify how likely someone is to pay back their loan and it picks up on poverty and bad neighborhoods, disproportionately affecting people of color or low-income households. In theory there is nothing wrong with the data; after all, these are the people who are least likely to pay back a loan, but our moral framework usually classifies this as bad and discriminatory.

Machine Learning (AI) doesn't have moral frameworks and doesn't know what the truth is. The answers it can give us may not be answers we like or want or should have.

On a side note: human bias is usually not that different, since the brain can be simplified as a Bayesian filter: there are predictions about the present based on past experience, reevaluation of past experience based on current experience, and prediction of future experience based on past and current experience. It's a simplification, but most human bias is based on one of these, either explicitly social (bad experiences with certain classes of people) or implicit (tribalism).


> the brain can be simplified as a bayesian filter

I agree with everything else in your post, but just wanted to note that while this is true to some extent, the brain is much less rational than a pure Bayesian inference system; there are a lot of baked in heuristics designed to short-circuit the collection of data that would be required to make high-quality Bayesian inferences.

This is why excessive stereotyping and tribalism are a fundamental human trait; a pure Bayesian system wouldn't jump to conclusions as quickly as humans do, nor would it refuse to change its mind from those hastily-formed opinions.


> the AI is not biased against women per se

I think I'd make the claim a bit less strongly -- we don't know if there is statistical bias or non-statistical/"gender bias" in the data; both are possible based on what we know.

However exploring the statistical bias possibility, the simple way this could happen is if the data have properties like:

1. For whatever reason, fewer women than men choose to be software engineers.

2. For whatever reason, the women that choose to be software engineers are better at it than men.

(Note I'm just using hypotheticals here, I'm not making claims about the truth of these, or whether it's gender bias that they are true/false).

Depending on how you've set up your classifier, you could effectively be asking "does this candidate look like software engineers I've already hired"? If so, under the first case, you'd correctly answer "not much". Or you could easily go the other way and "bias" towards women if you fit your model to the top 1% where women are better than men, in our hypothetical dataset.

This would result in "gender bias" in the results, but there's no statistical bias here, since your algorithm is correctly answering the question you asked. It's probably the wrong question though!

Figuring out if/when you're asking the right question is quite difficult, and as the sibling comment rightly pointed out, sometimes (e.g. insurance pricing) the strictly "correct" result (from a business/financial point of view) ends up being considered discriminatory under the moral lens.

This is why we can't just wash our hands of these problems and let a machine do it; until we're comfortable that machines understand our morality, they will do that part wrong.


The article didn't specify how they labeled resumes for training. You're assuming that it was based on whether or not the candidate was hired. Nobody with an iota of experience in machine learning would do something like that. (For obvious reasons: you can't tell from your data whether people you did not hire were truly bad.)

A far more reasonable way would be to take resumes of people who were hired and train the model based on their performance. For example, you could rate resumes of people who promptly quit or got fired as less attractive than resumes of people who stayed with the company for a long time. You could also factor in performance reviews.

It is entirely possible that such model would search for people who aren't usually preferred. E.g. if your recruiters are biased against Ph.D.'s, but you have some Ph.D.'s and they're highly productive, the algorithm could pick this up and rate Ph.D. resumes higher.

Now, you still wouldn't know anything about people whom you didn't hire. This means there is some possibility your employees are not representative of general population and your model would be biased because of that.

Let's say your recruiters are biased against Ph.D.'s and so those candidates undergo extra scrutiny. You only hire candidates with a doctoral degree if they are amazing. This means that within your company a doctoral degree is a good predictor of success, but in the world at large it could be a bad criterion to use.
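
A hypothetical sketch of the labeling scheme described above, scoring hired employees by tenure and review history rather than by the hire decision; the column names and thresholds are invented, not Amazon's schema.

    import pandas as pd

    # Made-up records for people who were actually hired.
    employees = pd.DataFrame({
        "resume_text": ["...", "...", "..."],
        "months_tenure": [4, 36, 18],
        "avg_perf_review": [2.1, 4.5, 3.8],  # e.g. a 1-5 review scale
    })

    # Label: 1 = "good hire" (stayed a while and reviewed well), 0 = otherwise.
    employees["label"] = (
        (employees["months_tenure"] >= 12) & (employees["avg_perf_review"] >= 3.5)
    ).astype(int)

    # The (resume_text, label) pairs would then feed a text classifier.
    # Note the survivorship problem raised above: people who were never hired
    # contribute no rows at all, so the model only learns from those who got in.
    print(employees[["months_tenure", "avg_perf_review", "label"]])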


I'm not an ML guy, but reading this, it almost sounds like the training data needs to be a fictional, idealized set, and not based on real world data that already has bias slants built in. Possibly composites of real-world candidates with idealized characteristics and fictional career trajectories. Basically, what-my-company-looks-like vs. what-I-want-it-to-look-like. I'm not sure this is even possible.

It's an interesting question. On one hand, a practical person could argue: "Well, this is what my company looks like, and these are the types of people who fit with our culture and make it, so be it. Find me these types of candidates."

VS

"I don't like the way may company culture looks, I would rather it was more diverse. This mono-culture is potentially leaving money on the table from not being diverse enough. I'm going to take my current employees, chart their career path, composite them (maybe), tweak some of the ugly race and gender stats for those who were promoted, and feed this to my hiring algorithm."


> the training data needs to be a fictional, idealized set, and not based on real world data that already has bias slants built in

That'd be great, but in this case (as in most ML cases) the idea is not "follow this known, tedious process" but instead "we have inputs and results but don't know the rules that connect them; can you figure out the rules?"

> this is what my company looks like

In tech hiring, no one wants the team they have... they want more people, but without regrets (including regretting the cost).


> You're assuming that it was based on whether or not the candidate was hired. Nobody with an iota of experience in machine learning would do something like that. (For obvious reasons: you can't tell from your data whether people you did not hire were truly bad.)

It's a fine strategy if all you're trying to do is cost-cut and replace the people that currently make these decisions (without changing the decisions).

I agree that most people with ML experience would want to do better, and could think of ways to do so with the right data, but if all the data that's available is "resume + hire/no-hire", then this might be the best they could do (or at least the limit of their assignment).


A reasonable assumption but, in practice, false. Many companies believe (perhaps correctly) that their hiring system is good. Using hiring outcomes would be a reasonable dependent variable, especially if supply is lower than demand, performance is difficult to measure, or there’s a huge surplus of applications which need to be cut down to a smaller number of human assessed resumes.


Men are promoted quicker, and more often, than women.


There was a company meeting one year at Amazon when they proudly announced that men and women were paid within 1-2% of each other for the same roles. It completely missed the point which you raise.

I want to see reports of average tenure and time between promotions by gender. I suspect that the reason we don't see those published is that the numbers are damning.


Or possibly no one did a study of sufficient size that passed peer review.

It's also not hard to make the pay gap 1-2% just like it's not hard to make it 25% (both values are valid). Statistics is a fun field. Don't trust statistics you didn't fake yourself.

Amazon could easily cook the numbers to get to 1-2%, I doubt anyone checked the process of determining that number if it's unbiased and fair and accounts for other factors or not.


I didn't write anything about promotions. I mentioned tenure and performance reviews.

If you had a way to accurately predict that some company would systematically downrate you and eventually fire you or force you to quit, would you want to interview there? If you were a recruiter in that company and could accurately predict the same, would it be ethical for you to hire the candidate anyway?

This is not to say that I approve of blindly trusting AI to filter candidates, but the overall issue isn't nearly as simple as many comments here make it out to be.


Does it correlate with performance?


And how is performance measured?

Aggressive behavior is considered admirable in men, and deplorable in women. Many women I know have noted comments in their performance reviews about their behavior - various words that can all be distilled to "bitchy".


And then you take your experience, connections, and expertise, leave, and start your own company where none of this happens.

But is that what we see in real life?

I don't have data or sources at hand, but I'd bet top dollar that the F-to-M ratio is even more lopsided in favor of men among founders than among employees[0].

[0] Not using the word CEO, because that can be appointed for somewhat arbitrary reasons.


citation needed


Downvoters, please explain. The statement makes sense when you look at tech, where there are more men than women, so it may appear that more men are getting promoted compared to their women counterparts. But that doesn't mean men >>> women; it's just statistics at play.


> For obvious reasons: you can't tell from your data whether people you did not hire were truly bad.

Many companies are fine with false negatives in their hiring process. Better to pass on a good candidate than hire a bad one.


This also means that if you hire unqualified women only because they are women, then your AI will be biased against women.


This seems to assume that performance evaluation is itself free from bias.


This doesn’t seem to be a reasonable conclusion. There is no reason to assume the AI’s assessment methods will mirror those of the recruiters. If Amazon did most of its hiring when programming was a task primarily performed by men, and so Amazon didn’t receive many female applicants, they could be unbiased while still amassing a data set that skewed heavily male. The machine would then just correctly assess that female resumes don’t match, as closely, the resumes of successful past candidates.

Perhaps I’m ignorant about AI, but I don’t see why the number of candidates of each gender shouldn’t increase the strength of the signal. “Aggressiveness” in the resume may be correlated but not causal. If the AI was fed the heights of the candidates, it might reject women for being too short, but that would not indicate that height is a criterion Amazon recruiters use when hiring.


This is a subtle point but worth stating -- AI does not mirror or copy human reasoning.

AI is designed to get the same results as a human. How it gets to those results is often very, very different. I'm having trouble finding it, but there was an article a while back trying to do focus tracking between humans and computers for image recognition. What they found was that even when computers were relatively consistent with humans in results, they often focused on different parts of the image and relied on different correlations.

That doesn't mean that Amazon isn't biased. I mean, let's be honest, it probably is; there's no way a company this large is going to be able to perfectly filter or train every employee and on average tech bias trends against women. BUT, the point is that even if Amazon were to completely eliminate bias from every single hiring decision it used in its training data, an AI still might introduce a racial or gendered bias on its own if the data were skewed or had an unseen correlation that researchers didn't intend.


The whole aim of the AI was to make decisions like the recruiters did -- that is explicitly what they were aiming to do. It might be worth reading the article as it addresses your two ideas (the aim of the project and the fact that the training set was indeed heavily male).


Hey. I did read the article. It doesn’t support the conclusion OP is drawing. The aim of the AI is to “mechanize the search for talent”. It doesn’t care to, nor have any means to, make decisions “like the recruiters did”. Obviously machines don’t make decisions like humans do. They’re trying to reverse-engineer an alternative decision-making process from the previous outcomes.


> The aim of the AI is to “mechanize the search for talent”. It doesn’t care to, nor have any means to, make decisions “like the recruiters did”.

This is why AI is so confusing. All "AI" does is rapidly accelerate human decisions by not involving humans, so that speed and consistency are guaranteed. It is not a replacement for human decision making; it is a replacement for human decision making at scale.

If we can't figure out how to do unbiased interviews at the individual level, then AI will never solve this problem. Anyone that tells you otherwise is selling you snake oil.


> If we can't figure out how to do unbiased interviews at the individual level, then AI will never solve this problem. Anyone that tells you otherwise is selling you snake oil.

I wonder to what extent people want to solve it and perhaps more importantly whether or not it can be solved at all...


This is all happening before the interview, even. The AI, as far as I can see from the article, was just sorting resumes into accept/reject piles, based on the kinds of resumes that led to hire/pass results in the hands of humans.


So the recruiters may or may not have been biased, but if the previous outcomes were (based on the candidate pool) then the AI is sure to have been "taught" that bias.

Unless Amazon is willing to accept a) another pool of data or b) that the data will yield bias and apply a correction, the AI is almost guaranteed to be taught the bias.


Yep, I agree a skewed dataset is not good for the task of correcting an unequal distribution and is likely to maintain or even increase it.


Aren't the "previous outcomes" past hiring decisions though?


Yes, but you have to know what pool you started with. As an overly simplistic example, if a bank used historical mortgage approval records from primarily German neighbourhoods to train an AI, it might become racist against non-Germans, despite that being just an artifact of the demographics of the time. I think it just shows how not ready for prime time AI is.


A control question to check whether you're making a certain intellectual mistake.

The data set will also have skewed heavily against people named "David". Probably only ~1% of the successful applicants.

Would you also expect the machine to be biased against candidates named David?


What if people named David got hired 10/100 times in the past but people named Denise only got hired 6/100 times?

Hiring practices as expressed in the data get picked up by the machine and applied accordingly. As such, David is predicted to be a better hire than Denise.

This is not about "David" vs. "Denise", but how the machine learning process will aggregate and classify names. David and David-like names will come out on top while obscure names it has no idea how to deal with (0/0 historically) will probably be given no weighting at all.

Sorry "Daud!" Our algorithm says David is better.


I would expect the AI isn't fed names as an input, but rather things Amazon wants to weigh like experience, awards and education.


This isn't correct: the worry isn't that a single group is small, it's that a single group is large (basically, if one group is large, you can get by ignoring all the smaller groups).

This is most common with binary problems.


I'm going to make a supposition here but one of the first things I think they did (especially when trying to fix the AI) was to balance and normalize the data so that there would be no skew between men and women number of records in the data set.

If my supposition is correct, then the other parameters are at fault here, among which gender and the language used stand out.

Another supposition I'm going to make is that they even removed gender from the data set so that the AI didn't know it, but cross-referencing still showed "faulty" results due to hidden bias that the AI can pick up, like the language used.
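
A minimal sketch of that supposition, with invented data; "gendered_phrase" is just a hypothetical stand-in for whatever wording feature a model might latch onto:

    # Hypothetical data: balance the record counts, drop the gender column,
    # then check whether a wording feature correlated with gender still lets
    # a model reconstruct the old hire-rate gap.
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_m, n_f = 8000, 2000                                    # skewed historical pool
    gender = np.array(["m"] * n_m + ["f"] * n_f)
    gendered_phrase = np.concatenate([rng.random(n_m) < 0.05,    # correlates with
                                      rng.random(n_f) < 0.7])    # gender, not skill
    hired = np.concatenate([rng.random(n_m) < 0.5,               # old hire rates
                            rng.random(n_f) < 0.3])
    df = pd.DataFrame({"gender": gender,
                       "gendered_phrase": gendered_phrase.astype(int),
                       "hired": hired.astype(int)})

    # 1) Balance: downsample men so the record counts match.
    balanced = pd.concat([df[df.gender == "m"].sample(n_f, random_state=0),
                          df[df.gender == "f"]])

    # 2) Remove gender entirely before training.
    clf = LogisticRegression().fit(balanced[["gendered_phrase"]], balanced["hired"])
    print(clf.coef_)   # negative: the wording proxy still carries the old gap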


If they did normalize the data across gender, then you’re correct it may indicate bias on Amazon’s part. But I don’t know about that. The article doesn’t provide enough information. I think it should be obvious, to Amazon as well, that if you want to repair inequality in a trait (gender) you can’t use an unequal dataset to train a machine to select people. I just don’t think it follows that machine bias must mirror human bias.


Did you read the article?

(Serious question. Not intended as snark. Genuinely wondering if I'm missing some deeper current in your post?)


Twice. It doesn’t support OP’s conclusions.


"they could be unbiased while still amassing a data set that skewed heavily male" - this sounds like a self contradiction


Is the NBA biased against white guys?


I don't know - is it? What is the difference between bias and inferring information from skewed data?


Bias, to me, is the active (perhaps unconscious) discrimination based on a trait. Skew is an unequal distribution of that trait as a result of bias in favor of other traits, historical circumstances, or anything other than discrimination.

The NBA wants good basketball players. If they happen to be white, I imagine they'd draft them with equal enthusiasm as any other player. So no, it isn't.


Do you have some information not present in the article? There seem to be some assumptions on the training process in your comment that are not sourced in the article.

I'll don my flak jacket for this one, but based on population statistics I believe a statistically significant number of women have children. A plausible hypothesis is that a typical female candidate is at a 9-month disadvantage relative to male employees, and that this is a statistically significant effect detected by this Amazon tool.

Now, the article says that the results of the tool were 'nearly random', so that probably wasn't the issue. But just because the result of a machine learning process is biased does not indicate that the teacher is biased. It indicates that the data is biased, and bias always has a chance of being linked to a real-world phenomenon.


Does Amazon give 9 months of parental leave, or are you saying women employees are disadvantaged for their entire pregnancy?


Ah. Sorry, silly me. A quick search suggests 20 weeks, so ~4.5 months.

Obviously I don't have much specific insight, so maybe there is a culture where they don't use their leave entitlements. But if there are indicators that identify a sub-population taking a potentially 20-week contiguous break, it is entirely plausible that it would turn up as a statistically significant effect in an objective performance measure. All else being equal, a machine learning model could pick up on that.

The point isn't that this is the be-all and end-all, just that the model might be picking up on something real. There are actual differences in the physical world.
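
A back-of-the-envelope sketch (numbers invented): two groups with identical per-week output, one of which takes a 20-week contiguous break, and a naive annual total already comes out "significant":

    # Invented numbers: same per-week output, different weeks worked that year.
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(1)
    rate_a = rng.normal(10, 2, 500)      # per-week output, same distribution
    rate_b = rng.normal(10, 2, 500)
    annual_a = rate_a * 52               # full year worked
    annual_b = rate_b * 32               # 52 minus a 20-week break

    t, p = ttest_ind(annual_a, annual_b)
    print(round(t, 1), p)                # tiny p-value despite identical ability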


The term "AI" is over-hyped. What we have now is advanced pattern recognition, not intelligence.

Pattern recognition will learn any biases in your training data. An intelligent enough* being does much more than pattern recognition -- intelligent beings have concepts of ethics, social responsibility, value systems, dreams, and ideals, and are able to know what to look for and what to ignore in the process of learning.

A dumb pattern recognition algorithm aims to maximize its correctness. Gradient descent does exactly that. It wants to be correct as much of the time as possible. An intelligent enough being, on the other hand, has at least an idea of de-prioritizing mathematical correctness and putting ethics first.
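
As a minimal, purely illustrative sketch of what that means mechanically (toy data, nothing to do with any production system), gradient descent just walks downhill on whatever loss it is handed:

    # Toy example: gradient descent optimizes only the loss it is given; any
    # regularity (or bias) in the data that lowers the loss gets baked in.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=200)
    y = 3.0 * x + rng.normal(scale=0.5, size=200)   # the data, warts and all

    w, lr = 0.0, 0.1
    for _ in range(100):
        grad = np.mean(2 * (w * x - y) * x)         # d/dw of mean squared error
        w -= lr * grad                              # step toward "more correct"
    print(w)   # ~3.0; fitting the data is the only objective it has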

Deep learning in its current state is emphatically NOT what I would call "intelligence" in that respect.

Google had a big media blooper when their algorithm mistakenly recognized a black person as a gorilla [0]. The fundamental problem here is that state-of-the-art machine learning is not intelligent enough. It sees dark-colored pixels with a face and goes "oh, gorilla". Nothing else. The very fact that people were offended by that is a sign that people are truly intelligent. The fact that the algorithm didn't even know it was offending people is a sign that the algorithm is stupid. Emotions, the ability to be offended, and the ability to understand what offends others, are all products of true intelligence.

If you used today's state-of-the-art machine learning, fed it real data from today's world, and asked it to classify people into [good people, criminals, terrorists], you would end up with an algorithm that labels all black people as criminals and all people with black hair and beards as terrorists. The algorithm might even be the most mathematically correct model. The very fact that you (I sincerely hope) cringe at the above is a sign that YOU are intelligent and this algorithm is stupid.

*People are overall intelligent, and some people behave more intelligently than others. There are members of society that do unintelligent things, like stereotyping, over-generalization, and prejudice, and others who don't.

[0] https://www.theverge.com/2018/1/12/16882408/google-racist-go...


We are pattern recognition machines. If you consider pattern matching unintelligent, then machines are more intelligent than we are, since they rely more on logic than pattern matching.

For the black man = gorilla problem, an untaught human, a small child for instance, can easily make the same mistake. Especially if he has seen few black people. And well educated adults can also make the mistake initially, even if they hate to admit it.

However, in the last case, a second pattern recognition happens, one that matches the result of the image classifier against social rules. And it turns out that mixing up black men and gorillas is a clear anti-pattern, so any match that isn't certain is treated as incorrect.

Unlike us, computer image classifiers typically aren't taught social rules, so, like a small child, they will say things without a filter. That will probably change in the future for public-facing AIs.

Not stereotyping is not a mark of intelligence; it is a mark of a certain type of education. And I don't see why that couldn't be taught with the usual machine learning techniques.


> social rules

I claim it isn't just social rules -- part of that is empathy, which is a manifestation of intelligence that I think is beyond pattern matching.

If a white person were mislabeled as a cat, it would be a cute, funny mistake. Labeling people as dogs, not so much. Gorillas, even worse, despite gorillas being more intelligent and empathetic than cats. Oh, and labeling a white bodybuilder celebrity boxing champion as a gorilla may actually be okay. The same guy as a dog, no. It makes no sense to a logic-based algorithm, but humans "get it".

A human gets it because they could imagine the mistake happening to them, with absolutely zero prior training data. You don't need to have seen 500 examples of people being called gorillas, cats, dogs, turtles and whatever else.

If you want to say that a hundred pattern recognition algorithms working together in a delicate way might manifest intelligence, I think that is possible. But the point is one task-specific lowly pattern recognition algorithm, which is today's state of the art, is pretty stupid.


> We are pattern recognition machines.

That's just one function. That's not the entirety of what the brain (and body) does.

> If you consider pattern matching unintelligent,

What do you think pattern matching IS? Round ball, round hole does not require intelligence; it requires physics. The convoluted Rube Goldberg meat machine that we use to do it doesn't change what it is. Making choices of will and approximations are more signs of intelligence, imo.


"a worldview built on the important of causation is being challenged by a preponderance of correlations. The possession of knowledge, which once meant an understanding of the past, is coming to mean an ability to predict the future." - Big Data (Schonberger & Cukier)

So, knowledge now is allegedly possession of the future, rather than possession of the past.

This is because the future and past are structurally the same thing in these models. Each could be a missing, but re-creatable, link.

Also, conflicting correlations can be shown all the time. If almost any correlation can be shown to be real, what's true? How do we deal with conflicting correlations?


They didn't scrap it because of this gender problem. That wasn't why it failed. They scrapped it because it didn't work anyway.

Note the title is "Amazon scraps secret AI recruiting tool that showed bias against women", not "Amazon scraps secret AI recruiting tool because it showed bias against women". But I guess the more honest title would be less clickbaity: "Amazon scraps secret AI recruiting tool because it didn't work".


The same AI should be applied to hiring nurses and various other fields which show population skews in gender, as well as fields which are not skewed. I'd be curious as to the outcome.


It failed because rationally interpreting gender data leads to politically incorrect conclusions.


How did you come to the conclusion that gender was the most important factor, rather than skills or aggressiveness?


I don't think that's what the parent was claiming; the parent says "gender and aggressiveness" were most important, and that the skills listed on the resume provided such an unclear signal for actual hires that they were not picked up by the AI.


Without regard to this particular issue, you also have to concern yourself with the bias of the person determining if the AI has a bias.


> The AI becoming biased tells that the "teacher" was biased also.

That doesn’t follow.


Someone had to decide on the training material. Note that saying that they had bias does not mean that they acted with malicious intent; most likely they didn't. That doesn't change the outcome, however.


Thanks for spelling this out, I think this is exactly how to look at this.

