Hacker News | DeliriousDog's comments

Completely agree on limiting the nesting of your code.

There is more difficulty working with code such as

    if x { /* ... */ } else { /* ... */ }

than code like

    if x { /* returns from this block */ }
    // execution continues

In some legacy code I encounter very long chains of checks nested 4-8 layers deep, which makes it very difficult to maintain a mental model of where you are in execution at any point. I try to refactor into the second pattern above when possible.
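To make the two shapes concrete, here is a minimal Python sketch (the `Order` and `ship` names are invented for illustration). Both functions behave identically, but the guard-clause version reads top to bottom with no branch tracking:

```python
from dataclasses import dataclass, field

@dataclass
class Order:
    items: list = field(default_factory=list)
    paid: bool = False

def ship(order):
    return f"shipped {len(order.items)} item(s)"

# Nested version: every check pushes the happy path one level deeper,
# and you must remember which branch you are in to know what has been checked.
def process_nested(order):
    if order is not None:
        if order.items:
            if order.paid:
                return ship(order)
            else:
                raise ValueError("unpaid")
        else:
            raise ValueError("empty")
    else:
        raise ValueError("missing order")

# Guard-clause version: each failing check exits immediately,
# so every line after a check can assume the check passed.
def process_flat(order):
    if order is None:
        raise ValueError("missing order")
    if not order.items:
        raise ValueError("empty")
    if not order.paid:
        raise ValueError("unpaid")
    return ship(order)
```

The flat version also keeps each error next to the check that produces it, instead of pairing a `raise` at the bottom with an `if` several screens up.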


This comment is disgusting. Voting is what they did about it, and they still have their rights at risk.


Turns out voting is not enough! Damn, if only those Black Panthers got out to vote, we would've fixed racism in America. Shame.


Maybe they should simply refuse to give up power, like your candidate did last time, no?


Because that’s what happened. We didn’t just finish a Biden presidency, did we?

The left can’t admit they still don’t understand voter IDs, but that doesn’t mean we’re going to shut up until they’re implemented nationwide.


You aren't attempting to defraud anyone.

The intent is clearly to prevent entities from publishing clearly fake/ill-gotten reviews. The first amendment does not protect your speech when that speech is used to assist in committing another crime. The second amendment exists, but that does not give you carte blanche to shoot people (extreme example).

For a speech-related example, see the Freeman v. Giuliani case[^1], where the defense stated that they "have a first amendment right to lie"; the court ruled that this does not extend to defamation.

Also remember that there needs to be some measurable level of harm inflicted. A silly comment in this thread is unlikely to have any measurable level of harm, but cheating reviews may result in tens to hundreds of thousands of dollars in sales.

[^1]: APNews https://apnews.com/article/giuliani-2020-election-georgia-de...


There was a (now deleted) comment claiming there is no proof of wage discrimination for Uber/Lyft drivers, posted with no evidence.

A recently published video (https://www.youtube.com/watch?v=OEXJmNj6SPk) shows drivers being offered the same gigs but different payment amounts. Note that I could not find a published version of the data collected in the video.

That is not explicitly proof of wrongdoing, but it clearly demonstrates that algorithmic price setting does not always offer the same payment to the same drivers for the same work. There may be a valid reason why this is the case, but because the calculation method is closed source, the individuals being offered the wage have no way to know why they would be paid less than their peers.

This is work that is often considered "low skill" - which should actually make it extremely cut and dried why one individual would be paid more or less than another. Are they making their pickups faster? Are their customers more satisfied? If so, why would they sometimes be offered more money than their peers and sometimes less?

Almost all workers here are price takers, and suffer greatly from the information asymmetry present. Companies hiding behind "oh but the algorithm says..." is a poor excuse for inequality.

Edit: Because discrimination is in the title of the OP, I feel the need to clarify: in no way is the above saying that the video posted is proof of discrimination. Inequality need not be discrimination. When there is inequality without any measurable source, we need to be skeptical of the reason. Maybe one driver has better customer feedback, therefore they get offered a higher wage. There are many logical explanations for the result, but Uber/Lyft do not seem to engage with the discussion. This should raise red flags. That does not conclude that they are discriminating against anyone, and that would be a poor conclusion to draw without a true investigation.


I could see an AI trying to hunt out a person's bottom line. I could offer this job to everyone for $10, but maybe I'll subtract 0 to 4 dollars when I offer it and see who does or doesn't bite. If someone bites on lower pay, I record that information and offer them even lower pay in the future.

This isn't really abnormal. Every employer does this by setting a wage they are willing to pay and seeing who signs up, knowing that person will now only need to be paid that wage. What is different is the scale and frequency at which it is being done. Instead of affecting a person once per job change, it now affects them multiple times a day, and the data recorded is more detailed and can be acted on more directly.
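The probing strategy described above can be sketched in a few lines. This is a hypothetical illustration, not Uber/Lyft's actual method; all names and the $0-4 discount range are taken from the example above:

```python
import random

def probe_offer(base_pay, accepted_history):
    """Hypothetical bottom-line probe: offer the base rate minus a random
    $0-4 discount, but if the worker has previously bitten on lower pay,
    anchor the offer to the lowest amount they ever accepted."""
    # A worker with no history is probed downward from the base rate;
    # a worker who once accepted less never sees the base rate again.
    ceiling = min(accepted_history) if accepted_history else base_pay
    discount = random.uniform(0, 4)
    return max(ceiling - discount, 0.0)
```

On a $10 job, a fresh worker sees offers between $6 and $10, while a worker who once accepted $7 is never offered more than $7 - the ratchet only turns one way.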

None of this is discrimination against a protected class, but if there are any reasons one demographic might, on average, accept lower pay than another, it will lead to large scale discrimination.

The problem is that our common discussion of these topics lacks the rigor, nuance, and depth to handle such questions, and so it ends up with two large camps. One looks at the methods, sees no obvious discrimination in them, and says it doesn't count as inequality. The other looks at the outcomes, notices the clear difference in outcome this leads to, and calls it inequality. Both are, by their own metrics, correct.


“Price discrimination” (or in this case “wage discrimination”) as described in microeconomics is exactly this—the same seller/buyer demanding/offering different prices for the same goods depending on their idea of how much the buyer/seller will bear. The term shares nothing but etymology with what sociologists, lawyers, or politicians mean by the word “discrimination” (not that those three groups mean the same thing by it).


The issue is that many small-scale price discriminations on individually reasonable criteria might present themselves as a large-scale discrimination of the type that lawmakers and others do care about. The way the terms are overloaded does no favors, but even if we updated the terminology to resolve this, I think the underlying issue would remain.

The pink tax is an example of this happening, though at a scale requiring far less invasive technology than is currently available. It is presented as (big) discrimination even though it arises as price discrimination.


It’s more than that, I think: if this paper holds up (or if it doesn’t, but the ideas it covers are valid and the practices it’s concerned with later come into being), then it’s describing a mechanism for pushing worker wages down, at the individual level and potentially within any or all bands of the economy, toward each worker’s market-clearing rate. A market of many workers becomes many markets of one worker.

This is, um, potentially really bad. It’s several effects that already happen in, if you will, chunkier ways in our economy (especially in the US, with weak or absent unions and poor labor protection laws, compared to many other developed states) becoming applied at a much finer level of resolution (so to speak).


Stay tuned for my new app: Wildcatr

It is installed on gig worker phones and monitors the offered rates. When one worker is offered abusive rates, all other workers have their future offers filtered from view for some period of time, unless an offer exceeds the typical offer by more than the amount the abused worker missed out on.
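The filtering rule described above fits in one predicate. This is a sketch of the hypothetical app's logic, with all names invented:

```python
def offer_visible(offer, typical_offer, abusive_shortfall):
    """Hypothetical 'Wildcatr' rule: after a peer was underpaid by
    `abusive_shortfall` relative to the typical rate, peers only see
    offers that beat the typical rate by more than that shortfall -
    collectively recouping what the abused worker lost."""
    return offer > typical_offer + abusive_shortfall
```

If the typical offer is $10 and one worker was shorted $4, peers would see a $15 offer but not a $13 one, pressuring the platform to raise offers back above the combined loss.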


The issue isn't the "hunt for the bottom line" but the fact that multiple parties are simultaneously offered different price points for reasons unknown (to the workers).

You say it's not discrimination, but you cannot definitively make that claim. That's the issue. Redlining isn't overtly discrimination against a protected class, but silently it is. This is not to say that Uber/Lyft are discriminating against a protected class - it's just that because of the lack of transparency we don't know that they are not.

This is a hard thing for people to accept, but we need to take a deep look at how we implement ML to classify things tied to individuals. It's very easy to de-humanize the humans affected by the systems we build, because "it's just an algorithm."


> it's just that because of the lack of transparency we don't know that they are not.

Is this not the case regardless of whether an algorithm is used or not?


Setting labor price by exchange-like auction is an abusive practice in any context.

Companies get a pass on job interviews because it's basically impossible to prove. But this doesn't make it OK; it just makes it less damaging than the remedy. (Or at least arguably so - a lot of people do argue otherwise, and lots of people are looking for better remedies.)


Not deleted, just (formerly?) flagged to death by other users: https://news.ycombinator.com/item?id=41513943. (The HN software seems to kill newer users’ comments more readily, is my impression.) You can enable “showdead” on your profile page if you want to see such comments.


Thank you! Reading that comment was actually what prompted me to make an account just to reply to it, and then when I did it was gone.


showdead is essential for Orange Reddit but it will show downvoted comments in unreadable light grey on light grey. So you probably want a user CSS (e.g., with Stylus) like this (italic and the particular color are to taste, of course):

    .c5a, .c73, .c82, .c88, .c9c, .cae, .cbe, .cce, .cdd { color: #222; font-style: italic; }


My immediate thought is that they would probably get better results if they intentionally set pay based on social media predictors of wage sensitivity. I expect you could find that there are fingerprints of wage sensitivity, and that this could amount to what's basically predictive union breaking via wage increase.


