Hacker News | wyqydsyq's comments

> in this case, facial recognition tech was unreliable entirely because of the people running it.

Wrong.

In this case the facial recognition tech itself was arguably not unreliable. The human operators were the unreliable factor.

The recognition tech reliably tagged the impersonator as Ousmane exactly as it was instructed to do. The system worked exactly as intended. It is the human operator whose intention was wrong.

This has nothing to do with AI being unreliable and everything to do with the employees of this SIS company going "yep kid's black, he's the one who did it" without half a thought.


> The human operators were the unreliable factor.

Separating the two (operator and technology) is merely a technicality; in reality the general public cannot tell them apart, and they should be treated as parts of the same system.

If AI™ only happens in a black box behind closed doors, with people getting the opportunity to make inconvenient results disappear (and that is basically how it is being established), then no, it is not reliable.

The operator is part of this whole system, if it can't be used without the unreliable operator, the tech is not reliable.


> In this case the facial recognition tech itself was arguably not unreliable.

You could train a fake-money-detection AI on cat videos from YouTube, and it would probably do something with images of cats. However, I hope no one would try to argue in front of a court that the resulting AI was reliable at detecting fake money. In this case a stolen "do not use as ID" slip was apparently good enough to serve as input validation; I would be surprised if their database wasn't overflowing with bad data.


It wasn't validation, it was labelling.

The AI correctly identified that the faces of the people in the two videos belonged to the same person. It did its job. The person responsible for correctly identifying the person in both videos failed to do theirs.


IMO, the operators of the AI are part of the production execution of that AI. There were not proper constraints in place to prevent this from happening, and it wasn't fixed after the first time(s) that it occurred.

Sure you can argue that the pure tech worked as designed, but the system includes the human element as much as the tech element.


The operators made it worse of course, whether enabled by AI or not. We've been dealing with this exact same issue, without AI, for decades when it comes to identity theft.

These "operators" that you say are part of the production execution of "AI" are also part of every single institution that led up to this person being here and existing. Everything from registering their birth, capturing their DMV details, capturing their details when enrolling at a school and opening a bank account, taking their photo for the driver's license, etc. All gooey, imperfect, corruptible and fallible people that taint the "chain of custody" about the person's identity.


>This has nothing to do with AI being unreliable

This has everything to do with AI being unreliable. AI is unreliable, and it was unreliable yet again here. Anyone relying on AI as the evidence in a prosecution needs to have their face rubbed in their ignorance. AI is as unreliable as any human being. In fact, it is more unreliable, because people understand the limits of human reliability and look for corroboration, examining the evidence with a mind on its potential shortcomings. Yet there is still this pervasive and pernicious belief that "computers don't lie", which is completely and utterly false and worth repeating often, with emphasis, until it is part of humanity's collective understanding of the world to the same degree that water is wet.

"Computer says x" by itself counts for nothing. Evidence presented like that on its own should be a huge red flag that something nasty is going on, via either malice or sheer incompetence.


The AI was reliable. It correctly identified that the individuals in two videos were the same person.

The problem was that one human recorded a name that they should have known was not real, and another person read that name in a report and arrested someone with that name. The story would have been no different if a human had correctly recognized the same person in both videos.


What requires them to get approval from the FAA for something outside the FAA's jurisdiction?


The FAA claims jurisdiction worldwide. Did you not see the case where Musk launched from India because he could not get reasonable US approval and they pitched a fit?


I recall that but I don’t think that was SpaceX. Some other satellite firm.


> I find them useful as I write code because they allow me to state my intention (in plain English), translate it to code, then compare statements and see how well I’ve achieved my goal.

IMO this is exactly what unit tests should be used for. Replace these comments with unit tests and you're doing TDD. The practical difference is that by stating your intention as unit tests, you verify not only that your initial implementation is correct, but also that any future changes remain correct.


If you are not careful, you end up with 1000s of tests that are tightly coupled to your implementation, testing internal details of your systems, not their behavior. They become a massive burden if you decide to refactor a system and change its internal implementation.

Do test-driven development, but consider deleting these internal tests once you have scoped out how your initial implementation should work.


Meaning your employer now has all the more reason to replace you as soon as an opportunity presents itself


They can. But it would be very expensive to replace more than 20 years of domain experience.


It would probably be a lot more expensive to replace more than 20 years of domain experience immediately because you quit to go work at rand($FAANG).


Exactly this. If you are a single point of failure knowledge silo, the business sees you as a liability since if you decide to quit, their business is then screwed. This is why turning yourself into a silo is making yourself disposable - it gives the employer a very good reason to try and find an alternative to continuing their reliance on you.


I agree; meanwhile, they prefer giving me a raise rather than starting to look for a backup or replacement.


Why does one exclude the other?


Short term, it costs more to bring in a backup/replacement than to give me a raise.


> No, you just make yourself disposable.

You might be making yourself disposable specifically in the context of your original role, which you are basically making redundant by documenting and automating everything. But trust me, any sensible business will want to keep a staff member who massively improved their team's productivity (by enabling them to be more independent with less siloed knowledge) and reduced the business's risk exposure (by making potentially critical knowledge more accessible and reducing single points of failure).

Employees who try to become indispensable by turning themselves into a mega-silo of knowledge that nobody else in the org has might gain some job security in the short term, but they lose out on any potential job progression.

Turning yourself into a silo like this is practically blackmailing your employer into keeping you. They might keep you employed because they need to keep their systems online, but they will also not think twice about replacing you as soon as an opportunity is presented. That is making yourself disposable.

Turning yourself into a leader who improves the outcomes of various teams in a business will not only make you indispensable, it's genuinely the best (if not the only realistic) path for an engineer to work their way up into more senior or C-suite positions.


>trust me any sensible business will want to keep a staff member who massively improved their team's productivity

Whether sensible or not a lot of companies do not do this.

It is difficult to measure productivity increases at the best of times; measuring an increase in your team's productivity, doubly so. Increasing their productivity may even be a net negative for you, as it makes them look more effective compared to you.

This may be counterproductive for the company in question and seem idiotic, but that doesn't stop it from being common. It's common precisely because it's a template for how most workers are treated. If you see programmers as individual resources not markedly different from Uber drivers (and SO many do), this way of thinking is completely natural.

>Turning yourself into a silo like this is practically blackmailing your employer into keeping you. They might keep you employed because they need to keep their systems online, but they will also not think twice about replacing you as soon as an opportunity is presented.

Right. But how else do you deal with a company that demonstrates repeatedly that they wouldn't think twice about replacing you no matter what? Starting from that assumption along with the restriction that you cannot easily hop jobs - what else are you supposed to do?

Plenty of tech is egalitarian and productive and not like this, but a larger unfashionable un-talked about underbelly of the industry most certainly is. The bimodal salary distribution also comes attached to bimodal working conditions.


> Turning yourself into a silo like this is practically blackmailing your employer into keeping you. They might keep you employed because they need to keep their systems online, but they will also not think twice about replacing you as soon as an opportunity is presented.

All employer / employee relationships have this dynamic. I get paid _n_ because my employer cannot figure out an effective way to do it cheaper, but as soon as they can I will no longer be paid _n_. While I think blackmail isn't an accurate description of this, what you're describing exists in all employment situations. Making yourself more disposable is only going to give the employer more power, which might work out better or worse depending on the employer.


> All employer / employee relationships have this dynamic

Wrong. No employer relies on a newly hired employee the way they rely on a senior, knowledge-siloed engineer with a decade-plus of domain knowledge they are reluctant to share. In most jobs the employee is not in a position to have that kind of bargaining leverage. That leverage is only gained by an employee intentionally and maliciously making the codebase as esoteric and unmaintainable as possible, or by poor management not planning for these risks sufficiently, e.g. not hiring more staff for a legacy COBOL team as its members leave, until there's only one guy left who understands the system.

> While I think blackmail isn't an accurate description of this

I'd argue it is. You know their business depends on you as a result of you actively working to make it depend on you, and you use that knowledge as leverage to bargain for higher pay or to never get fired from a low-effort "cruiser" job. You are essentially blackmailing the business into continuing your employment, under threat of the losses caused by you leaving and nobody else being able to maintain the mess you created. This is not a normal employer/employee relationship dynamic at all.


People that play zero sum games only get to play with other zero sum players. There is some risk they will lose. People that play win-win games get to play with other winners (and be winners).

Be a winner.


I hate to break it to you, but every economic scenario is a zero-sum game. In order for an economy to work you have to have a relatively fixed amount of resources.

This whole win-win ideology is mostly a tool to erase where the losers are, and to make losers feel better about losing. Every increase in my paycheck is a decrease in someone else's.


If that were even remotely true, we would still be sitting in caves sharing raw animal scraps. Fortunately it's not, and things like discovering fire and inventing tools don't take resources away from others; they increase resources for everyone. This is just as true in the modern economy.

Obviously the inventor of fire got more of a reward from his invention's bounty, but everyone still benefited. Similarly whoever automates your job gets most of the reward, but ideally we all benefit through cheaper products.

At least, that's how it's supposed to work. Obviously it doesn't always.


> It's true that Apple have a better track record of keeping data private, and there's been no Cambridge Analytics style atrocities

You seem to be completely unaware of the multitude of data breaches and leaks that Apple has faced, some of which they've even tried to actively cover up. Apple has a very obvious track record of disregarding their customers' interests in favour of protecting their PR. Why notify customers and address data breaches when you can cover them up and pretend to be the champions of customer privacy while announcing anti-competitive policies that give Apple monopolies?


> If you’re looking for a way to share logic between front-end and back-end code

So... exactly like isomorphic/universal JavaScript?

What exactly is the practical benefit of this, compared to, say, sharing JS functions by serialising them as strings, or by abstracting them into a common dependency package which can be run on both the front end and back end?
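For reference, the "common dependency package" approach mentioned above can be as simple as one plain module with no environment-specific APIs, importable by both the Node server and the browser bundle. This is a hypothetical sketch (the module path and `isValidUsername` rule are invented):

```javascript
// shared/validate.js — one module, usable by both server and browser builds.
// Because it touches no DOM or Node-only APIs, the same code runs anywhere.
function isValidUsername(name) {
  // Same rule enforced client-side (instant feedback) and
  // server-side (the authoritative check).
  return typeof name === "string" && /^[a-z0-9_]{3,16}$/.test(name);
}

module.exports = { isValidUsername };
```

The server would `require()` this directly, while a bundler ships the identical function to the browser, so the validation logic can never drift between the two.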


Or if any of the students are from New Zealand


NZ = Sux

Aus = Sex


Most Kiwi accents I've heard noticeably pronounce other vowels as "i": e.g. "dick" instead of "deck", "tinnis" instead of "tennis".


> The interesting question to me is, why is it that they are literally the ONLY large tech company that is willing to offer me this tradeoff?

Maybe other corporations consider it unethical to charge their customers a premium for a false sense of privacy and security?

> Samsung, Google, Facebook, Amazon and Microsoft don't sell privacy

And neither does Apple, they just sell you on the promise of privacy. The reality is quite far removed from the perception most customers are given by their marketing and PR campaigns.

Apple might be a bit more stringent on enforcement of data sharing with third parties compared to other large tech corps, but that doesn't magically mean your privacy is invulnerable through their devices and services.

There have been multiple cases of them being caught out being hypocritical in regards to privacy, there have been multiple data breaches of Apple services and platforms to varying degrees of severity. Since the recent Epic lawsuit, it's also been revealed that Apple decided to not notify some 150 million of their customers who were victims of a data breach.

Anyone who actually thinks Apple cares even remotely about their privacy is living in a fantasy land. Unless you think not being alerted when your personal data is exposed in a breach of their systems is somehow in your privacy's best interests.


> they just sell you on the promise of privacy.

> there have been multiple data breaches of Apple services and platforms

What makes Apple different is the decision to design all their products and services in a way that limits (or avoids altogether) the collection of user information. For example, almost all of the "smarts" of the iPhone run on the device, without sending data like your location and pictures to Apple's servers for processing.

Apple also enforces, through App Store review, that app developers are mindful of users' privacy, and every instance where data is collected needs to be explained and properly justified.

Regarding the story about the 128 million infected devices: it was a virus which infected developer Macs, resulting in some apps also including malicious code. No user data was leaked, and it seems end users suffered no ill effects; cf. https://www.macrumors.com/2021/05/07/xcodeghost-malware-2015...

Of course, no product or service can be 100% secure forever... hacks and malware happen sometimes. That's when practices like app isolation or sandboxing (which is very strict on the iPhone) and explicitly asking users for permissions (so apps can't just grab any sensor telemetry they want) come into play. If an app has been compromised, the malware is limited to the permissions already granted to that app. Nothing more.


If you accept that corporations are not driven by ethics, then that doesn’t make sense.


> No. It is the domain. No one else in this business has a short pronounceable *mail.com

Except for, oh, I don't know, maybe https://mail.com/ ?


Wow, you get this popup on the site :

    Note: Your browser version is outdated. We recommend using the new Firefox Browser. Download now for free!
My browser is up to date, and their "download now" link takes you to their own download page with a custom Firefox download. They seem like call-centre-scammer-level scum. I wonder why Mozilla is allowing them to use their trademarks like this.


They aren't really in a serious email business.

