Hacker News | buza's comments

Spend time using them. You'll quickly learn how much they aren't living up to the hype.


Presumably it would need to be trained with good examples of solutions to complex problems. Maybe there just won't be that many for the foreseeable future.


Yeah, exactly. AI was trained on real data, and in a few years there will only be AI-generated versions of real data to run off. I.e., a lot of that really crappy code will be deployed to GitHub, and the result will be that even more shitty code gets written.

That's pretty much what has happened over the last 30 years: you need fewer skills to produce more and more. Google search started it all. I remember quipping back then that offshored devs wouldn't be able to get a job without an internet connection, because previously us engineers prided ourselves on understanding and memorising APIs. The AI programmer will outpace India's cert mills in quick succession, but it won't benefit anyone except the large corporations who own the AI. At least the Google devs added questions to SO; the AI dev is a leecher who only profits and plagiarises, never contributes.


Interesting point. But won't there be a lot of so-called AI whisperers who train and own models that are very good?


I believe this is the study being referenced. https://jamanetwork.com/journals/jama/fullarticle/2787741


Thank you for sharing - and note that this is not the full study.

If you follow the link for the full study, you end up at https://jamanetwork.com/journals/jamainternalmedicine/fullar....

I recommend reading the comments; there are serious concerns about the validity of the study.

There are also fundamental math errors, which the authors acknowledge (search for "we acknowledge" in the comments). The funny part: even while acknowledging it, they still fail to do the math correctly; the PPV is 39.7%, not 59%.
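For anyone wanting to check this kind of claim themselves, here's a minimal sketch of how positive predictive value falls out of sensitivity, specificity, and prevalence via Bayes' rule. The numbers below are purely illustrative, not the study's actual figures:

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value: P(disease | positive test), via Bayes' rule."""
    true_pos = sensitivity * prevalence          # P(positive AND diseased)
    false_pos = (1 - specificity) * (1 - prevalence)  # P(positive AND healthy)
    return true_pos / (true_pos + false_pos)

# Hypothetical example: a 90%-sensitive, 90%-specific test at 10% prevalence.
# Even a "90/90" test is only a coin flip on a positive result here.
print(round(ppv(0.9, 0.9, 0.1), 3))  # 0.5
```

The point is that PPV depends heavily on prevalence, which is the term that most often gets dropped in back-of-the-envelope calculations like the one the authors botched.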

Still not enough? Read Esther Rodriguez's comment on the paper.

Still not enough? Read this post on numerical inconsistencies: https://pubpeer.com/publications/0a3dd058f6fb53312c5ddd858ad... (Although I think this is possibly addressed by https://jamanetwork.com/journals/jamainternalmedicine/fullar... - I currently don't have the time to follow up in detail.)

The paper is, as the kids say, "somewhat sus".

Might be innocent mistakes. Might even still be a correct conclusion - but this paper can't stand by itself, and there's nothing else backing it up (as far as I can tell with a quick search).


Yep, that's the one; Google search was pretty much useless for finding this.


I thought that surely some mention of end user programming would have been warranted. https://en.wikipedia.org/wiki/End-user_development


I took this class with zero functional programming exposure as a grad student, fwiw


I was also reminded of this patent, which has the potential to be applied in problematic contexts. https://www.patentlyapple.com/patently-apple/2016/06/apple-w...


Even more incredible is that the opt out link requests a clear view of your face to proceed.


Even more incredible that it requires a picture of your ID


It makes sense to me. This is a database keyed by facial images. They don't know your name with certainty. The only way to look you up in the database is by face. Then presumably they need the ID to make sure it's you who's requesting the info. Hard to imagine how else to do it, given the nature of the technology.


Exactly. Otherwise, this is just a vector for any person to exploit the process and freely play cop by uploading a photo of someone they're trying to track.


But isn't that the whole value of their service?


Yes, but the value is to provide that in exchange for money, and ostensibly only to law enforcement agencies and similar organizations. (If you try to sign up on their website, it says "Clearview is available to active law enforcement personnel" and that you need to apply for access.) If you're a random citizen and can get the same data, especially for free, the value proposition breaks. And the privacy implications would be worse.

So, I get why they ask for ID, even though I also get the reluctance to give them your ID since it could help tune their system.


How much would you worry about your ID being leaked, if/when Clearview AI is hacked into? What can be done with the info?


How else would they know what images to remove?


My wife gave birth last year in California, and immediately upon returning home we located the paperwork for requesting that the DNA be destroyed and submitted it. We received a response saying our request was received, but that's all. Unfortunately, neither of us can remember where we found the form in the first place. Maybe it was in the hospital paperwork?


Reminds me of the Apple patent for disabling cameras in such situations.

http://bits.blogs.nytimes.com/2011/06/03/apple-patents-way-t...


If you captured and replayed the IR flashing signals you might be able to abuse such a system to prevent people from taking pictures of other things, such as yourself. I wonder if a little bit of cryptography embedded in the signal and the phone could make replay attacks more difficult, perhaps something similar to the codes used by garage door openers and car keyfobs. Or maybe by encrypting a counter.
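The keyfob-style "encrypted counter" idea sketched above can be made concrete. Here's a minimal rolling-code sketch, assuming a shared secret provisioned to both the IR emitter and the phone (the key, counter window, and code length are all hypothetical design choices, not anything from the patent):

```python
import hmac
import hashlib
import struct

SECRET = b"shared-venue-key"  # hypothetical secret known to emitter and phone

def rolling_code(counter: int) -> bytes:
    """Derive a short code from a monotonically increasing counter,
    in the spirit of garage-door/keyfob rolling codes."""
    msg = struct.pack(">Q", counter)
    return hmac.new(SECRET, msg, hashlib.sha256).digest()[:4]

def verify(code: bytes, last_seen: int, window: int = 16):
    """Accept only codes for counters strictly ahead of the last one seen,
    so a captured-and-replayed (older) code is rejected."""
    for candidate in range(last_seen + 1, last_seen + 1 + window):
        if hmac.compare_digest(code, rolling_code(candidate)):
            return candidate  # new counter position to store
    return None
```

A fresh code advances the stored counter; replaying an old capture fails because its counter is no longer ahead of `last_seen`. This only defeats naive replay, of course - an attacker with the shared secret, or a purely broadcast scheme with no per-device keys, would still be exploitable.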

If the IR sensor was separate from the camera module you might be able to put tape over it and prevent the signal from being received by your phone, defeating it. But if the IR sensor was actually a custom component embedded inside the CCD chip (which is not out of the question for Apple) then this would be difficult to defeat. You'd have to carefully mask out the non-IR-filtered area of the CCD chip with material that filters IR.


Question for the developers: It looks like the iOS app allows you to download apps built with the desktop tool, which supports evaluation of Javascript snippets. Isn't this against the App Store policy for the download and execution of arbitrary code?


Hi, one of the devs here! You're correct, PencilCase: Player can evaluate JavaScript using the JavaScriptCore framework included on the device. The App Store already has a lot of apps available that allow exactly this or other very similar options (Python, Lua, etc.) and we look forward to seeing how people take advantage of this ability.


The difference I'm specifically curious about is the issue about 'downloading' arbitrary code and evaluating it. Current app offerings don't have this support because Apple has forbidden it in the past unless the evaluation goes through the Javascript context of a web view, which doesn't appear to be the case for PencilCase. I guess I'm curious if they have relaxed their restrictions in this case, are unaware of that feature, or something else.


Yeah, App Store guidelines can be ambiguous, and JavaScriptCore becoming a public framework could be seen as an endorsement of this type of functionality, but we don't have any information that you wouldn't have.


Not if it runs with JavaScriptCore, I think.

