A lot of states are working on legislation that includes requirements for watermarking AI-generated content. But it seldom defines AI with any rigor, making me wonder if soon everyone will need to label everything as made with AI just to be on the safe side, kinda like Prop 65 warnings.
This is not quite like the "AI" that's been hyped in recent years; the key component is OpenCV, which has been around for decades. A few years ago, this might have been called Machine Learning (ML) instead of Artificial Intelligence (AI).
So it doesn't actually drop hats onto heads, and it doesn't use what most people would consider AI... I think I could probably rig up something to gracelessly shove an item out of an open window too, which is basically what we're left with. It would take longer to create the app for booking appointments and to set up everything for payment processing.
You have discovered a secret area of my personalized "pet peeves" level: just a few days ago I saw an article (maybe a video) about how "AI" tracks you in a restaurant. The screenshot was from an OpenCV-based app with a bounding box around each person; it counted how many people were in the establishment, who was a waiter and who was a customer, and how long they had been there.
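For the curious, the classic pre-deep-learning version of that trick is only a few lines of OpenCV. A minimal sketch using the stock HOG+SVM people detector (the image path is hypothetical, and telling waiters from customers is left as an exercise):

    import cv2

    # The HOG+SVM pedestrian detector has shipped with OpenCV for ages;
    # no neural network involved.
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    img = cv2.imread("restaurant.jpg")  # hypothetical input image
    boxes, weights = hog.detectMultiScale(img, winStride=(8, 8))

    print(f"people in the establishment: {len(boxes)}")
    for (x, y, w, h) in boxes:
        # draw a bounding box around each detected person
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imwrite("annotated.jpg", img)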
There's an old saying: "Yesterday's AI is today's algorithm." Few would consider A* search for route planning or alpha-beta pruning for game playing to be "capital-A, capital-I" AI today, but they absolutely were at their inception. Heck, the various modern elaborations on A* are mostly still published in AI venues (AAAI).
https://en.wikipedia.org/wiki/AI_effect We've already got a name for it; it just needs to be properly propagated until there's no value left in calling things 'AI'.
This is a fair point, and maybe someone better versed can correct me, but pretty much all state-of-the-art image recognition is trained neural networks nowadays, right? A* is still something a human can reasonably code (see the sketch below); it seems to me there is a legitimate distinction between these two kinds of things.
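To back up "a human can reasonably code it", here is a minimal grid A* sketch; the 4-connected grid and Manhattan heuristic are assumptions for illustration (0 = free cell, 1 = wall):

    import heapq

    def astar(grid, start, goal):
        rows, cols = len(grid), len(grid[0])

        def h(p):
            # Manhattan-distance heuristic: admissible on a 4-connected grid
            return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

        # priority queue of (f = g + h, g, cell, path-so-far)
        frontier = [(h(start), 0, start, [start])]
        seen = set()
        while frontier:
            f, g, cur, path = heapq.heappop(frontier)
            if cur == goal:
                return path
            if cur in seen:
                continue
            seen.add(cur)
            r, c = cur
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                    heapq.heappush(frontier,
                                   (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]))
        return None  # no route exists

    print(astar([[0, 0, 0],
                 [1, 1, 0],
                 [0, 0, 0]], (0, 0), (2, 0)))

Meanwhile nobody hand-writes the weights of a modern image classifier, which is roughly the distinction I mean.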
Yes, no more machine code. Everything was to be written in BASIC. ...how we laughed at that outlandish idea. It was so obvious performance would be... well... pretty much what we have today.
IKR? If you can't hand-pick where instructions are located on the drum, you may have to use separate constants, and if that's the case, what is even the point?
If you spend a few hours writing a bit of code that has to run for decades, millions or billions of times per day on hundreds of thousands or millions of machines, it seems quite significant to use only the instructions needed to make it work. A few hundred thousand extra instructions seems like a lot. One would imagine other useful things could be done with quintillions or septillions of cycles besides saving a few development hours.
No, in an introduction to data structures and algorithms class. It’s pretty odd behavior to disagree with someone who is simply sharing their lived experience.
Yeah, sorry; rereading that, it came off as way too aggressive for no reason. Looking back at the chain, I think I just meant that it's an algorithm that was frequently taught in AI classes, so at least some profs think it counts, even though it was called an algorithm.
Maybe it is easier to define what isn't AI? Toshiba's handwritten postal code recognizers from the 1970s? Fuzzy logic in washing machines that adjusts the pre-programmed cycle based on laundry weight and dirtiness?
Historically, we often call something AI while we don't really understand how it works. Once we do, it quietly gets subsumed into machine learning or another area and is called the X algorithm.
Adding two numbers, each having 100 digits? Reciting the fractional part of π on and on? I have only seen that done by talented people appearing in TV shows. Seems like AI to me.
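For the record, a minimal Python sketch of the hundred-digit trick; ints are arbitrary precision, so this is the whole act (the two numbers are hypothetical):

    # two hypothetical 100-digit numbers; Python ints are arbitrary precision
    a = int("9" * 100)
    b = int("8" * 100)
    print(a + b)  # exact 101-digit result, no TV-show talent required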
That's my point: legislation seldom defines AI rigorously enough to exclude work like OpenCV. I presume that leaves it to courts or prosecutorial discretion.
Be it "AI" or not, these mostly fall under "AI" legislation, at least in the new EU AI Act. That is IMHO a better way to legislate than tying laws to specific algorithms du jour.
If Big AI lobbyists get their way, this is exactly the kind of warnings we'll get.
Flood users with warnings on everything and it'll get ignored. Especially if there's no penalty for warning when there isn't a risk.
Big Tobacco must love Prop 65 warnings: by making it look like everything causes cancer, they keep smokers blissfully ignorant of just how large the risk factor for tobacco is compared to most other things.
I fear you're right: cookie banners will soon be joined by endless AI disclaimers that, net net, desensitize end users, who will just skip past poorly crafted regulation and get on with their lives.
Poorly enforced regulation. Most cookie banners are illegal, but businesses, especially large ones, have too much power to be effectively regulated.
The nags are kind of malicious semi-compliance, partly in an effort to make the regulation look bad.
This comment is known to the State of California to contain text that may cause you to ignore warnings, which may lead to cancer, reproductive defects, and some other shit that I can't remember, because it's been almost a decade since I lived in California and, weirdly, I can't easily find the full text of one of these online through a quick search (emphasis: quick).