At the Biennale in Venice (one of the most important art shows there is) I saw a work which looked like this:
There was a metal frame holding two glass plates with Venetian sediment in between (sand, soil, mud). In the center, another metal frame formed a hole. There were also two PCBs with ATmega microcontrollers.
In the accompanying text, the artist claimed she controlled the biome of the soil with an AI, using various sensors and pumps.
This was clearly fake, as you could see nothing of the sort on the PCB.
Accidentally (?) she managed to create the best representation of AI I have seen in art: all that counts is that you call it AI, even if it is a simple algorithm. AI is the phrase behind which magic hides, and people love magic. Everything that has the aura of “humans don’t fully understand how it works in detail” will be used by charlatans, snake oil salesmen and conmen.
If even artists slap “AI” onto their works to sell them, you know we are past the peak now.
>> Accidentally (?) she managed to create the best representation of AI I have seen in art: all that counts is that you call it AI even if it is a simple algorithm.
Backpropagation, which most researchers will agree is an AI algorithm, is a "simple algorithm".
So are many other AI algorithms, some of which are simple enough to be understood so well that most people don't recognise them as AI anymore: search algorithms like depth-first, breadth-first, or best-first search; game-playing algorithms like alpha-beta minimax; and gradient descent / hill climbing are the examples that readily come to mind.
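To make the point concrete, here is hill climbing, one of the algorithms just named, in its entirety; the objective function and step size are arbitrary stand-ins for illustration:

```cpp
// Minimal hill climbing: repeatedly move to a better neighbour until
// no neighbour improves. The objective f() is an arbitrary stand-in.
#include <cstdio>

double f(double x) { return -(x - 3.0) * (x - 3.0); } // peak at x = 3

int main() {
    double x = 0.0;
    const double step = 0.1;
    while (true) {
        double left = f(x - step), here = f(x), right = f(x + step);
        if (left <= here && right <= here) break; // local maximum reached
        x = (left > right) ? x - step : x + step; // climb the steeper side
    }
    std::printf("local maximum near x = %.2f\n", x); // prints ~3.00
    return 0;
}
```

That's the whole technique, and its stochastic relatives (simulated annealing, stochastic gradient descent) power much of what gets called AI today.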
I think the above article and your comment are assuming that, for an algorithm to be "AI", it must be very complicated and difficult to understand. This is common enough to have a name: "the AI effect". A few years down the line, I bet people will say that "this is not AI, it's just deep learning".
There's no reason for AI algorithms to be complicated. Very simple algorithms can create enormous complexity, even infinite complexity. The state of deterministic systems with even a couple of parameters can become impossible to predict after a small number of steps if they have the chaos property. Language seems to be the application of a finite set of rules on a finite vocabulary to produce an infinite set of utterances. Complexity arises from very simple sources, in nature.
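The logistic map is the textbook example of that chaos property: one parameter, one state variable, and still unpredictable in practice. A quick sketch (starting values arbitrary):

```cpp
// The logistic map x' = r*x*(1-x) is deterministic with one parameter,
// yet at r = 4 two trajectories starting 1e-9 apart diverge completely
// within a few dozen steps -- the chaos property mentioned above.
#include <cstdio>
#include <cmath>

int main() {
    double a = 0.4, b = 0.4 + 1e-9; // two almost identical starting states
    const double r = 4.0;           // fully chaotic regime
    for (int step = 1; step <= 50; ++step) {
        a = r * a * (1.0 - a);
        b = r * b * (1.0 - b);
        if (step % 10 == 0)
            std::printf("step %2d: |a - b| = %.6f\n", step, std::fabs(a - b));
    }
    return 0;
}
```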
The point was that her PCB wasn’t connected to anything at all. She claimed there were pumps and sensors, but there was literally nothing. There were cables etc., and it would certainly fool someone who has no idea of circuit design and electronics, but I happen to know a bit about it, and the circuit almost certainly didn’t do what it was claimed to do.
Ah, I see. I must have misread your comment. I thought you meant that the PCB didn't have anything like (a hardware implementation of?) an AI algorithm on it, not that it had nothing at all on it.
> Backpropagation, which most researchers will agree is an AI algorithm, is a "simple algorithm".
As time rolls on and we see more articles like this one calling out the "AI BS" - which I agree should be called out...
I worry that a new "winter" will set in, and funding will be cut, and research towards how biological neural networks actually work vis-a-vis artificial neural networks will suffer.
Because from what I understand, we currently don't know how such biological networks actually "learn" - because there isn't a "mechanism" for backpropagation to occur.
IIRC, there's still questions on how information is propagated through biological networks; our artificial representations of them are constrained to approximately a single dimension of the real thing (and even that doesn't capture the biology, thus the idea of "spiking neural networks") - but there may be other avenues of information diffusion that are important as well, still to be revealed in the biological makeup.
We know for certain we are missing something fundamental: even if you could scale some of today's best deep learning systems up to data center scale, they wouldn't approximate anything close to what goes on in the human brain, given its size and power constraints.
Figuring this out could be set back, when funding becomes scarce once more.
A lot of people think it will mirror how the internet went: lots of hype, people threw money at it while not really understanding it, but when the profits didn't roll in, the winter came. People forget how desolate tech was, and it wasn't just 2001; probably from 2001 to somewhere in 2005? Anyhow, those who figured out how to make good use of the internet were incredibly successful. Once the winter was over, tech was among the hottest industries for a long time.
AI might end up the same and enter a winter. People will talk about how silly the endeavours into AI were, but a few companies will really figure it out, a huge explosion will occur, and people will wonder how anyone did anything without AI. Something like that.
Sure, backprop is fairly simple, but the thing it produces is somewhat complex, and we seem to find it hard to explain “why” the weights it finds work (even though we understand clearly how it finds weights that do work), right?
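As a sketch of just how simple the mechanics are, here is backprop for a single sigmoid neuron, with all the numbers invented for illustration. The update rule is a few lines of chain rule; what's hard to explain is why the weights a large network ends up with behave the way they do.

```cpp
// Backprop for one sigmoid neuron fitting a single (input, target) pair.
// The whole learning rule is the gradient lines inside the loop.
#include <cstdio>
#include <cmath>

int main() {
    double w = 0.5, b = 0.0;                 // parameters to learn
    const double x = 1.0, t = 0.8, lr = 0.5; // input, target, learning rate
    for (int i = 0; i < 100; ++i) {
        double y = 1.0 / (1.0 + std::exp(-(w * x + b))); // forward pass
        double dLdy = y - t;         // dL/dy for loss L = 0.5*(y - t)^2
        double dydz = y * (1.0 - y); // derivative of the sigmoid
        w -= lr * dLdy * dydz * x;   // chain rule: dL/dw
        b -= lr * dLdy * dydz;       // chain rule: dL/db
    }
    std::printf("output after training: %.3f (target %.1f)\n",
                1.0 / (1.0 + std::exp(-(w * x + b))), t);
    return 0;
}
```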
Without meaning to sound patronising, I believe I understand your confusion. Allow me to explain.
My comment is making an entirely uncontroversial statement: that "backpropagation is an AI algorithm". Not that "backpropagation is AI". The latter could be taken to mean that backpropagation is itself artificially intelligent, that it exhibits some kind of intelligence (leaving aside for the moment the fact that we have no agreed upon definition of "intelligence", artificial or otherwise). If I understand your comment correctly, this is the interpretation you make of my comment.
However, what my comment says, and this should be clear from the context ("most researchers will agree"), is that backpropagation is an algorithm from the field of research that is known as AI.
In that context, "AI", "Artificial Intelligence", is the field of research that investigates methods to construct "AI", "Artificial Intelligence(s)". Backpropagation is a component of one such method, neural networks.
I think then that the confusion, which is also discussed, and exhibited, in the article, stems from the fact that the same word is used to describe both "artificial intelligence" and the field that researches artificial intelligence.
This is not serious: backpropagation is the origin point of modern AI. This is the algorithm that powers the deep network revolution. It's not the end point, it's not a magic box, but it is fundamentally an AI algorithm.
Just saying "no it isn't" is just not helpful or useful.
You are missing the entire point of the article if you continue to call these algorithms "AI". Inflating simple things like this to mean "AI" has led to the term being meaningless.
I thought one of the key principles in this discussion is that the goalposts keep moving: before it's a solved problem, it requires Artificial Intelligence; once solved, it's just basic algorithms. The bar for what constitutes 'AI' keeps getting raised.
People have often believed some narrow tasks require general AI (AGI); however, it turns out that almost any specific task can be solved without building an AGI first. This does not change the meaning of "AGI": a system that is able to perform any mental task as well as an average human.
There are no accepted definitions of these terms. Are they meaningless?
AI that does not include backpropagation or logical deduction or GAs or optimisation is... is... magical thinking. AI without the nuts and bolts from the last 50 years of work is meaningless. The article is heartfelt, and we all agree with the tenet that people pretending they are using AI when they are really using a database isn't a good thing, but if you take any current system and look right down inside it, all you will find is a Shannon-type implementation of Church-Turing.
There are tonnes of other approaches to machine learning that don't involve backprop! There are also other approaches to neural networks that don't use backprop (have a look at Numenta's stuff, for example). I suggest you watch Pat Winston's brilliant MIT AI lectures to see how huge the range of techniques is.
If this is what you are talking about, I'm disappointed that the work is crap. She also won the Hugo Boss prize in 2016. This is a highly respected award in the art world.
The art world isn't doing a good job understanding AI. Part of this is media hype causing people to be misled. Another part is the cognitive tools artists have developed to understand the world are inappropriate for understanding these technologies. For this work to be believable by technologists there would need to be some kind of demonstrable scientific rigor involved. There is none here, but people from the art world who are evaluating these things don't notice it is missing.
You should track down the artist and ask them whether this was intentional, and if not, point out the unintended, poignant commentary it represents on the state of AI in the industry. I bet they’d love it!
With 90% of modern art, the "point they're trying to make" is whatever is most convenient at the occasion. The artists themselves rarely know, as modern art is as much a BS-industrial complex as AI.
Art is not complicated at all. Art marketers and pseudo-artists like to make it complicated to keep the "contemporary art" scam alive and profitable.
That's because real art, like anything else (engineering, piano, sports), takes years and years of hard work to perfect the skill that gives you the ability to translate your ideas into something beautiful and self-explanatory (where you don't need to be an expert to "understand the meaning", as with masters like Michelangelo).
A great example is when Henry Moore saw Michelangelo's work and started to cry, confessing that the only reason he makes such statues is that he knows "he will never be able to create anything as beautiful as that".
Perhaps he would if instead he worked on things that are hard.
It's hilarious how modern art lives on creating documentaries, popularizing an "artist", and then selling art for investment, while fooling people with shaming tactics like "you just don't understand art" or "that was not the point the artist was trying to make".
If the artist has to explain her work in words, she should have chosen literature as her form of expression, for clearly she failed to do so with her current means.
There's a lot of bullshit around art in general; it's not limited to modern art. If we only cared about the quality of the work, then a perfect forgery of da Vinci's style would be valued exactly as much as an original da Vinci.
I can't agree with the idea that art has to be technically difficult to be "real." If a simple, abstract sculpture gives me more joy than a portrait from a great master, that doesn't mean my understanding is defective. (Nor is yours if you disagree.) Meaning isn't limited to what the creator says it is, either.
People are obsessed with nailing down a precise definition of "art," and it always turns out to be "the stuff I like, but not that crap you like."
I take your point on joy, and art perhaps has a broader spectrum, like music. I can enjoy a nice simple pop song, but I don't confuse it with Chopin.
On the technical level, I disagree. There's a difference.
A piano player who plays flawless Beethoven is not Beethoven, because there's a difference between the ability to compose and the ability to play a composed piece.
And so it is with your analogy of da Vinci vs. forgery.
> A great example is when Henry Moore saw Michelangelo's work and started to cry, confessing that the only reason he makes such statues is that he knows "he will never be able to create anything as beautiful as that"
In the film "Carving a Reputation" (BBC, 1998), it is said that when he saw the Medici Tombs (1524-31, in the Medici Chapel in Florence) during his travel scholarship, while he was a student of sculpture at the Royal College of Art, he didn't want to look at Michelangelo's work at first, but finally admitted that those figures possess "a tremendous monumentality. (…) a grandeur of gesture and scale that for me is what great sculpture is", as Moore wrote in his diary. The nobility and grandeur of the Italian tradition was a humbling experience that threw Moore into a profound depression. To be a great sculptor, this is what he had to compete with.
He later claimed the reason he's "inspired" by "Sumerian" art is that he feels Greco-Roman art is over-represented.
Dubious claims of "I" have worked for many humans in many walks of life; I don't see dubious claims of "AI" as much different!
A key problem is that AI is a very wide and vague concept.
What she might have is a simple "expert system" monitoring the inputs, which is what a lot of systems called AI in the 80s were, and many today probably are too. She may have programmed it initially, in its knowledge acquisition phase, with the aid of a machine learning arrangement. That would qualify as "using AI", depending on which interpretation of "AI" you are working by. An expert system can be called a rudimentary AI, and could be implemented as simple combinatorial logic if new learning during operation is not needed.
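For illustration, "simple combinatorial logic" here could be as little as a few fixed if/then rules over sensor readings; the sensor names and thresholds below are invented:

```cpp
// A toy "expert system" over hypothetical soil sensors: fixed if/then
// rules, no learning at runtime. This is the kind of combinatorial
// logic that was routinely marketed as AI in the 80s.
#include <cstdio>

struct Readings { double moisture; double ph; };

const char* decide(const Readings& r) {
    if (r.moisture < 0.2)               return "run pump";
    if (r.ph < 5.5 && r.moisture > 0.6) return "vent chamber";
    return "do nothing";
}

int main() {
    Readings now{0.15, 6.8};                  // hypothetical sensor values
    std::printf("action: %s\n", decide(now)); // prints "run pump"
    return 0;
}
```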
The thing is, her PCB wasn’t connected to the frame electrically at all. There were decoy cables, but they were all connected back to the PCB as far as I could tell.
The remaining circuitry was way too simple for the claimed functionality: basically a DIY Arduino on a bigger PCB with a few LEDs.
Around the inner frame there was transparent silicone; how her “sensors” should be able to get through that layer of see-through insulation without being seen themselves is beyond me.
This is why I think it is a work of fiction. I don’t know whether it was on purpose or because they thought nobody would notice. I certainly found it to be an interesting commentary on AI :)
While I don't doubt what you saw or your explanation/interpretation of the project (that is, what was likely a BS application of the terminology), it should be understood that such a system could in theory be constructed.
It is possible, for instance, to implement a small neural network on a regular Arduino; a simple google search for "arduino neural network" should yield some information. It would be a small step from there to interfacing such an implementation to something else as part of an art project.
Again, I am not doubting your story; I just wanted to point out that what appears to be a simple system could very well have an actual artificial intelligence aspect to it, even given the seemingly unreasonable processing constraints of an Arduino.
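For a sense of scale, a network that fits on an ATmega can be just a handful of multiply-adds. Here is a minimal fixed-weight sketch in plain C++ (Arduino sketches are C++, so this would port directly), with all weights invented:

```cpp
// A fixed-weight 2-2-1 feed-forward network small enough for an ATmega:
// a few multiplies and a sigmoid per neuron, no dynamic memory. In a
// real project you would train offline and bake the learned weights in,
// then call predict() with sensor values from the main loop.
#include <cstdio>
#include <cmath>

float sigmoid(float z) { return 1.0f / (1.0f + std::exp(-z)); }

float predict(float x0, float x1) {
    // hidden layer: two neurons, weights chosen arbitrarily
    float h0 = sigmoid(0.8f * x0 - 0.4f * x1 + 0.1f);
    float h1 = sigmoid(-0.3f * x0 + 0.9f * x1 - 0.2f);
    // output neuron
    return sigmoid(1.2f * h0 - 0.7f * h1 + 0.05f);
}

int main() {
    std::printf("output for (0.5, 0.5): %.3f\n", predict(0.5f, 0.5f));
    return 0;
}
```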
AI may have a definition that is a bit vague but I find it unfair to say that you can call any black box "AI". (Though I'll gladly grant artistic license to your specific example)
I was recently asked to make a very simple introduction to AI. It made me think a bit as I have been annoyed at the confusion between ML and AI that is so frequent nowadays.
I proposed that AI started with Turing's hypothesis that brains are Turing machines and that AI was the field that tries to bring human capacities to computers.
My next slide says "AI is not really a field. It is more of a goal and a theme. And now a set of techniques used in many different fields."
Not necessarily. The main example (that stuck with me) of AI in books I read as a kid 35 yrs ago was the Sirius Cybernetics Corporation. Now I think that was eerily prescient.
Nah, I prefer having something to keep me cynical and disappointed every time another "cutting-edge AI" application comes out. As a research community, we should be held accountable for our failures to achieve the real shit, the full-on Charles Stross vision.
>all that counts is that you call it AI even if it is a simple algorithm
Is that incorrect though? I think a lot of people here are making a lot of assumptions about a very vague and ill-defined term. To me it seems like AI is just a way of saying that a program is capable of making decisions. There is no implicit tie-in to machine learning or neural networks or anything like that. These decisions can come from 5 lines of "if" statements (as is the case with most video game AI).
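Taking the "5 lines of if statements" literally, a typical video game AI really can be this (all names invented):

```cpp
// A guard "AI" of the kind found in countless games: one decision per
// frame, driven entirely by a few if statements.
#include <cstdio>

const char* guardAction(float distToPlayer, int health) {
    if (health < 20)          return "flee";
    if (distToPlayer < 2.0f)  return "attack";
    if (distToPlayer < 10.0f) return "chase";
    return "patrol";
}

int main() {
    std::printf("%s\n", guardAction(5.0f, 80)); // prints "chase"
    return 0;
}
```

Nobody objects to calling that "AI" in a game context, which says something about how elastic the term is.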
"AI is the phrase behind which magic hides and people love magic. Everything that has the aura of “humans don’t fully understand how it works in detail” will be used by charlartans, snake oil salesmen and conmen."
I think you're right. I'm hoping that "show me your qubits" will put a stop to that. It's easy to see qubits because they're hardware; it's a little harder to see a local minima-avoiding heuristic search algorithm.
We might be headed for the trough of disillusionment, but in terms of technology and industry applications, we are still in the early phase of modern AI (big data + ML).
Exactly, we are in a very early phase. However, the hype wave is driving a dangerous level of premature deployment of half-baked parlour tricks, or over-reliance on simple algorithms where there are real consequences, e.g. prison sentences, insurance eligibility and, yes, autonomous vehicles.
I don't know if this is against HN "rules" - but I'm going to give you an explanation anyhow.
I downvoted one of your comments that was essentially a cut-and-paste of this comment here; I have noticed that you have done this more than once here within the comments.
Please refrain from doing this; it seems to add nothing but noise to the conversation. If you believe your commentary has merit within the context of the thread at hand, do your best to restate that opinion in an original manner, not by simply copying and repeating the exact same statement.
I hope this explanation clarifies why I downvoted you; I believe your comment does have merit to the discussion as a whole, but posting it once should be enough to reach the audience with its intended message.
OK, under most circumstances that is a fair point, and I personally hate astroturfers, but in this case I feel the comment was equally valid as an answer to two threads that were physically separate - and could have ended up in vastly different places. If there is a way to link to an existing comment then let me know and I'll use that method in future :-)
I'm doing a bit of hiring at the moment. It's hard to find a single CV which doesn't have some kind of machine learning slant. As much as I think there are plenty of advances in ML left to take, I doubt every graduate will step out into a good application for existing machine learning tools.
Just like not all astrophysicists reach Nobel prize level achievements, not all machine-learning graduates will innovate in their field. But to help you out: in order to find people who are likely to innovate, find out if that is their goal. If it is not, then you know not to hire them. But if that's their goal, maybe start some kind of "program" where you give these people a chance. Six months, a year maybe. If they are close to something, or if you're pleased with them even though they aren't really innovating, then give them a "tenure" so to speak.
An evaluation process; it's very common here in Sweden. I mean, how much money can you really lose in six months? You can kick that shithole straight out on day one if that's your prerogative. It's a good deal for both parties.
> There is little room for the application of machine learning
You see people trying to shoehorn ML into many such systems though, and there is money in it, which is why people are chasing it to have it on their CVs.
Like the noSQL hype of some years ago, it'll settle down and people will gravitate back towards the right tool for the job (which will sometimes be ML-based, but often not, just as "noSQL" is sometimes the right tool, or right enough). ML will survive where it is the best tool for the job, or at least where it can be genuinely useful and not significantly sub-optimal.
> our problems are CRUD at modest scale.
I see some of our client base looking into ML and "Big Data", and I despair a little because they often fail badly at getting "little data" correct. It is actually part of the sales pitch for ML: let the AI filter out the crap in your inputs and give you something approximating a decent answer as output. They'd be much better served working on fixing the data sources or using more traditional cleansing methods, but that seems like harder work compared to the new magic some consultant is extolling. ML isn't a magic bullet, but it is currently being sold as one.
Okay, but as much as we like to believe profitable companies are run by idiots, they aren't. They need to stay profitable or create profit, so they hire technical managers who know that AI is really just finding uses for data.