The background of almost everyone in Brussels seems so wrong for today's technological realities. I believe this sentiment is shared by a lot of people, and now it shows in Europe plainly lagging behind in technology. Which is such a shame, given the history of discovery and advancement that played out on the continent for centuries.
The whole European political elite and ruling class feels like a quasi-aristocracy (something the US is slowly moving into as well, with political dynasties and such) that is used to going to some big-name arts/humanities school and then sliding onto the bureaucratic ladder. Totally detached people, and it's a pity, because we really need Europe to be better.
The weakest point is firearms on one side and no guns to speak of on the other. To grasp the scale of the Iranian killing: up to 20K people killed over 23 days of protests. That is half the killing rate of the Ukraine war, which is a full-scale war with a 1,000 km front line and more than 1M soldiers shooting at each other.
Just to be clear, I think you meant to say it's half the civilian casualty rate in Ukraine. Aside from guns, it seems the Iranian government also pulled in foreign mercenaries to shoot at its own citizens, geez.
No, fortunately civilian casualties in Ukraine are significantly lower than that (except for Mariupol, where 20-50K civilians were killed during two months of fighting in 2022). It is the soldiers' deaths: 500-1,500/day on each side.
1) It becomes increasingly dangerous to download stuff from the internet and just run it, even if it's open source, given that people normally don't read all of it. For weird repos I'd indeed recommend running an automated analysis with Opus 4.5 or GPT 5.2 (a sketch of that below).
2) If we assume adversaries are using LLMs to churn out exploits 24/7, which we absolutely should, then perhaps the time when we turn the internet off whenever it isn't needed is not far off.
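A minimal sketch of such a pre-run scan, assuming the Anthropic Python SDK; the model id, repo path, glob, and prompt are placeholders you'd adapt:

    # pip install anthropic; expects ANTHROPIC_API_KEY in the environment
    import pathlib
    import anthropic

    client = anthropic.Anthropic()

    # Concatenate the repo's Python sources (placeholder path and glob).
    source = "\n\n".join(
        f"--- {p} ---\n{p.read_text(errors='ignore')}"
        for p in pathlib.Path("some-weird-repo").rglob("*.py")
    )

    msg = client.messages.create(
        model="claude-opus-4-5",  # assumed model id for "Opus 4.5"
        max_tokens=2048,
        messages=[{
            "role": "user",
            "content": "Flag anything malicious or suspicious in this code "
                       "(exfiltration, obfuscation, odd network calls):\n\n" + source,
        }],
    )
    print(msg.content[0].text)

This obviously doesn't replace reading the code, and a big repo would need chunking, but it's cheap enough to run before executing anything weird.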
...well, just don't download random stuff from the internet and run it on your important machines then? :-))
You are right:
30 years ago, it was safe to go to vendor XY's page and download their latest version, and it was more or less waterproof.
Today, with all these mirror sites that often have better SEO rankings than the original, it's quite dangerous:
In my former bank we had a colleague who had used a browser add-in for years (at home and at the bank); then he got a new notebook with a fresh browser and installed the same extension, but from a different source than the original vendor.
Unfortunately, this version contained malware, and a big transaction was caught by compliance at the very last second, because he wasn't aware of the data leakage.
> 30 years ago, it was safe to go to vendor XY's page and download their latest version, and it was more or less waterproof.
You _are_ joking, right? I distinctly remember all sorts of dubious freewarez sites with slightly modified installers, 1997-2000 era. And anti-virus was a thing even back in MS-DOS.
We already have some markup for architectures: d2lang, sequencediagram.org's notation, bpmn.io XMLs (which are OMG XMLs). So the question is: can we master these, and not invent new stuff for a while?
P.S. A combination of the above fares very well in my agentic coding adventures.
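For instance, a minimal d2lang sketch; the component names are made up, the shape/arrow syntax is stock d2:

    # three boxes and two labelled arrows, nothing invented
    api: API Gateway
    svc: Order Service
    db: Postgres

    api -> svc: REST
    svc -> db: SQL

Run it through `d2 arch.d2 arch.svg` and you get a rendered diagram; agents handle this notation well precisely because it's terse and well documented.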
After 25 years, Wikipedia showed what it was truly created for, by selling the content for training. Otherwise, okay, this was a cool project, but perhaps we need something better. Like federated, crypto-signed articles that, once collected together, @atproto style, produce the article along with its notable changes.
Their enterprise offering is more for fresh retrieval than training. For training, you can just download the free database dump — one you would inadvertently end up recreating if you were to use their enterprise APIs in a (pre-)training pipeline.
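For the record, streaming that dump is a few lines; a sketch assuming the standard dumps.wikimedia.org URL layout (adjust wiki and date to taste):

    import bz2
    import urllib.request
    import xml.etree.ElementTree as ET

    DUMP = ("https://dumps.wikimedia.org/enwiki/latest/"
            "enwiki-latest-pages-articles.xml.bz2")

    with urllib.request.urlopen(DUMP) as resp:
        # Decompress on the fly; iterparse keeps memory flat even though
        # the decompressed XML runs to tens of GB.
        with bz2.open(resp) as stream:
            for _, elem in ET.iterparse(stream):
                if elem.tag.endswith("title"):
                    print(elem.text)
                elem.clear()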
I'm using a harsh allegory to express massive discontent with the fact that someone tended user content for 25 years, only for this content to become a training corpus.
It is perhaps not that Wikipedia in particular was created for this, that much we can hope, but nowadays it seems such public services are best monetised in this way. I have an actual memory of when Wikipedia started, and of the enthusiasm of millions of people for it.
And no, I'm not alright with the fact that so many people contributed effort AND money to this project, only for Jimmy to figure out how to sell it better to big corpo.
Seems unfair, just as it seems unfair to get these downvotes. Nobody liked that MS bought GitHub and used all of it to create Copilot, so how is this different?
Nobody cares until it is too late. And it is also very hard to get right, given that most P2P networks eventually become centralized, or come to depend on a centralized mesh of hosts. Otherwise I totally agree with the statement; I'm just not sure whether it is practically possible.
What applications are based on this? I mean, it sounds super charming and nostalgic to drop a line or two that runs on WinXP, but is this actually useful?
Mostly legacy industrial machines that need some additional software for telemetry, scheduling, automation, etc.
These machines are likely to live at least another 10-15 years, and even the brand-new ones being sold today use Windows 7.
Modern languages and frameworks move on and leave these old systems behind, but everything from our infrastructure to our manufacturing capacity runs on legacy systems, not modern computers. And the cost of replacing the computers is usually higher than the cost of the machine itself.
Precisely: every reverb is an impulse response, and lots of other effects are effectively some sort of convolution, the same operation behind the neural networks we otherwise call AI. Arpeggiators are AI, and the random jumps between patterns in Ableton are a Markov chain.
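To make the first claim concrete, convolution reverb really is one line of signal processing; a sketch assuming mono WAV files, with dry.wav and ir.wav as placeholder names:

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import fftconvolve

    rate, dry = wavfile.read("dry.wav")   # source signal
    _, ir = wavfile.read("ir.wav")        # room impulse response

    # The reverb itself: convolve the dry signal with the impulse response.
    wet = fftconvolve(dry.astype(np.float64), ir.astype(np.float64))
    wet /= np.max(np.abs(wet))            # normalize to avoid clipping

    wavfile.write("wet.wav", rate, (wet * 32767).astype(np.int16))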
What does Bandcamp really mean, then? Perhaps sampling others' voices and music is barred, but not these mini-AIs that are everywhere?
Yeah, and oscillators ringing together in an FFT choir based on notes from a diffused image is absolutely, totally not AI, just algorithms. Really, why be so rude, given you understand the math behind it? Obtuse is not a nice word, not something I would say to people at random. Because, you see, back in the day generative grammars were called AI, and so were many other discrete structures that are employed in music generation, sorry, production, on an everyday basis.
Algorithmic progression generation HAS BEEN IN USE for years, sorry you hadn't noticed, or perhaps you don't listen that much to everyday radio. Markov chains, constraint solvers, and rule-based harmony live in many VSTs (a toy example below)... the fact that there are so many "experimenters" out there winding knobs to match a pleasurable pattern does not change the fact that they are 100% ignorant of the 'deus ex machina'.
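For the skeptics, a toy first-order Markov chord generator, the kind of logic that has been shipping inside progression tools for years (transition weights picked purely for illustration):

    import random

    TRANSITIONS = {
        "C":  [("F", 0.4), ("G", 0.4), ("Am", 0.2)],
        "F":  [("G", 0.5), ("C", 0.5)],
        "G":  [("C", 0.7), ("Am", 0.3)],
        "Am": [("F", 0.6), ("G", 0.4)],
    }

    def next_chord(current):
        chords, weights = zip(*TRANSITIONS[current])
        return random.choices(chords, weights=weights)[0]

    chord = "C"
    progression = []
    for _ in range(8):
        progression.append(chord)
        chord = next_chord(chord)
    print(" ".join(progression))  # e.g. C G C F G C G Am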
I'm surrounded by producers who have absolutely no clue about the vast amount of actual AI and actual probabilistic algorithms that make their "unique" sounds possible. And all of them are 100% ignorant of what AI means when they say it, because they don't mean any specific thing.
How is this not AI? Or does one need a transformer-based model to call it AI? This whole story did not start a year or two ago; you may be late for history class, though. The fact that there has been a moving marketing concept of what "AI" actually is does not change the reality that most modern music (including acoustic) gets, at some point in the production process, artificially enhanced by honestly super-complex systems that are intelligent enough to do what would otherwise take 20x more effort to get right.
The author fails to recognize that CLI agents make all kinds of hosting easier and fun. Like publishing to Cloudflare Pages, which costs close to nothing and now takes seconds, where previously it could take days.
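The whole deploy can be a single command, assuming the wrangler CLI and an existing Pages project (./dist and my-site are placeholders):

    npx wrangler pages deploy ./dist --project-name=my-site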