If the company is half-baked, those "two dudes" will become indispensable beyond belief. They're the ones who understand far more deeply how Excel actually works, and paired with Claude for Excel they become far more valuable.
At my org it's more that these AI tools finally let the employees get through things at all. The deadlines are getting met for the first time, maybe ever. We can at last get to the projects that will make the company money instead of chasing ghosts from 2021. The burn-down charts are warm now.
Plus, a lot of people are generating hallucinations and believing that amounts to invoking creativity. I contend the outputs/generations are junk, but human creativity and human comprehension step in and create meaning from the hallucination.
Until there is a formal, accepted, definitive distinction between intelligence, comprehension, memory, and action, all these opinions are just stabs in the dark. We haven't defined the terms yet. We currently do not have artificial comprehension; that's what sort of happens during training. The intelligence everyone claims to see is a pre-calculated idiot savant. If you knew it was all a pre-calculated domino cascade, would you still say it's intelligent?
Intelligence is executing actions and cognition that pay back the cost of those actions and support the next generation. No intelligence can appear outside of social bootstrapping; it always needs someone to pay the initial costs. So the cost of execution drives a need for efficiency, and that efficiency is intelligence.
Current AIs cannot comprehend on the fly, meaning that if they are presented with data outside their training, the reply generated will be a hallucination interpolated from the training data into unknown territory. Yet a person in possession of comprehension can go beyond their training, on the fly, and that is how humans learn. AIs cannot do that, which is critical.
I agree with you: current models can't work entirely outside their training set. An example of an AI trained against an environment with feedback/outcome learning is AlphaZero, and it totally beat us at our own game. Even so, DeepMind doesn't seem to care to pay the costs of further development, so we see LLMs needing to make themselves useful to people in order to survive. It's a "pay your costs or stop executing" situation.
Fuck, man, I worked on the original Mac OS back in '83, when all the work was in assembly. Know what happened? Apple happened. That company is fucked up something supreme. The entire premise behind that original graphical UI was never user experience; it was 'the users are idiots, we have to control them'.
I was a teen game developer with my own games in Sears and Kmart stores nationwide in the US, for the VIC-20 and the C-64, and was invited as a representative of the independent games industry. As my involvement was ending, Apple told me they had changed their mind and were not going to support independent games for the Mac at all. But they offered to waive that restriction if I paid them $30K and gave them full editorial control over what I published. Nope.
It's wild how fast Apple pivoted from Woz just wanting to make a PC anyone could write and play their own video games on to "Nah we want full control of every last bit, fuck your indie games".
I think Apple marketing understands human motivation and the rarely acknowledged super-strength of prestige marketing. Apple's marketing very much insists that every one of their products be perceived as a high-prestige item to own, or they won't release it. When the Mac was brand new, they cultivated and guarded that prestige like a hawk.
I'd wonder if, especially at that point in history, trying to be seen as a gaming platform would have been a liability.
An early Mac was not a great gaming computer even by 1980s standards, and the last thing you want is Commodore or Atari running an ad saying "Apple's $2500 black-and-white prestige piece doesn't play games as well as a $299 C64/800XL". Not to mention the stink of the US video game market crash hovering around anything game-related.
If they pivot directly towards professional workstation/publication/art-department positioning, nobody gets to make that point. (Now I'm thinking of the time "Boot", or maybe it was already "Maximum PC" by then, reviewed an SGI O2 and said it was impressive but had a limited game selection.)
QuickDraw was revolutionary: it had all the optimizations, and all the code was in assembly. Things like classic arcade games were very much possible. I had a lunar lander game with a side-scrolling landscape, a Dig Dug clone, a variation on Donkey Kong, and a variant of Robotron. Apple thought that would create the wrong impression; they wanted the design and typography crowd. Can't say they were wrong, to be honest.
Sounds to me like the classic, universal communication problems: 1) people don't listen, 2) people can't explain in general terms, 3) while 2 is happening, so is 1, and as that cycle repeats over and over, people get frustrated and give up.
This reminds me of The Tick series. A villain named Chairface Chippendale, a sophisticated criminal mastermind with a distinctive chair for a head, decided to leave his mark on history, literally, by carving his entire name into the surface of the moon. Using incredibly powerful Geissman Lenses that could focus candlelight into an intense heat ray, he managed to carve out "CHA" before being stopped by The Tick and his allies. Musk is a comic book personality.
Astute points. I've worked on an extremely performant facial recognition system (tens of millions of face compares per second per core) that lives in L1 and doesn't use the GPU for the FR inference at all, only for displaying the video and the tracked people within it. I rarely even bother telling ML/DL/AI people it doesn't use the GPU, because I'm just tired of the argument that "we're doing it wrong".
How are you doing tens of millions of faces per second per core? First of all, assuming a 5 GHz processor, that gives you 500 cycles per image at ten million a second, which is not nearly enough to do anything image-related. Second, L1 cache is at most a few hundred kilobytes, so the faces aren't in L1 but must be retrieved from elsewhere...??
You can't look at it like _that_. Biometrics has its own "things". I don't know what OP is actually doing, but it's probably not classical image processing. Most probably facial features are going through some "form of LGBPHS binarized and encoded which is then fed into an adaptive bloom filter based transform"[0].
The paper quotes 76,800 bits per template (less when compressed), and with 64-bit words that's what, 1,200 64-bit bitwise ops. At 4.5 GHz that's 4.5 billion ops per second / 1,200 ops per comparison, which is ~3.75 million comparisons per second. Give or take some overhead, it's definitely possible.
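To make the arithmetic concrete, here's a minimal sketch of that kind of comparison, assuming plain binary templates compared by Hamming distance; the sizes and names below are mine, not necessarily what the OP's system actually does:

    // Hypothetical sketch: binary biometric templates compared by Hamming
    // distance. 76,800 bits per template = 1,200 uint64_t words; the inner
    // loop is just loads, XORs and popcounts over contiguous memory, which
    // also keeps the hardware prefetcher happy during a linear gallery scan.
    #include <array>
    #include <bit>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    constexpr std::size_t kWordsPerTemplate = 76'800 / 64;  // 1,200 words

    using Template = std::array<std::uint64_t, kWordsPerTemplate>;

    // Number of differing bits between two templates.
    std::uint32_t hamming(const Template& a, const Template& b) {
        std::uint32_t dist = 0;
        for (std::size_t i = 0; i < kWordsPerTemplate; ++i)
            dist += std::popcount(a[i] ^ b[i]);
        return dist;
    }

    // Linear scan of a gallery: index of the closest stored template.
    std::size_t best_match(const Template& probe,
                           const std::vector<Template>& gallery) {
        std::size_t best = 0;
        std::uint32_t best_dist = UINT32_MAX;
        for (std::size_t i = 0; i < gallery.size(); ++i) {
            const std::uint32_t d = hamming(probe, gallery[i]);
            if (d < best_dist) { best_dist = d; best = i; }
        }
        return best;
    }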
Correct, it's probably a vector distance or something like that after the Bloom filter. Take the facial points as a vec<T>; you only have a little over a dozen of them, so it's going to fit nicely in L1.
Back in the old days of "Eigenfaces", you could project faces into 12- or 13-dimensional space using SVD and do k-nearest-neighbor. This fit into cache even back in the 90s, at least if your faces were pre-cropped to (say) 100x100 pixels.
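Roughly like this, with made-up dimensions and names; once the basis is precomputed, each face costs one small matrix-vector product plus a scan over tiny stored vectors:

    // Hypothetical Eigenfaces-style sketch: project a pre-cropped face onto
    // a small precomputed basis, then find the nearest stored embedding.
    #include <array>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    constexpr std::size_t kPixels = 100 * 100;  // pre-cropped face
    constexpr std::size_t kDims   = 12;         // eigenface coefficients

    using Embedding = std::array<float, kDims>;

    // Project onto the (precomputed) eigenface basis: kDims dot products.
    Embedding project(const std::vector<float>& face,     // kPixels values
                      const std::vector<float>& basis) {  // kDims * kPixels
        Embedding e{};
        for (std::size_t d = 0; d < kDims; ++d)
            for (std::size_t p = 0; p < kPixels; ++p)
                e[d] += basis[d * kPixels + p] * face[p];
        return e;
    }

    // 1-nearest-neighbour over stored embeddings (squared Euclidean distance).
    std::size_t nearest(const Embedding& q, const std::vector<Embedding>& db) {
        std::size_t best = 0;
        float best_d = INFINITY;
        for (std::size_t i = 0; i < db.size(); ++i) {
            float d = 0.0f;
            for (std::size_t k = 0; k < kDims; ++k) {
                const float diff = q[k] - db[i][k];
                d += diff * diff;
            }
            if (d < best_d) { best_d = d; best = i; }
        }
        return best;
    }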
> assuming a 5 GHz processor, that gives you 500 cycles per image at ten million a second
Modern CPUs don't quite work this way. Many instructions can be retired per clock cycle.
> Second, L1 cache is at most a few hundred kilobytes, so the faces aren't in L1 but must be retrieved from elsewhere...??
Yea, from L2 cache. It's caches all the way down. That's how we make it go really fast. The prefetcher can make this look like magic if the access patterns are predictable (linear).
The keyword is CAN; there can also be huge penalties (random main-memory accesses typically cost over a hundred cycles). The parent was probably thinking of a regular image transform/comparison, and 20 pixels per cycle, even for low-resolution 100x100 images, is way above what we do today.
As others have mentioned, they're probably doing some kind of embedding-like search primarily, and then 500 cycles per face makes more sense, but it's not a full comparison.
I don't know the application, but I'd guess you don't need to compare an entire full-resolution camera image, just some smaller representation like an embedding or pieces of the image.
You can handle hundreds of millions of transactions per second if you are thoughtful enough in your engineering. ValueDisruptor in .NET can handle nearly half a billion items per second per core. The Java version (no value types) is what is typically used to run the actual exchanges, so we could go even faster if we needed to without moving to some exotic compute or GPU technology.
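Not the Disruptor or ValueDisruptor API itself, but here's a minimal C++ sketch of the underlying idea, with names of my own choosing: a preallocated ring of slots plus monotonically increasing sequence counters, so the hot path is an index mask and a couple of atomic loads and stores.

    // Minimal single-producer/single-consumer ring buffer sketch (the core
    // Disruptor idea, heavily simplified). Capacity must be a power of two
    // so that "sequence & mask" maps onto a slot.
    #include <atomic>
    #include <cstddef>
    #include <optional>
    #include <vector>

    template <typename T>
    class SpscRing {
    public:
        explicit SpscRing(std::size_t capacity_pow2)
            : buf_(capacity_pow2), mask_(capacity_pow2 - 1) {}

        bool try_push(const T& item) {  // producer thread only
            const std::size_t head = head_.load(std::memory_order_relaxed);
            if (head - tail_.load(std::memory_order_acquire) == buf_.size())
                return false;           // ring is full
            buf_[head & mask_] = item;
            head_.store(head + 1, std::memory_order_release);
            return true;
        }

        std::optional<T> try_pop() {    // consumer thread only
            const std::size_t tail = tail_.load(std::memory_order_relaxed);
            if (tail == head_.load(std::memory_order_acquire))
                return std::nullopt;    // ring is empty
            T item = buf_[tail & mask_];
            tail_.store(tail + 1, std::memory_order_release);
            return item;
        }

    private:
        std::vector<T> buf_;
        std::size_t mask_;
        std::atomic<std::size_t> head_{0};  // next slot to write
        std::atomic<std::size_t> tail_{0};  // next slot to read
    };

The real Disruptor adds things like cache-line padding around the counters, batching, and multi-consumer coordination on top of this, which is where much of the remaining headroom comes from.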
I have to say, the structure of that article is a perfect example of elitist, exclusionary literature. Hang on: the introduction is in wonderful, accessible language that most people can read. Then comes the very first sentence of the actual meat of the article:
"On a sun-drenched weekday in August, Bletchley Park is the soul of pleasantness: a stately home flanking a lake codebreakers skated on in winter between battling a constantly evolving phalanx of electromechanical encryption machines used to scramble messages between leaders of the Third Reich."
That's a masterwork of elitist language. It drives away every reader who isn't a self-declared intellectual or already steeped in this particular topic (such as software engineers).
What a failure of publication. I guess the initial click is all they really care about, because language like that loses 99% of those interested clicks. Fools? Shortsighted? WTF
If you made it to the end, you may have noticed that the bulk of the story is lifted from a book, which accounts for a change in tone. If someone buys or borrows a book and sits down to read the whole thing, they're expecting a different style of writing from a newspaper article. Also, this is very obviously a weekend magazine article aiming to satisfy a combination of intellectual interest, reader vanity, and curiosity.
Most people who click a random article from an unknown author don't sign up to read an essay's worth of purple prose to get at the (actually tiny amount of) useful information teased in the headline.
The OP is right: (virtually) no one is reading all that shit.
I maintain my point: it's prose, and not even especially complicated prose at that. The fact that many people have a hard time with it is understandable, but it's not a good state of affairs.
I will agree that the first sentence is ill-considered (if nothing else it could have been broken up with one extra phrase), but it's really the worst offender. In my opinion the prose in the rest of the article is perfectly reasonable. For anyone interested in the subject matter but discouraged by this awkward beginning, I would urge you to press on.
For different reasons, I agree the article is trash. There is far too much about Turing and Enigma, perhaps because they are better known, but Enigma is to Colossus as the first airplanes were to orbital rockets, except just years apart rather than decades, and without thousands of people learning from each other.
I know, it's not a great analogy, but what the Tunny team did was so far beyond the Enigma effort in terms of a) no prior knowledge of the actual system, b) the development of new cryptanalytic techniques, c) the educated guesses as to how the OKW system probably worked, and d) the sheer brilliance and vision of building a bleeding-edge electronic system to do the heavy lifting, that its story, and Flowers's, deserve to be better known.
And the article never lets up on that style, all the way to the end. It was not fun to read with English being my second language; but I now think that's expected of The Guardian: a little information spread out extensively in flowery language.