Does anyone know what assets Greenpeace USA has? I imagine Greenpeace international will set up Greenpeace USA 2.0, all the volunteers/employees will move over, and the original will just go bankrupt.
> Imagine being a vim or emacs user and have those replaced by something you have to type entire sentences for functionality.
As a former vim user who uses cursor, I've found that as the models get better I'm typing less and less. I appreciate the vim key bindings, but eventually I can imagine not missing them.
Yes, ARM did lose the case, but I attribute that to an ignorant judge as the license wasn’t transferable to Qualcomm from Nuvia. Regardless, ARM will have their day when it comes time to license the next ARM architecture version.
Qualcomm also had their own architectural license, but Arm claimed that neither of the two licenses was applicable.
I read the documents presented by both parties at the time, but essential details of the contracts were missing from the public filings, so it was impossible to know whether Qualcomm or Arm was right.
Nevertheless, the argument presented by Qualcomm seemed far more plausible, unless it was contradicted by some of the redacted contract details.
Arm has not shown in public any information that could have proven they were right, so they, and their supporters, cannot credibly claim that the judge made a wrong decision.
Supposing the decision was wrong, Arm has preferred to keep its arrangements with customers secret rather than prove it was right, presumably because it could lose more money if other customers learned the exact details of the Qualcomm contract and demanded similar terms.
While I believe that it is right for Qualcomm to have won the trial, I strongly dislike the fact that Qualcomm now designs their own cores instead of licensing them from Arm.
The reason is that the Arm cores have excellent documentation, on par with that of the Intel and AMD CPUs, while the Qualcomm cores, like the Apple cores, have essentially no public documentation. Moreover, Qualcomm is so silly that it has always obfuscated even which Arm cores it used in its older products. Whenever I evaluated a smartphone with a Qualcomm SoC, it was impossible to find any useful information on the Qualcomm site; I had to go to various third parties to learn what was actually inside the Qualcomm SoC in order to compare it with alternatives.
In theory you could generate a bunch of code that seems mostly correct and then gradually tweak it until it's closer and closer to compiling/working, but that seems ill-suited to how current AI agents work (or even how people work). AI agents are prone to making very local fixes without an understanding of the wider context, and those local fixes break a lot of assumptions in other pieces of code.
It can be very hard to determine if an isolated patch that goes from one broken state to a different broken state is on net an improvement. Even if you were to count compile errors and attempt to minimize them, some compile errors can demonstrate fatal flaws in the design while others are minor syntax issues. It's much easier to say that broken tests are very bad and should be avoided completely, as then it's easier to ensure that no patch makes things worse than it was before.
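The "count compile errors and minimize them" idea can be sketched as a toy repair metric. This is just an illustration, not anyone's actual agent loop: `pick_best` is a hypothetical helper, and Python's built-in `compile()` stands in for a real checker (it only catches the first syntax error, which itself illustrates why the metric is a poor guide):

```python
# Toy sketch of "minimize compile errors" as a patch-selection metric.
# Hypothetical helpers for illustration; compile() only detects syntax
# errors, so a syntactically clean candidate can still be fatally flawed.

def count_compile_errors(source: str) -> int:
    """Return 0 if the source parses, 1 otherwise (Python stops at the first error)."""
    try:
        compile(source, "<candidate>", "exec")
        return 0
    except SyntaxError:
        return 1

def pick_best(candidates: list[str]) -> str:
    # Greedy choice: fewest compile errors. Note this scores a trivial typo
    # and a fatal design flaw identically, the weakness described above.
    return min(candidates, key=count_compile_errors)

broken = "def f(x):\n    return x +\n"
fixed = "def f(x):\n    return x + 1\n"
print(count_compile_errors(broken))        # 1
print(count_compile_errors(fixed))         # 0
print(pick_best([broken, fixed]) is fixed) # True
```

A pass/fail test suite gives a cleaner signal than this error count: any patch that breaks a previously passing test is rejected outright, so no step can make things worse.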
That's a fair point. Normally if you injected the "dog" token, that would cause a set of values to be populated into the kv cache, and those would later be picked up by the attention layers. The question is what's fundamentally different if you inject something into the activations instead?
I guess to some extent, the model is designed to take input as tokens, so there are built-in pathways (from the training data) for interrogating that and creating output based on that, while there's no trained-in mechanism for converting activation changes to output reflecting those activation changes. But that's not a very satisfying answer.
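To make the kv-cache point concrete, here is a toy single-head attention with a cache (the dimensions, random weights, and the "dog" vector are all made up for illustration, not any real model's internals). Injecting a token adds a key/value pair that later queries can attend to through trained projections; directly editing an activation bypasses that pathway entirely:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                      # toy head dimension
W_k = rng.normal(size=(d, d))  # stand-ins for learned key/value projections
W_v = rng.normal(size=(d, d))

k_cache, v_cache = [], []  # the kv cache: one entry per past token

def ingest_token(h):
    """Normal path: a token's hidden state is projected and stored in the cache."""
    k_cache.append(W_k @ h)
    v_cache.append(W_v @ h)

def attend(q):
    """Later positions read the cache via softmax attention over stored keys."""
    K = np.stack(k_cache)
    V = np.stack(v_cache)
    scores = K @ q / np.sqrt(d)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ V

dog = rng.normal(size=d)   # stand-in for the "dog" token's hidden state
ingest_token(dog)          # injected as a token: it now has K/V entries
out = attend(rng.normal(size=d))
# 'out' is influenced by the dog entry through the trained-style projections.
# Adding 'dog' directly to a residual activation instead would skip W_k/W_v,
# leave nothing in the cache, and attention at later positions would never
# see it through this mechanism.
```

This only shows the mechanical difference between the two injection points; whether the model can coherently report on an activation edit is exactly the open question in the comment above.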
But LLMs have been measured to have some theory of mind abilities at least as strong as humans': https://www.nature.com/articles/s41562-024-01882-z . At this point you need to accept either that LLMs are already conscious, or that it's easy enough to fake consciousness that it's practically impossible to test for, i.e. philosophical zombies are possible. It doesn't seem to me that LLMs are conscious, so consciousness isn't really observable to others.
That's still using language. My dog has theory of mind in the real world where things actually exist.
Also, those results don't look as strong to me as you suggest. I do not accept that an LLM is conscious nor could I ever unless I can have a theory of mind for it... Which is impossible given that it's a stochastic parrot without awareness of the things my five senses and my soul feel in reality.
I'm mildly amused by this. It's an open air environment, did someone go stand over one of the crashed drones as it burst into flames and just, breathed deep? Glad they got treatment, plastic smoke is gross.
Also wow, the drones are massive, and apparently flying so low they will hit cranes putting things on single story buildings? That's so stupid.
Dear tech world: Please do not fly 80 pound projectiles just a few feet above my head at speed. Jeeze.
The video also includes a clip of a package delivery, where the drone dropped the package to the ground, which worked. But then the propeller wash blew the package right into a bush, lmao.
I expressed myself poorly. I do purchase tickets online; then I just remember the day. No calendar. I don't take advantage of the digital assets (email confirmation, etc.)