I have something that both Gemini (via GCA) and Copilot (Claude) analyzed, and they came up with the same diagnosis. Each of them proposed the exact same wrong solution, and when I pointed that out, went further wrong.
I haven't tried Chat GPT on it yet, hoping to do so soon.
I used Cursor and ChatGPT 5 last night for the first time. Before I could even ask ChatGPT 5 about my issue, it had scanned the .cpp file in question (because it was open in the editor) and discovered some possible issues, one of which was the actual issue in the code. I confirmed that and gave it more description of the error behavior. It identified the problem in the code and suggested two different CORRECT solutions (one simple, one more complex but "perfect"). I opted for the simple one. It implemented it. One tiny problem remained; I pointed it out, and it fixed it.
This was much better than Gemini or Copilot on the exact same issue and the exact same commit in my repo. Both of them suggested the same wrong solution and got further and further off track as they went.
I've always had my reservations about the whole AGI thing. And while I'm mightily impressed by ChatGPT and friends, it's even clearer to me that AGI is not, and never will be, an emergent property of LLMs, no matter how large the neural net. And that was likely true for Cyc as well.
I had a particular Cyc success story relayed to me years ago by a customer (not a Cycorp employee), the details of which I cannot divulge, but it was a pretty whopping success and the customer was quite happy with what Cyc had been able to do for them.
So while no AGI, it definitely seems like there was value to be had.
What do you think of the recent Anthropic research on how LLMs reason? It is clear from their analysis that LLMs' reasoning is shallow and has very serious weaknesses. I wonder whether those weaknesses can be "addressed" by building LLMs that can do deeper analysis and use RL to self-improve. LLMs improving LLMs would be a very impressive step towards AGI.
I think we are careless in how we use terms. We often say "intelligence" where we mean "sentience". We have studied intelligence for a long time, and we have IQ tests that can measure it. The various LLMs (like ChatGPT and Gemini) score pretty well on IQ tests. So given that, I think we can conclude that they are intelligent, insofar as we can measure it.
But while we have measurements for "intelligence", we don't for "sentience", "agency", "consciousness", or these other things. And I'd argue that there is plenty of intelligent life on earth (take crows as an example) that is sentient to a degree that the LLMs are not. My guess is this is because of their "agency": their drive for survival. The LLMs we have now are clearly smarter than crows and cats, but not sentient in the way those animals are. So I think it's safe to say that "sentience" (whatever that is) is not an emergent property of neural net or training data size. If it were, it'd be evident already.
So Gemini and ChatGPT seem to be "intelligence", but in tool form. Very unexpected. Something I would not have believed possible 5 or 10 years ago, but there it is.
As to whether we could create a "sentient" AI, an AGI, I don't see any reason we shouldn't be able to. But it's clear to me that something else is needed besides intelligence. Maybe it's agency, maybe it's something else (the experience of time's passage?). We probably need ways of measuring and evaluating these other things before we can progress further.
If this were aimed at Polymarket and their betting activities, then their lawyers would be getting subpoenas and the like, and a raid on their president would most likely be in concert with raids on their offices. AFAICT, he alone was targeted.
That the FBI raided the home of an individual most likely means a criminal investigation of that person, for a federal crime or a crime that crosses state boundaries.
Bloomberg explicitly says it is part of an investigation of Polymarket allowing US users[1].
I guess it follows the pattern of the Binance investigation, where investigators were able to show that the Binance CEO had instructed employees to make the technical measures, ostensibly meant to keep non-US users off the exchange, easy to bypass.
It is 2024... how are people still credulously using DiscloseTV as a source?
It's not about political inclination; rather, there's no reason to keep trusting a source that has lied repeatedly, sensationalized repeatedly, and seems to have a very loose relationship with "journalism" as opposed to entertainment (bait).
> Pournelle's Iron Law of Bureaucracy states that in any bureaucratic organization there will be two kinds of people:
> First, there will be those who are devoted to the goals of the organization. Examples are dedicated classroom teachers in an educational bureaucracy, many of the engineers and launch technicians and scientists at NASA, even some agricultural scientists and advisors in the former Soviet Union collective farming administration.
> Secondly, there will be those dedicated to the organization itself. Examples are many of the administrators in the education system, many professors of education, many teachers union officials, much of the NASA headquarters staff, etc.
> The Iron Law states that in every case the second group will gain and keep control of the organization. It will write the rules, and control promotions within the organization.
I have no idea why your comment was killed, I resurrected it. Apparently people don't want to confront reality.
In my experience, this is accurate, but in the case of the FBI and the DOJ, they are much more interested in making a name for themselves on an individual level.
A supervisory special agent can push an investigation very far and has a lot to gain from it in terms of credibility among peers and career gains. Investigations into low profile targets are heavily deprioritized in favor of high profile targets.
The same pattern extends through to the US Attorney’s Office. Prosecutors are highly motivated to target high profile individuals and organizations.
I am not aware of another language/platform that offers this kind of flexibility, except maybe Smalltalk or Erlang, but they don't have the homoiconicity.
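For anyone unfamiliar with the term: homoiconicity means a program's code is written in the language's own data structures, so code can be inspected and rewritten with ordinary list operations. Python is not homoiconic, but as a rough contrast, here is a hedged sketch of the same code-as-data workflow using its `ast` module, where an explicit parse/rewrite/compile round trip is required:

```python
import ast

# In Lisp, (+ 1 (* 2 3)) is literally a list you can rewrite with list
# operations. In Python, we must round-trip through an explicit AST.
tree = ast.parse("1 + 2 * 3", mode="eval")

# Walk the tree and swap every multiplication for an addition --
# the kind of code rewriting a Lisp macro does on plain lists.
for node in ast.walk(tree):
    if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Mult):
        node.op = ast.Add()

# Evaluate the rewritten expression: 1 + (2 + 3)
print(eval(compile(tree, "<rewritten>", "eval")))  # 6
```

The point of the contrast: in a Lisp, the "tree" above would just be the program's own source, with no separate AST API needed.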
This article just kind of did that for me. I loved that they likened Lisp to Lego blocks a few paragraphs after I'd had the exact same thought (when they mention that "everything is an expression").
Even after having read Hackers and Painters and some of Clojure for the Brave and True, this is the article that makes the power of Lisp click the most for me.
I think the "winner's game" vs. "loser's game" insight is interesting, and there is some value in applying it to software development. But I don't see it as a revolutionary insight, or anything that's going to radically change anyone's understanding. The article has other problems, but overall it's just lukewarm.
To my mind, the main issues in software development are complexity and imperfect knowledge. We've developed a lot of practices, like unit tests and code reviews, that help us defend ourselves, but ultimately, for any non-trivial software, it seems like a losing battle; or if not losing, then the progress is slow, difficult, and tenuous (like trench warfare).
A very important aspect that Parry and Murko uncovered is that _music_ was the thing that helped them memorize these incredibly long works. The guslari played a one-stringed "gusle" (IIRC) and sang the work as they recited it.
Some of the guslari bards could not perform the recitation without their instrument.
And, FWIW, I recall that some assert the guslari bards were all illiterate. It has been claimed (not by me) that literacy interferes with the guslari practice of memorizing and performing enormous epics.
Check out Ted Gioia's "Music: A Subversive History" for an overview, but there is a lot of other scholarship on this.
This is very exciting. The growth from Eve 0 to Eve 0.2 is remarkable - it's clear you have not been afraid of starting over as you've made realizations.
One day, one of my relatives shared a completely false meme about Obama. A day later, a different relative shared a completely false one about Palin. What kind of discourse can we have when all these lies just get propagated 24/7? I'm tired of having to look up everything on snopes.com or PolitiFact.
I read the article and thought a lot about it, but I'm not buying it. The acceptance problems for Lisp haven't been because it is "too powerful" or that lone wolf hackers won't work together.
The author makes an example of the many object-oriented (OO) systems, but he performs a bit of bait-and-switch there. Those many OO systems were for _Scheme_, not Common Lisp, and Scheme is intentionally a tiny Lisp; for a long time, it was focused on being the smallest possible Lisp. Common Lisp, on the other hand, while it briefly went through an OO experimentation period, really has only one OO system: CLOS.
Also, the whole Emacs line is off target. What does that have to do with the expressive power of the language? And why ignore the two extremely powerful commercial Common Lisp IDEs out there? Is the point that Common Lisp isn't successful because there isn't a better free IDE?
And the "lone wolf/80%" argument isn't doing it for me either. The Common Lisp specification was the work of many bright minds, and it is brilliant. It stands in complete opposition to the situation the author attempts to describe.
I'm not saying that Lisp in general (Scheme, Common Lisp, and Clojure) has been successful, or that Common Lisp in particular has been; if the standard is mindshare and acceptance, they have not. There are histories and causes aplenty, but being too powerful is not one of them.