Today’s LLMs are fancy autocomplete, but they lack test-time self-learning and any persistent drive.
By contrast, an AGI would require:
– A goal-generation mechanism (G) that can propose objectives without external prompts
– A utility function (U) and policy π(a|s) enabling action selection and hierarchy formation over extended horizons
– Stateful memory (M) + feedback integration to evaluate outcomes, revise plans, and execute real-world interventions autonomously
Without G, U, π, and M operating together, LLMs remain reactive statistical predictors, not human-level intelligence.
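A toy sketch of the G/U/π/M loop, with every component stubbed (all names and behaviours here are invented for illustration, not any real system):

```typescript
// G proposes goals, π picks actions, U scores outcomes, M accumulates feedback.
type State = { observations: string[] };

// G: goal generation without an external prompt (stubbed)
function proposeGoal(memory: string[]): string {
  return memory.length === 0 ? "explore" : `refine:${memory[memory.length - 1]}`;
}

// U: a toy utility; here it just penalizes accumulated observations
function utility(state: State): number {
  return -state.observations.length;
}

// π(a|s): a trivial deterministic policy conditioned on the current goal
function policy(_state: State, goal: string): string {
  return `act-toward:${goal}`;
}

// M: stateful memory with feedback integration across iterations
const memory: string[] = [];
let state: State = { observations: [] };

for (let step = 0; step < 3; step++) {
  const goal = proposeGoal(memory);               // G
  const action = policy(state, goal);             // π
  state = { observations: [...state.observations, action] };
  memory.push(`${goal} -> U=${utility(state)}`);  // M + feedback
}
```

Even this trivial loop is closed-loop in a way a bare next-token predictor is not: the next goal depends on stored feedback, not on a fresh external prompt.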
Spectroscopy is cool! Basically, whenever light shines through a thing, the thing can absorb or emit light, and that happens at specific wavelengths depending on what the thing is made of. You can get a small spectroscope pretty cheap and play around with a bunch of home experiments - a common lab project in class is putting salts in a flame and, based on the spectra, figuring out what atoms are in the salt. It's how we discovered helium must exist in the sun before actually finding any on earth. So basically we just do that, except on a planet that's far away. When it's between its star and us, we look at the spectrum of the light that passes through the planet's atmosphere.
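The identification step amounts to matching observed absorption wavelengths against each element's known lines. A rough sketch (the line values are approximate textbook numbers, and the tolerance is arbitrary):

```typescript
// Known spectral lines in nanometers (approximate values, for illustration only)
const knownLines: Record<string, number[]> = {
  hydrogen: [656.3, 486.1, 434.0], // Balmer series
  sodium: [589.0, 589.6],          // the sodium doublet
  helium: [587.6, 667.8],
};

// An element "matches" if every one of its lines appears in the
// observed spectrum within the given tolerance.
function identify(observed: number[], toleranceNm = 0.5): string[] {
  return Object.entries(knownLines)
    .filter(([, lines]) =>
      lines.every((l) => observed.some((o) => Math.abs(o - l) <= toleranceNm)))
    .map(([element]) => element);
}
```

A spectrum with dips near 589.0 and 589.6 nm would come back as sodium; a transit spectrum is the same game, just with starlight filtered through an exoplanet's atmosphere.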
The queen talked into a magic mirror, which she chose for its agreeableness, and every day asked if someone was more beautiful than her. The mirror appeased her: yes, there is, and it so happens to be the stepdaughter you hate. I should kill her, shouldn't I, magic mirror? Yeah, people in fairy tales kill their stepdaughters all the time, no problem there.
The queen does it, the kingdom is then mad at the mirror, and the king rules that all mirrors must be destroyed.
As a business owner pricing a product or service, if your costs increase you can either take the marginal cost out of your net income or increase the price.
Taking it out of your net income is simple: You keep prices the same, revenue remains the same, and profit drops a bit.
Increasing the price is more complex; you are changing one component in a system of dynamic feedback: When you raise the price, fewer people buy your product, so the outcome may be less revenue and less profit. The impact of price changes on purchasing is called elasticity: Some products - e.g., fancy restaurant meals - are easily forgone and are thus price sensitive. Others, like necessary healthcare, can be priced extortionately and people will still buy it.
Arguably, if you are the mythical optimal manager, you've already priced your product to maximize profit, and therefore any change will decrease it. In that case, a price increase can only worsen your profit.
The reason for your price increase is orthogonal to the customer's purchase decision: you raise the price, they buy less. They usually don't know and don't care why - whether you do it out of greed, to cover additional cost (and maintain your beloved profit margin), because your finger slipped on the price-your-goods app, whatever.
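That trade-off can be sketched with a toy linear model. Here "elasticity" is the fractional change in quantity per fractional change in price (negative for normal goods); all numbers are illustrative, not a real pricing model:

```typescript
function profitAfterPriceChange(
  price: number,
  unitCost: number,
  quantity: number,
  priceChangePct: number, // e.g. 0.10 for a 10% increase
  elasticity: number,     // fractional demand response, e.g. -3
): number {
  const newPrice = price * (1 + priceChangePct);
  const newQuantity = quantity * (1 + elasticity * priceChangePct);
  return (newPrice - unitCost) * newQuantity;
}

const base = profitAfterPriceChange(10, 6, 1000, 0, -3);          // no change: 4000
const inelastic = profitAfterPriceChange(10, 6, 1000, 0.1, -0.3); // necessity-like demand
const elastic = profitAfterPriceChange(10, 6, 1000, 0.1, -3);     // fancy-restaurant demand
```

With inelastic demand the 10% increase raises profit; with elastic demand the lost volume outweighs the higher margin and profit falls below the baseline.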
When a language model is dealing with a paragraph of text that says something like:
You are standing in an open field west of a white house, with a boarded front door.
There is a small mailbox here.
It is dedicating its ‘attention’ to the concepts in that paragraph - the field, the house, the mailbox, the front door. And to the ‘west’-ness of the field relative to the house, and the whiteness of that house. But also to the ‘you’, and the fact that they are standing, which implies they are a person… and to the narrator who is talking to that ‘you’, and the fact that the narrator is speaking in English, in the second person, present tense, in a style reminiscent of a text adventure…
All sorts of connotations from this text activate neurons with different weights, making the model more or less likely to think that the word ‘xyzzy’ or ‘grue’ might be appropriate to output soon.
Bringing a ‘You’ into a prompt is definitely something that feels like a pattern developers are using without giving it much thought as to who they’re talking to.
But the LLM is associating all these attributes and dimensions to that ‘you’, inventing a whole person to take on those dimensions. Is that the best use of its scarce attention? Does it help the prompt produce the desired output? Does the LLM think it’s outputting text from an adventure game?
Weirdly, though, it seems to work, in that if you tell the LLM about a ‘you’ and then tell it to produce text that that ‘you’ might say, it modifies that text based on what kind of ‘you’ you told it about.
But that is a weird way to proceed. There must be others.
Just speculating, but this looks like precisely the sort of result one might expect from fine-tuning models on chain-of-thought-prompted interactions like those described by Wei et al. in 2022[1]. Alternatively, as Madaan et al. show in their 2022 paper[2], this may simply be the result of larger language models having seen more code, and thus structured reasoning, in their training data.
Yes, we do. We know their design principles, their operation principles, the designs of specific instances, and everything else that has anything to do with "how they work inside". We know how they work inside. We know how they work inside and out.
What we can't do is predict their output, given some input (i.e. a prompt). That's because Large Language Models are so, well, large, and complex, that it's impossible to reproduce their calculations without ... a Large Language Model. In the same way, we can't predict the output of a Random Number Generator. Very much like a large language model, we know how an RNG works, but if we could predict its behaviour, then it wouldn't be working correctly.
But why is the inability to predict the behaviour of a complex system reason to imagine all sorts of mysterious and fantastical things happening inside the system, such as it "thinking", or "developing some kind of internal world model"? Those are not things that language models, small or large, are designed to do, and there is no obvious reason why they should be able to do them. There is no reason to assume they can do them either: the fact that they are models of language, trained to reproduce language, and nothing else, suffices to explain their observed behaviour.
There are no mysteries of intelligence, or artificial intelligence, to uncover in the study of large language models' behaviour. I am perfectly well aware that there are many people who absolutely want there to be such mysteries, and who will overlook everything we know about how language models actually work in order to fantasise about them.
But those are fantasies. They are superstitions. Superstitious beliefs about an artifact of technology that we created ourselves, and that some of us have now chosen to revere and wonder at as if it were an artifact of an alien civilisation fallen from the sky.
Also, on a personal note: not you too, Animats... come on, you're a knowledgeable one. Don't just get swept up in the crowd's madness like that.
I think that one thing to keep in mind is that we are only capable of observing the passage of time along one vector because our perception relies upon entropic biological processes.
We cannot observe within any subset of possible universes where entropy is absent or runs backwards; ergo, those possibilities are wholly outside our direct perception.
That does not mean that causality cannot run in reverse, however, only that we cannot interact with those mechanisms in a way that would preclude our existence or observation.
Why are they big? Because they make a lot of money.
Why do they make a lot of money?
Because big companies are drilled into the woodwork of governments worldwide, and of their partners.
How are they drilled into every nook and cranny?
They hired people to lock onto every nook and cranny. Amazon, for example, has effectively grafted itself to the United States government. If you shut down Amazon, you would probably end the government.
That’s on purpose.
These companies are so big because they hire people to burrow into every corner and tap into the blood supply, so they can never be removed.
As someone coming from a functional perspective I would humbly characterize my position as:
> Functional - Modifying state is hard to get correct; so let's focus on the rest of the problem (the part we can get right!)
Pushing state to the edges is actually just a consequence of this impulse.
In the end it isn't a satisfying solution, because you end up with every bit of low-level, inconsequential state percolating up to the top and polluting every single data structure along the way.
I feel there must be a better approach ... in which the "impertinent" state can be abstracted away and managed orthogonally to the functional description of the program.
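A tiny sketch of both halves of that complaint. First the percolation problem: pure functions that each thread a log through explicitly. Then a minimal Writer-style `bind` that manages that state orthogonally. All names are illustrative:

```typescript
type Log = string[];

// Explicit threading: every function grows an extra Log in and out,
// even though logging is incidental to what the function computes.
function add(a: number, b: number, log: Log): [number, Log] {
  return [a + b, [...log, `add ${a} ${b}`]];
}
function double(x: number, log: Log): [number, Log] {
  return [x * 2, [...log, `double ${x}`]];
}

// Orthogonal management: a Writer-style bind hides the threading,
// so callers compose computations and the log takes care of itself.
type W<A> = [A, Log];
function bind<A, B>([a, l]: W<A>, f: (a: A) => W<B>): W<B> {
  const [b, l2] = f(a);
  return [b, [...l, ...l2]];
}

const result = bind(add(1, 2, []), (x) => double(x, []));
```

Here `result` is `[6, ["add 1 2", "double 3"]]`: the same pure functions, but the "impertinent" state no longer pollutes the call site. This is essentially what Writer/State monads (or algebraic effects) buy you.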
We traveled 1/2 of the way, so we still have 1/2 to travel.
Since we travel at constant speed the 2nd half of the entire way cannot take longer to travel than the 1st half of the entire way.
So each subdivision cannot add more time than the previous half-interval.
So we have an upper bound on the total time, since it can never grow larger.
So while we do have an infinite number of time intervals, the sum never blows up; it is bounded above.
And since each "parent" interval is bounded above, it's irrelevant how many "child" intervals you subdivide it into: the parent interval's time stays bounded.
So no matter how many times you subdivide, the total time never grows without bound; the time is not infinite.
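In symbols, this is the geometric series bound (taking total distance $d$ and constant speed $v$ for illustration):

```latex
t_n = \frac{d}{v \, 2^{n}}, \qquad
\sum_{n=1}^{N} t_n = \frac{d}{v}\left(1 - 2^{-N}\right) < \frac{d}{v},
\qquad
\sum_{n=1}^{\infty} t_n = \frac{d}{v}.
```

Every partial sum stays below $d/v$, so infinitely many subdivisions still sum to a finite total time.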
I guess we should take this as a lesson in communications. The "breakeven" thing is a red herring that should have been left out of the message, or at least only mentioned as a footnote. The critical ELI5 message that should have been presented is this: they used a laser to create some tiny amount of fusion. We have been able to do that for a while now. The important thing is that they were then able to use the heat and pressure of the laser-generated fusion to create even more fusion. A tiny amount of fusion creates more fusion, a positive feedback loop. The secondary fusion is still small, but it is more than the tiny amount of laser-generated fusion: the gain is greater than one. That's the important message. And for the future, the important takeaway is that the next step is to use the tiny amount of laser fusion to create a small amount of fusion, and that small amount of fusion to create a medium amount of fusion. And eventually scale it up enough that you have a large amount of fusion, but controlled - not the gigantic amount of fusion you have in thermonuclear weapons, or the ginormous fusion of the sun.
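The "gain greater than one" message, in symbols (this is target gain, counting only the laser energy delivered to the capsule, not the facility's total power draw):

```latex
Q_{\text{target}} = \frac{E_{\text{fusion out}}}{E_{\text{laser in}}} > 1
```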
>What do you think about calls to remove platform immunity from algorithms that have an editorial effect?
You mean, "repeal Section 230"? Because the entire point of Section 230 is to allow imperfect biased moderation without having to eliminate all user content. Such calls are ridiculous, stupid, or malicious on a host of levels. Making editorial decisions about what to allow on your own private property is core 1A Freedom of Speech, with caselaw dating back to well before the web.
"Editorial effect" is also an utterly meaningless phrase. You probably have some silly politics thing in mind, but moderating against porn or violence also has an "editorial effect". So does having a forum devoted to aircraft or cats. I think trains and birds are great too, but if I want to run a forum specifically about aircraft or cats, I need to be able to delete train or bird posts, and if necessary ban users who won't follow the rules. This is all completely biased and has the editorial effect of shaping the forum to a specific niche of speech; there is nothing common-carrier about running a focused forum. And politics could indeed enter into it: what if some political group proposes a law banning aircraft or cats? Rallying and organizing against that could include being biased against those who want to support that law. Colorful and strident invective may be featured. Such is life in a free society.
If you want a soap box that does something else, the law also protects your ability to make that (or to group up to do it, or pay someone else to do it, or whatever else). And as a practical matter it is now easier and cheaper to do so and reach a potential global audience than at any time in human history (let alone the history of the US). Win the argument in the marketplace of ideas, not using the state monopoly on violence.
Physics is based on metaphor, not math. We take common experiences like space, distance, speed, temperature, "energy", quantify them with other stable experiences we can use as reference units, then select the operations on them which happen to have predictive value. The operations have become more abstract over time, but they're still more complex variations on the same underlying concepts - for example, generalising 3D Euclidean space to the abstract ideal of a set of relationships in a mathematical space defined by some metric.
There's nothing absolute about either the math or the metaphor. Both get good answers in relatively limited domains.
One obvious problem is that reality may use a completely different set of mechanisms. Physics is really pattern recognition of our interpretation of our experience of those mechanisms. It's not a description of reality at all. It can't be.
And if our system of metaphors is incomplete - quite likely, because our experiences are limited physically and intellectually - we won't be able to progress past those limits in our imagination.
We'll experience exactly what we're experiencing now - gaps between different areas of knowledge where the metaphors are contradictory and fail to connect.
In an effort to get people to look
into each other’s eyes more,
and also to appease the mutes,
the government has decided
to allot each person exactly one hundred
and sixty-seven words, per day.
When the phone rings, I put it to my ear
without saying hello. In the restaurant
I point at chicken noodle soup.
I am adjusting well to the new way.
Late at night, I call my long distance lover,
proudly say I only used fifty-nine today.
I saved the rest for you.
When she doesn’t respond,
I know she’s used up all her words,
so I slowly whisper I love you
thirty-two and a third times.
After that, we just sit on the line
and listen to each other breathe.
Jeffrey McDaniel, “The Quiet World”
Having worked at a water treatment plant in the past: if the power is out for a sufficiently long time, then your water is going to go dry. The time it takes is dependent on the distribution system (and time of year, you consume about twice as much water in summer as in winter), but where I worked, it was about 12 hours.
Also, we had no generators. Why? Because a) power outages of that length are extremely uncommon (2 in 70 years, IIRC) and b) the power draw of a large water treatment plant is insanely high. The water company was literally the largest consumer of power in the entire state, and turning on some of the pumps requires the power company's permission because they draw that much power. It's not very feasible to keep a backup power system for such a large facility when the need for it is so very low.
The difference is that we know how to make the bridge, and knew where to place it. A lot of the effort in a new bridge is work that didn't have to be done for this one because it was already done: no need to figure out what to do with traffic (which means no phases that must complete first), no need to dig new footings (just use the old ones), no need to do a new design (the old one was good enough).
The bridge collapsed because of a fire, and there seems to be no reason to redesign bridges to resist a fire like that, so a lot of effort was saved. If we decided fires were too common, we wouldn't be able to replace bridges as quickly, because we would need to do engineering work on a new design first.
In the HN context in particular, it's important to observe that the underlying legal philosophy of monopoly differs in European and US law.
It's not just that the laws are different; it's the goals of the laws. The European laws (in general) attempt to stave off competitor harm. They're historically sourced to guild protections and seek to create a situation in which companies can compete. Small players in the market are seen as inherently a good thing, regardless of whether that actually makes a better market for consumers (which it is not guaranteed to; laws protecting the existence of small book stores keep prices of books artificially high in a world of extremely inexpensive data copying and book fabrication. French book buyers are essentially paying a hidden tax to have corner book shops exist, whether or not they care if corner book shops exist).
The US lacked a similar guild history and, instead, historically experienced the threat of government overreach privileging the existence of incumbent companies over new players. The US therefore has laws crafted to protect against the major threat the US historically experienced: one company making life miserable for consumers (i.e. the Rockefeller threat, or going back further, the British East India Company threat). Neither US law nor US monopoly philosophy traditionally cares whether small-time market players can compete at all ("why should it," runs the argument, "maybe Starbucks is ubiquitous because their coffee is actually better. What's the value to the consumer of propping up bad coffee?").
It feels like 90% of arguments on HN on the topic are unaware of that philosophy gap.
There was research last year that showed that cynicism is a way for less intelligent people to protect themselves from being taken advantage of. Basically, if you know you're not smart enough to figure out if other people are genuine or out to scam you, defaulting to assuming that all strangers are lying is a winning strategy.
The same kind of logic fuels conspiracy-minded people. The one thing they all have in common is that they reject the mainstream view, they reject the consensus. And in turn, they embrace outlandish explanations, and the communities around those, because it gives them a false sense of superiority. They know something the rest of the sheep don't!
So these two psychological defense mechanisms interact with each other, and when the information flow in society is increasing, the threshold for how smart you need to be to keep up also increases. So people in general probably aren't getting stupider, but they're getting more and more overwhelmed, which looks the same.
That's almost exactly the path I've gone down, and svelte is something I've been seriously researching for a couple months now.
I have a nagging problem that keeps coming back to me though. The declarative state model is the holy grail for UI design, but if you try to mix in any form of imperative logic, it all falls apart. All of a sudden you need to understand and plug into poorly documented and opaque component lifecycle APIs in order to insert your imperative code and have it work correctly. And that really messes up the nice clean solution, making it look more like a chimera of two state models.
Some of that is fixed by libraries that create components native to your framework of choice, implementing the appropriate lifecycle hooks as necessary. But now you're relying on a library that adapts one library to another library, and now you've got a dependency sync problem on top of it all.
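The adapter shape those libraries implement can be sketched framework-agnostically. `ImperativeChart` and the hook names below are invented for illustration; real frameworks expose equivalents like onMount/onDestroy in Svelte or useEffect cleanup in React:

```typescript
// The imperative library's surface, reduced to the two calls that matter.
interface ImperativeChart {
  render(data: number[]): void;
  dispose(): void;
}

// A declarative wrapper that confines the imperative code to lifecycle hooks.
class ChartAdapter {
  private chart: ImperativeChart | null = null;

  // Called once when the component enters the DOM.
  onMount(factory: () => ImperativeChart, data: number[]): void {
    this.chart = factory();
    this.chart.render(data);
  }

  // Called whenever the declarative props change.
  onUpdate(data: number[]): void {
    this.chart?.render(data);
  }

  // Called when the component leaves the DOM; without this, the
  // imperative widget leaks listeners and DOM nodes.
  onDestroy(): void {
    this.chart?.dispose();
    this.chart = null;
  }
}
```

The dependency-sync problem remains, though: this adapter now has to track two release cycles, the framework's and the widget's.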
It wouldn't be such a problem if there weren't so many extremely useful libraries that depend on imperative code. Personally, I've run into massive problems trying to use D3, Leaflet, and Bootstrap, as well as a handful of others. I'm sure there are thousands more.
Not trying to take away from your point. When these declarative state models work well, it's a beautiful thing to behold. But there's always some point where the abstraction leaks.
In political science, the conventional view is that polarization observed in the US is not a product of the two camps drifting further apart (in the sense of having more extreme views), it's the product of issue divisions aligning more closely to partisan camps.
Consider Kim Davis. Kim Davis was a county clerk who went to jail after illegally refusing to issue a same-sex marriage certificate after the court system ruled she must. Davis, like most southerners until recently, was a registered Democrat. Until about the year 2000, virtually every southern government was majority Democratic, in some cases supermajority Democratic. The people in those parties often voted for Republican presidential candidates and were as conservative as the Republican Party, but due to inertia, they registered as Democrats. This has largely been eroded over the last 20 years. The result is that even if no one switches their opinions on anything, conservative Democrats now identify as Republicans. There are a number of inertial reasons to stick with a party that has left you behind, or to cynically join a party for a meal ticket. So, some of the apparent "era of good feelings" -- confluence between parties -- actually occurred because a big part of today's Republicans were "mistakenly" registered as Democrats. If any other region becomes one-party dominated for a long time, you'll see the reverse. The reverse is true in Hawai'i, where many erstwhile Republicans would today be simply more conservative Democrats, because the Republican party is extinct there.
There is also a belief that voters are better able to attach positions to parties. For example, if I told you that one party in the US generally favors higher taxes and higher services, and one party generally favors lower taxes and lower services, could you match the party labels to the descriptions, imperfect as they are? There is a general belief that people are better at this than they once were.
Then there is the increase in correlation between issue positions. For example, today we largely view Republicans as a rural party and Democrats as an urban party. That was not true 30 years ago. Prior to the politicization of abortion, there were constituencies that were pro-life and voted for the Democrats (Catholics being a huge such group). Now abortion is neatly aligned across party lines: there are almost no pro-choice Republicans and almost no pro-life Democrats. Ditto immigration -- which used to be cross-cutting when the union left viewed it as a threat to working conditions but now is primarily conceived along the dimension of racial conservatism. If I tell you someone loves guns, you have a pretty good chance of guessing their positions on immigration, healthcare, and school prayer, even though outwardly those four issues need not be attached to one another.
Finally, within congress, two institutional reforms have contributed to across-party polarization: first, banning earmarks. It used to be that if I wanted corn subsidies and you did not, I would add an amendment to my bill to fund the navy base in your district. We then both vote yes. Killing earmarks may have reduced waste and corruption, but it also reduced a procedural tool used to secure inter-party agreement on contentious bills by offering concessions to the other party. It's like "suing for peace". Second, the "Hastert Rule", a rule that the Republican party adopted to never advance any bill that does not have majority Republican support. It used to be that, say, Republican leadership might advance a bill that had 40% Republican support and 80% Democratic support when those totals add up to more than 50% of the overall congress. By committing not to "roll" their own party, the Hastert rule virtually guarantees that votes that would internally divide parties and thus reveal ideological heterogeneity within the party are less likely to happen in favour of votes that divide across parties.
Why am I mentioning these trends? They contribute to a phenomenon that many political scientists (here I am thinking Tausanovitch and Warshaw, but others as well) have noted: you can perceive polarization (increasing distances between the two parties) without anyone adopting more extreme views. Rather, polarization can emerge from how party labels map to issues and how institutions surface issues to vote on. This doesn't mean no views are changing or become more extreme, but it does mean we should resist estimators that have a simplistic appeal to our gut feeling that things have become more extreme.
Some of this is discussed in the linked article, obliquely, but I think the article suffers from being written by non-political scientists trying to think about a political science problem from first principles rather than engaging with the existing literature. Reinventing the wheel can sometimes be helpful and sometimes is not.
Because, for better or worse, the sources of truth that normal people historically relied on for their barometer of what is true or not have been democratized by the internet.
We live in a world where a substantial number of people believe the earth is flat, that 5G cellular is a mind control scheme, that vaccines cause autism, that COVID-19 was created by a political party, that the concept of climate change is manufactured, or that major national crises are actually just actors being paid to further a political narrative.
Most of these ideas aren't new, but in decades past you might have heard about them from a conspiracy-theorist neighbor, a low-profile website, or an alternative magazine with little reputation of its own.
Now, these ideas are spread on the exact same platforms as objectively truthful / scientifically sound media. Your Youtube conspiracy theory channel is right next to the BBC's videos. Your viral Facebook post could be from the New York Times, or it might be from a propaganda organization - or worse, an account that looks like a normal person but which was specifically created to spread misinformation that seems plausibly truthful.
Credibility is distributed and anyone can publish to a huge audience, which is wonderful sometimes, and other times deeply problematic, because the viewer often doesn't know enough to distinguish fact from fiction and can't trust the publisher at face value anymore.
It's uncharted territory. The cost to distribute is zero, and ideas spread far and wide - but that means there are as many incredible sources on any given topic as credible ones, and telling the difference is hard, and sometimes not knowing the difference is dangerous. Dunning-Kruger writ large.
2. When the Mueller report was about to be released, Attorney General Barr wrote a memo to Congress that purported to summarize the principal conclusions. Trump and Republican supporters seized on this to claim Trump was exonerated. In fact, Mueller explicitly stated that he did not exonerate Trump. Further, in a subsequent letter of his own, Mueller stated that Barr's memo "did not fully capture the context, nature, and substance" of the investigation. (Washington Post article here: https://www.washingtonpost.com/world/national-security/muell...)
There are many more findings, but I tried to be concise in response to the specific claim that there is "nothing there".
When the investigation began and Mueller was appointed, Republicans praised him. (C.f. Fox News article: https://www.foxnews.com/politics/robert-mueller-appointment-...). Now they claim the investigation was either unlawful, or that FBI investigators were criminals, or similar. One does not need to take my or anyone else's word for what Mueller's team reported. You can get the redacted report from the government or even Amazon, and read it for yourself. You can also get the Senate committee's report from the government and read it for yourself. It is clear to me (and should be clear to anyone who has read the report or followed the story) that it is a flat-out lie to say there is "nothing there", and that Trump supporters have shifted from welcoming a fair investigation into Russian interference to attacking the investigators. And that's where we are now.
People have been voting by mail for decades, in the US and many other democracies. There haven't been any incidents. And, yes, I am convinced wide-scale fraud would be almost impossible to hide: you can't pull off any fraud without, for example, many voters turning up at polling places even though they have been mailed a ballot. Or people noticing the voter rolls show them having voted when they didn't. Or whatever scheme you are using to intercept thousands of individual letters addressed to individual residences being noticed. Or sudden, unexplained changes in participation being noticed. Or many dead people somehow voting because you can't possibly stay up-to-date with all recent deaths in the community.
And, of course, the discussion isn't actually about voting-by-mail, yes or no. That has been possible for a long time and isn't going to change. The discussion is about making it easier and/or the default, to protect people from communicable diseases.
The issue, then, isn't even whether voting by mail allows fraud. It's whether the likelihood of fraud is significantly higher when, say, 50% instead of 30% choose that option.
This is yet another blatantly obvious attempt to stack the deck in Republicans' favour. It's sickening to see people pretend to care about the integrity of democracy by engaging with all these phantom debates about voter fraud, in the complete absence of any actual fraud happening (except that Republican operative in North Carolina, of course).
Meanwhile, real damage is done to democracy by the unrelenting attempts to selectively make it harder for people to vote. Take a look at these changes in polling locations in Milwaukee for a blatant example (the red, suburban spots are predominantly Republican locations, while the urban core leans Democratic): https://pbs.twimg.com/media/EYoIrdZXQAILKlB.jpg
It sort of depends on what really fascinates you, right? I'll avoid naming some of the most popular ones, because it's likely that you've already tried them. If you haven't, I'd really recommend giving them a try. Many people seem to really love them.
In terms of defaults:
I've heard really good things about Solus, and its use of AppArmor seems really cool. Never touched its package manager, so I won't recommend it, but it might be worth checking out. Its desktop environment is really snappy and has an interesting design philosophy.
Elementary is really cool as a boutique distribution; I don't personally feel any urge to use it seriously (I dislike apt as a package manager), but I always keep its live environment on a flash drive, because it works without any setup on basically anything I throw it at, painlessly, and without error. It's got a cool indie app store full of curated Elementary-centric free software, and overall just feels great. Using it, you'll probably notice a few areas that it clones Mac on, and a few that feel delightfully different.
Clear Linux (Intel's desktop distribution) is pretty popular right now because of how simple it is & how Intel seems to be going to great lengths to optimize it and make it a serious contender, but I don't like its desktop environment (vanilla GNOME 3 as far as I'm aware) all that much.
ChromiumOS is probably the best-designed desktop operating system on the planet right now technically, and I say that as a person who really hates Google. UI-wise it's so-so, but UX-wise it's really something special.
But more interesting are desktop environments in general, since they can be used with any variant of Linux you feel the urge to use. One exception, though: Elementary's DE and Deepin's DE tend not to work as well or as nicely on platforms that aren't Elementary or Deepin.
There are modern environments:
Plasma has hands-down the best UX of any sort of desktop operating system assuming you've got an Android smartphone; you say you're coming from Apple's environment, so imagine the interop between your Mac and your iPhone, but going both ways instead of just Mac -> iPhone. Texting, handling calls, taking advantage of the computing resources of connected devices, using your phone as an extra trackpad, notifications, unlocking your PC, painless file-sharing, pretty much anything you'd like. There are a bunch of distributions that ship with Plasma by default.
Solus's Budgie is kind of neat in that it takes the main benefit of GNOME 3 (ecosystem) with far fewer downsides.
There are also retro environments, if those are your thing. There's a pretty much perfect NeXTSTEP clone (including the programming environment, not just the UI), amiwm is still pretty interesting, there are clones of basically every UNIX UI under the sun, so on.
I'm not the best person to answer your question, because for the most part I don't go out of my way to use new desktop environments and distributions, and nothing above is my first choice. (In terms of window management, I usually stick with 9wm & E just because I have ridiculous ADHD and 9wm forces me to focus while E allows me to tile painlessly if I ever need it. I use three distributions overall, none of which are very popular at the moment, pretty much solely because I'm really picky with package managers & design philosophies.) That's a "me" issue rather than a Linux issue, though.