Anyone who anthropomorphizes LLMs except for convenience (because I get tired of repeating 'Junie' or 'Claude' in a conversation, I will use female and male pronouns for them, respectively) is a fool. Anyone who thinks AGI is going to emerge from them in their current state, equally so.
We can go ahead and have arguments and discussions on the nature of consciousness all day long, but the design of these transformer models does not lend itself to being 'intelligent' or self-aware. You give them context, they fill in their response, and their execution ceases - there's a very large gap in complexity between these models and actual intelligence or 'life' in any sense, and it's not in the raw amount of compute.
If none of the training data for these models contained works of philosophers; pop culture references around works like Terminator, 'I, Robot', etc; texts from human psychologists; etc., you would not see these existential posts on moltbook. Even 'thinking' models do not have the ability to truly reason, we're just encouraging them to spend tokens pretending to think critically about a problem to increase data in the recent context to improve prediction accuracy.
I'll be quaking in my boots about a potential singularity when these models have an architecture that's not a glorified next-word predictor. Until then, everybody needs to chill the hell out.
>Anyone who anthropomorphizes LLMs except for convenience [...] is a fool.
I'm with you. Sadly, Scott seems to have become a true AI Believer, and I'm getting increasingly disappointed by the kinds of reasoning he comes up with.
Although, now that I think of it, I guess the turning point for me wasn't even the AI stuff, but his (IMO) abysmally lopsided treatment of the Fatima Sun Miracle.
I used to be kinda impressed by the Rationalists. Not so much anymore.
> Even 'thinking' models do not have the ability to truly reason
Do you have the ability to truly reason? What does it mean exactly? How does what you're doing differ from what the LLMs are doing? All your output here is just a word after word after word...
The problem of other minds is real, which is why I specifically separated philosophical debate from the technological one. Even if we met each other in person, for all I know, I could in fact be the only intelligent being in the universe and everyone else is effectively a bunch of NPCs.
At the end of the day, the underlying architecture of LLMs has no capacity for abstract reasoning; they have no goals or intentions of their own, and most importantly their ability to generate something truly new or novel that isn't directly derived from their training data is limited at best. They're glorified next-word predictors, nothing more than that. This is why I said anthropomorphizing them is something only fools would do.
Nobody is going to sit here and try to argue that an earthworm is sapient, at least not without being a deliberate troll. I'd argue, and many would agree, that LLMs lack even that level of sentience.
You do too. What makes you think the models are intelligent? Are you seriously that dense? Do you think your phone's keyboard autocomplete is intelligent because it can improve by adapting to new words?
How much of this is executed as a retrieval-and-interpolation task on the vast amount of input data they've encoded?
There's a lot of evidence that LLMs tend to come up empty or hilariously wrong when there's a relative sparsity in relevant training data (think <10e4 even) for a given query.
> in seconds
I see this as less relevant to a discussion about intelligence. Calculators are very fast at operating on large numbers.
When I ask an LLM to plan a trip to Italy and it finishes with "oh and btw I figured out the problem you had last week with the thin plate splines, you have to do this ....", then I'll start taking the intelligence talk seriously.
> Note that password-based Bitlocker requires Windows Pro which is quite a bit more expensive.
Given that:
1. Retail licenses (instead of OEM ones) can be transferred to new machines
2. Microsoft seems to be making a pattern of allowing retail and OEM licenses to newer versions of Windows for free
A $60 difference in license cost, one-time, isn't such a big deal unless you're planning on selling your entire PC down the line and including the license with it. Hell, at this point, I haven't purchased a Windows license for my gaming PC since 2013 - I'm still using the same activation key from my retail copy of Windows 8 Pro.
This amounts to a difference of 114€, or $135 at the current exchange rate, which is significantly more than $60. Also surprised that Windows Pro is 189% of the price of the Home edition in France but 143% in the USA.
I initially bought the Home edition but could not upgrade to Pro without buying a full license, so I had to bear the full cost of the French Pro license, which led to an upgrade cost of 259€ instead of just $60. (Basically I had to buy the Pro version to get password unlock with Bitlocker, since TPM unlock was broken with dual boot and I needed to enter the recovery key after every boot into Fedora.) If it was possible to pay only for the difference, they did not make it obvious.
And in general, paying this much for an OS that still pushes dark patterns and ads onto me leaves quite a bad taste in my mouth; I wouldn't mind paying a subscription if I could get an OS that does what I want and gets fully out of my way. (But I guess a subscription would come with mandatory online accounts, which is part of the problem at hand here.)
NAT-PMP, UPnP, PCP, et al. primarily exist because consumer networks that have to share a public IP face more issues than simply opening a port up to the internet. Destination port conflicts, port remapping, and discovery of your public IP are huge fucking headaches that these protocols also assist with.
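To make that concrete, here's a rough sketch of what asking the gateway for a port mapping looks like over NAT-PMP (RFC 6886). The gateway address, ports, and lifetime are placeholder values, and a real client would also handle retries and error results:

import socket
import struct

GATEWAY = "192.168.1.1"  # placeholder: your router's LAN address
NATPMP_PORT = 5351

# Mapping request: version 0, opcode (1 = map UDP, 2 = map TCP),
# 2 reserved bytes, internal port, suggested external port, lifetime in seconds.
request = struct.pack("!BBHHHI", 0, 2, 0, 8080, 8080, 3600)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2)
sock.sendto(request, (GATEWAY, NATPMP_PORT))

# Response: version, opcode (request + 128), result code, seconds since epoch,
# internal port, mapped external port, granted lifetime.
data, _ = sock.recvfrom(1024)
ver, op, result, epoch, internal, external, lifetime = struct.unpack("!BBHIHHI", data[:16])
print(f"result={result} external_port={external} lifetime={lifetime}s")

There's also a separate tiny request (opcode 0) that just asks the gateway what your public address is, which is the "discovery of your public IP" part.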
Given most consumer routers these days can be configured with a mobile app, I could easily foresee a saner alternative where devices could simply ask the gateway if they could open up a port and have a notification sent to a mobile app to allow it.
But, that said, given how many devices are mobile these days I think the benefit of endpoint firewalls shouldn’t be underplayed either.
NAT gateways that utilize connection tracking are effectively stateful firewalls. Asking whether a separate set of ‘firewall’ rules does much good is a bit of an ignorant question, IMO, because most SNAT implementations by necessity duplicate this functionality anyway.
Meanwhile, an IPv6 network behind your average Linux-based home router is 2-3 nftables rules to lock down in a similar fashion.
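Roughly speaking, with eth1 as the LAN-side interface (interface names will obviously vary), it's a default-drop forward chain along these lines, with everything unmatched falling through to the drop policy:

table ip6 filter {
    chain forward {
        type filter hook forward priority 0; policy drop;
        iifname eth1 accept;
        ct state { established, related } accept;
    }
}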
It's also trivial to roll your own version of dropbox. With IPv6 it's possible to fail to configure those nftables rules. The firewall could be turned off.
In theory you could turn off IPv4 NAT as well but in practice most ISPs will only give you a single address. That makes it functionally impossible to misconfigure. I inadvertently plugged the WAN cable directly into my LAN one time and my ISP's DHCP server promptly banned my ONT entirely.
> In theory you could turn off IPv4 NAT as well but in practice most ISPs will only give you a single address
So, I randomly discovered the other day that my ISP has given me a full /28.
But I have no idea how to actually configure my router to forward those extra IP addresses inside my network. In practice, modern routers just aren't expecting to handle this; there is no easy "turn off NAT" button.
It's possible (at least on my EdgeRouterX), but I have to configure all the routing manually, and there doesn't seem to be much documentation.
You should be able to disable the firewall from the GUI or CLI for Ubiquiti routers. If you don't want to deal with configuring static IPs for each individual device, you can keep DHCP enabled in the router but set the /28 as your lease pool.
In the US many large companies (not just ISPs) still have fairly large historic IPv4 allocations. Thus most residential ISPs will hand you a single publicly routable IPv4 address regardless of whether you're using IPv6 or not.
We'll probably still be writing paper checks, using magnetic stripe credit cards, and routing IPv4 well past 2050 if things go how they usually do.
Went to double check what my static IP address was, and noticed the router was displaying it as 198.51.100.48/28 (not my real IP).
I don't think the router used to show subnets like that, but it recently got a major firmware update... Or maybe I just never noticed; I've had that static IP allocation for over 5 years. My ISP gave it to me for free after I complained about their CGNAT being broken for like the 3rd time.
Guess they decided it was cheaper to just give me a free static IPv4 address rather than actually look at the Wireshark logs I had proving their CGNAT was doing weird things again.
Not sure if they gave me a full /28 by mistake, or as some kind of apology. Guess they have plenty of IPs now thanks to CGNAT.
More like even if they looked at the logs they aren't about to replace an expensive box on the critical path when it's working well enough for 99% of their customers.
I once had my ISP respond to a technical problem on their end by sending out a tech. The service rep wasn't capable of diagnosing and refused to escalate to a network person. The tech that came out blamed the on premise equipment (without bothering to diagnose) and started blindly swapping it out. Only after that didn't fix the issue did he finally look into the network side of things. The entire thing was fairly absurd but I guess it must work out for them on average.
Did you even read the second paragraph of the (rather short) comment you're replying to? In most residential scenarios you literally can't turn off NAT and still have things work. Either you are running NAT or you are not connected. Meanwhile the same ISP is (typically) happy to hand out unlimited globally routable IPv6 addresses to you.
I agree though, being able to depend on a safe default deny configuration would more or less make switching a drop in replacement. That would be fantastic, and maybe things have improved to that level, but then again history has a tendency to repeat itself. Most stuff related to computing isn't exactly known for a good security track record at this point.
But that's getting rather off topic. The dispute was about whether or not NAT of IPv4 is of reasonable benefit to end user security in practice, not about whether or not typical IPv6 equipment provides a suitable alternative.
> But that's getting rather off topic. The dispute was about whether or not NAT of IPv4 is of reasonable benefit to end user security in practice, not about whether or not typical IPv6 equipment provides a suitable alternative.
And my argument is that the only substantial difference is the action of a netfilter rule being MASQUERADE instead of ACCEPT.
This is what literally everyone here, including yourself, continues to miss. Dynamic source NAT is literally a set of stateful firewall rules that have an action to modify src_ip and src_port in a packet header, and add the mapping to a connection tracking table so that return packets can be identified and then mapped on the way back.
There's no need to do address and port translation with IPv6, so the only difference to secure an IPv6 network is your masquerade rule turns into "accept established, related". That's it, that's the magic! There's no magical extra security from "NAT" - in fact, there are ways to implement SNAT that do not properly validate that traffic is coming from an established connection; which, ironically, we routinely rely on to make things like STUN/TURN work!
> Dynamic source NAT is literally a set of stateful firewall rules that have an action to modify src_ip and src_port in a packet header, and add the mapping to a connection tracking table so that return packets can be identified and then mapped on the way back.
Yes, and that _provides security_. Thus NAT provides security. You can say "well really that's a stateful firewall providing security because that's how you implement NAT" and you would be technically correct, but rather missing the point that turning NAT on has provided the user with security benefits, and thus being forced to turn it on is preventing a less secure configuration. Thus in common parlance, IPv4 is more secure because of NAT.
I will acknowledge that NAT is not the only player here. In a world that wasn't suffering from address exhaustion, ISPs wouldn't have any particular reason to force NAT on their customers, so there would be nothing stopping you from turning it off. In that scenario consumer hardware could well ship with less secure defaults (i.e. NAT disabled, stateful firewall disabled). So I suppose it would not be unreasonable to observe that really it is usage of IPv4 that is providing (or rather forcing) the security here, due to address exhaustion. But at the end of the day the mechanism providing that security is NAT, and thus being forced to use NAT is increasing security.
Suppose there were vehicles that handled buckling your seatbelt for you and those that were manual (as they are today). Someone says "auto seatbelts improve safety" and someone else objects "actually it's wearing the seatbelt that improves safety, both auto and manual are themselves equivalent". That's technically correct but (as technicalities tend to go) entirely misses the point. Owning a car with an auto seatbelt means you will be forced to wear your seatbelt at all times thus you will statistically be safer because for whatever reason the people in this analogy are pretty bad about bothering to put on their seatbelts when left to their own devices.
> in fact, there are ways to implement SNAT that do not properly validate that traffic is coming from an established connection; which, ironically, we routinely rely on to make things like STUN/TURN work!
There are ways to bypass the physical lock on my front door. Nonetheless I believe locking my deadbolt increases my physical security at least somewhat, even if not by as much as I'd like to imagine it does.
The difference is that with IPv4 you know that you have that security, because there is no other way for the system to work, while with the IPv6 router you need to be a network expert to reach that conclusion.
Look at this nftables configuration for a standard IPv4 masquerade setup:
table ip global {
    chain inbound-wan {
        # Add rules here if external devices need to access services on the router
    }
    chain inbound-lan {
        # Add rules here to allow local devices to access DNS, DHCP, etc, that are running on the router
    }
    chain input {
        type filter hook input priority 0; policy drop;
        ct state vmap { established : accept, related : accept, invalid : drop };
        iifname vmap { lo : accept, eth0 : jump inbound-wan, eth1 : jump inbound-lan };
    }
    chain forward {
        type filter hook forward priority 0; policy drop;
        iifname eth1 accept;
        ct state vmap { established : accept, related : accept, invalid : drop };
    }
    chain inbound-nat {
        type nat hook prerouting priority -100;
        # DNAT port 80 and 443 to our internal web server
        iifname eth0 tcp dport { 80, 443 } dnat to 192.168.100.10;
    }
    chain outbound-nat {
        type nat hook postrouting priority 100;
        ip saddr 192.168.0.0/16 oifname eth0 masquerade;
    }
}
Note, we have explicit rules in the forward chain that only forward packets that either:
* Arrived on the LAN-side interface, meaning traffic from within our network that wants to go somewhere else
* Are part of an established packet flow that is tracked, that means return packets from the internet in this simple setup
Everything else is dropped. Without this rule, if I was on the same physical network segment as the WAN interface of your router, I could simply send it packets destined for hosts on your internal network, and they would happily be forwarded on to them!
NAT itself is not providing the security here. Yes, the attack surface here is limited, because I need to be able to address this box at layer 2 (just skip ARP and send the TCP packet with the internal dst_ip I want, addressed to the Ethernet MAC of your router), but if I compromised routers belonging to other customers of your ISP I could start fishing around quite easily.
Now, what's it look like to secure IPv6, as well?
# The vast majority of this is the same. We're using the inet table type here
# so there's only one set of rules for both IPv4 and IPv6.
table inet global {
    chain inbound-wan {
        # Add rules here if external devices need to access services on the router
    }
    chain inbound-lan {
        # Add rules here to allow local devices to access DNS, DHCP, etc, that are running on the router
    }
    chain inbound-nat {
        type nat hook prerouting priority -100;
        # DNAT port 80 and 443 to our internal web server
        # Note, we now only apply this rule to IPv4 traffic
        meta nfproto ipv4 iifname eth0 tcp dport { 80, 443 } dnat to 192.168.100.10;
    }
    chain outbound-nat {
        type nat hook postrouting priority 100;
        # Note, we now only apply this rule to IPv4 traffic
        meta nfproto ipv4 ip saddr 192.168.0.0/16 oifname eth0 masquerade;
    }
    chain input {
        type filter hook input priority 0; policy drop;
        ct state vmap { established : accept, related : accept, invalid : drop };
        # A new rule here to allow ICMPv6 traffic, because it's required for IPv6 to function correctly
        icmpv6 type { echo-request, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert } accept;
        iifname vmap { lo : accept, eth0 : jump inbound-wan, eth1 : jump inbound-lan };
    }
    chain forward {
        type filter hook forward priority 0; policy drop;
        iifname eth1 accept;
        # A new rule here to allow ICMPv6 traffic, because it's required for IPv6 to function correctly
        icmpv6 type { echo-request, echo-reply, destination-unreachable, packet-too-big, time-exceeded } accept;
        # We will allow access to our internal web server via IPv6 even if the traffic is coming from an
        # external interface
        ip6 daddr 2602:dead:beef::1 tcp dport { 80, 443 } accept;
        ct state vmap { established : accept, related : accept, invalid : drop };
    }
}
Note, there are only three new rules added here; the other changes are just so we can use a dual-stack table, so there's no duplication of the shared rules in separate ip and ip6 tables.
* 1 & 2: We allow ICMPv6 traffic in the forward and input chains. This is technically more permissive than it needs to be; we could block echo-request traffic coming from outside our network if desired. destination-unreachable, packet-too-big, and time-exceeded are mandatory for IPv6 to work correctly.
* 3: Since we don't need NAT, we just add a rule to the forward chain that allows access to our web server (2602:dead:beef::1) on port 80 and 443 regardless of what interface the traffic came in on.
None of this requires being a "network expert"; the only functional difference between an actually secure IPv4 SNAT configuration and a secure IPv6 firewall is...not needing a masquerade rule to handle SNAT, and you add traffic you want to let in to forwarding rules instead of DNAT rules.
Consumers would never need to see the guts like this. This is basic shit that modern consumer routers should do for you, so all you need to think about is what you want to expose (if anything) to the public internet.
With partitioning? No you don't. It gets a bit messy if you also want to partition a table by other values (like tenant id or something), since then you probably need to get into using table inheritance instead of the easier declarative partitioning - but either technique just gives you a single effective table to query.
If you are updating the parent table and the partition key is correctly defined, then an update that puts a row in a different partition is translated into a delete on the original child table and an insert on the new child table, since v11 IIRC. But this can lead to some weird results if you're using multiple inheritance so, well, don't.
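To make that concrete, a minimal declarative partitioning sketch (the table and column names are made up):

-- One logical table, range-partitioned on created_at
CREATE TABLE events (
    id         bigint NOT NULL,
    created_at timestamptz NOT NULL,
    payload    jsonb
) PARTITION BY RANGE (created_at);

CREATE TABLE events_2024_01 PARTITION OF events
    FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');
CREATE TABLE events_2024_02 PARTITION OF events
    FOR VALUES FROM ('2024-02-01') TO ('2024-03-01');

-- Queries and updates target the parent and get routed to the right partition.
-- Since v11, an UPDATE that moves a row across a partition boundary is handled
-- as a delete from the old child plus an insert into the new one.
UPDATE events SET created_at = '2024-02-15' WHERE id = 42;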
I believe they were just pointing out that Postgres doesn't do in-place updates, so every update (with or without partitions) is a write followed by marking the previous tuple deleted so it can get vacuumed.
There's a huge divide between abusing rebase in horrible ways to modify published history, and using it to clean up a patch series you've been working on.
"Oops, I made a mistake two commits ago, and I'd really like to get some dumb print statements I added out before I send this off to get merged" is perfectly valid; I just did it yesterday. A quick `git commit --fixup` followed by `git rebase -i --autosquash HEAD~3` and I had the dumb debugging code I left in stripped out.
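Spelled out, that workflow is something like this (the hash is a placeholder for whichever commit the debug prints landed in):

# stage the cleanup, then mark it as a fixup of the offending commit
git add -p
git commit --fixup=abc1234
# rewrite the last few commits; the fixup gets squashed into abc1234 automatically
git rebase -i --autosquash HEAD~3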
Then, there's other perfectly valid uses of rebase, like a simple `git rebase main` in an active development branch to reparent my commits on the current HEAD instead of having my log messed up with a dozen merge commits as I try to keep the branch both current and ready to merge.
So, yes, I do think editing history is a grand idea that should be used regularly. It lets me make all the stupid "trying this" and "stupid bug" commits I want, without polluting the global history.
Or, are you telling me you've also never ended up working on two separate tasks in a branch, thinking they would be hard to separate into isolated changes, and they ended up being more discrete than you expected so you could submit them as two separate changes with a little help from `git cherry-pick` and `git rebase` too?
Editing history isn't evil. Editing history such that pulls from your repository break? That's a different story entirely.
Editing history lets people hide information, intentionally or not. You are bold to claim you know what future people need, information-wise, better than they do.
What's it matter if you have an extra commit to remove a file before merge? Perfectly valid, and doesn't hide anything.
Caring more about a "visually pleasing log" when you could care about an information-rich log doesn't jibe with me. Logs aren't supposed to be "clean"
If I want features in two branches, I make two branches. Cherry pick also is bad for most people, most of the time.
I care about having a commit log that's useful and easy to scan through, it's not about it being "visually pleasing". Having a dozen "oopsie" commits in the log doesn't make my life any easier down the road, all it does is increase noise in the history.
Again, once something hits `main` or a release/maintenance branch then history gets left the hell alone. But there really is no context to be gained from having my fixes for stupid things like typos, stripped-out printf() debug statements, etc. sitting in the commit log before a change gets merged.
> Editing history lets people hide information, intentionally or not. You are bold to claim you know what future people need, information-wise, better than they do.
You're already deciding what information is important to the future when you decide at which points you commit.
Reductio ad absurdum: why not commit every keystroke, including backspaces? By not including every keystroke, you are hiding information from future people!
It is used for tracking; that's the whole point of the header. "Who's sending me all of this traffic" is a useful, non-invasive thing for websites to have access to. You can use rel="noreferrer" on a link to disable the header on a specific link, as well as the `Referrer-Policy` header and `<meta name="referrer" />` to have some additional control (the 'origin-when-cross-origin' value can be useful in some cases, so destination sites can attribute what origin traffic came from, but not the specific page, while you can still track the full URL on your own origin - I believe the slightly stricter 'strict-origin-when-cross-origin' is actually the default behavior in browsers these days).
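For example (the URL is a placeholder):

<!-- per-link: suppress the Referer header for this navigation -->
<a href="https://example.com/some-page" rel="noreferrer">a link</a>

<!-- page-wide via meta tag: send only the origin on cross-origin navigations -->
<meta name="referrer" content="origin-when-cross-origin">

The equivalent `Referrer-Policy: origin-when-cross-origin` response header does the same thing site-wide if you set it on every response.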
"Simple" VPS providers like DigitalOcean, etc. really need to get the hell onboard with network virtualization. It's 2026, I don't want to be dealing with individual hosts just being allocated a damned /64 either. Give me a /48, attach it to a virtual network, let me split it into /64's and attach VM's to it - if I want something other than SLACC addresses (or multiple per VM) then I can deal with manually assigning them.
To be fair, the "big" cloud providers can't seem to figure this shit out, either. It's mind-boggling. I'm not saying I've gone through the headache of banging out all the configuration to get FRRouting and my RouterOS gear happily doing the EVPN-VXLAN dance, but I'm also not Amazon, Google, or Microsoft...
Do you think anything other than trivial internal networking is a common requirement on DO? I'm not saying it's not, I really don't know - I haven't been in the production end of things for a while, and when I was, everyone was basically using AWS et al. for non-trivial applications. They make it easy enough to set up a private IPv4 subnet to connect your internal services. Does that not satisfy your use case, or are you just avoiding tooling that might be obsolete sooner than IPv6?