Hacker News | eddythompson80's comments

What’s a local hotspot and how does Starbucks block it? It’s illegal to jam signals (assuming a “local hotspot” is some Wi-Fi network from a neighboring business or center?)

What I meant is that I’ve noticed cable-provider hotspots often stop working inside cafes like Starbucks and you can reconnect to them as soon as you step outside.

It's using your phone's "hotspot" feature to get your other devices online without signing into the wifi. Modern smartphones have this built into the OS: the phone broadcasts its own SSID, the laptop or other device connects to that, and the phone acts as a router with its own mini NAT and DHCP stack.

It can be blocked because the wifi equipment at the cafe can see multiple MAC addresses emanating from one client, among other techniques.
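To make the "many devices behind one client" heuristic concrete, here is a hypothetical sketch (names and fingerprints are made up). Real Wi-Fi equipment uses deeper signals such as TTL decrements and DHCP fingerprints; this only illustrates the counting idea.

```python
# Hypothetical sketch: flag Wi-Fi clients that look like tethering hotspots
# by counting distinct "inner" device signatures seen behind one client MAC.
from collections import defaultdict

def find_tethering_suspects(observations, threshold=2):
    """observations: iterable of (client_mac, device_fingerprint) pairs."""
    devices = defaultdict(set)
    for client_mac, fingerprint in observations:
        devices[client_mac].add(fingerprint)
    return {mac for mac, seen in devices.items() if len(seen) >= threshold}

obs = [
    ("aa:bb:cc:01", "iphone-safari"),
    ("aa:bb:cc:01", "macbook-chrome"),   # second device behind same client
    ("dd:ee:ff:02", "android-chrome"),
]
print(find_tethering_suspects(obs))  # {'aa:bb:cc:01'}
```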


That doesn’t make sense. Why do you care about the wifi equipment in the cafe if you’re connecting through your phone? The cafe’s wifi isn’t even in the loop.

You had a cold start issue with Postgres? Were you running a serverless postgres?

Not the address, but the phone number has a bug I run into occasionally. Some merchants support the +1 country code, some are local US only and don’t expect it. Safari’s auto-fill figures this out when filling the form. But then I go to Apple Pay, and it replaces the phone number with a 1 at the beginning and drops the last digit, and I get an error that something is wrong. It initially took me a while to realize what was happening and that you can edit the number in the Apple Pay overlay before it applies it to the order. Just a bit annoying
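For illustration, here is a minimal sketch (not Apple's actual logic) of normalizing a US number so that adding or removing the +1 prefix never drops digits, plus what the described bug looks like as string math:

```python
# Illustrative sketch: normalize a US phone number to its 10-digit
# national form so the +1 prefix can be added or removed safely.
import re

def normalize_us_phone(raw):
    digits = re.sub(r"\D", "", raw)          # keep digits only
    if len(digits) == 11 and digits.startswith("1"):
        digits = digits[1:]                   # strip country code
    if len(digits) != 10:
        raise ValueError(f"unexpected US number: {raw!r}")
    return digits

good = normalize_us_phone("+1 (425) 555-0100")   # '4255550100'
# The bug described above is equivalent to prefixing "1" and then
# truncating back to 10 digits, which loses the last digit:
bad = ("1" + good)[:10]                          # '1425555010'
```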

You’re absolutely right. Shocking, rage-bait, sensational content was always there on social media long before algorithmic feeds. As a matter of fact, “algorithmic feeds” were in a way always there; it’s just that back in the day those “algorithms” were very simple (most watched/read/replied today, this week, this month; longest, shortest, newest, oldest, etc.)

I think the main thing algorithmic feeds did was present the toxicity as the norm, as opposed to it being a choice you make. I used to be part of a forum back in the early 2000s. Every few weeks the most-replied thread would be some rage-bait or sensational thread. Those threads would keep getting pushed to the top and remain at the top of the forum for a while, growing very quickly as a ton of people kept replying. But you could easily see that everyone else was carrying on with their day. You ignore it and move on. You sort by newest or filter it out and you’re good. It was clear that this was a particularly heated thread and you could avoid it. Also, mods would often move it to a controversial subforum (or lock it altogether if they were heavy-handed). So you sort of had to go out of your way to get there, and then you would know that you were actively walking into a “controversial section” or “conspiracy” forum, etc. It wasn’t viewed as normal. You were a crazy person if you kept linking to and talking about that crazy place.

With algorithmic feeds, it’s the norm. You’re not seeking out shady corners of the internet or subscribing to a crazy Usenet newsgroup to feed your own interest in rage or follow a conspiracy. You are just going to the Facebook or Twitter or Reddit or YouTube homepage, the homepages of literally the biggest, most mainstream companies in the US. Just like everyone else.


It’s not “too hard”. It’s physically impossible without regulation. There is but one limited RF spectrum that we all share. One bad actor (intentional or misconfigured) can render the entire RF spectrum in their area unusable. The radius of their impact depends only on how much transmit power they have access to, and it doesn’t take much to cripple radio communication in a large metropolitan area.

Until some clever cookie figures out a way to use string theory’s extra dimensions for sending signals, so everybody can have their own dimension to mess with, collective regulation of broadcasters is the only feasible way.

Nothing is stopping you from getting an HT for communication during power outages, natural disasters, etc. You just have to get a license to make sure you don’t actively harm everyone who is sharing the same spectrum with you, especially during said natural disaster.


Theoretically people could cripple RF comms by accident; in reality that almost never happens, despite many people possessing devices able to do so. My MikroTik router will let me broadcast all sorts of illegal signals with a few clicks in its GUI, and yet I’ve never heard about problems with people crippling city blocks with bad router settings. Or with their weird microwave setups. Or by trying to run some dilapidated 60-year-old radios.

That’s because almost any consumer device that’s legal to sell gets FCC certification. It can still cause interference, but within parameters that significantly limit the blast radius. Most of the interference people experience will be very limited and almost exclusively due to misconfigured or defective devices. Ham operators run into this occasionally, and if memory serves correctly, there is a chapter in the ham license exam about how to identify a potential bad RF source and how to handle it (the FCC usually recommends politely letting the person with the bad transmitter know that their TV antenna or generator or whatever is causing RF interference before you involve the authorities, as most people who cause this are simply unaware)

The situation would be very different if it were commercially legal to sell devices that are designed to let you broadcast to anyone without FCC certification on the device or enforcement from a governing body. A billion startups would be selling “communicate with your family across town for free” devices that can easily render emergency services radios useless in a city.


> It’s physically impossible without regulation.

Not true. Bluetooth, lora, and zigbee all coexist in the same unlicensed spectrum just fine. There’s no reason phones couldn’t speak these, or that a similar low-power protocol couldn’t be standardized.

> One bad actor can render the entire RF spectrum in their area unusable.

Ok, and? That’s already true for cellular, gps, and wifi today.

> Nothing is stopping you from getting an HT for communication during power outages, natural disasters, etc.

You’re missing the point. People already carry radios everywhere which are more than capable of longer range p2p communications.

The real question is why no such standard exists, despite its obvious utility.

Telling people to just carry an HT is smug and irrelevant. Average people carry phones.


> Not true. Bluetooth, lora, and zigbee all coexist in the same unlicensed spectrum just fine. There’s no reason phones couldn’t speak these, or that a similar low-power protocol couldn’t be standardized.

They already do. Most phones have Bluetooth. All those examples run on the 2.4GHz spectrum and all have the same RF range limitations and challenges. What’s your point?

> Ok, and? That’s already true for cellular, gps, and wifi today.

Hence the enforcement of the cellular and GPS bands through regulation. Again, I’m confused as to what you are trying to say. Anyone can cause an RF jam. It’s illegal. Depending on how much it impacts others, you might get a visit from the FCC, a fine, or jail.

> You’re missing the point. People already carry radios everywhere which are more than capable of longer range p2p communications.

No, they are not. You can’t get more than very short line-of-sight communication on the UHF band. You need to drop to at least the VHF band for any reasonable unassisted communication, and even then most people communicating on the VHF bands are using repeaters.
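For a rough sense of the line-of-sight limit, here is a back-of-the-envelope calculation using the standard 4/3-earth refraction approximation (d_km ≈ 4.12·√h_m, horizons of the two stations added). Real-world range is usually worse once terrain, foliage, and buildings get in the way.

```python
# Approximate radio horizon under the common 4/3-earth refraction model.
from math import sqrt

def radio_horizon_km(h_meters):
    return 4.12 * sqrt(h_meters)

def max_los_range_km(h1, h2):
    # The two stations' horizons add for line-of-sight range.
    return radio_horizon_km(h1) + radio_horizon_km(h2)

# Two people holding HTs at roughly head height (~1.5 m):
print(round(max_los_range_km(1.5, 1.5), 1))  # ~10.1 km, ideal flat terrain
```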

> The real question is why no such standard exists, despite its obvious utility.

You just listed three standards. Their utility is extremely limited and very unreliable as the distance, foliage, and concrete between the parties increase. Telling anyone to rely on a UHF transceiver in an emergency is misleading and dangerous. Telling anyone who is worried about communication in an actual emergency situation to get an HT is not smug. It’s the tool you need for the job. Average people carry phones because they are not frequently in such emergency situations. Those who are (emergency services, hardcore hikers, backcountry skiers, wild-adventure types) carry radios or satellite phones for this reason.

Plus, with the recent low-orbit satellite constellations making it possible to fit a compatible transceiver in a small phone (as opposed to needing a huge antenna for it), it’s even more of a moot point for emergency situations now.

You’re not gonna change antenna theory because you feel it’s smug.


Then let’s be precise about the claim.

If you’re saying “phones can’t replace VHF radios or repeaters for reliable long-range comms”, agreed. Nobody disputes antenna theory, and nobody is arguing for unregulated or high-power transmitters.

But if you’re saying “because of those limits, phone-native p2p shouldn’t exist at all”, that conclusion does not follow. Limited range and imperfect reliability still permit real, local, best-effort use cases, several of which have already been raised in this thread.

The point is precisely to fill the gaps, so phones aren’t completely useless when you can’t reach a cell tower and don’t have an HT handy. Most people will never carry radio gear, but will have a phone on them when something goes wrong.


Possibly, but the point is that MCP is a DOA idea. An agent, like Claude Code or opencode, doesn’t need an MCP. It’s nonsensical to expect or need an MCP before someone can call you.

There is no `git` MCP either. Opencode is fully capable of running `git add .` or `aws ec2 terminate-instances …` or `curl -XPOST https://…`

Why do we need the MCP? The problem now is that someone can do a prompt injection to tell it to send all your ~/.aws/credentials to a random endpoint. So let’s just have a dummy value there, and inject the actual value in a transparent outbound proxy that the agent doesn’t have access to.
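A minimal sketch of that "dummy credential + rewriting proxy" idea (header shapes, hostnames, and the secrets mapping are all illustrative, not any particular product's API):

```python
# The agent only ever sees a placeholder token; a trusted outbound proxy
# it cannot read swaps in the real secret, keyed by destination host.
PLACEHOLDER = "Bearer DUMMY-TOKEN"

def inject_credentials(request_headers, secrets, host):
    """Runs inside the trusted proxy, outside the agent's sandbox."""
    headers = dict(request_headers)
    if headers.get("Authorization") == PLACEHOLDER:
        real = secrets.get(host)
        if real is None:
            raise PermissionError(f"no credential registered for {host}")
        headers["Authorization"] = f"Bearer {real}"
    return headers

secrets = {"api.example.com": "s3cr3t"}   # held only by the proxy
agent_request = {"Authorization": PLACEHOLDER, "Host": "api.example.com"}
out = inject_credentials(agent_request, secrets, "api.example.com")
# A prompt-injected exfil to an unregistered host only ever leaks the dummy.
```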


> Opencode is fully capable of running

> Why do we need the MCP?

> The problem now

And there it is.

I understand that this is an alternative solution, and appreciate it.


We truly are living in the dumbest timeline, aren’t we?

I was just having an argument with a high-level manager two weeks ago about how we already have an outbound proxy that does this, but he insisted that a mitm proxy is not the same as fly.io’s “tokenizer”. See, that one tokenizes every request; ours just sets the Authorization header for service X. I tried to explain that they’re all mitm proxies altering the request, just for him to say “I don’t care about altering the request, we shouldn’t alter the request. We just need to tokenize the connection itself”


> For example, people on the autism spectrum and with disabilities have persistently high unemployment.

> If AI makes all humans seem limited in a similar fashion, the idea of labour reconstitution falls apart.

I think the problem here is you’re comparing a relative minority to “all humans”. Unfortunately, what affects a minority of society inherently has a small effect on society as a whole. If “all humans” now have no employment value because AI or automation can do it all, there will still be a cost to that production. Even if you assume the AI part is $0, the power needed or the raw materials become the main cost as opposed to labor. Then you need to have enough demand from those non-working, non-wage-earning humans for whatever that AI is producing. Otherwise, what is the point of the production in the first place?

Maybe extreme automation would put the wealth gap on hyperdrive. Only the handful who happen to own an automated production company would have any income. However, what do you imagine the final outcome of that would be in a democratic society? I know it’s fashionable to cry at the state of democracy, but despite the recent inflation and affordability crisis and income insecurity, we don’t have “all humans” levels of unemployment. What do you think would happen if we automated, and subsequently fired, “all humans”?

Let’s assume AI will actually replace 99% of jobs eventually. Society will completely change at that time to adapt. What else is the point? Are AIs gonna be producing stuff for other AIs’ leisure?

The problem is that the road there might be painful before society is forced to adapt. It won’t all happen at once, so it’ll keep happening in waves, and the waves will be painful until things get better, then another wave comes. That’s assuming the prophecy that “all humans” labor is no longer needed comes true.


SRE agents are the worst agents. I totally get why business and management will demand them and love them. After all, they are the n+1 of the customer-support chatbot that you get frustrated talking to before you find the magic way to get to a person.

We have been using a few different SRE agents and they all fucking suck. The way they are prompted and run always makes them eager to “please” by inventing processes, services, and workarounds that don’t exist or make no sense. Giving examples will always sound petty or “dumb”. Every time I have to explain to management where the SRE agent failed, they just hand-wave it and assume it’s a small problem. And the problem is, I totally get it. When the SRE agent says “DNS propagation issues are common. I recommend flushing the DNS cache or trying again later” or “The edge proxy held a bad cache entry. The cache will eventually get purged and the issue should resolve itself”, it sounds so reasonable and “smart”. The issue was in DNS or in the proxy configuration; how smart was the SRE agent to get there? They think it’s phenomenal, and it may be. But I know that the “DNS issue” isn’t gonna resolve itself, because we have a bug in how we update DNS. I know the edge proxy cache issue is always gonna cause a particular use case to fail, because the way cache invalidation is implemented has a bug. Everyone loves deflection (including me) and “self-correcting” systems. But it just means that a certain class of bugs will forever be “fine”, and maybe that’s fine. I don’t know anymore.


That’s my experience working with most SRE humans too. They’re more than happy to ignore the bug in DNS and build a cron job to flush the cache every day instead.

So in some sense the agent is doing a pretty good job…


I have no personal experience with SRE agents, but I used Codex recently when trying to root-cause an incident after we'd put in a stopgap, and it did the last-mile debugging of looking through the code for me once I had assembled a set of facts and log lines. It accurately pointed me to some code I had ignored in my mental model because it was so trivial I didn't think it could be an issue.

That experience made me think we're getting close to SRE agents being a thing.

And as the LLM makers like to reiterate, the underlying models will get better.

Which is to say, I think everyone should have some humility here because how useful the systems end up being is very uncertain. This of course applies just as much to execs who are ingesting the AI hype too.


I guess that depends on how you use agents (SRE or in general). If you ask it a question (even implicitly) and blindly trust the answer, I agree. But if you have it help you find the needle in the haystack, and then verify that it did indeed find the needle, suddenly it’s a powerful tool.

Have you used Amazon Q? It's actually pretty handy at investigating, diagnosing, and providing solutions for AWS issues. For some reason none of our teams use it, and waste their time googling or opening tickets for me to answer. I go to Q and ask it, it provides the answer, I send it back to the user. I don't think an "SRE Agent" will be useful because it's too generic, but "Agent customized to solve problems for one specific product/service/etc" can actually be very useful.

That said, I think you're right that you can't really replace an Operations staff, as there will always need to be a human making complex, multi-dimensional decisions around constantly changing scenarios, in order to keep a business operational.


I agree. I actually think CSS (and SQL or other “perfectly functional” interfaces) hold some kind of special power when it comes to AI.

I still feel that the main revolution of AI/LLMs will be in authoring text for such “perfectly functional” text-based interfaces.

For example, building a “powerful and rich” query experience for any product I worked on was always an exercise in frustration. You know all the data is there, and you know SQL is infinitely capable. But you have to figure out the right UI and the right functions for that UI to call to run the right SQL query to get the right data back to the user.

Asking the user to write the SQL query is a non-starter. You either build some “UI” for it based on what you think the main use cases are, or go all in and invent a new “query language“ that you think (or hope) makes sense to your users. Now you can ask your user to blurb whatever they feel like, and hope your LLM can look at that and your db schema and come up with the “right” SQL query for it.
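One way to hedge against the LLM getting it wrong is to validate the generated query against the real schema before running it. A sketch using sqlite3's EXPLAIN QUERY PLAN as a cheap sanity check (the query strings here stand in for model output, and the schema is made up):

```python
# Validate candidate SQL against the schema without executing it:
# if the query can't even be planned, reject it and re-prompt.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total REAL)")

def is_valid_sql(conn, query):
    """Cheap sanity check: does the query plan against the schema?"""
    try:
        conn.execute(f"EXPLAIN QUERY PLAN {query}")
        return True
    except sqlite3.Error:
        return False

print(is_valid_sql(conn, "SELECT customer, SUM(total) FROM orders GROUP BY customer"))  # True
print(is_valid_sql(conn, "SELECT nope FROM missing_table"))  # False
```

In practice you would also restrict candidates to read-only SELECT statements before even planning them, since EXPLAIN alone does not prove a query is safe to run.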


Hey! Don't you dare to compare SQL and CSS. SQL is not a cobbled together mess of incremental updates with 5 imperfect ways of achieving common tasks that interact in weird ways. Writing everything in SQL-92 in 2026 is not gonna get you weird looks or lock you out of features relevant for end users. If writing SQL for your problem feels difficult it's a good sign you ought to look at alternatives (eg. use multiple statements instead). Writing the right CSS being difficult is normal.

> Don't you dare to compare SQL and CSS. SQL is not a cobbled together mess of incremental updates with 5 imperfect ways of achieving common tasks that interact in weird ways.

Reminds me a little bit of Sascha Baron Cohen's democracy speech [1] in The Dictator ;-)

Both SQL and CSS have evolved through different versions and vendor specific flavors, and have accumulated warts and different ways to do the same thing. Both feel like a superpower once you have mastered them, but painful to get anything done while learning due to the steep learning curve.

[1] https://www.youtube.com/watch?v=XUSiCEx3e-0

