
The hole is closed with per-site pseudonyms. Your wallet generates a unique cryptographic key pair for each site, so same person + same site = same pseudonym, while same person + different sites = different, unlinkable pseudonyms.
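
A minimal sketch of the linkability property (the real EUDI scheme derives a per-site key pair rather than an HMAC, so treat this as illustration only):

    import hmac, hashlib

    # Long-term secret held only by the wallet (in reality a key in secure hardware).
    wallet_secret = b"replace-with-a-real-random-secret"

    def pseudonym_for(site: str) -> str:
        # Stable for the same site, unlinkable across sites without the wallet secret.
        return hmac.new(wallet_secret, site.encode(), hashlib.sha256).hexdigest()

    pseudonym_for("discord.com")   # same value every visit from this wallet
    pseudonym_for("example.org")   # different value, not linkable to the one above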

"The actual correct way" is an overstatement that misses jfaganel99's point. There are always tradeoffs. EUDI is no exception. It sacrifices full anonymity to prevent credential sharing so the site can't learn your identity, but it can recognize you across visits and build a behavioral profile under your pseudonym.


Ok, but we were talking about users on Discord who have to verify their age. I was under the impression that

> it can recognize you across visits and build a behavioral profile under your pseudonym

is the default Discord experience for users with an account, long before age verification entered the chat.


If AI is good enough that juniors wielding it outproduce seniors, then the juniors are just... overhead. The company would cut them out and let AI report to a handful of senior architects who actually understand what's being built. You don't pay humans to be a slow proxy for a better tool.

If the tools get good enough to not need senior oversight, they're good enough to not need junior intermediaries either. The "juniors with jetpacks outpacing seniors" future is unrealistic and unstable—it either collapses into "AI + a few senior architects" or "AI isn't actually that reliable yet."


Or it collapses when the seniors have to retire anyway. Who instructs the LLM when there’s nobody who understands the business?

I’m sure the plan is to create a paperclip maximizing company which is fully AI. And the sea turned salty because nobody remembered how to turn it off.


Apparent hypocrisy and injustice in government policy is an ugly thing in the world that should be pointed out and eliminated through public awareness and scrutiny.


Facebook is also under investigation; it just hasn't concluded yet. https://news.ycombinator.com/item?id=46912263


Get a life that's more interesting than dish washing 4-8 hours a day.


Thought the same thing. There is no legal recourse if the bot drains the account and donates it to charity. The legal system's response to that is: don't give non-deterministic bots access to your bank account and 2FA. There is no further recourse. No bank or insurance company will cover this, and rightfully so. If he wanted to guard himself somewhat, he'd give the bot only a credit card he could cancel or stop payments on, the exact minimum he'd give a human assistant.


The default output from AI is much like the default output from experienced devs prioritizing speed over architecture to meet business objectives. Just like experienced devs, LLMs accept technical debt as leverage for velocity. This isn't surprising: most code in the world carries technical debt, so that's what the models were trained on and learned to reproduce.

Technical debt, like financial debt, is a tool. The problem isn't its existence, it's unmanaged accumulation.

A few observations from my experience:

1. One-shotting - if you're prompting once and shipping, you're getting the "fast and working" version, not the "well-architected" version. Same as asking an experienced dev for a quick prototype.

2. AI can output excellent code - but it takes iteration, explicit architectural constraints, and often specialized tooling. The models have seen clean code too; they just need steering toward it.

3. The solution isn't debt-free commits. The solution is measuring, prioritizing, and reducing only the highest-risk tech debt - the equivalent of focusing on bottlenecks with a performance profiler. Which code is high-risk? Where is the debt concentrated? Poorly-factored code with good test coverage is low-risk. Poorly-tested code in critical execution paths is high-risk. Your CI pipeline should check debt automatically, just like it lints and runs your tests (a sketch of such a gate follows this list).
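
For example, a CI gate comparing the current report against a baseline. The JSON shape and the debt-report.json / debt-baseline.json filenames here are hypothetical; adapt to whatever your debt tool actually emits:

    # ci_debt_gate.py - fail the build if new high-risk debt appears.
    # Assumes the debt tool writes JSON like:
    #   {"items": [{"path": "src/foo.rs", "risk": 8.2}, ...]}
    import json, sys

    THRESHOLD = 7.0   # risk score above which an item blocks the build

    def high_risk(report_path):
        with open(report_path) as f:
            return {i["path"] for i in json.load(f)["items"] if i["risk"] >= THRESHOLD}

    new_items = high_risk("debt-report.json") - high_risk("debt-baseline.json")
    if new_items:
        print("New high-risk debt introduced in:\n  " + "\n  ".join(sorted(new_items)))
        sys.exit(1)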

I built https://github.com/iepathos/debtmap to solve this systematically for my projects. It measures technical debt density to prioritize risk, but more importantly for this discussion: it identifies the right context for an LLM to understand a problem without looking through the whole codebase. The output is designed to be used with an LLM for automated technical debt reduction. And because we're measuring debt before and after, we have a feedback loop - enabling the LLM to iterate effectively and see whether its refactoring had a positive impact or made things worse. That's the missing piece in most agentic workflows: measurement that closes the loop.
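
The loop itself is simple: measure, let the agent refactor, re-measure, and keep or revert. Sketch below; "your-debt-tool --summary" and refactor_with_llm are placeholders, not debtmap's actual interface:

    import subprocess

    def debt_score() -> float:
        # Placeholder: run whatever debt tool you use and parse a single score.
        out = subprocess.run(["your-debt-tool", "--summary"],
                             capture_output=True, text=True)
        return float(out.stdout.strip())

    def tests_pass() -> bool:
        return subprocess.run(["pytest", "-q"]).returncode == 0

    def refactor_with_llm(target: str) -> None:
        ...  # placeholder for the agent call that edits `target` in place

    before = debt_score()
    refactor_with_llm("src/hotspot.py")
    if debt_score() >= before or not tests_pass():
        # The change made things worse (or broke tests): throw it away.
        subprocess.run(["git", "checkout", "--", "."])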

To your specific concern about shipping unreviewed code: I agree it's risky, but the review focus should shift from "is every line perfect" to "where are the structural risks, and are those paths well-tested?" If your code has low complexity everywhere, is well tested (always review the tests), and passes everything, ask yourself what you actually gain from investing more time in engineering the lesser tech debt away. You can't eliminate all tech debt, but you can keep it from compounding in the places that matter.


The "code witness" concept falls apart under scrutiny. In practice, the agent isn't replacing ripgrep with pure Python, it's generating a Python wrapper that calls ripgrep via subprocess. So you get:

- Extra tokens to generate the wrapper

- New failure modes (encoding issues, exit code handling, stderr bugs)

- The same underlying tool call anyway

- No stronger guarantees - actually weaker ones, since you're now trusting both the tool AND the generated wrapper

The theoretical framing about "proofs as programs" and "semantic guarantees" sounds impressive, but the generated wrapper doesn't provide stronger semantics than rg alone, it actually provides strictly weaker ones. This is true for pretty much any CLI tool you have the AI wrap in Python instead of calling the battle-tested tool directly.
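
The typical generated "wrapper" looks something like this (illustrative, not taken from the article):

    import subprocess

    def search(pattern: str, path: str = ".") -> list[str]:
        # All this does is shell out to rg; the wrapper adds no semantics.
        result = subprocess.run(["rg", "--line-number", pattern, path],
                                capture_output=True, text=True)
        # New failure mode: rg exits 1 for "no matches" but 2 for real errors.
        # Get that distinction wrong and the wrapper silently swallows failures.
        if result.returncode == 2:
            raise RuntimeError(result.stderr)
        return result.stdout.splitlines()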

For actual development work, the artifact that matters is the code you're building, which we're already tracking in source control. Nobody needs a "witness" of how the agent found the right file to edit, and if they do, agents have parseable logs. Direct tool calls are faster, more reliable, and the intermediate exploration steps are ephemeral scaffolding anyway.


> In practice, the agent isn't replacing ripgrep with pure Python, it's generating a Python wrapper that calls ripgrep via subprocess.

Yep. I have very strong guardrails on what commands agents can execute, but I also have a "vterm" MCP server that the agent uses to test the TUI I'm developing in a real terminal emulator; it can send events, take screenshots, etc.

More than once it's worked around bash tool limitations by using the vterm MCP server to exit the TUI app under development and start issuing unrestricted bash commands. I'm probably going to add command filtering on what can be run under vterm (so it can't exit back to an initial shell), which will help unless/until I add a "!<script>" style command to my TUI, in which case I'm sure it'll find and exploit that instead.
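
Something like an allowlist check in front of whatever the server forwards to the terminal would cover the obvious escapes (a sketch, not your actual vterm server; the allowlist contents are made up):

    import shlex

    ALLOWED = {"ls", "cat", "rg", "pytest"}   # whatever the TUI session legitimately needs
    METACHARS = set(";|&`$<>")                # crude: reject chaining/substitution outright

    def allow_command(line: str) -> bool:
        if any(c in METACHARS for c in line):
            return False
        try:
            tokens = shlex.split(line)
        except ValueError:
            return False
        return bool(tokens) and tokens[0] in ALLOWED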


> but the generated wrapper doesn't provide stronger semantics than rg alone, it actually provides strictly weaker ones

I don't know if I agree with this.

I had been doing some experiments using PowerShell as the only available tool, and I found that switching to an ExecuteFunction (C#) tool provided a much less buggy experience, even when Process.Start is involved.

Which one is functionally a superset of the other is actually kind of a chicken-and-egg problem, because each can bootstrap into the other. However, in practice the code tool seems to provide far more "paths" and intermediate tokens to absorb the complexity of the original ask. PowerShell seemed much more constraining at the edges. I had a lot of trouble getting the shell to accept verbatim strings as file contents; csc.exe has zero issues with this by comparison.


The trick here is to make the wrappers permanent. Give the agent an environment (VM, whatever) where all of these utilities are stored after being generated.

Basically you let the agent create its own tools and reuse them instead of rewriting them every time from scratch.
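
In practice that can be as simple as a tools directory the agent checks before generating anything new (sketch; the path and layout are arbitrary):

    from pathlib import Path

    TOOLS_DIR = Path.home() / ".agent_tools"   # persists across sessions / VM snapshots
    TOOLS_DIR.mkdir(exist_ok=True)

    def get_or_create_tool(name: str, generate) -> Path:
        # Reuse a previously generated utility if it exists;
        # otherwise have the agent generate it once and store it for next time.
        script = TOOLS_DIR / f"{name}.py"
        if not script.exists():
            script.write_text(generate(name))
        return script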


Research on calculator use in early math education (notably the Hembree & Dessart meta-analysis of 79 studies) found that students given calculators performed better at math - including on paper-and-pencil tests without calculators. The hypothesis is that calculators handle computation, freeing cognitive bandwidth and time for problem-solving and conceptual understanding. Problem solving and higher level concepts matter far more than memorizing multiplication and division tables.

I think about this often when discussing AI adoption with people. It's also relevant to this VS Code discussion, which is tangential to the broader AI-assisted development debate. This post conflates tool proficiency with understanding. You can deeply understand Git's DAG model while never typing git reflog. Conversely, you can memorize every terminal command and still design terrible systems.

The scarce resource for most developers isn't "knows terminal commands" - it's "can reason about complex systems under uncertainty." If a tool frees up bandwidth for that, that's a net win. Not to throw shade at hyper-efficient terminal users - I live in the terminal and recommend it - but using it instead of an IDE to write code isn't going to make you a better programmer by itself. Living in the terminal doesn't give you reasoning about and understanding of complex systems; it gives you efficiency, flexibility, and nerd cred - all valuable, but none of them are systems thinking.

The auto-complete point in the post is particularly ironic given how critical tab completion is for terminal users and how heavily most vim users rely on auto-complete as well. Auto-complete doesn't limit your effectiveness; the calculator research above suggests the opposite.


Thank you for mentioning that. I have often argued with people in favour of my approach (home ed, so I got to choose how to teach my kids from about 9 to 16) of not doing things like memorising times tables and learning arithmetic techniques like long division.


This is a well-thought-out critique. Thanks for sharing your insights.


> As IT workers, we all have to prostitute ourselves to some extent.

No, we really don't. There are plenty of places to work that aren't morally compromised - non-profits, open source foundations, education, healthcare tech, small companies solving real problems. The "we all have to" framing is a convenient way to avoid examining your own choices.

And it's telling that this framing always seems to appear when someone is defending their own employer. You've drawn a clear moral line between Google ("mixed bag") and AI companies ("unquestionably cancer") - so you clearly believe these distinctions matter even though Google itself is an AI company.


> non-profits

I think those are pretty problematic. They can't pay well (no profits...), and/or they may be politically motivated such that working for them would mean a worse compromise.

> open source foundations

Those dreams end. (Speaking from experience.)

> education, healthcare tech

Not self-sustaining. These sectors are not self-sustaining anywhere, and therefore are highly tied to politics.

> small companies solving real problems

I've tried small companies. Not for me. In my experience, they lack internal cohesion and resources for one associate to effectively support another.

> The "we all have to" framing is a convenient way to avoid examining your own choices.

This is a great point to make in general (I take it very seriously), but it does not apply to me specifically. I've examined all the way to Mars and back.

> And it's telling that this framing always seems to appear when someone is defending their own employer.

(I may be misunderstanding you, but in any case: I've never worked for Google, and I don't have great feelings for them.)

> You've drawn a clear moral line between Google ("mixed bag") and AI companies ("unquestionably cancer")

I did!

> so you clearly believe these distinctions matter even though Google itself is an AI company

Yes, I do believe that.

Google has created Docs, Drive, Mail, Search, Maps, Project Zero. It's not all terribly bad from them; there is some "only moderately bad", and even morsels of "borderline good".


Thanks for the thoughtful reply.

The objections to non-profits, OSFs, education, healthcare, and small companies all boil down to: they don't pay enough or they're inconvenient. Those are valid personal reasons, but not moral justifications. You decided you wanted the money big tech delivers and are willing to exchange ethics for that. That's fine, but own it. It's not some inevitable prostitution everyone must do. Plenty of people make the other choice.

The Google/AI distinction still doesn't hold. Anthropic and OpenAI also created products with clear utility. If Google gets "mixed bag" status because of Docs and Maps (products that exist largely just to feed their ad machine), why is AI "unquestionably cancer"? You're claiming Google's useful products excuse their harms, but AI companies' useful products don't. That's not a principled line, it's just where you've personally decided to draw it.


> The objections to non-profits, OSFs, education, healthcare, and small companies all boil down to: they don't pay enough or they're inconvenient. Those are valid personal reasons, but not moral justifications. You decided you wanted the money big tech delivers and are willing to exchange ethics for that. That's fine, but own it.

I don't perceive it that way. In other words, I don't think I've had a choice there. Once you consider other folks that you are responsible for, and once you consider your own mental health / will to live, because those very much play into your availability to others (and because those other possible workplaces do impact mental health! I've tried some of them!), then "free choice of employer" inevitably emerges as illusory. It's way beyond mere "inconvenience". It absolutely ties into morals, and meaning of one's life.

The universe is not responsible for providing me with employment that ensures all of: (a) financial safety/stability, (b) self-realization, (c) ethics. I'm responsible for searching the market for acceptable options, and shockingly, none seem to satisfy all three anymore. It might surprise you, but the trend for me has been easing up on both (a) and (c) (no mistake there), in order to gain territory on (b). It turns out that my mental health, my motivation to live and work are the most important resources for myself and for those around me. It has been a hard lesson that I've needed to trade not only money, but also a pinch of ethics, in order to find my place again. This is what I mean by "inevitable prostitution to an extent". It means you give up something unquestionably important for something even more important. And you're never unaware of it, you can't really find peace with it, but you've tried the opposite tradeoffs, and they are much worse.

For example, if I tried to do something about healthcare or education in my country, that might easily max out the (b) and (c) dimensions simultaneously, but it would destroy my ability to sustain my family. (It's not about "big tech money" vs. "honest pay", but "middle-class income" vs. poverty.) And that question entirely falls into "morality": it's responsibility for others.

> Anthropic and OpenAI also created products with clear utility.

Extremely constrained utility. (I realize many people find their stuff useful. To me, they "improve" upon the wrong things, and worsen the actual bottlenecks.)

> You're claiming Google's useful products excuse their harms,

(mitigate, not excuse)

> but AI companies' useful products don't. That's not a principled line, it's just where you've personally decided to draw it.

First, it's obviously a value judgment! We're not talking theoretical principles here. It's the direct, rubber-meets-the-road impact I'm interested in.

Second, Google is multi-dimensional. Some of their activity is inexcusably bad. Some of it is excusable, even "neat". I hate most of their stuff, but I can't deny that people I care about have benefited from some of their products. So, all Google does cannot be distilled into a single scalar.

At the same time, pure AI companies are one-dimensional, and I assign them a pretty large magnitude negative value.


What specific legal recourse beyond what exists? You can already sue for breach of contract if a company violates their privacy policy. The real problems are: (1) detecting violations in the first place, and (2) proving/quantifying damages. A 'guarantee' doesn't solve either.

