Hacker News | diwank's comments

Opus 4.6 gets it right more than half the time.


Working on Memory Store: persistent, shared memory for all your AI agents.

https://memory.store

The problem: if you use multiple AI tools (Claude, ChatGPT, Cursor, etc.), none of them know what the others know. You end up maintaining .md files, pasting context between chats, and re-explaining your project every time you start a new conversation. Power users spend more time briefing their agents than doing actual work.

Memory Store is an MCP server that ingests context from your workplace tools (Slack, email, calendar) and makes it available to any MCP-compatible agent. Make a decision in one tool, the others know. Project status changes, every agent is up to date.
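To make the idea concrete, here is a toy sketch of the shared-memory behavior in plain Python. This is illustrative only: the class, method names (`remember`, `recall`), and keyword matching are all invented for this example, not Memory Store's actual API, which a real deployment would expose as MCP tools backed by proper retrieval.

```python
# Illustrative only: a toy shared memory keyed by project, which any agent
# (Claude, Cursor, ...) could read through one MCP tool instead of each
# tool keeping its own .md files. Names here are invented, not the real API.
from dataclasses import dataclass, field


@dataclass
class MemoryStore:
    # project -> list of remembered facts/decisions
    memories: dict = field(default_factory=dict)

    def remember(self, project: str, fact: str) -> None:
        """Record a decision once, from whichever tool it was made in."""
        self.memories.setdefault(project, []).append(fact)

    def recall(self, project: str, query: str) -> list[str]:
        """Naive keyword recall; a real store would use embeddings."""
        return [m for m in self.memories.get(project, [])
                if query.lower() in m.lower()]


store = MemoryStore()
store.remember("fundraise", "Decided to target a seed round")   # e.g. from Slack
store.remember("fundraise", "Investor call moved to Friday")    # e.g. from calendar
print(store.recall("fundraise", "investor"))
```

The point of the design is the write-once, read-anywhere flow: a decision is recorded from one tool and every other agent sees it on its next `recall`.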

We ran 35 in-depth user interviews and surveyed 90 people before writing a line of product code — 95% had already built workarounds for this problem (custom GPTs, claude.md templates, copy-paste workflows). The pain is real and people are already investing effort to solve it badly.

Early users are already seeing results. One founder tracked investor conversations through Memory Store and estimated he talked to 4-5x more people because his agents could draft contextual replies without manual briefing. It helped close his round.

Live in beta now. Would love feedback from anyone who's felt this pain! :)


I implemented a process:

- First, a snapshot: generated day-by-day files from the last 2 years of commits and used Claude to enrich every commit message (why, how, where, and output). Also created a File.java.md with the enriched commit history per file.
- With the history in place, I embedded all of it into a PostgreSQL database.
- Implemented an MCP server to query anything.
- Created watchdogs to follow up on projects I set up; a git hook creates stub files per commit.
- The process then enriches the new stub files and indexes them into PostgreSQL.

I did this for all internal projects, and the first rule in CLAUDE.md is to ask the MCP server for any information. Now Claude knows what the other related projects did and what it should adopt from them.
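The commit-stub step above could look roughly like this. The hook body is the part that would live in `.git/hooks/post-commit`; everything else just sets up a throwaway repo to demonstrate it. The `commit-stubs/` path and the stub format are my own guesses, not from the original setup.

```shell
#!/bin/sh
# Sketch of the per-commit stub step: a hook writes one stub file per
# commit so a background process can enrich and index it later.
# commit-stubs/ and the stub fields are assumptions, not the real layout.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

# --- hook body (would live in .git/hooks/post-commit) ---
write_stub() {
  mkdir -p commit-stubs
  sha=$(git rev-parse HEAD)
  {
    # Raw message and changed files; the enrichment pass adds why/how/where.
    git log -1 --format='commit: %H%nauthor: %an%nmessage: %s'
    echo 'files:'
    git diff-tree --root --no-commit-id --name-only -r HEAD
  } > "commit-stubs/$sha.md"
}
# ---------------------------------------------------------

echo hello > File.java
git add File.java
git commit -qm "add File.java"
write_stub
ls commit-stubs
```

The enrichment worker would then watch `commit-stubs/`, call Claude on each new stub, and index the result into PostgreSQL.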


I don't think this is Cerebras. Running on Cerebras would change model behavior a bit, and while it could potentially get a ~10x speedup, it would also be more expensive. So most likely this is them writing new, more optimized kernels, maybe for the Blackwell series?


Fair point, but one question remains: why isn't this speedup available in ChatGPT, only in the API?


Yeah, I agree. This is really unfortunate, because it seems there is something systemic at play that has become twisted up in a cult of personality, and that has made rigorous scientific investigation very difficult.



Just-in-time UI is an incredibly promising direction. I don't expect (in the near term) that entire apps will do this, but many small parts of them would really benefit. For instance, website/app tours could be generated atop the existing UI.


I am a bit boggled by the pricing lately, especially since the cost has increased even further. Is this driven by choices in model deployment (unquantized, etc.) or simply by perceived quality (as in "hey, our model is crazy good and we're going to charge for it")?


reMarkable is really lagging behind on this. I was thinking of finally biting the bullet and writing an app for the Paper Pro. Any ideas/takers?


I’ve even thought about building my own DIY e-ink reader/note-taker for this.

How do apps for the Paper Pro work?


Google's response:

"Read our statement on today’s decision in the case involving Google Search."

https://blog.google/outreach-initiatives/public-policy/doj-s...


Agreed. The fact that it has any structure at all is fascinating (and super pretty). It could hint at interesting internal structure. I would love to see a version for Qwen-3 and Mistral, too!

I wonder if being trained on significant amounts of synthetic data gave it any unique characteristics.

