Hacker News | aschobel's comments

Can you really register iMessage on emulated macOS these days? I'd love to learn more; the AIs I asked say it doesn't seem possible in VMs anymore.

I think you need to register on a real Mac (2 of my 3 MBPs use OCLP), but you can then use an emulated one if you add it to your Apple account. Either way, I don't recommend using a protocol behind such a moat. Probably better to use Signal or Threema.

Moltbot is supposed to be a 'personal AI assistant'

With >60% market share in the US, you can't really expect people to just 'not use iMessage'. It's what the messages are going to be coming in on.


It’s been amazing for me for Go and TypeScript; and pretty decent at Swift.

There is a steep learning curve. It requires good software engineering practices: have a clear plan, and be sure to have good docs and examples. Don't give it an empty directory; give it scaffolding it can latch onto.
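Concretely, "scaffolding" can be as small as a conventions file plus one worked example before you point the agent at the repo. A minimal sketch (file names and contents here are just illustrative, not a required convention):

```shell
# Seed the repo with structure the agent can latch onto.
mkdir -p myproj/docs

# A short conventions file most coding agents will read first.
cat > myproj/AGENTS.md <<'EOF'
# Project conventions
- Go 1.22, standard library first; wrap errors with %w.
- Tests live next to the code as *_test.go.
EOF

# One worked example the model can imitate.
cat > myproj/docs/EXAMPLE.md <<'EOF'
See internal/greeter for the house style: small packages,
table-driven tests, no global state.
EOF
```

The point isn't these particular files; it's that the first thing the model reads sets the pattern everything else follows.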


Just a few ancestors up:

> AI is pretty bad at Python and Go as well.

I guess there's probably something other than which language you're using that's affecting this. Business domain or code style? No idea.


there are skills / subagents for that

something like code-simplifier is surprisingly useful (as is /review)

https://x.com/bcherny/status/2007179850139000872


I've never hit a limit with my $200-a-month plan


Agreed and skills are a huge unlock.

Codex CLI even has a skill to create skills; it's super easy to get up to speed with them

https://github.com/openai/skills/blob/main/skills/.system/sk...
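For anyone who hasn't tried them: a skill is essentially a folder containing a SKILL.md with a small frontmatter block. A rough sketch from memory (check the repo above for the canonical field names):

```
skills/code-simplifier/SKILL.md:

---
name: code-simplifier
description: Simplify recently written code without changing
  behavior; use after a change lands to remove dead code.
---

Below the frontmatter, the body is plain markdown instructions
that the agent only loads into context when the skill is invoked.
```

The frontmatter description is what the agent matches against when deciding whether to pull the skill in, so it's worth making it specific.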


For coding I don’t use any of the previous gen models anymore.

Ideally I would have both fast and SOTA; if I had to pick one, I'd go with SOTA.

There's a report by OpenRouter on what folks tend to pay for, and in the coding domain it's generally SOTA models. Folks are still paying a premium for them today.

There's a question of whether there's a bar where coding models are "good enough"; for myself, I always want smarter / SOTA.


FWIW, coding is one of the largest usages for LLMs where SOTA quality matters.

I think the bar for when coding models are "good enough" will be a tradeoff between performance and price. I could be using Cerebras Code and saving $50 a month, but Opus 4.5 is fast enough, and I value the peace of mind of knowing its quality is higher than Cerebras' open-source models enough to spend the extra money. It might take a while for this gap to close, and what counts as "good enough" will be different for every developer, but certainly the gap can't exist forever.


I just use a mix: Cerebras Code for lots of fast, simpler edits and refactoring, and Codex or Claude Code for more complex debugging or for planning and implementing new features. Works pretty well. Then again, I move around so many tokens that doing everything with one provider would require either their top-of-the-line subscription or paying a lot per-token some months. And then there's the fact that a single model (even SOTA) can never solve every problem; sometimes I also need to pull out Gemini (3 is especially good) or others.


I logged into my PGE and saw this

https://www.pge.com/en/newsroom/currents/energy-savings/pg-e...

shrug, they claim prices are going down?


I’m basically only using the Codex CLI now. I switched around the GPT-5 timeframe because it was reliably solving some gnarly OpenTelemetry problems that Claude Code kept getting stuck on.

They feel like different coworker archetypes. Codex often does better end-to-end (plan + code in one pass). Claude Code can be less consistent on the planning step, but once you give it a solid plan it’s stellar at implementation.

I probably do better with Codex mostly due to familiarity; I've learned how it "thinks" and how to prompt it effectively. Opus 4.5 felt awkward for me for the same reason: I'm used to the GPT-5.x / Codex interaction style. Co-workers are the inverse: they adore Opus 4.5 and feel Codex is weird.


Grüezi! Is there a way to re-generate my wrapped?

https://hn-wrapped.kadoa.com/aschobel


I had a similar experience but overall the idea is super charming. I do like the personalized HN for 2035. Thank you for building it!

