Hacker News | romanovcode's comments

> the trackpad will be excellent

Nope. It is mechanical.


The mechanical trackpad of my 2007 MacBook (the first unibody) is still better than any PC trackpad I've ever used.

What do you mean, "mechanical"?

A mechanical trackpad is like an unpowered treadmill iirc. Sometimes they ship with a gimbal mount so you can scroll more than one direction.

This explicitly says "Multi-Touch trackpad for precise cursor control and support for gestures", so at most it's the clicking action that is mechanical (rather than the click being faked with haptic feedback, as it is on the current models)

Their mechanical trackpads were excellent too. It's only their keyboard that they messed up entirely.

Most seniors are hired for their code readability and real-life experience with real products and problems, not for raw code-writing ability.

> Nobody cares about you

This is such a low-IQ argument I cannot even. Yes, nobody cares about OP, you, me, whoever - until they do. Not to mention general harvesting for profiling and propaganda purposes.

General: what are people in this city/country/region/etc thinking? This is the main way the data is used: collected, then grouped. It is extremely powerful information for pushing a targeted agenda, whichever it might be.

Targeted: oh, you or someone close to you went to a political protest? Too bad we have all this information to put you and your family in jail. This is where they will suddenly care about you, even when it is NOT YOU but someone from your close circle who upset them.


It doesn't seem to work if you use Claude plugins like feature-dev and commit-push-pr.

I tested it with multiple PRs and I see nothing in GH or in the Entire dashboard.


This does not address it one bit. Everyone has to install it and authorize themselves with said GH repo, etc.

And since you mentioned "enterprise": they won't be able to use it if they do not use GH, and a lot of enterprise corpos use Azure DevOps.

Also, there is no user management, etc. If anything, this is ANTI-enterprise.


Same. Then I pasted the URL into Claude and asked it to explain the product to me: blocked. The irony is palpable.


For a solo developer this is pretty useless. For a team where everyone uses the same coding tool it might be useful.

I am afraid, however, that with tools like these Claude Code will just copy this in 3 months and have it as standard functionality, built in as a plugin.

I also find it ironic that this domain blocks all AI tools from accessing it: I asked AI to explain what the product is, and the site is blocking Claude/GPT access.


Would not be surprised if the whole post is AI-generated. I mean, it is Microslop after all.


Jeffrey Snover is a person not a company, his blog is private not corporate, and he works for Google.

Stop posting human slop


I mean, it's a very nice speech but why is Microsoft failing everywhere except Azure? Doesn't seem like outstanding results to me.


Microsoft's revenue was $281.7 billion for the last year, up 15% on the previous year. Revenue from Azure was > $75 billion. Doesn't seem like a failing company to me.

https://www.microsoft.com/investor/reports/ar25/index.html
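For scale, some quick arithmetic on the figures cited above (the two revenue numbers are the ones in the comment; Azure is reported only as "> $75 billion", so the share is a floor):

```python
# Back-of-the-envelope share calculation using the figures quoted above.
total_revenue = 281.7  # $ billions, last fiscal year
azure_revenue = 75.0   # $ billions, a lower bound ("> $75 billion")

azure_share = azure_revenue / total_revenue
print(f"Azure is at least {azure_share:.0%} of total revenue")
```

So Azure alone is at least roughly a quarter of Microsoft's total revenue.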


Surely Azure is failing too, it's easily the worst cloud option. We have a few clients who run APIs on Azure and for all of them I've had to write special systems to monitor and handle the API falling over.


Agreed, but for some reason they get a lot of enterprise clients and make millions off them.


> it's easily the worst cloud option.

Are you including Oracle in that?


I stand corrected. I'd forgotten they even have one!


> failing everywhere except Azure

Azure is a failure once you factor in the concentration risk of their customer portfolio. Most of the revenue comes from maybe 10 companies. OpenAI alone is ~50% of future commitments.


Well this is extremely disappointing to say the least.


It says "subscription users do not have access to Opus 4.6 1M context at launch" so they are probably planning to roll it out to subscription users too.


Man I hope so - the context limit is hit really quickly in many of my use cases - and a compaction event inevitably means another round of corrections and fixes to the current task.

Though I'm wary about that being a magic bullet fix - already it can be pretty "selective" in what it actually seems to take into account documentation wise as the existing 200k context fills.


Hello,

I check context use percentage, and above ~70% I ask it to generate a prompt for continuation in a new chat session to avoid compaction.

It works fine, and saves me from using precious tokens for context compaction.

Maybe you should try it.
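As a rough sketch of that workflow (the ~4 chars/token estimate, the budget, and the handoff prompt wording are all my own assumptions, not anything Claude exposes):

```python
# Sketch of the "hand off before compaction" workflow described above.
# Token counts are approximated with the common ~4 chars/token heuristic;
# CONTEXT_BUDGET and the prompt wording are assumptions for illustration.

CONTEXT_BUDGET = 200_000   # standard Claude context window, in tokens
HANDOFF_AT = 0.70          # generate a continuation prompt above ~70% use

def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English text."""
    return len(text) // 4

def should_hand_off(transcript: str) -> bool:
    """True once the conversation is past the handoff threshold."""
    return estimate_tokens(transcript) / CONTEXT_BUDGET > HANDOFF_AT

def handoff_request() -> str:
    """Prompt asking the model to summarize state for a fresh session."""
    return (
        "Summarize the current task, decisions made so far, and remaining "
        "steps as a prompt I can paste into a new chat session."
    )

transcript = "x" * 600_000  # ~150k estimated tokens, i.e. 75% of budget
if should_hand_off(transcript):
    print(handoff_request())
```

The point is just to trigger the handoff yourself while there is still enough free context for the model to write a good summary.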


How is generating a continuation prompt materially different from compaction? Do you manually scrutinize the context handoff prompt? I've done that before, but if you don't, I do not see how it is much different from compaction.


I wonder if it's just: compact earlier, so there's less to compact, and more remaining context that can be used to create a more effective continuation


Is this a case of doing it wrong, or do you think accuracy is good enough given the amount of context you often need to stuff it with?


In my example the Figma MCP takes ~300k tokens per medium-sized section of the page, and it would be cool to let it read that whole and implement Figma designs directly. Currently I have to split it, which is annoying.
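The splitting step can be sketched like this (a hypothetical helper, not the Figma MCP's API; the per-chunk budget and the ~4 chars/token estimate are my assumptions):

```python
# Sketch of manually splitting an oversized design export into chunks
# that fit a model's context budget. The chars-per-token heuristic and
# the budget are assumptions; this is not the Figma MCP's API.

CHUNK_BUDGET = 150_000  # tokens per chunk, leaving headroom in a 200k window

def split_sections(sections: list[str], budget: int = CHUNK_BUDGET) -> list[list[str]]:
    """Greedily pack page sections into chunks under the token budget."""
    chunks, current, used = [], [], 0
    for section in sections:
        tokens = len(section) // 4  # rough chars-per-token estimate
        if current and used + tokens > budget:
            chunks.append(current)  # flush the full chunk
            current, used = [], 0
        current.append(section)
        used += tokens
    if current:
        chunks.append(current)
    return chunks
```

Each chunk then becomes one request, which is exactly the manual splitting the comment complains about having to do.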


I mean, the systems I work on have enough weird custom APIs and internal interfaces that just getting them working seems to take a good chunk of the context. I've spent a long time trying to minimize every input document where I can - compact and terse references - and I still keep hitting similar issues.

At this point I just think the "success" of many AI coding agents is extremely sector dependent.

Going forward I'd love to experiment to see if that's actually the problem, or just an easy explanation for failure. I'd like to play with more controls on context management than "slightly better models" - like being able to select/minimize/compact the sections of context I feel are relevant for the immediate task, to whatever "depth" of detail is needed, and remove those that aren't likely to be relevant from consideration. Perhaps each chunk could be cached to save processing power. Who knows.
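The kind of control described above could be sketched as a tagged context store (every name here is hypothetical; no agent exposes this today as far as I know):

```python
# Sketch of per-chunk context controls: tag each piece of context with a
# relevance flag and a detail "depth", then assemble only what the
# immediate task needs. All names are hypothetical.

from dataclasses import dataclass

@dataclass
class ContextChunk:
    name: str        # e.g. "internal API reference"
    text: str
    depth: int       # 0 = summary only, higher = more detail
    relevant: bool   # include for the current task?

def assemble(chunks: list[ContextChunk], max_depth: int) -> str:
    """Keep only relevant chunks at or below the requested detail depth."""
    kept = [c.text for c in chunks if c.relevant and c.depth <= max_depth]
    return "\n\n".join(kept)
```

A cache key per chunk (name plus depth) would then let repeated runs reuse already-processed sections, which is the caching idea the comment gestures at.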


lmao what are you building that actually justifies needing 1M tokens on a task? People are spending all this money to do magic tricks on themselves.


The Opus context window is 200k tokens, not 1M.

But I kinda see your point - assuming from your name you're not just a single-purpose troll - I'm still not sold on the cost effectiveness of the current generation, and can't see a clear and obvious change to that for the next one, especially as they're still loss leaders. Only if you play silly games like ignoring the training costs - i.e. the majority of the costs - do you get even close to the current subscription prices being sufficient.

My personal experience is that AI generally doesn't do what it is being sold for right now, at least in the contexts I'm involved with - especially going by the somewhat breathless comments on the internet. Why are they even trying to persuade me in the first place? If they don't want to sell me anything, they could just keep the advantage for themselves rather than replying with the 500th "You're Holding It Wrong" comment with no actionable suggestions. But I still want to know, and am willing to put the time, effort and $$$ in to make sure I'm not deluding myself by ignoring real benefits.


I do not trust that; similar wording was used when Sonnet 1M launched. Still not the case today.


They want the value of your labor and competency to be 1:1 correlated with the quality and quantity of tokens you can afford (or be loaned)??

It's a weapon whose target is the working class. How does no one realize this yet?

Don't give them money; code it yourself - you might be surprised how much quality work you can get done!

