siddboots's comments | Hacker News

This ain't what you asked, but I'm using Claude Code with a Pro subscription and I get about an hour of use out of it before I run out of tokens. Then I spend 4 hours thinking about how to set up my context for the next session.

I have a very different experience. I have built https://github.com/pixlie/SmartCrawler almost entirely on Claude Code with some usage of Google Jules. Even all GitHub workflows are from CC. I still have tokens left since I try multiple other ideas in parallel.

I have coded a few landing pages and even a full React Native app with the same Claude Pro account in the last month. I do not consider this huge usage, but this is similar to a couple months of my own programming with AI assistance (like Zed + Supermaven).

Please note: SmartCrawler is not ready for real usage. You can certainly try it out, but I am still changing the inner workings and it is half-complete.

Also, there are many branches of code that I have thrown away because I did not continue with that approach. For example, I was trying a bounding-box approach to detect what data to extract, using each HTML element's position and size in the browser. All of it was coded with Claude Code. Exploring such ideas is cheap, in my opinion.


Gave it my first try yesterday: burned through the $20 limit in maybe an hour, and I haven't hit the Pro limit yet.

Guess I really have to look into making this more efficient.


More walking around the block thinking and more breaks for a cup of coffee.

I’d say this is firmly about knowing how colour mixing works, and not about memorising.

I think both approaches are useful. AI2027 presents a specific timeline in which a) the trajectory of tech is at least somewhat empirically grounded, and b) each step of the plot arc is plausible. There's a chance of it being convincing to a skeptic who had otherwise thought of the whole "rogue AI" scenario as a kind of magical thinking.


For what it's worth, I've read both Bostrom's Superintelligence and AI 2027. Reading Superintelligence was interesting and for me really drove home how hard setting aligned goals for an AI is, but the timelines seemed far enough out that it wasn't likely to be something that would matter in my lifetime.

AI 2027 was much more impactful on me. It probably helps that I read it the same week I started playing with agent mode on GitHub Copilot. Seeing what AI can already do, especially compared to six months ago, and then seeing their projections made AI seem like something much more worth paying attention to.

Yeah, getting from here to being killed by rogue AI nanobots in less than five years still seems pretty far-fetched to me. But each of the steps in their scenario didn't seem completely outside the realm of possibility.

So for me personally, my 80% confidence interval includes both things stagnating pretty much where they are now and something more like AI 2027. I suspect we'll be fine, but AGI seems like a real enough possibility that it's worth working on a contingency plan.


I agree, but I think you're assuming a certain type of person who understands that a detailed prediction can be both wrong and right simultaneously. And that it's not so much about getting all the details right, but being in the right ballpark.

Unfortunately there's a huge number of people who get obsessed about details and then nitpick. I see this with Eliezer Yudkowsky all the time, where 90% of the criticism of his views is just nitpicking of his weaker predictions while ignoring his stronger predictions regarding the core risks that could result in those bad things happening. I think Yudkowsky opens himself up to this, though, because he often makes very detailed predictions about how things might play out, and this is largely why he's so controversial, in my opinion.

I really liked AI 2027 personally. I thought the tabletop exercises specifically were a nice heuristic for predicting how actors might behave in certain scenarios. I also agree that it presented a plausible narrative for how things could play out. I'm also glad they didn't wimp out with the bad ending. Another problem I have with people who are concerned about AI risk is that they shy away from speaking plainly about the fact that, if things go poorly, your loved ones in a few years will probably be either dead, in suspended animation on a memory chip, or in a literal digital hell.


Maybe lack of empathy, although I don't think it's because managers are especially unempathetic people. We have a deeply encultured expectation that deference, gratitude, and loyalty are owed much more in the upward direction than in the downward direction. We expect employees to act as though they are subjugated, rather than merely subordinate.


I like to tell my colleagues that the Heaviside function is so named because it is heavier on one side than the other.
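
(For anyone who hasn't met it: under the usual convention, the step function puts all of its weight on one side of the origin, though some authors set H(0) = 1/2 instead,

    H(x) = \begin{cases} 0, & x < 0 \\ 1, & x \ge 0 \end{cases}

which is what makes the pun work.)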


A Gaussian process fits a single high-dimensional Gaussian: for example, by treating n observations along a single dimension as one draw from an n-dimensional distribution.

Gaussian mixture models fit a large number of low-dimensional Gaussians: for example, you might imagine 2D data generated by several superimposed 2D Gaussians.

This approach is just an example of the latter. It uses higher-dimensional Gaussians to capture extra information from a scene, but it does not emulate an infinite-dimensional space in the way that defines Gaussian processes.
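
To make the contrast concrete, here's a minimal sketch in Python (assuming numpy and scikit-learn; the kernel, length-scale, and cluster parameters are all arbitrary choices for illustration):

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Gaussian process view: n inputs define ONE n-dimensional Gaussian
    # whose covariance comes from a kernel over the inputs.
    n = 50
    x = np.linspace(0, 1, n)
    K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.1 ** 2)  # RBF kernel
    f = np.random.multivariate_normal(np.zeros(n), K + 1e-8 * np.eye(n))
    # f is a single joint draw of function values at all 50 inputs.

    # Gaussian mixture view: MANY low-dimensional (here 2D) Gaussians,
    # each explaining one cluster of points.
    pts = np.vstack([
        np.random.multivariate_normal([0, 0], 0.1 * np.eye(2), size=100),
        np.random.multivariate_normal([3, 3], 0.2 * np.eye(2), size=100),
    ])
    gmm = GaussianMixture(n_components=2).fit(pts)

The GP side is one Gaussian whose dimensionality grows with the number of observation points; the GMM side is many Gaussians whose dimensionality is fixed by the data.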


To add to a sibling comment: if you're interested in learning a bit about both Gaussians as density estimators (like Gaussian Mixture Models, aka GMMs) and Gaussian Processes (GPs), I have some write-ups here: [1] and [2].

[1] Fun with GMMs https://blog.quipu-strands.com/fun_with_GMMs

[2] This is a larger article on BayesOpt, but I've a section dedicated to GPs: https://blog.quipu-strands.com/bayesopt_1_key_ideas_GPs#gaus...


It's just a contract - if both parties are happy to deal with the energy in bulk, then there's no problem.

However, if either the buyer or the seller thinks there is a risk that some of the generation won't be produced, there are still lots of ways to manage that risk in a relatively simple way. Probably the simplest is a variation on "take-or-pay", where the buyer holds some of the risk when the generator is curtailed, i.e., not dispatched by the system controller. Essentially that means the buyer is committing to have load available to generate against.
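
As a toy illustration of how a take-or-pay settlement shifts curtailment risk onto the buyer (all the figures and the settlement structure here are hypothetical, not drawn from any real PPA):

    # Toy take-or-pay settlement: the buyer pays for the contracted volume
    # even when the system controller curtails the generator, so the buyer
    # carries the curtailment risk. All numbers are made up.
    contract_mwh = 100.0    # volume the buyer committed to take
    price = 50.0            # agreed PPA price, $/MWh
    delivered_mwh = 70.0    # what was actually dispatched
    curtailed_mwh = contract_mwh - delivered_mwh

    buyer_pays = contract_mwh * price  # full contracted volume, regardless
    print(f"Buyer pays ${buyer_pays:,.0f} for {delivered_mwh:.0f} MWh "
          f"delivered ({curtailed_mwh:.0f} MWh curtailed)")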

I've also seen renewable PPAs with time of use specified either in volume for purchase, or in the price structure.


It resembles a pie chart, but if you instead think of it as a stopwatch, it makes perfect sense.


It would make more sense if each time were an arc instead of a segment. And what does one revolution represent?


Amount of time to eat a pie while waiting for your numerical library of choice.

Conclusion: Polars gives you indigestion.


Just a data point: I've got access to plugins as of the last hour, but no access to web browsing yet.


And another: I did have access to plugins (but not web browsing), but the list of plugins would not load. I seem to have been reverted to what looks more like the May 3rd release now (other than it no longer has the deprecated GPT-3.5 model available).


I love that reddit now wants you to log in to view 18+ subreddits, and that the work-around is to replace "www" with "old" in the URL, asserting that you are indeed old enough.


Lol, I noticed this. I can guarantee they're removing old.reddit soon; I can't see why else they'd do the blocking on the front end this way.


I was under the impression that most high-value content producers, and the moderation tools that make the bigger subs usable, rely on old. They know they'll drive away all the power users if it goes.

