maxbaines's comments | Hacker News

This is hardly surprising given the announcement below: new partnerships with tech companies support Wikipedia's sustainability, which relies on human content.

https://wikimediafoundation.org/news/2026/01/15/wikipedia-ce...


I agree with the dig, although it's worth mentioning that this AI Cleanup page's first version was written on the 4th of December 2023.

Thank you


This should make you think...


Not seeing this in my day-to-day; in fact, the opposite.


Can you be more specific? E.g. refute something specific that the article mentions. Or are you only reacting to the title, not the article's contents?


I think it should be on the article to prove its title. I hardly think presenting one test case to some different models substantiates the claim that "AI Coding Assistants Are Getting Worse." Note that I have no idea if the title is true or not, but it certainly doesn't follow from the content of the article alone.


With LLMs being hard to test objectively, any claim made about them has to be substantiated with at least anecdotes. The article presented some backing; if you don't think it's enough, you've got to present some of your own, or people can't take you seriously.


I did present my own evidence to support _my_ argument that the article is woefully lacking data to support its conclusion. It's not on me to try to make the counterargument (that AI coding assistants aren't getting worse) because that's not my opinion.


I think, as the article mentions, it's garbage in, garbage out: we are more trusting and expect more. Coding assistants don't just need a good model, they need a good harness, and these methods have also changed recently.


The article is ridiculous garbage. I knew the IEEE had fallen to irrelevance, but that their magazine now prints nonsense like this -- basically someone's ad wrapped in an incredibly lazy supposition -- is damning.

The guy wrote code that depended on an external data file (one the LLM didn't have access to) and referred to a non-existent column. They then specifically prompted it to provide "completed code only, without commentary". This is idiotic.

"Dear LLM, make a function that finds if a number is prime in linear time. Completed code only! No commentary!".

Guy wanted to advertise his business and its adoption of AI, and wrote some foolish pablum to do so. How is this doing numbers here?


I mean... the naive approach to a prime number check is O(n), which is linear. You probably meant constant time?
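
For what it's worth, here's a rough sketch of my own (not from the article) of what I mean: trial division up to n-1 does O(n) divisions, i.e. linear in the value of n; the usual optimisation only goes up to sqrt(n).

    # naive trial division: O(n) divisions in the value of n
    def is_prime_naive(n: int) -> bool:
        if n < 2:
            return False
        for d in range(2, n):          # tries every candidate divisor up to n-1
            if n % d == 0:
                return False
        return True

    # usual optimisation: stop at sqrt(n), i.e. O(sqrt(n)) divisions
    def is_prime_sqrt(n: int) -> bool:
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True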


Couldn't agree more.

I would expect older models to make you feel this way.

* Agents' restraint about attempting the impossible (not being an "over-eager people pleaser", as it has been described) has significantly improved over the past few months. No wonder the older models fail.

* "Garbage in, garbage out" - yes, exactly ;)


I nearly always use Tailwind and had no idea there was even a Plus offering. Checking the site, I see it now, but it's a subtle link. I also wonder if shadcn/ui had something to do with the reduced usage of Plus.


shadcn/ui I'd argue is probably the single biggest factor in the declining Tailwind revenue more so than just LLMs in general.

Which is to say, shadcn/ui is what Tailwind should've created and maintained for a fee, rather than some HTML/CSS templates that are easily replicated.

I say this as someone who bought Tailwind+ to support the project many years ago and still uses Tailwind every single day.


Looks like a pretty useful offering: 128 GB of unified memory, with the ability to be chained. In the UK the release price looks to be £2,999.99. Nice to see AI inference becoming available to us all, rather than using a GPU (3090, etc.).

https://www.scan.co.uk/products/asus-ascent-gx10-desktop-ai-...


All Sparks only have a memory bandwidth of 270 GB/s though (about the same as the Ryzen AI Max+ 395), while the 3090 has 930 GB/s.

(Edit: GB of course, not MB, thanks buildbot)


The 3090 also has 24 GB of RAM vs 128 GB for the Spark.
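
As a rough back-of-envelope (my own numbers, not from any spec sheet): single-stream decoding is roughly memory-bandwidth-bound, so tokens/s is at best about bandwidth divided by the bytes streamed per token.

    # crude upper bound: each generated token streams the model weights once
    def max_tokens_per_s(bandwidth_gb_s: float, model_size_gb: float) -> float:
        return bandwidth_gb_s / model_size_gb

    # hypothetical ~40 GB of 4-bit quantised weights
    print(max_tokens_per_s(270, 40))   # Spark-class bandwidth: ~7 tok/s, but it fits in 128 GB
    print(max_tokens_per_s(930, 40))   # 3090-class bandwidth: ~23 tok/s, but 40 GB won't fit in 24 GB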


You'd have to be doing something where the unified memory is specifically necessary, and it's okay that it's slow. If all you want is to run large LLMs slowly, you can do that with split CPU/GPU inference using a normal desktop and a 3090, with the added benefit that a smaller model that fits in the 3090 is going to be blazing fast compared to the same model on the spark.
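
If anyone is curious what split CPU/GPU inference looks like in practice, here's a minimal sketch with llama-cpp-python; the model path and layer count are placeholders to tune for your own setup.

    from llama_cpp import Llama

    # offload as many transformer layers as fit in the 3090's 24 GB of VRAM;
    # the remaining layers run on the CPU out of system RAM
    llm = Llama(
        model_path="models/your-model.Q4_K_M.gguf",  # placeholder path
        n_gpu_layers=40,   # raise/lower until VRAM is nearly full
        n_ctx=4096,
    )

    out = llm("Q: What is the capital of France? A:", max_tokens=16)
    print(out["choices"][0]["text"])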


I believe you mean GB/s?


Eh, this is way overblown IMO. The product page claims this is for training, and as long as you crank your batch size high enough, you will not run into memory-bandwidth constraints.

I've fine-tuned diffusion models streaming from an SSD without a noticeable speed penalty at a high enough batch size.
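
Rough intuition, with made-up round numbers of my own: for a weight-bound training step the bytes moved are roughly fixed, while the FLOPs grow with batch size, so arithmetic intensity climbs until compute rather than bandwidth becomes the limit.

    # crude arithmetic-intensity estimate for one pass over P parameters
    def flops_per_byte(params: float, batch: int, bytes_per_param: int = 2) -> float:
        flops = 2.0 * params * batch            # ~2 FLOPs per parameter per sample
        bytes_moved = params * bytes_per_param  # weights streamed once per step
        return flops / bytes_moved

    for b in (1, 8, 64):
        print(b, flops_per_byte(7e9, b))        # hypothetical 7B-parameter model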


At that price (roughly 4000 USD), one could build a full HBM-powered Xeon system from the Sapphire Rapids generation.

Either build a single-socket system and give it some DDR5 to work alongside, or go dual socket with a bit less DDR5 memory.


I would hold my horses and see if the specs are actually true and not overblown like for the Spark; otherwise there are better options.


This is a Spark, so it is not going to be any different.


And if waiting six months is possible, do that.

Asus make some really useful things, but the v1 Tinker Board was a bit problem-ridden, for example. This is similarly way out on the edge of their expertise; I'm not sure I'd buy an out-there Asus v1 product this expensive.


I am very much pro-AI and use it daily for everything from code to product photos and games. But I feel these moments are among the few you actually want to keep AI away from. Imagine looking back, hopefully in 50 years, at an AI image.


Yes, that is a great point. Real moments and memories are irreplaceable. We don't see AI as a replacement for traditional photos, but as an option for couples who may not have the budget, time, or resources for a professional shoot. We just want to offer another option for people who would like to use AI to create some funny, low-cost photos to share on their social media (funny, but filled with love!).


There are plenty of slag heaps (spoil tips) near coal plants; I wonder how this could work with those? I guess the specific heat capacity is greater...
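
Rough numbers, all my own guesses rather than the article's: stored heat is just m * c * dT, so the material's specific heat and how hot you can safely run it dominate.

    # stored thermal energy E = m * c * dT, converted to kWh
    def stored_kwh(mass_kg: float, c_j_per_kg_k: float, delta_t_k: float) -> float:
        return mass_kg * c_j_per_kg_k * delta_t_k / 3.6e6

    # one tonne heated by 500 K; both c values are rough assumptions
    print(stored_kwh(1000, 800, 500))    # slag-like material: ~111 kWh
    print(stored_kwh(1000, 1500, 500))   # higher-c storage medium: ~208 kWh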


This type of plant is generally used for emergency power to balance the grid whilst other plants come online.


Really like the idea of this, and good timing for me. Will give this a try over the weekend.


Thanks! I hope this will be useful for you.

