
Huge respect for all your articles and work on LLMs, but this example should have used AI to create a tool that uses AI to intelligently filter Hacker News :)


Someone posted that last week.

https://www.hackernews.coffee/


Prompt engineering!


Been making things with wood for a while now, never used or heard of cut sealer! I usually put some type of finish on, maybe more on the ends as they absorb it better, but not just specifically on cuts.


I usually put an extra couple of coats of whatever the finish is on end grain. It does soak right in.


> Disclaimer: This is a fan-made website created by AI enthusiasts. We are not affiliated with, endorsed by, or connected to manus.im. This website is an independent project and operates separately from the official Agenttars

Loads of sites like this get submitted; what's the motivation, I wonder?


AI trains/farms/steals from the internet; the internet releases/publishes/steals the AI back?


Are unit tests a shiny fad? Second time I've seen it mentioned in this thread. Is there some other type of testing I should be doing, or have I been doing it all wrong for the last two decades?


For unit testing to pay off, it requires having modular units to test.

Programmers coming up through frameworks or functional programming often don't have those, and so the techniques OO unit testers use don't translate well at all. If the first "unit" you build is a microservice, the first possible "unit" test is the isolation test for that service.

I have watched junior engineers crawl over glass to write tests for something because they didn't know how to write testable code yet, and then the tests they write often make refactoring a-la-Martin-Fowler's-book impossible.

(And that is leaving aside the consultancies that want to be able to advertise "100% test coverage!" but don't actually care if the tests make software harder to maintain in the long run because they aren't going to be there.)

Eventually we'll be able to acknowledge that there are a lot of different skills in our profession, and that writing good code isn't about being "smart": it's about knowing how to write code well. But until then people will keep blaming the tools they don't know how to use.


Integration testing?

Less mocking, more bang for the buck.


Back in the day when I used to make "webapps", our integration tests were an epic faff.

Oftentimes we used things like Cucumber to describe a set of interactions; in the days before headless Chrome it'd drive Selenium, which would literally open Firefox on your desktop and start navigating the site.

It did its job, but the feedback loop was slow.

Today I mainly write libraries, CLI tools, and the occasional small HTTP API. I still have unit and integration tests, but they are all just standard pytest functions. I can run each type individually; my definition of an "integration" test is either something that talks to the outside world in some capacity, or something that is really slow (mostly running an ML model these days).

I much prefer today's approach, but admittedly I work on things with far fewer points of interaction which I think greatly simplifies the work.


I am so mad that the mockists stole the word "unit test" for their thing. The original definition of a unit test was writing "integration" tests for each of the sub-components of a system.

(Mockist tests are fine for people who really want them, as long as you delete them before checking in the code.)


I thought mockist tests were written in a separate test module.


Docker / containers are more than just that, though. Using them allows your Go process to be isolated and integrated into the rest of your tooling, deployment pipelines, etc.


It's Go; that could be trivially done with a script.

Heck, you can even cross-compile Go code from any architecture to another (even for different OSes), and Docker would be useless there unless it has mechanisms to bind qemu-$ARCH to containers via binfmt.


I'd argue that having it in a Docker container is much easier to integrate with the rest of many people's infra. On ECS, K8s, or similar? Docker is such an easy layer to slap on and it'll fit in easily in that situation.

Are you running on bare servers? Sure, a Go binary and a script is fine.


Yep, it's using Docker as a means of delivery, really. Especially in larger organisations this is just the done thing now.

I understand what the OP is saying but not sure they get this context.

If I were working in that world still I might have that single binary and a script, but I'm old school and would probably make an RPM package and add a systemd unit file and some logrotate configs too!


I've done both, from tiny scratch-based images with a single Go binary to full-fat Ubuntu-based things.

What is killing me at the moment is deploying Docker based AI applications.

The CUDA base images come in at several GB to start with; then typically a whole host of Python dependencies will be added, with things like PyTorch contributing almost a GB of binaries.

Typically the application code is tiny, as it's usually just Python, but then you have the ML model itself. These can be many GB too, so you need to decide whether to add the model to the image or mount it as a volume; regardless, it needs to make its way onto the deployment target.

I'm currently delivering double-digit-GB Docker images to different parts of my organisation, which raises eyebrows. I'm not sure of a way around it, though; it's less a Docker problem and more an AI / CUDA issue.

Docker fits current workflows, but I can't help feeling that custom VM images for this type of thing would be more efficient.
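For the scratch end of that spectrum, the usual trick is a multi-stage build; a minimal sketch (base image tag and binary name are illustrative):

```dockerfile
# Build stage: compile a static Go binary.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Final stage: just the binary, nothing else; the image is a few MB.
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

The multi-GB CUDA images don't benefit much from this trick, since their bulk is runtime libraries and model weights rather than build tooling.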


PyTorch essentially landed on the same bundling CUDA solution, so you're at least in good company.


Yep, and then I have some projects with PyTorch dependencies that use its own bundled CUDA, plus non-PyTorch dependencies that use a CUDA in the usual system-wide include path.

So CUDA gets packaged up in the container twice, unless I start building everything from source or messing about with RPATHs!


I think I agree. Objectively we do have it better than most and tech is generally an extremely cushy job.

Even here in Europe, salaries can match doctors' and lawyers', but the barrier to entry is much lower, and in my experience employment is still based on merit more than anything.

Perhaps there's some element of "don't rock the boat" but maybe some guilt too. We really have lucked out.

Not sure how comfortable I'd feel taking union action over my job that requires me to leave the house once a week but pays 3x a teacher's salary.


Yep it's an odd take!

I was last a "web developer" almost two decades ago, but dipping back in on a few occasions I am always appreciative of how much innovation has happened since then.

The world before the huge investment in browser technology was dark. Tables and spacers for meaningful layout, and Flash or Shockwave for anything interactive.

I remember a time when CSS-based drop-down menus were seen as some sort of state of the art.


> I remember a time when CSS-based drop-down menus were seen as some sort of state of the art.

They still are on mobile for navigation: full screen, sans JS.


Yes, for presenting a large catalog of products (a few hundred) for discovery purposes, an efficient menu is still a big challenge in terms of UX and technical implementation, all the more so when portability, accessibility, and cross-device support are taken into account.

Things that look like trivial banalities at a shallow level often turn out to need a lot of attention to many concurrent details.


Uh, a guess is that 1+ billion people are already good at using "drop down menus" along with check boxes, radio buttons, single line text boxes, multiline text boxes, push buttons, links. So, when those user interface controls are sufficient for the purpose, using something else might reduce the collection of happy users. The Web site of my bank stays close to such now classic controls.


Maybe this misses the point slightly?

I'm talking about a time when investment in browser development and web standards was so lacking that being able to achieve things like this blew everyone's mind:

https://meyerweb.com/eric/css/edge/menus/demo.html

Hacker News, were it around back then, would've gone as crazy for this post as we do for the latest AI model today.


> Maybe this misses the point slightly?

Maybe! My thoughts were, say, tangential or incidental.

A guess is that a central issue is how much in new features should we develop and use?

I see a dilemma: (A) I mentioned the old controls that go back to early Windows and even IBM's 3270 terminals. An advantage of these controls is that lots of software tools implement them and billions of people already understand them. (B) Being too happy with the old stuff or even the present risks progress that is possible and worthwhile.

Your post seemed to illustrate (B).

But generally in the industry, with smartphones, laptops, desktops, Apple, Google's Android, Windows, browsers, apps and extrapolating, we could have an explosion of new features that would complicate work for everyone and fragment the industry.

Ah, maybe Darwin would explain: Lots of mutations with only the best lasting??

For my work, I'm thrilled with the tools and technology available now that I get to exploit.


It shocks me that I remember css/edge so well after all these years.


True, it's a cliche but it's my dream also.

Except the dream includes first using a tech salary to pay off a mortgage or at least most of it, and then being able to comfortably work with my hands in some romantic artisanal manner.

AI probably gets me closer to spending 60 hours a week doing back-breaking groundwork and struggling to pay the bills.

