bruh2's comments | Hacker News

I can't recall the details, but this tool had quite a bit of friction the last time I tried downloading a site with it: too many new definitions to learn, too many knobs it asks you to tweak. I opted to use `wget` with the `--recursive` flag, which did exactly what I expected out of the box: crawl all the links it finds and download them. No tweaking needed, and nothing new to learn.


I think I had a similar experience with HTTrack. However, wget also needs some tweaking for relatively robust crawls, e.g. https://stackoverflow.com/a/65442746/464590
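
For anyone landing here later, the sort of invocation I ended up with looks roughly like this (a sketch, not a verbatim copy of that answer; example.com is a placeholder):

    wget --recursive --level=inf --page-requisites \
         --convert-links --adjust-extension --no-parent \
         --wait=1 --random-wait https://example.com/

--page-requisites pulls in the CSS/JS/images each page needs, --convert-links rewrites links so the mirror browses offline, and --no-parent keeps the crawl from wandering up the directory tree.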


> Windows are a failed analogy as files-and-folders, normal people do not understand them and software for normal people rightfully don't use them

Weird claim regarding files and folders. In my experience, my pretty tech-illiterate relatives have a strong grasp of them. Younger people do not, because they only use mobile computers that don't make frequent use of that abstraction.

Why are they a failed analogy? What are normal people doing instead of using them?


>What are normal people doing instead of using them?

They do things very simply.

Most people cannot multitask, which means they only ever work with one window at a time; they get immediately confused by multiple windows. Likewise with files and folders: most people can't grasp what they can't physically see, so the very concept of files and folders inside a computer is Pig Latin to them, and they just dump everything on the desktop, which they can physically see.

A lot of tech nerd sensibilities are based upon very specific assumptions that just don't apply to most people, normal people. Anyone who wants to say anything about human interface design needs to first go out into the real world and see how real, normal people actually use computers.


> Likewise files and folders, most people can't grasp what they can't physically see so the very concept of files and folders inside a computer is pig latin and they just dump everything on their desktop which they can physically see.

What do they dump on their desktop? Surely it is files and folders!


Do you really think others are "just" not "go[ing] out into the real world and see how real, normal people actually use computers"?

I think we can afford to admit that this is a bit reductive, and misrepresents the effort required.


Considering how out of touch techies generally are, I don't think it's an unreasonable stereotype/argument to make.

It's easy to see here too, not least the infamous Dropbox comment.


uv creator's response to the concerns regarding VC money:

> I don't want to charge people money to use our tools, and I don't want to create an incentive structure whereby our open source offerings are competing with any commercial offerings (which is what you see with a lot of hosted-open-source-SaaS business models).

> What I want to do is build software that vertically integrates with our open source tools, and sell that software to companies that are already using Ruff, uv, etc. Alternatives to things that companies already pay for today.

> An example of what this might look like (we may not do this, but it's helpful to have a concrete example of the strategy) would be something like an enterprise-focused private package registry. A lot of big companies use uv. We spend time talking to them. They all spend money on private package registries, and have issues with them. We could build a private registry that integrates well with uv, and sell it to those companies. [...]

> But the core of what I want to do is this: build great tools, hopefully people like them, hopefully they grow, hopefully companies adopt them; then sell software to those companies that represents the natural next thing they need when building with Python. Hopefully we can build something better than the alternatives by playing well with our OSS, and hopefully we are the natural choice if they're already using our OSS.

https://hachyderm.io/@charliermarsh/113103564055291456


Prior art.

> Facebook's mission is to give people the power to build community and bring the world closer together.

> Our informal corporate motto is "Don't be evil." We Googlers generally relate those words to the way we serve our users – as well we should. But being "a different kind of company" means more than the products we make and the business we're building; it means making sure that our core values inform our conduct in all aspects of our lives as Google employees.

> OpenAI.


'I don't want to' is very different to 'I will never'



I thought this problem would disappear upon switching to Kagi, but it suffers from the same disease, albeit to a lesser extent.

I remember reading a Google Search engineer on here explain that the engine just latches onto some unrendered text in the HTML code. For example: hidden navbars, prefetch, sitemaps.

I was kinda shocked that Google themselves, having infinite resources, couldn't get the engine to realize which sections actually get rendered... so maybe that's a fair excuse.
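
To illustrate the kind of thing that trips snippets up (a contrived sketch, not taken from that engineer's explanation): the nav below never shows up in the rendered page, but a pipeline working off the raw HTML sees its text just fine.

    <nav class="mobile-menu" style="display: none">
      Home About Pricing Careers Blog
    </nav>

Unless the engine actually evaluates the CSS (or renders the page), it can't tell that text apart from the article body.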


Judging by blog posts on HN, I got the impression that these vulnerabilities are often not rewarded at all, or rewarded with a minuscule amount. It almost seems like companies are begging hackers to sell these exploits elsewhere. Perhaps because they aren't penalized by regulators for breaches?


They can offer a low price because the risk of tanking your career or landing yourself in jail, plus the fact that the researcher probably doesn't know how to line up a sale, means the company is effectively the only buyer.

I would go the other way: companies offer low bug bounties because they don't want researchers to discover these bugs in the first place. This looks terrible for Arc, despite the fact that, had it been left undisclosed, it probably would have gone unexploited for years to come.


Incredible. How does that work? I thought Nitter was completely neutered a while ago.


Nitter worked by signing in with special "guest accounts" that I think were given to fresh downloads of the mobile apps.[0] Last year, X fully disabled that functionality, which left most Nitter instances broken. At that point, the developer publicly abandoned/ended the project.

The couple of instances floating around that still work are, to my knowledge, forks of the original Nitter that have been upgraded to work with a pool of manually created X accounts, which is a relatively expensive and fragile approach that most instances probably aren't up for taking.

[0] https://github.com/zedeus/nitter/issues/983#issuecomment-168...



Re: securing SSH keys: nowadays most password managers can store SSH keys and integrate nicely with your SSH agent, making it essentially equivalent to logging in with a password. I use KeePassXC[1], and the workflow consists of opening the database with my master password, then just `ssh machine`, so in my book it's at the same level of comfort as a web interface for your cloud provider.

[1] https://keepassxc.org/docs/KeePassXC_UserGuide#_setting_up_s...
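
Once the database is unlocked, you can confirm the agent actually sees the key before connecting (output abbreviated; the key comment and host name are placeholders):

    $ ssh-add -l
    256 SHA256:... me@laptop (ED25519)
    $ ssh machine

If `ssh-add -l` comes back empty, the agent integration isn't wired up and ssh will fall back to whatever keys are on disk.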


Am I missing something? There seems to be very little substance here: no link to the presentation itself, only slides that lay out a very ordinary introduction. Mentioning GitHub stars may have been the first red flag.


Hi mate, the link is there in the blog text, just before the summary part.


There is a link to the presentation, but yeah, there's not much substance in it either; a lot of nuance is skipped over, to the point where I'd consider it just wrong in places.

Some examples:

"Service object - Grouping object that gives you a stable IP (virtual IP) for the pods that have a certain LABEL": yes, you can use a Service to give your service a "stable IP", but that's just one type of Service; and yes, you can use a Service to target pods with a certain label, but selectors can do more than just that. (Also, what is a "virtual IP"?)

"Containers (Docker)" containers != docker

"Containers in a pod share the same IP address (localhost) and port space" this bit has been cribbed from "Kubernetes in action" (chapter 3 "Understanding how containers share the same IP and port space ": "they share the same IP address and port space") but for some reason they've inserted the "(localhost)" bit which is incorrect.

You can probably find better introductions on YouTube. The k8s docs are also quite understandable on their own.


Fair enough, but nothing actually clicked for me personally until I got my hands dirty and actually used it... then all the components made sense... I tried to give a sense of that in the presentation.


This sounds exactly like what I was looking for. I settled on htbuilder[1], but it certainly does not feel right, as it requires a fair bit of wrangling to fit with Django.

I'd love to help you with documentation and such; hit me up at smart.tent1246@fastmail.com if you'd like a partner(:

[1] https://github.com/tvst/htbuilder

EDIT: Actually, scrolling further in this thread, it looks like https://htpy.dev fits the bill? It has explicit integration with Django, which is what I was looking for.
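
If it helps anyone else evaluating it: going off the htpy docs (so treat this as a sketch, not gospel), elements take attributes as keyword arguments and children in square brackets, and render when converted to a string:

    from htpy import div, h1, p

    card = div(class_="card")[
        h1["Hello"],
        p["Rendered with plain Python"],
    ]
    print(card)  # <div class="card"><h1>Hello</h1><p>Rendered with plain Python</p></div>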


Looking over your examples makes me think of SXML:

https://en.wikipedia.org/wiki/SXML

