
1. Start with root to bind the port below 1024.

2. Give up root, because you don't need it any further.

3. Only accept non-root logins.

4. When a user creates a session, if they need root within the session, they can obtain it via sudo or su (a rough sketch of steps 1 and 2 follows).
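
Not how OpenSSH actually does it, just a minimal Go sketch of steps 1 and 2, assuming a Linux host; the port (23) and the uid/gid (65534, "nobody") are placeholders:

  package main

  import (
    "fmt"
    "net"
    "syscall"
  )

  func main() {
    // Step 1: still root here, so we can bind a port below 1024.
    ln, err := net.Listen("tcp", ":23")
    if err != nil {
      panic(err)
    }

    // Step 2: give up root for good. Drop the group first, then the user
    // (65534 is the placeholder "nobody" uid/gid).
    if err := syscall.Setgid(65534); err != nil {
      panic(err)
    }
    if err := syscall.Setuid(65534); err != nil {
      panic(err)
    }

    // We can still accept connections, but we can no longer setuid() to the
    // user who is logging in, which is the privilege steps 3 and 4 still
    // need to come from somewhere.
    fmt.Println("listening as an unprivileged user on", ln.Addr())
  }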


That still needs a way to change users, and OpenSSH already has privilege separation. That hardens the process somewhat by reducing the amount of code running with the ability to change the uid for a session, but fundamentally something still needs permission to call setuid() or the equivalent.

Yes, but changing users is a function of the shell (or maybe more specifically /usr/bin/login), not the SSH daemon.

Yeah, but then we’ve recreated this CVE, which is caused by calling login(1) unsafely. The point was that the person I was replying to misunderstood the problem and largely seemed to be conflating telnetd with OpenSSH.

Congratulations, you've created a server that lets people have shells running as the user running telnetd.

You presumably want them to run as any (non root) user. The capability you need for that, to impersonate arbitrary (non-root) users on the system, is pretty damn close to being root.


Well obviously each user just needs to run their own telnet daemon, on their own port of course.

You still need to have privileges to become the userid of the user logging in. OpenSSH does do privsep, but you still need a privileged daemon.

I'm not sure that you need root because of the port - I think login itself needs to run as root, otherwise it can't log in to anything other than the account it's running under.

HARD AGREE (to your disagree)


Claude Code has "plan mode" for this now. It enforces this behavior. But it's still poorly documented.


They should add a “cmd-enter” for ask, and “enter” to go.

Separately, if I were at Cursor (or any other company for that matter), I’d have the AI scouring HN comments for “I wish x did y” suggestions.


I've been thinking about this a lot recently - having AI automate product manager user research. My thread of thought goes something like this:

0. AI can scour the web for user comments/complaints about our product and automatically synthesize those into insights.

1. AI research can be integrated directly into our product, allowing the user to complain to it just-in-time, whereby the AI would ask for clarification, analyze the user's needs, and autonomously create/update an idea ticket on behalf of the user.

2. An AI integrated into the product could actually change the product UI/UX on its own in some cases, perform ad-hoc user research by asking the user "would it be better if things were like this?" while measuring objective usability metrics (e.g. task completion time), and then use that validated insight to automatically spawn a PR for an A/B experiment.

3. Wait a minute - if the AI can change the interface on its own - do we even need to have a single interface for everyone? Perhaps future software would only expose an API and a collection of customizable UI widgets (perhaps coupled with official example interfaces), which each user's "user agent AI" would then continuously adapt to that user's needs?


> 3. Wait a minute - if the AI can change the interface on its own - do we even need to have a single interface for everyone? Perhaps future software would only expose an API and a collection of customizable UI widgets (perhaps coupled with official example interfaces), which each user's "user agent AI" would then continuously adapt to that user's needs?

Nice, in theory. In practice it will be "Use our Premium Agent at $24.99/month to get all the best features, or use the Basic Agent at $9.99/month, which will be less effective, less customizable, and inject ads".


Well, at the end of the day, capitalism is about competition, and I would hope for a future where that "User Agent AI" is a local model fully controlled by the user, and the competition is about which APIs you access through it. So maybe "$24.99/month to get all the best features", but (unless you relinquish control to MS or Google) users wouldn't be shown any ads unless they choose to receive them.

We're seeing something similar in VS Code and its zoo of forks - we're choosing which API/subscriptions to access (e.g. GitLens Pro, or Copilot, or Cursor/Windsurf/Trae etc.), but because the client itself is open source, there aren't any ads.


Claude Code denies that it has a plan mode...


It's a prequel to the novel, actually. But I don't think the advertising makes that apparent enough.

It's a walking simulator for the most part (for those who know what that means). Think of it as a journey you take part in. But there are a few choices you can make that change a bit of who dies and effect a slight change in the ending.

I enjoyed it thoroughly, and felt it was a great representation of the retrofuturistic world the book presented; it stayed mostly in the style of that era.


May I ask that you please add a spoiler warning? :)


The spoiler is that a non-zero number of characters die? Or did they edit their post?


Now you've made that a spoiler, and one that really catches the eye. Why?


It was already very obvious, so I don't take responsibility for that. But I remain unsure that's what foggyToads was talking about, since it's such a very silly "spoiler" to complain about, and I like to think better of people than that.


Knowing anyone dies would be a spoiler, no?


Technically, in that it reveals non-zero bits about the ending, but really it would be more surprising if no one died, given conventional storytelling in that kind of setting.


[flagged]


Oh, look who's the boss!

Tell us: when you bought Hacker News, were there a lot of negotiations, or did you just tell them that you have it now and that was that?

Personally, I couldn't care less about spoilers, even when they are actual spoilers. If the work is any good it will work with or without knowing details just the same. In this case there is so little information shared that we can't even talk about it being a spoiler. Not worth wrapping one's mind around it.


> If the work is any good it will work with or without knowing details just the same

Counter example:

Only because of the twist, the actual reveal of what the project in Horizon Zero Dawn really was (in the game of that name), did it become a very emotional moment for me. I had to stop the gameplay video (I never play games; I watch the stories of story-rich games on YouTube) and cry for a bit.

Surprises and "reveals" can be important.

I agree about spoilers found in discussions of a work, though. One has to expect to find them there, and it is easily avoided by not reading a discussion of a work before you have read or watched it.


Personally I agree. You only get one chance to learn something for the first time. But of course the central mystery of a whole game is a much more specific and interesting spoiler than "someone in the story dies, probably", which is the alleged "spoiler" I started by asking about.


Don’t go read random discussion about things you don’t want to be spoiled about. Nobody owes you anything with regards to what is or isn’t discussed.


You don't know what you are reading... until you read it.

It takes only a second to be polite; apparently that's too much to ask.


If spoilers are so bad for you, you should probably look at the submitted link before reading the comments.


Knowing nothing about the game besides that website, I assumed from the pic that a non-zero amount of characters die.


The site for the game shows the skull of a human inside of a helmet.

Suggesting that your choices will impact who dies in the game is about as much a spoiler as knowing you shoot guns in Call of Duty.

Frankly, if you don't want to risk reading spoilers, don't engage in conversations about the medium where it might be spoiled for you.


I think the "walking simulator" format sounds fitting here.


I've seen UDP used to great effect in video streaming, especially latency-sensitive video streaming such as cloud gaming, where waiting for a late packet is no longer useful.


Not as popular as it once was, but it is still in use:

https://en.wikipedia.org/wiki/Real-Time_Streaming_Protocol

Cheers =3


RTSP is the control protocol. Some other protocol is needed for the actual audio/video streaming. That's usually RTP, these days.

RTP is a core part of WebRTC, for example.

When you're doing a video call in a web browser, you're using WebRTC, including RTP. In fact, WebRTC is essentially the only way to send UDP packets from JavaScript!

RTSP is still used by older streaming systems and hardware ecosystems that are slow to change, such as network-connected security cameras. But in newer applications, WebRTC has mostly replaced it. Of course, the QUIC effort is in part an attempt to replace WebRTC, so the wheel continues to turn!


WebRTC still has its own set of issues, and I found it only slightly improved over other options when compiling the ARM64 port:

https://github.com/mpromonet/webrtc-streamer.git

I remain unconvinced UDP-based streams will stick around in the long term, but WebRTC certainly made it easier to peer a connection. ;)


Do


Dew


  Location: Santa Monica, California
  Remote: Yes (and local hybrid)
  Willing to relocate: No
  Technologies: Golang, Python, JavaScript, Ruby, Java, AWS, Kubernetes, Postgres, MySQL, CI/CD, Serverless, Microservices, Scalability, Distributed Architectures
  Résumé/CV: http://resume.jason-stillwell.com/
  Email: dragonfax@gmail.com
Senior Software Engineer with 25 years of experience at big names like eBay, PayPal, Twitter, and Zendesk, but also with small startups such as Wheels (micromobility) and Buildzoom. Mostly in backend and feature development roles, and more recently, cloud and infrastructure.


I too couldn't recognize him. I had to dig around the main site and verify that this is indeed the Peter Gabriel that we all know.


Abuse was mostly written in Lisp. It's a 2D side-scrolling action game from the '90s that was unusual in being commercial, with a brick-and-mortar release, while also supporting Linux. It uses the keyboard for movement and the mouse for aiming at the same time, which was also fairly unique at the time. The code is open source, complete with modernization, but you have to dig around for it nowadays.

https://en.wikipedia.org/wiki/Abuse_(video_game)


Any external API that could reasonably take a long time to return (because of the work it's doing) should have an asynchronous API, i.e. you submit a request to start the work and can later make another request to check whether it's complete and fetch any results, rather than waiting on a live request.

Good practice for using external APIs is to NOT use any default HTTP client settings: always provide your own timeouts, set to what you consider reasonable for connections and responses, and use a context system with deadlines so you can time out any request that takes more than a reasonable time to complete. That turns the surprise long, expensive requests you describe into nothing more than short errors (which hopefully you'll pick up on after a while, as long as you've got your alerting system set up right).
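
Something like this, to sketch it in Go with net/http; the URL and the specific timeout values are placeholders you'd tune to your own latency budget:

  package main

  import (
    "context"
    "fmt"
    "net"
    "net/http"
    "time"
  )

  func main() {
    // Never rely on http.DefaultClient: it has no timeouts at all.
    client := &http.Client{
      Timeout: 5 * time.Second, // hard cap on the entire request
      Transport: &http.Transport{
        DialContext: (&net.Dialer{
          Timeout: 1 * time.Second, // connection establishment
        }).DialContext,
        TLSHandshakeTimeout:   1 * time.Second,
        ResponseHeaderTimeout: 2 * time.Second, // time to first response byte
      },
    }

    // Per-request deadline via context, so a slow call becomes a fast error.
    ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
    defer cancel()

    req, err := http.NewRequestWithContext(ctx, http.MethodGet,
      "https://api.example.com/v1/status", nil) // placeholder URL
    if err != nil {
      panic(err)
    }

    resp, err := client.Do(req)
    if err != nil {
      fmt.Println("failed fast:", err) // feed this into your alerting
      return
    }
    defer resp.Body.Close()
    fmt.Println("status:", resp.Status)
  }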


> Any external API that could reasonably take a long time to return should have an asynchronous API.

I agree, but that doesn’t solve the problem here. The remote API was asynchronous; I was just waiting for it to ack my instruction. Because of issues out of my control (network congestion, maybe?) the time to get a 200 OK from the server shot up.

> always provide your own timeouts to what you consider reasonable for connections and responses

Agreed here as well, and I was providing my own timeout. The problem is that (cost-wise) it’s fine for 1% of requests to hit a 5-second timeout, but gets expensive when 100% of requests do. And lowering the timeout means that during normal times, requests that have latency in the tail of the distribution but ultimately go through would fail, which is undesirable.


I don't think that's what they meant by an asynchronous API. More likely something along the lines of "schedule a job" (one fast request) and "call me back at my endpoint when the job is finished" (or variations; the gist is that you are not stuck waiting for the server response).
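
Roughly this, sketched in Go from the client side; the api.example.com endpoints, the job fields, and the callback_url parameter are all made up for illustration:

  package main

  import (
    "encoding/json"
    "fmt"
    "net/http"
    "strings"
    "time"
  )

  type job struct {
    ID     string `json:"id"`
    Status string `json:"status"`
  }

  func main() {
    // One fast request: ask the server to schedule the work. It should just
    // record the job and answer immediately (e.g. 202 Accepted plus an id).
    body := strings.NewReader(`{"callback_url": "https://ours.example.com/hooks/job-done"}`)
    resp, err := http.Post("https://api.example.com/v1/jobs", "application/json", body)
    if err != nil {
      panic(err)
    }
    var j job
    if err := json.NewDecoder(resp.Body).Decode(&j); err != nil {
      panic(err)
    }
    resp.Body.Close()
    fmt.Println("scheduled job", j.ID)

    // Then either the server POSTs to our callback_url when it finishes,
    // or we poll cheaply instead of holding one long-lived request open.
    for j.Status != "done" {
      time.Sleep(2 * time.Second)
      r, err := http.Get("https://api.example.com/v1/jobs/" + j.ID)
      if err != nil {
        continue // transient failure, try again on the next tick
      }
      err = json.NewDecoder(r.Body).Decode(&j)
      r.Body.Close()
      if err != nil {
        continue
      }
    }
    fmt.Println("job finished")
  }

(Even then, that first "fast" request still needs the server to ack it, which is the part that got slow in the story above.)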


How can you schedule a job within 100ms if it takes 3000ms to establish a connection with the server running the other API due to network congestion? Maybe you can send the request over UDP, but you'll get tons of situations where the secondary API never ran.


Right, but even with "schedule a job" as one fast request, there's still some waiting for the server to ack that it got the message. That's the request that became slow. I don't see how there's any getting around that, async API or not.

