
I'm an oldskool dev who shies away from 'the new shiny' because I've learned the basics of JS and you can get pretty far with the fundamentals, despite the allure of these rather expressive frameworks that get released every week now.

Frankly I get more joy out of writing bookmarklets, Tampermonkey/Greasemonkey scripts and customizing websites with various CSS 'userstyles'.

I also still enjoy uploading PHP scripts over SFTP and then building out some barebones CRUD app in my free time. Again, I shy away from the new shiny like GraphQL, and things like Docker or Kubernetes, etc.




I'm also "oldschool" in a similar vein. But. Docker, dude. Docker.

For instance, yesterday a PHP tool made the HN frontpage [1] that seemed rather interesting. Problem is, it needs PHP 7.2+ and my app runs on 5.6.*. What to do? (Bear in mind, 3 weeks ago I moved from VirtualBox to Docker for local dev, and my files are now in a regular folder on my machine.)

In this case, I just need to tell Docker to fetch the image and run it pointing at the same folder where my app is. Just a one-line command.

After that, I just need to remove the Docker image and my system is as pristine as before.
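The one-liner in question looks something like this (a sketch, assuming the official php:7.2-cli image; "tool.php" is a placeholder for whatever entry point the tool actually ships):

```shell
# Fetch the PHP 7.2 image and run the tool against the current project folder.
docker run --rm -v "$PWD":/app -w /app php:7.2-cli php tool.php

# Afterwards, remove the image and the system is as pristine as before.
docker rmi php:7.2-cli
```

The --rm flag also deletes the container itself as soon as it exits, so the image is the only thing left to clean up.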

I think I have itches very similar to yours (like, I'm learning Python and all things data science and machine learning related, and instead of virtualenv or even *conda, I'm separating my projects using Docker), and nowadays I'm using Docker for all of 'em.

[1]https://news.ycombinator.com/item?id=23654973


Keep in mind you should still use a venv with Docker; it removes the need for sudo and won't conflict with the image's system Python if it has one.
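A minimal sketch of that pattern in a Dockerfile (the base image tag and the installed package are illustrative):

```dockerfile
FROM python:3.8-slim

# Create a venv and put it first on PATH, so "python" and "pip"
# resolve to the venv instead of the image's system Python.
RUN python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"

# No sudo needed, and nothing lands in the system site-packages.
RUN pip install requests
```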


For the longest time I thought the standard was to install stuff in Docker as root and not worry about the typical permission/user idioms you'd practice on a classic box.

But now I am seeing more of this. Do you have any good links to read more about why using venv/non-root makes sense for Docker?


In my case, I think Docker voids the need for virtualenv. But a quick Google search returns interesting results [1].

I do know setting up a proper user in Docker is just a couple of lines away in a Dockerfile (as a matter of fact, I did that for the main app I develop).
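Those couple of lines usually look something like this (a sketch; the user name and UID are arbitrary):

```dockerfile
# Create an unprivileged user and switch to it for everything that follows.
RUN useradd --create-home --uid 1000 appuser
USER appuser
WORKDIR /home/appuser
```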

For my other use cases, I just don't care. I'm using Docker to quickly bootstrap a JupyterLab environment, and I do that by sharing some confs (like the pip cache folder).

The caveat to this is that files I create are owned by root, but that again is just a command away from fixing (if I ever need it, which I haven't yet).
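The fix-up in question, plus a way to avoid the problem up front by matching the container user to your own UID (a sketch; the image name is a placeholder):

```shell
# After the fact: reclaim root-owned files the container created.
sudo chown -R "$USER": .

# Up front: run the container as your own UID/GID so new files belong to you.
docker run --rm --user "$(id -u):$(id -g)" \
    -v "$PWD":/work -w /work jupyter/base-notebook
```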

[1] https://stackoverflow.com/questions/27017715/does-virtualenv...


Don't have a link at hand, but the main reasons off the top of my head:

- exploits for breaking out of the container are easier to pull off if you have root access

- if the image has a system Python, installing with sudo will put things in the system site-packages directory, which can cause a lot of trouble


I understand the attitude, especially because most of the new shiny things tend to put on a lot of polish and ship either no docs or overwhelming docs, instead of allowing smooth integration with a reasonable learning curve.

The only item in your list I'm not sure fits is Docker. It's not a framework or a language; it's a complex-but-not-complicated tool that makes a lot of things fairly convenient, especially at small scale. Things like Kubernetes (and Terraform and so on) should only come into the conversation when you start having bigger questions about the scale of your project and your infrastructure, and even then they aren't a given. But at a single-user local scale, Docker can be incredibly convenient for doing local dev, getting reliable behaviour, and avoiding a lot of pitfalls of your own machine's configuration. Often in just a handful of lines (a few for the Dockerfile and a one-liner to start or stop the whole thing). The docs are also fairly well written: they cover the basic options and allow for a lot of granularity when it becomes needed.

Docker isn't the only tool in the world to do what it does, but it's a very user-friendly tool that offers a lot of convenience with little overhead (at least at a non-"devops team" scale).


While I admire your forward thinking in not mentioning scp, I will fault you for sftp, as it requires an OS account on every supported platform.

This is overkill for moving files.

I have leaned towards both RFC 1867 transfers when I have a web server available, and stunnel configured to launch tar xf when I don't.
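The stunnel side of that can be sketched roughly like this, using stunnel's exec/execArgs options to hand the decrypted stream straight to tar (the port, destination directory, and cert location are all placeholders):

```ini
; /etc/stunnel/tar-receiver.conf -- receiving side
[tar-receiver]
accept   = 8443
cert     = /etc/stunnel/stunnel.pem
exec     = /bin/tar
execArgs = tar xf - -C /srv/incoming
```

The sending side can then just pipe a tarball through TLS, e.g. tar cf - mydir | openssl s_client -quiet -connect host:8443.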

I'm also about to go to production with a tftp server as a bridge to smb3. The tftp client is so elderly that it is constrained to 512-byte UDP packets - a true museum piece (running VMS).





