Hacker News | hkstm's comments

Do you think most of the HN demographic is actually running big IDEs, containers, and VMs at the same time? I'm personally a CS student and have never had to run more than a few lightweight Docker containers, maybe one running some kind of database, and VS Code, and that has worked fine on a laptop with 8GB and Pop!_OS. I could imagine that a lot of other people on HN are also just generally interested in tech but not necessarily running anything memory-hungry locally.


CS PhD student here, running a laptop with 16GB of RAM. I don't train ML models on my machine, but whenever I have to debug stuff locally, I realize precisely how starved for RAM my computer is. I start by closing down FF. Just plain killing the browser. RAM down from 12GB to 7. Then I close the other IDE (I'm usually working on two parallel repos). 7GB to 5. I squeeze out the last few megabytes by killing Spotify, Signal, and other forgotten terminal windows. Then I start loading my model into memory. Five times out of seven, it's over 12-13GB, at which point my OS stops responding and I have to force-reboot my system, cursing, arms flailing.


> lightweight docker containers

If you're on macOS, there's no such thing as a “lightweight Docker container”. The container itself might be small, but Docker is running a full Linux virtual machine underneath it. In no world is that “lightweight”.
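
For anyone curious how big that overhead is, here's a minimal sketch, assuming the docker Python SDK (pip install docker), that asks the daemon how much memory it can see. On macOS that number is the VM's allocation, not your Mac's total RAM:

    # Minimal sketch; assumes the docker Python SDK (pip install docker).
    # On macOS the daemon lives inside Docker Desktop's Linux VM, so
    # MemTotal is the VM's allocation, not the host's physical RAM.
    import docker

    client = docker.from_env()
    mem_total = client.info()["MemTotal"]  # bytes, as reported by the daemon
    print(f"Memory visible to Docker: {mem_total / 2**30:.1f} GiB")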


I was going to say: I'm on a 16GB 2015 MacBook Pro (not sure what to upgrade to), and Docker for Mac is _brutal_ on my machine. I can't even run it and work on other things at the same time without frustration.


I run like 50 Chrome tabs, a half-dozen IntelliJ panes, YouTube, Slack, and a bunch of heavy containers at the same time, and that's just my dev machine.

My desktop is an ML container train/test machine. I also SSH into two similar machines and a 5-machine, 20-GPU k8s cluster. I pretty much continuously have dozens of things building/running at once.

Maybe I'm an outlier though?


Yeah. I suspect most people here are software engineers (or related), and IDEs, Docker, and VMs are all standard tools in the SE toolbox. If they aren't using Docker or VMs, then they're probably doing application development, which is also pretty damn RAM-hungry.

I do most of my development in Chrome, bash, and Sublime Text, and I'm still using 28GB of RAM.


It depends on the requirements of your job: just a single VS instance with Chrome, Postman, and Slack open takes around 5GB. Teams adds another GB or so. The rest (SSMS and the like) is probably another 2GB.

On my particular team we also run a Docker Compose stack that includes Elasticsearch, SQL Server, RabbitMQ, and Consul. I had to upgrade my work laptop from 16GB to 24GB to make it livable.
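
For what it's worth, capping per-container memory can make a stack like that more livable. A rough sketch using the docker Python SDK (an assumption; pip install docker), with an illustrative image tag and limits; the same caps can be set in the Compose file:

    # Rough sketch; assumes the docker Python SDK (pip install docker).
    # Image tag, limit, and env vars are illustrative, not recommendations.
    import docker

    client = docker.from_env()

    # Cap Elasticsearch at 2 GiB and size the JVM heap to match;
    # without a limit it will happily take whatever the machine has.
    es = client.containers.run(
        "docker.elastic.co/elasticsearch/elasticsearch:8.13.0",
        detach=True,
        mem_limit="2g",
        environment={
            "discovery.type": "single-node",
            "ES_JAVA_OPTS": "-Xms1g -Xmx1g",
        },
    )
    print(f"{es.name} running with a 2 GiB memory cap")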


Wouldn't you just have all the heavy stuff on a server? I don't understand the goal of running something like SQL Server and other server-type apps on a desktop/laptop.


Having a local environment speeds up development significantly.


When I’m developing I don’t want any network dependency. I love coding with no WiFi on planes, or on a mountain, etc.


If you are on Linux, try out zram.


I could believe that for students, because students are usually working on projects that are tiny by industry standards.

10 kLOC is huge for a student project but tiny for a real-world one.


I use IntelliJ and similarly hungry IDEs all the time, alongside many other resource-hungry processes, without trouble on 8GB of RAM.

The truth is that I use zram, though, which anyone who doesn't have plenty of RAM but does have a decent CPU should be using.
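
If you haven't set it up before, here's a rough sketch of a typical zram swap setup, scripted in Python for illustration (needs root and util-linux's zramctl; the size and compression algorithm are illustrative, not recommendations):

    # Rough sketch of a zram swap setup on Linux; run as root.
    # Assumes util-linux's zramctl is installed.
    import subprocess

    def enable_zram(size="4G", algorithm="zstd"):
        # Load the zram kernel module.
        subprocess.run(["modprobe", "zram"], check=True)
        # Grab the next free /dev/zramN device with the given size and
        # compression algorithm; zramctl prints the device path.
        dev = subprocess.run(
            ["zramctl", "--find", "--size", size, "--algorithm", algorithm],
            check=True, capture_output=True, text=True,
        ).stdout.strip()
        # Format it as swap and enable it at high priority, so the
        # kernel prefers compressed RAM over any disk-backed swap.
        subprocess.run(["mkswap", dev], check=True)
        subprocess.run(["swapon", "--priority", "100", dev], check=True)
        return dev

    print("zram swap enabled on", enable_zram())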


It seems one of the major drawbacks is that the user has to define all sources and sinks. I might have missed it, but how do you systematically define/find these? I was personally interested in a similar topic for a thesis and stumbled upon deepcode.ai, which started out of ETH Zurich (https://files.sri.inf.ethz.ch/website/papers/scalable-taint-...). Are there any plans for, or reasons why you would not want, such a system?


The article briefly mentions this, although it might not be super clear from the short description: "We regularly review issues reported through other avenues, such as our bug bounty program, to ensure that we correct any false negatives." We rely on these mechanisms to find places where we're missing taint coverage and write sources and sinks as necessary. As of right now, all the annotations are manual.
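
To make "manual annotations" concrete: taint models are stub files that tag functions as sources or sinks. A hedged sketch in the style of Pysa's .pysa model format (the function names and taint kinds below are hypothetical illustrations, not our actual models):

    # Hypothetical source: query parameters off an incoming HTTP
    # request are user-controlled.
    def my_app.views.get_query_param(request, name) -> TaintSource[UserControlled]: ...

    # Hypothetical sink: raw query strings that reach the database layer.
    def my_app.db.execute_raw(query: TaintSink[SQLInjection], connection): ...

The analysis then flags any flow from a source to a sink that isn't sanitized along the way.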

I hadn't looked too deeply into the literature there; the paper looks really interesting! We don't have any concrete plans to implement such a system, but I don't think there's any fundamental reason we wouldn't want automatic taint model generation. I'll give the paper a read on Monday to learn more :)


Live Lite doesn't have Max, though.


Oh yeah, definitely get Suite if you're specifically interested in Max, but I got the impression OP was just generally interested in trying Ableton.

