Hacker News

"I was going to paste the strace output of what gdb is actually doing, but it is 20 megabytes of system calls."

I think this is why you shouldn't run this on the production server itself: each of those system calls consumes resources on the production server.

I believe the right way to analyze memory is to use gcore to dump the process's memory, scp the dump to a local VM running the same OS as production, download the same Ruby binary that production is running, and then use gdb on the VM to analyze the memory dump.
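The workflow described above can be sketched roughly as follows; the host name `prod-host`, the PID `12345`, and the Ruby binary path are placeholders for illustration, not values from this thread:

```shell
# Sketch of the offline-analysis workflow (placeholder host/PID/paths).

# 1. On the production host: dump the process's memory with gcore.
#    The process is only paused for the duration of the dump.
sudo gcore -o /tmp/app-core 12345        # writes /tmp/app-core.12345

# 2. Copy the core file and the exact interpreter binary to a local VM
#    running the same OS/distro version as production.
scp prod-host:/tmp/app-core.12345 .
scp prod-host:/usr/bin/ruby .

# 3. On the VM: open the dump offline; gdb never touches production.
gdb ./ruby ./app-core.12345
```

The point of copying the exact binary is that gdb resolves symbols and memory layout against it, so a mismatched build would give misleading results.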




That may be "right" given the current options, but that doesn't mean we can't have better options.

And that's exactly what this blog post is about.



