
I'm referring to an enterprise implementation comprising over 5000 nodes and somewhere in the neighborhood of 300-400 NFS mounts in typical use (~60-70 mounted at boot, with the rest mounted on demand via autofs under typical workloads).
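
For context, the on-demand portion is plain autofs; a minimal sketch of how that looks (the map names, hostnames and export paths here are made up for illustration):

    # /etc/auto.master -- hand /data off to an indirect map, unmount idle entries after 5 min
    /data  /etc/auto.data  --timeout=300

    # /etc/auto.data -- each key becomes /data/<key>, NFS-mounted on first access
    projects  -rw,hard  nfs01.example.com:/export/projects
    scratch   -rw,hard  nfs02.example.com:/export/scratch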

Issues: ad-hocery, former Sun shop, growth through acquisition (including acquiring the application infrastructure of said firms), 20 years of legacy, plenty of staff attrition (no more than normal, but even 5-10% a year means virtually complete turnover over that period), geographically distributed network (3-4 continents), etc.

Pretty much the pathological worst case, except that it's entirely standard in many, many established firms.




I see.

Will some kind of distributed file system work here? If you can cluster your NFS servers you'd (hopefully) end up with more localised nodes for your clients to connect to (a little bit like how a CDN works). For what it's worth, I wouldn't even run 5000 simultaneous HTTP requests on a single node in any of my web farms, let alone on a single NFS server.
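
Short of a full clustered file system, autofs itself gets you part of the way there for read-only data: a map entry can list several replica servers and the client will pick the nearest/most responsive one (hostnames and paths below are hypothetical, and the replicas are not kept in sync for you):

    # indirect map entry with replicated servers -- suitable for read-only data only
    tools  -ro,hard  emea-nfs.example.com,apac-nfs.example.com,us-nfs.example.com:/export/tools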

Also, have you looked into kernel-level tweaks? I'm guessing you're running Solaris (being a former Sun shop), and while I am a sysadmin for Solaris, I've not needed to get this low-level before; but certainly on Linux there are a lot of optimisations that can be made to the TCP/IP stack that would improve performance in this specific scenario.
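
On the Linux side, that usually means socket-buffer and RPC slot tuning via sysctl; the values below are only illustrative starting points, not recommendations for any particular link speed or latency:

    # /etc/sysctl.d/90-nfs-tuning.conf -- illustrative values, adjust for your network
    net.core.rmem_max = 16777216
    net.core.wmem_max = 16777216
    net.ipv4.tcp_rmem = 4096 87380 16777216
    net.ipv4.tcp_wmem = 4096 65536 16777216
    # allow more in-flight RPCs per mount (older kernels; newer ones size this dynamically)
    sunrpc.tcp_slot_table_entries = 128

    # apply without a reboot
    sysctl --system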

I do agree with you that 400->5000 active NFS connections pushes the limits of practical use, but I don't think that dismisses NFS entirely; it still outperforms all the other network-mounted file systems.



