Poll: Your preferred method for coding on a remote server?
30 points by kkovacs on Aug 17, 2011 | 37 comments
It often happens that you're on a project that must be accessed over the 'net even for development. (Case in point: PayPal integration. Facebook app. Anything where a proprietary SaaS calls back your URLs.)

What's your or your team's preferred method to write code then? (The one you do most often, if you use more.)

I SSH into a server, and work there using VIM/EMACS/etc.
251 points
I mount the remote file system over SSHFS/FTPFS/CIFS, then work "locally" on that.
54 points
I use source control to push every minor change (as in, even one character edits) to the server.
49 points
I use a local VIM/EMACS/MC/etc to open files remotely over SSH/FTP.
42 points
I create a custom deploy script using rsync (or similar).
16 points
I'm a victim of Microsoft, I use Remote Desktop to log in to the remote Windows server.
16 points
I use Eclipse/Netbeans/etc, that can FTP the updated files over to the remote server.
15 points
Other: my own ingenious ninja method (please enlighten us in the comments!)
7 points
I use port forward from a server to a locally hosted environment.
5 points
I use a shared Dropbox to sync files to the server.
4 points


1). Net-accessible dev and staging machines. Net-accessible staging servers are fairly easy to set up if you have a repeatable way of doing deployments. Security can be a mite tricky. Net-accessible dev boxes are modestly more difficult than staging if you do them right. (Simplest way is to use reverse SSH tunneling on a net-accessible box under your control to your dev box, which is a single trivially Googleable command. The reason this takes thought is that giving the world access to an app server on your machine is potentially a Very Bad Idea.)
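
The reverse tunnel described above is roughly the following command (a sketch with hypothetical hostnames and ports; it assumes sshd on the gateway permits remote forwards):

```shell
# Forward port 8080 on a net-accessible gateway box you control back to
# the app server listening on port 3000 on this dev machine.
# "gateway.example.com" is a placeholder; -N skips running a remote shell.
ssh -N -R 8080:localhost:3000 user@gateway.example.com
```

By default the forwarded port only binds to the gateway's loopback interface; exposing it to the world requires `GatewayPorts` in the gateway's sshd config, which is exactly the "Very Bad Idea" territory the parent warns about.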

If the idea of having an HTTP server on your laptop does not fill you with mortal dread, Twilio has an open-source tool called localtunnel which makes your laptop web-accessible in seconds.

2). Source control for every change, deploy scripts to get it to environments safely and repeatably. DVCSes handle lots of little changes fairly elegantly, and you can squash a range of them after you e.g. figure out the magic incantation to get a foreign API working right. (Turning thirty one-character commits and ten deploy tags into a single "Foreign API now works" commit.)
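
The squash described above can be done non-interactively with a soft reset (a sketch; `git rebase -i` is the interactive route to the same result):

```shell
# Collapse the last 30 commits into one. --soft moves the branch pointer
# back while keeping the combined changes staged, so a single new commit
# replaces the noisy one-character history.
git reset --soft HEAD~30
git commit -m "Foreign API now works"
```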

I have done cowboy coding in PuTTY in the past. There is no excuse for it.


+1 for localtunnel. It's a great service and really easy to use. I've seen similar ones that cost money, but as far as I know localtunnel is still free to use.

http://progrium.com/localtunnel/


I used to work solely on a remote Dev box, using ExpanDrive to edit locally. I would have an ssh session open to run commands/debug/shell.

I've recently started moving towards local Vagrant instances. I don't like dealing with local library installs on my Mac, and Vagrant makes it stupid easy to repeat my dev setup. It also gives me a good starting point for provisioning servers (using Puppet), though I haven't fully consolidated the local dev and server Puppet manifests yet.


Well, having done a Facebook app: step one, set the URL to http://localhost:1234 or something on Facebook, and you're away for local dev work. Unless you're working on something different from what I did, it just inspects the source URL.

Try using web-service mocking software and learning how to work your hosts file. SoapUI comes to mind... or whack together manual services on test boxes.

If you capture an incoming response, maybe using Fiddler or something (if you're using Visual Studio it's a breeze), then just set up a basic web service that returns the data as you captured it, with no processing needed at all. Or even just set up some proper routing. There really isn't an excuse; cowboy work will bite, maybe not today, but later.

I've seen too many huge f/k-ups, and I've been responsible for a €50,000 mistake myself, mucking with a live server with no testing environment.

My favourite for this is virtualisation, BTW: taking server snapshots of production, dumping them in a sandbox where I have mocked third-party setups, and hacking away. Makes releasing, testing, everything a breeze. Big time cost up front, but it pays itself back tenfold on the first smooth release.


BCVI: http://sshmenu.sourceforge.net/articles/bcvi/

It's like a combination of "I SSH into a server, and work there using VIM/EMACS/etc." and "I use a local VIM/EMACS/MC/etc to open files remotely over SSH/FTP.".

You SSH in and type "vi foo.py" in the remote shell, but the file opens in your local Vim. When you save it's automatically transferred across.


None of the above. Google App Engine does it all for me with the click of a 'deploy' button. Sysadmin work is so '80s.

To be honest, since GAE all I do now is open notepad2 and code, nothing else, then deploy and presto. I remember the old days when having apache, filezilla, svn, and a plethora of sys tools was the order of the day.

Coding is more fun when it is pure coding and nothing else than coding. Thanks GAE.


I want to get good enough at Vim to just do all my coding in it, but for now using an SFTP bookmark + Gedit in Ubuntu is convenient and feels like I'm editing the files locally. I still have an ssh console open for gitting / grepping etc.

Trying to do the same sort of thing with MacFuse in OSX was an absolute nightmare though (slowwww), but perhaps I was holding it wrong.


I do all development on remote servers because I like to code from many different machines. Setting up a LAMP stack on all my potential dev boxes would be painful and cumbersome, and may interfere with other functions that I intend those machines to perform.

This is very easy to do with EC2. A single micro instance is free for a year with a new EC2 account. These make great dev environments where you can set up the exact toolset and environment you need. This also makes for convenient deployment if you are ultimately deploying to EC2.

For coding, I prefer Coda if I'm on a Mac and Aptana on a Windows box; both are chosen precisely for their excellent SFTP support. I find mounting remote systems as local drives to be very laggy, since file loading and saving tend to be locking operations for most editors, meaning the IDE freezes up for the 1-2 seconds it takes to upload. I use Vim for cowboy coding.


EMACS + tramp - http://www.gnu.org/s/tramp/ - works great.


I don't need to do this as much as I used to, so I normally use either a mounted ssh filesystem, or "sharing" through a DVCS these days.

A few years ago I worked at a place where they gave us a local windows box and a remote headless linux box where the code ran (this was before virtualization was ubiquitous). I used cygwin on windows and forwarded X11 sessions running on the remote linux box to my windows box over ssh.

windows w/putty: http://tldp.org/HOWTO/XDMCP-HOWTO/ssh.html OSX: http://dyhr.com/2009/09/05/how-to-enable-x11-forwarding-with...

It let me run eclipse and other GUI apps on the linux box, but have them mostly feel like native apps (well, native X11 apps) on the windows box.


Over the years I think I've done all of these. They're all a little painful. Being in Australia means that latency is an issue, I'm sure fast DSL in California is different.

If you're using SSH, an editor and terminal that supports GPM mouse over SSH (eg: iTerm2 on OS X and Vim on the backend) makes life easier.

It's rarely necessary. For Facebook in particular, 95% of the integration happens at the web browser level via signed URLs and/or Flash; Facebook's servers rarely talk directly to your servers or vice versa. Just run a web server locally and point your app's domain at 127.0.0.1 in your hosts file.
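
The hosts-file trick amounts to a one-line entry (hypothetical domain; the file is /etc/hosts on Unix-likes, %SystemRoot%\System32\drivers\etc\hosts on Windows):

```
# Resolve the app's production hostname to the local web server
127.0.0.1   myapp.example.com
```

The browser then hits your local server for that domain while Facebook's signed URLs still reference the "real" hostname.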


Facebook lets you use localhost as the canvas page; no idea if they also allow local network addresses. So that's one problem solved :)

For debugging calls such as those received by PayPal, I start by writing a small script that logs the GET or POST request, then write a simple test page that POSTS to the local script that I'm developing.

At any rate, if the only way to test the majority of your code is via a POST, there's something wrong with the way you're writing code; you should be able to test the objects you've created independently, for a start :)


I use a cooperating set of Mac apps: connect to the server using Cyberduck, browse to a file and double-click to open it in TextMate, edit the file and type command-S to save, glance at the corner of the screen where Growl puts up a notification that the transfer completed successfully, then type command-shift-R to tell TextMate to tell the currently open web browser to reload its front-most window to see if the change had the desired effect.

A number of Mac file transfer clients and text editors support this type of integration.


We do remote coding mostly for the pairing and code reviews: two or more people connect to the same server, first tmux (or screen, which we used in the past), then vim inside tmux for editing.

To integrate Facebook and PayPal, remote coding is usually not required: it's simple to configure a DynDNS account and port forwarding on the router, and continue to work on your own local development machine. Any configuration hassle pays off pretty fast through the increased pace of development and lower overhead.


I almost always SSH into the remote machine, but I use Emacs' TRAMP on occasion, when I know I'm only editing a couple of files.


For those who don't know, TRAMP allows you to open a remote file in a local Emacs. I use this fairly extensively, since I may have a newer version of Emacs locally, or may be working on a server that doesn't have my .emacs files, etc. Another advantage: if the connection is laggy, you only notice on save, since everything else is local.

It can use several different protocols to connect remotely, but I typically use ssh. I also use dired, the Emacs directory editor, in that case to browse the remote directory structure and open other files. TRAMP can even launch remote compilation now, which is pretty great.
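
For reference, opening a remote file via TRAMP's ssh method uses a path like this (hypothetical host and file):

```
C-x C-f /ssh:user@host:/var/www/app/config.php
```

Dired works the same way: point `C-x C-f` (or `C-x d`) at a remote directory path and it browses the remote tree.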


+1

I use TRAMP + Emacs for all server-side development I do and love it. The only annoyance is the slight delay between hitting C-x C-s and the save completing. Then when I use Emacs on a local machine I'm just amazed at how fast it can save a file :)


Since I got an iMac at work, I mount the remote file system over SSHFS and it works quite well.

Before that I used ZDE 5.5 for PHP dev and opened the files remotely via SFTP. It was also not bad, though ZDE 5.5 is really buggy.

What I miss is an editor that supports opening files via SFTP, does it fast (like ZDE), isn't that buggy (unlike ZDE), and supports autocomplete.


For the case of Facebook app development, there are few great substitutes for an SSH tunnel routing traffic back to your local dev environment. All the local dev tools you're used to, including TextMate if that's how you roll, with a little bit of extra latency.

You can roll your own, but I'm happy to use Tunnlr.


I vote for the custom rsync script, as it's really easy and I get to keep working locally:

http://www.exratione.com/2011/08/use-rsync-scripts-for-painl...


X exported via NX -- works awesomely fast especially over slow links, tethered smartphones, etc.


Can you post a link to NX? (When I google "X NX" I get, well, something very different :) )


I think he is talking about this - http://en.wikipedia.org/wiki/NX_technology



Push to a Git server. We have a branch called 'live', then we run a script on the servers that pulls down the deploy branch. We also have a branch called 'development' for our dev boxes.


Similar here: I set up a git hook that inspects the commit message; if 'deploy' appears, it pushes the branch to the live server.
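
One way to sketch such a hook is a server-side post-receive script (hypothetical deploy path; assumes a bare repo on the server with a separate checkout directory):

```shell
#!/bin/sh
# post-receive hook: for each pushed ref, deploy only if the tip
# commit message contains the word "deploy".
while read oldrev newrev refname; do
  if git log -1 --format=%B "$newrev" | grep -qw deploy; then
    # Check the pushed branch out into the web root (placeholder path).
    GIT_WORK_TREE=/var/www/app git checkout -f "${refname#refs/heads/}"
  fi
done
```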


git commit -m "Fix for issue #258 (sidebar widget breaks when deployed)."

;-)


I like this "remotely triggered deploy" idea :)


I generally use (a) my local Emacs install in tramp-mode, (b) Fetch—it has a feature that lets you edit a remote file using your default text editor, or (c) TextWrangler's SFTP mode.


vim + git + git repo hosting service + fabric

Make changes locally, merge into the remote environment's branch, then run a fabric command that pushes the change up to the git hosting server, SSHes into the remote server based on the current branch, pulls changes from the repo hosting server, and runs a few other bits remotely (migrations etc.) depending on what has changed.

I use codebasehq.com btw; it's been ace, but I don't see a great need to use their deployment service.


Voted for a custom deploy script. However, I use Unison rather than rsync, as it runs on Windows without depending on Cygwin.


SFTP to the server with Coda, which puts your remote files in a sidebar; FTP plus the files you're working on sit in a single window (though you can break them into new windows if you want). It also has SVN support baked in, but I've only used Versions for that.

I've been meaning to try TextMate, but I love the built-in FTP in Coda.


I like using ExpanDrive to mount remote filesystems, and then edit using TextMate or some other Mac editor.


I SSH into a server, but I learned about using a local Vim instance to edit remote files yesterday (on HN, of all places) and will be doing it that way from now on. No more need to spread my fine-tuned vimrc to every server I use!


I've shot myself in the foot enough times with "trivial" or even "one character" edits that everything goes through the normal source control + deploy process.


SSH, GNU Screen (byobu), and Vim.


I import the remote filesystem into Plan 9 and edit it there.

Which might be u9fs via ssh | ftp | drawterm | cpu | sshnet & AoE | import | sshnet & ftp | import /net & AoE | import /net & ftp

The possibilities are numerous




