Useful command-line scripts for web developers (or anyone else really) (emson.co.uk)
40 points by emson on June 5, 2009 | 39 comments


I also use this natty little function in my .bash_profile to cd to the last opened Finder location.

This is a script written by a colleague (http://www.visualcortex.co.uk/). Add the following function to your .bash_profile on Mac OS X. Open a directory in Finder, then open Terminal and type cdf; your shell will cd to the directory of the frontmost Finder window.

    # Change directory to the folder open in the frontmost Finder window.
    # (Note: the closing EOT must sit at the very start of its line in your .bash_profile.)
    cdf () {
       currFolderPath=$( /usr/bin/osascript <<"EOT"
           tell application "Finder"
               try
                   set currFolder to (folder of the front window as alias)
               on error
                   set currFolder to (path to desktop folder as alias)
               end try
               POSIX path of currFolder
           end tell
    EOT
       )
       echo "cd to \"$currFolderPath\""
       cd "$currFolderPath"
    }


I've been using OpenTerminal for this purpose:

http://www.macupdate.com/info.php/id/23173

It gives you an icon in Finder that changes the directory in your terminal to the one in the Finder window. There are some options, like having it open a new terminal, or changing the folder using pushd, so you can popd once you're done and return to the folder you started in.

I like the script you provided, though, as it saves me from having to use the mouse (no button click) and I can easily select which terminal to do it in. Note that I had to remove the quotes around the first EOT to get it to work.


Conversely, running "open dir/" will open the dir in Finder. I tend to use that a lot.


Yeah, useful. Also "open `pwd`" to open the current directory.


What's wrong with "open ."?


erm... yes good point!


Nothing against this post in particular, but it's much more useful in the long term to learn how to write stuff like this yourself. Compare knowing how to cook with just having lots of recipes.

While there are certainly some people who will pick apart examples like these and learn new tricks, I've seen enough copy-and-paste programming to know that many won't.


Not to mention that many of the examples given are very inefficient ways to do what's needed. Specifically, looking for files with "5" in the name:

    $ ls | grep -e .*5.*
Would be better as:

    $ find . -name '*5*'
or if you insist on using a regex:

    $ find . -regex '.*5.*'


Am I missing something? What's wrong with:

  $ ls *5*
??

The find command is more powerful (descending into subdirs, etc.), but the original example in the link only specified the current directory.


I have used production batch processing systems that routinely store enough files that

  ls *5*
will fail because the argument list is too big for execve(2). Would I design it that way? No, but people do (sometimes without really thinking about it), and you still have to get work done.

New users should be taught find(1) because it always works.
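
For the article's current-directory case, a pattern like this sidesteps the exec limit entirely, since the wildcard is expanded by find rather than the shell (-maxdepth is in both GNU and BSD find, though it isn't POSIX):

  find . -maxdepth 1 -name '*5*'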


Right, I understand the problem. Find and xargs can do wonders. However, that's usually an edge case (in my professional experience). I'm certainly not proclaiming the advanced-ness of 'ls' in this circumstance.

However, in the example in the article, he was using "ls" piped to "grep" in order to find all files with a 5 in the name. That would suffer from the same consequences you mentioned, on such systems, and is inefficient and verbose.

Note, I agree with your advice to teach "find" (and "xargs") to users.


True,

  ls | grep 5
is reliable where

  ls *5*
is not. (Well, except for awful names that require "find -print0", or preferably having the file taken out and shot along with its creator.)
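
For completeness, the null-delimited pipe that survives even newlines in names (GNU and BSD find/xargs):

  find . -name '*5*' -print0 | xargs -0 ls -ld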

I'm actually not a fan of xargs. It's more unixy, but somehow I find its options hard to remember. Off the top of my head I find it easier to do

  find -exec blah {} +
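
For example, to search a tree in batched invocations (the + packs as many file names into each grep run as the argument limit allows, much like xargs would):

  find . -name '*.rb' -exec grep -n 'gsub' {} +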


Sorry, I got so caught up in the frustration that I forgot to post that as my real frustration. :P That was exactly what I first thought when I saw that; I was posting the `find` calls as an easier / more comprehensive way of looking, especially once you start wanting recursive searches. The fact that you can add tons of different search filters to `find` that you can't have with `ls` really makes it the go-to way of finding files, rather than piping ls to some form of grep or whatnot.


    $ ls -d *5*
Otherwise you get a listing of the contents of any directory with a 5 in the name.


I find it much easier to just enter irb, or run ruby with -e, and hack around with backquoted strings, instead of using bash to loop through and manipulate filenames.

A silly use case would be reversing all the file names:

ruby -e '`ls -1`.each_line { |x| `mv "#{x.chomp}" "#{x.chomp.reverse}"` }'
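
A pure-Ruby version of the same thing, which avoids shelling out to mv and so copes better with odd names (Dir and File.rename are core Ruby):

ruby -e 'Dir["*"].each { |f| File.rename(f, f.reverse) }'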

pbcopy is a pretty cool feature, though.


Yes, I like the idea of using Ruby; I find it a little more intuitive. Thanks.

Also you may want to look at commandlinefu.com: http://www.commandlinefu.com/commands/tagged/34/bash


Also fun as far as ruby on the command line goes: Rush http://rush.heroku.com/


I often use this to watch YouTube videos:

mplayer `clive -e http://www.youtube.com/watch?v=SZRyry_B7TY | awk -F'","' '{print $2}'`

You'll need the 'clive' package.


Cool I like it.


> Bash script to append a .txt extension to a filename

Why?


And regardless of whether it's necessary in the first place, why is it a bash script rather than a sh script? Very few of these use bash-specific extensions, and the BSDs don't necessarily have bash installed.

Things like that make porting from Linux to BSD and other Unices tons of fun.


A more general case would be: How to change a pattern in a bunch of file names from X to Y:

  for i in *X*; do mv "$i" "${i/X/Y}"; done


Or just use 'rename' (the Perl one; note that util-linux ships a different, incompatible rename):

     $ rename s/foo$/bar/ *foo


I frequently need to create datestamped archives.

Here is a one-line example, given two parameters. $1 is the archive name prefix and $2 is the directory to archive.

  tar -cvzf "${1}-$(date +%Y%m%d-%H%M).tar.gz" "$2"
Note: if $2 contains path information (such as an absolute path), that path will be stored in the tar, so this is usually run with the working directory as the parent of $2.
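
If you'd rather not depend on the working directory, tar's -C flag can do the chdir for you. A sketch of the same one-liner (GNU and BSD tar both support -C):

  tar -cvzf "${1}-$(date +%Y%m%d-%H%M).tar.gz" -C "$(dirname "$2")" "$(basename "$2")"

This stores paths relative to $2's parent no matter where you invoke it from.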


Is there any way to give grep some context? It's great knowing what file a word came from, but I've never figured out how to get grep to spit out the leading and following lines too.


from `man grep`:

     -A NUM, --after-context=NUM
            Print  NUM  lines  of  trailing context after matching lines.  Places a
            line containing a group separator (--)  between  contiguous  groups  of
            matches.  With the -o or --only-matching option, this has no effect and
            a warning is given.

     -B NUM, --before-context=NUM
            Print NUM lines of leading context before  matching  lines.   Places  a
            line  containing  a  group  separator (--) between contiguous groups of
            matches.  With the -o or --only-matching option, this has no effect and
            a warning is given.

     -C NUM, -NUM, --context=NUM
            Print  NUM  lines  of output context.  Places a line containing a group
            separator (--) between contiguous groups of matches.  With  the  -o  or
            --only-matching option, this has no effect and a warning is given.


Not that obvious (even from this description): you can simply do a grep -3 to get three lines of context, etc.


I use a function that looks like this:

  find "$1" -name "$2" | xargs egrep -nC3 "$3" | less
So, say:

  find ./ -name "*.rb" | xargs egrep -nC3 'gsub' | less
You'll get the matching line, three lines before and three lines after, as well as file name and line numbers.


awk and perl will let you do magic, my friend. Advanced "grep" features are only in GNU grep.


Some time ago I wrote this little script in order to track, among different virtual hosts, what is consuming too much CPU:

    #!/bin/sh
    # pull the PIDs of running apache2 processes out of ps, then list each
    # /proc/PID/cwd symlink to see each process's current working directory
    cd /proc
    ps auxw | grep apache2 | grep 'Rl' | sed -e 's/ \+/ /g' | \
     cut -d' ' -f2 | egrep '[0-9]+' | xargs ls -l | grep cwd
Basically this will print a line for every instance of every apache virtual host that is 'running' right now.

It's Linux specific.


Very good, that looks really interesting.


http://www.commandlinefu.com

CommandLineFu is a much more generalized command-line collection, with some clever entries; it's useful not just for web developers but for anyone who uses *nix. See especially the top-rated ones:

http://www.commandlinefu.com/commands/browse/sort-by-votes


Just a little warning: the "for i in ..." pattern breaks if the strings you're iterating over have spaces in them. For a glob like *X* it's enough to double-quote "$i" everywhere in the loop body, but when looping over command output (for i in $(...)) the words are split on whitespace. The way around that is to use find and/or xargs instead, or to first set the IFS variable to a value that doesn't contain spaces, as in the sketch below.
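
A minimal sketch of the IFS approach (bash-specific: $'\n' is a bashism), splitting command output on newlines only so names with spaces come through intact:

  OLDIFS=$IFS
  IFS=$'\n'
  for f in $(find . -name '*.txt'); do
      echo "processing: $f"
  done
  IFS=$OLDIFS

Names with embedded newlines still lose, as joked about above.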


In interviews I sometimes ask how to find the top 10 IP addresses that have hit a web host in the last thousand hits.

tail -1000 access.log | cut -d' ' -f1 | sort | uniq -c | sort -rn | head
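
Stage by stage (bash allows a comment after each pipe, so the same pipeline can be annotated):

  tail -1000 access.log |  # the last thousand requests
    cut -d' ' -f1 |        # client IP: first space-separated field in a common-format log
    sort | uniq -c |       # count duplicates (uniq -c needs sorted input)
    sort -rn |             # biggest counts first
    head                   # top ten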


Don't use grep. Use ack.


This comment is not nearly as useful as one that explains your reasoning (or at least links to an explanation).


Sorry, I thought people would be intrigued and google it.

http://betterthangrep.com/

A quote:

"ack is a tool like grep, aimed at programmers with large trees of heterogeneous source code.

ack is written purely in Perl, and takes advantage of the power of Perl's regular expressions."


Ack could have been done just as well with a 10 line shell wrapper around grep. It would have been much faster, on top of being simpler.


Put thine code where thy mouth is.
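
In that spirit, a rough sketch of what such a wrapper might look like (a guess at the idea, not ack's actual design; it skips VCS directories but has none of ack's file-type detection or Perl regex engine):

    #!/bin/sh
    # recursive grep over a tree, skipping version-control directories;
    # the pattern and any extra grep options come from the command line
    find . \( -name .git -o -name .svn -o -name CVS \) -prune \
        -o -type f -print0 |
      xargs -0 grep -n --color=auto "$@"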



