NLTK is a toolkit that includes a variety of tokenizers, parsers, taggers, stemmers, lemmatizers, and corpora. The Stanford parser he references in this post can be integrated to work with NLTK, but the methods he discusses appear to be more accurate and much faster than anything included in NLTK.
Also, the approach he describes requires that the text already be tokenized and POS-tagged, which adds some time, though not much (and both are functions NLTK can perform).