This was my first thought, but after researching for a while it seems pretty complicated. The tutorials I found literally have hundreds of steps to configure everything properly.
If you look under "Method", there are a bunch of different methods to choose from that I'm told correspond to various landmark papers in the field. If you know community detection, you'll recognize which method corresponds to which paper; if you don't, why would you care? At least, that's our documentation philosophy, though I'm not sure I entirely agree with it.
I read this article and imagined computing in the first half of the 20th century. Undoubtedly, when some programmer first managed to get his room-sized computer to add, subtract, and multiply, someone was present to scoff that he could already do such things on his slide rule.
It's not about where the technology currently is, but where it is going.
All true. With D-Wave, it's always been hard to separate the hype from the substance.
They certainly have some exceptional people (e.g., the "director of business development" mentioned in the post was a research assistant for Hawking (yes, that Hawking), and wrote one of the very first books on quantum computation). And, they're doing good work. It's just hard to tell how long the road is.
I'm curious as well. I'm able to do everything in Vim/MacVim that I could do in TextMate 1, and more. Sure, it may not look like a "native" app, but I don't see why that should be an issue. You want to focus on the text, not the pretty box around it.
Edited to add: Granted, a lot of the niceties have to be installed after the fact as plugins, and are not built in, whereas with Sublime Text 2 a lot of those things are perhaps there from installation.
Some Emacs extensions that implement Sublime Text features (and IMHO are better than their ST equivalents): mark-multiple[1] and CUA (builtin) for multiple cursors, helm[2] (formerly "anything") for file selection that allows quick previewing (C-z) and much much more, mini-map[3], and expand-region[4] (not sure if this is inspired by a ST feature but it is nice nonetheless).
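For reference, here's roughly how I wire those up in my init.el. Treat it as a sketch: the keybindings are the packages' README suggestions or my own picks, not defaults.

    ;; sketch of an init.el for the packages above; bindings are
    ;; README suggestions / personal picks, not defaults
    (require 'expand-region)
    (global-set-key (kbd "C-=") 'er/expand-region)         ; grow region by syntactic unit

    (require 'mark-more-like-this)                         ; ships with mark-multiple
    (global-set-key (kbd "C-<") 'mark-previous-like-this)
    (global-set-key (kbd "C->") 'mark-next-like-this)      ; add a "cursor" on the next match

    (require 'helm-config)
    (helm-mode 1)                                          ; helm completion in most prompts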
Something like that is also VERY easily accomplished with macros. The added benefit of macros is that you can then store that "edit" in case you need to replay it later.
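For example, a throwaway sketch (the register and the edit itself are arbitrary):

    qa            start recording into register a
    0cwTODO<Esc>  change the first word of the line to TODO
    jq            move down one line, stop recording
    20@a          replay the stored edit on the next 20 lines
    @@            repeat the last-used macro once more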
In Vim, you can edit multiple adjacent lines at the same time using blockwise Visual mode. You can't place cursors on arbitrary, non-adjacent lines and start editing, though; I think I saw a plugin for that once.
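The adjacent-lines case looks like this; for instance, to prefix four lines with a comment marker:

    <C-v>3j     enter blockwise Visual mode, extend the block down 3 lines
    I// <Esc>   insert "// " at the block's left edge; <Esc> applies it to every selected line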
For my part, it wasn't so much that plugins weren't available as it was trying out multiple versions, installing them, configuring them, and troubleshooting them. Sometimes they'd make Vim much slower, or break altogether.
Much of what I wanted out of Vim worked right out of the box in ST2, and with minimal fuss.
Intellisense depends on the language. For C and C++, clang_complete is fantastic. For JavaScript, Python, and Ruby, there's also static-analysis-based completion. IMHO Vim is pretty advanced there.
>I would be interested to hear what modern features are not possible in vim or emacs.
None technically.
There still isn't a very nice IDE/Analysis-refactoring engine for Java in Emacs, but there is one for Scala oddly enough.
So there are rough edges, but the point is that Emacs is a programmable environment that belongs to you and the larger community, not to a solo absentee developer. You can make it do anything you want.
There is also one for OCaml and Haskell. (Well, maybe the Haskell one isn't as good, but I haven't tried it.) In my experience, far more functional programmers like Emacs/Vim, and far more Java people like Eclipse/IDEA.
So in the Java world, the people who care overmuch about IDE features already use an IDE and aren't going to spend time improving Emacs. In functional-programming land, people are less worried about IDE features and so don't use IDEs. However, these features are nice, so people are willing to spend time writing backends like Ensime or TypeRex that can be plugged into Emacs.
Speaking as someone who develops addons for these, it's much easier to take my C libraries and wrap them for Cinder/openFrameworks than it is to either use JNI or create an IPC layer (usually via OSC, http://www.opensoundcontrol.org) into Processing, too. Yay laziness. :)
Much in the same way that Processing can be compiled to work on Android, Cinder can be compiled to work on iPhones & iPads.
Also, I think Cinder has a bit more of a direct approach to OpenGL, whereas Processing has built up a simplified layer on top of JOGL to make doing the more common things easier.
I attended various talks by Ben & Casey at Eyeo, and they mentioned the new 2.0 series of Processing including a lot of great new OpenGL improvements by Andres and others, and that the new GL framework shows performance near OF. In another talk, Ben mentioned prototyping something in Processing, and then porting it to OF for speed.
I'd say that in general we'll see this all matter less and less, what with more GPU-type stuff happening and the advancements in JITs and code translation in general.
Even if Processing 2.0 does make perfect use of the GPU, that distinction only covers the render phase. Calculating collisions for a large number of objects will still be significantly faster in C++. So, like nearly every situation in software, it all depends on what you're trying to do.
For computation-heavy apps, probably. For others it'd probably come down to something at the framework level, such as the efficiency of the OpenGL bindings or event/callback infrastructure.
MATLAB is a bad choice for science. It is proprietary, making it difficult to share work among people who do not own a license, therefore making it more difficult to reproduce results.
Not to mention the fact that the MATLAB language is extremely awkward anytime you want to work with something that isn't a matrix.
GNU Octave [0] is mostly compatible with MATLAB. GPLed and implemented in C++ and Fortran, it's pretty much hackable to your needs.
> MATLAB language is extremely awkward anytime you want to work with something that isn't a matrix.
From my limited experience, that's true but rarely relevant. You can, and want to, use matrix operations all the way: fewer bugs (simpler code), faster execution.
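A trivial sketch of the difference, with a made-up polynomial as the workload:

    x = rand(1000, 1);                 % made-up input data

    % loop version: index bookkeeping, easy to get subtly wrong,
    % slow in the interpreter
    y = zeros(size(x));
    for i = 1:numel(x)
        y(i) = 3 * x(i)^2 + 2 * x(i);
    end

    % matrix version: one line, no indices, operates on the whole array at once
    y = 3 * x.^2 + 2 * x;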
In my experience 90% of scientific data analysis work is: getting the data, cleaning it and transforming it into a form suitable for analysis. MATLAB fails miserably at these tasks.
> 90% of scientific data analysis work is: (...) and transforming it into a form suitable for analysis.
Is that infinite recursion or an endless loop? Or is there an end to it, caused by the quantum nature of work -- at some point the 90% becomes an indivisible unit? ;-)
((terribly sorry, couldn't help it))
At any rate, that sounds like you want to feed the input through a pipe of simple, programmable textual filters -- cue sed, awk, etc. -- and then pipe into the standard input of something. I know Octave can read standard input; with MATLAB I wouldn't be so sure.
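Something like this, say -- the filenames and column choices are hypothetical, but the Octave side really does read its data from standard input:

    % summarize.m -- intended use (hypothetical files/columns):
    %   sed '/^#/d' raw.txt | awk '{print $2, $3}' | octave -q summarize.m
    data = fscanf(stdin, '%f', [2, Inf])';    % stdin is file id 0 in Octave
    printf('n = %d, column means: %g %g\n', rows(data), mean(data));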
I like to take advantage of this by approaching the salesman while holding about $300 worth of cables, negotiating a lower price on the TV, and then returning the cables the next day.