YCM is great. Depending on the language, though, you may need additional plugins to get "smart" completion into Vim. For instance, tern.js does this for JavaScript.
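As a sketch, a Vundle-style setup combining the two might look like this (repository paths may have moved since; check each project's README for current install instructions, including tern's `.tern-project` file):

```vim
" Hypothetical .vimrc fragment: YCM for general completion,
" tern_for_vim for JavaScript-aware completion on top of it.
Plugin 'Valloric/YouCompleteMe'
Plugin 'ternjs/tern_for_vim'
```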
I agree with Joel Spolsky on this subject [1]. From what I've seen, when people rewrite something monolithic from scratch they just end up making similar mistakes and introducing new problems that weren't there to begin with. Vim's codebase isn't terrible; it's been going strong for 23 years, but it has accrued a lot of technical debt that makes some things difficult. It's also controlled by a single person, and don't get me wrong, that isn't necessarily a bad thing, but it makes getting in the changes necessary to achieve Neovim's goals nearly impossible.
Why? Vim is a well tested codebase. It has many features. Starting over would produce better code, but in what timeframe? I think the neovim contributors made the right decision to refactor.
Tests are being added all the time, and patches from upstream Vim are integrated. Moreover, we do have a review process in place (mostly 2 or more people looking over the code); Travis runs all Vim tests plus the Lua unit tests for each PR, and Coverity scans every 2 days to make sure nothing slips through. Most PRs are actually simplifications of old code, afforded by depending on libuv and by using an abstraction over malloc that doesn't return NULL (note that vanilla Vim was never safe against malloc returning NULL, as was tested by @philix by using the fail allocator). Granted, the new features that have been introduced, like job control, see comparatively little use at the moment, but they do receive review and are used to replace things that are used a lot (such as VimL's system()). By the time we get to a second release, I'm confident it will be pretty well tested; we are preparing for it.
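The idea behind such a malloc abstraction is simple: abort on allocation failure so callers never have to check for NULL. A minimal sketch of the technique (names are illustrative, not Neovim's actual implementation):

```c
#include <stdio.h>
#include <stdlib.h>

/* Sketch of a malloc wrapper that never returns NULL: if the
 * allocation fails, the process aborts instead of handing back a
 * NULL pointer that every caller would have to check. */
void *xmalloc(size_t size)
{
    /* malloc(0) is allowed to return NULL; request at least 1 byte
     * so a NULL return always means genuine allocation failure. */
    void *ret = malloc(size ? size : 1);
    if (ret == NULL) {
        fprintf(stderr, "out of memory, aborting\n");
        abort();
    }
    return ret;
}
```

Centralizing the failure path this way lets every call site drop its NULL check, which is exactly the kind of simplification such PRs tend to be.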
To add to what @aktau said: soon we will have much more robust UI tests.
The current UI tests are hard to write and are based on the ASCII output [2] of the test's commands [1]. By using the Neovim API we will be able to rewrite these tests to look more like automated Selenium-based web app tests and to make more fine-grained assertions.
Probably, yes. I wondered this myself when I heard about this reengineering project, which, having delved into the Vim source before, I agree needs to be done.
However, the time before it was even slightly usable would be rather long, I imagine. I wrote a very basic clone of "ed" in Z80 assembly back in the 90s, and it took me nearly 6 months before I trusted it with a file that wasn't disposable.
Perhaps there should be two projects, one starting at each end of the problem, to see which one wins.
I think the fact that you can continually have a viable program while doing the sort of refactoring they are doing makes it a lot easier for lots of people to work on the project. Starting from scratch would mean a long stretch of time during which only people with a very clear design vision could write components.
I think there was already some idea of the changes they wanted to make, so it made sense to fork, since the maintainer wouldn't integrate them.
That's not possible. Read the original text on Bountysource [1]: the whole idea behind the project was to make a clean break from Vim and implement changes that were ignored when they were brought up on the Vim mailing list.
The code is just being refactored, so most of the code that needs to be patched still exists. The patches are all quite small as well; the code they touch just lives in different places.
1. The speed of the refactoring.
2. The (apparent) ability to come to a consensus quickly.
3. The quality and depth of the communication of the progress and ideas.
I wish them every success.