It's also easy to explain to someone who has not studied parsing, and easy for them to implement. That was why I chose recursive descent when I needed to trick someone into writing a C compiler.
This was back in the early '80s, when there was much more variety in personal computers, both in CPU architecture and operating system, and C was fairly rare on such small machines.
A friend who was a really good programmer, but who had not taken the compiler course in college (he was a math major, not a CS major), had started a company aiming to produce a C compiler for a new small computer that was coming out in a few months, with the compiler meant to be available as soon as the computer did. He'd hired another friend to write the compiler.
This was well before the internet, when advertising was a lot slower than it is now. The company had to buy its ads in the relevant computer magazines well ahead of time.
That guy writing the compiler was a genius, and was on track to produce an amazing compiler, especially given the limited resources of personal computers back then... but it turned out he was not a fast genius. The compiler would not be finished until long after the new computer launched.
The first friend was getting rather depressed, knowing he had to get someone faster to write the first version of the compiler but not knowing anyone else who could do it and was available.
So one evening I decided to convince him that he could do it himself. I sat him down in front of a whiteboard, and told him he needed three things: a lexical analyzer to tokenize the input, a parser to parse the token stream, and a code generator to turn the parse tree into assembly.
I then outlined recursive descent parsing on the whiteboard, giving examples of using it on C. I was able to convince him that parsing was not hard, and that he could do it. I then moved on to code generation, and showed him that he could generate code for a simple stack-based virtual machine, and that this could then easily be mapped to the real, register-based CPU. It would be inefficient, but I also went over how he could later add a simple optimizer to remove much of that inefficiency.
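To give a feel for why recursive descent fits on a whiteboard, here is a minimal sketch of the idea in C. It is not anything from the actual compiler: it parses single-digit arithmetic expressions and emits code for a made-up stack machine as it parses, and the grammar and the PUSH/ADD/SUB/MUL/DIV ops are purely illustrative assumptions.

    /* Illustrative sketch only: recursive descent over the grammar
         expr   ::= term (('+'|'-') term)*
         term   ::= factor (('*'|'/') factor)*
         factor ::= digit | '(' expr ')'
       emitting hypothetical stack-machine ops. Whitespace handling,
       real tokens, and error recovery are omitted for brevity. */
    #include <stdio.h>
    #include <stdlib.h>

    static const char *src;   /* current position in the input */

    static void expr(void);   /* forward declaration: factor calls expr */

    static void factor(void) {
        if (*src == '(') {
            src++;            /* consume '(' */
            expr();
            if (*src != ')') { fprintf(stderr, "expected )\n"); exit(1); }
            src++;            /* consume ')' */
        } else if (*src >= '0' && *src <= '9') {
            printf("PUSH %c\n", *src++);   /* operand goes on the stack */
        } else {
            fprintf(stderr, "unexpected '%c'\n", *src);
            exit(1);
        }
    }

    static void term(void) {
        factor();
        while (*src == '*' || *src == '/') {
            char op = *src++;
            factor();
            printf(op == '*' ? "MUL\n" : "DIV\n");
        }
    }

    static void expr(void) {
        term();
        while (*src == '+' || *src == '-') {
            char op = *src++;
            term();
            printf(op == '+' ? "ADD\n" : "SUB\n");
        }
    }

    int main(void) {
        src = "1+2*(3-4)";
        expr();   /* prints the stack-machine code for the input */
        return 0;
    }

Each grammar rule becomes one ordinary function, operator precedence falls out of which function calls which, and the stack-machine output needs no register allocation. That is exactly what makes the approach explainable in an evening.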
Once I had him convinced that he could handle parsing and code generation and some optimization by their deadline, I pretended we were done and started to leave. He interrupted and asked about lexical analysis.
I acted embarrassed that I had forgotten it, "admitted" that it was the hardest part, said I'd try to think of something, and went home.
Of course that was a lie. When I got home, I quickly wrote a lexical analyzer for C, and then the next morning went back and handed it to him.
He now actually believed that the hardest part was done, and was sure that he could do the remaining, easier parts himself. And he did. We both got a good laugh later when I finally told him that lexical analysis is the easiest part.