I guess I don't quite understand the root purpose of program synthesis. It's my impression, as a programmer, that the difficult task is defining one's program logic without ambiguity. If this is indeed the case, then program synthesis already exists in the form of Coq, but that doesn't mean we've made programming easier; it means we've created a tool that reveals all possible failure points of a program, forcing us to consider all possible outcomes. It makes for a provably un-crashable program, but we pay for it in the time it takes to create the specification/implementation (which at this point have converged into one). If I can write a specification in a language that some tool can transform into a source code implementation (which can then be compiled and executed), then I wouldn't call that tool a "program synthesizer" but rather a "compiler" (and the specification language would then be a programming language).
> It's my impression, as a programmer, that the difficult task is defining one's program logic without ambiguity.
Certainly the unambiguous specification of a program's behavior is a difficult task! Barliman's approach is to allow for the behavior of a function to be partially specified. That is, the specification of behavior is ambiguous, since the examples can only specify the intended behavior over a small number of inputs/expected outputs. Many other synthesis tools allow for ambiguous specification of behavior, and use heuristics such as parsimony (smallest program is assumed to be the correct choice) to pick one of infinitely many programs consistent with the partial specification.
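To make that concrete (a made-up illustration, not Barliman's actual interface): even two input/output examples leave a definition badly underconstrained, and parsimony is one way to break the tie.

    ;; Two definitions consistent with the partial specification
    ;;   (f 1) => 2    and    (f 2) => 3
    (define (f-small x) (+ x 1))             ; parsimony prefers this smaller program
    (define (f-table x) (if (= x 1) 2 3))    ; also consistent, but larger and ad hoc

    ;; Both pass the example-based tests:
    ;; (f-small 1) => 2    (f-small 2) => 3
    ;; (f-table 1) => 2    (f-table 2) => 3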
The idea for Barliman is to make the writing/editing of the code interact with synthesis-by-example, in order to give the programmer continuous feedback on whether the current code is consistent with the tests, and hopefully to make a reasonable guess at any missing code. As the programmer fills in the code for a function, the search space shrinks. Still, there may be infinitely many definitions that satisfy the partial specification, so Barliman sometimes guesses a goofy definition!
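Under the hood, this kind of example-guided search can be expressed (very roughly) as a miniKanren query against a relational Scheme interpreter, with the missing code represented by a logic variable. The sketch below is only illustrative: it assumes an `evalo` relation that handles `letrec`, and the names `body` and `my-append` are made up.

    ;; Sketch: fill in the hole `body` so that all three example tests
    ;; evaluate to their expected results.
    (run 1 (body)
      (evalo `(letrec ((my-append (lambda (xs ys) ,body)))
                (list (my-append '() '())
                      (my-append '(a) '(b))
                      (my-append '(c d) '(e f))))
             '(() (a b) (c d e f))))

Any code the programmer has already written simply replaces part of `body`, which is what shrinks the search space.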
I want to explore adding high-level properties, contracts, optional types, etc., to Barliman, to make it possible to specify the desired behavior in more detail. However, I want to make all of these specifications optional, to maintain the flexibility of a language like Scheme. I think this is still an under-explored area in program synthesis, especially given how much code is written in JavaScript, Python, Ruby, etc.
> If I can write a specification in a language that some tool can transform into a source code implementation (which can then be compiled and executed), then I wouldn't call that tool a "program synthesizer" but rather a "compiler" (and the specification language would then be a programming language).
It is true that if you completely specify/prove correct your program in Coq (for example), you can extract a running program from that specification. This is an interesting and powerful technique, and is in many ways similar to a compiler, as you point out.
However, I don't think of this approach as program synthesis. In program synthesis I might give a non-constructive specification of a program's behavior, and ask the computer to find such a program. For example, I could completely specify what it means for a number `n` to be the square root of another number `m` without giving an algorithm to find `n`. If I know of such an algorithm, I may be able to prove it correct in Coq, and extract running code from Coq. However, I can't expect that Coq will find the algorithm for me. To me this is the difference between proving code correct and performing synthesis.
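As a sketch of that distinction in plain Scheme (the predicate name is mine, not from any particular system):

    ;; A non-constructive specification: this says what it means for n to
    ;; be the exact square root of m, but gives no algorithm for finding n.
    (define (square-root? n m)
      (= (* n n) m))

    ;; Checking is easy:  (square-root? 12 144) => #t
    ;; Synthesis is the hard direction: given m, *find* an n such that
    ;; (square-root? n m) holds.  A relational version of this spec
    ;; (e.g., using miniKanren's `*o` arithmetic) can be run backwards
    ;; to search for such an n.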
However, the division is somewhat murky, since some forms of synthesis can be used to prove correctness of the synthesized program. For example, a well-known demonstration in miniKanren (the system Barliman uses for synthesis) is generating a quine directly from the mathematical definition of a quine. A quine in Scheme is an expression `q` that evaluates to itself. In a relational Scheme interpreter implemented in miniKanren, we can generate quines by running the query:
(run n (q) (evalo q q))
Here `n` is the number of quines you would like to generate, `evalo` is the name of the relational Scheme interpreter, and `run` is miniKanren's interface operator. One of the quines generated is (modulo renaming of the fresh logic variable) the classic Scheme quine:
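    ;; The classic Scheme quine.  miniKanren prints it with a fresh logic
    ;; variable (_.0) in place of `x`, along with some side conditions on
    ;; that variable.
    ((lambda (x) (list x (list 'quote x)))
     '(lambda (x) (list x (list 'quote x))))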
In the case of quine generation, we are proving that the generated program is a quine by synthesizing it from its definition. (Assuming that both miniKanren and the relational Scheme interpreter are correct, of course!) We have also synthesized fixpoint combinators from their definitions in a similar fashion. Note that this approach is different from, but complementary to, the synthesis-from-example approach used by Barliman.
If the quines example interests you, you might look at our paper and code from the 2012 Scheme Workshop:
William E. Byrd, Eric Holk, and Daniel P. Friedman.
miniKanren, Live and Untagged: Quine Generation via Relational Interpreters (Programming Pearl).
Proceedings of the 2012 Workshop on Scheme and Functional Programming, Copenhagen, Denmark, 2012.