This is really cool. The AST transformation stuff here is neat, but relatively well-trodden ground.
The more impressive new science here is the lazy_function decorator, which is implemented as a bytecode transformer on the code object that lives inside the decorated function. The author built his own library for the bytecode stuff, which lives here: https://github.com/llllllllll/codetransformer.
You said in a comment that you're looking for a use case for this technique, so I'll provide one for something similar; perhaps we'll get ideas.
I've been toying with something similar lately, as a caching framework for scientific computations. I will have something like:
x = load_big_file(filename) # takes 2 minutes
y = sqrt(1 / x ** 2) # takes 4 seconds
...
Then, as my work proceeds, I change, tweak, and re-run in the same process... clearly a lot of my time could be saved by caching (though a different part each time, depending on what I tweak).
The current way to go is joblib, which provides a decorator you put on a function for basic caching. However, you have to remember to clear the cache manually when a function you depend on changes, and sometimes it clears itself because you changed something irrelevant to the computation.
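To make the pitfall concrete, here is a toy stdlib-only sketch in the spirit of a joblib-style caching decorator (not joblib's actual implementation): the cache key is built from the arguments only, so editing the function's body silently serves stale results until you clear the cache by hand.

```python
import functools, hashlib, os, pickle, tempfile

CACHE_DIR = tempfile.mkdtemp()  # stand-in for a persistent cache directory

def disk_memoize(fn):
    """Toy joblib-style on-disk memoizer (sketch, not joblib itself).

    The key covers only the function name and arguments, NOT the
    function's body -- so changing the body keeps serving old results.
    """
    @functools.wraps(fn)
    def wrapper(*args):
        key = hashlib.sha1(pickle.dumps((fn.__name__, args))).hexdigest()
        path = os.path.join(CACHE_DIR, key)
        if os.path.exists(path):
            with open(path, "rb") as f:
                return pickle.load(f)  # cache hit: skip the computation
        result = fn(*args)
        with open(path, "wb") as f:
            pickle.dump(result, f)
        return result
    return wrapper

calls = []

@disk_memoize
def square(x):
    calls.append(x)  # record real invocations so we can see cache hits
    return x * x
```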
So the lazy alternative I had in mind is a lazily evaluated tree similar to this one, whose purpose is to look up a cache using the AST as the key. What I have now looks more like this:
@pure
def load_big_file(filename): ...
>>> x = load_big_file(lazy(fileref(filename))) # fileref is like a string but hashes by timestamp of file..
>>> y = sqrt(1 / x ** 2)
>>> print y
<lazy 23wfas
input:
v1: 32rwaa "/home/dagss/data/myfile.dat" @ 2015-03-03 08:43:23
program:
e0: 43wafa v1**2
e1: 4rfafq 1 / e0
e2: sqrt(e1)
>
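The fileref above is the commenter's own invention; a minimal sketch of the idea (a path-like object that hashes by the file's modification time, so a changed file produces a new cache key) might look like:

```python
import os

class FileRef:
    """Hypothetical fileref: wraps a path but hashes by the file's
    modification time, so editing the file invalidates cache keys."""

    def __init__(self, path):
        self.path = path

    def __hash__(self):
        return hash((self.path, os.path.getmtime(self.path)))

    def __eq__(self, other):
        return (isinstance(other, FileRef)
                and self.path == other.path
                and os.path.getmtime(self.path) == os.path.getmtime(other.path))
```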
The point is that every node in the syntax tree gets a hash (leaves are hashed by their value; inner nodes by the operation plus the hashes of their inputs). Implementing a cache on top of that is then simply:
NULL = object()
result = cache.get(y, NULL)  # y hashes by its structure, so it is the key
if result is NULL:
    result = cache[y] = compute(y)
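The per-node hashing this relies on can be sketched like so (names and the 8-character truncation are illustrative, mirroring the printed tree above):

```python
import hashlib

def leaf_hash(value):
    # Leaf: hash the value itself (real code would need something
    # sturdier than repr for arrays, file contents, etc.).
    return hashlib.sha1(repr(value).encode()).hexdigest()[:8]

def node_hash(op, *input_hashes):
    # Inner node: hash the operation name plus its inputs' hashes,
    # so identical subexpressions always get identical keys.
    h = hashlib.sha1(op.encode())
    for ih in input_hashes:
        h.update(ih.encode())
    return h.hexdigest()[:8]

# Rebuilding the example tree: sqrt(1 / x ** 2)
v1 = leaf_hash("/home/dagss/data/myfile.dat")
e0 = node_hash("pow", v1, leaf_hash(2))
e1 = node_hash("div", leaf_hash(1), e0)
e2 = node_hash("sqrt", e1)
```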
I'm leaning a bit towards explicit being better than implicit, though (calling compute(y) yourself, rather than evaluation happening transparently whenever the value is needed).
Currently, there is no way to clear an object's cache in lazy, because it doesn't hold onto the function and arguments longer than needed; this ensures the gc can step in and collect unneeded objects. The built-in memoization could help with this, though.
Also, in lazy you must call `strict` on things to get a result; it is not implicit. What is somewhat implicit is that calling `bool(somethunk)` (or any other converter implemented with a magic method) returns a strict value, because the Python data model requires it.
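That forcing-via-magic-methods behavior can be illustrated with a minimal thunk sketch (this is an assumption-laden toy, not lazy's actual implementation):

```python
class Thunk:
    """Minimal thunk: defers a call until forced."""

    def __init__(self, fn, *args):
        self._fn, self._args = fn, args
        self._forced, self._value = False, None

    def strict(self):
        # Explicitly force evaluation, caching the result.
        if not self._forced:
            self._value = self._fn(*self._args)
            self._forced = True
        return self._value

    def __bool__(self):
        # Python requires __bool__ to return an actual bool, so any
        # truth test on the thunk must implicitly force it.
        return bool(self.strict())
```

So `if somethunk:` evaluates the deferred computation even though `strict` was never called by name.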
The codetransformer library that I worked on actually has some real use cases, namely exposing an object to a function at runtime without making it available to the calling code by name.
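codetransformer does this at the bytecode level; as a simpler stand-in for the same idea (not codetransformer's mechanism), one can rebuild a function with an augmented globals dict, so the function can see a name that the caller's namespace never contains:

```python
import types

def inject_global(fn, name, value):
    # Rebuild fn around the same code object, but with a globals
    # dict that also binds `value` under `name`. The caller's own
    # namespace is never touched.
    g = dict(fn.__globals__)
    g[name] = value
    return types.FunctionType(fn.__code__, g, fn.__name__,
                              fn.__defaults__, fn.__closure__)

def f():
    return secret  # resolves only inside the injected globals

g = inject_global(f, "secret", 42)
```

Here `g()` succeeds while `f()` would raise NameError, and `secret` is never visible to the caller.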
As for lazy itself, it was developed purely for fun in my spare time; that doesn't mean it isn't useful, but there was no intended use case and it was not designed to solve a particular problem.