everyone always says this as if it's some kind of galaxy brain epiphany. it's not, because:
1) in interviews they very often want you to explicitly construct the array, i.e. it's the difference between a hire and a strong hire, exactly because @lru_cache is much easier
2) you can easily blow the stack in real production-grade implementations using recursion (think edit distance for genome sequences), and you also pay the overhead of the recursive calls themselves. on the other hand, it's easier to prune the search space with explicit recursion than with tabulation.
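to make the stack point concrete, here's a minimal sketch (the recurrence is a made-up stand-in, not edit distance): the memoized version recurses once per unit of n, so a large input dies with RecursionError under CPython's default limit, while the tabulated version runs in constant stack depth.

```python
from functools import lru_cache

# Hypothetical linear recurrence: call depth grows with n.
@lru_cache(maxsize=None)
def f(n):
    return 1 if n == 0 else f(n - 1) + n

# Same recurrence, tabulated bottom-up: no recursion at all.
def f_tab(n):
    table = [0] * (n + 1)
    table[0] = 1
    for i in range(1, n + 1):
        table[i] = table[i - 1] + i
    return table[n]

try:
    f(100_000)  # far past CPython's default recursion limit (~1000)
except RecursionError:
    print("recursion blew the stack")

print(f_tab(100_000))  # fine: O(1) stack depth
```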
edit: btw i'll also say that dynamic programming proper, i.e. as a technique for solving optimization problems defined by recurrence relations, uses tables and recursion (fixed points). so good luck understanding something like the linear quadratic regulator if you think it's just @lru_cache
> in interviews they very often want you to explicitly construct the array, i.e. it's the difference between a hire and a strong hire, exactly because @lru_cache is much easier
are you serious? someone really asked you to "use an array"? beyond stupid if they really did.
> you can easily blow the stack in real production-grade implementations using recursion
a recursive solution doesn't automatically mean using the method stack. you can implement recursive solutions using an explicit stack.
It's not stupid to use an array, because an array is the most natural data structure for capturing the values of one, two, or more variables that can each have different ranges.
For example, if I have a function 'foo(a, b, c)' and I am using subproblems whose solutions use smaller values of a, b, and c, then the most natural solution is to tabulate the solutions in a 3-d array with the (fixed) values of a, b, and c indexing into the 3-d array to get at the (solved) subproblem.
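A minimal sketch of that 3-d tabulation. The recurrence here is hypothetical (counting monotone lattice paths from (0, 0, 0) to (a, b, c)), but the indexing pattern is the general one: each cell is solved from cells with strictly smaller indices.

```python
def foo(a, b, c):
    # table[i][j][k] holds the solved subproblem for values (i, j, k)
    table = [[[0] * (c + 1) for _ in range(b + 1)] for _ in range(a + 1)]
    for i in range(a + 1):
        for j in range(b + 1):
            for k in range(c + 1):
                if i == 0 and j == 0 and k == 0:
                    table[i][j][k] = 1  # base case
                else:
                    # each subproblem uses strictly smaller i, j, or k
                    if i > 0:
                        table[i][j][k] += table[i - 1][j][k]
                    if j > 0:
                        table[i][j][k] += table[i][j - 1][k]
                    if k > 0:
                        table[i][j][k] += table[i][j][k - 1]
    return table[a][b][c]

print(foo(2, 2, 2))  # multinomial(6; 2, 2, 2) = 90
```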
It's of course also fine to use a hash table to construct the triplet (a, b, c) and use it as the key to store (and look up) the solution to the subproblem that corresponds to the particular value of (a, b, c), and most reasonable interviewers would be happy to accept this as a valid approach.
As for recursive solutions automatically using the method stack: the overwhelming majority of programming languages provide this facility, and the generally accepted understanding is that if you use it, you are writing a recursive solution, letting the compiler/machine/runtime maintain the stack for you. The generally accepted terminology when you explicitly allocate your own stack is 'iterative'.
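For the record, the "recursive structure, explicit stack" approach looks like this (Fibonacci is just a stand-in recurrence): the same top-down memoized computation, but with a hand-managed stack instead of the call stack, so deep inputs can't raise RecursionError.

```python
def fib_iter(n):
    memo = {0: 0, 1: 1}
    stack = [n]  # hand-managed stack of pending subproblems
    while stack:
        m = stack[-1]
        if m in memo:
            stack.pop()  # already solved
        elif m - 1 in memo and m - 2 in memo:
            # both dependencies solved: combine and retire m
            memo[m] = memo[m - 1] + memo[m - 2]
            stack.pop()
        else:
            # push unsolved dependencies; m is revisited once they resolve
            if m - 1 not in memo:
                stack.append(m - 1)
            if m - 2 not in memo:
                stack.append(m - 2)
    return memo[n]

print(fib_iter(10))  # 55
fib_iter(2000)  # a depth that would crash the plain recursive version
```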
Also, the generally accepted terminology is 'Dynamic Programming' == 'Bottom-up tabulation', and 'Memoization' == 'Top-down recursion with caching to avoid solving the same subproblem'.
OP's saying (I think) that the difference between brute-force recursion and top-down DP is memoization, and that the language/library construct lru_cache will perform that memoization for you.
It's not a fascinating insight. If you can express a problem as a recursive brute-force solution and there's at least a little recomputation involved, you're one step from DP. lru_cache just does that step.
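That one step, concretely (Fibonacci as the usual toy example): the brute-force recursion is unchanged, and the decorator is the entire "step".

```python
from functools import lru_cache

# Without the decorator this recomputes subproblems exponentially many
# times; with it, each fib(k) is computed exactly once.
@lru_cache(maxsize=None)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(35))  # instant with the cache; painfully slow without it
```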
One thing of note: the arguments to the function you are memoizing need to be hashable. For example, if you try to memoize a call whose parameters include a list, you will get a TypeError!
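Quick demo of the failure and the usual workaround, passing a tuple instead of a list (the function here is just a toy):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def best(items):
    return max(items)

try:
    best([3, 1, 2])  # lists aren't hashable, so the cache key can't be built
except TypeError as e:
    print(e)  # unhashable type: 'list'

print(best((3, 1, 2)))  # a tuple is hashable: prints 3
```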
works for me every time on leetcode.