So we have talked about dynamic programming, and we showed how to use it to solve the RNA secondary structure problem efficiently. And we said that it gives us an advantage over a recursive algorithm in terms of saving us from recomputing solutions to subproblems we had already solved. With dynamic programming, we store these values. A recursive algorithm, when it makes a new call to the function on an instance of the problem, does not know whether the function has already been called on that instance or not, so it will repeat the computation. I want to illustrate the difference between the two on a computation that involves counting subsets. Remember that we talked about this notation here, n choose k. Right? n choose k is the number of ways of choosing a subset of size k from a set of size n. We can think about this recursively. Suppose I have a set of n elements, and I am building a subset of size k from it. Now think about the element n. When I am building a subset, there are two cases: either n is part of the subset, or n is not. If n is in the subset, then to continue building it I need to add k minus 1 more elements from the set of elements 1 to n minus 1. If I don't want to put n in the subset, then I need to choose all k elements from the set of 1 to n minus 1. So this is how we would recursively count the subsets of size k of a set of n elements.
For each element, either I put it in the subset or I don't. If I put it in, I repeat the process of choosing k minus 1 elements from the remaining elements. If I don't put it in, I still have to choose k elements, because so far I have chosen zero, but from the remaining set of elements. So if this is the number of ways of choosing k elements, what is the number of ways of choosing k minus 1 elements from the remaining set? That set has n minus 1 elements and I am choosing k minus 1, so it is n minus 1 choose k minus 1. In the other case I have n minus 1 elements and I am choosing k, which gives n minus 1 choose k. The total number of subsets is the number I can form one way plus the number I can form the other way, and this is why we have the very common, famous relationship: n choose k equals n minus 1 choose k plus n minus 1 choose k minus 1. The first term corresponds to the case where the last element we are looking at, n, is not part of the subset; the second corresponds to the case where it is. Now suppose I want to write a recursive algorithm to compute this. Of course we have the base cases: n choose 0 is 1 for any n greater than or equal to zero, since there is one way to choose zero elements, and 0 choose k is 0 for any k greater than zero. So we have these two base cases. Now I can turn this into a procedure, compute-n-choose-k, where I get n and k. First I check: if k equals zero, return 1, which is this first base case; else if n equals zero and k is greater than zero, return 0. Else, return compute-n-choose-k on n minus 1, k, plus compute-n-choose-k on n minus 1, k minus 1. Okay? So this is how I would turn the formula into an algorithm.
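The procedure just described can be sketched in Python as follows; the function name `choose` is my own choice, but the logic follows exactly the base cases and the recurrence above.

```python
def choose(n, k):
    # Base cases: there is one way to choose zero elements,
    # and no way to choose k > 0 elements from an empty set.
    if k == 0:
        return 1
    if n == 0:
        return 0
    # Either element n is left out (choose k from the first n-1 elements)
    # or element n is included (choose k-1 from the first n-1 elements).
    return choose(n - 1, k) + choose(n - 1, k - 1)

print(choose(10, 5))  # 252
```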
So now if I want to compute the value of 10 choose 5, the algorithm is going to go through this recursion and do the computation. Okay? But this is again a recursive implementation. If I wanted to do dynamic programming, I would have two loops, say for i equal 0 to n and for j equal 0 to k, and I would build the matrix of values one step at a time. Now let's look at how this recursive algorithm performs on some instance of the problem. To make things easy, I am going to shorten the notation: every time I make this call, I will write it as (n minus 1, k) and (n minus 1, k minus 1). So suppose I am running this to compute, let's say, 10 choose 5. I call compute-n-choose-k on (10, 5). Neither of the base cases applies, so it comes here: it makes two calls to the function, on (9, 5) and on (9, 4). Then when it comes to compute (9, 5), it calls (8, 5) and (8, 4); for (9, 4) it calls (8, 4) and (8, 3). Then it continues: (7, 5), (7, 4), (7, 4), (7, 3), (7, 4), (7, 3), (7, 3), (7, 2). Each one of these pairs in parentheses is really a call to the function, so the function is being called over and over and over. When we call it on (10, 5), that means two calls with parameters (9, 5) and (9, 4); each of those calls the function twice more, and so on. At the next level we get (6, 5), (6, 4), (6, 4), (6, 3), (6, 4), (6, 3), (6, 3), (6, 2), (6, 4), (6, 3), (6, 3), (6, 2), (6, 3), (6, 2), (6, 2), (6, 1), and it continues, because it hasn't reached any of the base cases yet. Now look at what is happening here: (6, 4) appears three times, and (6, 3) appears six times. So this recursive algorithm is going to call itself six times on the very same parameters, (6, 3). Okay?
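The two-loop dynamic programming version mentioned above might look like this; the table name `C` and the exact loop bounds are my choices, but the idea is the one from the lecture: fill a matrix from smaller subproblems to larger ones.

```python
def choose_dp(n, k):
    # C[i][j] will hold "i choose j", built one step at a time.
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        C[i][0] = 1  # base case: i choose 0 = 1
    for i in range(1, n + 1):
        for j in range(1, k + 1):
            # Each entry is the sum of two already-computed entries,
            # following n choose k = (n-1 choose k) + (n-1 choose k-1).
            C[i][j] = C[i - 1][j] + C[i - 1][j - 1]
    return C[n][k]

print(choose_dp(10, 5))  # 252
```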
Because the recursive algorithm, when it reaches one of these calls and says, okay, now I need the value of (6, 3), it is going to call the function. It doesn't know that somewhere else it has already computed it. And this example illustrates the problem, in this case, with recursive algorithms. We have these overlapping subproblems. This subproblem, 6 choose 3, is overlapping: it is here, and here, and here, and here, and here, and here. What this means is that these six paths in the recursion tree all overlap at this subproblem. The recursive algorithm is not going to keep track of that. It doesn't know that they overlap, so it is going to keep recomputing them. Suppose that computing 6 choose 3 takes a million steps. The algorithm is going to do a million, plus a million, plus a million, six times over. The dynamic programming algorithm keeps track of this. It has one entry in the matrix, because it is building it: if this axis is k, the entry is at 3; if this axis is n, the entry is at 6. It might take a million steps to compute that value the first time, though it will actually take fewer, because the overlapping inside that subproblem is also being exploited. But even if it takes a million steps once, every time the value is needed afterwards, the computer takes one step to look it up, one step, one step, and so on. Okay? So this is the major difference between dynamic programming and recursion. Notice that to have a dynamic programming algorithm, I had to have a recursive formula. I had OPT(i, j) equal max of OPT(i, j minus 1) and so on. When I have a recursive formula, the natural thing to think about is: let me implement it recursively, let me have a recursive algorithm.
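We can confirm this repetition by instrumenting the recursive sketch with a call counter (the counter and variable names are mine): computing 10 choose 5 really does invoke the function on (6, 3) six times.

```python
from collections import Counter

calls = Counter()  # how many times each (n, k) pair is requested

def choose(n, k):
    calls[(n, k)] += 1
    if k == 0:
        return 1
    if n == 0:
        return 0
    return choose(n - 1, k) + choose(n - 1, k - 1)

result = choose(10, 5)
print(result, calls[(6, 3)])  # 252, with (6, 3) computed 6 times
```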
But when the problem has this nature, where the subproblems are overlapping, recursion, a recursive implementation, is going to be costly, because these subproblems are going to be computed over and over and over. Every time it sees that subproblem, the function is going to compute it from scratch, whereas dynamic programming overcomes this problem by storing the values in a matrix, and by its nature of going from smaller subproblems to larger subproblems, so that by the time it comes to (9, 5) it has already computed the smaller values. So when it comes to compute (9, 5), all it needs to do is look up this value, look up this value, and add the two values. When it comes to compute (8, 5), it gets this, it gets this, and adds them. Here, when the recursive algorithm comes to compute (7, 4), it is going to compute (6, 4), taking all the time to compute it, then take all the time to compute (6, 3), and then add the two numbers. Dynamic programming, when it wants to compute this, just looks up this value, looks up this value, and adds the two. Okay? So there is a lot in common between a recursive algorithm and a dynamic programming algorithm. At their core, there is some sort of recursive formula that corresponds to the optimization criterion, or the optimality criterion in our score, that we are trying to optimize. The recursive algorithm is going to implement that formula naturally, calling the function every time it needs a value. Dynamic programming instead is going to store the values of these subproblems, and every time it sees a subproblem again, it just looks up the value. Of course, if there is no overlapping, if I draw this tree and I never see the same subproblem twice, we are not going to gain anything from a dynamic programming implementation, because the power of dynamic programming comes when we have overlapping subproblems. Okay?
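As an aside, the same "store and look up" idea can also be bolted directly onto the recursive sketch by caching results. This is a top-down variant, not the bottom-up table the lecture builds, but it shows the same principle: each subproblem is computed once, and every later occurrence is a single lookup.

```python
from functools import lru_cache

# Same recursion as before, but lru_cache stores each result,
# so repeated subproblems like (6, 3) are looked up, not recomputed.
@lru_cache(maxsize=None)
def choose_memo(n, k):
    if k == 0:
        return 1
    if n == 0:
        return 0
    return choose_memo(n - 1, k) + choose_memo(n - 1, k - 1)

print(choose_memo(10, 5))  # 252
```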
So this is a case that illustrates why dynamic programming is a good algorithmic technique: we store the values and we retrieve them when we need them, whereas a recursive algorithm is going to keep calling the function and doing the computation from scratch, even though it has done it many times. Okay? So keep this in mind when you are thinking about recursive versus dynamic programming algorithms, and I highly, highly encourage you to implement the dynamic programming algorithm for RNA secondary structure, and to implement a recursive version of it, and use a large sequence, let's say a sequence with 200 or 500 symbols in it, and run the two algorithms and see what you get. Okay? Again, you will notice that the recursive algorithm is going to keep computing the solution to the same subproblem over and over, whereas dynamic programming will never do that. It will compute the solution to a subproblem once, and then later it will just look it up in the matrix.