We are now ready to present the algorithm for solving the Decoding Problem that was developed by Andrew Viterbi, originally in the context of decoding convolutional codes. To implement the dynamic programming for the Decoding Problem, we introduce the variable score(k, i), which corresponds to the maximum product weight among all paths from the source to the node (k, i), that is, the node corresponding to state k and located in the i-th column of the graph. Let's consider all predecessors of this node, which correspond to all possible states. Obviously, score(k, i) is equal to the maximum, over all possible states l, of the score in the previous column multiplied by the weight of the edge from node l in the previous column to node k in column i. In other words, it is the maximum over all states l of score(l, i-1) multiplied by the weight of the edge (l, k, i-1). So the recurrence for the Viterbi algorithm is

score(k, i) = max over all states l of score(l, i-1) * weight(l, k, i-1).

The initialization, of course, is score(source) = 1. And the maximum product weight over all paths from source to sink is computed at the sink, where score(sink) is the maximum, over all states, of the scores in the last column of the Manhattan.

Now let's estimate the running time of the Viterbi algorithm in the case when the HMM has forbidden transitions. A forbidden transition means that the probability of switching between a given pair of states is 0. For example, in this HMM diagram there are four states, but you can see that there are no edges connecting some of the states; the missing edges correspond to forbidden transitions in the HMM. You will notice that the number of edges in the Viterbi Manhattan is in this case smaller than in the case of the full HMM diagram, when all transitions are allowed. The running time is then, of course, the number of edges in the HMM diagram multiplied by the number of emitted symbols, n.

You have, of course, noticed that whereas in the past we were maximizing the sum of the weights of edges along a path in a graph, we now maximize the product of edge weights along a path. And when we maximize products, there is a danger of numerical underflow, since a product of many probabilities quickly becomes vanishingly small. Therefore, bioinformaticians prefer to work with logarithms of scores. If we transform the recurrence for score(k, i) by taking its logarithm, then the recurrence corresponding to a product of weights is transformed into a recurrence corresponding to a sum of weights: the transformation substitutes each edge weight by its logarithm and turns the product of weights into the familiar sum of weights. So there is no difference between computing an optimal solution to the Decoding Problem and computing a traditional longest path in a graph, something that we studied before.
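To make the recurrence concrete, here is a minimal sketch of the Viterbi algorithm in log space, along the lines just described. It is not the lecture's own code: the parameter names (initial, transition, emission) and the dictionary-based HMM representation are illustrative assumptions. Forbidden transitions (probability 0) are given a log-weight of negative infinity, so the maximization can never choose them.

```python
import math

def safe_log(p):
    # Forbidden transitions have probability 0; assign them log-weight
    # -infinity so they can never win the maximization.
    return math.log(p) if p > 0 else float("-inf")

def viterbi(x, states, initial, transition, emission):
    """Return the most probable hidden path for the emitted string x.

    Illustrative HMM representation (an assumption, not from the lecture):
    initial[k] is the probability of starting in state k,
    transition[l][k] the probability of moving from state l to state k,
    emission[k][s] the probability that state k emits symbol s.
    """
    n = len(x)
    # score[i][k]: maximum log product weight among paths from the source
    # to node (k, i); back[i][k]: predecessor state on that best path.
    score = [{} for _ in range(n)]
    back = [{} for _ in range(n)]

    # Initialization: score(source) = 1, i.e. a log-score of 0, pushed
    # into the first column through initial and emission probabilities.
    for k in states:
        score[0][k] = safe_log(initial[k]) + safe_log(emission[k][x[0]])

    # Recurrence: score(k, i) = max over l of
    #   score(l, i-1) + log weight(l, k, i-1).
    for i in range(1, n):
        for k in states:
            options = {l: score[i - 1][l] + safe_log(transition[l][k])
                       for l in states}
            best = max(options, key=options.get)
            score[i][k] = options[best] + safe_log(emission[k][x[i]])
            back[i][k] = best

    # score(sink) = max over states of the last column; backtrack from
    # the winning state to recover the hidden path.
    last = max(states, key=lambda k: score[n - 1][k])
    path = [last]
    for i in range(n - 1, 0, -1):
        path.append(back[i][path[-1]])
    return "".join(reversed(path))
```

Since only the edges that actually exist in the HMM diagram contribute finite log-weights, the inner loop could equally well run over each node's predecessor list, which is what gives the running time of (number of edges in the HMM diagram) times n mentioned above.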
Note that we have already learned how to compute the probability of a hidden path π, and it turned out to be easy: it amounts simply to multiplying transition probabilities. Our intuition tells us that it will probably be equally easy to compute the probability of the emitted sequence x. Let's see whether our intuition is correct. When we compute Pr(π), it is simply a product of transition probabilities. But when we compute Pr(x), it is the sum, over all possible hidden paths π, of Pr(x, π), and it is not clear how to compute this sum. Note that Pr(x) is the sum, over all possible hidden paths π, of the product weight of π.

Compare this with score(sink) in our Viterbi Manhattan, which is the maximum, over all possible paths π, of the product weight of π. These two expressions look extremely similar. Let's see whether we can exploit this similarity to come up with an algorithm for computing the probability of the emitted sequence x.

To find out what the most likely outcome of an HMM is, we first need to solve the following Outcome Likelihood Problem: find the probability that an HMM emits a given string. The input is a string x emitted by an HMM, and the output is the probability, Pr(x), that the HMM emits this string. Since you have already seen the similarity between computing Pr(x) and the Viterbi algorithm, let's try to figure out whether we can solve the Outcome Likelihood Problem by changing a single symbol in the Viterbi recurrence.

So let's modify the Viterbi algorithm and introduce a variable forward(k, i), which is the total product weight of all paths from the source to the node (k, i); its predecessors are again the nodes in the previous column, one for each state. Therefore, forward(k, i) equals the sum, over all possible states l, of the values forward(l, i-1) in the previous column multiplied by the weight of the edge from the previous column to node k:

forward(k, i) = sum over all states l of forward(l, i-1) * weight(l, k, i-1).

The only difference between this recurrence for forward(k, i) and the recurrence for score(k, i) is the change of the maximum into a sum; a sketch appears at the end of this section.

Now, this is all interesting, but you may be wondering what it all has to do with biological applications. In the next section we will start looking at applications of HMMs to finding protein domains.
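Here is the promised sketch of the forward algorithm for the Outcome Likelihood Problem, using the same illustrative HMM representation as the Viterbi sketch above (again an assumption, not the lecture's code). As the recurrence promises, it differs from Viterbi by essentially one change: the maximization becomes a summation. Since sums of probabilities do not log-transform as conveniently as products, this version works directly with products; for long sequences one would rescale each column to avoid underflow.

```python
def outcome_likelihood(x, states, initial, transition, emission):
    """Return Pr(x), the probability that the HMM emits the string x.

    forward[i][k] is the total product weight of all paths from the
    source to node (k, i); parameter names are illustrative, as in the
    Viterbi sketch above.
    """
    n = len(x)
    forward = [{} for _ in range(n)]

    # Initialization mirrors Viterbi: forward(source) = 1, pushed into
    # the first column through initial and emission probabilities.
    for k in states:
        forward[0][k] = initial[k] * emission[k][x[0]]

    # Recurrence: forward(k, i) = sum over l of
    #   forward(l, i-1) * weight(l, k, i-1).
    # The only change from the Viterbi recurrence is max -> sum.
    for i in range(1, n):
        for k in states:
            forward[i][k] = emission[k][x[i]] * sum(
                forward[i - 1][l] * transition[l][k] for l in states)

    # Pr(x) = forward(sink) = sum of the values in the last column.
    return sum(forward[n - 1][k] for k in states)

# Usage with a hypothetical two-state HMM (fair coin F, biased coin B):
# states = "FB"
# initial = {"F": 0.5, "B": 0.5}
# transition = {"F": {"F": 0.9, "B": 0.1}, "B": {"F": 0.1, "B": 0.9}}
# emission = {"F": {"H": 0.5, "T": 0.5}, "B": {"H": 0.75, "T": 0.25}}
# outcome_likelihood("HHTH", states, initial, transition, emission)
```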