The third and final implementation of a graph that we're going to talk about is the adjacency list implementation. This combines some of the features of both of our implementations so far into a new structure that's going to have some unique properties. Let's take a look. Here, we have the exact same setup: we have a vertex list over here, again implemented as a hash table so we have nice O(1) access time, and we have our same edge list over here. What we're going to do in an adjacency list is maintain, off of each vertex in the vertex list, a linked list of all the edges adjacent to that vertex. So this means that off of u, I'm going to have a linked list that records the fact that edge a, as well as edge c, is incident to our vertex u. What this means is that here, for a, I'm not just going to store the label a; I'm going to store a pointer to edge a itself, and likewise a pointer to edge c itself. In the edge list, instead of storing u and v directly, each edge can simply store pointers back to its locations: this is a pointer to u's entry inside my adjacency list, and this is a pointer to v's entry inside my adjacency list. So we have pointers that come off of the linked lists from our set of vertices and link into the edge list, and the edge list then links back to the linked list elements inside my adjacency list. So we have pointers going in both directions between my list of nodes and my list of edges, and this becomes a huge mess of different pointers. Let's look at another one to see how this starts to build out. Let's look at w, to give myself a little bit of space. We see that w's first edge is going to be the edge wu; this is edge c. So this node will have a pointer to edge c, and the w pointer inside edge c is going to point back to this node in w's list. This node itself has a next pointer that's going to point to my next edge, which may be edge b, connecting v and w.
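The double-pointer layout described above can be sketched in C++ roughly like this. This is a hypothetical sketch, not the course's actual code: the struct and member names (`IncidenceNode`, `uEntry`, `vEntry`, and so on) are my own, and I'm using `std::list` iterators to play the role of the back pointers from each edge into the two incidence lists it appears in.

```cpp
#include <iterator>
#include <list>
#include <string>
#include <unordered_map>

struct Edge;

// One node of a vertex's linked list of incident edges.
struct IncidenceNode {
    Edge* edge;  // points into the edge list
};

// An edge stores iterators back to its two entries in the incidence
// lists, mirroring the "pointers going both ways" in the picture.
struct Edge {
    std::string u, v;
    std::list<IncidenceNode>::iterator uEntry, vEntry;
};

struct Graph {
    // Vertex list as a hash table: label -> linked list of incident edges.
    std::unordered_map<std::string, std::list<IncidenceNode>> vertices;
    std::list<Edge> edges;  // the edge list itself

    void insertVertex(const std::string& k) { vertices[k]; }  // O(1) amortized

    void insertEdge(const std::string& u, const std::string& v) {
        edges.push_back(Edge{u, v, {}, {}});
        Edge* e = &edges.back();  // std::list elements never move
        vertices[u].push_back(IncidenceNode{e});
        vertices[v].push_back(IncidenceNode{e});
        // Wire the back pointers from the edge into both incidence lists.
        e->uEntry = std::prev(vertices[u].end());
        e->vEntry = std::prev(vertices[v].end());
    }
};
```

`std::list` is used for both lists precisely because its elements and iterators stay valid as the lists grow, which is what lets the edge safely hold pointers back into them.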
So again, a pointer to the edge list node for b, and b's w pointer back to our location in w's list. The very last node in w's list is for edge d: it points to edge d, and edge d's w pointer points back to this node. So what we have is this totally connected world where all of these pointers interconnect the edge list and the actual implementation of our node list. This is a crazy mess of pointers, complicated to think about and implement. But let's see how great the running times are. If we want to insert a vertex, all we need to do is add the new vertex k here at the bottom, and there's nothing else to do. We don't have to worry about creating an initial linked list, because initially this will just be a null pointer, and we don't have to worry about adding anything to the edge list structure. So insertVertex is just going to be O(1) amortized time, the same as an edge list. The removeVertex, on the other hand, is going to be slightly more complicated, because imagine removing the vertex w. Removing the vertex w requires not just removing the vertex w itself, but also cleaning up the data structure by removing all of its edges. So in removing the vertex w, we want to walk through all of w's incident edges and remove all of the associated nodes. This runs in time proportional to the degree of the vertex we're trying to remove, and it allows us to maintain the data structure, removing every single pointer along the way. Our areAdjacent operation is now relatively quick, but not as quick as O(1). Imagine we need to find out whether u and w are adjacent. We need to either travel through u's list and see if any of the edges it points to has w as its other endpoint, or go through all of the edges in w's list and see if any of them has u as its other endpoint.
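The insertVertex / removeVertex trade-off can be sketched with a simplified, assumed representation (my own: edge ids in place of raw pointers, which keeps the sketch short). Note the hedge in the comment: with the real back pointers from the lecture's picture, unlinking an edge from the *other* endpoint's list is O(1), so removeVertex as a whole is O(deg(v)).

```cpp
#include <list>
#include <string>
#include <unordered_map>
#include <utility>

struct Graph {
    // label -> list of incident edge ids
    std::unordered_map<std::string, std::list<int>> vertices;
    // edge id -> its two endpoint labels
    std::unordered_map<int, std::pair<std::string, std::string>> edges;
    int nextId = 0;

    void insertVertex(const std::string& k) { vertices[k]; }  // O(1) amortized

    void insertEdge(const std::string& u, const std::string& v) {
        edges[nextId] = {u, v};
        vertices[u].push_back(nextId);
        vertices[v].push_back(nextId);
        ++nextId;
    }

    // Remove v and every edge incident to it, touching only v's own list.
    void removeVertex(const std::string& v) {
        for (int id : vertices[v]) {
            auto [a, b] = edges[id];
            const std::string& other = (a == v) ? b : a;
            // In this id-based sketch the line below scans the other list;
            // with the lecture's back pointers it would be an O(1) unlink.
            vertices[other].remove(id);
            edges.erase(id);
        }
        vertices.erase(v);
    }
};
```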
Doing so is going to take either the degree of v_1 or the degree of v_2 steps, since we have to go through every single incident edge in one of those lists. So the totality of this running time is going to be the minimum of those two values: the minimum tells us which is the smaller list, and we can walk just that smaller linked list. So it's not quite O(1) time, it's not as good as an adjacency matrix, but it is pretty good. Finally, to find the incidentEdges, we simply need to walk the linked list, so this is simply degree of v. So we've done a lot of analysis and talked through a lot of different algorithms. The big picture out of all of this is a table that compares everything together, so let's look at that. Here are all of the different implementations built into a single table. The amount of space required is going to be n + m for the edge list and the adjacency list. The adjacency matrix doesn't need n + m space; it actually needs n² space. So we can see that the adjacency matrix uses the most space, because that matrix can become huge: it's quadratic in the number of nodes in the actual implementation. The insertVertex time is going to be constant in an edge list and an adjacency list, as we discussed, which is great. Only an adjacency matrix requires linear time. For removeVertex, we can debate whether the operation should just remove the vertex, or remove the vertex and all of its edges. To get an apples-to-apples comparison, I'm going to examine all of these algorithms as removing the vertex and removing all of the associated edges. Removing the vertex and its associated edges requires us to go through the entire edge list in our edge list implementation.
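The two adjacency list operations just analyzed can be sketched like this. Again this is a hypothetical simplification (my own names, and each vertex stores a linked list of neighbor labels rather than full edge records): areAdjacent scans only the shorter of the two incidence lists, giving the O(min(deg(v_1), deg(v_2))) bound, and incidentEdges just walks one list in O(deg(v)).

```cpp
#include <algorithm>
#include <list>
#include <string>
#include <unordered_map>

struct Graph {
    // label -> linked list of neighboring vertex labels
    std::unordered_map<std::string, std::list<std::string>> adj;

    void insertEdge(const std::string& u, const std::string& v) {
        adj[u].push_back(v);
        adj[v].push_back(u);
    }

    // O(min(deg(u), deg(v))): walk whichever incidence list is shorter.
    bool areAdjacent(const std::string& u, const std::string& v) {
        const auto& lu = adj[u];
        const auto& lv = adj[v];
        const auto& shorter = (lu.size() <= lv.size()) ? lu : lv;
        const std::string& target = (lu.size() <= lv.size()) ? v : u;
        return std::find(shorter.begin(), shorter.end(), target) != shorter.end();
    }

    // O(deg(v)): just walk v's list.
    std::list<std::string> incidentEdges(const std::string& v) {
        return adj[v];
    }
};
```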
It requires us to go through our entire matrix in the adjacency matrix, but it only requires us to walk a linked list in our adjacency list implementation, taking just degree-of-v operations. This is the best algorithm here. Inserting an edge requires O(1) time in every single case; we can simply add it to the edge list, so they're all great implementations. Likewise, removing an edge is also a constant time operation. Finally, incidentEdges, something we've talked about with all of these implementations: to find the incident edges in an edge list, we have to go through the entire edge list. To find the incident edges in an adjacency matrix, we have to go through all of the nodes. To find the incident edges in an adjacency list, we can simply walk the list in degree-of-v time, so this is going to be the optimal algorithm here. Finally, if we want to find out if two nodes are adjacent to one another, we see that the adjacency matrix has this amazing constant run time, while the adjacency list has the minimum of the two degrees, so it's a worse run time. The big thing to take away is that there is no clear right answer. I circled and put a check mark by the best category in every single row. In doing so, we see the adjacency matrix stands out if we care about the areAdjacent operation: it's the only one that has the best areAdjacent running time. But on the other hand, if we care about inserting and removing vertices, as well as listing all of the incident edges, we can see the ideal running time is going to be here, with an adjacency list, or in some cases just as good with an edge list. So there is no single implementation that works best for all applications. Instead, as we dive into the different applications of graphs, we're going to look at which implementation of a graph is best to use, and how and when we should use our different implementations. We'll dive into this next week as we start talking about doing awesome things with graphs.
I'll see you then.