Arrays and linked lists are both ordered collections of data, and there are several key operations we want to compare and contrast between them. We've already talked about accessing a given index in the collection. We saw in a previous video that in an array we can access any index in O(1) time, because a formula gives us direct access, while in a list we must traverse every single node in order to get to a particular index. So an array takes O(1) time to access a given index, and a list takes O(n) time.

When we insert at the front of the array, we know that we need to resize the array every so often. By doubling the array's capacity each time we resize, we get an amortized insertion time of O(1). In a linked list, we saw that inserting at the beginning is simply O(1) time. It was, in fact, just three lines of code: create the new node, set the new node's next pointer equal to the old head, and set the head equal to the node we just inserted into the list.

Now let's think about finding data. Given a piece of data, how do we go about finding it inside a collection? If I want to find the orange cube in an array, I have no choice but to start at the beginning: look at the first element, then the next element, then the next element, and, aha, I found the cube. In the worst case, the number of operations equals the amount of data in the array, because we may need to check every single piece of data; so finding the orange cube takes O(n) operations. Suppose instead we want to find the purple cube in our list. We start at the head of the list: nope, not the purple cube. Go to the next one, not the purple cube. Go to the next one, not the purple cube. Go to the next one, not the purple cube.
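The "three lines of code" for inserting at the front of a linked list might look like the following sketch. The node type and names here are illustrative, not the course's exact `List.hpp` definitions:

```cpp
#include <cassert>

// A minimal singly linked list node; the names are illustrative,
// not the exact definitions from the course's List.hpp.
struct ListNode {
    int data;
    ListNode* next;
};

ListNode* head = nullptr;

// Insert at the front of the list: O(1), with the three essential lines.
void insertAtFront(int value) {
    ListNode* node = new ListNode;   // 1. create the new node
    node->data = value;
    node->next = head;               // 2. its next pointer is the old head
    head = node;                     // 3. the new node becomes the head
}
```

Notice that no step depends on the length of the list, which is exactly why this insertion is O(1).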
Go to the next one, not the purple cube, and so forth. Eventually we're going to find the purple cube, or possibly not find it at all, but either way we may have to travel through every single element in the list. So finding an element in a list is also O(n).

To formalize this: given data, find the location of that data in the collection. In an array, it takes O(n) time to find a particular piece of data; in a linked list, it also takes O(n) time. So it takes exactly the same amount of time in both of these structures to find a piece of data. But could we do better?

Looking at the code to find an element inside an array, we saw this code in our arrays lecture; in that example, lines 29 through 34 contain the code to find an element. In the List.hpp class, we saw a find function that finds data inside our list. So you have the code for both of these if you go back to the linked memory or arrays lectures, and you can actually play around with this find algorithm. But I'm going to argue that we can improve this find algorithm if our data is sorted.

The objective is the same: given data, find the location of that data within the collection. The only difference is that now we know our collection is sorted. Say we want to find 17. We could go 2, 3, 5, 7, 11, 13, and then find 17, using our O(n) strategy, called a linear search. But I think we can do better. Finding 17 is like using a phone book: if I want to find something in a phone book, I don't start at the beginning, I start in the middle. So here, to find 17, I jump to the middle of the array. By jumping to the middle, I can ask: is 13 greater than or less than 17? Well, 13 is less than 17, so 17 must appear on the right-hand side, and I can eliminate half of this data from my search completely.
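The O(n) linear search strategy described above can be sketched as follows. This is a generic sketch over a plain array, not the exact code from the arrays lecture or List.hpp:

```cpp
#include <cassert>
#include <cstddef>

// Linear search: examine each element in turn until we find the target.
// Returns the index of the first match, or -1 if the data is absent.
// In the worst case we check every element, so this is O(n) -- the same
// bound whether the underlying collection is an array or a linked list.
int linearSearch(const int* arr, std::size_t n, int target) {
    for (std::size_t i = 0; i < n; ++i) {
        if (arr[i] == target) {
            return static_cast<int>(i);   // found it after i + 1 checks
        }
    }
    return -1;   // checked every element without finding the target
}
```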
I can then repeat this process with the rest of the data and ask: is 19, the middle of what's left, greater than or less than my data? It is greater than 17, so my data must be on the left-hand side of 19. Now, in just three operations, I've found the number 17 in the array. I did this by looking at the middle of the remaining data each time and eliminating half of it. The array has to be sorted, but this strategy is called a binary search. By using a binary search, we eliminated the need to look at every single piece of data; we cut the data in half each time. Cutting something in half repeatedly is a log-base-two operation, which we denote lg n; when we write lg, that simply means log base two of n. So the total running time is O(lg n), because we're constantly cutting our data in half, and half, and half, and half. That's awesome: we can do better than O(n) time if we have a sorted array.

Now let's look at a list. Here we have a list that is sorted, just like the array, and we want to find 17. So, how do I jump to the middle of the list? I can't. There's no magic pointer that points to the center of the list. Even if we had a pointer to the center, as soon as we added to the beginning of the list that pointer would be out of date. So there is no way to jump to the center of a list. Instead, the best we can do is still traverse the list: look at the first element, the second element, the third element, the fourth element, the fifth element, and keep going until we find 17. Only by doing a linear search through the list can we find an element in a linked list. So a linked list, even if the data is sorted, is still O(n) time.

So our entire analysis of "given data, find the location of that data" is: an unsorted array takes O(n) time, a sorted array takes O(lg n) time, and a list takes O(n) time either way.
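The binary search strategy can be sketched like this. Again this is an illustrative implementation, not code taken from the course materials:

```cpp
#include <cassert>
#include <cstddef>

// Binary search over a sorted array: compare against the middle element
// and discard half of the remaining range each step, so the running time
// is O(lg n). Returns the index of target, or -1 if it is absent.
int binarySearch(const int* arr, std::size_t n, int target) {
    std::size_t lo = 0, hi = n;              // half-open search range [lo, hi)
    while (lo < hi) {
        std::size_t mid = lo + (hi - lo) / 2;
        if (arr[mid] == target) {
            return static_cast<int>(mid);
        }
        if (arr[mid] < target) {
            lo = mid + 1;    // target must be on the right-hand side
        } else {
            hi = mid;        // target must be on the left-hand side
        }
    }
    return -1;   // range is empty: target is not in the array
}
```

Searching the sorted primes from the example, `binarySearch` reaches 17 in a handful of halvings rather than walking the whole array. This only works because array indexing is O(1); there is no equivalent way to "jump to the middle" of a linked list.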
You should not only know these runtimes, but also the intuition behind them, so that when you see a similar data structure you'll be able to derive its runtimes and work through the analysis of why they are what they are.

I'm going to look at one last operation that often proves extremely useful: given an element, insert a new element immediately afterwards. This is essentially an insert-after operation. If we have a linked list node, we want to know how much time it takes to insert after that node; given an array index, like index 4, we want to know how much time it takes to insert right after that index. So whether we have a linked list node or an array index, how much time does it take to insert after that element?

Looking at the array first: if we want to insert a purple cube after the orange cube, we need to stick the purple cube right in the middle. But with an array, all of the data is stored sequentially in memory, which means there's no way to just slam the purple cube in there. Instead, we have to copy the data that's already after our orange element and move it all over; only once we've moved all of that data do we have room to insert the new element. Since we have to keep moving data (we can move it either left or right), inserting an element at the very center of the array requires n/2 data moves. Moving n/2 elements, in Big O notation, is O(n) total moves. So the amount of time it takes to insert after a given element in an array is O(n).

On the other side, if we look at a list: given a linked list node, how much time does it take to insert after that node?
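The shifting that makes array insert-after O(n) can be sketched like this; the function name and calling convention are made up for illustration (a real resizable array would also handle growing its capacity):

```cpp
#include <cassert>
#include <cstddef>

// Insert `value` immediately after index `idx` in an array currently
// holding `n` elements (the buffer is assumed to have room for one more).
// Every element after the insertion point must shift right one slot to
// keep the data sequential in memory, so this is O(n) in the worst case.
void arrayInsertAfter(int* arr, std::size_t n, std::size_t idx, int value) {
    for (std::size_t i = n; i > idx + 1; --i) {
        arr[i] = arr[i - 1];      // shift elements right to open a gap
    }
    arr[idx + 1] = value;         // only now is there room for the new data
}
```

Inserting near the front shifts almost everything; inserting at the very end shifts nothing, which is the one easy case for arrays.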
Well, we can just create a new list node, put the purple cube in it, point the given node's next pointer at the new node, and point the new node's next pointer at the grey cube, and we're done. Insert-after on a known linked list node is O(1) time. It doesn't matter if the list is a million nodes long; we never have to copy over any data. We just stick one node in there and repair a few pointers. It's three lines of code with no loops whatsoever: O(1) time.

So if our objective is to insert an element immediately after a given element, whether a linked list node or an array index, it's going to take O(n) time in an array and O(1) time in a list. Likewise, we can now see that the delete-after operation behaves exactly the same as insert-after. If we remove the next element in an array, we have to copy all of the following data and move it back to make sure the array is still sequential in memory; with a list, we can just repair the pointers. So whether we do insert-after or delete-after at a given position, an array takes O(n) time, and a list takes just O(1) time.

So you've seen that arrays and lists are both ordered collections, with trade-offs between their runtimes and the flexibility of their implementations. We're going to build data structures on top of arrays and lists, and we're often going to ask what we want at the base of our data structure: do we want an array, or do we want a list? These two fundamental linear data structures are going to underpin everything else we build in this class. I'm looking forward to building these with you, and I'll see you there.
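The O(1) pointer repairs for both insert-after and delete-after on a list can be sketched as follows. The node type and function names are illustrative, not the course's exact `List.hpp` interface:

```cpp
#include <cassert>

// Minimal singly linked list node; illustrative, not the course's List.hpp.
struct Node {
    int data;
    Node* next;
};

// Insert a new node holding `value` immediately after `node`: O(1).
// No data is copied no matter how long the list is.
void insertAfter(Node* node, int value) {
    Node* fresh = new Node;
    fresh->data = value;
    fresh->next = node->next;   // new node points at the old successor
    node->next = fresh;         // given node now points at the new node
}

// Delete the node immediately after `node`, if there is one: O(1).
void deleteAfter(Node* node) {
    Node* victim = node->next;
    if (victim != nullptr) {
        node->next = victim->next;  // splice the victim out of the chain
        delete victim;
    }
}
```

Both operations touch only a constant number of pointers, which is the whole advantage the list has over the array here.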