So typically, the kinds of problems that you have to worry about in data center networking, and really in networking in general, start with forwarding loops. What does that mean? You expect that every hop the packet takes is making an advance towards the destination. If that is not happening and the packet keeps cycling back and forth, that's a forwarding loop. That could be because the tables are incorrectly set. Generally speaking, in networking you have a source and a destination, and there are several hops that a particular packet may take from source to destination. At every hop there is a switch making a decision about where to send the packet next towards the destination, and it is using a table lookup in order to do that. If those tables are not properly maintained, that can result in forwarding loops. So that is one problem.

Link failures, as the name suggests: a particular switch may have multiple outgoing links, and the link that would take the packet towards its destination may not be available. That failure results in problems getting the packet where it has to go.

Inconsistencies in forwarding have to do with the fact that in a large network, all the tables are not getting updated at the same time; updates happen at different times. So there could be inconsistencies between the forwarding tables from one switch to the next switch, and that can result in bad things happening. Usually, forwarding inconsistencies are also the reason for the forwarding loops that exist in networks.

Another problem that we generally have to worry about is unreachable hosts, meaning that I want to send a packet to a particular destination, but the network gets fragmented because some links have failed. There is a partition in the network, so I am not able to get to the desired destination. So for instance, if I am here and I want to send to there, and a partition breaks up this connectivity, then I cannot get the packet delivered to the eventual destination. That is another thing one has to worry about in networking.

Perhaps something that is completely undesirable is unwarranted access to hosts. By that I mean there could be a portion of the network that is reserved for the traffic of a particular application, and I shouldn't be getting to any of those hosts. If the forwarding tables are inconsistently set up, I could end up sending packets to, or receiving packets from, hosts that I should not have access to. This is particularly important in data centers where we have co-location of several different services, and each service relies on its traffic not being affected, or even sniffed, by others. So it becomes extremely important to provide this guarantee of no unwarranted access to hosts.

Now we can ask the question: these are general networking issues, so are they prevalent in traditional networks as well as data center networks? We can compare the two. The forwarding loop problem is exactly the same in both traditional networks and data center networks. In traditional networks, it is caused by failures of the spanning tree protocol that is used for routing the packets from source to destination.
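To make the forwarding loop idea concrete, here is a minimal sketch of hop-by-hop forwarding with loop detection. The switch names, topology, and table layout are made up for illustration, and real switches match on much richer header fields than a destination name; this is only a toy model of the table lookup described above.

```python
def trace(forwarding_tables, src_switch, dst_host, max_hops=32):
    """Follow next-hop entries from src_switch toward dst_host.

    forwarding_tables maps switch -> {destination: next hop}.
    Returns the path taken, and flags a forwarding loop if we
    revisit a switch before reaching the destination.
    """
    path = [src_switch]
    visited = {src_switch}
    current = src_switch
    for _ in range(max_hops):
        next_hop = forwarding_tables.get(current, {}).get(dst_host)
        if next_hop is None:
            return path, "no route (unreachable host)"
        if next_hop == dst_host:
            return path + [dst_host], "delivered"
        if next_hop in visited:
            return path + [next_hop], "forwarding loop detected"
        visited.add(next_hop)
        path.append(next_hop)
        current = next_hop
    return path, "gave up (possible loop)"

# Two switches whose tables are inconsistently set up: each thinks
# the other is the way to reach host H, so the packet bounces.
tables = {
    "S1": {"H": "S2"},
    "S2": {"H": "S1"},
}
print(trace(tables, "S1", "H"))  # (['S1', 'S2', 'S1'], 'forwarding loop detected')
```

The same lookup loop also exposes the unreachable-host case: if a switch simply has no entry for the destination, the trace stops with "no route".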
Link failures, once again: the response to link failures may be different in a data center network versus a traditional network, but the problem is exactly the same, that you have failed links and you cannot get to the next hop that you want to get to. This is especially true in data center networks, where the connectivity shrinks as you go towards the root of the tree; in that case failures can adversely affect how you get a flow to its desired destination.

The third problem that I mentioned is unreachable hosts. In traditional networks, this problem arises because access control lists or routing entries are incorrectly set; that is usually the problem. But in a data center network today, we are using software-defined networking. Software-defined networking assumes that the forwarding entries have been set up properly in the switches, and if those entries are not properly set up, that can result in unreachable hosts. It is not that you have less connectivity: usually, with link failures, there are multiple redundant paths, so in principle, even if there are link failures, you should be able to reach a particular destination using the paths that remain available. But if the forwarding entries are incorrectly set up, that can result in unreachable hosts.

The last problem I mentioned is unwarranted access to hosts. In traditional networks, once again, this is due to errors in the way the access control lists have been set up. In the software-defined networking that we use in a data center, it is caused by unintended overlap of rules. You did not intend certain paths to be traversed by certain packets; if I am providing complete network isolation of one service's traffic from another service's traffic, then I have to be very careful about how the SDN controller sets up the switches. If the rules overlap, that can result in unwarranted access to hosts. So that is the comparison between data center networks and traditional networks.

The challenges in the context of data center networks are, first, that there is a lot of potential for control loops. What I mean by that is, from switch to switch there could be things that are set up incorrectly, causing loops. The controller may be the reason such control loops get propagated in the first place, or the application itself may be buggy; that can be another source of control loops in the network as a whole.

The second challenge is that the end-to-end behavior of networks is very unpredictable, because the networks are getting larger, 100,000 nodes in a data center and continuously increasing, which means there are lots of potential paths that can overlap, paths that can reach destinations they should not reach, and so on. Another concern is that network functionality is becoming more and more complex; later on in this course we will talk about network functions virtualization as well. Since network functions are becoming complex, there is more chance of not knowing what the end-to-end behavior of the data center network should be.

There is also an issue with respect to partitioning the functionality. As we have seen in how data center networks are built, there is a controller that does the work of setting up how the switches should operate.
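Here is a toy sketch of that controller/switch split, just to make it concrete. The class names and the install_path method are invented for illustration and are not any real SDN controller API; a real OpenFlow-style controller exposes a far richer interface. The point is only that the switch does nothing but table lookups, while all routing decisions live in the controller.

```python
from dataclasses import dataclass, field

@dataclass
class Switch:
    name: str
    table: dict = field(default_factory=dict)   # destination -> output port

    def forward(self, dst):
        # Data plane: a pure table lookup, no routing logic here.
        return self.table.get(dst, "drop")

class Controller:
    # Control plane: decides the path and programs each switch on it.
    def install_path(self, switches, dst, hops):
        # hops is a list of (switch_name, output_port) pairs.
        for name, port in hops:
            switches[name].table[dst] = port

switches = {"S1": Switch("S1"), "S2": Switch("S2")}
ctrl = Controller()
ctrl.install_path(switches, dst="H", hops=[("S1", "port2"), ("S2", "port1")])
print(switches["S1"].forward("H"))   # port2
print(switches["S2"].forward("X"))   # drop -- no entry installed for X
```

Every problem discussed above, from loops to unwarranted access, ultimately shows up as entries that the controller installed (or failed to install) in these tables.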
The idea is that you separate the control plane from the data plane: the switches can just worry about forwarding the packets based on what the controller has done in terms of setting up their tables. That is a partitioning of responsibility between the controller and the switches, but there is an unclear boundary of functionality, of how it should be divided between the network devices and the external controllers. That is another reason why it becomes very difficult to understand the end-to-end behavior of networks.

The effect of these bugs can be fairly catastrophic at times. For one thing, you can have unauthorized entry of packets into a secured zone, which is bad news, so we don't want to have that. Network bugs can also leave the services and the infrastructure as a whole vulnerable to external attacks by malicious users, so that is another thing to worry about. Denial of service is a term I'm sure you have heard before, and denial of critical services can also be a consequence of bugs in the network. Most importantly, from the end user's point of view, bugs are going to affect network performance, and they may even result in violations of the service level agreements that a particular application has with the data center vendor. That could lead to revenue loss and perhaps cancellation of services hosted on the data center. So these are all things that data center providers have to worry about in order to make sure that things run smoothly.
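To make the secured-zone concern concrete, here is a minimal sketch of checking one isolation invariant offline over the installed forwarding entries: no host outside the secured zone should be able to reach a host inside it. All the host names and zones here are hypothetical, and real tools would have to reason about actual packet headers and rule priorities rather than this simple host-level reachability graph.

```python
from collections import deque

def reachable(next_hops, start):
    """BFS over the forwarding graph: next_hops maps node -> set of nodes."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in next_hops.get(node, set()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def violates_isolation(next_hops, outside_hosts, secured_zone):
    """Return (outside host, secured host) pairs where isolation is broken."""
    violations = []
    for h in outside_hosts:
        leaked = reachable(next_hops, h) & secured_zone
        violations.extend((h, s) for s in leaked)
    return violations

# A misconfigured entry on S2 leaks traffic from the public side into
# the secured-zone host "DB".
hops = {
    "WEB": {"S1"},
    "S1": {"S2"},
    "S2": {"DB"},          # this entry should not exist
}
print(violates_isolation(hops, outside_hosts={"WEB"}, secured_zone={"DB"}))
# [('WEB', 'DB')]
```

Checks of this flavor, run before rules are pushed to the switches, are one way providers try to catch such bugs before they turn into the outages and SLA violations described above.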