Container technology is a relatively new technology. As you saw when spinning up VMs in the projects that you've been doing, it can be pretty expensive to spin up a VM. And as I mentioned earlier, even in the context of virtualization technology, there can be multiple instances of operating systems running that are all exactly the same; it is just that there are multiple instances. So there is an opportunity to get rid of that redundancy by having a single operating system while still providing the main guarantees that you want. What are those guarantees? For your application components, you want memory independence, isolation, the ability to have multiple address spaces within a particular protection domain, and so on. That's the idea behind container technology. You can think of it as a lighter form of resource virtualization, as opposed to the full-blown virtual machines of a hypervisor. On top of the same kernel, say a Linux kernel, I can have multiple containers, and each of them has complete independence and isolation, just like a virtual machine. But certain things, like the device drivers, are common to all of them, because those do not change from one container instance to another. So you can have multiple containers on the same kernel, and each container is given the illusion that it is the only one using the hardware resources. That important property of the virtualization world is maintained in the container world as well. And kernel services, for example the I/O subsystems, don't have to be replicated in each container; they can be shared in the same operating system.
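To make that isolation idea concrete, here is a minimal sketch (not from the lecture) using the `unshare` utility from util-linux; it assumes a modern Linux system with unprivileged user namespaces enabled:

```shell
# Create new user, PID, and mount namespaces and run `ps` inside them.
# With --mount-proc, /proc reflects only the new PID namespace, so `ps`
# sees just the shell and itself -- even though the kernel (and its
# device drivers) is shared with every other process on the machine.
unshare --user --map-root-user --pid --fork --mount-proc sh -c 'ps -e'
```

Namespaces like these (together with cgroups for resource limits) are the kernel mechanisms that container runtimes build on to give each container its private view of the shared kernel.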
That's the main point about containers. One advantage is that, instead of a single process, you can say that a collection of processes lives in a container, and that might be the way you want to think about a particular application. You get complete fault isolation across the different containers: anything that happens within a particular container can kill that container, but the others are not affected. That is an important integrity property you get with virtual machines, and you want to maintain it in the container world. Container technology is also much more performant than VMs. First of all, spinning up a VM is much more expensive than spinning up an individual container, and you still get all of the good things you associate with virtual machines in terms of resource isolation, performance guarantees for each container, and so on. Negotiating for resources and getting them can be quicker as well, because it is a container talking to an operating system, as opposed to a VM negotiating with a hypervisor. So there could be some advantage there. But the main advantage comes from the fact that containers can be spun up and taken down very quickly, and that is a big win, because one of the main concerns in virtualization technology is elasticity, meaning that I can elastically grow and shrink the resources that I need. Good in principle. In practice, what that means is that every time I want more resources, I have to spin up new VMs, and spinning up a new VM can take a few minutes. If it is going to take that much time, I might say, "Well, I don't want that latency; I'm going to have pre-allocated VMs, even if I'm paying more for it." On the other hand, containers can be spun up in less than a second, in a few hundred milliseconds.
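As a rough illustration of that startup gap (a sketch, not a benchmark from the lecture), you can time how long the kernel takes to create an isolated execution context; on most Linux systems this completes in milliseconds, versus the minutes it can take to boot a full VM:

```shell
# Time the creation and teardown of a fresh set of namespaces,
# the kernel-level building block of a container.
start=$(date +%s%N)
unshare --user --pid --fork sh -c 'true'
end=$(date +%s%N)
echo "namespace spin-up took $(( (end - start) / 1000000 )) ms"
```

A real container runtime does more work than this (setting up the filesystem image, network, cgroups), but even then startup stays in the sub-second range the lecture describes.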
So that's two orders of magnitude faster than spinning up a VM, and that performance advantage is one of the biggest advantages. Beyond startup, negotiating additional resources still means crossing the boundary between the hypervisor and the VM, or between Linux and the container, so the difference there is not that significant; the biggest advantage is that you can dynamically grow and shrink the number of containers you have. That's the main thing, and you get the same security guarantees, configuration, and namespace independence. You can create multiple processes within a container just as you can within a VM. So all of those are exactly the same, and you get the performance advantage on top. That is why today, because boot-up time for a container is orders of magnitude faster than for a VM, even data centers are starting to use container technology internally; we heard that from Manii when he talked about how they are integrating containers into the Microsoft Service Fabric framework. When you have containers, there are fewer operating systems to manage, and patching of operating systems, which happens perennially (we continuously get operating system patches for security reasons), can be managed much more easily, both for applications and for the operating system. It can also lead to improved resource utilization, because a single Linux operating system is managing all the containers and knows how the resources are being used. So it can actually improve the utilization of resources: not necessarily the time it takes for an individual container to get resources, but the overall utilization.
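The elasticity point can be sketched with any container runtime. Assuming Docker and a small image such as `alpine` (both are my assumptions; the lecture names no specific tools, and the `worker` names are made up), scaling out and back in is just starting and removing containers:

```shell
# Scale out: start three identical containers, each in well under a second.
for i in 1 2 3; do
  docker run -d --rm --name "worker$i" alpine sleep 300
done
docker ps --filter name=worker --format '{{.Names}}'   # list the running workers

# Scale back in: tear two of them down immediately.
docker rm -f worker2 worker3
```

Because both directions take milliseconds rather than minutes, there is no need to keep pre-allocated capacity around the way you might with VMs.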