[MUSIC] Okay, now how about we shift gears and close out this module with a few more videos? In the next few I'm going to address processor virtualization, then in the last one I'll talk about I/O virtualization and close out with a summary of all the concepts we discussed. Now, in Power Systems, a core is the singular unit where threads can be dispatched from an operating system. The cores are referred to as processors, and the processors are divided into what we call processing units. That's the smallest unit upon which a thread can be dispatched when the processors are pooled into a shared processing pool. All right, so note what I said there. Right, the core is referred to as a processor, and that core is divided into processing units when it's a shared pool implementation. And the processing unit is the smallest unit. Now, systems can be configured with as few as 2 processors and up to 192. Partitions are allocated either dedicated whole processors or, like I said, processing units when using shared processor pools. 1.0 processing units is the equivalent of a whole physical processor. A partition can be configured with as little as 0.05, or on some hardware models 0.1, processing units, very small, right? And after that minimum is satisfied, additional processing units can be allocated in increments as small as 0.01 processing units. So to better understand these concepts, let's run through the Power Systems processor in some simple graphics, all right? So the simplest and easiest of the processor concepts to understand is the dedicated case. In this case, the LPAR is assigned processors that are used solely by that LPAR. If the LPAR needs more processing, it must wait for the threads to finish on those processors before it can dispatch more threads. And in an equally unproductive situation, if the LPAR doesn't need all the processing capacity of the processors in a given dispatch cycle, that capacity goes unused.
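To make the allocation rules concrete, here's a small sketch in plain Python (the function name is mine, not an IBM API) that checks a shared-pool allocation against the rules just stated: at least 0.05 processing units (0.1 on some hardware models), and beyond that minimum the allocation grows in 0.01 increments, with 1.0 equalling one whole physical processor.

```python
def valid_processing_units(units: float, minimum: float = 0.05) -> bool:
    """Check a shared-pool processing-unit allocation against the
    rules described above (hypothetical helper for illustration)."""
    if units < minimum:
        return False  # below the per-partition minimum
    # Allocations grow in 0.01 increments, so the value must be a
    # whole number of hundredths; compare in hundredths to avoid
    # floating-point surprises.
    hundredths = units * 100
    return abs(hundredths - round(hundredths)) < 1e-6

print(valid_processing_units(0.05))  # True  - the minimum itself
print(valid_processing_units(0.75))  # True  - a 0.01 multiple above it
print(valid_processing_units(0.04))  # False - below the minimum
print(valid_processing_units(1.0))   # True  - one whole physical processor
```

On a model whose minimum is 0.1, you'd call it as `valid_processing_units(0.05, minimum=0.1)` and get `False`.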
For that reason, many customers choose to use shared processors instead of dedicated. The one case where dedicated processors make more sense is when the absolute maximum performance of the processors is required, because there is a small amount of overhead for the hypervisor to essentially do context switching to handle the shared processor pool. So performance per processor is slightly lower than in the dedicated case. But again, as we saw earlier, with the ability to use all the capacity of the processors, your throughput's going to be better. All right, so next up, while dedicated-processing LPARs have their place, the overwhelming number of LPARs use shared processors. The processors that are not dedicated are considered part of the shared processor pool. So there's no special definition of a shared pool, it's just those processors which are not in dedicated use. Any LPAR configured to use shared processors can access any of the processors in the pool. There are two configuration parameters that control this access. One, shown in the graphic, is the virtual processor. This is a VCPU in common terminology. Now, to the operating system in the LPAR, this is an entity where it can dispatch a thread for processing; it looks like a physical core. During the dispatch cycle, the LPAR can dispatch as many threads as it has VCPUs. The amount of runtime on the processor is controlled by the other configuration parameter, which is the processing units. Like I said, the processing unit value is essentially a percentage of the time that the thread is given on a VCPU to run on a physical processor. So for example, if the processing unit value is 2.0 and the VCPU count is 2, the threads can run for 100% of the time on the two physical processors that they get assigned in a dispatch cycle. Now, increasing the VCPU count to four results in the threads being suspended after 50% of the dispatch cycle, making time for another LPAR's threads to use the two physical processors.
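The arithmetic in that example generalizes: the processing-unit entitlement is spread evenly across the virtual processors, so each VCPU gets entitlement divided by VCPU count of a dispatch cycle on a physical core. A back-of-the-envelope sketch in plain Python (not a Power Systems tool) of that relationship:

```python
def per_vcpu_fraction(processing_units: float, vcpus: int) -> float:
    """Fraction of a dispatch cycle each VCPU is entitled to run on a
    physical processor (illustrative helper, names are mine)."""
    # Entitlement is spread across the VCPUs; a single VCPU can never
    # use more than one whole physical core.
    return min(processing_units / vcpus, 1.0)

# The two cases from the transcript:
print(per_vcpu_fraction(2.0, 2))  # 1.0 -> 100% of the cycle on two cores
print(per_vcpu_fraction(2.0, 4))  # 0.5 -> suspended after 50% of the cycle
```

Note this is the entitled fraction; whether a VCPU can run beyond it depends on capped versus uncapped settings, which falls under the additional reading mentioned next.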
Now, there's a lot more to understand about VCPUs and processing units, but I'll leave that to you as additional reading if you're interested. [MUSIC]