The concept of an enclave is essentially a group of work from different address spaces brought together so it can be handled at the same priority, rather than based on traditional rules. This also means that work coming from the same address space can have some pieces handled at a high priority and some at a low priority, rather than treating all the work from that address space the same. Enclaves are grouped by common characteristics and service requests. Since they are preemptible, the z/OS dispatcher and WLM can interrupt them for more important work. This is especially interesting when it comes to transaction processing, which often involves several programs in several address spaces and involves both TCB and SRB work. An enclave is essentially a business transaction without address space boundaries.

There are two types of enclaves: independent and dependent. An independent enclave is a complete, standalone transaction, and it is separately classified and managed in a service class or performance group. A dependent enclave is a logical extension of an existing address space transaction, and it inherits its service class or performance group number from its owning address space. It's also important to point out that enclave transactions can span multiple dispatchable units in one or more address spaces, and enclaves are reported on and managed as a unit.

All of this work around task management comes down to making sure the processing resources we have are being used as wisely as possible. We want our processors to be performing work that helps our overall business goals, not busy doing work-adjacent tasks.

IBM Z processors are built using a modular design that supports a number of books or drawers per server. Now, when you see CPU and memory arranged horizontally, that's commonly known as a drawer, like a chest of drawers. Vertically, they're called books, like a book on a shelf, and we'll use these terms somewhat interchangeably, but that's the basic idea: a physical collection of compute resources. Each book or drawer has the processors, the memory, and the I/O fan-out. The actual specifics here will vary greatly from one machine to the next, particularly when it comes to the cache layout, but the goal is always to optimize the efficiency of jobs as they run while still allowing for scalability when other work needs to run.

The PR/SM hypervisor (pronounced "prism") makes it so a workload can be dispatched onto any available physical processor on a multi-book or multi-drawer system. The ability for z/OS and PR/SM to work together so they can align work on a set of processors within a physical unit is known as HiperDispatch. Ideally, you have work being done close to where the data is, and PR/SM is partially responsible for making sure that happens. Simply running everything on any available processor would lead to work running slower and increased latency, especially on newer high-frequency processors.

Here's another view of processor management with PR/SM. I like this view with multiple layers. There's the physical layer with the hardware processor units, and then there's PR/SM, which is what builds up a catalog of the types of processors: the general-purpose processors, IFLs, ICFs, SAPs. And then there are the LPARs, the logical partitions. Those don't attach directly to the processors; they have no concept of the physical side of things. All they know is how those resources are presented to them by PR/SM. When an LPAR says, "Hey, I've got some work to do. Give me a processor," PR/SM looks at what it has, what else is running, and the type of work it is, and it makes a decision based on all of that, not just on what's lying around available.
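To make that decision concrete, here's a minimal, hypothetical Python sketch. None of these names are real PR/SM or z/OS interfaces; it just contrasts grabbing any free processor with a HiperDispatch-style preference for a processor in the same drawer as the work's data:

```python
# Hypothetical sketch (not PR/SM or the z/OS dispatcher): contrasting
# "any free processor" with topology-aware dispatch. Each processor lives
# in a drawer; running work near its data avoids cross-drawer cache misses.

from dataclasses import dataclass

@dataclass
class Processor:
    cpu_id: int
    drawer: int
    busy: bool = False

def naive_dispatch(procs):
    # Pre-HiperDispatch view: take the first free processor, topology unseen.
    return next(p for p in procs if not p.busy)

def topology_aware_dispatch(procs, work_drawer):
    # HiperDispatch-style affinity: prefer a free processor in the same
    # drawer as the work's data; fall back to anything free.
    same_drawer = [p for p in procs if not p.busy and p.drawer == work_drawer]
    return same_drawer[0] if same_drawer else naive_dispatch(procs)

procs = [
    Processor(0, drawer=0, busy=True),
    Processor(1, drawer=0),
    Processor(2, drawer=1),
    Processor(3, drawer=1),
]

# Work whose data is hot in drawer 1's cache:
print(naive_dispatch(procs).cpu_id)               # 1 -- free, but wrong drawer
print(topology_aware_dispatch(procs, 1).cpu_id)   # 2 -- same drawer as the data
```

The toy version of the payoff: the work lands where its cache contents already are, instead of paying a cross-drawer penalty.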
To put it another way, there's horizontal CPU management and vertical CPU management. In the horizontal view, z/OS knows everything about the workload and PR/SM knows everything about the processors. In a pre-HiperDispatch world, demand from z/OS was met with supply from PR/SM. With HiperDispatch, we're able to connect those two halves, and now z/OS is able to take the processor topology into account when dispatching work. It will actually work with the enhanced PR/SM millicode to build a strong affinity between logical processors and physical processors in the configuration. How great is that?

Now, realistically, the CPU setup of the system or systems you're working on is something you probably have little control over. We're not going to go much deeper into this, but I wanted to mention it, as it is an important facet of task management for z/OS.

Something else you'll want to read about in much greater detail, if z/OS performance is your specialty, is WLM and SMT. Quickly put, WLM, the Workload Manager, allows z/OS to, well, manage its workloads. It lets you set performance goals, and it assigns work based on what needs more resources to meet those goals. And SMT, simultaneous multithreading, permits multiple threads per core to run simultaneously. Even without an increase in CPU frequency, a core is able to get more done, even while waiting out cache misses. There's a quick sketch of that idea below.
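To see why that helps, here's a toy Python simulation, purely illustrative and not a model of real IBM Z hardware, of one core running one thread versus two SMT threads. The cycle count and miss rate are made-up numbers:

```python
# Toy simulation (not real hardware behavior): why SMT gets more done per
# core without raising frequency. Two threads share one core; whenever one
# thread stalls on a cache miss, the other can use the otherwise-idle cycle.

import random

random.seed(42)
CYCLES = 1_000
MISS_RATE = 0.3  # assumed chance a thread is stalled in a given cycle

def run(threads):
    done = 0
    for _ in range(CYCLES):
        # The core issues work for the first thread that isn't stalled.
        for _t in range(threads):
            if random.random() > MISS_RATE:  # this thread has work ready
                done += 1
                break  # one instruction slot per cycle in this toy model
    return done

print(f"1 thread:  {run(1)} instructions in {CYCLES} cycles")
print(f"2 threads: {run(2)} instructions in {CYCLES} cycles")
# With two threads, idle cycles from one thread's misses are often
# covered by the other, so utilization per core goes up.
```

In our next section, we'll jump back into TCBs for a bit and continue our job execution journey.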