Once you have selected the instance type and the OS image for your deployment, you'll need to think about storage. You choose primary storage based on the performance requirements of the applications that you run. When you choose backup storage, your main consideration is how quickly you can recover from a backup when needed.

Let's start with primary storage, or persistent disks (PD). These tables show some key performance characteristics of PD SSD and PD Standard storage on different machine families. You can see the average and the maximum throughput and performance for each of the disk types listed here. Let me quickly recall some of the key features of PD from the previous module. You don't need to stripe multiple disks to improve performance; your I/O performance will scale linearly as you increase the size of your PD. PD SSD disks scale linearly until they reach either the limits of the volume or the limits of each Compute Engine instance. Of course, if you need to exceed the 64-terabyte capacity limit on one PD, you can attach additional disks. You can see here that for the best performance, PD SSD leads; for example, a memory-optimized machine type with PD SSD can give you up to 800 megabytes per second of throughput, which is best suited for SAP HANA. Now, the figures shown here are valid at the time of recording of this video; all the performance metrics associated with PD are listed in our online documentation.

Here is a quick overview of guidelines for the selection and sizing of primary storage on Google Cloud for SAP deployments. Google recommends that HANA databases and other databases are sized following these storage guidelines. For scale-up HANA databases, it is recommended that your storage is twice the size of the memory: the data volume and the log volume are each the same size as the memory. The recommended disk type for these volumes is solid-state disk, or PD SSD. A typical scale-up HANA database has SSD storage approximately four times the size of the memory. The sizing recommendations are similar for scale-out: SSD storage requirements are between two and three times the size of the memory, and the NFS HANA shared volume is the same size as the memory. For other types of databases, you can use PD Standard or PD SSD persistent disks; you can refer to the documentation of those databases for guidance on sizing requirements. Absolute sizes are based on your needs. You can assume that cloud-based application servers have storage sizing similar to on-premises installations. In all cases, it is recommended that you avoid local disks, as you will lose all their data when the VM goes down; use zonal persistent disks where appropriate.

Now that you have selected the correct disk type and determined the storage size, how do you choose the layout of the disks? Your disk layout must enable the best performance, and remember, persistent disks in Google Cloud have some unique features. First, disk performance scales with the number of vCPUs and the size of the disk. Next, for multiple disks attached to the same VM, you can recall from our previous module that PD SSD leads in overall performance, and that disk performance is distributed evenly across the attached disks, independent of each disk's individual size. This is very important for SAP HANA. Considering these features, what should be your strategy for disk layout? We will walk through an example in a moment; first, let's make the sizing guidance above concrete.
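Here is a minimal sketch in Python of the sizing guidance we just covered. The memory multipliers are the rough ratios quoted in this module, not official constants, and the helper function is hypothetical, not part of any Google tool; always confirm sizes against the current documentation.

```python
# Hypothetical sizing helper based on the guidelines quoted above.
# The memory multipliers are the rough ratios from this module, not
# official constants; confirm against the current documentation.

def hana_storage_sizing(memory_gib: int, scale_out: bool = False) -> dict:
    """Return approximate PD SSD volume sizes (in GiB) for a HANA system."""
    if scale_out:
        return {
            "ssd_data_and_log": 3 * memory_gib,  # 2-3x memory; upper bound used
            "nfs_hana_shared": memory_gib,       # shared volume sized to memory
        }
    return {
        "data": memory_gib,           # data volume ~ memory size
        "log": memory_gib,            # log volume ~ memory size
        "total_ssd": 2 * memory_gib,  # storage twice the size of memory
    }

# Example: a 2 TB (2,048 GiB) scale-up HANA database
print(hana_storage_sizing(2048))
# -> {'data': 2048, 'log': 2048, 'total_ssd': 4096}
```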
Now, back to the disk layout example. For a two-terabyte HANA database, if you allocate a separate PD SSD for each of the shared, data, and log volumes, the overall IOPS performance is distributed equally between these disks, irrespective of each disk's size. You can see this in the leftmost diagram: for this disk layout, performance is capped at 25K IOPS per disk. If you choose a separate disk for the shared volume but combine the data and log volumes on one PD SSD, the IOPS performance increases to about 37.5K. But as you can see in the rightmost layout diagram, the best performance is achieved when you allocate a single disk for the data, log, and shared volumes in a scale-up configuration; a numeric sketch of this even split appears at the end of this section. It is really important to note that this recommendation differs from the disk layout recommendations for SAP HANA on other cloud providers.

Now let's look at the same example in a scale-out scenario. Of course, in a HANA scale-out scenario, the shared volume must be on a shared file system, as it has to be accessible to multiple nodes of the scale-out installation. The shared disk will be separate and will be on NFS. Therefore, the layout in the middle is not applicable, but you will still get the best performance when you combine the data and log volumes. Remember, again, this differs from the disk layout design principles for SAP HANA on other cloud providers.

Here are some additional considerations. It is critical to use SSD over standard spinning disks in nearly all cases where you use SAP HANA. This is for performance reasons, and it is especially important for system replication. There are some situations where you can use PD Standard to reduce costs: for backup volumes where performance is not the primary driver, such as HANA backups, you can use the more cost-effective PD Standard. You will look at the factors for choosing storage for shared file systems in a bit. For now, simply note that the relevant folders for shared file storage are the /sapmnt and /usr/sap directories of the ASCS for failover purposes, and other additional folders like interfaces or SAP transports.

Now let's look at some of the options for shared file storage for the application layer of an SAP deployment. NetApp Cloud Volumes is a partner solution available on Google Cloud Marketplace that supports multi-zone HA. Cross-regional replication for DR scenarios is on the roadmap; for now, you can do this manually using snapshot replication across regions. It has a minimum capacity of one terabyte. NetApp Cloud Volumes ONTAP, from the same partner, is a customer-managed version that can be set up for multi-region DR. NetApp solutions are offered in selected regions; additional regions will be added in the future. Google Cloud Filestore is the native option. It is a zonal service, so it supports neither multi-zonal nor regional deployments. It is best suited for single-zone HANA scale-out deployments, where high availability is achieved by standby nodes. It offers an SLA of 99 percent and has a minimum capacity of one terabyte. For regions where NetApp solutions are not available, customers can also use their own Linux NFS server or Windows file share, as applicable.
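To close, here is the numeric sketch of the even IOPS split referenced in the layout example above. It is a minimal illustration: the 75,000 IOPS per-VM limit is an assumption inferred from the 25K and 37.5K figures in this example, not an official number; real limits vary by machine type and vCPU count, so check the PD performance documentation.

```python
# Minimal sketch of how a per-VM IOPS limit is shared evenly across
# attached persistent disks, regardless of each disk's size.
# VM_IOPS_LIMIT is an assumption inferred from the example figures
# (25K and 37.5K), not an official limit for any machine type.

VM_IOPS_LIMIT = 75_000

def per_disk_iops(num_attached_disks: int) -> float:
    """Each attached disk receives an equal share of the VM-level limit."""
    return VM_IOPS_LIMIT / num_attached_disks

# The three layouts from the 2 TB HANA example:
print(per_disk_iops(3))  # shared, data, log on separate disks  -> 25,000 each
print(per_disk_iops(2))  # shared separate, data + log combined -> 37,500 each
print(per_disk_iops(1))  # data, log, shared on a single disk   -> 75,000
```

This even split is why combining the data and log volumes on a single PD SSD yields the highest IOPS in a scale-up configuration.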