Now that we've explored compute, and why it's needed for big data and ML jobs, let's examine storage. For proper scaling capabilities, compute and storage are decoupled. This is one of the major differences between cloud computing and desktop computing: with cloud computing, processing limitations aren't attached to storage disks.

Most applications require a database and storage solution of some kind. With Compute Engine, for example, which was mentioned in the previous video, you can install and run a database on a virtual machine, just as you would in a data center. Alternatively, Google Cloud offers fully managed database and storage services. These include Cloud Storage, Cloud Bigtable, Cloud SQL, Cloud Spanner, and Firestore. The goal of these products is to reduce the time and effort needed to store data. This means creating an elastic storage bucket directly in a web interface or through a command line. Google Cloud offers relational and non-relational databases, and worldwide object storage. We'll explore those options in more detail soon.

Choosing the right option to store and process data often depends on the data type that needs to be stored and the business need. Let's start with unstructured versus structured data. Unstructured data is information stored in a non-tabular form, such as documents, images, and audio files. Unstructured data is usually best suited to Cloud Storage.

Cloud Storage has four primary storage classes. The first is Standard Storage. Standard Storage is considered best for frequently accessed, or "hot," data. It's also great for data that is stored for only brief periods of time. The second storage class is Nearline Storage. This is best for storing infrequently accessed data, like reading or modifying data once per month or less, on average. Examples include data backups, long-tail multimedia content, and data archiving. The third storage class is Coldline Storage. This is also a low-cost option for storing infrequently accessed data. However, compared to Nearline Storage, Coldline Storage is meant for reading or modifying data at most once every 90 days. The fourth storage class is Archive Storage. This is the lowest-cost option, used ideally for data archiving, online backup, and disaster recovery. It's the best choice for data that you plan to access less than once a year, because it has higher costs for data access and operations, and a 365-day minimum storage duration.
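To make the storage classes concrete, here is a minimal sketch of creating a bucket with a non-default storage class using the google-cloud-storage Python client. The project ID, bucket name, and file paths are hypothetical placeholders, and this is just one way to use the client library, not part of the course itself:

```python
from google.cloud import storage  # pip install google-cloud-storage

# Credentials are assumed to come from the environment (for example,
# GOOGLE_APPLICATION_CREDENTIALS); the project ID is hypothetical.
client = storage.Client(project="my-project")

# Configure a bucket for infrequently accessed backups: Nearline suits
# data that is read or modified about once a month or less.
bucket = storage.Bucket(client, name="my-backup-bucket")
bucket.storage_class = "NEARLINE"  # or "STANDARD", "COLDLINE", "ARCHIVE"
client.create_bucket(bucket, location="US")

# Upload a backup object; it inherits the bucket's storage class.
blob = bucket.blob("backups/2024-01-01.tar.gz")
blob.upload_from_filename("2024-01-01.tar.gz")
```

The same operations are also available through the web interface and the command line mentioned earlier; the client library is simply the programmatic route.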
Alternatively, there is structured data, which represents information stored in tables, rows, and columns. Structured data comes in two types: transactional workloads and analytical workloads. Transactional workloads stem from online transaction processing (OLTP) systems, which are used when fast data inserts and updates are required to build row-based records. This is usually to maintain a system snapshot. They require relatively standardized queries that impact only a few records. Then there are analytical workloads, which stem from online analytical processing (OLAP) systems, which are used when entire datasets need to be read. They often require complex queries, for example, aggregations.

Once you've determined whether the workloads are transactional or analytical, you'll need to identify whether the data will be accessed using SQL or not. If your data is transactional and you need to access it using SQL, then Cloud SQL and Cloud Spanner are two options. Cloud SQL works best for local-to-regional scalability, while Cloud Spanner is best for scaling a database globally. If the transactional data set will be accessed without SQL, Firestore might be the best option. Firestore is a transactional, NoSQL, document-oriented database.

If you have analytical workloads that require SQL commands, BigQuery is likely the best option. BigQuery, Google's data warehouse solution, lets you analyze petabyte-scale datasets. Alternatively, Cloud Bigtable provides a scalable NoSQL solution for analytical workloads. It's best for real-time, high-throughput applications that require only millisecond latency.
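As a rough sketch of how those two decision branches look in code, the snippet below writes a transactional record to Firestore (no SQL) and runs an analytical aggregation in BigQuery (SQL), using their respective Python clients. The collection, field, and table names are hypothetical, invented purely for illustration:

```python
from google.cloud import bigquery   # pip install google-cloud-bigquery
from google.cloud import firestore  # pip install google-cloud-firestore

# Transactional, non-SQL branch: Firestore stores each record as a
# document in a collection; names and fields here are hypothetical.
db = firestore.Client()
db.collection("orders").document("order-1001").set(
    {"customer": "alice", "total": 42.50, "status": "shipped"}
)

# Analytical, SQL branch: BigQuery scans an entire table to compute
# an aggregation; the project/dataset/table path is hypothetical.
bq = bigquery.Client()
query = """
    SELECT status, COUNT(*) AS order_count
    FROM `my-project.sales.orders`
    GROUP BY status
"""
for row in bq.query(query).result():
    print(row.status, row.order_count)
```

A Bigtable lookup, by contrast, would retrieve rows by key through its own NoSQL client rather than through SQL, which is what makes it the non-SQL choice on the analytical side.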