Hi, and welcome back. Let's talk about metrics today. Travis provides all kinds of integrations, which makes it easy to start capturing metrics on your project. Here are a few products, available either as GitHub apps or Travis integrations, that can help you get started. If you navigate to docs.travis-ci.com and scroll down to the integrations and notifications section, you'll find many integrations that are available with Travis today. The first one I want to look at is called SonarCloud. SonarCloud comes with an integration that you can enable on Travis. It can be used to capture code quality metrics and to flag known bugs, and it has a free pricing tier for open source projects, so you can try it out in your spare time. The second product I want to draw your attention to is called Code Climate. Code Climate is another metrics-capturing tool; it helps you capture test coverage to make sure that your development efforts actually include all of the testing that needs to be done for your project. You can find out more about Code Climate by clicking on the Code Climate link, going to codeclimate.com, and checking out the resources there.

Now, these types of tools will help us measure and manage our tests, improve our security, and get better quality in our code. However, sometimes we also need data that helps us optimize the overall CI system. These types of metrics might require us to take some additional steps to capture data around job executions, such as when we're getting failures, when we're having successes, and how long the jobs are taking to execute. So in this exercise we'll use a few open source projects to set up Travis to capture this type of data.
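As a taste of how lightweight these integrations can be, here is roughly what enabling the SonarCloud addon in a project's `.travis.yml` looks like. This is a hedged sketch based on the Travis addon pattern, not a complete setup: the organization name is a placeholder, and in a real project you would supply your SonarCloud token as an encrypted value or a `SONAR_TOKEN` environment variable rather than committing it.

```yaml
# .travis.yml (sketch) -- enable the SonarCloud addon for analysis.
# "my-github-org" is a placeholder; use your own SonarCloud organization key.
addons:
  sonarcloud:
    organization: "my-github-org"

script:
  # Run the project's normal build/tests, then the SonarCloud scanner.
  - npm test
  - sonar-scanner
```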
Just a word of warning: this is a class experiment only. To use the examples I'm going to show you in a production environment, you would need to take additional steps to secure them, to make sure that the data is persistent, and to make sure that you have backups for your environments, just in case you want to take this a little further. For now, consider this a class exercise. We'll walk through the details of getting you started and set up, but we won't go into how to scale and run these tools for a production environment. They can, however, be modified to run that way.

So, as an overview of the tools we're going to be using today: first, let's cover what we'll use to capture the metrics from Travis. Building on our past experience, we'll use the Heroku application we've been working on to host the metrics capture with Probot, and we'll also host a data analytics system on Heroku itself. This system is one we can use for other applications besides just Probot. We'll be using an open source project called Prometheus, originally created at SoundCloud. You can read all about Prometheus at prometheus.io; they have fantastic documentation that you can go through to learn everything about it. That system will help us capture time series data from our CI project in a scalable manner. We will also be using another open source project called Grafana to collect and visualize the metrics from Prometheus. If you go to grafana.com, you'll see that Grafana can query the data Prometheus has captured and turn it into visualizations. Grafana provides a free tier that will help us get started really quickly.
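To give you a sense of how Prometheus finds the data it captures, here is a minimal sketch of a Prometheus configuration that scrapes a metrics endpoint on a Heroku-hosted app. The host name and job name below are placeholders I've made up for illustration; you would substitute your own app's URL.

```yaml
# prometheus.yml (sketch) -- tell Prometheus where to pull metrics from.
global:
  scrape_interval: 30s        # how often Prometheus scrapes each target

scrape_configs:
  - job_name: "probot-ci-metrics"     # placeholder job name
    metrics_path: /metrics            # the scrape URL our app will expose
    scheme: https
    static_configs:
      - targets: ["my-probot-app.herokuapp.com"]   # placeholder host
```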
So we'll use the free tier and set up some of the canned templates we have in our class exercise to see the analytics that are coming from the Prometheus system and being generated by the Travis CI environment. Then finally, we're going to leverage a component made available through Travis (it's actually an integration between Travis and GitHub) called the GitHub Checks API. You may have seen a little bit of this on the back end, but the important part here is that because travis-ci.com leverages this API, we can use this integration on travis-ci.com for every single job to start capturing the events that occur when we execute a build. The Travis CI integration gives us the ability to use the Checks API from GitHub to capture information such as who created the build request, when the build request was created, and when it finished, and this is what other integration providers use to integrate their tools into the GitHub ecosystem. We're going to leverage this Checks API to create what we call a scraper. There's a little open source project available on GitHub.com under Siimon/prom-client. This client makes it really easy to create what Prometheus requires: a scrape endpoint that produces the metrics Prometheus collects, so that a system like Grafana can visualize them. We'll integrate this library into our Probot app so that we can take the data we're capturing from the GitHub Checks API and generate metrics at a given point: a URL, called the scrape URL, that Prometheus can access on our Probot app. We'll dive deeper into the coding portion of this exercise a little later.