Welcome to week two. What we're going to focus on here is testing. This is a central, foundational element of getting your delivery pipeline into a more automated, more continuous state. When you test an individual piece of code, the criteria you're going to apply to it are relatively specific. But I also think it's good to look at your test program as a whole as itself a kind of experiment. The reason that's so important is that you cannot test everything. That's just not in the cards, no matter how good your testing program is. So you're going to make some imperfect but testable decisions about how you test. What better framework to govern this activity as you go from iteration to iteration, or sprint to sprint, than science itself? So we apply the idea of the scientific method here.

We start with an idea, and we want to make sure that, either in general or specifically for an individual item, whatever test we're going to run is worthwhile. You have to learn for your team what that means: what proves worthwhile and what doesn't. Then you have to have a testable idea about it. How will we know if this test is delivering value to the program or not? That's really important. Then, as we go from iteration to iteration, you want to ask: are we running the right kinds of tests? Are we making the right kinds of test investments? The criteria you apply here need to be highly iterative. So, in a given test cycle: which tests did we invest in? Why did we invest in those? How do we expect them to help us? Will velocity increase? Will the amount of manual work that developers, testers, and ops people have to do decrease?

Then, when that release goes out and bugs arise and support issues come in and have to be dealt with, it's important to have really good retrospectives on those and ask: could we have caught that error with unit tests, integration tests, or system tests? Or is there some other part of the process, the deploy process, say, where we should have caught it? By continually doing this, and concluding either "All right, we need to rework our idea" or "You know what, this is working. Great. Let's continue to invest in this area," you'll get to a better outcome. I would do this rather than try to create the perfect project plan for how you're going to update your testing infrastructure.

What do the tests look like? Well, one really great thing that I think is helpful as an introduction is the "Given, When, Then" pattern. The idea here is that we take, say, a user story. This one is about Ted the Technician, who fixes heating, ventilation, and air conditioning systems and wants to log into a system. We can take this user story about Ted wanting to create an account so he can log in, and we can map it to this pattern: "Given" a circumstance, "When" the user does something, "Then" we expect a certain thing to happen. Especially for these system tests, these larger tests, this is a really good way to organize and structure our thinking.

So, we might lay it out like this. Given Ted does not have an account and he is on the sign-up form; when he enters his email, and he enters his password, and he confirms the password (verifying those two password entries match), and he submits the form; then an account is created in Firebase, which is a service from Google that you'll see in some of the sample applications, the navigation shows a "Log Out" option, and the sign-up form is no longer visible.
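Laid out in that format, the scenario might look something like this. This is just a sketch; the exact wording of each step is one of many reasonable ways to phrase it:

```gherkin
Feature: Account sign-up

  Scenario: Ted creates an account so he can log in
    Given Ted does not have an account
    And he is on the sign-up form
    When he enters his email
    And he enters his password
    And he confirms the password
    And he submits the form
    Then an account is created in Firebase
    And the navigation shows a "Log Out" option
    And the sign-up form is no longer visible
```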
So, if you haven't written tests, or you haven't structured them, I would suggest sitting down, taking a user story or some behavior that you've tested, and just trying this out to see how it works for you. The other really cool thing about this format is that it maps to a tool called Cucumber, which takes this kind of syntax directly as an input and lets your developers or testers write what are called step definitions that map these phrases into specific steps and assertions in code, and then actually run tests against it; there's a sketch of what that might look like at the end of this section. An assertion is the idea that, when all this stuff happens, one plus one should equal two. Did the code actually deliver us a two? Because we're asserting that it ought to be two. This idea of an assertion is really central in testing.

So, as we go through the details this week, I think it's really important to periodically zoom out and think about how we get to a happy place with our tests. To do that, I think you sometimes have to think like a designer: what are we trying to deliver to the user, and how do we use that to focus our tests in this environment where, of course, we can't test everything? You need to work like an engineer, decomposing things, automating them, testing them, working systematically. And I think you also need to think like an economist: what is the downside if this thing breaks versus that thing, and how much time and energy is a certain activity taking from the team? It's important to be able to look at this stuff from all these different perspectives to get the best possible outcome, and particularly to foster the kind of interdisciplinary collaboration that's required to get to a test program that really makes sense for the whole pipeline.
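As promised, here's a minimal sketch of what Cucumber step definitions might look like, written in TypeScript with the Cucumber.js package (@cucumber/cucumber) and covering a few of the steps from the Ted scenario above. The in-memory app object and the test email address are hypothetical stand-ins; in a real suite these steps would drive a browser and query Firebase:

```typescript
import assert from "node:assert";
import { Given, When, Then } from "@cucumber/cucumber";

// Hypothetical stand-in for the real application under test.
const app = {
  accounts: new Set<string>(),
  onSignUpForm: false,
  openSignUpForm() { this.onSignUpForm = true; },
  signUp(email: string) { this.accounts.add(email); },
  hasAccount(email: string) { return this.accounts.has(email); },
};

let email = "";

Given("Ted does not have an account", function () {
  app.accounts.clear();
});

Given("he is on the sign-up form", function () {
  app.openSignUpForm();
});

When("he enters his email", function () {
  email = "ted@example.com"; // illustrative test data
});

When("he submits the form", function () {
  app.signUp(email);
});

Then("an account is created in Firebase", function () {
  // The assertion: we claim an account ought to exist now, and the
  // test passes only if the code actually delivered that outcome.
  assert.ok(app.hasAccount(email), `expected an account for ${email}`);
});
```

Running the cucumber-js command against the feature file executes each matching step in order, and a failing assertion fails the whole scenario, which is exactly the "did the code actually deliver us a two" check described above.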