Well, hi there. I'm John, and this is the InfoSec Skills learning path for the 2021 OWASP Top 10. This video covers security risk number 4, insecure design. Let's hop in and start talking about it.

Insecure design. This category focuses on risks related to design and architecture, the fundamental way that applications are put together. How do you think about these things? The idea here is that the OWASP organization, the people who put all of this together, call for more use of threat modeling. Threat model the applications, run through different threat scenarios, and ask: is this thing designed properly? There need to be secure design patterns. There need to be reference architectures. Some of these documents that frankly make some people balk, or make their eyes glaze over, turn out to be really important parts of designing the application properly before you ever write the first line of code.

You can see there that we need to move beyond shift-left. Shift-left is the idea that we need to move the security conversation earlier, further left, in the application development life cycle. When you as a team, as an organization, sit down and say, "Hey, we need to develop a new application for whatever business reason," when does the security conversation become part of that overall application discussion? The idea of shift-left is to move the point in the timeline where security becomes part of that discussion, to make it earlier. What the OWASP Top 10 says about insecure design is that we need to go even further left than that. This needs to move into the pre-code activities, the things that happen, like I said, before you ever write your first line of code. Those are critical for the principles of Secure by Design.

If you want an analogy: if you're going to build a house, there are a lot of things you'll think about when you put it together. You want the right number of bedrooms and bathrooms, one storey or two, how big the garage is, what color to paint the walls, all those kinds of things. But if you design it without security in mind, maybe you leave blind spots where a burglar could break in, or the door is placed in such a way that someone could just climb through the window right next to it. You can put a lock on the door, but the fundamental design itself simply does not allow for security in that structure. That's the idea here as well. You can build an application and get it to do what you want it to do, make it look really pretty, make it run really fast, and that's all great. But if it is fundamentally designed without security in mind, or from a poor security perspective, you're going to have problems. That's the idea of moving beyond shift-left, into pre-code activities, to talk about Secure by Design. It needs to be baked into the entire discussion. A few other things to point out here.
Insecure design is, by nature, a very broad category. In OWASP Top 10 lists past, there would be very specific entries, like cross-site scripting, XML external entities, or insecure deserialization; some of those are from the 2017 list. With those, we could talk very specifically about different types of cross-site scripting attacks or XML external entity problems. Insecure design, like I said, is a broad category that represents a lot of different weaknesses. But effectively it's expressed there, as you can see, as missing or ineffective controls in the design. It gets back to what I was talking about before, how you fundamentally design an application.

You need to differentiate between design flaws and implementation defects. You can see there that a secure design can still have an implementation defect that leads to vulnerabilities, and those vulnerabilities may be exploited. In that case, you can have a really solid design, a really secure application on paper, but then implement it in a way that's not very safe. Maybe you've got your code built, your access controls in place, the right cryptography, whatever it is, but, and I'll pick on cryptography because that was a previous video here, if you don't implement the cryptography correctly, that can lead to vulnerabilities that attackers can exploit. So yes, you did use cryptography, but you did not implement it very well. The design itself mandated HTTPS, but if you did not implement that cryptography correctly, that can lead to problems. (I'll show a tiny code sketch of what that kind of implementation defect can look like in just a second.) Again, you can still have a secure design and run into problems because of the way you implement it.

However, having said that, you can see the next bullet there: an insecure design cannot be fixed by a perfect implementation. By definition, the security controls that were needed were never created to defend against the relevant attacks. You can design something very well and then implement it poorly, and that leads to problems. But if you design it poorly, you leave no chance for the implementation to make the application secure. At least if you design it in a secure way, you leave yourself the opportunity to implement it correctly as well, and you end up with a secure application. If you design it incorrectly, no amount of implementation and controls is going to overcome that insecure design. Frankly, that's one of the big reasons this is number 4. Not only is it in the top 10, it's in the top half of the top 10.

Anyway, you can see there, a lack of business risk profiling inherent in the software system being developed leads to insecure designs. This gets into some of those discussions that maybe are not very exciting, those business risk meetings in conference rooms or on Zoom calls where people's eyes glaze over, but they are important things to do. You have to talk about the business risk. You have to talk about the needs of the organization. You have to classify: what is critical, and what is not so critical? Those classifications need to inform the design of the application, the implementation of security controls around it, and the way you handle the data you're dealing with. That's an important part of this entire discussion of secure design, or in this case insecure design, which of course you want to avoid. Those are a few things to keep in mind in terms of insecure design.
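To make that "secure design, insecure implementation" idea concrete, here's a minimal sketch in Python. It assumes a hypothetical internal API at payments.example.com and uses the requests library; none of this comes from the OWASP material, it's just an illustration of how a design that mandates TLS can still be undermined by one careless implementation detail.

```python
import requests

API_BASE = "https://payments.example.com"  # hypothetical service; the design mandates HTTPS

def fetch_invoice_insecure(invoice_id: str) -> dict:
    # Implementation defect: TLS is used, as the design requires, but
    # certificate verification is switched off, so an attacker on the path
    # can man-in-the-middle the connection with a forged certificate.
    resp = requests.get(f"{API_BASE}/invoices/{invoice_id}", verify=False)
    resp.raise_for_status()
    return resp.json()

def fetch_invoice_secure(invoice_id: str) -> dict:
    # Same design, implemented correctly: verification stays on (the default),
    # and a timeout keeps a stalled connection from hanging the application.
    resp = requests.get(f"{API_BASE}/invoices/{invoice_id}", timeout=10)
    resp.raise_for_status()
    return resp.json()
```

The design here was fine; the first function is simply a defective implementation of it. The reverse, a design that never called for encryption at all, is the kind of flaw no amount of careful code can repair.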
But on the flip side, I wanted to talk about secure design. If insecure design is the bad thing, let's talk about what the good thing looks like. Secure design is a culture and a methodology that constantly evaluates threats and ensures that code is robustly designed and tested. It's a continuous process, a continuous mindset that says, "I want to constantly be looking at my code, testing my code, threat modeling my code to make sure it is designed properly. If I'm finding problems, certainly big ones, I may need to go back, really think about what I need to fix, and redo the design."

You can see there that threat modeling should be integrated into refinement sessions or similar activities, and there are a lot of resources we'll talk about later that can lead you down a good path for threat modeling. You can see there, in user story development, when you walk through the different use cases of what a user will experience with the application, or the different ways the application will be used, you need to determine the correct flow and the failure states. You need to make sure everybody understands what the correct flow is and agrees on what the failure states are. If those aren't agreed upon, that's going to impact the way you design this thing. There needs to be good communication.

The next one: analyze assumptions and conditions for expected and failure flows, and make sure they are accurate and desirable. Again, this is looking at the expected outcomes and the failures that could happen. Don't just test, "Is my application working the way I designed it to work, the way I want it to work?" Also check the failure states, the unexpected outcomes. What if someone puts numbers in a box that's supposed to hold text? What if they put special characters in there and start to launch different attacks that way? Will the application accept that or not? Look at the edge cases, the bizarre ways an attacker might manipulate your code, and see what happens. (I'll sketch what a couple of failure-state tests like that might look like in a moment.) Then you need to determine how to validate the assumptions and how to validate and enforce the conditions for proper behavior. So not only do you look at proper behavior, expected outcomes, and failure states; you also ask, how do I validate that the application is working properly? How do I validate that a failure actually occurred? Those are good discussions to have as an application team, whether it's development, security, operations, or all of those teams together.
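Here's that sketch of testing failure flows alongside the expected flow. It assumes a hypothetical validate_username function and uses pytest; the validation rule and the misuse inputs are my own illustrations of the "numbers and special characters in a text field" idea, not anything prescribed by OWASP.

```python
import re
import pytest

def validate_username(value: str) -> str:
    # Hypothetical validator: the expected flow only ever sees short names
    # that start with a letter, so anything else is an explicit failure state.
    if not isinstance(value, str) or not re.fullmatch(r"[A-Za-z][A-Za-z0-9_]{2,31}", value):
        raise ValueError("invalid username")
    return value

def test_expected_flow():
    # The correct flow everyone agreed on in the user story.
    assert validate_username("alice_01") == "alice_01"

@pytest.mark.parametrize("bad", [
    "",                               # empty input
    "a" * 500,                        # oversized input
    "1234567890",                     # numbers where text was expected
    "<script>alert(1)</script>",      # markup / special characters
    "alice'; DROP TABLE users;--",    # injection-style payload
])
def test_failure_states(bad):
    # The agreed failure state: reject these cleanly, don't accept or crash.
    with pytest.raises(ValueError):
        validate_username(bad)
```

The point is less the specific rule and more that the failure states are written down, agreed on, and enforced by tests, so the design assumptions get validated continuously rather than discovered by an attacker.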
Then that last one there: secure design is neither an add-on nor a tool that you can purchase later and just wrap into your software. It is, like I said at the beginning, a culture, a methodology, a mindset, a constant process, a different way of thinking about how to write code and how to build applications. It needs to be baked into the entire process. Secure design is not an easy thing or a quick thing. It takes a lot of effort and a lot of communication, but it is very important. Frankly, it goes back to what I said before: if you don't design an application securely, or to say it differently, if you have an insecure design, you've left yourself no chance to implement security controls that will overcome that design. That's why it's so important to have this communication, to do the threat modeling, to walk the user flows and the failure states, in order to design this thing properly.

Next, secure development. Once you've designed the application, you actually go and develop it; you start writing the code. Remember, we need to move beyond shift-left; we need to move left of shift-left. Once you've had those discussions and figured out, "Okay, this is the application, these are all the things we've talked about," then secure development becomes a critical part as well. You can see that secure software requires a secure development life cycle. It's not just, "Hey, let me sit down and start cranking out code." This all needs to be done in a methodical, I'll call it formalized, manner so that developers know how to write the code properly, they know the proper and secure way to build the application, and when things go sideways, they know who to reach out to and get involved.

A critical part of that is the second bullet there: engage your security specialists from the beginning of the project all the way through the whole project and its maintenance. Make sure the security teams are engaged from day 1. Again, I understand that sometimes it's a political minefield, or maybe you don't know the people on the security team, or they're just not easy to get along with, or maybe they're saying the same things about the development team. I would just say, for the sake of the business, the organization, and the users who will use this application, you've got to figure out a way to get along and communicate properly, or you're going to see the results of that, and they won't be good.

Here's a tool right there: use tools like the OWASP Software Assurance Maturity Model (SAMM) to help structure your secure software development efforts. There are a lot of things like this maturity model that will help you structure the way you develop applications. For those who say, "Hey, I've never really engaged in a secure development life cycle for my software," there are a lot of tools out there, and this is one specific one. I would just encourage you to get out there, look at different resources, talk to friends and co-workers, that kind of thing, and ask, "What does this need to look like?"
There are plenty of resources out there to help guide you in structuring your software development efforts. Anyway, it's an important thing to do. Let's move on. I know I've talked a bit about insecure design, secure design, and secure development, but I did want to touch on the factors the OWASP organization looked at as they compiled the data and categorized the different CWEs. This is the exact same format we've seen in the previous videos, but I just wanted to highlight these numbers.

There are 40 different CWEs mapped to insecure design. The max incidence rate there, you can see, is 24.19 percent, with the average incidence at 3 percent. Those are applications that were actually vulnerable to at least one of those CWEs. Those numbers are not quite as big as what we've seen in some of the previous categories, which starts to show you why this is not number one, two, or three; it's moving down the list a little bit. But it's still an important thing, and there are still a lot of issues going on out there.

You can see the average weighted exploit score is 6.46, again out of 10. It's above five, so it's worse than it could be, not as bad as it could be, but still in the upper half. Then the impact is 6.78. Again, the exploit score is how easy it is to exploit vulnerabilities associated with the CWEs mapped to insecure design, and the impact is how big the business impact would be if those vulnerabilities were exploited. That's almost seven out of 10, which is a big number.

Then you can see the max and average coverage; these are applications that were tested against those CWEs. Just over 77 percent were tested against at least one of the 40 CWEs, and the average coverage was about 42.5 percent. Then the total occurrences: 262,407, out of just over 500,000 applications tested, so over half of the applications tested had an issue with at least one of the CWEs mapped to the insecure design security risk. Then the total CVEs mapped to those CWEs is 2,691. It's not the biggest number out there, but that's almost 2,700 known vulnerabilities that give attackers a door into applications based on the insecure design of those applications. That's a big number.

If you step into the world of the attacker for a minute, you look at this and think, "Man, there are a lot of vulnerable applications out there, over 262,000 occurrences, and almost 2,700 ways I can try to get into these things." That's a pretty big threat landscape; the doors are open all over the place for an attacker. And as the application owner, as the security owner of this application, it gives you an idea of what you're up against. Something to think about.

Let me give you a couple of different scenarios. Again, this is a broad category, so these aren't random scenarios, just examples of ways applications might be designed where the inherent design itself gives way to vulnerabilities, to risks associated with the application. Let's say you have a web application, and you have a user accessing it; this happens all the time.
There's a "forgot password" recovery function built in: "Hey, I forgot my password." This happens all the time; everybody forgets a password. Maybe a quick word on that: either people use the same password over and over so they don't forget it, which is a problem that gives way to credential stuffing attacks and all of that, or they don't reuse passwords, which is a good thing, except then they forget them all the time. I'll put in a quick plug here for password managers, which store your different passwords for you and then autofill them.

I heard a good idea from a security practitioner one time. He said, "I use a certain password manager, and it stores all my passwords for me." Of course, a password manager is a huge target for attackers; if I can hack into your password manager, there are the keys to the kingdom. What this guy did is, for a given site, your bank or your favorite streaming service or whatever, he lets the password manager autofill the stored password, and then he types another couple of letters or numbers on the end, which is the only part he has to remember. That gives him a super strong password: special characters, upper and lower case, numbers, something random you would never guess. And even if an attacker stole his password manager credentials, they still wouldn't get in, because they don't know the extra few characters added on at the end. Just a little note about passwords, and a pretty interesting way to manage them more securely.

But back to the scenario. You have users who forget their passwords, and there is a credential recovery workflow, a function that says, "Let's figure out how to get you back in," where you have to fill in a series of questions and answers. The trouble is, questions and answers, you can see there, cannot always be trusted as evidence of identity, because more than one person may know the answers. In fact, using questions and answers for credential recovery is prohibited by NIST 800-63b and by the OWASP ASVS, that application security verification standard I've talked about in a couple of videos. Using questions and answers as a password recovery feature is not a great idea, because someone else could know those answers. That's an example of a design flaw. If you said, "We're going to implement password recovery with questions and answers," you could use the most secure question-and-answer challenge in the world, something super specific, and it would still be an insecure design. A quick example of such a question: mother's maiden name. It's a pretty common one, and someone else could simply know it. The question-and-answer password recovery workflow is an insecure design even with a secure implementation; you're trying to put a secure implementation on top of an insecure design, and you're just not going to be able to overcome that, no matter how secure the implementation you come up with. That's one scenario where the inherent design itself is insecure.
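To show in code what "a secure implementation on top of an insecure design" means, here's a minimal, hypothetical sketch of a question-and-answer recovery check. The function names, the salted hashing, and the constant-time compare are all things I'm assuming for illustration, not code from OWASP; the point is that every implementation detail can be careful and the design is still broken.

```python
import hashlib
import hmac
import os

def hash_answer(answer: str, salt: bytes) -> bytes:
    # "Secure implementation" details: normalize, salt, and stretch the stored answer.
    normalized = answer.strip().lower().encode("utf-8")
    return hashlib.pbkdf2_hmac("sha256", normalized, salt, 100_000)

def recovery_challenge_passes(supplied: str, stored_hash: bytes, salt: bytes) -> bool:
    # Constant-time comparison, proper hashing; assume rate limiting lives elsewhere.
    return hmac.compare_digest(hash_answer(supplied, salt), stored_hash)

# None of the code above is the flaw. The design assumes that knowing
# "mother's maiden name" proves identity, so anyone who knows or can look up
# the answer gets the account. No implementation detail can repair that
# assumption, which is why NIST 800-63b and the OWASP ASVS rule it out.
salt = os.urandom(16)
stored = hash_answer("Smith", salt)
print(recovery_challenge_passes("smith", stored, salt))  # True for anyone who knows it
```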
Another scenario: let's say you have a cinema, a movie theater type of organization, with an online application to book tickets, and they offer a group discount. Let's say they allow you to reserve or block out 15 attendees in your group before they require any payment. That is inherently an insecure design, because attackers can come in and test whether they can book those blocks. Maybe they spin up a botnet, and the bots start booking blocks of 14 or 15 tickets, over and over, with no money ever changing hands. All of the tickets end up spoken for, so when legitimate users try to book a seat, the theater says, "Sorry, all of the tickets have been reserved; there are no more seats available for this showing." Then, in reality, showtime comes and goes, nobody shows up, and the theater never got any money. They had an empty theater and lost all of that revenue because their ticket booking flow was not designed securely. The inherent design allowed groups of tickets to be held without any deposit being required, so, obviously, the fix is to require a deposit regardless of the number of tickets being booked. You can still offer the group booking, you can even still give the discount, but the idea is, "Make sure you give me my money before I give you your tickets." (I'll sketch what that check could look like in a second.) That's not a hard thing to see, but these are the kinds of examples you can look at and ask, "Is there something like this in the applications I develop or the applications I'm trying to secure?" They can start to make you think about what your applications look like and about the ways attackers could get around the way yours have been designed.
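Here's that sketch of a deposit-first reservation check in Python. The names (BookingRequest, reserve_seats), the deposit amount, and the cap on unpaid holds are assumptions I'm making purely for illustration of the design fix, not anything from the OWASP write-up or a real booking system.

```python
from dataclasses import dataclass

MAX_UNPAID_HOLDS_PER_ACCOUNT = 2   # assumed cap on pending reservations per account
DEPOSIT_PER_SEAT = 2.00            # assumed per-seat deposit, credited at showtime

@dataclass
class BookingRequest:
    account_id: str
    seats: int
    deposit_paid: float

class BookingError(Exception):
    pass

def reserve_seats(req: BookingRequest, unpaid_holds: int) -> str:
    """Hold seats only if money changes hands up front.

    The insecure design let anyone hold up to 15 seats with no payment, so
    bots could hold every seat and simply walk away. Requiring a deposit per
    seat, and capping unpaid holds per account, removes that free hold.
    """
    if unpaid_holds >= MAX_UNPAID_HOLDS_PER_ACCOUNT:
        raise BookingError("too many pending reservations for this account")
    required = req.seats * DEPOSIT_PER_SEAT
    if req.deposit_paid < required:
        raise BookingError(f"deposit of {required:.2f} required before seats are held")
    # ... persist the reservation and kick off the payment-capture flow here ...
    return f"held {req.seats} seats for account {req.account_id}"
```

The interesting part is that the control lives in the business rule itself; it's a design decision you make before any code is written, not a patch you bolt on afterward.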
A couple of other things to talk about here to protect yourself; I know we've gone over a few of these already. Establish and use a secure development life cycle. There are whole classes on the secure development life cycle in and of itself, and I would encourage you to go out there and check out some of those materials.

Establish and use a library of secure design patterns. If there are libraries, functions, and patterns you're going to be using throughout your application, build up a library of secure, vetted patterns and say, "I'm going to go back to that library, and those are the ones I'm going to use." That's a good way to help your design.

Use threat modeling for, you can see there, authentication, access control, and those kinds of things. Is it easy for an attacker to get in? What are the different threats that could be present in that particular part of my application? Use threat modeling there.

Then you can see there, integrate security language and controls into user stories. This is that whole use-case, user-experience discussion. Make sure you've got security language and security controls in place for those flows. Look at plausibility checks at each tier. How easy would it be to attack this thing? How plausible is it that our application tips over if some attack comes in? Those are questions you can integrate into the design of your application.

You can see there, write unit and integration tests to validate that critical flows are resistant to the threat model. If you've run proper threat modeling, then you need to validate that the critical flows hold up against those threats. (I'll show a quick sketch of what a few of those tests might look like at the end of this list.) Again, this goes back to the discussions you have before the application is even developed: "Have we designed this thing properly? Have we communicated properly among all the teams? Have we looked at the business logic and the business outcomes? What are we trying to accomplish with this application?" That informs the critical flows the user will experience, which in turn informs the threat modeling: "Okay, if this is my critical path, my critical function in this application, that's the thing I need to secure and watch the most." Then, when I lay the threat model over the top of that, how do I know the application has withstood it? That's what this one is all about: unit and integration tests that validate those critical flows still hold after the threat modeling is done.

You can see there, too: compile use cases and misuse cases. This gets back to what I was talking about. Make sure the application doesn't just work the way you need it to work. If the user is supposed to put an email right here and a password right here, what if they put a math equation in instead of an email? What if they put executable code in the email field? What happens then? You need to test all of those things.

You can see there, segregate tier layers on the system and network layers, depending on the exposure and protection needs. You can start to segregate the application based on system and network. Not to go too deep into this, but you've got the network infrastructure, and then you've got the system, the application itself, that rides on that network. Where could this be vulnerable? Where could the attacks come from? Has all of that been properly designed?

Then that last one there: limit resource consumption by user or service. This is about containing the problem, controlling the outbreak if you will, controlling the damage. You don't want one single user or service consuming all of your resources; that could lead to a lot of problems. This idea of limiting things is a good one to think about as well.
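As a rough illustration of those unit and integration tests and misuse cases, here's a minimal sketch using pytest and a Flask-style test client. The application factory create_app, the routes, and the expected status codes are all hypothetical, just to show the shape of checking that critical flows stay resistant to the threat model; they're not from a real project or from the OWASP material.

```python
import pytest
from myshop import create_app  # hypothetical application factory for illustration

@pytest.fixture
def client():
    # Build the app in test mode and hand back a test client.
    app = create_app(testing=True)
    return app.test_client()

def test_checkout_requires_authentication(client):
    # Threat model: an anonymous user must not be able to complete checkout.
    resp = client.post("/checkout", json={"cart_id": "abc123"})
    assert resp.status_code in (401, 403)

def test_admin_reports_blocked_for_normal_user(client):
    # Threat model: a regular user must not reach the admin tier.
    client.post("/login", json={"user": "alice", "password": "correct-horse"})
    resp = client.get("/admin/reports")
    assert resp.status_code == 403

def test_email_field_rejects_executable_content(client):
    # Misuse case from the user story: code pasted into the email field.
    resp = client.post("/register", json={"email": "<script>alert(1)</script>",
                                          "password": "S0me-Str0ng-Pass!"})
    assert resp.status_code == 400
```

Tests like these live alongside the code, so every build re-checks that the flows the threat model called critical still behave the way the design intended.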
Those are just some things you can do to protect yourself from insecure design problems.

Last here, I'll just mention some of these resources. You can see that first one there on secure architecture; I talked about reference architectures for a quick second at the beginning. There's the OWASP SAMM, one of those maturity models I talked about. There's security architecture material, there are threat assessments, and there's some of the NIST documentation there in that third one.

The Threat Modeling Manifesto, I'll mention this one really quickly. Again, the OWASP organization is comprised of a lot of volunteers, people out there in the world who just want to make the Internet a safer place and make applications more secure. The Threat Modeling Manifesto is not inherently an OWASP document or publication, but it's a really good resource. It's members of the security community trying to help out and saying, "If you want to talk about threat modeling applications today, this is a good place to go."

Then you can see that last one; I love this one: github.com, and I don't even know how to pronounce the account name, H-Y-S-N-S-E-C, but the repository is "awesome threat modeling." Not just threat modeling, awesome threat modeling. It's another great resource when you talk about insecure design, or secure design, and threat modeling your application; there are more threat modeling resources you can look at there. That one is on GitHub, so it's not OWASP proper, but it is part of the OWASP community. That's one of the things I love about this: like I've said before, you're not only going to see resources that come straight from owasp.org. You're going to have stuff all over the place, from anyone and everyone who is a good, contributing member of the security community, and OWASP can say, "Go read their stuff. These are good people. They know what they're talking about, so go check them out."

Anyway, those are some great resources to look at to make sure your application stays safe and, frankly, that it is designed properly from the very beginning and stays as secure as possible through production, maintenance, and every other aspect of its life. With that, let me say thanks for hanging in there and watching this video today. I look forward to seeing you in the next one down the line.