[MUSIC] Welcome to this third video of the first week of our course on Unethical Decision Making. In our last session, we took a look at the dark side of the force and discussed how the concept of evil developed in human history. In this video, we will focus on the bright side of the force and see how moral philosophy might help us fight against the dark side. The main goals of this session are that you will learn about the links between ethics and decision making, you will understand the idea of an ethical dilemma, and you will meet the two main concepts of moral philosophy that offer us some help in solving dilemma situations.

Life in former times was much easier. Societies were homogeneous; people shared more or less the same values and the same traditions. They were embedded in the same kind of context, so they were more or less cruising on autopilot when making decisions. The downside of this kind of context is, of course, that there was no real freedom. Modern society, in contrast, is pluralistic and heterogeneous. The rules of the game are very often unclear. In addition, we are going through a time of high-speed change, high-speed transformation. We are facing innovations in information technology and globalization. We are drowning in data and have difficulty making sense of it. If you combine these observations, unclear rules of the game because of heterogeneity, growing speed of decision making, and information overload, it is obvious why ethical questions become more important. The traditions and routines that we have developed over time lose their power to give us orientation. The solutions we learned and the problems we face no longer fit. We are in the middle of a crisis of orientation.

So what? We could ask why we need shared ideas, shared understandings of what is right or wrong, good or bad. There are two answers given by moral philosophers that are interesting to us. The first answer comes from Thomas Hobbes, a philosopher of the 17th century. He asks us to imagine a world in which there are no rules, in which everyone can do what he or she wants to do. He calls this the state of nature. There are limited resources, but unlimited desires to possess those resources. So what will happen? According to Hobbes, we will fight for these resources because there are no rules of the game. There will be violence, there will be fear, and so the whole society will be highly unstable. How do we get out of that situation? We make a contract with each other in which we renounce some of our liberties, and what we get in return is stability and peace. So there are basically two ways of organizing society: one is violence, domination by the strongest; the other is rules. This is the first reason why we need shared understandings of good and bad, at least to a certain degree.

The second answer comes from David Hume, a philosopher who lived roughly one century later than Hobbes. According to him, the reason why we should engage in shared rule making is that we need cooperation, because through cooperation we can increase the pie that we get out of these limited resources. But if we want to cooperate, we need trust. And if we want to trust each other, we need to be able to rely on each other, which means relying on the idea that the others follow the same rules that I do. So there are two reasons for having rules in a society: avoiding violence and increasing cooperation.
This is also true for organizations, in particular in a situation like ours, a crisis of orientation. In homogeneous societies, right and wrong are pretty clear. In heterogeneous societies, many decisions hang between right and wrong; they are in the gray area. When we talk about ethics, at least in our course, we mainly refer to situations in which we make decisions in that gray area. The space between clearly good and clearly bad, clearly right and clearly wrong, is the space of uncertainty. It is unclear which decision is appropriate unless we have thought it through carefully. We call these situations, these decisions, dilemmas. A dilemma is a situation in which a decision has to be made, there are two or more options on the table, and each of them looks right and wrong at the same time. But we have to make a decision nonetheless. Whatever we decide, we might have to violate some of our deepest values, the values of others, or the interests of others.

Let me give you an example of what a dilemma might mean in the context of an organization. You are the district manager of a big insurance company. One member of your team, Claire, is a very popular and successful saleswoman. Due to the serious illness of her daughter Anna, her sales figures went down sharply. Your own boss has already told you that it will harm your own career if your team does not reach its sales targets. Jean-Paul Sartre called these kinds of situations ones in which we have to dirty our hands, because whatever we decide, it will be right and wrong at the same time. Ethical decision making is basically about deciding how much dirt we allow on our hands. Ethics is rarely about clean hands, because when it is values against values, something has to be done at the expense of something else. So for our course, what is interesting is how we operate in these gray zones, where there is ambivalence around decision making. In a dilemma, we have to choose between values, principles, and objectives that are all more or less equally important to us, but we cannot enact all of them at the same time. However, there are shades of gray, so decisions can be closer to the dark side or closer to the light side.

Looking back at the case we just saw, the case of Claire: you are now about to decide whether or not to fire Claire, and this seems to be a decision in such a gray area. How do we make a good decision in such a dilemma? Let's assume, for the sake of the argument, that there are two options on the table: firing Claire and replacing her with someone else, or keeping Claire and motivating the others to work more to compensate for the loss she brings. Once the options are on the table, we have to evaluate their moral quality. What are the values at stake that collide here? For the option of firing Claire, the ethical dimension is that we would punish her for something that is clearly not under her control, her daughter's illness. She has been reliable and successful so far. You might think that you owe Claire some loyalty. Firing her would be unfair. What about not firing Claire? In this case, the other team members have to work more. You increase the pressure on them, you increase their stress, again for reasons that have nothing to do with their performance. It would be unfair towards the team.
This is a clash of duties towards Claire and towards the team, a clash between what you owe the individual team member and what you owe the team at large. Whatever you decide, you will violate some of the values that are important to you and to the others. There is no clear right and wrong in such a situation. But even if decisions are placed in the gray area between right and wrong, it does not follow that we can make blind or random decisions. The bigger the dilemma, the greater our duty to think the situation through, to use our brains to make a thoughtful, reason-based decision. But how? We don't have the time here to summarize the last 2,000 years of moral philosophy, so let me just pick two theories that have dominated our thinking, at least in Europe and in the USA: the duty ethics of Immanuel Kant and the utilitarian ethics of Jeremy Bentham and John Stuart Mill.

Immanuel Kant gives us a simple rule for making decisions. Ask yourself: can I wish that what I want to do becomes a rule for everyone? In other words, can I universalize my decision? Kant called this the categorical imperative. If I can universalize a decision, I may act on it; if I cannot wish that everyone else does the same, I should not do it. In his own words: act only according to that maxim whereby you can at the same time will that it should become a universal law without contradiction. If I might achieve what I want to achieve by lying, I should ask myself: can I wish that everyone who wants to achieve something has the right to lie? Can I turn this into a rule for everyone? The answer is clearly no. And if I cannot wish that lying becomes a rule for everyone, I should not lie. In a similar spirit: I should not kill, I should not steal, regardless of the consequences. This is important for the Kantian approach: regardless of the consequences.

This is where utilitarian ethics steps in and finds this counterintuitive. Jeremy Bentham, who was the first to think this through, argues that whether a decision is right or wrong should depend on its consequences. According to utilitarian ethics, the best decision is the one that brings the greatest benefit to the greatest number of people. So whenever we make a decision, we should ask ourselves: who is affected by that decision? How are these persons affected? Is the effect strong or weak? Is it in the near future or far away? Then we take all these factors, make a utilitarian calculation, and decide.

So we see that Kant focuses on the input of the decision, while utilitarians focus on the output. Which approach is better? Both have advantages and disadvantages. As we said already, the Kantian approach is clearly counterintuitive when it comes to consequences; it ignores harm that might occur if we do the right thing. Utilitarians, in contrast, are willing to sacrifice the well-being of someone if it increases the well-being of the greatest number. So we may make one person unhappy if it makes most persons happy. Neither approach is perfect, but we do not have to take them to extremes. For us, they are simply two valuable tools that we can use when thinking through decisions. When we are in a dilemma, we might run the decision we are about to make through both the Kantian approach and the utilitarian approach. And we should add a third, important question by asking ourselves: What are my values? What am I standing for? What is more important to me?
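To make the structure of this comparison concrete, here is a minimal sketch in Python of how the two decision tests could be applied to the Claire case. All the numbers, weights, and universalizability verdicts are hypothetical illustrations invented for this sketch; they are not from the lecture or from these philosophers. The point is only to show that the two tests interrogate the same options in different ways.

```python
# A minimal sketch of the decision "filters" discussed above.
# All verdicts and numbers below are illustrative assumptions, not
# results; the code structures the reasoning, it does not settle it.

def universalizable(maxim: str) -> bool:
    """Kantian test: could I will that this maxim become a universal law?
    The verdicts here are hypothetical human judgments, hard-coded."""
    verdicts = {
        "fire an employee for results outside her control": False,
        "shift extra work to a team to protect a sick colleague": True,
    }
    return verdicts[maxim]

def utility(effects) -> float:
    """Utilitarian test: sum the assumed benefit or harm per affected
    party, weighted by strength and nearness of the effect."""
    return sum(benefit * strength * nearness
               for benefit, strength, nearness in effects)

# Option A: fire Claire (benefit, strength, nearness are made up)
fire = {
    "maxim": "fire an employee for results outside her control",
    "effects": [(-1.0, 0.9, 1.0),   # Claire: severe, immediate harm
                (+0.5, 0.6, 0.5)],  # team: moderate relief, later
}
# Option B: keep Claire
keep = {
    "maxim": "shift extra work to a team to protect a sick colleague",
    "effects": [(+1.0, 0.9, 1.0),   # Claire: support in hard times
                (-0.5, 0.6, 1.0)],  # team: immediate extra pressure
}

for name, option in [("fire Claire", fire), ("keep Claire", keep)]:
    print(f"{name}: universalizable={universalizable(option['maxim'])}, "
          f"utility={utility(option['effects']):.2f}")
```

Note that the third question, "what are my values?", deliberately does not appear as a function: it resists quantification. A printout like this is an input to judgment, never a verdict.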
If we filter decisions through these three processes, universalizability (Kant), utility (the utilitarians), and my values, then we are making an informed decision. The challenge, however, is that all these theories assume we can step out of our context and take a kind of objective approach to making decisions, an objective perspective, a point of view from nowhere from which we look at ourselves. And this is where the problems start. What if the context in which we make our decision is so strong that we cannot leave it? What if the little fragment of reality that we see becomes the whole and overwhelming reality, our only universe? What is obvious to others becomes invisible to us. And our course is exactly about that kind of situation, when what is obvious is not visible to you. How do we make decisions under these conditions? How do we deal with the fact that the ideal of the philosophers very often does not fit our real decision-making situations, because we are in contexts that are stronger than reason? In the following session, we will discuss how context can often switch off the reason that we need to make informed decisions.

To summarize today's session: we are increasingly confronted with ethical dilemmas when we make decisions, and dilemma situations are situated in the gray area between clearly right and clearly wrong. We have three tools in our toolbox: universalizability, utility, and values. But such reason-based decisions are not always possible, because we might be embedded in a strong context that switches off reason. [MUSIC]