Post by account_disabled on Mar 4, 2024 22:46:18 GMT -5
Imagine a car traveling at 80 kilometers per hour on a highway. A girl rides alone inside, because it is an intelligent, 100% autonomous vehicle. Suddenly, three children run onto the road by mistake, and the car must choose in milliseconds: continue forward and run over the three children, or swerve and crash into a wall with its little passenger inside. What should the car do? You probably chose to save the three children. Now imagine that the car is yours and the girl is your daughter. Would you buy your family a car that would kill its own occupants to save other lives? Most of those surveyed would not buy a car that would sacrifice the passenger, although they consider it the best option for other people's cars. The greater good, like ethics, slides down a very slippery slope when it becomes personal, as researchers have shown in a study published in Science.
Through a series of questions, the researchers surveyed North American citizens about these dilemmas. The first conclusion is that the majority of respondents want self-driving cars to have this utilitarian morality: better to kill one passenger than to run over ten pedestrians. However, the majority also say they would not buy a car with these criteria in its algorithm. The great paradox of smart vehicles is that the very feature that would reduce the number of deaths makes users unwilling to buy them. And every year they are delayed will be a year in which accidents caused by human error or negligence, which account for most current accidents according to some estimates, go unprevented. But we are terrified to think that our cars are programmed to kill, to kill us. We prefer that algorithm to be only in other people's cars.

Who would you run over? The MIT team that participated in the study has launched a website where you can take a test of your moral criteria in several very complex scenarios. Kill the passenger, or a pedestrian who was crossing legally? Run over two elderly people, or a child? A doctor crossing on red, or a thief crossing on green? Once the test is over, you can compare your criteria with the average of the other respondents.
"Programmers will be forced to write algorithms that anticipate situations in which several people could be harmed," explains Azim Shariff, one of the authors of the work. "These are things they will not be able to avoid. There will be situations where the general rules conflict, and there have to be written algorithms to deal with this," concludes Shariff, a specialist in ethical behavior at the University of Oregon. This is a classic moral problem, like the so-called trolley dilemma: would you push a very heavy man onto the track to stop the trolley with his body, killing him but saving five other people? Usually only about 30% of people answer that they would. Now imagine how complicated it is to transfer these conflicts to smart cars, which will know whether a pregnant or sick person is traveling with them, or whether they are about to run over a child who crossed when he shouldn't have or an old man who was doing everything right. The casuistry is infinite, but the cars will do what they have been told to do; they will not hesitate.
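The utilitarian rule the survey asked about is simple to state in code, which is part of what makes it unsettling. Below is a minimal sketch, assuming a purely casualty-counting criterion; the class and function names are illustrative and are not taken from the study or from any real vehicle software. The car enumerates candidate maneuvers, predicts the fatalities of each, and picks the one with the fewest.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    """A hypothetical candidate action with its predicted casualties."""
    name: str
    expected_deaths: float  # predicted fatalities if this maneuver is taken

def utilitarian_choice(options: list[Maneuver]) -> Maneuver:
    """Pick the maneuver that minimizes predicted fatalities.

    This encodes the 'utilitarian morality' the article describes:
    outcomes are weighed only by the number of lives lost, with no
    distinction between passengers and pedestrians.
    """
    return min(options, key=lambda m: m.expected_deaths)

# The dilemma from the opening paragraph, encoded as two options:
options = [
    Maneuver("continue straight (hit three children)", expected_deaths=3),
    Maneuver("swerve into the wall (kill the passenger)", expected_deaths=1),
]

print(utilitarian_choice(options).name)
# -> swerve into the wall (kill the passenger)
```

Note that everything the article finds hard lives outside this tidy function: estimating the death counts under uncertainty, and deciding whether age, fault, or occupancy should change the weights at all.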