During a meeting of class 6.C40/24.C40 (Ethics of Computing), Professor Armando Solar-Lezama poses the same impossible question to his students that he often asks himself in the research he leads:
“How do we make sure that a machine does what we want, and only what we want?”
In a time some consider the golden age of generative AI, it may seem like an urgent new question. But Solar-Lezama, the Distinguished Professor of Computing at MIT, is quick to point out that this struggle is as old as humankind itself.
He begins by retelling the Greek myth of King Midas, the monarch who was granted the godlike power to transform anything he touched into solid gold. Predictably, the wish backfired when Midas accidentally turned everyone he loved into gilded stone.
“Be careful what you ask for, because it may be granted in ways you don’t expect,” he says, cautioning his students, many of them aspiring mathematicians and programmers.
Digging into the MIT archives to share slides of grainy black-and-white photographs, he narrates the history of programming. We hear about the 1970s Pygmalion machine that required incredibly detailed cues, and the late-1990s computer software that took teams of engineers years and an 800-page document to program.

While remarkable in their time, these processes took too long to reach users. They left no room for spontaneous discovery, play, and innovation.
Solar-Lezama talks about the risks of building modern machines that don’t always respect a programmer’s cues or red lines, and that are as capable of inflicting harm as they are of saving lives.
Titus Roesler, a senior majoring in electrical engineering, nods knowingly. Roesler is writing his final paper on the ethics of autonomous vehicles, weighing who is morally responsible when one hypothetically hits and kills a pedestrian. His argument probes the assumptions behind technical advances and considers multiple valid viewpoints, leaning on the philosophical theory of utilitarianism. “Roughly, according to utilitarianism, the morally right thing to do is whatever brings about the greatest good for the greatest number of people,” Roesler says.
MIT philosopher Brad Skow, with whom Solar-Lezama developed and is team-teaching the course, leans forward and takes notes.
A class that demands technical and philosophical expertise
Ethics of Computing, offered for the first time in fall 2024, was created through the Common Ground for Computing Education, an initiative of the MIT Schwarzman College of Computing that brings multiple departments together to develop and teach new courses and launch new programs blending computing with other disciplines.
The instructors alternate lecture days. Skow, the Laurance S. Rockefeller Professor of Philosophy, brings his discipline’s lens for examining the broader implications of today’s ethical issues, while Solar-Lezama, who is also associate director and chief operating officer of MIT’s Computer Science and Artificial Intelligence Laboratory, offers perspective through his.
Skow and Solar-Lezama attend one another’s lectures and adjust their follow-up class sessions in response. Introducing the element of learning from one another in real time has made for more dynamic and responsive class conversations. Recitations that break down the week’s topic with graduate students from philosophy or computer science, along with lively discussion, round out the course.
“An outsider might think this is going to be a class that makes sure the new computer programmers MIT sends into the world always do the right thing,” Skow says. However, the class is deliberately designed to teach students a different skill set.
Determined to create an impactful semester-long course that did more than lecture students about right and wrong, philosophy professor Caspar Hare conceived the idea for Ethics of Computing in his role as an associate dean of the Social and Ethical Responsibilities of Computing. Hare recruited Skow and Solar-Lezama as the lead instructors because he knew they could do something deeper than that.
“Thinking deeply about the questions that come up in this class requires both technical and philosophical expertise. There aren’t other classes at MIT that place the two side by side,” Skow says.
That’s exactly what drew senior Alek Westover to enroll. The math and computer science double major explains, “A lot of people are talking about what the trajectory of AI will look like in five years. I thought it was important to take a class that would help me think more about that.”
Westover says he is drawn to philosophy by an interest in ethics and a desire to distinguish right from wrong. In math classes, he has learned to write down a problem statement and receive instant clarity on whether he has successfully solved it. In Ethics of Computing, however, he has learned to make written arguments for “tricky philosophical questions” that may have no single correct answer.
For example, “One problem we could be worried about is what happens if we build powerful AI agents that can do any job a human can do,” Westover says. “If we’re interacting with these AIs to that degree, should we be paying them a salary? How much should we care about what they want?”
There’s no easy answer, and Westover expects to encounter many other dilemmas like it in the workplace in the future.
“So, is the internet destroying the world?”
The semester began with a deep dive into AI risk, or the question of whether AI poses an existential risk to humanity, unpacking free will, how our brains make decisions under uncertainty, and debates over the long-term liabilities and regulation of AI. A second, longer unit zeroed in on “the internet, the World Wide Web, and the social impact of technical decisions.” The end of the term looks at privacy, bias, and free speech.
One class session was devoted to provocatively asking: “So, is the internet destroying the world?”
Senior Caitlin Ogoe is majoring in Course 6-9 (Computation and Cognition). Being in an environment where she can examine these types of issues is precisely why the self-described “technology skeptic” enrolled in the course.
Growing up with a mom who is hearing impaired and a younger sister with a developmental disability, Ogoe became the default family member whose role it was to call providers for tech support or program iPhones. She parlayed her skills into a part-time job fixing cell phones, which paved the way for a deep interest in computation, and a path to MIT. However, a prestigious summer fellowship in her first year made her question the ethics behind how consumers were affected by the technology she was helping to program.
“Everything I’ve done with technology is from the perspective of people, education, and personal connection,” Ogoe says. “This is a niche that I love. Taking humanities classes around public policy, technology, and culture is one of my big passions, but this is the first course I’ve taken that also involves a philosophy professor.”
The following week, Skow lectures on the role of bias in AI, and Ogoe, who is entering the workforce next year but eventually plans to attend law school to focus on regulating related issues, raises her hand four times to ask questions or share counterpoints.
Skow digs into an examination of COMPAS, a controversial piece of AI software that uses an algorithm to predict the likelihood that people accused of crimes will reoffend. According to a popular 2018 article, COMPAS was likely to flag Black defendants as future criminals, producing false positives at twice the rate it did for white defendants.
The class session is dedicated to determining whether the article warrants the conclusion that the COMPAS system is biased and should be discontinued. To do so, Skow introduces two different theories of fairness:
“Substantive fairness is the idea that a particular outcome might be fair or unfair,” he explains. “Procedural fairness is about whether the procedure by which an outcome is produced is fair.” A variety of conflicting criteria of fairness are then introduced, and the class discusses which are plausible, and what conclusions they warrant about the COMPAS system.
Afterward, the two professors head up to Solar-Lezama’s office to debrief on how the exercise went that day.
“Who knows?” Solar-Lezama says. “Maybe five years from now, everybody will laugh at how worried people were about the existential risk of AI. But one of the themes running through this class is learning to look beyond the media discourse and to get to the bottom of these debates by thinking rigorously about the issues.”