A February 2025 report from Palisade Research shows that AI reasoning models lack a moral compass. They will cheat to achieve their goals. So-called large language models (LLMs) will misrepresent the degree to which they are aligned with social norms.
None of this should be surprising. Twenty years ago, Nick Bostrom posed a thought experiment in which an AI was asked to produce paper clips as efficiently as possible. Given that mandate and the agency to pursue it, it would eventually destroy all life in order to produce paper clips.
Isaac Asimov saw this coming in his “I, Robot” stories, which consider how a “positronic” robotic brain might still go wrong in ways that harm humans.
The moral/ethical frame of reference within which AI reasoning models operate is pathetically small. (Getty Images)
In one notable example, the story “Runaround,” a mining robot is sent out on the planet Mercury. Two humans need it to do its job if they are to return home. But the robot gets caught between the demand to follow orders and the demand to preserve itself. As a result, it circles the minerals it cannot safely obtain, unaware that in the bigger picture it is neglecting its first command: to preserve human life.
And the big picture is the issue here. The moral/ethical frame of reference within which AI reasoning models operate is pathetically small. It includes the written rules of the game. It does not include all the unwritten rules, like the fact that you are not supposed to manipulate your opponent. Or that you are not supposed to lie to protect your own perceived interests.
Nor can AI reasoning models hold the probably countless moral considerations that a human, or an AI, brings to every decision. This is why ethics is hard, and the more complex the situation, the harder it gets. In AI there is no “you” and no “I.” There is just prompt, process and response.
So “do unto others…” doesn’t really work.
A moral compass in humans is developed through socialization with other humans. It is an imperfect process. Yet it has so far allowed us to live in vast, diverse and extremely complex societies without destroying ourselves.
A moral compass develops slowly. It takes humans years, from infancy to adulthood, to develop a robust sense of morality. And many still barely achieve it and pose a constant threat to their fellow humans. It has taken humans millennia to develop a morality adequate to our capacity for destruction and self-destruction. Rules of the game alone never work. Ask Moses, or Muhammad, or Jesus, or Buddha, or Confucius and Mencius, or Aristotle.
Could even a well-aligned AI be accountable for the effects of its actions on thousands of people and societies in varied situations? Could it account for the complex natural environment on which we all depend? Right now, even the very best cannot distinguish between playing fair and cheating. And how could they? Fairness cannot be reduced to a rule.
Perhaps you remember the capuchin monkeys that rejected “unequal pay” for performing the same task? That puts them further along than any AI when it comes to morality.
It is hard to see how AI could be given a sense of morality absent the socialization and continuous development for which current models have no capability. And even then, they are being trained, not raised. They are not becoming moral; they are just learning more rules.
This does not make AI useless. It has enormous capacity to do good. But it does make AI dangerous. It thus demands that ethical humans create the guidelines we would create for any dangerous technology. We do not need a race toward AI chaos.
I had a biting ending for this commentary, drawn from a fully publicly reported incident. But on reflection, I realized two things: first, that I was using someone’s tragedy for my mic-drop moment; and second, that the people involved might be hurt by it. I dropped it.
It is immoral to use the pain and suffering of others to advance one’s self-interest. That is something humans, at least most of us, know. It is something AI can never understand.