
Celerity

(52,048 posts)
Sat Aug 9, 2025, 02:33 PM Aug 9

What Gödel's incompleteness theorems say about AI morality

The incompleteness of ethics

Many hope that AI will discover ethical truths. But as Gödel shows, deciding what is right will always be our burden

https://aeon.co/essays/what-godels-incompleteness-theorems-say-about-ai-morality

Imagine a world in which artificial intelligence is entrusted with the highest moral responsibilities: sentencing criminals, allocating medical resources, and even mediating conflicts between nations. This might seem like the pinnacle of human progress: an entity unburdened by emotion, prejudice or inconsistency, making ethical decisions with impeccable precision. Unlike human judges or policymakers, a machine would not be swayed by personal interests or lapses in reasoning. It does not lie. It does not accept bribes or pleas. It does not weep over hard decisions.

Yet beneath this vision of an idealised moral arbiter lies a fundamental question: can a machine understand morality as humans do, or is it confined to a simulacrum of ethical reasoning? AI might replicate human decisions without improving on them, carrying forward the same biases, blind spots and cultural distortions from human moral judgment. In trying to emulate us, it might only reproduce our limitations, not transcend them. But there is a deeper concern. Moral judgment draws on intuition, historical awareness and context – qualities that resist formalisation. Ethics may be so embedded in lived experience that any attempt to encode it into formal structures risks flattening its most essential features. If so, AI would not merely reflect human shortcomings; it would strip morality of the very depth that makes ethical reflection possible in the first place.

Still, many have tried to formalise ethics by treating certain moral claims not as conclusions but as starting points. A classic example comes from utilitarianism, which often takes as a foundational axiom the principle that one should act to maximise overall wellbeing. From this, more specific principles can be derived: for example, that it is right to benefit the greatest number, or that actions should be judged by their consequences for total happiness. As computational resources increase, AI becomes increasingly well suited to the task of starting from fixed ethical assumptions and reasoning through their implications in complex situations.
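[Editorial aside: the axiom-driven reasoning the essay describes can be sketched as a toy program. Everything here is an illustrative assumption, not from the essay: the action names, probabilities and wellbeing scores are invented, and real utilitarian formalisations are far richer than a single expected-value calculation.]

```python
# Toy sketch of axiom-based utilitarian reasoning: fix one axiom
# ("maximise overall wellbeing") and mechanically derive a choice from it.
# All action names and numbers below are illustrative assumptions.

def expected_wellbeing(outcomes):
    """Expected total wellbeing of an action: sum of probability * wellbeing."""
    return sum(p * w for p, w in outcomes)

# Each hypothetical action maps to (probability, total-wellbeing) outcome pairs.
actions = {
    "allocate_to_many": [(0.9, 80), (0.1, 20)],   # big benefit, small risk -> 74
    "allocate_to_few":  [(1.0, 60)],              # certain but smaller benefit -> 60
}

# The "derivation" is just picking the action the axiom ranks highest.
best = max(actions, key=lambda a: expected_wellbeing(actions[a]))
print(best)  # -> allocate_to_many
```

The point of the sketch is how little of morality it captures: once the axiom and the numbers are fixed, the machine's "ethical reasoning" is a maximisation, and every contested question has been pushed into the inputs.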

But what, exactly, does it mean to formalise something like ethics? The question is easier to grasp by looking at fields in which formal systems have long played a central role. Physics, for instance, has relied on formalisation for centuries. There is no single physical theory that explains everything. Instead, we have many physical theories, each designed to describe specific aspects of the Universe: from the behaviour of quarks and electrons to the motion of galaxies. These theories often diverge. Aristotelian physics, for instance, explained falling objects in terms of natural motion toward Earth’s centre; Newtonian mechanics replaced this with a universal force of gravity. These explanations are not just different; they are incompatible. Yet both share a common structure: they begin with basic postulates – assumptions about motion, force or mass – and derive increasingly complex consequences. Isaac Newton’s laws of motion and James Clerk Maxwell’s equations are classic examples: compact, elegant formulations from which wide-ranging predictions about the physical world can be deduced.
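[Editorial aside: the "postulates in, consequences out" structure of a formal physical theory can be made concrete with a small numerical example. This is not from the essay; the masses, timestep and integration scheme (semi-implicit Euler) are illustrative choices.]

```python
# Postulate: Newton's second law, F = m * a.
# Derived consequence: the motion of a freely falling object, obtained by
# integrating the law step by step. Mass, g and dt are illustrative values.

def simulate_fall(mass=1.0, g=9.81, dt=0.001, steps=1000):
    """Semi-implicit Euler integration of a = F/m with constant F = -m*g.
    Returns (velocity, position) after steps * dt seconds of free fall."""
    v, x = 0.0, 0.0
    for _ in range(steps):
        a = (-mass * g) / mass   # the postulate supplies the acceleration
        v += a * dt              # consequence: velocity accumulates
        x += v * dt              # consequence: position follows velocity
    return v, x

v, x = simulate_fall()
# After 1 s of simulated free fall: v is about -9.81 m/s, x about -4.91 m.
```

Nothing in the loop "knows" about falling; the trajectory is deduced entirely from the postulate plus initial conditions, which is exactly the deductive structure the essay asks whether ethics could ever share.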

snip
What Gödel's incompleteness theorems say about AI morality (Original Post) Celerity Aug 9 OP
my amateur-ish thoughts on the matter: ret5hd Aug 9 #1
agree, well said LymphocyteLover Aug 10 #4
AI is the devil. Scrivener7 Aug 9 #2
it is certainly being used by devils LymphocyteLover Aug 10 #3

ret5hd

(21,795 posts)
1. my amateur-ish thoughts on the matter:
Sat Aug 9, 2025, 02:45 PM
Aug 9

it is immoral to “offload” our moral decisions to machines.

“oh, that’s what the box said! i won’t worry my pretty mind with the question anymore! the box is always right!”

moral decisions are always to be pondered, belabored, questioned. to give up those responsibilities would, in my mind, mean giving up our humanity and becoming nothing more than a hedonistic automaton.
