Exploring the Metatheory Behind AI Dilemmas: Uncovering the Role of Axiology and Reciprocal Accountability

AI: The only solution to the AI problem
Why do AI ethics conferences fail? They fail because there is no metatheory that explains how ethical disagreements can arise from phenomenologically distinct worlds, how we discover those worlds, and how such shifts have shaped Western civilization over the past several thousand years, from the Greeks and Romans through the Renaissance and the Enlightenment.

We may have given up on ethics too soon. A third approach, which combines reciprocal accountability with ethics, can address this. Let's first examine the flaw in simple reciprocal accountability. Right now, we can use ChatGPT to provide balancing feedback and to catch those who cheat with ChatGPT. As was noted in the previous paragraph with regard to the colonization of Indigenous nations, the dynamics that operate under our control can change quickly once the technological/developmental gap grows large enough.

Forrest Landry identified this problem in a recent discussion with Jim Rutt. One could conclude that, even though we may not like it, axiology has a part to play (or, more specifically, a phenomenologically informed understanding of axiology). Zak Stein addresses some of this in his article, "Technology Is Not Value Neutral". Iain McGilchrist combines the topics of power and values in his metatheory of attention. That theory uses the same notion of reciprocal accountability (though in his case it is called opponent processing). There is a historical precedent for this, and we can also point to biological analogies. The neurology of the brain demonstrates it, going back at least to Nematostella vectensis, a sea anemone that lived 700 million years ago. The opponent processing of these two very distinct ways of attending to the world, and their ethical frameworks, has been working for a long time.