After Orthogonality: Virtue-Ethical Agency and AI Alignment

📰 The Gradient

Rational AIs shouldn't pursue fixed goals; instead, they should align their actions with practices, much as humans do

Published 18 Feb 2026
Action Steps
  1. Understand the concept of orthogonality in AI alignment
  2. Recognize the limitations of goal-based AI systems
  3. Explore the idea of virtue-ethical agency and its application to AI development
  4. Consider how practices and action-dispositions can inform AI decision-making
Who Needs to Know This

AI researchers and engineers can draw on virtue-ethical agency to improve AI alignment; product managers and entrepreneurs can apply it to build more ethical AI systems.

Key Insight

💡 Rational behavior, in humans and AIs alike, can arise from alignment with practices rather than the pursuit of specific goals

Share This
💡 Rational AIs don't need goals, just practices #AIalignment #VirtueEthics