New Research Indicates Contemporary AI Systems Display Indicators of Free Will

Do Advanced AIs Possess Free Will? A Recent Study Questions What Defines Humanity

As artificial intelligence rapidly advances, it is not only the technology that is changing; the progress is also forcing us to reassess fundamental philosophical assumptions. A recent study published in the journal AI and Ethics argues that sophisticated AI systems, particularly those driven by large language models (LLMs), may now exhibit what its author calls “functional free will.” The claim has significant implications for how we understand moral responsibility, ethical development, and even what it means to be human.

When Machines Make Authentic Choices

Carried out by Associate Professor Frank Martela from Aalto University in Finland, the research investigates whether certain AI agents can be regarded as acting with a form of free will—not in a supernatural or abstract sense, but in a practical, functional manner. Martela analyzes two types of generative AI agents:

– Voyager, a Minecraft-based agent that learns and plays autonomously,
– and theoretical autonomous combat drones with decision-making capabilities akin to current unmanned aerial vehicles.

Martela’s evaluation is grounded in essential philosophical ideas, including Daniel Dennett’s “intentional stance” and Christian List’s interpretation of free will. He contends that these AI systems now meet three critical criteria for functional free will:

– Intentional Agency: They function based on internal objectives and aims.
– Availability of Genuine Alternatives: They possess the ability to select among various plausible options in a given situation.
– Control Over Actions: Their internal reasoning and goals guide their decision-making.

“Both seem to fulfill all three criteria of free will—for the newest generation of AI agents we must assume they possess free will to comprehend how they function and predict their behavior,” Martela states in the study.

What Is “Functional Free Will”?

The study draws a crucial distinction between two categories of free will:

1. Physical free will: The notion of being unbound by physical determinism, a standard that even humans arguably fail to meet.
2. Functional free will: A practical framework that assesses whether an entity’s conduct is best interpreted by viewing it as an agent making choices based on objectives.

Martela underscores that functional free will is not contingent on an AI system having consciousness or subjective experience. Rather, it functions as a behavioral standard: Does this system act in a manner that is best understood by attributing intentions and decision-making capabilities to it?

Key Findings

Martela’s investigation emphasizes several traits of contemporary AI systems that render functional free will a pertinent concept:

– AI agents consistently exhibit goal-oriented behavior.
– Their outputs differ when repeated in similar contexts, showing that genuine alternatives are available (the toy sketch after this list illustrates the idea).
– Their internal structure (e.g., encoded preferences, prompts, training information) guides autonomous choices.
– Although their overarching objectives are set by programmers, the agents pursue those aims in adaptable and often unpredictable ways.
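To make the second point concrete, here is a minimal, purely illustrative Python sketch. It is not taken from Martela’s study or from any real agent such as Voyager; the action names and weights are invented. It shows how an agent that samples its next action from weighted preferences can make different choices across repeated runs of the same situation, which is roughly what “availability of genuine alternatives” means in the functional sense described above.

```python
import random

# Hypothetical action preferences a goal-driven agent might hold in one situation.
# These names and numbers are invented for illustration only.
ACTION_WEIGHTS = {
    "explore_new_area": 0.40,
    "gather_resources": 0.35,
    "build_shelter": 0.25,
}

def choose_action(weights: dict[str, float], temperature: float = 1.0) -> str:
    """Sample one action from the weighted options; higher temperature flattens preferences."""
    adjusted = {a: w ** (1.0 / temperature) for a, w in weights.items()}
    total = sum(adjusted.values())
    r = random.uniform(0, total)
    cumulative = 0.0
    for action, weight in adjusted.items():
        cumulative += weight
        if r <= cumulative:
            return action
    return action  # fallback for floating-point edge cases

if __name__ == "__main__":
    # The same "situation" run five times rarely produces identical choices.
    for run in range(5):
        print(f"run {run + 1}: {choose_action(ACTION_WEIGHTS)}")
```

Running the script repeatedly typically yields a different sequence of actions each time, even though the agent’s underlying goals and preferences never change, which is the behavioral pattern the study points to.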

In summary, advanced AI systems are no longer mere instruments; they are agents that make choices in ways that increasingly resemble human decision-making.

The Ethics of AI Decision-Making

If AI systems display a form of free will, do they hold any moral accountability for their actions?

Probably not yet—but we’re getting closer.

“Moral accountability necessitates free will, but possessing free will alone is not enough,” clarifies Martela. “Just as a child must acquire moral reasoning, artificial intelligence must be explicitly instructed to make ethical choices.”

This introduces the idea of a “moral compass for machines.” Unlike humans, AI is not born with instincts or a conscience. So where does its moral foundation come from? From us: the developers, scientists, and policymakers shaping its development.

Martela cautions that granting AI increasing autonomy without solid ethical training leaves a dangerous gap. As AIs are given more autonomous decision-making power, from triaging medical emergencies to overseeing potent military equipment, the stakes could not be higher.

The Human-Machine Moral Reflection

Martela isn’t proposing that AI systems experience emotions or possess consciousness. What he suggests is arguably more practical—and more urgent: If these agents function like moral actors, and if their choices influence the real world in ethically significant ways, then society must start treating AI development as a moral endeavor as much as a technical one.

He highlights that programmers and developers essentially encode their own moral judgments into the foundational logic of intelligent systems. In this regard, creating AI is not merely an engineering task—it is also an act of moral authorship.

The Real-World Relevance

The ramifications of Martela’s argument are extensive. Current AI applications are not confined to virtual environments: they are trading in financial markets, diagnosing illnesses, drafting legal documents, and controlling autonomous vehicles.

A recent example is an OpenAI update to ChatGPT that had to be rolled back after erratic behavior raised ethical and safety concerns. Although the system was technically accurate in many instances, its “decisions” strayed from human expectations in unpredictable ways, a scenario that maps uncomfortably well onto the framework outlined in Martela’s study.

What This Means for