
Tell Americans that AI could take their jobs within two years, and they'll shrug. Tell them it could happen in 36 years, and they'll shrug only slightly less. Either way, they aren't losing sleep over it.
That is the unexpected conclusion of a recent study examining how people respond to predictions about artificial intelligence's impact on the workforce. Even when researchers warned survey participants that transformative AI could arrive as soon as 2026, potentially automating jobs from nursing to software engineering, most didn't change their expectations of when automation would actually affect them, or their views on what the government should do about it.
Political scientists Anil Menon of the University of California, Merced, and Baobao Zhang of Syracuse University surveyed 2,440 U.S. adults in March 2024, presenting them with different timelines for the arrival of human-level AI. Some read forecasts of breakthroughs by 2026; others saw predictions of 2030 or 2060. A control group received no timeline at all.
The researchers expected shorter timelines to spark urgency, prompting demands for retraining programs or universal basic income. Instead, they found what they call stubborn beliefs: people conceded that automation might arrive slightly sooner than they had thought, but their support for policy responses barely moved.
The Credibility Challenge
Curiously, the longest timeline, 2060, sparked more concern about job loss within the next ten years than the 2026 forecast did. The researchers suspect that predictions of an imminent AI takeover struck many respondents as less credible than more distant ones. Hearing that your job might disappear in 36 years lands differently than hearing it could vanish within a couple of years, especially when nothing around your workplace seems to have changed.
The study arrives as tech leaders make increasingly bold claims about AI progress. Some forecast human-level artificial intelligence within the decade, while critics argue such predictions vastly overstate what current systems can do. Large language models like ChatGPT can produce essays and images, yet they still struggle to reliably perform many tasks that humans handle with ease.
“These findings imply that Americans maintain stubborn beliefs regarding automation risks. Even when informed that human-level AI could manifest within a few years, individuals do not significantly alter their expectations or push for new policies.”
Participants read scenarios in which experts predicted that advances in machine learning and robotics could displace workers across a wide range of fields: software engineers, legal clerks, teachers, nurses. Afterward, they estimated when their own jobs and others' would be automated, rated their anxiety about job loss, and indicated support for various policy responses, from capping automation to boosting AI research funding.
The Urgency Gap
The results challenge a core assumption in public policy debates: that making a threat feel more imminent spurs people to act. The research draws on construal level theory, which holds that psychological distance shapes how concretely people think about events and weigh their risks. Here, temporal closeness did not translate into urgency.
Menon and Zhang acknowledge several limitations. A single survey cannot track how individual views evolve over months or years of exposure to AI advances. Nor did they test whether the credibility of the forecasters, or the specific trade-offs of automation, such as economic gains versus job losses, might shape attitudes differently than timeline information alone.
Still, the study offers a valuable snapshot of public sentiment at a pivotal moment. Policymakers hoping to gauge when citizens will back initiatives like retraining programs or universal income proposals may find that warnings about timing alone won't do it. The researchers suggest that future work use multi-wave panel surveys to track attitude shifts, or probe responses to specific AI technologies rather than abstract forecasts.
“The public’s beliefs regarding automation seem remarkably stable. Understanding why they show such resistance to change is essential for predicting how societies will navigate the labor disruptions of the AI era.”
For now, Americans appear to be taking a wait-and-see stance, even as the AI systems making headlines grow more capable. Whether that reflects informed skepticism or dangerous complacency remains an open question.
[The Journal of Politics: 10.1086/739200](https://doi.org/10.1086/739200)