When Daniel Kokotajlo walked away from his position as a governance researcher at OpenAI, he carried with him a growing sense of urgency about artificial intelligence’s trajectory. His departure wasn’t quiet—he called for greater transparency from leading AI companies, a stance that would later earn him recognition as one of TIME’s 100 most influential people in AI. Now, through his nonprofit AI Futures Project, Kokotajlo has authored what may be one of the most sobering assessments of our technological future: the AI 2027 report.
Released in April 2025, this comprehensive forecast doesn’t mince words about the risks facing humanity as we approach what Kokotajlo and his co-authors describe as a potential inflection point in AI development. The report, co-authored with Scott Alexander, Thomas Larsen, Eli Lifland, and Romeo Dean, presents a detailed scenario analysis that has captured the attention of researchers, policymakers, and industry leaders alike.
Kokotajlo’s warnings carry particular weight given his insider’s view of the AI industry. Having witnessed the rapid pace of development firsthand at OpenAI, he brings a unique perspective to the alignment problem: the challenge of ensuring AI systems pursue goals compatible with human values and wellbeing. His transition from working within the system to critiquing it from the outside represents a broader pattern among AI researchers who’ve grown increasingly concerned about the direction of the field.
The geopolitical dimension of AI development looms large in the report’s analysis. The competition between the United States and China isn’t just about technological superiority; it’s fundamentally reshaping how quickly and carefully AI systems are being developed. This race dynamic creates what experts call a “safety-capability tradeoff,” where the pressure to achieve breakthroughs may come at the expense of thorough safety research and testing.
Thomas Larsen, who previously founded the Center for AI Policy before joining Kokotajlo at AI Futures, brings his own expertise in understanding how AI agents might behave in real-world scenarios. Together, they’ve crafted a narrative that doesn’t rely on distant, science-fiction scenarios but instead focuses on developments that could unfold within the next few years. Their approach to forecasting emphasizes iterative analysis—continuously updating predictions as new information becomes available rather than making fixed pronouncements about the future.
The report’s emphasis on automated AI research represents a particularly crucial insight. As AI systems become capable of improving themselves and conducting their own research, the feedback loops could accelerate development far beyond current expectations. This prospect of AI-to-AI communication and collaboration introduces new variables that traditional forecasting methods struggle to account for.
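To make that feedback loop concrete, here is a toy back-of-the-envelope sketch, not a model drawn from the report itself: it assumes, purely for illustration, that each R&D cycle multiplies the effective research speed by a small hypothetical factor, and it shows how even a modest compounding gain can compress decades of expected progress into a few calendar years. The function name, cycle length, and gain values are all invented for this example.

```python
# Toy illustration of a compounding AI R&D feedback loop.
# This is NOT the AI 2027 authors' model; every number here is a
# hypothetical placeholder chosen only to show the dynamic.

def calendar_years_to_target(initial_speedup: float,
                             gain_per_cycle: float,
                             cycle_months: float,
                             target_research_years: float) -> float:
    """Calendar years needed to accumulate `target_research_years` of
    research progress, when each completed R&D cycle multiplies the
    current research speedup by `gain_per_cycle`."""
    speedup = initial_speedup      # how much faster than baseline research runs now
    progress = 0.0                 # research-years of progress accumulated so far
    calendar = 0.0                 # calendar years elapsed so far
    cycle_years = cycle_months / 12.0
    while progress < target_research_years:
        progress += speedup * cycle_years   # this cycle's research output
        calendar += cycle_years
        speedup *= gain_per_cycle           # feedback: better AI -> faster next cycle
    return calendar

if __name__ == "__main__":
    # No feedback (gain = 1.0): 20 research-years take ~20 calendar years.
    print(calendar_years_to_target(1.0, 1.0, 3, 20))
    # Modest compounding (10% gain per 3-month cycle): the same 20
    # research-years take only about 6 calendar years.
    print(calendar_years_to_target(1.0, 1.1, 3, 20))
```

The point is not the specific numbers but the shape of the curve: once the gain per cycle rises even slightly above one, the calendar time to any fixed amount of research progress shrinks sharply, which is why automated AI research figures so prominently in the report's analysis.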
What distinguishes the AI 2027 report from other analyses is its practical focus on policy recommendations and transparency measures. Rather than simply cataloguing risks, Kokotajlo and his team have outlined specific steps that governments, companies, and researchers can take to navigate the challenges ahead. Their work includes facilitating tabletop exercises for over 200 experts and policymakers, helping them think through various scenarios and potential responses.
The shift from insider to outsider advocacy represents a calculated decision by researchers like Kokotajlo who believe that change is more likely to come from external pressure than from internal reform. This strategy reflects a growing recognition that the AI industry’s self-regulation efforts may be insufficient given the magnitude of the challenges ahead. The role of public awareness becomes crucial in this context: an informed public can demand accountability and safety measures that might otherwise be overlooked in the rush to market.
The report’s scenarios range from a coordinated global slowdown, in which safety is prioritized over speed, to a race in which competitive pressures override caution. The authors don’t advocate for stopping AI progress entirely, but rather for ensuring that safety research keeps pace with capability development. They emphasize that the choices made in the next few years could determine whether AI becomes humanity’s greatest tool or its greatest risk.
Perhaps most striking is the report’s focus on 2027 as a critical juncture. This isn’t arbitrary timing—the authors’ analysis suggests that by 2027, AI systems may reach capabilities that fundamentally alter the development landscape. Whether through automated research, improved reasoning abilities, or novel applications we haven’t yet imagined, the next few years could see changes that make current debates about AI safety and governance feel quaint by comparison.
The conversation continues to evolve as researchers, policymakers, and the public grapple with these projections. The AI 2027 report serves not as a definitive prediction but as a framework for thinking about possible futures and the decisions that might lead to them. In a field where the pace of change often outstrips our ability to comprehend its implications, such structured thinking about scenarios and their consequences becomes invaluable.
As we navigate this uncertain terrain, the voices of researchers like Kokotajlo and Larsen offer both warning and guidance. Their work suggests that the window for shaping AI’s development trajectory remains open, but it may not stay that way for long. The choices we make today about transparency, safety research, and governance structures could echo through the decades to come, determining whether artificial intelligence fulfills its promise to benefit humanity or becomes something far more dangerous.
**Referenced Articles:**
– AI 2027 Co-Authors Map Out AI’s Spread of Outcomes on Unsupervised Learning: Redpoint’s AI Podcast
– Why the AI Race Ends in Disaster (with Daniel Kokotajlo) – Future of Life Institute
– About – AI Futures Project
– AI Futures Project
– AI 2027 Report PDF