
The April Gambit: Why Trump’s Beijing Visit Could Decide Whether AI Becomes a Weapon or a Tool

Trump’s planned April 2026 visit to Beijing is not just another high‑stakes summit between the world’s two most powerful leaders. It is emerging as a turning point that will help determine whether artificial intelligence (AI) becomes primarily a weapon of strategic competition or a tool embedded in shared safety norms and crisis protocols.

For all the talk of “winning” the AI race, both Washington and Beijing now face a hard strategic reality: neither can manage catastrophic AI risks alone. The diplomatic calendar for 2026—multiple Trump–Xi meetings, China chairing APEC, and ongoing UN‑linked AI governance efforts—creates an unusual window in which AI safety, military risk reduction, and crisis communications could be hard‑wired into the relationship. Whether that happens will depend largely on how both sides use, or waste, the April gambit.

A rare opening in a hardening rivalry

The context for April 2026 is a deeply competitive—but not yet irreversibly hostile—AI relationship.

– Trump and Xi have agreed in principle to formal AI talks, with multiple interactions expected through 2026, including Trump’s April trip to Beijing and additional encounters at the G20 and APEC.
– At the 2025 APEC leaders’ meeting, Xi signaled “good prospects for cooperation” on AI and other issues and ensured that AI will be a core agenda item at the 2026 APEC summit in Shenzhen, which China will chair.
– The APEC meeting also produced the APEC AI Initiative (2026–2030), focused mainly on accelerating AI development, with only secondary references to “security, accessibility, trustworthiness, and reliability.”

At the same time, China is methodically positioning itself as the architect of a more multilateral AI order:

– Beijing has proposed a World AI Cooperation Organization, pitched as a venue for aligning development strategies, governance rules, and technical standards.
– Chinese diplomats at the UN Security Council have urged a global consensus on “peaceful, safe, and controllable AI,” emphasizing human control, fairness, and inclusiveness, and opposing both arms races in lethal autonomous weapons and terrorist misuse of AI.
– China supports the UN’s Global Dialogue on AI Governance and a new Independent International Scientific Panel on AI, explicitly situating the UN at the center of global AI rulemaking.

By contrast, Washington under Trump has embraced a much more unilateral, race‑oriented posture:

– Trump’s AI orders emphasize U.S. dominance, deregulation, and removal of barriers to innovation, casting AI governance primarily as a constraint on American power.
– At the UN, senior U.S. officials have rejected centralized global AI governance, signaling skepticism toward multilateral institutions that Beijing favors.
– Domestically, Trump has moved to preempt state‑level AI rules, empowering the Justice Department to sue states with “onerous AI laws” and pushing a single, minimal federal standard designed to help the U.S. “win the AI race.”

Into this mix enters April 2026: a bilateral meeting where both leaders arrive with sharply different philosophies but a shared, if reluctant, recognition that runaway military AI and escalatory crises threaten them both.

The nuclear precedent: proof that military AI risk can be compartmentalized

The clearest proof that the U.S. and China can negotiate guardrails in an era of rivalry came in late 2024, when Biden and Xi agreed that only humans could authorize nuclear strikes.

That agreement did three important things for today’s AI debates:

1. Reaffirmed human control over the most destructive weapons, explicitly constraining the role of AI in nuclear decision‑making.
2. Created a template for risk‑reduction protocols: narrow, concrete, and focused on catastrophic scenarios rather than broad technology controls.
3. Demonstrated that hard security norms can be insulated from broader competition, even as both sides continue jockeying economically and technologically.

For experts urging Trump and Xi to prioritize AI safety, the lesson is straightforward: military AI risk can be addressed through targeted bilateral commitments without requiring a full thaw in great‑power competition.

The April summit offers a chance to apply the nuclear‑authorization logic to three domains where AI raises acute escalation risks:

– lethal autonomous weapons systems (LAWS)
– AI‑enabled command, control, and early warning
– cross‑border crisis communications involving algorithmic systems

Divergent AI philosophies—and a narrow lane for cooperation

The paradox of the current moment is that both sides acknowledge the dangers of uncontrolled AI, but they do so through fundamentally different mental models.

China: AI as a regulated development tool

Chinese leaders and national security thinkers increasingly describe AI as a developmental asset that demands active regulation and risk supervision:

– Xi’s speeches stress that AI should evolve in a “beneficial, safe, and fair direction” and should “benefit people of all countries and regions.”
– At the UNSC, China’s envoy called for “peaceful, safe, and controllable AI” kept under human control at all times, explicitly warning against AI arms races in lethal autonomous weapons and terrorist use of AI.
– Influential Chinese security scholars, including at top think tanks like CICIR, warn about loss of control and strategic misjudgment from advanced AI systems and argue that balancing security and development is a “shared global challenge.”
– Domestically, Beijing has rolled out more aggressive AI regulation than the U.S., including updated cybersecurity‑law provisions on AI and rules governing public‑facing systems such as chatbots.

United States under Trump: AI as a race to be won

The Trump administration has moved in the opposite direction, viewing AI regulation primarily as an obstacle to competitive advantage:

– Trump revoked his predecessor’s more cautious AI governance order and reframed AI as essential to “United States national and economic security and dominance across many domains.”
– His AI executive orders emphasize removing regulatory barriers, accelerating data‑center buildout, and preventing states from passing restrictive AI laws that could slow innovation.
– U.S. statements at the UN have “totally rejected” international bodies asserting centralized control or global governance of AI, clashing directly with China’s UN‑centric proposals.

These are not superficial differences; they are strategic worldviews. China sells AI governance as a public good and a pillar of its responsible rise; Trump sells light‑touch governance as the price of winning.

Yet within that clash lies an opportunity: both models can tolerate narrow, compartmentalized cooperation on extreme risks that neither side wants to face alone.

The quiet infrastructure for cooperation: scientists and diplomats

Below the political theater, a quieter ecosystem for U.S.–China AI risk reduction is already taking shape.

– The International Dialogues on AI Safety bring together top AI scientists from both countries to discuss “extreme AI risks,” with sessions already hosted in Beijing and Shanghai.
– These dialogues focus on issues like loss of control, misaligned advanced systems, and runaway deployment, and they are explicitly framed as technical and pre‑political—designed to feed into, but not replace, diplomatic processes.
– Chinese national security think tanks openly acknowledge the danger that great‑power tech rivalry could lead to “strategic instability” and miscalculation, reinforcing the case for at least some shared guardrails.

From the U.S. side, technical and policy experts outside government have similarly argued that elite‑level scientific engagement is a necessary complement to, not a substitute for, formal protocols. Their message to the Trump team is that ignoring these channels does not reduce risk; it simply blinds Washington to Beijing’s evolving thinking on AI safety and crisis behavior.

April’s summit could be the point where this informal infrastructure is translated into explicit political mandates: instructions to defense ministries, foreign ministries, and standards bodies to convert broad concerns into concrete agreements.

What a “smart” Trump–Xi AI agenda would look like

Experts urging a “smart security agenda” for April 2026 are not calling for sweeping, utopian bans or fully harmonized AI regulation. They are focused on narrow, high‑impact measures that reduce shared risk without eroding U.S. leverage or China’s sense of technological sovereignty.

Four priorities stand out.

1. Red‑line lethal autonomous weapons and mandate human control

China has already signaled discomfort with an unconstrained AI arms race in lethal autonomous weapons and has called for major powers to prevent it. The U.S. retains concerns about verifiability and asymmetric constraints but shares an interest in avoiding systems that can trigger war faster than humans can respond.

The April summit could:

– Reaffirm, in bilateral language, that AI must remain under “meaningful human control” for all uses of lethal force, not only in nuclear decision‑making.
– Establish a joint working group on LAWS norms, potentially using the UN’s existing arms‑control forums as a multilateral extension point.
– Encourage voluntary transparency measures—such as shared principles on testing, fail‑safes, and override mechanisms—without requiring intrusive inspections.

The political logic for Trump is clear: such a move can be sold domestically not as “arms control” but as ensuring America’s enemies cannot hide behind autonomous systems to start a war by accident.

2. Build AI‑aware crisis communication protocols

Existing crisis hotlines between Washington and Beijing were built for human‑driven incidents—fighter jet collisions, naval standoffs, accidental radar locks. They are not tailored to algorithmic misinterpretation, autonomous system failures, or AI‑generated misinformation that could emerge in a conflict.

A pragmatic April outcome would be:

– A commitment to AI‑specific crisis protocols, including 24/7 contact points empowered to discuss incidents involving autonomous or AI‑enabled systems.
– Agreement on rapid information‑sharing in narrowly defined scenarios, such as suspected AI malfunction in early‑warning systems, to prevent misreading a software error as a deliberate escalation.
– Support for table‑top exercises involving both militaries and technical advisors, simulating AI‑related crises to stress‑test the protocols.

This builds directly on the 2024 nuclear human‑authorization precedent by adding an AI layer to existing deconfliction tools.

3. Coordinate against non‑state catastrophic misuse

Both sides publicly worry about terrorist and criminal misuse of AI, from automated cyberattacks to AI‑assisted bioweapons design. Unlike great‑power competition, this is a domain where interests are genuinely aligned.

Here, Trump and Xi could:

– Announce a joint initiative against catastrophic AI misuse, focused strictly on non‑state actors and extreme threat models (e.g., mass‑casualty biological or cyberattacks).
– Task law‑enforcement and intelligence agencies with developing common red‑flag indicators for dangerous AI activity on major platforms and model‑hosting services.
– Encourage information‑sharing via existing counter‑terrorism and cybercrime channels, rather than building entirely new structures.

For Trump, this maps neatly onto a domestic law‑and‑order narrative; for Xi, it reinforces China’s message that AI must remain “peaceful, safe, and controllable.”

4. Align on a minimal baseline for frontier model safety

China’s domestic AI regime already goes further than U.S. federal law in requiring safety assessments, content controls, and ethical review for many public‑facing systems. The U.S., meanwhile, is experimenting with voluntary standards via bodies like NIST and with international AI safety summits among allies.

A realistic bilateral step would be:

– Agreement in principle that frontier‑scale models (those capable of enabling catastrophic misuse or unpredictable behavior) should be subject to pre‑deployment safety testing and post‑deployment monitoring.
– A commitment to share non‑sensitive technical best practices from the International Dialogues on AI Safety with relevant regulatory and standards bodies in both countries.
– Parallel, not identical, national frameworks that converge on a minimal floor of safety testing without harmonizing every regulatory detail.

This approach preserves Trump’s insistence on U.S. sovereignty over domestic rules while quietly narrowing the gap on the most dangerous systems.

China’s multilateral AI gambit—and the U.S. risk of strategic absence

While Washington leans away from global governance, Beijing is steadily occupying the vacuum.

– Xi’s repeated promotion of a World AI Cooperation Organization and China’s advocacy for the UN as the primary hub of AI governance signal a long game: position China as the rule‑maker, not the rule‑taker.
– Chairing APEC in 2026, China can set agendas that foreground AI development narratives aligned with Chinese standards and language, drawing in developing economies that feel sidelined by transatlantic frameworks.
– As the U.S. pulls back from multilateral AI mechanisms and focuses on domestic deregulation and allied “minilaterals,” there is a real prospect that global technical and ethical baselines will tilt toward Chinese preferences by default.

From a narrow competitive standpoint, this might look tolerable to a Trump White House focused on short‑term economic leverage. From a systemic risk standpoint, it is more troubling: fragmented standards increase the odds of unsafe deployment, regulatory arbitrage, and misaligned incentives for frontier developers.

The strategic bet behind expert calls for April‑focused cooperation is not that Trump will suddenly embrace multilateral AI governance. It is that limited bilateral safety arrangements can coexist with Washington’s skepticism toward global rule‑making, while still anchoring the most dangerous areas of AI in some shared discipline.

The Xi calendar: managing Trump’s unpredictability

One underappreciated dimension of 2026 is how carefully Beijing appears to be structuring Xi’s interactions with Trump:

– The first in‑person meeting of Trump’s second term occurred on the sidelines of APEC in late 2025, offering a low‑stakes environment for Xi to test Trump’s red lines on AI cooperation.
– The April 2026 Beijing visit gives China home‑court advantage and more control over optics, agenda‑setting, and sequencing of announcements.
– Additional encounters at the G20 and the 2026 APEC summit in Shenzhen create a rolling series of touchpoints where commitments floated in April can be refined, expanded, or quietly sidelined depending on domestic and geopolitical reactions.

Trump’s unpredictability, often seen as a risk factor, also presents an unusual opportunity. A leader who dismisses traditional arms‑control logic may be more willing to endorse novel, narrowly tailored AI safety steps if they can be framed as:

– strengthening U.S. dominance (by preventing adversaries from exploiting dangerous systems),
– protecting American troops from “out‑of‑control machines,” or
– putting the U.S. “in charge” of how AI is used in war and peace.

Xi’s apparatus, by contrast, is disciplined and oriented toward the long term. Chinese officials can stage‑manage proposals across multiple meetings, gradually socializing ideas with Trump and his advisers and aligning them with Beijing’s narrative of “people‑centered” and “controllable” AI development.

The risk is that both sides misjudge the moment: Beijing might overplay its hand and push for symbolic multilateral wins that Washington will not accept, while Washington might reduce AI talks to a bargaining chip in unrelated trade or financial disputes. The reward, if calibrated correctly, is a set of actionable AI risk‑reduction measures that neither side would have proposed unilaterally, but both find themselves willing to sign onto under the pressure of a high‑profile summit.

Weapon or tool: what is really at stake in April

The core question hovering over Trump’s Beijing visit is not whether AI will be used for military purposes—it already is, and it will be. The real question is whether the most destabilizing uses of AI can be fenced off through a mix of:

– hard red lines (no AI‑authorized nuclear launches, strong human control over lethal decisions),
– AI‑aware crisis tools (hotlines, notification protocols, and exercises tailored to algorithmic risks),
– joint action against catastrophic misuse by non‑state actors, and
– minimum safety baselines for frontier models, even amid divergent broader regulation.

If April produces concrete progress on even two of these fronts, it will mark the beginning of AI as a tool constrained by shared survival logic, not just a weapon in an unconstrained race. If it does not, 2026 risks becoming the year AI fully merges with great‑power rivalry—faster, more opaque, and more error‑prone than any previous generation of strategic technology.

The diplomatic machinery, expert networks, and precedents are already in place. What is missing, for now, is a political decision from both Trump and Xi to treat AI safety and military risk reduction as central, not peripheral, to their agenda.

That decision may well be made, or deferred, in Beijing this April.