This week, officials from the United States and United Kingdom announced a landmark agreement to formally collaborate on testing and evaluating the risks of artificial intelligence (AI).

Given that this announcement landed on April 1, you might have wondered if it was just another April Fool’s headline.

But with AI growing rapidly across every sector and industry—and with AI companies booming in both the U.S. and U.K.—it’s actually surprising this kind of agreement wasn’t formalized sooner.

Signed by U.S. Commerce Secretary Gina Raimondo and U.K. Science Minister Michelle Donelan, this agreement sets the stage for both governments to combine their expertise and technical talent to put safeguards in place for this fast-moving technology.

"The U.K. and the United States have always recognized that ensuring the safe development of AI is a shared global responsibility," said Secretary Raimondo in a press release. "Today’s announcement highlights the importance of ongoing international collaboration, with both countries now sharing critical information about AI models’ capabilities and risks, as well as foundational technical research on AI safety and security."

This announcement builds on the creation of AI Safety Institutes (AISIs) in both the U.S. and U.K. last November. The agreement will include secondments of researchers—temporary placements in each other’s institutes—and an exchange of data from private sector participants. For example, private AI models from companies like OpenAI and Google, along with published safety reports from Anthropic and others, will now be open to review by the new AISIs.

While this partnership is a new step for AI, it’s modeled after long-standing collaborations between the U.S. National Security Agency (NSA) and the U.K.’s Government Communications Headquarters (GCHQ), which have worked closely together on national and global security for decades.

So the big question is: What does this mean for private businesses—or even for companies outside the U.S. and U.K. that work with AI?


Building a “common approach” to AI safety testing

The safety tests developed by the U.S. and U.K. through their AISI partnership will inevitably have a global impact, since many of the world’s leading AI companies were founded or are headquartered in the U.S. and have since gained worldwide reach.

That doesn’t mean these are the only major economies working to put safeguards around emerging AI.

Last year, both the European Union’s AI Act and President Joe Biden’s executive order on AI pushed businesses to disclose the results of safety tests. Canada has also drafted its own guidelines for the responsible use of AI in government, paving the way for private sector AI research.

Canada also beat both the U.S. and U.K. to the punch by finalizing AI data protection rules back in September 2023—not to mention that Canada now has the third-largest pool of AI researchers and investments in new AI companies worldwide.

In fact, recent research from EDUCanada shows that over 35,000 new jobs in AI and machine learning will be created in the next five years, with major Canadian cities—Toronto, Vancouver, Montreal, and Ottawa—all ranking among CBRE’s top talent markets in North America.

All this means that businesses on both sides of the border will be affected by the work of the U.S. and U.K. AISIs—and they should see these “safeguards” as a chance to drive real innovation.

Using R&D to build safer AI (and a stronger capital strategy)

While the current lack of AI regulation can be daunting, it’s also an opportunity for new businesses to stake their claim in the emerging AI safety market—as more governments join forces to ensure safe AI deployment.

Along the same lines, governments will keep prioritizing innovation funding programs like R&D tax credits and research grants for businesses in fields where innovation isn’t just an opportunity—it’s a necessity, as we’re seeing with AI.

If you’re an AI business operating in the U.S. or Canada, there are plenty of non-dilutive funding options to help you cover the costs of R&D that’s driving breakthrough innovation—while carving out your own space in this fast-growing market.

Even though more than $20 billion in R&D tax credits is available in North America today, only about 5 percent of eligible businesses (roughly 1 in 20) actually take advantage of this resource.

A partner for financing innovative R&D

At Boast, our tech industry experts are among the best in North America. We combine deep knowledge of government tax codes and technology to truly speak your business’s language of innovation.

This makes it easy to communicate the unique value your R&D teams deliver every day, and it dramatically streamlines the process of preparing a compelling R&D tax credit or grant claim—compared to doing it in-house or even working with a traditional accounting firm.

The results? Teams that work with Boast save an average of 60 hours, and our experts deliver claims that are 35 percent more accurate on average.

Want to see how our team can help you get more from your R&D investments? Talk to an expert today.

U.S. and U.K. AI Safety Regulation FAQ

  1. What did the U.S. and U.K. announce about AI safety? The two countries announced a landmark agreement to formally collaborate on testing and evaluating the risks of artificial intelligence systems through newly established AI Safety Institutes (AISIs).
  2. How will the partnership work? The AISIs will facilitate the exchange of technical researchers, data from private AI models, and research reports between the U.S. and U.K. This collaboration aims to develop common approaches to evaluating AI safety.
  3. Why does this partnership matter for businesses? Since leading AI companies are based in the U.S., the safety standards and testing methods developed through this partnership will likely have a global impact and shape AI regulations in other countries as well.
  4. How can AI businesses benefit from this? Instead of seeing increased safety scrutiny as a barrier, AI companies can use this as an opportunity to lead innovation in AI safety and build expertise in this emerging field through focused R&D.
  5. How can R&D funding support AI safety innovation? Government innovation funding programs like R&D tax credits and research grants are expected to prioritize AI safety as a key area for technological advancement. Companies can use these non-dilutive funds to finance their AI safety R&D.
