If you’ve been anywhere near a dev team lately, you’ve probably heard the term “vibe coding.” It’s the practice of using AI tools (think Cursor, Copilot, Claude, or any number of agentic coding platforms) to generate code from natural language prompts rather than writing it line by line. It’s fast, it’s powerful, and it’s raising a question that CFOs, CTOs, and finance leads are increasingly landing in our inbox: Does AI-assisted development still qualify for R&D tax credits?
Short answer: Quite possibly yes, but there are real nuances worth understanding before you file.
What “vibe coding” actually means
The term was popularized by AI researcher Andrej Karpathy to describe a shift in how developers interact with code. Instead of writing each function manually, a developer describes what they want and an AI generates the implementation. Agentic coding tools go further, autonomously executing multi-step tasks across a codebase with minimal human input.
The appeal is obvious. Early data suggests AI-assisted development can accelerate specific coding tasks significantly. But there’s a flip side. Research has flagged real concerns around security vulnerabilities in AI-generated code, technical debt that can accumulate faster than teams can manage, and a risk that speed of output can outpace quality assurance. One 2026 meta-analysis found that AI-assisted development without structured review processes can accelerate technical debt at roughly three times the rate of traditional approaches. None of that makes AI tools bad, but it does make governance and human oversight important.
Which, as it happens, is also central to how tax agencies think about R&D credit eligibility.
What hasn’t changed: The qualifying criteria
In both Canada and the U.S., the core criteria for R&D tax credit eligibility haven’t fundamentally changed because of AI. What’s changed is how those criteria apply to a new kind of work.
In Canada, work qualifies for SR&ED if it involves scientific or technological advancement, there’s technological uncertainty that couldn’t be resolved through standard practice, and the work is conducted through systematic investigation or search. The CRA uses the same three-part test that has governed the program for decades.
As Boast’s Mat Rutishauser, SR&ED expert and Solutions Consultant, explains:
“Coding, whether done by a human or an AI, has always been considered support work in the context of SR&ED. The ‘core’ of SR&ED is rooted in the development of a new algorithm, advancing a system architecture, and designing a better model—the coding involved is simply the implementation of that SR&ED core, regardless of whether the coding was done by a human or an AI.”
It’s the same ethos in the U.S., where coding is similarly a means to an R&D outcome, not credit-worthy on its own unless it’s tied to genuine innovation. For instance, the Section 41 R&D tax credit is built on four tests: Permitted Purpose, Technological in Nature, Elimination of Uncertainty, and Process of Experimentation. The introduction of AI doesn’t change the goal, which is still to improve a product, process, technique, formula, invention, or computer software. The use of AI is a change in methodology, not a change in intent.
That’s the key framing: AI tools change how development happens. They don’t automatically disqualify the underlying research.
Where it gets interesting: The “human as lead investigator” question
The trickier question is around the nature of the human role when AI is doing a lot of the heavy lifting. Tax agencies on both sides of the border are starting to work through this.
In the U.S., KPMG analysts argue that even in AI-assisted environments, the developer, acting as a lead investigator, can use AI to rapidly generate multiple viable alternatives that must be systematically tested, validated, refined, and potentially discarded. The experimentation moves from the slow manual process of coding each alternative to the more complex and cognitive process of designing the tests, evaluating the AI’s output, and making critical design trade-offs. Under this framing, the process of experimentation isn’t diminished by AI — it’s shifted to a higher level of abstraction.
In Canada, the CRA has become more precise in how it evaluates AI and machine learning claims. AI/ML claims have jumped roughly 40% year over year since 2023; the CRA has noticed the surge, and its science advisors have sharpened their review criteria accordingly. The bar hasn’t changed in principle, but the scrutiny has increased. Prompt engineering (iterating on system prompts to improve response accuracy), for example, is generally not considered SR&ED-eligible on its own. But it can contribute to a qualifying claim when it’s part of a larger investigation into genuine technological uncertainty.
The practical implication: A team fully replacing human judgment with AI output (i.e., accepting generated code without systematic review, testing, and iteration) is in a weaker position than a team using AI as a tool within a structured, human-led development process.
As Boast’s Mat Rutishauser puts it, “a team doing work that can be completely handled by AI was never doing SR&ED in the first place. The flip-side is also true: a team doing SR&ED cannot be fully replaced by AI.”
Documentation is where claims succeed or fail
Both the IRS and the CRA are placing increasing weight on contemporaneous documentation: records captured during the work, not reconstructed at filing time. For AI-assisted development, this creates a new documentation challenge and, arguably, a new documentation opportunity.
Traditional metrics like lines of code are becoming less indicative of true research activity. The focus must shift to capturing the process of experimentation and the cognitive effort of the developer. That means documenting architectural decisions, including which AI-generated alternatives were considered and why they were accepted or rejected. It means treating debugging logs as a record of experimental failure and resolution. It means capturing the technical uncertainty you were trying to resolve, not just the output you shipped.
For teams leaning into agentic tools, this is more than a compliance exercise. Good documentation practices also protect you from the technical debt risks that pure vibe coding can introduce.
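To make the idea concrete, here is a minimal sketch (not official CRA or IRS guidance) of what capturing contemporaneous experiment records could look like in practice. The function name, log format, and field names are all hypothetical; the point is simply that each entry records the uncertainty being probed, the AI-generated alternatives considered, and why each was accepted or rejected, at the time the work happens.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def log_experiment(log_path, uncertainty, alternatives, outcome):
    """Append a contemporaneous experiment record to a JSONL log.

    Each entry captures the technological uncertainty being investigated,
    the AI-generated alternatives considered (with the keep/reject decision
    and the reason), and the observed outcome.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "uncertainty": uncertainty,
        # Each alternative: {"approach": ..., "decision": ..., "reason": ...}
        "alternatives": alternatives,
        "outcome": outcome,
    }
    with Path(log_path).open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Hypothetical example: recording why one AI-generated design was rejected.
entry = log_experiment(
    "rd_log.jsonl",
    uncertainty="Can we keep p99 latency under 50ms with a write-heavy cache?",
    alternatives=[
        {"approach": "AI-suggested LRU cache", "decision": "rejected",
         "reason": "thrashed under write bursts in load tests"},
        {"approach": "write-behind queue + TTL cache", "decision": "accepted",
         "reason": "held p99 at 42ms in the same tests"},
    ],
    outcome="Adopted write-behind design; archived load-test results.",
)
```

However a team chooses to implement it, the design goal is the same: the record is written as the experiment happens, so the trail of considered-and-rejected alternatives exists before filing time, not after.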
Boast’s take
AI-assisted development is a reality, and it’s not going away. We believe the R&D tax credit programs in both Canada and the U.S. can accommodate it when the underlying work genuinely involves technological uncertainty, systematic investigation, and a human team that’s directing the process, not just prompting and accepting output.
What that means for your team:
- The tool doesn’t disqualify the claim. The process determines eligibility.
- Human ideation, stepping in where LLMs fail, and iteratively improving on what LLMs can normally do are the activities that make a claim defensible.
- Documentation practices need to evolve alongside your development practices.
This is a fast-moving area. The legislation hasn’t caught up to the tooling, and guidance from both the CRA and IRS continues to develop. If your team is using AI tools in development and you’re not sure how it affects your claim, the right time to talk to a specialist is before you file — not after an audit notice arrives.
Boast has helped more than 2,000 companies across Canada and the U.S. navigate R&D tax credits, securing more than $900M in claims across North America. If you’re working through what AI-assisted development means for your claim strategy, we’re happy to help you think it through.