Is Anthropic building Rwanda’s AI future — or its dependence?
Experts say the partnership could expand AI access in health and education, but warn it may deepen vendor lock-in, data risks, and reliance on foreign tech.
By Eden Harris // 02 March 2026

Anthropic’s new memorandum of understanding with Rwanda to deploy artificial intelligence tools in health and education has drawn both praise and skepticism, highlighting a deeper question facing many African governments: Will partnerships with foreign AI firms build domestic capacity, or deepen reliance on foreign technology stacks?

The three-year partnership aims to match “Rwanda’s needs and priorities” by providing AI tools, developer access, and training to public servants and health systems, but the details of the agreement have not been made public.

“This entire announcement actually is about adoption, really, rather than building local capacity,” Ayantola Alayande, a researcher at the Global Center on AI Governance, told Devex. “It’s basically a nice way to put market capture.”

Alayande believes that for an African nation to avoid dependence on the rest of the world, “companies need to actually be invested in infrastructure, which is what a lot of the big techs don’t do.”

Capacity or capture

Although the agreement promises skill development for Rwandans and investment in infrastructure, Alayande argues that countries should be able to build domestic competencies and resources in ways that serve them. For him, that means investing in local computing capacity and in independent AI systems that governments can operate and control themselves, rather than relying entirely on a foreign company’s proprietary platform. Training public-sector coders on a single company’s tools, he warns, risks creating long-term vendor lock-in.

“The partnership says very clearly that Rwanda’s public sector coders are going to be trained on Claude AI, which is basically Anthropic’s own tool. I mean, it’s a good thing. It gives them some sort of skills. But again, this is a kind of vendor locking,” Alayande added.
Paula Ingabire, Rwanda’s minister of information and communications technology and innovation, described the partnership in a news release as a significant step forward in the country’s AI development. “Our goal is to continue to design and deploy AI solutions that can be applied at a national level to strengthen education, advance health outcomes, and enhance governance with an emphasis on our context,” she said.

Rwanda’s AI policy requires organizations processing personal or sensitive data to, among other things, be “regularly verifying the effective implementation of safety and security measures.”

Anupam Chander, the Scott K. Ginsburg Professor of Law at Georgetown University Law Center, said access to frontier AI systems could help African countries avoid falling further behind. “I see this as a positive step in building capacity in Africa for the new AI age, including access to some of the latest AI models for educators in Rwanda,” he said. “These tools could be especially meaningful in expanding access to health and education.”

Data misuse fears

But the deal raises questions about data protection and regulatory oversight. Chander cautioned Rwanda to ensure Anthropic doesn’t misuse its data. His concern is not hypothetical: Foreign tech companies operating in Africa have faced backlash over allegations of extracting and mishandling user data.

In 2023, Kenyan authorities accused the American company World, formerly known as Worldcoin, of misusing citizens’ data, describing the project, founded by OpenAI CEO Sam Altman, as putting “massive citizen data in the hands of private actors without an appropriate framework.” The accusation came after World offered citizens $49 in cryptocurrency in exchange for a scan of their eyeballs. The company denied the claims, but Kenya expelled it anyway.
Mutale Nkonde, a visiting policy fellow at the Oxford Internet Institute, which focuses on AI ethics and policy, among other areas, raised concerns after Mrinank Sharma, a former AI safety researcher at Anthropic, warned in his February resignation announcement on the social media platform X that “The world is in peril” from AI and bioweapons, among other threats.

“Anthropic kind of markets itself as an ethical version of OpenAI, which is why it was so shocking for those of us in the industry to hear those calls from inside the house, [so those ethical claims are] marketing. That’s not reality,” she told Devex.

Devex reached out to Anthropic for comment but did not receive a response before publication.

Nkonde, who has worked with the U.S. Congress on the Algorithmic Accountability Act to regulate the use of AI systems, said Europe is pulling away from American technology “because they view it as not just a national security risk, given the instability of the current Trump administration, but they do not believe that American values around technology can be compliant with their local laws.”

“And so as we’re watching Europe become more sovereign and move away from American technology, this deal with Rwanda looks even more ominous,” she said.

She pointed to a European Union investigation into Elon Musk’s Grok AI chatbot over accusations that it released unauthorized deepfake images. “They are saying in those prosecutions, you are breaking the Digital Services Act, [which] is not in line with our values; we are going to hold you accountable,” she said.

While Rwandan President Paul Kagame is regarded as a no-nonsense leader, Nkonde worries that the nation may not be able to defend itself against a foreign tech giant as successfully as a Western country could. Nigeria’s recent court victory, however, may offer a case study of an African nation standing up to a Western tech firm, according to Business Insider Africa.
Nigeria fended off a lawsuit from European Dynamics UK Ltd., which claimed the country had stalled its digital infrastructure project. Nigeria argued that the contract was performance-based “and only deemed delivered after a successful User Acceptance Test confirms compliance with technical specifications and statutory workflows.”

Rwanda is no stranger to Western partnerships: It will be the first country to participate in a new Gates Foundation and OpenAI health care initiative. The $50 million program, Horizon1000, aims to improve health care across Africa.

Nkonde believes Rwanda can capitalize on its partnership with Anthropic by “training Rwandan top tech talent, like they’re developing an MIT-type environment, and if they’re really bolstering that labor market and that knowledge base, then yes, that could be really positive.” She argues, though, that the Chinese company DeepSeek would make a better partner because its models cost less, perform well, and carry a smaller environmental footprint.

Alayande said he would like to see Rwanda move toward building its own systems, much as DeepSeek did in China. “It would be great, for example, for me to see Rwanda at some point being able to build or replicate a DeepSeek [AI] kind of move, or being able to build its own domestic chips, which is a far-fetched goal. The fact that it’s not easy to do doesn’t mean it’s impossible, right?” Alayande said.
Eden Harris is a freelance journalist based in Washington, D.C., covering African tech and finance as well as U.S. banking. She is known for going beyond the typical conflict-ridden, nuance-lacking headlines about Africa, and has interviewed African presidents, CEOs, and U.S. members of Congress. Before developing her Africa beat, she covered the White House, Capitol Hill, and the Supreme Court full-time at Spectrum News as a national politics digital producer. Her bylines have appeared in NBC News, Semafor Africa, Al Jazeera, The Financial Times’ The Banker, and more.