AI-driven health policy is denying care to some of Kenya's poorest citizens.
A flawed AI system is increasing medical expenses for Kenya's most vulnerable families
Kenya had promised broader access to more affordable healthcare. However, a recent investigation has shown that its algorithm-based system is complicating life for those it was meant to assist the most. As reported by The Guardian, Africa Uncensored, and Lighthouse Reports, the new Social Health Authority (SHA) system in Kenya employs a predictive machine-learning algorithm to assess how much individuals should pay for public health insurance.
This system was introduced in October 2024 as part of President William Ruto’s commitment to enhance healthcare accessibility for Kenya's extensive informal workforce.
How the algorithm is causing harm to Kenyans
A nurse cares for a young patient while a guardian observes in a hospital ward.
The core problem is how the system decides what people can afford. Kenya's SHA uses proxy means testing, a method that estimates income from household characteristics such as roofing materials, sanitation facilities, livestock, family size, and other living conditions. The investigation found that the system has been overestimating the incomes of poorer families while underestimating those of wealthier ones.
One SHA volunteer mentioned visiting homes in Nairobi and witnessing individuals already struggling to secure food being charged premiums far beyond their financial capabilities. Some individuals faced costs amounting to 10% to 20% of their modest incomes.
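Proxy means testing of the kind described above can be sketched as a simple linear scoring model: observable household traits stand in for income, and the premium is set as a share of the estimate. Every weight, trait name, base value, and premium rate below is a hypothetical illustration, not SHA's actual model or parameters; the sketch only shows why misweighted proxies produce premiums that outstrip real earnings.

```python
# Illustrative sketch of proxy means testing (PMT). All weights, traits,
# and the premium rule are hypothetical, not SHA's actual model.

# Hypothetical proxy weights: each observable trait adds to the
# estimated monthly income, in Kenyan shillings (KES).
PROXY_WEIGHTS = {
    "metal_roof": 2500,
    "flush_toilet": 3000,
    "owns_livestock": 1500,
    "electricity": 2000,
}
BASE_ESTIMATE = 4000           # hypothetical floor for any household
PER_MEMBER_ADJUSTMENT = -300   # larger families score as poorer

def estimate_income(household: dict) -> int:
    """Estimate monthly income (KES) from observable proxies."""
    score = BASE_ESTIMATE + PER_MEMBER_ADJUSTMENT * household.get("members", 1)
    for trait, weight in PROXY_WEIGHTS.items():
        if household.get(trait):
            score += weight
    return max(score, 0)

def monthly_premium(household: dict, rate: float = 0.0275) -> float:
    """Premium as a flat share of the *estimated* income (rate is hypothetical)."""
    return round(estimate_income(household) * rate, 2)

# The failure mode the investigation describes: the proxies misfire.
# A poor family renting a room under a metal roof with shared electricity
# is scored as wealthier than it is, so its premium is computed against
# an income it does not actually earn.
family = {"members": 5, "metal_roof": True, "electricity": True}
estimated = estimate_income(family)   # proxy-based estimate
premium = monthly_premium(family)     # charged against the estimate
actual_income = 4000                  # what the family really earns
burden = premium / actual_income      # share of real income consumed
```

Because the premium scales with the estimate rather than with actual earnings, any upward bias in the proxies translates directly into a heavier burden on the households the proxies misjudge.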
When a financial barrier prevents treatment
As Kenya implemented its new healthcare framework, @AfUncensored and @LHreports sought to confirm that the process of setting insurance premiums would be equitable. Instead, they discovered an algorithm that imposed excessive charges on the least able while undercharging the wealthiest. … — John-Allan Namu (@johnallannamu), May 4, 2026
The repercussions are severe. Kenyans without private insurance who cannot afford their SHA premiums risk being turned away from healthcare facilities or receiving exorbitant hospital bills. The report documents cases of seriously ill people missing treatment because the system said they owed more than they could pay. One single mother's monthly contribution was set at 3,500 Kenyan shillings, and others saw substantial increases over what they paid under the old system. In effect, the new policy is costing lives.
Although Ruto labeled the system as AI-driven, the report notes that it does not use generative AI akin to ChatGPT. Instead, it applies predictive machine learning built on a policy tool that has been criticized for years for misidentifying who qualifies for assistance. Many described the approach as flawed and inequitable even before it was implemented.
Over 20 million people are registered with the SHA, but only around 5 million consistently pay their premiums, and hospitals are reporting significant deficits from unpaid reimbursements. The episode illustrates the risks of algorithmic welfare systems.
Vikhyaat Vivek is a technology journalist and reviewer with seven years of expertise in consumer hardware reporting.