In a move to clarify the role of artificial intelligence in health insurance claims, the US government issued a memo on February 6th stating that AI cannot serve as the sole basis for denying claims. Machine-learning algorithms may assist in coverage determinations, but they cannot make the final decision on their own.
The memo follows lawsuits filed against several health insurers, including UnitedHealthcare and Humana, accusing them of using AI to wrongly deny coverage. Plaintiffs alleged that the AI model nH Predict had a 90% error rate, drawing increased attention to a dangerous aspect of the technology.
The Centers for Medicare & Medicaid Services (CMS) expressed concern that algorithms could exacerbate discrimination and bias, and the agency urged insurers to ensure their models comply with anti-discrimination requirements. Several states, including New York and California, have likewise warned insurance companies to verify the fairness of their algorithms.
Consider the case of one patient, John Doe, who broke his arm in a fall and required rehabilitation treatment. Although his insurance should have covered the cost, his claim was denied without explanation, leaving him to wonder whether a person or an AI had made the decision.
This incident highlights the importance of transparency and accountability in health insurance claims processes. Patients should have confidence that their claims are being evaluated fairly and without bias.
In conclusion, while AI can help inform determinations on health insurance claims, it should not be the sole basis for denying coverage. Insurers and regulators must ensure these technologies are used ethically and transparently, so that algorithmic bias or discrimination does not cause unintended harm.