A new push from the FDA for more artificial intelligence-powered devices could leave device makers liable for changes the devices ‘learn’ after their initial approval by the federal watchdog, according to a Bloomberg report.
AI-powered devices ingest new data and adjust accordingly – a capability that could make them incredibly valuable in medtech, but one that could also leave makers with products entirely different from the ones that were initially approved, according to the report.
A significant change in a product could cost makers the immunity from liability they gained through FDA approval, especially if the device later proves defective, according to the Bloomberg report. Such a shift would expose makers to product liability suits that would normally be preempted.
“There’s a gray area as to what would be preempted. Once you get out of those lanes, then you open yourself up to state tort liability,” Lawrence Centola, a products liability attorney and principal at New Orleans-based law firm Martzell Bickford & Centola, told Bloomberg.
Currently, federal law generally preempts personal-injury suits against medtech makers whose devices have passed the FDA’s stringent premarket approval pathway, according to the report. The agency requires a new review for any major change to a product, meaning devices whose algorithms and machine learning capabilities alter their functionality over time would face a constant need for re-approval.
But the agency is shifting its regulatory oversight to consider the full life-cycle of products, and last month released a discussion paper seeking comment on a framework that would apply that approach to AI-based products.
It’s unclear whether AI-based devices would be similar enough to existing devices to operate under the FDA’s 510(k) pathway, in which products are cleared based on their similarity to previously cleared devices, Centola told Bloomberg. It’s even more difficult to estimate the point at which a machine learning algorithm becomes a new product requiring another approval, he added.
The question becomes even more difficult if changes fueled by machine-learning capabilities push a device’s effective scope beyond its initial approval, John Sullivan, a product liability defense lawyer at New York-based law firm Cozen O’Connor, told Bloomberg.
So far, the FDA has only cleared devices with “locked” algorithms, according to the report, but its recently proposed framework would clear the way for devices and products that learn from user data and adapt.
Under the new guidelines, device makers would have to maintain “good machine learning practices” throughout a product’s life-cycle, including providing algorithmic transparency and ensuring that the data the devices acquire conforms with a product’s intended use, Bloomberg reports.
Makers of AI-powered products would be able to submit a modification plan during their initial premarket review, laying out expected changes to a product’s performance, data inputs or intended use, according to the report.
The guidelines would also require manufacturers to commit to collecting and monitoring actual performance data, Bloomberg said, and to continuously tracking how the products are being used and how they could be improved.