The Future of AI & Healthcare: Providers and Patients Deserve Seats at the Discussions

August 16th, 2024

In February 2024, Governor Maura Healey signed an Executive Order to establish the Artificial Intelligence (AI) Strategic Task Force in Massachusetts. Governor Healey is seeking $100 million in funding, hoping to make Massachusetts “a global leader in Applied AI.” The Task Force will identify potential industry stakeholders and provide recommendations on how Massachusetts can encourage state businesses to adopt AI technologies.

Massachusetts is not the only state seeking answers on how best to encourage AI adoption by local businesses. On a national level, the Biden Administration issued an “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” Additionally, the U.S. Department of Health and Human Services (HHS) issued its own Artificial Intelligence Strategy, focusing on creating rules and regulatory action plans. Task forces are a noble first step; however, it is imperative that the right stakeholders are given a seat at the table. Without the necessary safeguards against increased liability and bias, providers and patients face potential consequences from AI’s adoption. A 2023 survey conducted by the American Medical Association (AMA) showed that 41% of physicians are equally excited and concerned about the potential for AI in healthcare.

How will these task forces amplify providers’ excitement while easing their concerns? A simple way is to give both providers and patients seats at the discussions, ensuring that the two stakeholders with the most to gain and the most to lose from AI adoption have a say.

AI Uses in Healthcare

Healthcare providers are excited about AI’s potential uses in their industry. Artificial intelligence in healthcare is an umbrella term for machine learning (ML) algorithms and other technologies used in medical settings. The two main types of AI are predictive AI and generative AI. Predictive AI performs data analysis to predict outcomes and decide treatment courses, whereas generative AI creates original content (e.g., ChatGPT). Machine learning works by feeding data into training models so that the machine can “learn.” In healthcare settings, machine learning often serves as a predictive model, determining which treatment course will work best for a patient with specific symptoms. Generative AI can be used to provide patient education on medical terminology, medications, and more. However, generative AI tools should not be used as substitutes for talking with medical professionals.
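To make the predictive use concrete, below is a minimal sketch of how such a model might be trained and queried, assuming Python with scikit-learn and entirely synthetic data; the features, labels, and numbers are hypothetical illustrations, not a real clinical dataset or a validated tool.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)

# Hypothetical patient features: [age, heart_rate, temperature_c, symptom_severity]
X = rng.normal(loc=[55, 85, 37.5, 5], scale=[15, 12, 0.8, 2], size=(500, 4))
# Hypothetical label: which of two treatment courses historically worked best (0 or 1)
y = (0.04 * X[:, 0] + 0.3 * X[:, 3] + rng.normal(0, 1, 500) > 3.7).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The model "learns" patterns linking patient features to outcomes from training data
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(f"Held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.0%}")

# For a new patient, the fitted model suggests a treatment course, which should
# inform, not replace, the clinician's judgment
new_patient = [[60, 90, 38.1, 7]]
print(f"Suggested course: {model.predict(new_patient)[0]}")
```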

Potential Issues

While the excitement surrounding AI uses in healthcare is warranted, legal and regulatory concerns remain that must be addressed. Because AI tools could both increase provider liability and deepen healthcare discrimination and biases, these two issues warrant thorough discussion by the state and national task forces mentioned above.

1. Liability Clarification

If a doctor unknowingly relies on a flawed AI tool, who is ultimately responsible for any adverse patient outcomes: the tech company that built the algorithm or the doctor who had the final say? Where does the medical malpractice liability fall? These questions must be explored to ensure the safe implementation of AI in clinical settings.

Medical malpractice cases arise when a physician deviates from the standard of care. A doctor who relies on an AI model in good faith may therefore face liability if their actions fall below the accepted standard. Arguments that good faith reliance on AI algorithms should serve as a liability shield are unlikely to succeed in the medical field. During the 1990s, for example, legal professionals recommended that medical practice guidelines be used to establish the standard of care in malpractice cases. This type of “safe harbor” defense was unpopular among medical professionals, however, because physicians wanted to maintain independence in making clinical decisions. A similar safe harbor is therefore unlikely to work in the AI space.

Vicarious liability may also be imputed onto health systems or physician groups that fail to properly scrutinize an AI model before their employed physicians implement it clinically. For example, consider an emergency department’s use of a sepsis prediction algorithm that relies only on vital-sign data to guide treatment decisions. A vicarious liability claim may succeed if a physician errs in a treatment decision because they misinterpreted the algorithm’s output.
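To illustrate why such a narrow input set invites misinterpretation, here is a hedged sketch of a hypothetical vital-signs-only screening rule, loosely modeled on qSOFA-style criteria; the thresholds and weighting are assumptions for illustration, not a validated clinical tool.

```python
def sepsis_warning(respiratory_rate: int, systolic_bp: int, altered_mentation: bool) -> bool:
    """Flag a patient when at least two of three warning criteria are met."""
    criteria = [
        respiratory_rate >= 22,  # tachypnea
        systolic_bp <= 100,      # hypotension
        altered_mentation,       # acute change in mental status
    ]
    return sum(criteria) >= 2

# Because the rule sees only vital signs, it can miss septic patients whose
# vitals look normal and flag non-septic ones; its output therefore demands
# clinical interpretation rather than mechanical reliance.
print(sepsis_warning(respiratory_rate=24, systolic_bp=95, altered_mentation=False))  # True
```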

Products liability claims are also a viable possibility for AI users. Given the sheer number of liability theories available, regulators and lawyers must establish a clear pathway before unsuspecting healthcare workers fall into these potential AI minefields.

2. Potential for AI to deepen discrimination and healthcare biases

HHS’s Office of Minority Health has released a statement on biases in healthcare AI, which stem in part from AI creators’ own implicit biases. President Biden’s Executive Order also addressed the need for AI tools to comply with all federal nondiscrimination laws, and the Biden Administration has issued a Blueprint for an AI Bill of Rights to promote equity. Notably, AI biases may worsen existing health disparities because the data used to train algorithms often lack diversity. Some algorithms determine where, and for which patient populations, treatment services are needed most. Despite being designed as “race neutral” systems, such algorithms may produce racially discriminatory outputs when trained on non-diverse data.

However, it is important to remember that AI biases are perpetuated by human and systemic biases; recognizing those other biases at play is an essential step toward dismantling biases in AI. Additionally, vulnerable patient populations are notoriously absent from existing datasets. These data gaps contribute to the lack of diverse data, continuing the cyclical nature of health disparities for future generations. One potential solution is to encourage open data sharing, allowing public datasets to train AI algorithms. France, the Netherlands, and New Zealand have exchanged datasets as part of the Open Government Partnership (OGP). The OGP also educates users on how to identify algorithmic biases and manage the associated risks.
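For concreteness, the sketch below shows one such bias check: comparing a model’s error rates across demographic groups after training on data with a simulated representation gap. The data, groups, and numbers are synthetic assumptions, and real audits use richer fairness metrics, but the pattern illustrates how a facially neutral model can fail the group missing from its training data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=1)
n = 2000
group = rng.integers(0, 2, n)  # 0 = well-represented, 1 = under-represented
x = rng.normal(size=(n, 3))
# The two groups' outcomes depend on different features, a pattern the model
# can only learn if both groups appear in the training data
need = np.where(group == 0, x[:, 0] + x[:, 1] > 0, x[:, 0] + x[:, 2] > 0).astype(int)

# Simulate a data gap: keep only 10% of records from the under-represented group
keep = (group == 0) | (rng.random(n) < 0.1)
model = LogisticRegression().fit(x[keep], need[keep])

# A "group-neutral" model can still err far more often for the missing group
pred = model.predict(x)
for g in (0, 1):
    err = (pred[group == g] != need[group == g]).mean()
    print(f"Group {g} error rate: {err:.1%}")
```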

For security and privacy reasons, the United States may not want to move forward with public datasets. However, to embrace AI as a welcome tool in healthcare, the task forces must establish guidelines on detecting and eliminating any algorithmic biases that may affect patient care or clinical decision-making.

The Path Forward

Artificial intelligence models are here to stay in healthcare, and safeguards must therefore be established to protect providers and patients alike. By clarifying liability concerns and addressing ethical dilemmas, regulators and lawyers can ensure a safer adoption of AI models into clinical settings.

Governor Healey’s AI Strategic Task Force will be led by the Secretary of Economic Development (EOED) and the Secretary of Technology Services and Security (EOTSS). The task force also includes representatives from the City of Boston, members of the Massachusetts Technology Collaborative, representatives of organized labor, and other individuals with experience in technology, life sciences, healthcare, finance, higher education, and local government. While the current makeup of Massachusetts’s AI Strategic Task Force provides a broad knowledge base, it is missing two vital voices: providers and patients, who will be deeply affected by AI developments.

Governor Healey and President Biden are only two examples of leaders who have called for the creation of AI task forces. As these and countless other task forces work to make their jurisdictions “global leaders” in AI, the two most important stakeholders, providers and patients, cannot be left out of the conversations. The future of medicine and AI depends on their involvement.

Francesca Camacho anticipates receiving her JD from Boston University School of Law in May 2025.