The East Boston Electrical Substation: Lessons for the Future of Environmental Justice During Massachusetts’ Quest for Zero-Emissions

August 16th, 2024 in Environmental Law, State Legislation

Massachusetts was an early adopter of environmental justice principles. In 2002, the Commonwealth adopted an Environmental Justice Policy, requiring the Executive Office of Environmental Affairs to make environmental justice “an integral consideration to the extent applicable and allowable by law.” The policy was updated in 2021, after An Act Creating a Next Generation Roadmap for Massachusetts Climate Policy was signed into law by then-Governor Baker. This landmark law commits Massachusetts to net-zero greenhouse gas emissions by 2050, while also promising that the actions taken to achieve this goal will be done “equitably and in a manner that protects low- and moderate-income persons and environmental justice populations.”

Despite the Commonwealth’s apparent dedication to environmental justice, one particularly contentious project has shown the limits of that promise. In 2014 (the same year that Executive Order 552 created the Governor’s Environmental Justice Advisory Panel, and twelve years after the Commonwealth adopted the first Environmental Justice Policy), Eversource proposed an electrical substation in East Boston. The project aimed to reduce the burden on the Chelsea substation, which also serves East Boston; Eversource estimated that, by 2024, the Chelsea substation would be unable to handle the needs of both neighborhoods.

Photo: Ben Crawley

There is no question that expanding the capacity of the electrical grid is a key step in moving toward Massachusetts’ zero-emissions goals. However, the East Boston substation is not an obvious step in the right direction, especially when considering the Act’s requirement that the zero-emissions goal be achieved in an equitable manner.

East Boston is already home to numerous environmental nuisances. Boston Logan International Airport is located there, with all of its noise and air pollution. The neighborhood houses most of the Boston area’s stored heating oil and jet fuel, along with large piles of road salt, and multiple highways run through it. It is, and has been, a designated Environmental Justice Community, the term Massachusetts uses to identify locales with prevalent environmental injustice concerns.

The new substation will only add to the burdens already placed on this community. The site is near a popular playground, causing concern for parents. It also takes away green space in an already industrialized area. Further, the substation is sited on the banks of Chelsea Creek, which makes it susceptible to flooding, especially as sea levels rise due to climate change. Eversource says that it is outside of the 100- and 500-year flood plains, but climate scientists note that flood plains move further inland as water levels rise.

East Boston has a significant minority population; 56% of residents are Hispanic, and 70% of the residents who live close to the new substation site speak a language other than English. While Eversource and the City of Boston knew this, energy company officials determined that providing Spanish translators at community meetings about the project would be disruptive. The written materials that were translated into Spanish contained inaccuracies. Community meetings were held during the working day, making it impossible for working residents to attend, and while meetings were also held on Zoom, many East Boston residents lacked the technology or know-how to access them.

Credit: Jesse Costa/WBUR

The Commonwealth was required by law to take community input into account during the permitting process. Because the feedback process was so inaccessible to the East Boston community, the state was sued under Title VI for discrimination based on national origin, with the plaintiffs asking for federal intervention. However, the Environmental Protection Agency declined to investigate and is now the subject of its own lawsuit.

Prior to any decision on the Title VI complaint, the Energy Facilities Siting Board tentatively approved the substation’s final location in February 2021. That same year, 84% of Boston voters disapproved of the substation’s location; however, the ballot initiative was non-binding.

In a final blow by the Commonwealth against East Boston, the Energy Facilities Siting Board granted Eversource a Certificate of Environmental Impact and Public Interest, allowing Eversource to bypass 14 state and local permitting requirements. However, new injustices continue to arise. Immediately after receiving the certificate, Eversource updated the estimated cost of the substation from $50 million to $103 million, and it will surely “pay” for those costs by raising energy prices for its customers.

Further, as part of the 2021 approval, the Commonwealth required Eversource to negotiate a Community Benefits Agreement, under which Eversource would undertake mitigation measures to counter the environmental impacts of the new substation. The natural assumption, of course, would be that Eversource would pay for these projects. That assumption is incorrect: Eversource intends to pass those costs on to ratepayers.

To recap, East Boston will be the home of a new electrical substation. Eversource broke ground in early 2023. The substation was approved despite overwhelming disapproval by the area’s residents, after said residents were effectively barred from engaging meaningfully in the permitting process. It is being built in an area already known to be affected by environmental injustice. Despite numerous challenges, courts have upheld the project every step of the way. Additionally, the costs of this project and the environmental mitigation measures agreed to by Eversource will be passed on to the very community that did not, and still does not, want it to be there.

The 2021 Act Creating a Next Generation Roadmap for Massachusetts Climate Policy is a major step toward ensuring that injustices like these will not happen again (the Act is not retroactive, so it could not be used to undo the East Boston substation’s approval). Because the Act expands the definition of an environmental justice community and requires the Commonwealth to consider existing pollution in communities when evaluating proposed projects, there is more room for court enforcement of environmental justice violations. As one staff attorney at Roxbury-based Alternatives for Community and Environment put it, “the new bill adds more clarity and lays out more tools for the community to use to be able to participate the way that they should in these decisions.”

However, the Act is not a panacea, and there is still work to be done. In its decision allowing Eversource to bypass state and local environmental regulations, the Energy Facilities Siting Board declared that, even in light of the new Act, “it has fulfilled all applicable environmental justice and LAP [the Commonwealth’s Language Access Policy] requirements in this proceeding, and that this decision is compatible with all applicable environmental justice and LAP policies.” This is not a good sign for how the Board will handle future siting projects if it believes the East Boston substation project complied with the Act’s environmental justice requirements.

A bill filed by Representative Madaro, H.3187, An Act Relative To Energy Facilities Siting Improvement To Address Environmental Justice, Climate, And Public Health, addresses these concerns. Hopefully, it will be enacted and add to Massachusetts’ new protections for environmentally vulnerable communities. However, as Massachusetts industry ramps up to meet zero-emissions goals, more conflicts will undoubtedly arise. The Massachusetts Legislature took an excellent step forward with An Act Creating a Next Generation Roadmap for Massachusetts Climate Policy, but it cannot rest on its laurels. It needs to continue to pay attention to environmental justice, especially as the energy industry changes over the next few decades, and not forget the harm done to East Boston.

Helen Warfle anticipates graduating with a JD from Boston University School of Law in May 2025.

The Future of AI & Healthcare: Providers and Patients Deserve Seats at the Discussions

August 16th, 2024 in Federal Legislation, Health Law, State Legislation

In February 2024, Governor Maura Healey signed an Executive Order establishing the Artificial Intelligence (AI) Strategic Task Force in Massachusetts. Governor Healey is seeking $100 million in funding, hoping to make Massachusetts “a global leader in Applied AI.” The Task Force will identify potential industry stakeholders and provide recommendations on how Massachusetts can encourage state businesses to adopt AI technologies.

Massachusetts is not the only state seeking answers on how best to encourage AI adoption by local businesses. On a national level, the Biden Administration issued an “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” Additionally, the U.S. Department of Health and Human Services (HHS) issued its own Artificial Intelligence Strategy, focusing on creating rules and regulatory action plans. Task forces are a noble first step; however, it is imperative that the right stakeholders are given a seat at the table. Providers and patients face potential consequences from AI’s adoption without the necessary safeguards in place to protect against increased liability and biases. A 2023 survey conducted by the American Medical Association (AMA) showed that 41% of physicians are equally excited and concerned about the potential for AI in healthcare.

How will these task forces accelerate providers’ excitement while simultaneously decreasing their concerns? A simple way is to give both providers and patients seats at those discussions, ensuring that the two stakeholders with the most to gain and the most to lose from AI adoption have a say.

AI Uses in Healthcare

Healthcare providers are excited about the potential uses of AI in their industry. Artificial intelligence in healthcare is an umbrella term for machine learning (ML) algorithms and other technologies used in medical settings. The two main types of AI are predictive AI and generative AI. Predictive AI performs data analysis to predict and decide treatment courses, whereas generative AI creates original content (e.g., ChatGPT). Machine learning works by feeding data into training models so that the machine can “learn.” In healthcare settings, machine learning serves as a predictive model, determining which treatment will work best for a patient with specific symptoms. Generative AI can be used to provide patient education on medical terminology, medications, and the like. However, generative AI tools should not be used as substitutes for talking with medical professionals.
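To make the predictive pattern concrete, here is a minimal, purely illustrative sketch in Python (using scikit-learn) of the kind of decision-support classifier described above, trained on hypothetical symptom-and-outcome records. The data, features, and treatment labels are invented for illustration and do not come from any real clinical system.

```python
# A minimal sketch of "predictive AI" for treatment selection: a classifier
# trained on hypothetical (patient features -> effective treatment) records.
# All data here is randomly generated and purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical records: four numeric features per patient (say, temperature,
# heart rate, oxygen saturation, age) and the treatment that worked (0-2).
X = rng.random((500, 4))
y = rng.integers(0, 3, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# For a new patient, the model outputs a suggested treatment plus class
# probabilities: decision support for a clinician to review, not a
# replacement for clinical judgment.
new_patient = X_test[:1]
print("suggested treatment:", model.predict(new_patient)[0])
print("probabilities:", model.predict_proba(new_patient)[0].round(2))
```

The design point worth noticing is in the last lines: the output is advisory, a probability-weighted suggestion a clinician can accept or override, which is precisely why the liability questions discussed below matter.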

Potential Issues

While the excitement surrounding AI uses in healthcare is warranted, legal and regulatory concerns remain that must be addressed. Given the potential for AI tools to increase provider liability and to deepen discrimination and bias in healthcare, these two issues warrant thorough discussion by the state and national task forces mentioned above.

1. Liability Clarification

If a doctor unknowingly relies on an AI tool, who is ultimately responsible for any adverse patient outcomes? Who will be held liable: the tech company that built the algorithm, or the doctor who had the final say? Where does the medical malpractice liability fall? These questions must be explored to ensure the safe implementation of AI in clinical settings.

Medical malpractice cases arise when a physician deviates from the standard of care, and a doctor who relies on an AI model in good faith may still face liability if their actions fall below that standard. Arguments that good-faith reliance on AI algorithms should serve as a liability shield are unlikely to succeed in the medical field. During the 1990s, for example, legal professionals recommended that medical practice guidelines be used to establish the standard of care in malpractice cases. This type of “safe harbor” defense was unpopular among medical professionals, however, because physicians wanted to maintain independence in their clinical decisions, and it is unlikely to fare better in the AI space.

Vicarious liability may also be imposed on health systems or physician groups that fail to properly scrutinize an AI model before their employed physicians put it to clinical use. Consider, for example, an emergency department’s use of a sepsis prediction algorithm that relies only on vital-sign data to guide treatment decisions. A vicarious liability claim might succeed if a physician errs in a treatment decision because of a misinterpretation of the sepsis algorithm’s output.

Products liability claims are also a viable possibility for AI users. Given the sheer number of different liability claims available, regulators and lawyers must establish a clear pathway before innocent healthcare workers fall into these potential AI minefields.

2. Potential for AI to Deepen Discrimination and Healthcare Biases

HHS’s Office of Minority Health released a statement on biases in healthcare AI, which stem in part from AI creators’ own implicit biases. President Biden’s Executive Order also addressed the need for AI tools to comply with all federal nondiscrimination laws, and the Biden Administration has issued a framework, the Blueprint for an AI Bill of Rights, to ensure equity. Notably, AI biases may worsen existing health disparities when the data used to train algorithmic programs lacks diversity. Some algorithms determine where, and which, patient populations need treatment services the most; despite being designed as “race neutral” systems, such algorithms may produce racially discriminatory outputs when trained on insufficiently diverse data.

The Massachusetts State House
Boston, 1787

However, it is important to remember that AI biases are perpetuated by human and systemic biases, and recognizing those other biases at play is an essential step toward dismantling them. Additionally, vulnerable patient populations are notoriously absent from existing datasets; these data gaps compound the lack of diverse data, continuing the cyclical nature of health disparities for future generations. One potential solution is to encourage open data sharing, allowing public datasets to train AI algorithms. France, the Netherlands, and New Zealand have exchanged datasets as part of the Open Government Partnership (OGP), which also educates users on how to identify algorithmic biases and manage the associated risks.
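To see how a lack of diverse data can produce the discriminatory outputs described above, consider a small, self-contained Python demonstration; it is a stylized toy, not a real clinical model. One subgroup’s outcome pattern differs from the majority’s, and because that subgroup makes up only a sliver of the training data, the trained model errs far more often on it.

```python
# Toy demonstration of bias from underrepresentation: group B's outcome
# pattern differs from group A's, but group B is only 5% of the training
# data, so the model largely learns group A's pattern and fails on group B.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, flipped):
    """Records where the first feature drives the outcome; the direction
    of its effect differs between the two (hypothetical) groups."""
    x = rng.normal(size=(n, 3))
    sign = -1.0 if flipped else 1.0
    y = (sign * x[:, 0] + 0.1 * rng.normal(size=n)) > 0
    return x, y.astype(int)

# Training set: 950 group-A records, only 50 group-B records.
xa, ya = make_group(950, flipped=False)
xb, yb = make_group(50, flipped=True)
model = LogisticRegression().fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

# Audit: evaluate accuracy separately on balanced held-out data per group.
for name, flipped in [("group A", False), ("group B", True)]:
    xt, yt = make_group(2000, flipped)
    print(f"{name} accuracy: {model.score(xt, yt):.2f}")
# Group A's accuracy will be near-perfect; group B's far worse -- exactly
# the per-group accuracy gap that a bias audit is meant to surface.
```

Per-group evaluation of this kind is one of the basic auditing practices the task forces could require before a model is used in patient care.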

For security and privacy reasons, the United States may not want to move forward with public datasets. However, to embrace AI as a welcome tool in healthcare, the task forces must establish guidelines on detecting and eliminating any algorithmic biases that may affect patient care or clinical decision-making.

The Path Forward

Artificial intelligence models are here to stay in healthcare, and thus safeguards must be established to protect providers and patients alike. By clarifying liability concerns and addressing ethical dilemmas, regulators and lawyers can ensure the safer adoption of AI models in clinical settings.

Governor Healey’s AI Strategic Task Force will be led by the Secretary of Economic Development (EOED) and the Secretary of Technology Services and Security (EOTSS). Additional members include representatives from the City of Boston, members of the Massachusetts Technology Collaborative, representatives of organized labor, and other individuals with experience in technology, life sciences, healthcare, finance, higher education, and local government. While the current makeup of Massachusetts’s AI Strategic Task Force provides a vast knowledge base, it is missing two vital voices: providers and patients, who will be most deeply affected by AI developments.

Governor Healey and President Biden are only two examples of leaders who have called for the creation of AI task forces. As these and countless other task forces work on identifying ways to become “global leaders” in AI, the two most important stakeholders, providers and patients, cannot be left out of the conversations. The future of medicine and AI depends on their involvement.

Francesca Camacho anticipates receiving her JD from Boston University School of Law in May 2025.

Navigating the Patchwork of Privacy: State Privacy Laws in the Absence of a Federal Framework

August 16th, 2024 in Uncategorized

As technology has changed and internet usage has exploded, the amount of personal and consumer data we provide and generate has increased significantly. Through online activities such as purchases, social media, and even surveys or simply clicking on links, companies collect data from consumers and use or sell it. This collection and use of personal data has raised many privacy concerns over the years. Although a growing number of states have begun passing privacy regulations, the differing standards in each law and the rapid introduction of new bills make it difficult for multistate businesses to remain compliant and for consumers to know their rights. Given these risks, the federal government must act quickly to establish a federal framework for privacy law in the United States.

Although data privacy is a growing concern across the country, there is currently no comprehensive federal data privacy law. As a result, states must enact their own privacy regulations to monitor consumer data within their borders. In 2018, California kicked off this effort by enacting the California Consumer Privacy Act (“CCPA”), the first comprehensive consumer data privacy law in the United States. The CCPA provides consumers with certain rights over the personal data collected by businesses. California later amended the CCPA in 2020 to include additional privacy protections. Since then, Colorado, Virginia, Utah, Connecticut, Iowa, Indiana, Tennessee, Montana, Florida, Texas, and Oregon have all followed California with comprehensive state consumer data privacy laws. Overall, thirty-three states have passed or introduced privacy bills regulating both the collection and the use of personal data. However, as the internet has no borders, this patchwork approach by states carries many risks.

Because the patchwork of state privacy laws imposes differing requirements, multistate businesses face significant compliance costs. While most of the bills passed or proposed in the past year share many similarities with the existing privacy laws in states like California, Colorado, and Virginia, each contains unique standards and carve-outs from existing standards. California’s privacy law generally gives consumers the rights to access, correct, and delete their data; to data portability; to opt out of the processing of sensitive data and of data sales; and to not be subject to automated decision-making, along with a limited private right of action. It also obligates businesses to obtain opt-in consent before collecting the personal data of children under 16, to give notice and provide transparency, to prepare risk assessments, to refrain from discriminating against customers for exercising their rights, and to inform individuals of the purpose for which their personal data is processed.

Other states diverge from this model in both directions. Montana’s data privacy law appears very similar to California’s, but it actually goes beyond California’s standards by allowing consumers to revoke their consent to data processing; unlike California’s law, however, it includes no limited private right of action. Tennessee adds provisions such as a carve-out creating a compliance safe harbor for companies that follow National Institute of Standards and Technology standards. Utah’s consumer privacy act, by contrast, provides only baseline protections, such as the rights to access, delete, opt out of targeted advertising, data portability, and opt out of sales, and imposes fewer obligations on businesses: obtaining opt-in consent for children under 13, giving notice and transparency, and not discriminating against customers for exercising their rights. The rights and business obligations in the Utah law, and in proposed legislation elsewhere, are clearly narrower than those of states like California.

While enacted state privacy laws help establish baseline standards for other states to look to, the variation among these regulations imposes huge compliance costs on businesses and creates confusion for consumers. These costs come from out-of-state businesses being subject to multiple different state laws, as well as duplicative rules, and the burden on small businesses is especially substantial. Due to these high costs and the possibility of more states passing privacy laws, businesses must decide whether to make changes now or wait for other states to act. Decisions made against this medley of new privacy regulations create a high risk that many businesses will fail to comply with state rules.

The state laws are not the only source of fragmentation. Federal regulations apply only to certain business sectors, such as the Gramm-Leach-Bliley Act (“GLBA”) in the financial sector, the Health Insurance Portability and Accountability Act (“HIPAA”) in the medical sector, and the Children’s Online Privacy Protection Act (“COPPA”) for children, and they provide no unifying law to federally protect personal data privacy. Meanwhile, the spread of new technologies, including AI, across all industries has increased awareness of consumer privacy rights in every sector, making this industry-based federal regulation insufficiently comprehensive.

The risks and high costs associated with the current patchwork approach to data privacy law highlight how imperative it is for Congress to act quickly and pass a federal privacy framework that streamlines regulations, clarifies how businesses should comply with privacy laws, and provides consumers with basic data privacy rights. The United States is one of just a few developed countries without a comprehensive federal privacy law. On June 3, 2022, Congress released a bipartisan, bicameral proposal called the American Data Privacy and Protection Act (“ADPPA”), designed to provide “foundational data privacy rights” for consumers as well as to “create strong oversight mechanisms and establish meaningful enforcement” of organizations and businesses. The act would broadly apply to all businesses operating in the United States. However, the ADPPA appears to impose weaker data privacy regulations than California’s law, prompting resistance from California lawmakers because, if enacted, the ADPPA would preempt California’s data privacy law. Even so, the uniformity of a federal data privacy law holds potentially greater benefits for businesses and consumers. On July 20, 2022, the House Energy and Commerce Committee approved the ADPPA by a huge margin of 53-2. Yet while this proposal is the closest Congress has come to passing a comprehensive federal privacy law, it died without even a House floor vote. Since 2022, the political landscape within Congress has changed, with the House now Republican-controlled, and it is unclear where support for the ADPPA lies. With the growing patchwork of state privacy regulation, federal preemption remains a point of dispute in discussions of the ADPPA. Due to this uncertainty, many states are continuing to press forward with their own data privacy laws, further complicating the data privacy landscape for consumers and businesses in the United States.

On October 30, 2023, President Biden issued an executive order on AI and called on Congress to pass bipartisan data privacy legislation to better protect the privacy of Americans. This recent federal recognition of the current and growing risk to Americans’ personal data indicates that the issue remains at the forefront of federal discussion despite Congress not voting on the ADPPA in 2022. Given Congress’s snail’s-pace movement toward a comprehensive federal data privacy framework, and the uncharacteristic swiftness with which states are filing and enacting data privacy bills, the compliance issues associated with patchwork state laws will likely continue to plague businesses and consumers until the ADPPA or another federal privacy law is enacted.

Victoria Jin anticipates graduating from Boston University School of Law with a juris doctor in May 2024.