Abstract
Emerging technologies have advanced amid criticism from every direction, including from their own developers. Multiple corporations contracted with the Department of Defense (DoD) under Project Maven have faced employee backlash and open letters from employees and outside experts. The concerns raised are mostly ethical, centering on the fear that the companies' artificial intelligence (AI) work will be incorporated into autonomous weapons systems (AWS). Regardless of how many employees speak out or what action a company takes, these systems are unlikely to stop being developed. Google and Clarifai show that corporate responses can differ vastly, yet AI will be developed regardless. Indeed, AI has been brought into the Pentagon and continues to spread.
Introduction
Imagine your work being compared to a technology that assisted Hitler in the Holocaust. Except, instead of stepping back and reassessing your contracts and partnerships, you double down and defend the work you do. This is the reality for private companies, like Microsoft, Amazon, and Palantir, partnering with the United States government on AI. The ethical questions raised by the acquisition of emerging technologies have prompted many conversations about the involvement of these corporations in the Pentagon. As a result, the private and public sectors have taken different approaches, raising questions about whether and how employees can force change.
The advancing technologies of AI and AWS assign a newfound moral responsibility to developers, who are left to wonder what they should do with their ethical authority. Why are certain actions taken and others not? Does action, or the lack of it, have any effect? These questions will be assessed in relation to these technologies and the involvement of Google and Clarifai with the Pentagon.
Two hypotheses about employee action at Google and Clarifai will be considered. Hypothesis one assumes that employee activism will cause enough uproar in the technology industry that companies will pull out of their contracts, subjecting these systems to major delays. Under this assumption, the resulting integration delays could cause significant financial loss or halt production entirely. Conversely, hypothesis two assumes that regardless of the action taken by employees in Silicon Valley and at other corporations working on these technologies, there will not be enough backlash to stop the integration of AI into autonomous systems and the DoD. Their activism would thus be ineffective at scale, as manufacturing and incorporation continue.
While the mental health impact on combatants on the ground has been studied for many years, it has yet to be analyzed in the developers who design these emerging technologies. In part, this is because the technologies are so novel that they have seen little significant fielding. It is critical to note, however, that the morals of every person working on a military project, whether civilian or enlisted, matter, and that moral or emotional struggles may have a long-term impact on them or on the work they do. A moral struggle and its impact are difficult to quantify, and these difficulties compound because there is no intercultural, objective way to measure a person's moral judgments. By examining the actions taken by Google and Clarifai, this paper will show that employee activism has been ineffective and that employees do not currently have the ability to impede the development of AI-integrated autonomous systems.
Literature Review
Throughout recent decades, research on veterans has investigated moral, ethical, and mental health concerns. This research has shown that battlefield experience has a high potential to lead to long-term mental health complications. With the recent emergence of new technologies requiring less human involvement, more programmers have spoken out about the development of these systems. There is a lack of research on how developing these technologies creates moral dilemmas and thus impacts the corporations. Nor has it been thoroughly researched whether those developing the systems feel a sense of responsibility for what these technologies do comparable to that felt by those who have been on the battlefield.
Those with concerns about working on government-led AI projects or autonomous weapons systems have been fairly outspoken, but so have those who feel the work is moral. The former fear that these systems lack sufficient human judgment and testing. The latter cite security concerns, pointing to China's intent to become a global leader in technological development, and believe that without such technologies the U.S. may fall behind.
Research on the impacts of developing autonomous weapons systems has become increasingly important due to their rising relevance in militaries worldwide. The U.S. is one of many countries intending to continue developing and eventually field these systems. As more of them emerge, it is important to be aware not only of the impact on developers, but also of any effects that workers' activism may have. States intending to use these systems must understand the impact at every level. In this paper, "autonomous weapons systems" refers strictly to human-off-the-loop systems.
Methodology
This study examines Google and Clarifai because of the U.S. DoD's interest in their technologies and programming. The DoD has long partnered with Silicon Valley companies on AI and AWS. These two companies were chosen not only for their Project Maven partnerships with the Pentagon, but also for their different responses to employee activism. Comparing the two shows how much weight a company may give its employees' moral concerns and how that weight may change corporate action. While neither company initially worked on AI for autonomous systems, policies changed as the technology grew, and Clarifai announced a willingness to sell autonomous weapons technologies to the government.
Through the Project Maven partnership, Google and Clarifai assisted the DoD by providing AI algorithms and models to help on the battlefield. While each company's outcome with the DoD differed based on how it handled employee activists, together they illustrate different methods of managing conflicting views. Both were founded as technology businesses, without intent to aid militaries; both are international technology companies, yet both contributed technological advancements to the DoD. For the purposes of this research, Google and Clarifai were chosen for the significant work their employees did to make their voices heard and for their coverage in news articles, research papers, and books.
Research data was drawn from open-source information online due to time constraints and an inability to interview people within the industry. The lack of personally conducted interviews on worker positions was not a major hindrance, thanks to the many articles and employee interviews available to the public. Comparing general and moral sentiments about these systems with the sentiments of their developers reveals how little effect programmers' voiced concerns have had. To assess the effectiveness of employee and public activism on the development of military AI and partnerships, its effects must be measurable. Effectiveness here is measured by whether activism successfully delayed the building or creation of these systems at any point.
Private Silicon Valley corporations have held DoD contracts for decades. These contracts increased significantly beginning in 2016 with the Defense Innovation Unit, followed by the start of Project Maven in 2017. Even before the 2000s and the surge of big tech integration into the DoD, Silicon Valley was involved in presidential politics, endorsing President Clinton after the release of the "Technology Policy for America." While some may view the technology industry as new to government, and to the Pentagon specifically, these corporations have been major players for decades.
Moral & Ethical Challenges of Autonomous Weapons Systems
Many people and organizations have voiced disdain for fully autonomous, human-off-the-loop weapons systems. Because these systems are human off the loop, the operator does not have full control over them and cannot stop them once in motion. One of the main points raised in such arguments is that these systems must abide by Article 36 of Additional Protocol I, which requires that new weapons comply with international law. Thus, AWS must be able to properly distinguish between combatants and non-combatants and be proportional in their use of force. Whether a machine can do so must be determined by additional testing prior to fielding.
Because these emerging technologies are not yet fully understood and have not reached their full capacity, developers have much to learn about what they do in the field. This creates a knowledge gap between developers and operators, which in turn produces an initial lack of operator trust in the systems, born of not understanding how they were developed. Once an operator works with a system repeatedly, however, they begin to trust it to an extreme. Both extreme trust and the lack of it can lead to significant operational errors. Education on both sides, developmental and operational, is necessary for the ethical use of such systems.
DoD Directive 3000.09, Autonomy in Weapon Systems, states that those who operate autonomous systems are "to exercise appropriate levels of human judgment over the use of force." This combination of human and system is known as human-machine teaming. While at first glance this seems to preserve human decision-making, humans tend to place exceedingly high levels of trust in the decisions machines output. This extreme trust opens the way for decisions that may be unethical or illegal. When a machine regularly makes minor decisions that the human knows are acceptable, the operator begins to trust the system's more significant decisions as well, ceding more and more decision-making capability to the machine. This deference to machine decisions also applies to AI-enabled systems. Not every AWS is AI-enabled, but for those that are, reaffirming their decisions with unjustified trust means machine learning could easily reinforce unethical or illegal actions.
Project Maven
Project Maven, formally known as the Algorithmic Warfare Cross-Functional Team (AWCFT), began in 2017 with the intent of bringing AI and machine learning into the DoD to counter China as a rising technological power. Including AI in the DoD appeared to be one way the U.S. military could maintain its global lead. Project Maven was meant to evaluate drone imagery, use private-sector machine learning technologies, and improve future drone strikes. It was brought into the Pentagon under the Third Offset Strategy, alongside systems with autonomy. Prior to 2017, however, the Pentagon did not have strong connections with the AI industry. Whether the sudden partnerships with technology corporations increased controversy is hard to assess; had these companies already built trust within the DoD, there might not have been the same level of outcry.
One of the challenges in integrating AI capabilities built under Project Maven is the lack of real-life data needed to train the systems. Training on artificial scenarios does not produce fully realistic data for an AI model to learn from, which can cause AI-enabled systems to go awry when fielded. It is impossible to remove all biases or to test systems against every real-world scenario. Once fielded, Project Maven required algorithm updates multiple times a day to maintain accuracy in its new environment. Though this shows that AI-enabled systems are not ready for immediate fielding and must be tweaked, it also demonstrates their ability to learn and adjust while being algorithmically refined from potentially thousands of miles away.
Through the Replicator Initiative, the U.S. intends to field thousands of AI-enabled autonomous systems by 2026. The initiative was created to contend with China's growth. To build these systems, the DoD needs corporations, many of them in Silicon Valley, to create them. While the U.S. has endorsed the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, it remains unclear whether the endorsement will ease concerns about AI or AWS. Some view the declaration as inadequate because it is not a full ban on AWS and still allows for the militarization of AI.
Google

Google's entrance into the defense world was plagued by outspoken ethical concerns. Roughly 3,000 employees signed an open letter urging Google to halt its contract with the DoD. Much of this concern stemmed from the potential for their programming to enable lethal force, the growing presence of the military in their work, and AI itself. Google's intent was to work on image data processing; however, such imagery can easily be paired with a lethal or non-lethal autonomous weapons system. In addition to the employee letter, Google received an open letter from various AI researchers and academics asking it to withdraw from Project Maven, given the mass of civilian data Google holds and its potential military integration. While civilian data sharing did not appear to be a major concern for the employees, it is a reminder to those who use Google's services of the extent to which their personal data can be shared.
After the open letter failed to prompt Google to cancel the government contract early, many employees took personal action, with some resigning from their positions. Google is one of the few companies in the tech industry that young adults go to great lengths to work for, and like other large tech corporations it has rigorous qualification requirements. Even though Google is an end goal for many programmers and technology experts, the large number of resignations over Project Maven was a first for the company. This served as an example to even the most prestigious, powerful companies in the technology sector: if their decisions go against people's ethics, they can lose employee support.
The impact of the open letters, petitions, and employee resignations was significant. While Google did not immediately cancel its contract with the DoD, the company decided not to renew it when it ended in 2019. Its outspoken employees were effective in getting their employer not to renew the contract on ethical grounds, even though Google had claimed it was working solely on marking objects with the help of AI. While Google was one of the first major private technology players in the Pentagon, it responded in line with its employees' wishes.
Hypothesis one operates under the assumption that employee activism would negatively impact Google and the implementation of AI into autonomous weapons systems in the DoD. However, the Pentagon had another company, Palantir, lined up to take over the contract. Given Google's size, one might expect its withdrawal to have severely set back Project Maven. Instead, Palantir had such success with Project Maven that it expanded its contract to reach further within the DoD. Moreover, while a $15 million contract is a significant amount to many, it is not to Google. The DoD's continuation of the AI project with another company shows that these contracts simply move from one corporation to another when cancelled, supporting hypothesis two. Employee activism has little effect at scale at this point. Google reacted to activism by not renewing its contract, but regardless of the backlash Google received for partnering with the DoD, its retreat did not stop the incorporation of AI into the Pentagon.
Clarifai
The partnership between the DoD and Clarifai, a New York-based company with a research base in San Francisco, began shortly after the launch of Project Maven. While corporations have varying reasons for the partnerships they form, Clarifai CEO Matthew Zeiler stated that the company's goal for Project Maven, "to save the lives of soldiers and civilians alike," aligned with its mission "to accelerate the progress of humanity with continually improving AI." Clarifai was working on imagery technology similar to Google's, using AI for object detection in drone surveillance. Despite the non-lethal intent, employees within Clarifai raised concerns. Though the initial focus was drone imagery, Clarifai's CEO partnered with Crimson Phoenix, a company already working with the U.S. Army's Autonomous Combat Casualty Care Initiative, demonstrating the company's shift into AI-enabled autonomous systems.
Like other emerging technologies, AI is rapidly growing and changing, with its uses evolving at an explosive rate. For those who work on these technologies, this rapidity can lead to moral dilemmas. Realizing that the technology they were developing could lead to AI-enabled autonomous weapons, one Clarifai employee spoke out and subsequently quit. The same employee wrote a letter to Clarifai's CEO on behalf of a larger group of employees. The letter addressed unavoidable biases in coding and asked the CEO to clarify whether the technologies under development would be implemented into weaponry. The employees quickly received confirmation that they likely would.
In addition to employees leaving their jobs over ethical concerns, Clarifai fired a former Air Force captain for filing a complaint. The complaint concerned not the company's technology work but a security breach that was not reported within the 72 hours the Pentagon mandates. This brought into question the company's general ethics, not solely the ethics of the projects it works on. A security breach on this kind of project, especially one not shared with the DoD, is a national security concern. Regardless of how employees feel about working on these projects, such national security information falling into an adversary's hands would have a significant negative impact.
While Clarifai employees tried to halt these projects and to get clarity on how the government would use them, their efforts gained little traction. The CEO's stance remains that lives will be saved through autonomous weapons, with AI as "an essential tool." The recent partnership with Crimson Phoenix on integrating AI into autonomous systems only underscores the company's intent to work toward AI-enabled autonomous weapons. This coupling could prompt more questions from workers; whether it will lead to more resignations or more speaking out against the company and AI in general remains to be seen.
Employee activism at Clarifai was largely ineffective. Under the first hypothesis, employees leaving their jobs and writing open letters to the CEO should have caused the company to halt its contract, leaving the Pentagon scrambling for a new business partner. This was not the case: Clarifai took strikingly little action after employees spoke up, showing that hypothesis two better fits this business. The company continued to develop the AI models the Pentagon requested and also partnered with a separate company to expand its work within the DoD and autonomous systems. Zeiler's statements show that the partnership has only deepened and that Clarifai will do more work for the Pentagon on integrating these technologies.
Conclusion
The integration of private technology companies into the DoD has been peppered with employee activism. This raises questions about what actions employees take, why they take them, and how effective their activism is, judged by how their employers respond. Both the development of AWS and the integration of AI under Project Maven have raised many ethical dilemmas. Employees and experts alike have written open letters addressing these concerns to the private tech companies partnering with the DoD. While some companies, like Google, have chosen not to renew their contracts in light of the concerns employees raised, others, like Clarifai, have made clear they do not recognize these ethical concerns. Regardless, the DoD continues to partner with corporations to integrate AI into autonomous systems, drawing further concern from many employees and experts.
Emerging technologies and defense integration are increasingly important topics that continue to raise moral dilemmas and affect national security. Companies must remain clear about their technological intent and, when possible, be mindful of employee concerns. Communication between the private and public sectors is vital to mitigating concerns and to being clear about which projects employees' work supports. This will only grow in importance as the U.S. attempts to maintain its position as the world leader in technology amid growing competition with China.

Bibliography
Albon, Courtney. “Palantir Wins Contract to Expand Access to Project Maven AI Tools.” C4ISRNet, 30 May 2024, www.c4isrnet.com/artificial-intelligence/2024/05/30/palantir-wins-contract-to-expand-access-to-project-maven-ai-tools/.
“Amid Pressure from Employees, Google Drops Pentagon’s Project Maven Account.” PBS, 3 June 2018, www.pbs.org/newshour/show/amid-pressure-from-employees-google-drops-pentagons-project-maven-account.
“Article 36 – New Weapons.” IHL Databases, International Committee of the Red Cross, ihl-databases.icrc.org/en/ihl-treaties/api-1977/article-36.
Bajak, Frank. “Pentagon’s AI Initiatives Accelerate Hard Decisions on Lethal Autonomous Weapons.” AP News, 26 Nov. 2023, apnews.com/article/us-military-ai-projects-0773b4937801e7a0573f44b57a9a5942.
C, Hari. “Clarifai Employee Was Terminated for Trying to Report Controversial Pentagon AI Project Was Hacked.” California Employment Legal Group, 17 Feb. 2020, caelg.com/2018/06/13/clarifai-employee-terminated-trying-report-pentagon-ai-project-hacked/.
Chappellet-Lanier, Tajha. “Google Employees Resign in Protest against Air Force’s Project Maven.” FedScoop, 14 May 2018, fedscoop.com/google-employees-resign-project-maven/.
“Clarifai and Crimson Phoenix Partner to Enhance Advanced AI, ML and Unstructured Data Labeling in Defense, Intelligence Communities.” Press Release, 9 Oct. 2024, www.clarifai.com/press-release/clarifai-and-crimson-phoenix-partner-to-enhance-advanced-ai-ml-and-unstructured-data-labeling-in-defense-intelligence-communities.
Conger, Kate. “Google Employees Resign in Protest against Pentagon Contract.” Gizmodo, 14 May 2018, gizmodo.com/google-employees-resign-in-protest-against-pentagon-con-1825729300.
Cummings, Mary. “Automation bias in intelligent time critical decision support systems.” AIAA 1st Intelligent Systems Technical Conference, 19 June 2004, https://doi.org/10.2514/6.2004-6313.
Daws, Ryan. “Palantir Took over Project Maven Defense Contract after Google Backed Out.” AI News, 24 Aug. 2021, www.artificialintelligence-news.com/news/palantir-project-maven-defense-contract-google-out/.
“DOD Directive 3000.09, ‘Autonomy in Weapon Systems.’” U.S. Department of Defense, www.esd.whs.mil/portals/54/documents/dd/issuances/dodd/300009p.pdf.
González, Roberto J. “Militarising Big Tech.” Transnational Institute, 2023, www.tni.org/en/article/militarising-big-tech.
Kalvapalle, Rahul. “Google Employees Ask Tech Giant to Pull out of Pentagon AI Project – National.” Global News, 6 Apr. 2018, globalnews.ca/news/4124514/google-project-maven-open-letter-pentagon/.
Kosoff, Maya. “Amazon Workers to Jeff Bezos: Stop Weaponizing Our Tech.” Vanity Fair, 22 June 2018, www.vanityfair.com/news/2018/06/amazon-workers-to-jeff-bezos-stop-weaponizing-our-tech.
Laiq, Nur. “Silicon Valley’s Political Involvement Began Long before Musk and Bezos.” The Hill, 1 Nov. 2024, thehill.com/opinion/4966121-silicon-valley-politics-tech-gurus/.
Maldonado, Samantha. “Employees of Big Tech Are Speaking out like Never Before.” Tech Xplore, 26 Aug. 2019, techxplore.com/news/2019-08-employees-big-tech.html.
Metz, Cade. “Is Ethical A.I. Even Possible?” The New York Times, 1 Mar. 2019, www.nytimes.com/2019/03/01/business/ethics-artificial-intelligence.html.
“Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy.” U.S. Department of State, www.state.gov/political-declaration-on-responsible-military-use-of-artificial-intelligence-and-autonomy/. Accessed 12 Dec. 2024.
Ryseff, James. “Mastering Human-Machine Warfighting Teams.” War on the Rocks, 8 Nov. 2024, warontherocks.com/2024/11/mastering-human-machine-warfighting-teams/.
Scharre, Paul. “Maven.” Four Battlegrounds, W. W. Norton & Company, New York, NY, 2023, pp. 53–59.
Segars, Heidi. “The Influence of Operator Trust on Human-Robot Interaction Within Teams.” Sage Journals, journals.sagepub.com/.
“Silicon Valley Becomes Hotbed for Employee Activism.” CBS News, CBS Interactive, 25 Aug. 2019, www.cbsnews.com/sanfrancisco/news/tech-hotbed-employee-activism/.
“U.S. Plan for ‘Responsible Military Use of Ai’ Constructive but Inadequate.” Arms Control Association, 16 Feb. 2023, www.armscontrol.org/pressroom/2023-02/us-plan-responsible-military-use-ai-constructive-inadequate.
Zeiler, Matthew. “Clarifai Mission Statement: Clarifai Contributes AI Solutions.” Clarifai, Inc., 13 June 2018, www.clarifai.com/blog/why-were-part-of-project-maven.