James Bessen, Stephen Michael Impink, Lydia Reichensperger, and Robert Seamans
Artificial Intelligence (AI) startups use training data as direct inputs in product development. These firms must balance numerous trade-offs between ethical concerns and data access without substantive guidance from regulators or existing judicial precedent. We survey these startups to determine what actions they have taken to address these ethical issues and the consequences of those actions. We find that 58% of these startups have established a set of AI principles. Startups with data-sharing relationships with large high-technology firms (i.e., Amazon, Google, Microsoft), that were negatively impacted by privacy regulations, or with prior (non-seed) funding from institutional investors are more likely to establish ethical AI principles. Lastly, startups with data-sharing relationships with large high-technology firms and prior regulatory experience with GDPR are more likely to incur negative business outcomes, such as dropping training data or turning down business, in order to adhere to their ethical AI policies.