Ethics Plan for AIFS


To achieve our goal of developing Artificial Intelligence (AI) tools that transform US food systems by targeting their biggest challenges, the AI Institute for Next Generation Food Systems (AIFS) will require a clear ethical framework to guide both the research and the researchers. We propose to complete two projects that will contribute to socially trustworthy AI for agricultural applications.

One project will result in a set of recommendations that AIFS research cores can adopt to help assure the trustworthiness of their research.

The other will create an ethics curriculum for AIFS researchers, graduate students, and post-doctoral fellows.


To complete both projects, we will conduct interviews with the researchers in each AIFS cluster. To develop the recommendations, we will ask interviewees a series of straightforward questions expressed in lay terms:

  1. What are AI developers and producers asking people to trust them with, and for what purpose?
  2. What accountability, safety, and precautionary methods and practices are in place to assure that AI developers and producers can be trusted to do what they claim to want to do with the data they are soliciting?
  3. How effective are those methods and practices thought to be?
  4. How vigilant are AI developers and producers in attempting to identify new concerns and issues that can arise from their AI applications, and who is responsible for guarding against them? To whom are they accountable, and what mechanisms are used to hold them accountable?

To develop the ethics curriculum, we will interview researchers about the following three themes: Bright Lines, Big Picture, and Deep Questions. This thematic approach has been successfully piloted by Mark Yarborough and Larry Hunter for ethics instruction for both the computational biosciences program at the University of Colorado Denver and the UC Davis Clinical and Translational Science Center. Interviewees will be asked to respond to the following topics:

• What are the bright lines that cannot be crossed in research? Topics such as fraud, plagiarism, and the mistreatment of animals in research are examples of the bright lines that, when crossed, undermine the public’s trust in and support for science. In the AI context, data security is similarly crucial.
• What is the big picture in which science is situated today? Questions about who pays for science, who sets the agenda, who benefits from it, and what the public expects and deserves in return for its support help learners explore the public citizen role of the researcher. Questions about what information scientists and the public have, and how it is disseminated, shape the level of social trust. How will AI developers validate information before using it in their models, and how will users trust that AI developers have access to accurate information?
• What are the deep questions posed by AI in the food system? Examples include: What impact does increased knowledge about plant genetics have on notions of equitable access to technologies? How may AI in agriculture affect our global responsibility to assure food security for all? How should we assess the value of AI technologies that benefit some people and make others worse off? How can inequitable access to the benefits of AI research deepen social divisions? Exploring such questions permits team members to appreciate the expansive social implications of their work and to gain experience relevant for a lifelong engagement with the public about science and its role in our collective human future.

We will work with the socioeconomics and ethics cluster personnel and the leads of the other clusters to refine the interview questions for both projects. Engagement from all components of AIFS is crucial to the success of this project and of the institute overall. We will record and transcribe the interviews; use NVivo or comparable qualitative data analysis software to help analyze and synthesize the responses; and then produce both our recommendations for helping to assure the trustworthiness of AIFS research and the ethics curriculum.

Project Team


The projects are jointly conducted with partners through the AIFS Network:

UC Berkeley

L. Fleming

E. Ligon

UC Davis

P. Ronald

A. Smith

M. Yarborough