Generative AI Ethics & Privacy
As generative AI tools proliferate in academia, experts caution users to be aware of ethical considerations such as equitable access, data control and privacy, bias in underlying datasets, and convincing fakes. Whiting School of Engineering (WSE) encourages faculty and staff to learn about AI ethics and discuss these concerns with students. If you plan to use generative AI tools in your course(s), it is crucial that you inform students of potential risks, share appropriate policies, and implement strategies for mitigating these issues.
Generative AI Risks
Learning Opportunities and Strategies for Mitigating Risks
Despite risks involved with using generative AI, there are many ways to incorporate these tools into teaching and learning—and they may even provide benefits to accessibility (e.g., automated alt text, audio descriptions, document tagging, etc.), personalized learning, and student support (e.g., 24/7 real-time AI teaching assistants). If you intend to use these tools in your course(s), be aware of the risks and consider best practices for mitigating potential issues.
Generative AI is not something we should seek to eliminate. Although there are risks, it is an astounding resource that can be leveraged if we understand its capabilities and design curricula focused on effective teaching and learning.
Course Policies on AI
If you plan to have students use AI tools in your course(s), WSE recommends that you clearly outline course policies and communicate concerns about personally identifiable information (PII) and data privacy, including how data is collected and stored. You should also allow students to share their concerns with you and to opt out of sharing their PII, which may require providing alternative activities, assignments, or approaches for students who opt out. Consider including language from the following widely shared syllabus template:
Be sure to advise students against sharing their personally identifiable information and other sensitive data without appropriate consent, which includes understanding the technology terms of service and privacy policies. Students should be aware that any data submitted to generative AI tools may be used to train the underlying data models and appear in future outputs.
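One practical step faculty can demonstrate is redacting obvious identifiers from text before it is submitted to an external tool. The sketch below is a minimal illustration using Python's standard `re` module; the patterns shown (email addresses and US-style phone numbers) are simplistic assumptions, and real PII detection requires far more thorough tooling and review.

```python
import re

# Hypothetical placeholder tags and regex patterns for two common
# identifier types; these patterns are intentionally simple and will
# miss many real-world formats.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace obvious email addresses and US-style phone numbers
    with placeholder tags before text leaves the student's machine."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text
```

A sketch like this can anchor a class discussion about what counts as sensitive data and why automated redaction alone is never sufficient consent protection.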
AI for Activities or Assignments
If you intend to make AI tool usage mandatory for activities or assignments, WSE recommends providing alternatives for students who do not wish to share personally identifiable information. Alternatives may include:
- Having students use a pseudonym and/or generic email address
- Having faculty create a shared account for students to use (that is not tied to the faculty member’s personal account)
- Providing alternative assignments or activities that allow students to achieve the same learning goals without using these tools
Before implementing any of these options, it is essential to review the Terms of Use. For example, OpenAI’s Terms of Use currently state that access credentials or accounts should not be shared with individuals outside of the organization, and users are responsible for all activities conducted using their credentials.
Ensuring Equitable Access
If you do use AI tools in your courses, WSE recommends taking steps to ensure equitable access for all students. Some generative AI tools, including ChatGPT, offer paid service tiers (e.g., ChatGPT Plus, which can write and execute code, among other enhanced features). Faculty should design course activities in ways that do not disadvantage students who cannot afford paid or premium versions.
Addressing Bias and Harmful Content
Generative AI tools are based on Large Language Models (LLMs) that ingest vast amounts of data. When the underlying data contains biased, discriminatory, abusive, or unreliable content, the outputs of these tools can be similarly problematic, offensive, and/or inaccurate. WSE recommends that faculty and staff acknowledge these concerns, discuss them with students, and implement strategies for mitigating these issues when using AI tools in teaching and learning.