
DKU AI Literacy: Policies & Guidelines

Introducing artificial intelligence resources, tools, and ethics for the DKU community.

DKU AI Policies/Guidelines

Duke Kunshan University follows the AI policies of Duke University, which has developed general, customizable policy guidance for faculty to adopt, including the following:

•   Draft your own AI policy

•   Do your homework (self-education about AI)

•   Share the rationale behind your policy

•   Support AI literacy

•   Define acceptable use

•   Explore the continuum of policies:

    ◦   Use prohibited

    ◦   Use only with prior permission

    ◦   Use only with acknowledgment

    ◦   Use freely permitted, with no acknowledgment required

Specific strategies faculty may adopt (see https://www.dukekunshan.edu.cn/center-for-teaching-and-learning/ai-and-teaching/) include:

•   Introduce AI tools in their courses

•   Communicate with students about AI

•   Use AI tools to enhance teaching

•   Design assessments that deter the unauthorized use of AI

•   Emphasize academic integrity

•   Keep accessibility and privacy in mind


Faculty members are also asked to include the following statement in the Academic Integrity section of their syllabi:

“Since AI is a novel technology that combines some of the features of traditional online sources and some of the features of a more responsive actor, be certain to indicate where students have permission to use generative AI tools and when they do not, and be mindful of assessment design that supports your aim.”

So far, AI has been incorporated into several DKU courses:

•   EAP: students are allowed to use AI, within clear limits

•   CC3: students have the option to converse with GenAI

•   CS: students are encouraged to use AI as a resource for faster coding

•   PHYS: experiments with an interactive AI tutor

•   Some courses have banned AI use entirely

•   GenAI-related integrity violations still occur

Policies & Guidelines of Major AI Companies

OpenAI’s Policy

OpenAI’s mission is to ensure that artificial intelligence benefits all of humanity. The company has published safety, trust, and policy principles that aim for broader sharing of and cooperation in AI development. OpenAI’s Usage Policies (https://openai.com/policies/usage-policies/) are comprehensive and provide service-specific guidance, covering areas such as Universal Policies, Building with the OpenAI API Platform, Building with ChatGPT, and the GPT Store, while aiming to maximize innovation and creativity.

Anthropic’s Claude Policy

Anthropic, an AI research and safety company, develops AI systems intended to be reliable, understandable, and controllable. Its flagship AI system is Claude, a family of large language models. Anthropic’s usage policies focus on ensuring ethical, safe, and responsible use, highlighting principles such as safety, ethical use, transparency, fairness, user accountability, research integrity, and collaboration within the AI community. For current details, refer to the company’s official Acceptable Use Policy: https://www.anthropic.com/legal/aup.

Google Gemini’s Policy

Google’s AI principles include objectives such as being socially beneficial, avoiding creating or reinforcing unfair bias, being built and tested for safety, being accountable to people, incorporating privacy design principles, upholding high standards of scientific excellence, and being made available for uses that accord with these principles. Google’s generative AI policies are incorporated into the Google Terms of Service, which were updated on May 22, 2024 (https://policies.google.com/terms/generative-ai).

Google also has a Generative AI Prohibited Use Policy (https://policies.google.com/terms/generative-ai/use-policy), which bans the use of Google's generative AI for activities that are illegal, harmful, or deceptive. It specifically prohibits promoting illegal acts, content that incites harm or violence, spam distribution, bypassing safety protocols, impersonation, spreading misinformation, and creating sexually explicit content, unless for approved educational, scientific, or artistic purposes.

Meta Llama’s Acceptable Use Policy  

Meta has put effort into the responsible development of AI technologies, focusing on fairness, privacy, and safety through various internal and external research programs, though it does not publish as formal a set of public principles as some other companies. Its Acceptable Use Policy gives a long list of prohibited uses, such as violating the law, engaging in or promoting violence or terrorism, and so on. The full list can be found at: https://ai.meta.com/llama/use-policy/