Use this checklist to help develop and implement effective AI policies
Practical Steps for Creating or Refining a Policy
We hear constantly about the need for an artificial intelligence (AI) policy at our institutions, but the noise surrounding the topic is overwhelming and can leave many institutions unsure of where to start. URMIA recently hosted a webinar I delivered, “AI Policy in Higher Education: Where to Start and What You Need to Know.” Here, I’ll share a checklist to help you get started. To dig deeper into the subject, URMIA members can listen to the full recording of the webinar in the URMIA Library.
Let’s get started with the recommended steps.
1. Establish Purpose & Principles
Clarify why the institution is creating an AI policy and how AI aligns with the institutional mission and values. This ensures that the policy supports innovation, academic freedom, and responsible, risk-aware use rather than simply restricting new tools.
- Define the purpose of AI governance.
- Align with mission/values and academic culture.
- Emphasize empowerment, not prohibition.
- Identify how AI adds value across campus.
2. Build a Cross-Functional Governance Structure
AI governance must include all major user groups, because AI affects every corner of the institution differently. A diverse committee provides better risk visibility, makes the policy more equitable, and increases community buy-in.
- Include faculty, students, researchers, IT, legal, staff, and regulated areas.
- Clarify authority, roles, and decision-making responsibilities.
3. Map the Institutional Landscape
Understand who is using AI, how they’re using it, and where risks or inequities exist. This step ensures that the policy fits the reality of your institution rather than resting on assumptions.
- Identify all user groups and their needs/risks.
- Document existing AI use cases.
- Assess AI literacy and access gaps.
- Flag high-risk units needing special rules.
4. Evaluate Key AI Risks
AI introduces new forms of bias, accuracy issues, privacy concerns, and integrity risks that traditional policies don’t cover. A clear risk profile helps determine safeguards that are practical and proportionate.
- Bias, fairness, and discrimination risks.
- Accuracy, hallucinations, and transparency issues.
- Data privacy and vendor data handling.
- Security vulnerabilities.
- Academic integrity and reputational risk.
5. Set Guiding Rules for Ethical Use
Provide simple, universal principles that apply regardless of the specific tools in use. Evergreen rules prevent your policy from becoming outdated as technologies evolve.
- Require transparency and disclosure when appropriate.
- Define acceptable vs. prohibited uses.
- Require human review of AI outputs.
- Keep principles tool-agnostic and long-lasting.
6. Determine Policy Structure
Decide how AI governance will be presented: one master policy or separate but connected sub-policies. What matters most is consistency across audiences and alignment with the same institutional principles.
- Choose one overarching policy with sub-policies or separate academic/operational policies.
- Ensure scalability and flexibility.
7. Evaluate & Approve Tools/Vendors
Institutional AI use requires clear standards for security, data use, and transparency. Without vendor vetting, users will turn to personal tools, creating uncontrolled risk exposure.
- Establish criteria for vendor trust and transparency.
- Identify institution-approved tools.
- Clarify rules for personal vs. institutional AI accounts.
8. Provide Education & AI Literacy
AI literacy is essential for safe, ethical, and effective use of AI tools. Training reduces misuse, improves transparency, and builds trust across campus.
- Offer training for staff, faculty, and students.
- Explain risks, limitations, and best practices.
- Create communities of practice or AI cohorts.
- Make documentation easy to find and understand.
9. Pilot, Test & Collect Feedback
Pilot programs help institutions test tools and policies with real users before large-scale adoption. Feedback prevents unintended consequences and builds acceptance.
- Pilot tools with diverse users.
- Gather structured feedback.
- Adjust policies and training materials accordingly.
10. Communicate Clearly
The biggest failure point in AI policy is a lack of awareness; users frequently report not knowing whether their institution’s policy even exists. Clear, highly visible communication and socialization help promote adoption and reduce risk.
- Make policies searchable and clearly labeled “AI.”
- Share during orientations and professional development.
- Provide FAQs and examples tailored to user groups.
11. Monitor, Update & Evolve
AI changes quickly, so AI policy cannot be “set it and forget it.” Regular review ensures the policy stays relevant, safe, and effective. To the extent possible, write the policy so that its principles remain consistent while anticipating that the technology and its uses will evolve.
- Set scheduled review cycles.
- Track emerging risks and new tools.
- Maintain cross-campus collaboration.
The goal is to empower people to use AI safely, provide guardrails, and try not to get in the way. People are already using these tools extensively, regardless of institutional policies; working against that reality is a losing effort. What we can do, and where we should focus our efforts, is build the infrastructure and clear the pathway for safe and effective use.
AI assisted with summarizing the webinar; humans reviewed the summary prior to publication.
3/24/2026
By Michelle Johnson, Assistant Director, MIT Emergency Management, Massachusetts Institute of Technology