Mississippi Artificial Intelligence Network

AI Policy and Guidance Template
for Higher Education

A planning framework for colleges and universities — not a model policy. Use this to develop, review, and implement your institution-specific approach to AI and generative AI.

Important — Read First

This AI policy template is a planning and governance resource — not a final policy, not legal advice, and not a mandatory model. Institutions should adapt it to their mission, risk posture, legal environment, technological maturity, research profile, operational context, and shared-governance culture.

A sound higher education AI approach should do three things at once: enable legitimate innovation, protect people and institutional interests, and preserve human judgment in areas where academic, ethical, legal, or safety stakes are high.

Institutions may also wish to review related campus resources, such as faculty guidance on generative AI, academic integrity standards, data governance policies, accessibility requirements, procurement procedures, and information security policies.

Table of Contents

Foundational Sections

1. Introduction

2. What this template is, and what it is not

3. Foundational principles institutions may adopt

4. Stakeholder engagement and shared governance

5. Balancing innovation, academic freedom, and responsible use

Core Institutional Sections

6. Administration and leadership

7. Faculty

8. Staff

9. Students

10. Teaching and learning

11. Research and scholarship

12. Academic integrity

13. Data privacy and security

14. Human resources and workforce development

15. Institutional operations and services

16. Accessibility and inclusion

17. Procurement and vendor considerations

18. Risk management, ethics, and compliance

Governance and Resources

19. Governance, review, and continuous improvement

20. Periodic review and revision

21. Key authoritative resources to monitor

22. Implementation checklist

23. Optional closing statement

Foundational Sections

1

Introduction

Purpose and philosophy

This framework is designed to help colleges and universities develop, review, and implement their own institution-specific approaches to artificial intelligence and generative AI. Rather than serving as a final policy, legal advice, or a mandatory model, it functions as a planning and governance resource that institutions can adapt to their mission, risk posture, legal environment, technological maturity, research profile, operational context, and shared-governance culture.

Why institutions need an adaptable framework

Higher education institutions are at widely varying levels of AI maturity, so a flexible framework is more useful than a rigid, one-size-fits-all policy. Institutions need room to account for mission differences, local governance structures, legal obligations, and differing levels of technical and organizational readiness.

Recommended framing statement

“This document provides a framework for developing and maintaining institution-specific policies, guidance, procedures, and best practices related to artificial intelligence (AI) and generative AI. It is intended to support responsible, effective, and mission-aligned use of AI across teaching and learning, research, administration, operations, and student services. It is not legal advice and should be implemented in consultation with appropriate institutional leaders, governance bodies, and subject-matter experts.”

Implementation Considerations

Start with a framework, then issue more targeted policies and procedures by function. Most institutions will be better served by a layered governance approach than by a single master AI policy that tries to cover every scenario.

Common Pitfalls

Overly broad bans, overconfident endorsements, policy language detached from operational realities, and failing to distinguish between low-risk productivity uses and high-risk decision uses.

Stakeholders to Involve

President or chancellor’s office, provost, faculty senate, CIO/CTO, CISO, general counsel, privacy officer, research office, HR, disability/accessibility office, student affairs, libraries, procurement, academic integrity leadership, institutional effectiveness, communications, and student representatives.

2

What This AI Policy Template Is, and What It Is Not

Purpose of this section

To clarify the difference between policy, guidance, procedures, and best practices.

Key questions

What decisions require formal policy? Which need implementation guidance? Which should be handled through local procedures or professional standards? Where does institutional discretion end and legal or regulatory obligation begin?

Sample guidance language

Policy establishes mandatory rules, responsibilities, authorities, and consequences.
Guidance explains how to interpret policy in practice.
Procedures define required operational steps, approvals, workflows, and controls.
Best practices describe recommended approaches that may evolve more quickly than policy.

Distinguishing among policy, guidance, procedures, and best practices lets institutions assign the right level of authority to each document.

Implementation Considerations

Use policy only where durable rules are needed, such as privacy, procurement authority, research compliance, employee use restrictions, or high-stakes decision-making. Use guidance for pedagogy, course-level disclosure expectations, citation of AI use, and acceptable-use examples. Use procedures for security reviews, vendor approvals, incident response, and records management.

Common Pitfalls

Trying to solve everything through one policy document, or putting rapidly changing tool-specific directions into policy rather than guidance.

Stakeholders to Involve

General counsel, compliance, governance leaders, policy office if applicable, CIO/CISO, academic affairs, HR.

3

Foundational Principles Institutions May Adopt

Purpose of this section

To define the values that should shape all downstream AI decisions.

Key questions

What values should anchor institutional AI use? What tradeoffs will govern decisions when innovation and risk collide?

Sample guidance language

The institution may choose to ground its AI approach in principles such as:

  • Human oversight and accountability
  • Academic freedom with responsibility
  • Privacy, security, and confidentiality
  • Accessibility and inclusion
  • Transparency appropriate to context
  • Fairness and non-discrimination
  • Reliability and evidence-based use
  • Lawful, ethical, and mission-aligned deployment
  • Proportionality — controls should match risk

Implementation Considerations

Tie each principle to an operational control. For example, “human oversight” should map to approval and review requirements for consequential uses rather than remain a slogan.

Common Pitfalls

Principles without operational ownership or measurable controls.

Stakeholders to Involve

Cabinet, faculty governance, student leaders, legal, accessibility, ethics or compliance leadership.

4

Stakeholder Engagement and Shared Governance

Purpose of this section

To ensure AI governance is legitimate, durable, and informed by real campus use cases.

Key questions

Who should shape institutional direction? What decisions require consultation, recommendation, or formal approval? How will faculty, staff, and students be heard?

Sample guidance language

“The institution will use shared-governance and participatory processes to develop AI-related policy and guidance. Stakeholder input should be sought from academic leadership, faculty governance, staff leadership, students, IT, privacy, information security, disability/accessibility services, research administration, libraries, and other affected units.”

Implementation Considerations

Create a standing AI governance structure with a clearly defined charter. A common model includes an executive sponsor, a cross-functional AI steering committee, and domain working groups for teaching and learning, research, operations, privacy and security, procurement, and workforce development.

Common Pitfalls

Leaving governance to one office alone, excluding faculty or students, or allowing procurement and adoption to outrun governance.

Stakeholders to Involve

Faculty senate, student government, cabinet, union or labor representatives where applicable, library, instructional design, research compliance, IT, HR, legal.

5

Balancing Innovation, Academic Freedom, and Responsible Use

Purpose of this section

To avoid both institutional paralysis and careless adoption.

Key questions

How will the institution preserve academic freedom while setting minimum expectations for integrity, safety, privacy, and lawful use? When is experimentation encouraged, when is it constrained, and when is it prohibited?

Sample guidance language

“The institution recognizes that AI tools may support learning, scholarship, creativity, accessibility, and operational efficiency. At the same time, it recognizes that AI can introduce risks related to accuracy, bias, privacy, intellectual property, security, overreliance, and inequitable impact. Institutional expectations should therefore be risk-based, context-sensitive, and grounded in academic freedom, professional judgment, and responsible use.”

Implementation Considerations

Use a risk-tier model:

  • Low risk: drafting, brainstorming, summarization of non-sensitive material, coding support in sandboxed environments
  • Moderate risk: student-facing automation, internal workflow support, administrative decision support
  • High risk: decisions affecting admission, grading, discipline, employment, benefits, safety, research compliance, or legal rights
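The tier model above can be made operational by mapping each tier to required review steps. The sketch below is purely illustrative: the tier names, review names, and the mapping itself are assumptions that an institution would define through its own governance process.

```python
from enum import Enum

class RiskTier(Enum):
    LOW = "low"            # drafting, brainstorming, sandboxed coding support
    MODERATE = "moderate"  # student-facing automation, workflow support
    HIGH = "high"          # admission, grading, discipline, employment, safety

# Hypothetical minimum review steps per tier; real requirements would come
# from the institution's governance bodies, not this sketch.
REQUIRED_REVIEWS = {
    RiskTier.LOW: [],
    RiskTier.MODERATE: ["privacy", "security"],
    RiskTier.HIGH: ["privacy", "security", "legal",
                    "accessibility", "executive-approval"],
}

def reviews_for(tier: RiskTier) -> list[str]:
    """Return the minimum review steps required before deployment."""
    return REQUIRED_REVIEWS[tier]
```

Encoding the mapping in one published place keeps "human oversight" auditable: a proposed use cannot skip a review step without a visible change to the mapping.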

Common Pitfalls

Confusing academic freedom with immunity from institutional obligations, or treating all AI use as prohibited or all AI use as acceptable.

Stakeholders to Involve

Provost, faculty senate, deans, academic integrity leadership, legal, compliance, student affairs.

Core Institutional Sections

6

Administration and Leadership

Purpose

To define leadership responsibilities, authorities, and institutional direction.

Key questions

What is the institution’s AI strategy? Who owns enterprise decisions? What uses require executive approval? How will resource allocation, risk tolerance, and public messaging be aligned?

Sample guidance language

“The institution should articulate an AI strategy aligned with its mission, academic priorities, student success goals, research profile, operational needs, and public responsibilities. Executive leadership should designate accountable owners for enterprise AI governance, risk management, and implementation.”

Implementation Considerations

Set decision rights for enterprise licenses, approved tools, communications, and exceptions. Require periodic reporting to senior leadership on adoption, incidents, training, accessibility, procurement, and compliance.

Common Pitfalls

Fragmented adoption, shadow AI use, unclear accountability, and leadership messaging that encourages rapid adoption without governance capacity.

Stakeholders to Involve

President or chancellor, provost, CFO, CIO, CISO, chief human resources officer, general counsel, research leadership, student affairs, communications.

7

Faculty

Purpose

To support academic freedom while clarifying faculty expectations and responsibilities.

Key questions

What discretion do faculty have over AI use in their courses? What disclosures should be expected? How should AI affect assignment design, assessment, grading, advising, and student support?

Sample guidance language

“Faculty retain pedagogical discretion consistent with institutional policy, accreditation expectations, program requirements, and applicable law. Faculty should communicate course-specific expectations for AI use, including whether use is encouraged, limited, or prohibited for particular activities. Faculty remain responsible for instructional quality, fair evaluation, student privacy, accessibility, and academic integrity.”

Implementation Considerations

Provide model syllabus language and assignment-level disclosure options. Encourage faculty to redesign assessments where needed rather than relying solely on detection-oriented responses.

Common Pitfalls

Inconsistent expectations across courses, inaccessible or punitive enforcement, overreliance on AI-detection claims, and using unvetted tools with student data.

Stakeholders to Involve

Faculty senate, provost, deans, teaching and learning center, instructional designers, library, accessibility office, academic integrity leadership.

8

Staff

Purpose

To guide responsible staff use of AI in administrative and support functions.

Key questions

Which staff uses are permissible? What approvals are needed? Can staff input confidential, regulated, or sensitive information into public or vendor AI tools? How will staff verify accuracy and bias?

Sample guidance language

“Staff may use institutionally approved AI tools for authorized business purposes in accordance with data-classification, privacy, security, records-retention, and procurement requirements. Staff must not input confidential, regulated, protected, or otherwise restricted institutional data into unapproved AI systems. Human review is required before AI-generated outputs are used in decisions, communications, records, or services.”

Implementation Considerations

Link staff guidance to data-classification policy and security standards. Provide role-based training for HR, advising, enrollment, communications, finance, IT, and student services.

Common Pitfalls

Unauthorized disclosure, hallucinated outputs in official communications, inaccessible student-facing materials, and informal automation of decisions with equity implications.

Stakeholders to Involve

HR, IT, CISO, privacy officer, records management, division leaders, legal, accessibility office.

9

Students

Purpose

To define student rights, responsibilities, and support expectations.

Key questions

What uses are allowed in coursework, advising, co-curricular settings, and campus services? What disclosures are expected? How will institutions support students who need AI literacy but may not have equal access?

Sample guidance language

“Students may use AI only as permitted by institutional policy, program requirements, instructor direction, and applicable law. When AI use is permitted, students may be required to disclose the nature and extent of such use. Students remain responsible for the accuracy, originality, integrity, and appropriateness of work submitted in their name.”

Implementation Considerations

Provide plain-language student guidance, examples of allowed and disallowed use, and AI literacy resources, and explain why these expectations matter for learning, integrity, and responsible use.

Common Pitfalls

Equity gaps, hidden use caused by unclear rules, punitive approaches that outpace evidence, and failure to teach students how to evaluate AI outputs critically.

Stakeholders to Involve

Student affairs, academic affairs, faculty, libraries, tutoring and learning support, accessibility office, student government.

10

Teaching and Learning

Purpose

To guide pedagogical use of AI in ways that support learning rather than replace it.

Key questions

How should AI be used for course design, feedback, tutoring, assessment, and accessibility? What learning outcomes remain fundamentally human? How should students be taught to use AI critically and responsibly?

Sample guidance language

“The institution encourages pedagogically intentional AI use that advances learning outcomes, disciplinary thinking, and student development. However, AI should not substitute for essential student learning, instructor judgment, or required demonstration of competencies unless such substitution is explicitly justified and approved within program, course, or assessment design.”

Implementation Considerations

Encourage faculty to identify where AI can support practice, feedback, revision, and accessibility — and where direct demonstration of knowledge, reasoning, performance, or authorship is essential. Faculty should explain those distinctions to students at the course and assignment level.

Common Pitfalls

Assessment misalignment, deskilling, student overreliance, inaccessible tool use, and use of AI-generated content without review for accuracy or bias.

Stakeholders to Involve

Provost, faculty, teaching and learning center, instructional design, accessibility office, libraries, academic technology.

11

Research and Scholarship

Purpose

To address AI use in research design, analysis, writing, publishing, peer review, and grant activity.

Key questions

What forms of AI use in research are permitted, restricted, or prohibited? How should researchers disclose AI use? What rules apply to sensitive data, human subjects, controlled data, and sponsor requirements?

Sample guidance language

“Researchers remain responsible for the integrity, validity, originality, and compliance of all scholarly work and sponsored activity. Use of AI in research must comply with sponsor rules, human subjects protections, data-use agreements, export control requirements, information security standards, publication ethics, and applicable intellectual property rules. Researchers must not disclose restricted or sensitive research data to unapproved AI systems.”

Implementation Considerations

Coordinate guidance with sponsored programs, IRB, IACUC where relevant, export control, data governance, libraries, and research computing. Require sponsor-specific review for grant applications and peer review.

Common Pitfalls

Confidentiality breaches, undisclosed AI use, fabricated citations or analyses, sponsor noncompliance, intellectual property disputes, and improper use of AI in peer review or manuscript preparation.

Stakeholders to Involve

Vice president for research, sponsored programs, IRB, compliance, research computing, library and scholarly communications, general counsel, information security, export control.

12

Academic Integrity

Purpose

To align integrity expectations with current realities of AI-supported work.

Key questions

How should academic dishonesty be defined when AI is involved? What should count as unauthorized assistance, misrepresentation, or falsification? What evidence standards should apply in conduct processes?

Sample guidance language

“Academic integrity standards apply regardless of whether work is produced with or without AI assistance. Misrepresentation of AI-generated or AI-assisted work as wholly one’s own, unauthorized use of AI where prohibited, fabrication of sources or evidence, and other deceptive uses may constitute academic misconduct. Institutions should use fair, transparent, evidence-based processes and should not rely solely on automated detection outputs to determine misconduct.”

Implementation Considerations

Update integrity policies to address disclosure, unauthorized assistance, fabricated content, falsified citations, and assignment-specific rules. Train faculty and conduct officers on evidence standards and due process.

Common Pitfalls

Policies that are vague, unenforceable, or overly dependent on AI-detection tools.

Stakeholders to Involve

Academic affairs, faculty senate, student conduct, legal, deans, registrar, teaching and learning center.

13

Data Privacy and Security

Purpose

To protect institutional, personal, and regulated data in AI use.

Key questions

What data may or may not be used with AI tools? What approvals are required for student data, employee data, health data, research data, donor data, or security-sensitive information? How will the institution handle retention, vendor training rights, and cross-border transfers?

Sample guidance language

“No member of the institution may input restricted, confidential, regulated, or otherwise sensitive institutional data into an AI system unless the system has been formally approved for that data category and appropriate contractual, technical, and administrative controls are in place. The institution should require data minimization, purpose limitation, access control, logging, retention controls, and review of model-training and data-use terms.”
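One way to enforce the rule above is a default-deny approval registry consulted before data reaches an AI system. This is a hedged sketch only: the tool names and data categories are invented for illustration, and a real control would live in a gateway or middleware layer, not application code.

```python
# Hypothetical registry of which data classifications each AI tool has been
# formally approved to handle. Tool names and categories are illustrative.
APPROVED_DATA_CATEGORIES: dict[str, set[str]] = {
    "campus-chatbot": {"public"},
    "enterprise-copilot": {"public", "internal"},
}

def may_use(tool: str, data_category: str) -> bool:
    """Default-deny: permit use only with an explicit approval on record."""
    return data_category in APPROVED_DATA_CATEGORIES.get(tool, set())
```

For example, `may_use("campus-chatbot", "ferpa-student-records")` evaluates to `False`, reflecting the default-deny posture the sample language calls for: an unlisted tool or data category is blocked until formally approved.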

Implementation Considerations

Integrate AI use with data-classification policy, privacy review, and security architecture review. Require special review for FERPA-covered data, protected health information, payment data, research data, export-controlled data, and HR records.

Common Pitfalls

Uploading sensitive data into public tools, assuming vendor default settings are acceptable, failing to review retention and training clauses, and not separating experimentation from production environments.

Stakeholders to Involve

Privacy officer, CISO, CIO, legal, records management, research office, registrar, HR, procurement, internal audit.

14

Human Resources and Workforce Development

Purpose

To guide AI use in employment-related processes and workforce capability building.

Key questions

Can AI be used in recruiting, screening, evaluation, performance management, training, or employee support? What uses require review because they affect employment rights or conditions? What workforce development does the institution owe employees?

Sample guidance language

“The institution should exercise heightened caution in using AI in employment-related contexts, especially where AI may influence hiring, screening, performance, discipline, promotion, workload, or workplace monitoring. Human review, legal review, and bias evaluation are required before deployment of AI in consequential employment contexts. The institution should provide workforce development to help employees use approved AI tools responsibly and effectively.”

Implementation Considerations

Train supervisors and HR staff. Separate low-risk productivity use from high-stakes employment decisions. Review any automated employment analytics for disparate impact, explainability, and legal compliance.

Common Pitfalls

Using AI tools in hiring or evaluation without validation, adequate notice, or sufficient review for bias and accessibility.

Stakeholders to Involve

HR, legal, labor relations where applicable, equity leadership where applicable, accessibility office, IT and security.

15

Institutional Operations and Services

Purpose

To guide operational AI use in student services, finance, communications, facilities, advising, and other administrative functions.

Key questions

Which operational use cases are appropriate? Which are too high-risk without formal review? How will service quality, fairness, records retention, and accessibility be maintained?

Sample guidance language

“Operational AI use should be mission-aligned, risk-assessed, and subject to appropriate human oversight. AI may support service delivery, workflow efficiency, and information access, but it should not make final decisions in high-stakes contexts without authorized review, documented controls, and clear accountability.”

Implementation Considerations

Pilot first, measure outcomes, require exception review for consequential uses, and keep manual fallback processes. Consider public records obligations, records retention, and public-facing disclosures when AI is used in service delivery.

Common Pitfalls

Chatbots giving incorrect institutional guidance, automation drift, inaccessible services, and AI quietly shaping decisions that should remain reviewable by humans.

Stakeholders to Involve

Division heads, CIO, CISO, privacy officer, legal, records management, student affairs, communications, accessibility office.

16

Accessibility and Inclusion

Purpose

To ensure AI adoption supports access rather than creating new barriers.

Key questions

Do approved AI tools meet accessibility expectations? How will AI be used to support accommodation without replacing required accessibility processes? How will the institution monitor inequitable impact?

Sample guidance language

“The institution is committed to ensuring that AI-related technologies, content, and services are accessible and inclusive. Accessibility review should be part of AI procurement, implementation, and content governance. AI should be used to expand access where appropriate, but not as a substitute for legal accessibility obligations, individualized accommodations, or universal design practices.”

Implementation Considerations

Require accessibility review in procurement and content workflows. Digital accessibility should be treated as a core control rather than an afterthought.

Common Pitfalls

Assuming AI-generated captions, alt text, summaries, or translations are automatically sufficient, and failing to test tools with disabled users.

Stakeholders to Involve

Accessibility and disability office, procurement, IT, web and digital teams, libraries, instructional design, legal.

17

Procurement and Vendor Considerations

Purpose

To ensure AI tools are contractually, technically, and operationally suitable for higher education use.

Key questions

What vendor review is required? How will the institution evaluate privacy, security, accessibility, model training rights, data retention, auditability, and uptime? What extra questions are needed for AI-enabled products?

Sample guidance language

“All AI-enabled products and services must undergo appropriate institutional review before acquisition or deployment. Review should address privacy, cybersecurity, accessibility, legal terms, data ownership, data retention, model training rights, subcontractors, security architecture, bias and quality claims, and business continuity.”

Implementation Considerations

Add AI-specific procurement questions, including:

  • Is customer data used to train vendor models?
  • Can training on institutional data be disabled contractually and technically?
  • What logs and audit trails are available?
  • What content filtering and abuse protections exist?
  • How are third-party models, APIs, and datasets managed?
  • How are accessibility claims validated?
  • What jurisdictions govern storage and transfers?
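Vendor answers to these questions are easier to track and audit when recorded as structured data rather than scattered across email threads. A minimal sketch follows; the abbreviated field names and the `unanswered` helper are assumptions, not a prescribed format.

```python
# Illustrative keys for recording vendor answers to the AI-specific
# procurement questions listed above (wording abbreviated).
AI_VENDOR_QUESTIONS = [
    "trains_on_customer_data",
    "training_opt_out_contractual_and_technical",
    "audit_logs_available",
    "content_filtering_and_abuse_protections",
    "third_party_models_and_datasets_managed",
    "accessibility_claims_validated",
    "storage_and_transfer_jurisdictions",
]

def unanswered(responses: dict[str, str]) -> list[str]:
    """List questions a vendor has not yet answered."""
    return [q for q in AI_VENDOR_QUESTIONS if not responses.get(q)]
```

A review cannot close while `unanswered` returns anything, which keeps incomplete vendor responses from slipping through.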

Common Pitfalls

Buying AI features embedded in ordinary software without realizing the data and security implications, or accepting broad vendor rights to retain and reuse institutional data.

Stakeholders to Involve

Procurement, CIO, CISO, privacy officer, accessibility office, legal, business owner, records management.

18

Risk Management, Ethics, and Compliance

Purpose

To align AI use with institutional risk governance and legal obligations.

Key questions

What legal and regulatory regimes apply? Which uses are too risky or legally uncertain? How will the institution document, escalate, and monitor AI-related risks?

Sample guidance language

“The institution will manage AI-related risks using a documented, risk-based approach that is proportionate to context, impact, data sensitivity, and the degree of autonomy granted to the system. Certain uses may require legal review, privacy review, security review, accessibility review, ethics review, or executive approval before deployment.”

Implementation Considerations

Adopt a review matrix tied to risk tiers. Require enhanced review for uses affecting rights, opportunities, safety, funding, employment, admission, grading, discipline, disability accommodations, or research compliance.
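The enhanced-review trigger described above reduces to a simple set-intersection check. The domain list mirrors the text; everything else (function name, data shape) is an assumption for illustration.

```python
# Domains where any AI involvement triggers enhanced review, per the
# implementation considerations above.
ENHANCED_REVIEW_DOMAINS = {
    "rights", "opportunities", "safety", "funding", "employment",
    "admission", "grading", "discipline", "disability_accommodations",
    "research_compliance",
}

def needs_enhanced_review(affected_domains: set[str]) -> bool:
    """True if a proposed use touches any domain requiring enhanced review."""
    return bool(affected_domains & ENHANCED_REVIEW_DOMAINS)
```

Because the check is a set intersection, a use case touching even one protected domain escalates, regardless of how many low-risk domains it also touches.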

Common Pitfalls

Treating ethics as optional, overlooking cross-border compliance, and failing to revisit risk classifications as tool capabilities change.

Stakeholders to Involve

General counsel, compliance, privacy officer, CISO, internal audit, research compliance, HR, accessibility office, ethics committees where applicable.

Governance and Resources

19

Governance, Review, and Continuous Improvement

Purpose

To make the framework durable as AI changes.

Key questions

How often will policies and guidance be reviewed? What signals will trigger revision? How will the institution monitor incidents, adoption, emerging law, and sector guidance?

Sample guidance language

“The institution will review AI-related policy and guidance on a regular schedule and more frequently when material changes occur in law, regulation, accreditation expectations, sponsor rules, technology capabilities, institutional risk, or operational experience.”

Implementation Considerations

Review at least annually, with interim updates for major legal or technical changes. Maintain a central inventory of approved tools, restricted uses, exceptions, incidents, and training completion. Keep policy stable and update procedures and best-practice guidance more frequently.

Common Pitfalls

Writing static policy for fast-moving technology, failing to sunset outdated guidance, and not learning from pilots or incidents.

Stakeholders to Involve

AI steering committee, cabinet sponsor, legal, compliance, IT and security, academic affairs, research office, procurement, accessibility.

20

Periodic Review and Revision

Purpose

To establish a formal cadence for continuous improvement.

Sample guidance language

“The institution should review this framework and any related AI policies, procedures, and guidance no less than annually. Interim review should occur when there are significant legal, regulatory, technological, contractual, security, accessibility, or operational developments.”

Suggested review triggers

New legislation or regulatory guidance, major vendor or platform changes, documented incidents, disciplinary or litigation issues, updated sponsor guidance, accreditation concerns, accessibility findings, and substantial changes in institutional adoption.

21

Key Authoritative Resources and Research Areas to Monitor

Institutions should monitor, at minimum, the following categories of resources. Over time, these sources can help institutions keep their AI framework current, practical, and aligned with emerging expectations.

🏛️ Core Governance and Risk

NIST AI Risk Management Framework and NIST Generative AI Profile.

🎓 Higher Education Practice

EDUCAUSE AI Landscape Study, EDUCAUSE policy and ethics resources, and HECVAT updates.

🌐 International Guidance

UNESCO guidance on generative AI in education and research; OECD AI Principles and OECD due diligence guidance.

♿ Accessibility

W3C WCAG 2.2 and, for applicable public institutions, relevant digital accessibility requirements.

🔒 Privacy and Student Records

U.S. Department of Education student privacy guidance, FERPA-related resources, and institution-specific state privacy requirements.

🔬 Research and Scholarly Communication

NIH notices and sponsor guidance, NSF policy notices where relevant, ICMJE recommendations, COPE resources, and discipline-specific publisher guidance.

©️ Copyright and Intellectual Property

U.S. Copyright Office AI reports and registration guidance.

🛡️ Security

OWASP guidance for large language model and generative AI applications, plus institutional information security standards.

🇪🇺 International Regulatory Developments

EU AI Act implementation materials for institutions with international activity.

22

Implementation Checklist for Institutions

Use this checklist when developing or revising an institutional AI framework and related guidance.

Have we clearly distinguished policy, guidance, procedures, and best practices?
Have we defined decision rights and accountable owners?
Have we involved shared-governance bodies and affected stakeholders?
Have we identified approved, restricted, and prohibited use cases?
Have we tied expectations to data-classification and privacy rules?
Have we set review requirements for high-risk and consequential uses?
Have we addressed academic freedom and faculty discretion appropriately?
Have we defined student expectations and disclosure standards clearly?
Have we updated academic integrity language to reflect AI realities?
Have we addressed research, grants, publishing, and sponsor compliance?
Have we built in accessibility review and relevant accessibility expectations?
Have we embedded procurement, HECVAT, privacy, and security review?
Have we addressed vendor training rights, retention, and auditability?
Have we created role-based training for faculty, staff, and students?
Have we established documentation, exception, and incident processes?
Have we identified where legal counsel or compliance review is required?
Have we set a formal review cycle and trigger events for revision?
Have we identified the authoritative resources we will monitor over time?

23

Optional Closing Statement Institutions May Adapt

“This framework is intended to help the institution pursue responsible, effective, and human-centered use of artificial intelligence. Because AI technologies, legal requirements, institutional practices, and scholarly norms continue to evolve, this framework should be interpreted as a living governance resource. Institutional units should adapt it in consultation with appropriate governance bodies and subject-matter experts, including legal counsel, privacy and security leaders, research compliance offices, accessibility experts, and academic leadership, as applicable.”

This AI policy template is intended to serve as a starting point for institutional planning, customization, and continuous improvement.

Mississippi Artificial Intelligence Network

Supporting AI education and governance across Mississippi

Questions about this framework or MAIN’s AI programs? Contact us.
