AI Guidance and Best Practices

This administrative guidance directs GW staff and administrators on the responsible use and procurement of artificial intelligence (AI) technologies and informs them about GW-approved AI tools available for administrative functions. Guidance for AI in academic work is provided separately in “GW’s Guidelines for Using Generative Artificial Intelligence in Academic Work.”

AI refers to computer systems designed to perform tasks that typically require human intelligence. These systems use advanced analytical and logic-based techniques, including Machine Learning (ML), to interpret patterns in large datasets, make predictions, and inform actions. ML is a core AI component where systems learn from data to improve their performance on specific tasks without being explicitly programmed for every scenario.

Generative AI is a specialized form of AI that can create new, original content such as text, images, or code based on inferences drawn from existing information. Trained on vast amounts of data, these tools (like chatbots and large language models) learn underlying patterns to produce novel outputs in response to prompts. They can assist with administrative tasks like drafting, summarizing, and organizing information, freeing up human expertise for work requiring deeper insight and collaboration. Results of generative AI should always be reviewed by humans for accuracy and context.

While generative AI is a popular example of AI, the principles outlined in this guidance apply broadly to other AI applications used in administrative contexts, such as automation, analytics, and predictive tools.

When used thoughtfully, AI can enhance productivity and support decision-making. When used carelessly, it risks exposing sensitive data, introducing bias, or eroding institutional trust. This guidance promotes appropriate, value-added AI use that reinforces professional judgment and aligns with university policy.

This guidance aligns with GW's information technology, procurement, and privacy policies, and its Code of Ethical Conduct.

Guiding Principles for Administrative Use of AI

  Use Only GW-Approved AI Tools

Use only AI tools that have been reviewed and approved through GW’s procurement and risk review process. This practice is vital for ensuring security, compliance, and responsible AI use, especially when handling non-public information.*

  • For approved tools, always sign in with your GW UserID so that required security and data protections apply.
  • If you are unsure of a tool’s approval status, check the Approved AI Tools List before use to ensure compliance with university policy.

*Non-public information is data that is not publicly available. Examples include, but are not limited to: internal communications, student/employee data, unpublished research, drafts, and budget materials. Even when such data is anonymized, take care to prevent exposure of sensitive or identifiable details.

This guidance aligns with GW’s Cybersecurity Risk Policy, Identity and Access Management Policy, and Acceptable Use of IT Resources Policy.

  Protect GW Institutional Data and Personal Information

Be vigilant about the data you input into AI tools. All tools used must be GW-approved, but even among approved tools data suitability varies: some data types require more protection or are unsuitable for certain AI applications. To handle data appropriately with approved tools:

Understand Privacy & Data Classifications

First, determine the classification of your data. Review GW's Privacy Guidance for AI and Data Classification Guide for full definitions and learn to identify:

  • Public Data: e.g., public website content, event information.
  • Restricted Data: e.g., non-public internal communications, plans, contracts – not for general public access.
  • Regulated Data: e.g., legally protected data like FERPA/HIPAA records, government IDs – subject to specific laws and representing the highest level of data sensitivity.

Match Approved Tools to Data Sensitivity

Once classified, ensure data is used only in GW-approved AI tools appropriate for its sensitivity level:

  • Public Data: May be used with any GW-approved AI tool, provided all other guidelines are followed.
  • Restricted Data: Must only be used in GW-approved AI tools specifically designated for handling this level of data sensitivity. Consult the Approved AI Tools List to verify a tool’s suitability for restricted data.
  • Regulated Data: Due to its extreme sensitivity and legal protections, regulated data may only be used in the very limited number of GW-approved AI tools explicitly certified by GW for this specific purpose. Extreme caution is required; always verify a tool’s certification for regulated data in the Approved AI Tools List before any use.
  • Special Care for Personal Information (PII): PII, regardless of its primary classification (which could be Restricted or Regulated), requires extreme caution. Use it only in AI tools explicitly approved for PII and regulated data, and only when its use is essential and minimized.

Protect the Privacy of Personal Information

Before using any data with AI, especially PII, carefully consider whether your objectives can be achieved without using personal information, or if de-identified aggregate data would suffice. Remove any identifying details not strictly necessary for your task. For example, an AI analysis of support tickets can often be performed effectively without including individual GW UserIDs or other PII.

This guidance aligns with GW's Privacy Guidance for use of Artificial Intelligence, Privacy of Personal Information Policy, Data Protection Guide, and Cybersecurity Risk Policy.

  Keep a Human in the Loop

AI tools can assist with tasks like brainstorming, summarizing, or drafting in your GW role, but they are designed to complement, not replace, your expertise and judgment. You are ultimately responsible for ensuring all AI-assisted work meets GW’s operational and ethical standards.

To do this effectively:

  • Critically Evaluate & Revise: Always treat AI-generated content as a first draft. Verify its accuracy, originality, context, completeness, and tone using your own knowledge. Be aware that AI can produce errors or “hallucinations” and should never be accepted at face value. Thoroughly review and revise all output to ensure it aligns with GW’s data quality standards before use or sharing.
  • Check for Copyright Issues: AI content may incorporate copyrighted material from its training data (the information the AI learned from, such as books, websites, or documents). Do not assume AI-generated content is cleared for use without careful review for potential infringement.
  • Identify and Mitigate Bias: Examine AI outputs for potential bias or discrimination. Apply objectivity and a critical perspective to avoid perpetuating harmful assumptions.
  • Maintain Final Oversight: Your decisions, interpretations, and quality assurance are paramount. You are accountable for the final product.

  Practice Transparency

When you use AI to assist in creating content or delivering services, especially for materials or interactions involving others, be clear about its involvement.

To practice transparency effectively:

  • Disclose AI Assistance in Content: When sharing content that has been AI-generated or significantly AI-assisted, provide a clear note about AI’s role.
    • Example: A simple statement such as, “Draft created with AI assistance and reviewed by [Your Name]” is often sufficient.
  • Be Transparent About AI in Meetings and Collaboration: If AI tools are utilized during virtual meetings or collaborative sessions (e.g., for AI-powered transcription, summarization, or automated note-taking), review GW’s privacy considerations for virtual platforms. Ensure participants are appropriately notified to uphold transparency and privacy standards.
  • Ensure Team Clarity (Managerial Responsibility): Managers should discuss the use of AI tools with their teams. This includes setting clear expectations for applying AI to tasks and ensuring all team members are informed about and aligned on disclosure practices.

  Uphold GW Brand Consistency

Ensure all AI-generated content strictly adheres to GW’s official branding and communication standards.

  Explore, Learn, and Stay Up to Date

By staying informed and collaborative, you can help foster a culture of responsible AI use at GW. Visit the Training and Resources page for:

  • Tutorials on using approved tools.
  • Tips for effectively integrating AI into your role.

  Request a Consultation

GW IT is available to assist if you:

  • Are unsure whether your use of AI aligns with this guidance.
  • Are planning a new AI use case.
  • Are evaluating AI tools.
  • Will be handling sensitive data with AI.

Service owners who identify new AI capabilities within existing GW-acquired products should contact GW IT for a consultation. This step ensures an appropriate administrative AI review is conducted. Such a review will evaluate the new AI capabilities, confirm compliance with GW policies, assess data security and privacy implications, and address any residual risks.

To request a consultation, complete the Technology Consultation Form and a member of GW IT’s AI team will follow up to schedule a discussion.

 
