The Vendor is required to provide a comprehensive AI governance and monitoring solution, including but not limited to:
• Software
• Implementation services including training, documentation, and knowledge transfer
• Ongoing support
- AI inventory and registration management:
• AI use case intake and approval workflow
o Intake form requiring documentation of applications, training data sources, model type, intended users, and data sensitivity.
o Configurable workflow engine that facilitates communication and coordinated action among diverse stakeholders.
• Automated and manual AI system registration.
• Risk classification (e.g., low, moderate, high, critical)
o Risk scoring logic that flags AI systems using PHI and student data for additional review.
o Support for risk-based AI classification.
• Model documentation repository
o Dashboard showing all active AI tools by department and risk tier
• Version tracking and lifecycle status (development, validation, production, retired).
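To illustrate the intended risk-scoring behavior (not to prescribe an implementation), a minimal sketch follows; the field names, sensitivity labels, and tier mapping are assumptions for illustration only:

```python
from dataclasses import dataclass

# Hypothetical intake record; field names are illustrative, not required.
@dataclass
class AIIntakeRecord:
    name: str
    model_type: str
    uses_phi: bool = False
    uses_student_data: bool = False
    data_sensitivity: str = "public"  # public | internal | confidential | restricted

# Illustrative mapping from data sensitivity to base risk tier.
SENSITIVITY_TIER = {"public": "low", "internal": "moderate",
                    "confidential": "high", "restricted": "critical"}

def classify_risk(record: AIIntakeRecord) -> tuple[str, bool]:
    """Return (risk tier, needs_additional_review)."""
    tier = SENSITIVITY_TIER.get(record.data_sensitivity, "moderate")
    # Systems touching PHI or student data are flagged for additional
    # review and escalated out of the lower tiers.
    needs_review = record.uses_phi or record.uses_student_data
    if needs_review and tier in ("low", "moderate"):
        tier = "high"
    return tier, needs_review
```

The key design point is that PHI and student-data usage escalates the tier regardless of the declared data sensitivity, so an intake form cannot under-classify such a system.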
- Policy management and governance framework:
1. The following capabilities are considered required:
• Policy mapping to AI systems.
• Control validation for AI-specific risks (bias, data leakage, trust, privacy, security)
• Policy attestation and user acknowledgment tracking via structured sign-offs, attestations, and approval requirements.
• Automated enforcement.
2. The following capabilities are suggested:
• Controls requiring human-in-the-loop review for high-risk AI decisions.
• Enforcement requiring bias testing documentation prior to production deployment.
• Mapping institutional AI policy controls to federal and state regulations.
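The suggested enforcement controls above can be pictured as a deployment gate; the sketch below is an assumption about how a vendor might combine the human-in-the-loop and bias-testing checks, with hypothetical record keys:

```python
def may_deploy_to_production(system: dict) -> tuple[bool, list[str]]:
    """Gate checks before a system's lifecycle status moves to 'production'.

    The keys ('risk_tier', 'human_review_signed_off',
    'bias_testing_documented') are illustrative, not prescribed.
    """
    blockers = []
    # High-risk decisions require a human-in-the-loop review sign-off.
    if system.get("risk_tier") in ("high", "critical") \
            and not system.get("human_review_signed_off"):
        blockers.append("human-in-the-loop review sign-off missing")
    # Bias testing must be documented before any production deployment.
    if not system.get("bias_testing_documented"):
        blockers.append("bias testing documentation missing")
    return (not blockers, blockers)
```

Returning the list of blockers, rather than a bare boolean, lets the workflow engine surface actionable remediation steps to stakeholders.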
- Risk assessment and compliance monitoring:
1. The following capabilities are considered required:
• AI risk assessment templates.
• Frameworks to classify, assess, and mitigate AI-specific risks, plus content libraries that address applicable regulations.
• Bias detection and fairness monitoring.
• Data drift and model performance monitoring.
• Documentation for trust, risk, and security assessments; testing and validated results; and risk and compliance remediation evidence.
• Comprehensive audit trails of actions taken in the platform, with automatic logging of activities across the AI life cycle.
• Regulatory alignment that accommodates the institution's diverse use cases.
2. The following capabilities are suggested:
• Ongoing monitoring of predictive analytics models for performance degradation.
• Alert when model output accuracy drops below predefined threshold.
• Periodic bias audit reports for admissions or hiring-related AI tools.
- Data privacy and security oversight:
1. The following capabilities are considered required:
• Integration with IAM systems (e.g., SSO, role-based access).
o Role-based access controls and SSO integration capability.
• Logging and activity monitoring.
• Data classification alignment.
• Data access control and retention policies.
• Third-party AI risk evaluation support.
2. The following capabilities are suggested:
• Automated notification if an AI model accesses restricted PHI datasets.
• Dashboard showing AI systems interacting with confidential research data.
• Integration with GRC tools for unified risk view.
• Vendor risk management integration.
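The suggested PHI-access notification could work along these lines; the dataset tag names (`classification`, `contains_phi`) and the callback shape are assumptions for illustration:

```python
def on_dataset_access(model_id: str, dataset: dict, notify) -> bool:
    """Invoke the notify hook when a model reads a dataset tagged as
    restricted PHI. Returns True if a notification was sent.

    Tag names are hypothetical; they would map to the institution's
    data classification scheme.
    """
    if dataset.get("classification") == "restricted" and dataset.get("contains_phi"):
        notify(f"{model_id} accessed restricted PHI dataset '{dataset.get('name')}'")
        return True
    return False
```

Wiring `notify` to email, a ticketing system, or a GRC tool is left to the integration layer, which keeps the classification check itself auditable and simple.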
- Monitoring of generative AI and external AI tools:
1. The following capabilities are considered required:
• Monitoring of enterprise-approved generative AI tools.
• Shadow AI discovery.
• Prompt logging and usage analytics.
• Data loss prevention alignment.
• Analytics dashboard showing departmental usage trends.
2. The following capabilities are suggested:
• Monitoring API usage volume and sensitive data submissions.
• Alerts for excessive upload of sensitive institutional data into external AI platforms.
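The suggested excessive-upload alert can be sketched as a per-department accumulator; the byte threshold and event fields are illustrative assumptions:

```python
from collections import defaultdict

class SensitiveUploadMonitor:
    """Accumulates sensitive-data bytes submitted per department to external
    AI platforms and records one alert when a threshold is crossed.

    The 10 MB default threshold is illustrative; a real deployment would
    configure it per policy.
    """
    def __init__(self, threshold_bytes: int = 10_000_000):
        self.threshold = threshold_bytes
        self.totals = defaultdict(int)
        self.alerts = []
        self._alerted = set()  # departments already alerted on

    def record(self, department: str, nbytes: int, sensitive: bool) -> None:
        if not sensitive:
            return  # only sensitive submissions count toward the threshold
        self.totals[department] += nbytes
        if self.totals[department] > self.threshold and department not in self._alerted:
            self._alerted.add(department)
            self.alerts.append(f"{department} exceeded sensitive-upload threshold")
```

Tracking totals by department also supplies the data behind the departmental usage-trend dashboard required above.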