
Explore the Future of AI Assurance with HITRUST

April 6, 2024

HITRUST continues to spearhead the integration of trustworthy AI assurances into its HITRUST CSF. With the Q4 2023 release of CSF v11.2.0, HITRUST became the first and only assurance system to offer control assurances specifically tailored for generative AI and related applications. Starting in 2024, HITRUST will introduce AI assurance reports, enabling organizations to demonstrate their AI cyber maturity.

HITRUST Shared Responsibility and Inheritance Program: A Pillar of Efficiency

Since its establishment in 2007, HITRUST has been at the forefront of developing a trusted assurance system. Among its array of offerings, the HITRUST Shared Responsibility and Inheritance Program stands out, delivering substantial efficiencies. This program precisely delineates shared responsibilities between customers and their service providers, facilitating meticulous planning of security accountabilities.

The Inheritance Program allows organizations to leverage previously certified controls, whether from their own assessments or those of third parties like cloud service providers (CSPs). Given the widespread HITRUST certifications among major CSPs, organizations partnering with them can streamline their certification journeys, saving considerable time and resources. In fact, they can inherit up to 85% of requirements in a HITRUST e1 Validated Assessment and up to 70% in a HITRUST r2 Validated Assessment.
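To make the savings concrete, the arithmetic behind those caps can be sketched as follows. Only the 85% (e1) and 70% (r2) inheritance ceilings come from the figures above; the total requirement counts used here are purely illustrative placeholders, not official HITRUST assessment sizes.

```python
def remaining_requirements(total: int, inheritable_pct: int) -> int:
    """Requirements an organization must still assess itself after
    inheriting the maximum allowed share from a certified provider.
    Integer math avoids floating-point rounding surprises."""
    inherited = total * inheritable_pct // 100
    return total - inherited

# Hypothetical assessment sizes (illustrative only):
e1_total = 44
r2_total = 400

print(remaining_requirements(e1_total, 85))  # e1: up to 85% inheritable → 7 remain
print(remaining_requirements(r2_total, 70))  # r2: up to 70% inheritable → 120 remain
```

In practice the inheritable share depends on which specific controls the provider's certification covers, so actual savings vary by engagement.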

Extending Shared Responsibility and Inheritance to AI: Navigating the AI Landscape with Confidence

Building on these principles, HITRUST is poised to introduce shared responsibility and inheritance capabilities for AI risk management. The AI features integrated into HITRUST CSF v11.2.0 ensure that organizations have the necessary security controls in place. Just as with traditional assessments, the shared responsibility model is integral to AI risk management.

AI service providers and their customers must collaboratively navigate their shared responsibilities. Providers need to clarify their duties for the model, while customer organizations must assess model suitability and adherence to cybersecurity best practices. Together, they identify AI risks and develop mitigation plans.

Key Considerations for AI Security:

• Ownership of Responsibilities: Clearly define responsibilities for model training, tuning, and testing, identifying the context in which each party assumes control.

• Data Quality Assurance: Scrutinize training data for quality and relevance, implementing proper controls to safeguard it.

• Bias Mitigation: Recognize and minimize biases in the data, ensuring fair and ethical AI practices.

• Continuous Assessment: Regularly assess newly created or modified data to maintain robust security practices.
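One lightweight way to act on the first consideration above is to record ownership explicitly. The sketch below shows a minimal shared-responsibility matrix for an AI service; the activities and party assignments are hypothetical examples, not a HITRUST artifact.

```python
# Minimal shared-responsibility matrix for an AI deployment.
# Activities and assignments are illustrative assumptions only.
RESPONSIBILITY_MATRIX = {
    "model_training":        "provider",
    "model_tuning":          "shared",
    "model_testing":         "customer",
    "training_data_quality": "provider",
    "bias_review":           "shared",
    "ongoing_data_review":   "customer",
}

def owner(activity: str) -> str:
    """Return the party accountable for an activity, or 'unassigned'
    if it has not been delineated yet."""
    return RESPONSIBILITY_MATRIX.get(activity, "unassigned")

print(owner("model_tuning"))      # → shared
print(owner("incident_response")) # → unassigned
```

Surfacing "unassigned" activities this way gives both parties a concrete starting point for the planning discussions the program calls for.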