HITRUST recently announced a new program, the HITRUST AI Assurance Program, which provides a secure and sustainable strategy for trustworthy AI leveraging the HITRUST Common Security Framework (CSF), AI-specific assurances, and shared responsibilities and inheritance. The HITRUST AI Assurance Program is the only assurance program to enable the sharing of security control assurances for generative AI and other emerging AI model applications.
With this exciting initiative come a few essential details to explore. Let’s take a closer look at HITRUST’s AI Assurance Program as outlined in its strategy report for the secure and sustainable use of AI.
AI foundational models, now available from cloud service providers and other leading organizations, allow businesses to scale AI across industries and specific enterprise needs. However, with any new, disruptive technology comes the possibility of risks.
“The opaque nature of these deep neural networks introduces data privacy and security challenges that must be met with transparency and accountability,” states HITRUST. In other words, to operate effectively, AI systems must be trustworthy—and risk management is only possible if the multiple organizations involved share responsibility for identifying, managing, and measuring those risks.
HITRUST’s trustworthy approach to AI is aided by existing and proven approaches to risk, security, and compliance management, all supported by the reliable and scalable HITRUST assurance system.
As an early adopter of AI for greater efficiency, HITRUST understands that users of AI systems can leverage these capabilities within their overarching risk management programs to increase both the efficiency and the trustworthiness of their systems.
With its new strategy, HITRUST has already completed several areas of innovation that provide benefits to the HITRUST community.
“AI has tremendous social potential, and the cyber risks that security leaders manage every day extend to AI. Objective security assurance approaches such as the HITRUST CSF and HITRUST certification reports assess the needed security foundation that should underpin AI implementations,” said Omar Khawaja, Field CISO of Databricks.
HITRUST’s strategy for the secure and sustainable use of AI encompasses a series of important elements critical to the delivery of trustworthy AI. According to HITRUST, they, along with “industry leader partners, are identifying and delivering practical and scalable assurance for AI risk and security management through key initiatives providing organizations with the leadership needed to achieve the benefits of AI while managing the risks and security of their AI deployments.”
Take a look at the four key initiatives outlined in the HITRUST AI strategy.
Beginning with the release of HITRUST CSF version 11.2 in Oct. 2023, HITRUST is incorporating AI risk management and security dimensions in the HITRUST CSF. This provides an important foundation that AI system providers and users can use to consider and identify risks and negative outcomes in their AI systems. HITRUST will provide regular updates as new controls and standards are identified and harmonized in the framework.
According to the HITRUST AI strategy, HITRUST CSF 11.2 includes two risk management sources, with plans to add additional sources through 2024.
Beginning in 2024, HITRUST assurance reports will include AI risk management so that organizations can address AI risks through a common, reliable, and proven approach. This will allow organizations implementing AI systems to understand the associated risks and reliably demonstrate their adherence to AI risk management principles with the same transparency, consistency, accuracy, and quality available through all HITRUST reports.
More specifically, both AI users and AI service providers may add AI risk management dimensions to their existing HITRUST e1, i1, and r2 assurance reports and use the resulting reports to demonstrate the presence of AI risk management on top of robust and provable cybersecurity capabilities. This approach will keep pace with the ever-changing cybersecurity landscape as HITRUST and industry leaders regularly add control considerations to the AI Assurance Program.
The HITRUST Shared Responsibility Model will allow AI service providers and their customers to agree on the distribution of AI risks and allocation of shared responsibilities. It’s important to consider those areas where the parties share risk management roles, such as when both parties have responsibility for model training, tuning, and testing with different contexts.
As part of the model, parties must demonstrate that they have considered and addressed the important questions raised by these shared AI responsibilities.
HITRUST will use its long-standing experience in control frameworks, assurance, and shared responsibility to drive responsible and industry-led solutions for AI risk management and security.
For example, Microsoft Azure OpenAI Service supports HITRUST maintenance of the CSF and enables accelerated mapping of the CSF to new regulations, data protection laws, and standards. This, in turn, supports the Microsoft Global Healthcare Compliance Scale Program, enabling solution providers to streamline compliance for accelerated solution adoption and time-to-value.
As HITRUST continues to develop its AI Assurance Program, BARR is dedicated to helping you navigate these new initiatives and innovations for trustworthy AI. Contact us today to speak with a HITRUST specialist and begin or continue your HITRUST journey.