Artificial intelligence (AI) presents organizations across industries with the opportunity to streamline their workflows, better secure their systems, and solve some of the world’s most pressing issues. But while AI has the potential to offer huge benefits to businesses, it doesn’t come without risk.
“AI can be a useful tool, but business leaders who want to harness the power of AI in 2024 must take active steps to adequately assess their risk,” says Kyle Helles, partner and attest practice leader at BARR Advisory.
“To ensure lasting cyber resilience, appropriate due diligence should be done with vendors that provide AI-powered tools or use AI to provide a service to your organization,” she advises.
So where do you start?
To adequately assess your organization’s risk when working with third-party vendors that use artificial intelligence to provide their products or services, Helles recommends taking a close look at how those vendors approach three key areas:

1. Security, privacy, and ethical practices
2. Compliance with industry standards and legal requirements
3. Culture and values
We asked Helles to share questions that business and security leaders who are in the throes of risk assessments should discuss with potential vendors. Here’s what she had to say.
According to Helles, the first step in assessing the risks posed by vendors that use AI to provide their products and services is to learn more about their security, privacy, and ethical practices.
Helles recommends asking questions like:
Once you’ve uncovered the answers, you’ll have a better idea of how the vendor approaches cybersecurity and whether their security practices align with your organization’s expectations.
Businesses that want to use new and innovative AI tools must also consider the industry standards and legal requirements that apply. Depending on where the vendor is headquartered and what kinds of products or services they offer, they may be required to maintain compliance with regulations and standards like HIPAA, GDPR, and PCI DSS.
Vendors that prioritize security and privacy might also go beyond these requirements to achieve compliance with other frameworks, like ISO 42001, which was specifically designed to assess the safety, privacy, transparency, and data quality of AI systems.
As part of your vendor risk assessment, Helles suggests finding answers to questions such as:
Finally, don’t forget about culture and values. AI is still an emerging technology, and simply checking off compliance boxes isn’t enough to prove that a vendor is using AI safely and transparently.
Here are two questions Helles recommends exploring:
Uncovering the answers to these questions will help your team make a more informed decision about whether signing on with the vendor is worth the risk—and what steps must be taken to ensure your customers’ and stakeholders’ data stays protected.
Want to learn more about how to manage your organization’s risk while working with AI-powered vendors? Contact us today to speak with a BARR expert.