
Turning AI into Assured Intelligence: Highlights from Our Webinar on AI Security and HITRUST

Webinar recap on AI security and HITRUST. Learn why frameworks matter, a four-phase path to assessment, and how SISA helps you move from intent to assurance.

 

The rapid adoption of artificial intelligence across industries has created an unprecedented challenge: how can organizations harness AI’s transformative power while maintaining robust security and compliance standards? We recently hosted a discussion with cybersecurity experts to address the critical intersection of AI innovation and security. This recap distills the key takeaways from that conversation and offers practical insights for implementing AI security in regulated environments. It also highlights how SISA can guide your journey from intent to assurance.

The AI Security Imperative

The discussion began with Prakash Hingarani, Head of Strategy and Revenue for SISA North America, emphasizing the transformative impact AI has had across industries. It was noted that AI and GenAI have evolved from buzzwords to powerful engines of transformation, shaping nearly every facet of operations. AI’s influence is undeniable, but with this change comes increased responsibility in maintaining secure and compliant systems.

Ryan Patrick, VP of Adoption for HITRUST Alliance and retired U.S. Army Colonel, highlighted that 78% of organizations are already using AI in at least one business function. However, he pointed out that among those who experienced AI-related breaches, 97% lacked proper access controls for AI. This gap is causing issues like unclear ownership, weak permissions, incomplete approval trails, and inconsistent data lineage. To build a durable response, organizations need to define ownership for models and data, implement strict access controls for AI services, ensure full provenance and consent tracking, and continuously monitor usage and drift.

Carlos Artigga, VCSO and Head of InfoSec for healthcare.com, also shared his industry insights. He highlighted that 80% of companies had adopted AI by 2024, with finance and healthcare leading the charge. As these industries scale AI adoption, they face greater risks, such as broader attack surfaces, heavier API integrations, and significant privacy concerns surrounding PHI and PII.

Why AI Security Frameworks Matter

While traditional security frameworks offer a baseline, they were not designed to handle the complexities AI brings. It was pointed out that AI adoption is swift, integrations are easy, and data flows across multiple APIs, often increasing exposure. This exposure grows quietly until small gaps evolve into serious risks.

The risk profile shifts in two ways: first, existing problems are amplified by the increased number of systems and datasets; second, AI introduces its own set of challenges, including poisoned training data, biased outputs, prompt injections, and model drift.

A specialized AI security framework addresses these gaps by ensuring clear ownership, strict API and key management, role-based access with approvals, full data provenance and consent tracking, pre- and post-deployment testing, continuous monitoring for drift and misuse, and well-documented response playbooks for AI-related incidents. It is this steady discipline that transforms AI from intriguing to assured.

Neetin Hedau, Director of Strategic Risk and Compliance at SISA, provided further context on the rising AI market. He noted that the AI market is forecasted to reach $1.85 trillion by 2030, with 98.4% of security leaders acknowledging that attackers are already using AI. His key takeaway was that as AI adoption grows, so does the parallel rise in attacker adoption, making strong access controls, continuous monitoring, and clear governance urgent priorities.

The HITRUST AI Security Framework

  • Collaborative Development of the HITRUST AI Security Framework
    Ryan Patrick shared insights into the development of HITRUST’s AI security framework, which was created through a 15-month task force collaboration with leading industry players like AWS and Microsoft. This effort focused on identifying key threats, exploits, and concerns within the AI security landscape.
  • Building AI Security on a Strong Foundation
    A key takeaway from HITRUST’s approach is that AI security must be built on a solid foundation. Ryan emphasized that HITRUST enhances existing security assessments (E1, I1, and R2) by integrating specific AI controls, creating a comprehensive evaluation framework tailored for AI systems.
  • Commitment to Staying Current in the Evolving Threat Landscape
    What sets HITRUST apart from other frameworks is its commitment to staying up to date with emerging threats. Ryan explained, “We ingest threat data on a monthly basis and map it to the MITRE ATT&CK framework,” showcasing HITRUST’s dynamic, real-time approach in contrast to traditional standards, which may only update every 5 to 7 years.

Implementing AI Security: A Four-Phase Approach

Neetin outlined SISA’s structured methodology for implementing AI security assessments:

Phase 1: Readiness and Scoping
The journey begins with defining the proper scope through collaboration between security, legal, and AI/ML engineering teams. Using HITRUST’s MyCSF platform, assessments are tailored to specific factors, including the type of AI model and data sensitivity.

Phase 2: Control Implementation and Evidence Collection
This phase examines how security controls have been implemented across IT teams, with a focus on policies, procedures, and technical configurations.

Phase 3: Formal Assessment and Remediation
Any gaps identified during Phase 2 are formally recorded and remediated through consultation. HITRUST rubrics provide scoring mechanisms, and corrective action plans are set in motion.

Phase 4: Certification and Continuous Compliance
Once the assessment is complete, materials are submitted to HITRUST for QA. Ongoing compliance monitoring, including interim assessments, ensures that the framework remains effective.

Real-World Benefits and Challenges

Carlos provided insight into the broader strategic benefits of AI security assessments. He noted that these assessments help eliminate the pain points of governance and establish clear roadmaps for AI adoption. By doing so, organizations can implement AI confidently, unlocking new business opportunities while empowering employees to be more productive.

However, many organizations face challenges in determining the starting point, timing, and scope for AI security work. As Carlos observed, selecting the appropriate stage for an AI security assessment remains ambiguous in many environments, which delays progress and leaves risks unaddressed.

Shadow AI, unapproved tools, and models introduced into workflows without oversight only complicate matters. A structured response begins with inventorying all AI use cases, classifying risks based on data sensitivity, and adopting a tiered assessment approach that prioritizes the highest-risk areas.
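The inventory-classify-prioritize loop described above can be sketched as a simple triage script. Everything here is illustrative: the sensitivity categories, tier thresholds, and use-case names are assumptions for the sketch, not part of any HITRUST requirement.

```python
# Illustrative triage of an AI use-case inventory by data sensitivity.
# Categories and weights are assumed; a real program would derive them from policy.
from dataclasses import dataclass

SENSITIVITY = {"public": 0, "internal": 1, "pii": 2, "phi": 3}

@dataclass
class AIUseCase:
    name: str
    data_types: list[str]   # e.g. ["phi", "internal"]
    approved: bool          # False flags potential shadow AI

def risk_tier(uc: AIUseCase) -> str:
    """Map a use case to an assessment tier from its most sensitive data."""
    score = max(SENSITIVITY[d] for d in uc.data_types)
    if not uc.approved or score >= SENSITIVITY["phi"]:
        return "tier-1"   # assess first: shadow AI or PHI exposure
    if score == SENSITIVITY["pii"]:
        return "tier-2"   # PII involved: scheduled assessment
    return "tier-3"       # lower risk: periodic review

# Hypothetical inventory, triaged highest-risk first per the tiered approach.
inventory = [
    AIUseCase("claims-summarizer", ["phi"], approved=True),
    AIUseCase("marketing-copy-bot", ["public"], approved=False),
    AIUseCase("hr-chatbot", ["pii"], approved=True),
]
for uc in sorted(inventory, key=risk_tier):
    print(uc.name, risk_tier(uc))
```

The point of the sketch is the ordering, not the labels: unapproved tools surface alongside the most sensitive data, so shadow AI cannot hide in a low tier.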

Ryan stressed the importance of securing organizational buy-in, particularly at the executive level. He mentioned that without executive support, managing AI risks becomes significantly more difficult, as business leaders are often able to override security concerns.

Cloud Considerations and Shared Responsibility

The panel also touched on the reality that most AI tools are deployed in the cloud, raising shared responsibility challenges. Ryan explained that cloud providers manage some security controls on behalf of AI system deployers, but these responsibilities do not extend to the entire system, particularly in AI. HITRUST addresses this by enabling organizations to incorporate certified controls from cloud providers like AWS and Azure into their AI security assessments.

Carlos clarified that even if a partner is SOC 2 or HITRUST certified, this doesn’t guarantee full compliance. He stressed that certifications cover only a part of the security landscape, and organizations must take responsibility for the entire risk management process.

The Path Forward

As the discussion concluded, Ryan offered straightforward advice: “Don’t overthink it. AI is new technology, not new risk management.” Organizations can build upon existing third-party risk management programs by integrating AI-specific considerations. The key is to start assessments early in the project lifecycle, particularly when deciding to adopt AI.

Carlos emphasized that AI security assessments should begin at the inception of AI integration, as soon as systems are planned for deployment, so that potential risks are identified early.

The readiness assessment was highlighted as a crucial first step. Ryan noted that conducting an upfront readiness assessment or benchmarking where the organization stands is critical to ensuring effective implementation of AI security practices. Partnering with experts like SISA to guide this process is essential for successful outcomes.

Conclusion

As AI adoption accelerates, the need for robust AI security frameworks becomes even more pressing. The HITRUST AI security framework, combined with expert guidance and structured implementation approaches, gives organizations the tools they need to adopt AI securely.

The message from these experts is clear: AI security is not about reinventing the wheel. It’s about building on proven security foundations while addressing the specific risks AI introduces. With the right assessments, implementation strategies, and ongoing monitoring, organizations can harness AI’s power securely, maintaining trust and safeguarding their stakeholders. The future belongs to organizations that balance innovation with security, and the frameworks to achieve this are available today.
