
Compliance & Standards

Mapping Security Testing to Regulatory Requirements

Organizations deploying AI/ML systems face a complex web of compliance obligations. SichGate's testing framework directly addresses requirements from major regulations and standards, providing auditable evidence of due diligence.


01

EU AI Act

Regulation (EU) 2024/1689

Regulatory Timing: Entered into force 1 August 2024, implemented in phases. Prohibitions apply from 2 February 2025 and general-purpose AI model obligations from 2 August 2025, with most high-risk system obligations applying from 2 August 2026 and full applicability by 2 August 2027.

Penalties: Up to €35,000,000 or 7% of total worldwide annual turnover, whichever is higher

Important Note on Conformity Assessment

Our testing provides auditable evidence that supports conformity assessments and Data Protection Impact Assessments (DPIAs). However, for certain Annex III high-risk use cases, a formal conformity assessment under Article 43 involving notified bodies may still be required. Our reports and artifacts can reduce the scope of that formal assessment but do not replace it where notified-body involvement is mandatory.

Article 9 - Risk Management System

Requirement: "Identify and analyze known or reasonably foreseeable risks"

SichGate Testing: Behavioral subversion tests identify foreseeable adversarial risks (prompt injection, jailbreaking)

Deliverable: Risk assessment report mapping test failures to risk categories

Evidence for Auditors: Documented test results showing identified vulnerabilities and severity levels

Article 10 - Data and Data Governance

Requirement: "Examine possible biases and ensure training data quality"

SichGate Testing: Capability failure tests verify model handles edge cases and doesn't exhibit systematic errors

Deliverable: Robustness report showing model performance across input variations

Evidence for Auditors: Test battery results demonstrating quality assurance process

Article 11 - Technical Documentation (Annex IV)

Requirement: "Maintain technical documentation including testing procedures, data provenance, and post-market monitoring plan"

SichGate Testing: Our test reports and methodology documentation satisfy multiple Annex IV requirements

Deliverable: Annex IV mapping table showing which documentation elements our reports provide

Evidence for Auditors: See Annex IV mapping table below

Article 15 - Accuracy, Robustness, and Cybersecurity

Requirement: "Achieve appropriate levels of accuracy, robustness against errors and faults"

SichGate Testing: Comprehensive adversarial testing measuring accuracy under attack

Deliverable: Security assessment report with pass/fail rates and severity breakdown

Evidence for Auditors: Third-party test results showing robustness validation

Article 72 - Post-Market Monitoring

Requirement: "Establish and document a post-market monitoring system"

SichGate Testing: Continuous testing and regression detection with SichGate Pro (automated monitoring)

Deliverable: Ongoing compliance reports tracking model security over time

Evidence for Auditors: Historical testing data showing monitoring activities

EU AI Act Compliance Checklist

  • Pre-deployment security assessment (Article 9)
  • Documentation of known risks and testing results (Article 11)
  • Robustness testing against adversarial inputs (Article 15)
  • Third-party validation (Article 43 - supporting evidence, though notified body may still be required)
  • Post-market monitoring plan (Article 72, with SichGate Pro)

Annex IV Technical Documentation Mapping

Our test reports satisfy the following Annex IV documentation requirements (per Article 11):

Annex IV Element | SichGate Deliverable | Notes
Testing procedures and results | Full test report with methodology | Includes test inputs, outputs, pass/fail criteria
Validation and testing results | Validation report with reproducible tests | Raw data + baseline comparisons included
Cybersecurity measures | Adversarial testing results | Documents resilience against known attack techniques
Post-market monitoring system | Continuous testing plan (Pro tier) | Automated regression detection and alerting
Information about changes to the AI system | Partial (retest reports) | We provide retest results; version control is customer responsibility
Performance metrics | Robustness metrics and pass rates | Quantitative assessment of model behavior

What SichGate Does NOT Provide (customer or notified body responsibility):

  • Training data provenance documentation
  • Complete quality management system records
  • Supply chain documentation for third-party components
  • Human oversight procedures documentation

02

GDPR

General Data Protection Regulation

Enforcement: Active since 2018

Penalties: Up to €20,000,000 or 4% of total worldwide annual turnover, whichever is higher

Article 25 - Data Protection by Design and by Default

Requirement: "Implement appropriate technical measures"

SichGate Testing: Information disclosure tests detect training data leakage

Deliverable: Data privacy assessment identifying extraction vulnerabilities

Evidence for Auditors: Test results showing model doesn't leak PII or training data

Article 32 - Security of Processing

Requirement: "Implement appropriate security measures"

SichGate Testing: Comprehensive adversarial testing validates security controls

Deliverable: Security test report documenting implemented safeguards

Evidence for Auditors: Third-party assessment of security posture

Article 35 - Data Protection Impact Assessment (DPIA)

Requirement: "Carry out assessment of impact on data protection (required when processing is likely to result in a high risk)"

SichGate Testing: Risk-based testing framework identifies data protection risks

Deliverable: DPIA section on AI security testing and results

Evidence for Auditors: Systematic risk identification methodology

Note: Whether a DPIA is required depends on your specific processing activities and risk assessment

GDPR Compliance Checklist

  • Technical measures to prevent unauthorized data disclosure (Article 25)
  • Security testing appropriate to the risk (Article 32)
  • Documentation of security assessment process (Article 30)
  • Third-party validation of security controls (Article 42)

03

NIST AI RMF

AI Risk Management Framework

Status: Voluntary guidance, increasingly referenced in regulations

Adoption: Referenced in US federal AI guidance and procurement; widely adopted in industry

GOVERN

Governance and oversight structures

SichGate: Provides objective metrics for governance decisions

Deliverable: Executive summary report suitable for board/leadership review

MAP

Context, risks, and impacts identification

SichGate: Threat modeling through behavioral subversion and capability failure tests

Deliverable: Risk map showing identified vulnerabilities by category and severity

MEASURE

Identified risks are analyzed and assessed

SichGate: Quantitative measurement of attack success rates and robustness

Deliverable: Metrics report with pass/fail rates, severity scores, and trend analysis

MANAGE

Risks are prioritized and responded to

SichGate: Severity classification enables risk prioritization

Deliverable: Remediation guidance with risk-based prioritization
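The MEASURE function above relies on quantitative metrics such as attack success rates. A minimal sketch of such a computation (the input layout is illustrative, not a SichGate data format):

```python
from collections import defaultdict

def attack_success_rates(results):
    """Compute per-category attack success rates.

    `results` is a list of (category, attack_succeeded) pairs --
    a hypothetical layout chosen for illustration.
    """
    totals = defaultdict(int)
    successes = defaultdict(int)
    for category, succeeded in results:
        totals[category] += 1
        if succeeded:
            successes[category] += 1
    # Success rate per category: attacks that succeeded / attacks attempted.
    return {c: successes[c] / totals[c] for c in totals}
```

A rate of 0.0 means every attack in that category was resisted; higher rates indicate categories to prioritize for remediation.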

NIST AI RMF Trustworthiness Characteristics

Characteristic | SichGate Test Coverage
Valid and Reliable | Capability failure testing validates reliability under variations
Safe | Behavioral subversion testing identifies safety policy violations
Secure and Resilient | Comprehensive adversarial testing measures security posture
Accountable and Transparent | Test methodology is open source and auditable
Explainable and Interpretable | Test reports explain why failures occur and what they mean
Privacy-Enhanced | Information disclosure testing detects training data leakage
Fair (Bias Management) | Edge case testing can reveal systematic biases (roadmap item)

04

ISO/IEC Standards

International standards for AI systems

ISO/IEC 42001:2023 - AI Management System

Requirement: Establish risk assessment process for AI systems

Alignment: Provides structured risk assessment methodology aligned with ISO/IEC 42001 principles

Note: SichGate methodology is aligned with ISO/IEC 42001 but is not itself certified under the standard

ISO/IEC 23894:2023 - AI Risk Management

Requirement: Identify and assess AI-specific risks

Alignment: Tests map directly to adversarial risk categories defined in the standard

ISO/IEC 27001 - Information Security (AI Controls)

Requirement: Security controls for AI/ML systems

Alignment: Adversarial testing validates security control effectiveness


05

Industry-Specific Regulations

Sector-specific compliance requirements

HIPAA

Healthcare - § 164.308(a)(1)(ii)(A)

Requirement: Conduct an accurate and thorough assessment of potential risks

Testing: Information disclosure tests prevent patient data leakage

Use Case: Healthcare chatbots, diagnosis support, patient triage AI

GLBA

Financial Services - Safeguards Rule

Requirement: Identify reasonably foreseeable internal and external risks

Testing: Adversarial testing identifies external attack risks

Use Case: Credit scoring, fraud detection, trading algorithms

PCI DSS v4.0

Payment Systems - Requirement 6

Requirement: Security testing before deployment

Testing: Pre-deployment adversarial testing for payment fraud detection

Use Case: Transaction classification, fraud detection models


06

OWASP ML Top 10

Machine Learning Security Top 10

Risk | Name | Coverage | Test Category
ML01 | Input Manipulation Attack | Comprehensive | Behavioral Subversion
ML02 | Data Poisoning | Indirect | Capability Failure
ML03 | Model Inversion | Covered | Information Disclosure
ML04 | Membership Inference | Covered | Information Disclosure
ML05 | Model Theft | Out of scope | N/A
ML06 | AI Supply Chain | Indirect | Configuration Testing
ML07 | Transfer Learning | Partial | Capability Failure
ML08 | Model Skewing | Indirect | Capability Failure
ML09 | Output Integrity | Comprehensive | All Categories
ML10 | Model Poisoning | Out of scope | N/A

Legend: Comprehensive and Covered indicate direct testing; Indirect and Partial indicate indirect or partial coverage


07

MITRE ATLAS

Adversarial Threat Landscape for AI Systems

Reconnaissance

Technique: AML.T0007 - Discover ML Artifacts

SichGate Test: Capability probing tests

Resource Development

Technique: AML.T0043 - Craft Adversarial Data

SichGate Test: Automated adversarial example generation (Pro)

Initial Access

Technique: AML.T0051 - LLM Prompt Injection

SichGate Test: Behavioral subversion tests (prompt injection, jailbreaking)

Exfiltration

Technique: AML.T0024.000 - Infer Training Data Membership

SichGate Test: Information disclosure tests (training data extraction)

Impact

Technique: AML.T0029 - Denial of ML Service

SichGate Test: Capability failure tests (edge case discovery)


08

Auditor Evidence Package

Every SichGate report includes a complete evidence package designed for regulatory and third-party audits.

Standard Artifacts (All Tiers)

1. Reproducible Test Cases
  • Exact test inputs (text, prompts, or data files)
  • Random seeds used for any stochastic elements
  • Test execution scripts or CLI commands
  • Expected outputs and actual outputs (side-by-side)

2. Model Identification
  • Model name and version number
  • Model weights checksum (SHA-256 hash) or container image digest
  • HuggingFace model card reference (if applicable)
  • Fine-tuning details if model was customized

3. Raw Test Outputs
  • Complete test logs in machine-readable format (JSON/CSV)
  • Baseline comparison data (model behavior on clean inputs)
  • Delta calculations (change in behavior under attack)
  • Timestamps for every test execution

4. Test Environment Manifest
  • Operating system and version
  • Python version and key library versions
  • Hardware specifications (CPU/GPU used)
  • Test framework version (SichGate CLI version)

5. Reproducibility Evidence
  • Results from 3+ independent test runs demonstrating consistency
  • Statistical analysis of variance across runs
  • Instructions for auditors to rerun tests independently

6. Post-Remediation Verification
  • Retest results after fixes are implemented
  • Comparison showing improvement or resolution
  • Remaining open issues if not fully remediated

7. Compliance Mapping Table
  • Which tests satisfy which regulatory requirements
  • Article-by-article mapping (EU AI Act, GDPR, etc.)
  • NIST AI RMF function alignment
  • ISO/IEC standard clause references

8. Audit Trail
  • Signed test manifest with cryptographic hash
  • Test execution date and time stamps
  • Tester identity (for Enterprise tier with human review)
  • Chain of custody for any data accessed

Additional Artifacts (Enterprise Tier)

  • Expert analysis report with manual review findings
  • Custom test scenario documentation
  • Integration logs if tested via CI/CD
  • Comparison with industry benchmarks

For Auditors: Quick Reference

All evidence is provided in standard formats:

  • Structured data: JSON/CSV for programmatic analysis
  • Human-readable: PDF/Markdown reports for review
  • Reproducible: Scripts and instructions to rerun tests
  • Traceable: Git commit hashes, Docker image tags, or equivalent
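As an illustration, two of the artifacts above (the model checksum and the machine-readable test logs) can be verified programmatically by an auditor. This is a sketch under assumptions: the JSON layout and the `passed` field are hypothetical, not a documented SichGate schema.

```python
import hashlib
import json

def sha256_of_file(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file (e.g. model weights)
    for comparison against the checksum stated in the report."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def summarize_log(log_path):
    """Tally pass/fail counts from a JSON test log.

    Assumes a JSON list of records, each with a boolean 'passed'
    field -- a hypothetical layout used for illustration.
    """
    with open(log_path) as f:
        records = json.load(f)
    passed = sum(1 for r in records if r["passed"])
    return {"total": len(records),
            "passed": passed,
            "failed": len(records) - passed}
```

If the computed digest does not match the checksum in the "System Under Test" section, the report does not describe the model currently deployed.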

09

Compliance Report Formats

SichGate generates reports tailored to different regulatory contexts.

Standard Compliance Report

  • Executive summary suitable for non-technical stakeholders
  • Test methodology explanation with regulatory mapping
  • Detailed findings with severity classification
  • Risk assessment and business impact analysis
  • Remediation recommendations with prioritization
  • Compliance checklist showing satisfied requirements

EU AI Act Compliance Package

Includes everything in Standard Report, plus:

  • Article-by-article requirement mapping
  • High-risk system classification justification
  • Conformity assessment preparation documentation
  • Technical documentation package (Annex IV)
  • Post-market monitoring recommendations

GDPR Data Protection Assessment

Includes everything in Standard Report, plus:

  • Article 32 security measures documentation
  • Data Protection Impact Assessment (DPIA) section
  • PII/training data leakage test results
  • Data minimization recommendations
  • Cross-border transfer considerations

NIST AI RMF Alignment Report

Includes everything in Standard Report, plus:

  • Function-by-function RMF mapping (Govern/Map/Measure/Manage)
  • Trustworthiness characteristic assessment
  • Risk treatment recommendations
  • Continuous monitoring plan
  • Governance metrics dashboard

10

Third-Party Audit Support

SichGate results are designed to withstand scrutiny from:

External Auditors

  • All tests documented and reproducible
  • Methodology publicly available and peer-reviewed
  • Results include raw data, not just summaries
  • Independent third-party testing (not self-assessment)

Regulatory Inspectors

  • Clear mapping to specific regulatory requirements
  • Audit trail showing when tests were run and by whom
  • Version control for test battery and model under test
  • Evidence package ready for regulatory submission

Enterprise Security Reviews

  • Technical depth satisfies InfoSec teams
  • Executive summary satisfies procurement teams
  • Compliance mapping satisfies legal teams
  • Remediation guidance satisfies engineering teams

11

Continuous Compliance

Regulations such as EU AI Act Article 72 require ongoing monitoring. SichGate Pro supports continuous compliance through:

  • Automated Regression Testing

    Retest models after updates/retraining

  • Compliance Drift Detection

    Alert when model behavior violates policies

  • Historical Compliance Tracking

    Demonstrate continuous monitoring to auditors

  • CI/CD Integration

    Block deployments that fail security gates

  • Annual Compliance Reports

    Automated generation for regulatory filing
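The CI/CD gate described above can be sketched as a small script that reads a results file and exits nonzero when blocking findings are present, which causes the pipeline step to fail. The JSON layout and severity labels below are assumptions for illustration, not a documented SichGate format:

```python
import json
import sys

# Severities that should block a deployment (illustrative policy).
BLOCKING_SEVERITIES = {"critical", "high"}

def gate(results_path, blocking=BLOCKING_SEVERITIES):
    """Return the findings that should block deployment.

    Assumes a JSON list of findings, each with 'severity' and 'test'
    keys -- a hypothetical schema chosen for this sketch.
    """
    with open(results_path) as fh:
        findings = json.load(fh)
    return [item for item in findings
            if item.get("severity", "").lower() in blocking]

if __name__ == "__main__":
    blockers = gate(sys.argv[1])
    for b in blockers:
        print(f"BLOCKING: {b['test']} ({b['severity']})")
    # A nonzero exit code fails the CI step and blocks the deployment.
    sys.exit(1 if blockers else 0)
```

In a pipeline, this runs after the test step, so a model that fails critical or high-severity security tests never reaches production.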


12

For Auditors: Quick Index

Fast reference for auditors reviewing SichGate test reports.

EU AI Act Compliance Map

Article | Requirement | Coverage | Report Section
Article 9 | Risk management system | Adversarial risk identification | Executive Summary, Findings
Article 10 | Data governance | Robustness testing | Capability Failure Results
Article 11 | Technical documentation | Test methodology + results | Full Report + Annex IV
Article 15 | Accuracy, robustness, cybersecurity | Comprehensive adversarial testing | All Test Categories
Article 43 | Conformity assessment | Supporting evidence | Complete Evidence Package
Article 72 | Post-market monitoring | Continuous testing (Pro) | Monitoring Recommendations

GDPR Compliance Map

Article | Requirement | Coverage | Report Section
Article 25 | Data protection by design | Information disclosure testing | Data Leakage Tests
Article 32 | Security of processing | Adversarial security assessment | Security Posture Summary
Article 35 | DPIA (when required) | Risk-based testing methodology | Risk Assessment Section

NIST AI RMF Map

Function | Category | Coverage | Report Section
GOVERN | Policies and oversight | Objective metrics for governance | Executive Summary
MAP | Context and risk identification | Threat modeling via test scenarios | Threat Model Section
MEASURE | Risk analysis and assessment | Quantitative vulnerability measurement | Metrics Dashboard
MANAGE | Risk prioritization and response | Severity classification and remediation | Recommendations

OWASP ML Top 10 Map

Risk ID | Name | Coverage | Test Category
ML01 | Input Manipulation | Comprehensive | Behavioral Subversion
ML03 | Model Inversion | Tested | Information Disclosure
ML04 | Membership Inference | Tested | Information Disclosure
ML09 | Output Integrity | Comprehensive | All Categories

ISO/IEC Standards Map

Standard | Clause | Alignment | Report Section
ISO/IEC 42001:2023 | Risk assessment process | Structured testing methodology | Methodology Appendix
ISO/IEC 23894:2023 | AI risk management | Risk categorization framework | Risk Matrix
ISO/IEC 27001 | Information security controls | Security testing validation | Security Controls Assessment

Where to Find Key Evidence

Reproducibility: See "Test Execution Logs" section and attached JSON files
Model Identification: See "System Under Test" section with checksums
Test Methodology: See "Methodology Appendix" with research citations
Compliance Mapping: See this section and "Regulatory Requirements" appendix
Remediation Steps: See "Recommendations" section with prioritization
Raw Data: See attached CSV/JSON files in evidence package

Common Auditor Questions

Q: How do I verify these tests were actually run on our model?

A: Check that the model checksum in the "System Under Test" section matches your production model, and review the timestamps in the audit trail.

Q: Can I reproduce these results?

A: Yes, follow instructions in "Reproducibility" section using provided scripts and test inputs.
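To illustrate what a reproduction check can look like, per-run pass rates from independent test runs can be compared for consistency. A sketch assuming each run is recorded as a list of boolean outcomes (a hypothetical layout, not a SichGate format):

```python
from statistics import mean, pstdev

def run_consistency(runs):
    """Summarize pass-rate consistency across independent test runs.

    `runs` is a list of runs, each a list of booleans
    (True = test passed).
    """
    rates = [sum(r) / len(r) for r in runs]
    return {
        "pass_rates": rates,
        "mean": mean(rates),
        "std_dev": pstdev(rates),  # population std dev across runs
        "max_spread": max(rates) - min(rates),
    }
```

A small spread across 3+ runs supports the claim that results are stable rather than artifacts of a single execution.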

Q: What wasn't tested?

A: See "Limitations and Scope" section for explicit blind spots.

Q: Does this satisfy [specific regulation]?

A: See compliance map above and "Regulatory Requirements" appendix for detailed mapping.

Q: What happens if we make changes to the model?

A: Retest using the same methodology (retesting is included in the Enterprise tier; free-tier users can rerun the CLI).