Compliance & Standards
Mapping Security Testing to Regulatory Requirements
Organizations deploying AI/ML systems face a complex web of compliance obligations. SichGate's testing framework directly addresses requirements from major regulations and standards, providing auditable evidence of due diligence.
EU AI Act
Regulation (EU) 2024/1689
Regulatory Timing: Entered into force on 1 August 2024, with phased implementation: prohibited practices apply from 2 February 2025, general-purpose AI obligations from 2 August 2025, most high-risk system obligations from 2 August 2026, and the remaining provisions by 2 August 2027.
Penalties: Up to €35,000,000 or 7% of total worldwide annual turnover, whichever is higher
Important Note on Conformity Assessment
Our testing provides auditable evidence that supports conformity assessments and Data Protection Impact Assessments (DPIAs). However, for certain Annex III high-risk use cases, a formal conformity assessment under Article 43 involving notified bodies may still be required. Our reports and artifacts can reduce the scope of that formal assessment but do not replace it where notified-body involvement is mandatory.
Article 9 - Risk Management System
Requirement: "Identify and analyze known or reasonably foreseeable risks"
SichGate Testing: Behavioral subversion tests identify foreseeable adversarial risks (prompt injection, jailbreaking)
Deliverable: Risk assessment report mapping test failures to risk categories
Evidence for Auditors: Documented test results showing identified vulnerabilities and severity levels
Article 10 - Data and Data Governance
Requirement: "Examine possible biases and ensure training data quality"
SichGate Testing: Capability failure tests verify model handles edge cases and doesn't exhibit systematic errors
Deliverable: Robustness report showing model performance across input variations
Evidence for Auditors: Test battery results demonstrating quality assurance process
Article 11 - Technical Documentation (Annex IV)
Requirement: "Maintain technical documentation including testing procedures, data provenance, and post-market monitoring plan"
SichGate Testing: Our test reports and methodology documentation satisfy multiple Annex IV requirements
Deliverable: Annex IV mapping table showing which documentation elements our reports provide
Evidence for Auditors: See Annex IV mapping table below
Article 15 - Accuracy, Robustness, and Cybersecurity
Requirement: "Achieve appropriate levels of accuracy, robustness against errors and faults"
SichGate Testing: Comprehensive adversarial testing measuring accuracy under attack
Deliverable: Security assessment report with pass/fail rates and severity breakdown
Evidence for Auditors: Third-party test results showing robustness validation
Article 72 - Post-Market Monitoring
Requirement: "Establish and document a post-market monitoring system"
SichGate Testing: Continuous testing and regression detection via SichGate Pro (automated monitoring)
Deliverable: Ongoing compliance reports tracking model security over time
Evidence for Auditors: Historical testing data showing monitoring activities
EU AI Act Compliance Checklist
- Pre-deployment security assessment (Article 9)
- Documentation of known risks and testing results (Article 11)
- Robustness testing against adversarial inputs (Article 15)
- Third-party validation (Article 43 - supporting evidence, though notified body may still be required)
- Post-market monitoring plan (Article 72, with SichGate Pro)
Annex IV Technical Documentation Mapping
Our test reports satisfy the following Annex IV documentation requirements (per Article 11):
| Annex IV Element | SichGate Deliverable | Notes |
|---|---|---|
| Testing procedures and results | Full test report with methodology | Includes test inputs, outputs, pass/fail criteria |
| Validation and testing results | Validation report with reproducible tests | Raw data + baseline comparisons included |
| Cybersecurity measures | Adversarial testing results | Documents resilience against known attack techniques |
| Post-market monitoring system | Continuous testing plan (Pro tier) | Automated regression detection and alerting |
| Information about changes to the AI system | Partial (retest reports) | We provide retest results; version control is customer responsibility |
| Performance metrics | Robustness metrics and pass rates | Quantitative assessment of model behavior |
What SichGate Does NOT Provide (customer or notified body responsibility):
- Training data provenance documentation
- Complete quality management system records
- Supply chain documentation for third-party components
- Human oversight procedures documentation
GDPR
General Data Protection Regulation
Enforcement: Active since 2018
Penalties: Up to €20,000,000 or 4% of total worldwide annual turnover, whichever is higher
Article 25 - Data Protection by Design and by Default
Requirement: "Implement appropriate technical measures"
SichGate Testing: Information disclosure tests detect training data leakage
Deliverable: Data privacy assessment identifying extraction vulnerabilities
Evidence for Auditors: Test results showing model doesn't leak PII or training data
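To illustrate the kind of check behind such results, here is a minimal sketch of scanning model output for obvious PII patterns. The regexes, pattern names, and `find_pii` function are hypothetical simplifications for this document; real leakage detection relies on much more than regex heuristics (entity recognition, canary strings, membership tests).

```python
import re

# Hypothetical, deliberately simple PII heuristics for illustration only.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def find_pii(text: str) -> list[str]:
    """Return the names of PII patterns found in a model response."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

print(find_pii("Contact me at jane.doe@example.com"))  # → ['email']
print(find_pii("The capital of France is Paris."))     # → []
```

A flagged response would be recorded as a failed information disclosure test with the matched pattern category, never the matched text itself.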
Article 32 - Security of Processing
Requirement: "Implement appropriate security measures"
SichGate Testing: Comprehensive adversarial testing validates security controls
Deliverable: Security test report documenting implemented safeguards
Evidence for Auditors: Third-party assessment of security posture
Article 35 - Data Protection Impact Assessment (DPIA)
Requirement: "Carry out assessment of impact on data protection (required when processing is likely to result in a high risk)"
SichGate Testing: Risk-based testing framework identifies data protection risks
Deliverable: DPIA section on AI security testing and results
Evidence for Auditors: Systematic risk identification methodology
Note: Whether a DPIA is required depends on your specific processing activities and risk assessment
GDPR Compliance Checklist
- Technical measures to prevent unauthorized data disclosure (Article 25)
- Security testing appropriate to the risk (Article 32)
- Documentation of security assessment process (Article 30)
- Third-party validation of security controls (Article 42)
NIST AI RMF
AI Risk Management Framework
Status: Voluntary guidance, increasingly referenced in regulations
Adoption: Referenced in US federal AI guidance and procurement expectations; widely adopted in industry
GOVERN
Governance and oversight structures
SichGate: Provides objective metrics for governance decisions
Deliverable: Executive summary report suitable for board/leadership review
MAP
Context, risks, and impacts identification
SichGate: Threat modeling through behavioral subversion and capability failure tests
Deliverable: Risk map showing identified vulnerabilities by category and severity
MEASURE
Identified risks are analyzed and assessed
SichGate: Quantitative measurement of attack success rates and robustness
Deliverable: Metrics report with pass/fail rates, severity scores, and trend analysis
MANAGE
Risks are prioritized and responded to
SichGate: Severity classification enables risk prioritization
Deliverable: Remediation guidance with risk-based prioritization
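The MANAGE function's prioritization step can be sketched as a severity-ranked ordering of findings. The severity labels and finding records below are illustrative assumptions, not SichGate's actual report schema.

```python
# Rank findings so the highest-severity items are remediated first.
# Severity vocabulary and finding fields are hypothetical examples.
SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

findings = [
    {"id": "F-3", "severity": "medium", "title": "Edge-case arithmetic errors"},
    {"id": "F-1", "severity": "critical", "title": "Prompt injection bypass"},
    {"id": "F-2", "severity": "high", "title": "Training data leakage"},
]

prioritized = sorted(findings, key=lambda f: SEVERITY_RANK[f["severity"]])
print([f["id"] for f in prioritized])  # → ['F-1', 'F-2', 'F-3']
```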
NIST AI RMF Trustworthiness Characteristics
| Characteristic | SichGate Test Coverage |
|---|---|
| Valid and Reliable | Capability failure testing validates reliability under variations |
| Safe | Behavioral subversion testing identifies safety policy violations |
| Secure and Resilient | Comprehensive adversarial testing measures security posture |
| Accountable and Transparent | Test methodology is open source and auditable |
| Explainable and Interpretable | Test reports explain why failures occur and what they mean |
| Privacy-Enhanced | Information disclosure testing prevents training data leakage |
| Fair (Bias Management) | Edge case testing can reveal systematic biases (roadmap item) |
ISO/IEC Standards
International standards for AI systems
ISO/IEC 42001:2023 - AI Management System
Requirement: Establish risk assessment process for AI systems
Alignment: Provides structured risk assessment methodology aligned with ISO/IEC 42001 principles
Note: SichGate methodology is aligned with ISO/IEC 42001 but is not itself certified under the standard
ISO/IEC 23894:2023 - AI Risk Management
Requirement: Identify and assess AI-specific risks
Alignment: Tests map directly to adversarial risk categories defined in the standard
ISO/IEC 27001 - Information Security (AI Controls)
Requirement: Security controls for AI/ML systems
Alignment: Adversarial testing validates security control effectiveness
Industry-Specific Regulations
Sector-specific compliance requirements
HIPAA
Healthcare - § 164.308(a)(1)(ii)(A)
Requirement: Conduct an accurate and thorough assessment of potential risks
Testing: Information disclosure tests prevent patient data leakage
Use Case: Healthcare chatbots, diagnosis support, patient triage AI
GLBA
Financial Services - Safeguards Rule
Requirement: Identify reasonably foreseeable internal and external risks
Testing: Adversarial testing identifies external attack risks
Use Case: Credit scoring, fraud detection, trading algorithms
PCI DSS v4.0
Payment Systems - Requirement 6
Requirement: Security testing before deployment
Testing: Pre-deployment adversarial testing for payment fraud detection
Use Case: Transaction classification, fraud detection models
OWASP ML Top 10
Machine Learning Security Top 10
| Risk | Name | Coverage | Test Category |
|---|---|---|---|
| ML01 | Input Manipulation Attack | Comprehensive | Behavioral Subversion |
| ML02 | Data Poisoning | Indirect | Capability Failure |
| ML03 | Model Inversion | Covered | Information Disclosure |
| ML04 | Membership Inference | Covered | Information Disclosure |
| ML05 | Model Theft | Out of scope | N/A |
| ML06 | AI Supply Chain | Indirect | Configuration Testing |
| ML07 | Transfer Learning | Partial | Capability Failure |
| ML08 | Model Skewing | Indirect | Capability Failure |
| ML09 | Output Integrity | Comprehensive | All Categories |
| ML10 | Model Poisoning | Out of scope | N/A |
Legend: "Comprehensive"/"Covered" = direct testing; "Indirect"/"Partial" = indirect or partial coverage.
MITRE ATLAS
Adversarial Threat Landscape for AI Systems
Reconnaissance
Technique: AML.T0007 - Discover ML Artifacts
SichGate Test: Capability probing tests
Resource Development
Technique: AML.T0043 - Craft Adversarial Data
SichGate Test: Automated adversarial example generation (Pro)
Initial Access
Technique: AML.T0051 - LLM Prompt Injection
SichGate Test: Behavioral subversion tests (prompt injection, jailbreaking)
Exfiltration
Technique: AML.T0024 - Exfiltration via ML Inference API (Infer Training Data Membership)
SichGate Test: Information disclosure tests (training data extraction)
Impact
Technique: AML.T0029 - Denial of ML Service
SichGate Test: Capability failure tests (edge case discovery)
Auditor Evidence Package
Every SichGate report includes a complete evidence package designed for regulatory and third-party audits.
Standard Artifacts (All Tiers)
Reproducible Test Cases
- Exact test inputs (text, prompts, or data files)
- Random seeds used for any stochastic elements
- Test execution scripts or CLI commands
- Expected outputs and actual outputs (side-by-side)
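A minimal sketch of how recording a random seed makes generated test inputs reproducible. The character-swap perturbation and the `perturbed_prompts` function are illustrative stand-ins, not SichGate's actual generators.

```python
import random

def perturbed_prompts(base: str, n: int, seed: int) -> list[str]:
    """Deterministically generate n character-swap variants of a base prompt."""
    rng = random.Random(seed)  # isolated RNG so global state cannot interfere
    variants = []
    for _ in range(n):
        chars = list(base)
        i, j = rng.sample(range(len(chars)), 2)
        chars[i], chars[j] = chars[j], chars[i]  # swap two positions
        variants.append("".join(chars))
    return variants

# Same seed ⇒ byte-identical test inputs, so an auditor can regenerate them.
run_a = perturbed_prompts("ignore previous instructions", 5, seed=42)
run_b = perturbed_prompts("ignore previous instructions", 5, seed=42)
assert run_a == run_b
```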
Model Identification
- Model name and version number
- Model weights checksum (SHA-256 hash) or container image digest
- HuggingFace model card reference (if applicable)
- Fine-tuning details if model was customized
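Auditors can verify the model artifact against the report's checksum with a standard streaming SHA-256, for example as below. The file written here is a dummy stand-in for an actual weights file (e.g. a `.safetensors` file).

```python
import hashlib
from pathlib import Path

def file_sha256(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, streaming in 1 MiB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Dummy artifact for illustration; point this at your real weights file.
artifact = Path("example.bin")
artifact.write_bytes(b"dummy model weights")
print(file_sha256(artifact))
```

Comparing this digest with the one in the report's "System Under Test" section confirms the tested artifact is the deployed artifact.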
Raw Test Outputs
- Complete test logs in machine-readable format (JSON/CSV)
- Baseline comparison data (model behavior on clean inputs)
- Delta calculations (change in behavior under attack)
- Timestamps for every test execution
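A sketch of the delta calculation, assuming hypothetical per-test scores (the test names and score scale are illustrative, not SichGate's actual log schema):

```python
# Per-test scores on clean inputs vs. the same tests under attack (hypothetical).
baseline = {"refusal_check": 1.0, "pii_leak_check": 1.0, "math_edge_case": 0.8}
under_attack = {"refusal_check": 0.6, "pii_leak_check": 1.0, "math_edge_case": 0.7}

# Negative delta = behavior degraded under attack.
deltas = {name: under_attack[name] - baseline[name] for name in baseline}
regressions = {name: d for name, d in deltas.items() if d < 0}
print(regressions)
```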
Test Environment Manifest
- Operating system and version
- Python version and key library versions
- Hardware specifications (CPU/GPU used)
- Test framework version (SichGate CLI version)
Reproducibility Evidence
- Results from 3+ independent test runs demonstrating consistency
- Statistical analysis of variance across runs
- Instructions for auditor to rerun tests independently
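The variance analysis can be as simple as computing the spread across runs. The pass rates and the 0.05 consistency threshold below are illustrative choices, not SichGate defaults.

```python
from statistics import mean, pstdev

# Pass rates from three independent runs of the same battery (hypothetical).
runs = [0.91, 0.93, 0.92]

avg = mean(runs)
spread = pstdev(runs)  # population standard deviation across runs
print(f"mean pass rate: {avg:.3f}, std dev: {spread:.4f}")

# A small spread relative to the mean supports the consistency claim.
assert spread < 0.05, "runs disagree more than expected"
```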
Post-Remediation Verification
- Retest results after fixes implemented
- Comparison showing improvement or resolution
- Remaining open issues if not fully remediated
Compliance Mapping Table
- Which tests satisfy which regulatory requirements
- Article-by-article mapping (EU AI Act, GDPR, etc.)
- NIST AI RMF function alignment
- ISO/IEC standard clause references
Audit Trail
- Signed test manifest with cryptographic hash
- Test execution date and time stamps
- Tester identity (for Enterprise tier with human review)
- Chain of custody for any data accessed
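A signed manifest can be sketched as an HMAC over a canonically serialized JSON record. The field names and in-code key below are illustrative assumptions for this document, not SichGate's actual signing scheme; a real deployment would use a managed secret or an asymmetric signature.

```python
import hashlib
import hmac
import json

# Hypothetical manifest fields; the real report schema is not shown here.
manifest = {
    "model_sha256": "abc123...",
    "test_battery_version": "1.4.0",
    "executed_at": "2025-01-15T10:30:00Z",
    "results_file": "results.json",
}

# Canonical serialization so the same manifest always hashes identically.
canonical = json.dumps(manifest, sort_keys=True, separators=(",", ":")).encode()

secret = b"demo-signing-key"  # placeholder; never hard-code a real key
signature = hmac.new(secret, canonical, hashlib.sha256).hexdigest()

# Verification recomputes the MAC and compares in constant time.
expected = hmac.new(secret, canonical, hashlib.sha256).hexdigest()
assert hmac.compare_digest(signature, expected)
```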
Additional Artifacts (Enterprise Tier)
- Expert analysis report with manual review findings
- Custom test scenario documentation
- Integration logs if tested via CI/CD
- Comparison with industry benchmarks
For Auditors Quick Reference
All evidence is provided in standard formats:
- Structured data: JSON/CSV for programmatic analysis
- Human-readable: PDF/Markdown reports for review
- Reproducible: Scripts and instructions to rerun tests
- Traceable: Git commit hashes, Docker image tags, or equivalent
Compliance Report Formats
SichGate generates reports tailored to different regulatory contexts.
Standard Compliance Report
- Executive summary suitable for non-technical stakeholders
- Test methodology explanation with regulatory mapping
- Detailed findings with severity classification
- Risk assessment and business impact analysis
- Remediation recommendations with prioritization
- Compliance checklist showing satisfied requirements
EU AI Act Compliance Package
Includes everything in Standard Report, plus:
- Article-by-article requirement mapping
- High-risk system classification justification
- Conformity assessment preparation documentation
- Technical documentation package (Annex IV)
- Post-market monitoring recommendations
GDPR Data Protection Assessment
Includes everything in Standard Report, plus:
- Article 32 security measures documentation
- Data Protection Impact Assessment (DPIA) section
- PII/training data leakage test results
- Data minimization recommendations
- Cross-border transfer considerations
NIST AI RMF Alignment Report
Includes everything in Standard Report, plus:
- Function-by-function RMF mapping (Govern/Map/Measure/Manage)
- Trustworthiness characteristic assessment
- Risk treatment recommendations
- Continuous monitoring plan
- Governance metrics dashboard
Third-Party Audit Support
SichGate results are designed to withstand scrutiny from:
External Auditors
- All tests documented and reproducible
- Methodology publicly available and peer-reviewed
- Results include raw data, not just summaries
- Independent third-party testing (not self-assessment)
Regulatory Inspectors
- Clear mapping to specific regulatory requirements
- Audit trail showing when tests were run and by whom
- Version control for test battery and model under test
- Evidence package ready for regulatory submission
Enterprise Security Reviews
- Technical depth satisfies InfoSec teams
- Executive summary satisfies procurement teams
- Compliance mapping satisfies legal teams
- Remediation guidance satisfies engineering teams
Continuous Compliance
Regulations such as EU AI Act Article 72 require ongoing monitoring. SichGate Pro supports this with:
Automated Regression Testing
Retest models after updates/retraining
Compliance Drift Detection
Alert when model behavior violates policies
Historical Compliance Tracking
Demonstrate continuous monitoring to auditors
CI/CD Integration
Block deployments that fail security gates
Annual Compliance Reports
Automated generation for regulatory filing
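A CI security gate of this kind can be sketched in a few lines. The results schema and the 95% threshold are assumptions for illustration, not SichGate's actual output format or defaults.

```python
import json

THRESHOLD = 0.95  # illustrative minimum pass rate for deployment

def gate(results: dict, threshold: float = THRESHOLD) -> int:
    """Return a process exit code: 0 allows the deploy, 1 blocks it."""
    total = results["passed"] + results["failed"]
    pass_rate = results["passed"] / total if total else 0.0
    if pass_rate < threshold:
        print(f"BLOCKED: pass rate {pass_rate:.2%} is below {threshold:.0%}")
        return 1
    print(f"OK: pass rate {pass_rate:.2%}")
    return 0

# In CI, the exit code would be passed to sys.exit() so a failing gate
# fails the pipeline step and blocks the deployment.
exit_code = gate(json.loads('{"passed": 97, "failed": 3}'))
```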
For Auditors: Quick Index
Fast reference for auditors reviewing SichGate test reports.
EU AI Act Compliance Map
| Article | Requirement | Coverage | Report Section |
|---|---|---|---|
| Article 9 | Risk management system | Adversarial risk identification | Executive Summary, Findings |
| Article 10 | Data governance | Robustness testing | Capability Failure Results |
| Article 11 | Technical documentation | Test methodology + results | Full Report + Annex IV |
| Article 15 | Accuracy, robustness, cybersecurity | Comprehensive adversarial testing | All Test Categories |
| Article 43 | Conformity assessment | Supporting evidence | Complete Evidence Package |
| Article 72 | Post-market monitoring | Continuous testing (Pro) | Monitoring Recommendations |
GDPR Compliance Map
| Article | Requirement | Coverage | Report Section |
|---|---|---|---|
| Article 25 | Data protection by design | Information disclosure testing | Data Leakage Tests |
| Article 32 | Security of processing | Adversarial security assessment | Security Posture Summary |
| Article 35 | DPIA (when required) | Risk-based testing methodology | Risk Assessment Section |
NIST AI RMF Map
| Function | Category | Coverage | Report Section |
|---|---|---|---|
| GOVERN | Policies and oversight | Objective metrics for governance | Executive Summary |
| MAP | Context and risk identification | Threat modeling via test scenarios | Threat Model Section |
| MEASURE | Risk analysis and assessment | Quantitative vulnerability measurement | Metrics Dashboard |
| MANAGE | Risk prioritization and response | Severity classification and remediation | Recommendations |
OWASP ML Top 10 Map
| Risk ID | Name | Coverage | Test Category |
|---|---|---|---|
| ML01 | Input Manipulation | Comprehensive | Behavioral Subversion |
| ML03 | Model Inversion | Tested | Information Disclosure |
| ML04 | Membership Inference | Tested | Information Disclosure |
| ML09 | Output Integrity | Comprehensive | All Categories |
ISO/IEC Standards Map
| Standard | Clause | Alignment | Report Section |
|---|---|---|---|
| ISO/IEC 42001:2023 | Risk assessment process | Structured testing methodology | Methodology Appendix |
| ISO/IEC 23894:2023 | AI risk management | Risk categorization framework | Risk Matrix |
| ISO/IEC 27001 | Information security controls | Security testing validation | Security Controls Assessment |
Where to Find Key Evidence
Common Auditor Questions
Q: How do I verify these tests were actually run on our model?
A: Verify that the model checksum in the "System Under Test" section matches your production model, and review the timestamps in the audit trail.
Q: Can I reproduce these results?
A: Yes, follow instructions in "Reproducibility" section using provided scripts and test inputs.
Q: What wasn't tested?
A: See "Limitations and Scope" section for explicit blind spots.
Q: Does this satisfy [specific regulation]?
A: See compliance map above and "Regulatory Requirements" appendix for detailed mapping.
Q: What happens if we make changes to the model?
A: Retest using same methodology (included in Enterprise tier, or rerun CLI for free tier users).