AI & Automation Safeguards
How Govula uses AI responsibly with appropriate human oversight.
This section is intended for: Technical Team, Auditor, Management. Access is restricted to these audiences.
AI Transparency
Govula uses AI to assist with compliance tasks. This is not "black box AI." Every AI-assisted decision is traceable, explainable, and subject to human review.
We believe in transparency about where AI is used, where it is not used, and what safeguards are in place to ensure appropriate human oversight.
Where AI Is Used
AI is used in specific, well-defined areas where it provides value while maintaining human accountability:
Justification Generation
AI generates draft justifications for applicability decisions based on organizational context and control requirements. These drafts explain why a control applies or does not apply in plain language.
Safeguard: All generated justifications are queued for human review before becoming part of the official SoA.
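To make this flow concrete, here is a minimal sketch in Python; all names (`JustificationDraft`, `queue_draft`, `ReviewStatus`) are hypothetical, not Govula's actual API. The point it illustrates is that a generated draft can only enter a pending queue, never the SoA directly:

```python
from dataclasses import dataclass
from enum import Enum


class ReviewStatus(Enum):
    PENDING = "pending"    # awaiting human review
    APPROVED = "approved"  # a human accepted the draft
    REJECTED = "rejected"  # a human rejected it


@dataclass
class JustificationDraft:
    control_id: str
    text: str  # AI-generated plain-language rationale
    status: ReviewStatus = ReviewStatus.PENDING


def queue_draft(queue: list[JustificationDraft], draft: JustificationDraft) -> None:
    """Drafts always enter the queue as PENDING; nothing here touches the SoA."""
    draft.status = ReviewStatus.PENDING
    queue.append(draft)
```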
Remediation Recommendations
AI suggests remediation actions for control gaps based on the control requirements and organizational context. These are recommendations, not mandates.
Safeguard: Recommendations are presented as suggestions. Implementation decisions remain with human teams.
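A hedged sketch of what "recommendations, not mandates" can look like in data terms; `Recommendation` and `accept` are illustrative names, and the real platform model is not shown here:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Recommendation:
    """Advisory only: the record carries no mechanism to apply itself."""
    control_id: str
    suggested_action: str


def accept(rec: Recommendation, decided_by: str) -> dict:
    # Acceptance is an explicit human act, recorded with the decider's identity.
    return {
        "control": rec.control_id,
        "action": rec.suggested_action,
        "decided_by": decided_by,
    }
```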
Evidence Analysis
AI assists in analyzing uploaded evidence to suggest which controls it might support. This helps organize evidence efficiently.
Safeguard: Evidence association suggestions require human confirmation before being applied.
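As an illustration (function and parameter names assumed, not taken from the platform), an association suggestion stays inert until someone confirms it:

```python
def apply_association(evidence_id: str, control_id: str,
                      confirmed_by: str | None) -> bool:
    """Apply a suggested evidence-to-control link only on human confirmation."""
    if confirmed_by is None:
        return False  # unconfirmed suggestions are never applied
    # ... persist the (evidence_id, control_id) link with the confirmer's identity ...
    return True
```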
Report Summarization
AI generates executive summaries that distill compliance status into digestible overviews for leadership.
Safeguard: Summaries include links to underlying data for verification.
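One way to represent that verification requirement, sketched with hypothetical names:

```python
from dataclasses import dataclass


@dataclass
class ExecutiveSummary:
    text: str                # AI-generated prose
    source_links: list[str]  # links back to the underlying data

    def is_verifiable(self) -> bool:
        # A summary that cannot be traced to its data should not ship.
        return len(self.source_links) > 0
```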
Where AI Is NOT Used
Certain decisions are explicitly excluded from AI involvement. These require human judgment and accountability; a guard sketch follows the list:
- Final applicability decisions: AI suggests, humans decide
- Risk acceptance: Risk decisions require human sign-off
- Evidence attestation: Humans attest to evidence validity
- Control implementation: Actual security controls are implemented by humans
- Compliance certification: The platform does not certify compliance
- User management: No AI involvement in access decisions
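The guard sketch referenced above: a minimal illustration (the decision-type strings and the function name are assumptions) of how excluded decision types can be rejected before any AI code path runs:

```python
# Decision types where AI involvement is rejected outright (illustrative labels).
AI_EXCLUDED = {
    "final_applicability",
    "risk_acceptance",
    "evidence_attestation",
    "control_implementation",
    "compliance_certification",
    "user_management",
}


def assert_ai_permitted(decision_type: str) -> None:
    """Raise before any AI code path runs for an excluded decision type."""
    if decision_type in AI_EXCLUDED:
        raise PermissionError(
            f"AI assistance is not permitted for '{decision_type}'; "
            "this decision requires a human."
        )
```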
Guardrails
Multiple safeguards ensure AI-assisted functions remain under human control:
Review Queues
All AI-generated content enters a review queue. Nothing is published or applied without explicit human approval.
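A sketch of that publication gate, assuming a simple status string; the names are illustrative:

```python
def publish(item_id: str, status: str) -> None:
    """Publication is gated on explicit human approval, never on generation."""
    if status != "approved":
        raise PermissionError(f"{item_id}: only human-approved items may be published.")
    # ... write the item to the official record ...
```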
Confidence Scoring
AI outputs include confidence scores. Low-confidence items are flagged for closer review.
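For illustration, flagging could be as simple as a threshold check (the 0.7 cutoff is an assumed value, not a documented one):

```python
LOW_CONFIDENCE_THRESHOLD = 0.7  # assumed cutoff, for illustration only


def needs_close_review(confidence: float) -> bool:
    """Flag low-confidence outputs so reviewers prioritize them."""
    return confidence < LOW_CONFIDENCE_THRESHOLD
```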
Override Capability
Humans can override any AI suggestion. Overrides are logged and become the authoritative decision.
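A minimal sketch of the override path (the names and log shape are assumptions): the human value is logged alongside the AI value and returned as the decision:

```python
from datetime import datetime, timezone


def override(ai_value: str, human_value: str, user: str, log: list[dict]) -> str:
    """Log the override, then return the human value as the authoritative one."""
    log.append({
        "at": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "ai_value": ai_value,
        "human_value": human_value,
    })
    return human_value  # the human decision wins, unconditionally
```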
Audit Trail
All AI involvement is logged. You can trace which outputs were AI-generated and who approved them.
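A sketch of the kind of entry such a trail might contain (field names assumed):

```python
from datetime import datetime, timezone


def audit_entry(output_id: str, ai_generated: bool,
                approved_by: str | None) -> dict:
    """Record whether AI produced an output and which human approved it."""
    return {
        "output_id": output_id,
        "ai_generated": ai_generated,
        "approved_by": approved_by,  # None until a human signs off
        "logged_at": datetime.now(timezone.utc).isoformat(),
    }
```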
Feedback Loop
Corrections and overrides inform future suggestions, improving relevance over time.
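Sketched with assumed names, the loop can be as plain as storing correction pairs and replaying recent ones as context for future generations:

```python
def record_correction(store: list[dict], original: str, corrected: str) -> None:
    """Keep each human correction so later generations can learn from it."""
    store.append({"original": original, "corrected": corrected})


def recent_corrections(store: list[dict], n: int = 5) -> list[dict]:
    # Supplied as extra context to future suggestion runs (mechanism varies).
    return store[-n:]
```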
Determinism vs. Probabilistic Output
It is important to understand which platform outputs are deterministic and which are probabilistic:
Deterministic Functions
Given the same inputs, these always produce the same outputs; a pure-function sketch follows the list:
- Compliance score calculation
- Drift detection
- Evidence freshness tracking
- Report generation (data content)
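The pure-function sketch referenced above; the scoring formula here is an assumption chosen for illustration, not Govula's documented one:

```python
def compliance_score(implemented: int, applicable: int) -> float:
    """Pure function: identical inputs always yield an identical score."""
    if applicable == 0:
        return 0.0
    return round(100 * implemented / applicable, 1)


# Determinism check: repeated calls with the same inputs agree exactly.
assert compliance_score(45, 60) == compliance_score(45, 60) == 75.0
```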
Probabilistic Functions
These may produce slightly different outputs on repeated runs:
- Justification wording
- Remediation suggestions
- Executive summaries
- Evidence association suggestions
Probabilistic outputs are always subject to human review. The underlying data and decisions are deterministic; only the prose generation has probabilistic elements.
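A sketch of that layering, with all names hypothetical: the fact-lookup layer is deterministic, while only the wording layer (a stand-in for a model call here) may vary between runs:

```python
def facts_for_control(control_id: str) -> dict:
    """Deterministic layer: the same control always yields the same facts."""
    return {"control": control_id, "applicable": True,
            "reason_codes": ["REMOTE_WORK"]}


def draft_justification(facts: dict) -> str:
    # Probabilistic layer: wording may differ between runs; the facts do not.
    # (Stand-in for a model call; the real generator is not shown here.)
    reasons = ", ".join(facts["reason_codes"])
    return f"Control {facts['control']} applies because of: {reasons}."
```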
Human Override and Review
The review process is structured to ensure thorough human oversight; a state-transition sketch follows the steps:
1. AI generates output (justification, recommendation, etc.)
2. Output enters review queue with confidence score
3. Authorized user reviews the output
4. User approves, modifies, or rejects the output
5. Decision is logged with user attribution
6. Approved output becomes part of official record
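The state-transition sketch referenced above (stage names are illustrative): each output must pass through review and an attributed decision before it is recorded, and no stage can be skipped:

```python
from enum import Enum


class Stage(Enum):
    GENERATED = 1   # AI output created, confidence attached
    IN_REVIEW = 2   # sitting in the human review queue
    DECIDED = 3     # approved, modified, or rejected, with attribution
    RECORDED = 4    # part of the official record


ALLOWED = {
    Stage.GENERATED: {Stage.IN_REVIEW},
    Stage.IN_REVIEW: {Stage.DECIDED},
    Stage.DECIDED: {Stage.RECORDED},
    Stage.RECORDED: set(),
}


def advance(current: Stage, target: Stage) -> Stage:
    """Enforce the workflow order: no output skips human review."""
    if target not in ALLOWED[current]:
        raise ValueError(f"Cannot move from {current.name} to {target.name}")
    return target
```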
Key Principle
AI assists. Humans decide. Accountability remains with people, not algorithms.