Appendix 2 - Viking Potatoes sp. z o.o.
AI Systems Monitoring
Procedures for monitoring, auditing, and quality control of artificial intelligence systems used at Viking Potatoes sp. z o.o. Version 2.0 / April 2026.
1. Purpose of monitoring
AI systems monitoring ensures that the tools we use operate in accordance with accepted ethical, legal, and quality standards. Regular checks allow us to detect and eliminate potential issues early.
2. Monitoring process
The monitoring workflow (shown as a flowchart in the original) runs two verification processes in parallel, followed by feedback collection and reporting:
- AI tool verification: is the tool working correctly?
  - Yes → OK, operational
  - No → issue reported
- Data flow verification: is the data correct?
  - Yes → documentation OK
  - No → error analysis
- Feedback collection from clients, staff, and systems. Each report documents: issue description → steps taken → results, and closes with conclusions and recommendations.
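The routing logic of the workflow above can be sketched as a small helper. This is an illustrative sketch only; `CheckResult` and `route` are hypothetical names, not tools the company actually uses:

```python
from dataclasses import dataclass

@dataclass
class CheckResult:
    """Outcome of one monitoring check (performed by staff)."""
    name: str
    passed: bool

def route(check: CheckResult) -> str:
    # Mirrors the flowchart: a passing check is documented as OK,
    # a failing one is reported as an issue for error analysis.
    if check.passed:
        return f"{check.name}: OK, operational"
    return f"{check.name}: issue reported, error analysis started"

# The two parallel processes from the workflow:
print(route(CheckResult("AI tool verification", True)))
print(route(CheckResult("Data flow verification", False)))
```

Both checks run independently; either branch ends in documentation, so every use of an AI tool leaves a record.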
3. Monitoring scope
| Area | What we monitor | Frequency |
|---|---|---|
| Output quality | Accuracy, consistency, and relevance of AI-generated content | Every use |
| Data security | Whether data entered into AI contains sensitive information | Every use |
| Policy compliance | Whether AI use follows internal procedures | Monthly |
| Tool currency | Whether AI tools in use are up-to-date and secure | Quarterly |
| Legal regulations | Whether the company complies with applicable law (AI Act, GDPR) | Quarterly |
| Client satisfaction | Client feedback on materials created with AI | Ongoing |
4. Control procedures
4.1 Ongoing checks (every AI use)
- Factual review of generated content
- Check for hallucinations (false or fabricated information)
- Assessment of whether the output meets the project's needs
- Confirmation that no sensitive data was entered
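The four per-use checks above could be recorded as a simple checklist. A minimal sketch, assuming a hypothetical `UsageCheck` record (the real checks are performed and signed off by staff):

```python
from dataclasses import dataclass

@dataclass
class UsageCheck:
    """One record per AI use; all four checks must pass."""
    factually_reviewed: bool
    no_hallucinations: bool
    meets_project_needs: bool
    no_sensitive_data: bool

    def approved(self) -> bool:
        # Output may be used only if every check passed.
        return all((self.factually_reviewed, self.no_hallucinations,
                    self.meets_project_needs, self.no_sensitive_data))

check = UsageCheck(True, True, True, False)  # sensitive-data check failed
print(check.approved())  # False
```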
4.2 Monthly review
- Analysis of the number and types of AI applications in projects
- Identification of recurring issues
- Effectiveness assessment: does AI actually speed up the work?
- Compliance check against the procedure in Appendix 1
4.3 Quarterly audit
- Review of the approved AI tools list
- Updated risk assessment for each tool
- Review of regulatory changes (AI Act, GDPR, supervisory authority guidelines)
- Update of internal documentation
- Team training on new guidelines (if applicable)
5. Risk matrix
| Risk | Likelihood | Impact | Mitigation |
|---|---|---|---|
| AI hallucinations (false information) | Medium | High | Mandatory human verification |
| Sensitive data leak | Low | High | Data check procedure before entry |
| Copyright infringement | Medium | Medium | Uniqueness verification, avoiding reproductions |
| Non-compliance with AI Act | Low | Medium | Quarterly regulatory reviews |
| Bias in outputs | Medium | Medium | Diverse tools, human verification |
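One common way to prioritise rows in a matrix like this is a likelihood-times-impact score. The scoring below is an illustrative convention, not part of the company's procedure:

```python
# Map the qualitative levels used in the matrix to a 1-3 scale.
LEVEL = {"Low": 1, "Medium": 2, "High": 3}

def risk_score(likelihood: str, impact: str) -> int:
    """Illustrative priority score: likelihood x impact (1-9)."""
    return LEVEL[likelihood] * LEVEL[impact]

# A few rows from the risk matrix:
matrix = {
    "AI hallucinations": ("Medium", "High"),
    "Sensitive data leak": ("Low", "High"),
    "Copyright infringement": ("Medium", "Medium"),
}
for risk, (likelihood, impact) in matrix.items():
    print(risk, risk_score(likelihood, impact))
```

On this scale, hallucinations score highest (6), which matches the mandatory-human-verification mitigation.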
6. Escalation procedure
When a significant issue with an AI system is detected:
1. Immediate suspension: stop using the problematic tool
2. Incident documentation: detailed description of the issue, circumstances, and potential consequences
3. Notify supervisor: escalate to the person responsible for AI at the company
4. Damage assessment: analyse impact on client projects and data
5. Corrective actions: implement fixes, notify clients if applicable
6. Update procedures: draw conclusions and update documentation
7. Reporting
Monitoring reports are retained for 3 years and made available on request by clients or supervisory authorities. Reports include: period covered, number of AI applications, detected incidents, corrective actions taken.
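The three-year retention period above translates into a simple date rule. A sketch under the assumption that retention is counted from the report date (the procedure does not specify the exact anchor):

```python
from datetime import date

RETENTION_YEARS = 3  # monitoring reports are retained for 3 years

def retained_until(report_date: date) -> date:
    """Last day a monitoring report must remain available.
    Naive year arithmetic; ignores the 29 February edge case."""
    return report_date.replace(year=report_date.year + RETENTION_YEARS)

print(retained_until(date(2026, 4, 1)))  # 2029-04-01
```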
8. Final provisions
Monitoring is an integral part of AI management at the company. Every employee and collaborator has an obligation to report any irregularities related to AI system operation. This document is subject to review every 6 months.