Appendix 2 - Viking Potatoes sp. z o.o.

AI Systems Monitoring

Procedures for monitoring, auditing, and quality control of artificial intelligence systems used at Viking Potatoes sp. z o.o. Version 2.0 / April 2026.

1. Purpose of monitoring

AI systems monitoring ensures that the tools we use operate in accordance with accepted ethical, legal, and quality standards. Regular checks allow us to detect and eliminate potential issues early.

2. Monitoring process diagram

Monitoring and feedback collection runs as three parallel processes:

  • AI tool verification: is the tool working correctly? If yes, it is marked as operational; if not, an issue is reported.
  • Data flow verification: is the data correct? If yes, the documentation is confirmed as OK; if not, an error analysis is carried out.
  • Feedback collection: reports from clients, staff, and systems, classified by type of report.

Every reported issue is documented (issue description → steps taken → results) and closed with conclusions and recommendations.

3. Monitoring scope

Area | What we monitor | Frequency
Output quality | Accuracy, consistency, and relevance of AI-generated content | Every use
Data security | Whether data entered into AI contains sensitive information | Every use
Policy compliance | Whether AI use follows internal procedures | Monthly
Tool currency | Whether AI tools in use are up-to-date and secure | Quarterly
Legal regulations | Whether the company complies with applicable law (AI Act, GDPR) | Quarterly
Client satisfaction | Client feedback on materials created with AI | Ongoing

4. Control procedures

4.1 Ongoing checks (every AI use)

  • Factual review of generated content
  • Check for hallucinations (false or fabricated information)
  • Assessment of whether the output meets the project's needs
  • Confirmation that no sensitive data was entered
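The sensitive-data confirmation above can be partially automated with a simple pre-entry screen. The sketch below is illustrative only: the pattern list and category names are assumptions for this example, not the company's actual procedure, which defines the binding list of sensitive-data categories.

```python
import re

# Illustrative patterns only; the binding list of sensitive-data
# categories is set by company procedure, not by this sketch.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PESEL (11 digits)": re.compile(r"\b\d{11}\b"),
    "phone number": re.compile(r"\b(?:\+48[ -]?)?\d{3}[ -]?\d{3}[ -]?\d{3}\b"),
}

def find_sensitive_data(text: str) -> list[str]:
    """Return the names of sensitive-data categories found in `text`."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

hits = find_sensitive_data("Contact: jan.kowalski@example.com, tel. 601 234 567")
print(hits)  # → ['email address', 'phone number']
```

A non-empty result means the text must not be entered into the AI tool until the flagged fragments are removed or anonymised; human review remains mandatory either way.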

4.2 Monthly review

  • Analysis of the number and types of AI applications in projects
  • Identification of recurring issues
  • Effectiveness assessment — does AI actually speed up the work?
  • Compliance check against the procedure in Appendix 1

4.3 Quarterly audit

  • Review of the approved AI tools list
  • Updated risk assessment for each tool
  • Review of regulatory changes (AI Act, GDPR, supervisory authority guidelines)
  • Update of internal documentation
  • Team training on new guidelines (if applicable)

5. Risk matrix

Risk | Likelihood | Impact | Mitigation
AI hallucinations (false information) | Medium | High | Mandatory human verification
Sensitive data leak | Low | High | Data check procedure before entry
Copyright infringement | Medium | Medium | Uniqueness verification, avoiding reproductions
Non-compliance with the AI Act | Low | Medium | Quarterly regulatory reviews
Bias in outputs | Medium | Medium | Diverse tools, human verification

6. Escalation procedure

When a significant issue with an AI system is detected:

  1. Immediate suspension — stop using the problematic tool
  2. Incident documentation — detailed description of the issue, circumstances, and potential consequences
  3. Notify supervisor — escalate to the person responsible for AI at the company
  4. Damage assessment — analyse impact on client projects and data
  5. Corrective actions — implement fixes, notify clients if applicable
  6. Update procedures — draw conclusions and update documentation

7. Reporting

Monitoring reports are retained for 3 years and made available to clients or supervisory authorities on request. Each report includes: the period covered, the number of AI applications, detected incidents, and corrective actions taken.
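The report fields listed above can be captured in a small, uniform record. The sketch below is one possible schema; the field names and the sample values are assumptions for illustration, not a prescribed format.

```python
from dataclasses import dataclass, field, asdict
import json

# Field names mirror the report contents listed in the Reporting section;
# the exact schema is an assumption for illustration.
@dataclass
class MonitoringReport:
    period: str                      # e.g. "2026-Q2"
    ai_applications: int             # number of AI applications in the period
    incidents: list[str] = field(default_factory=list)
    corrective_actions: list[str] = field(default_factory=list)

# Hypothetical example record
report = MonitoringReport(
    period="2026-Q2",
    ai_applications=42,
    incidents=["hallucinated citation in a draft report"],
    corrective_actions=["added a mandatory source check"],
)
print(json.dumps(asdict(report), indent=2))
```

Serialising each report to a structured format such as JSON makes the 3-year retention and on-request disclosure straightforward, since all reports share the same fields.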

8. Final provisions

Monitoring is an integral part of AI management at the company. Every employee and collaborator has an obligation to report any irregularities related to AI system operation. This document is subject to review every 6 months.
