
ISO/IEC 27701 Toolkit: Version 2
The header page and this section, up to and including Disclaimer, must be removed from the final version of the document. For more details on replacing the logo, yellow highlighted text and certain generic terms, see the Completion Instructions document.
This document sets out the organization’s policy with respect to the development, deployment and use of artificial intelligence.
The following areas of the ISO/IEC 27701 standard are addressed by this document:
• A.3 Security considerations for PII controllers and processors
  o A.3.3 Policies for information security
AI is a relatively new development within many organizations, and there is potential to fall foul of existing legislation, as well as the danger of reputational damage from its uncontrolled usage.
The ISO/IEC 42001 standard deals with the management of AI, and this policy takes account of some of its content.
We recommend that this document be reviewed annually.
This document may contain fields which need to be updated with your own information, including a field for Organization Name that is linked to the custom document property “Organization Name”.
To update this field (and any others that may exist in this document):
1. Update the custom document property “Organization Name” by clicking File > Info > Properties > Advanced Properties > Custom > Organization Name.
2. Press Ctrl+A on the keyboard to select all text in the document (or use Select > Select All via the Editing group on the Home tab).
3. Press F9 on the keyboard to update all fields.
4. When prompted, choose the option to update page numbers only.
If you wish to permanently convert the fields in this document to text (for instance, so that they are no longer updateable), you will need to click into each occurrence of the field and press Ctrl+Shift+F9.
If you would like to make all fields in the document visible, go to File > Options > Advanced > Show document content > Field shading and set this to “Always”. This can be useful to check you have updated all fields correctly.
Further detail on the above procedure can be found in the toolkit Completion Instructions. The Completion Instructions also contain guidance on working with the toolkit documents on an Apple Mac and in Google Docs/Sheets.
Copyright notice
Except for any specifically identified third-party works included, this document has been authored by CertiKit and is ©CertiKit except as stated below. CertiKit is a company registered in England and Wales with company number 6432088.
This document is licensed on and subject to the standard licence terms of CertiKit, available on request, or by download from our website. All other rights are reserved. Unless you have purchased this product you only have an evaluation licence.
If this product was purchased, a full licence is granted to the person identified as the licensee in the relevant purchase order. The standard licence terms include special terms relating to any third-party copyright included in this document.
Please Note: Your use of and reliance on this document template is at your sole risk. Document templates are intended to be used as a starting point only from which you will create your own document and to which you will apply all reasonable quality checks before use.
Therefore, please note that it is your responsibility to ensure that the content of any document you create that is based on our templates is correct and appropriate for your needs and complies with relevant laws in your country.
You should take all reasonable and proper legal and other professional advice before using this document.
CertiKit makes no claims, promises, or guarantees about the accuracy, completeness or adequacy of our document templates; assumes no duty of care to any person with respect to its document templates or their contents; and expressly excludes and disclaims liability for any cost, expense, loss or damage suffered or incurred in reliance on our document templates, or in expectation of our document templates meeting your needs, including (without limitation) as a result of misstatements, errors and omissions in their contents.

DOCUMENT CLASSIFICATION [Insert classification]
DOCUMENT REF PIMS-DOC-A3-3-3
VERSION 1 DATED [Insert date]
DOCUMENT AUTHOR [Insert name]
DOCUMENT OWNER [Insert name/role]
This policy outlines guidelines and best practices for the secure and responsible use of Artificial Intelligence (AI) within [Organization Name]. AI represents a significant opportunity for [Organization Name] in a number of areas, such as improving existing products and services, enhancing and automating business processes, and increasing cost-effectiveness. However, our use of AI must be managed so that risks are minimised, particularly in cases where AI is used:
• For automated decision-making, where the rationale for a decision is not clear or transparent
• Within systems where machine learning is deployed rather than logic designed by humans
• In a continuous learning mode, where the AI model’s behaviour may adapt over time without human guidance
For our use of AI to be successful, we need to strike a balance between effective governance and innovation, applying an appropriate level of control for each AI use case.
The purpose of this policy is to establish a framework that ensures the ethical and secure use of AI technologies while safeguarding sensitive information, particularly with respect to privacy and intellectual property.
This policy applies to all systems, people and processes that constitute the organization’s information systems, including board members, directors, employees, suppliers and other third parties who have access to [Organization Name] systems.
The intended audience for this policy is employees responsible for designing systems and managing service delivery.
Failure to comply with the contents of this policy may result in disciplinary action being taken by [Organization Name] against the individual(s) concerned.
Terms used in this policy are defined as follows:
• Artificial intelligence (AI) means the simulation of human intelligence by machines, particularly computer systems, enabling them to perform tasks like learning, reasoning, problem-solving, and decision-making.
• Personally identifiable information (PII) means any data that can be used to identify an individual, either on its own or when combined with other information. Examples include names, addresses, social security numbers, and email addresses.
• Anonymization means the process of removing or altering personal data in a way that prevents individuals from being identified, ensuring their privacy is protected.
• Pseudonymization means the process of replacing personally identifiable information with pseudonyms or other identifiers, making it difficult to identify individuals without additional information that links the pseudonym to the original data.
• Encryption means the process of converting data into a coded format that can only be read or accessed by someone with the correct decryption key, ensuring the data's confidentiality and security.
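The difference between anonymization and pseudonymization can be illustrated with a short sketch (Python standard library only; the record, field names and key handling are hypothetical and purely for illustration, not a mandated implementation):

```python
import hashlib
import hmac

# Hypothetical PII record used purely for illustration.
record = {"name": "Jane Smith", "email": "jane.smith@example.com", "city": "Leeds"}

# Anonymization: irreversibly drop the identifying fields, keeping
# only data that cannot identify the individual.
anonymized = {k: v for k, v in record.items() if k == "city"}

# Pseudonymization: replace identifiers with a keyed pseudonym. The key
# is the "additional information" that could re-link the pseudonym to
# the individual, so it must be stored separately from the data.
SECRET_KEY = b"store-this-key-separately"

def pseudonymize(value: str) -> str:
    """Derive a deterministic, keyed pseudonym for a PII value."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

pseudonymized = {
    "name": pseudonymize(record["name"]),
    "email": pseudonymize(record["email"]),
    "city": record["city"],
}
```

Note that the pseudonyms are deterministic, so the same individual can still be tracked across records; unlike anonymization, pseudonymized data remains personal data under most privacy legislation.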
The following PIMS documents and external references are relevant to this document:
• Privacy Policy
• Secure Development Policy
• Privacy Risk Assessment and Treatment Process
• Access Control Policy
• Privacy and Data Protection Policy
• Principles for Engineering Secure Systems
[Organization Name] policies and procedures in relation to information security and privacy, systems design and development (such as access control, secure coding and data protection) apply equally to systems that make use of AI. This policy emphasises the additional issues that arise from using AI specifically.
Roles and responsibilities must be clearly defined when implementing and operating systems that make use of AI. These should cover areas such as governance, risk and impact management, security, privacy, data quality management and supplier relationships.
A process must be in place to allow employees, contractors and other interested parties to report concerns about the organization’s use of AI, with appropriate assurances of confidentiality and anonymity.
The resources used by an AI system must be appropriately understood and documented, to act as input to risk and impact assessment. These may include data, tools, AI models, processing and human resources.
The risks and impact (both positive and negative) of AI-based systems (such as natural language processing, machine learning and artificial neural networks) must be assessed as part of their design and development, including potential effects on individuals (such as safety, health and financial wellbeing), interested parties (such as customers and employees), and society as a whole (for example economic and environmental impacts).
Under no circumstances should sensitive data held within [Organization Name] be uploaded or otherwise added to an AI-based system to which there is public access. Care must be taken that the models used are private, and that access is restricted to authorised users only.
Use of personally identifiable information (PII) within an AI model must be subject to an appropriately defined lawful basis and comply with privacy-related requirements, such as privacy notices, retention policies and data protection impact assessments.
Where feasible, techniques such as anonymization, pseudonymization and encryption should be used to protect PII and reduce the risk to [Organization Name] and PII principals of sensitive data being leaked or otherwise exposed.
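One practical way to reduce the risk of PII reaching an externally hosted model is to redact obvious identifiers from text before submission. The sketch below is illustrative only (the patterns and function name are assumptions, and a real deployment would use a maintained PII-detection tool rather than a short pattern list):

```python
import re

# Simple patterns for common identifiers. This list is deliberately
# incomplete; it illustrates the approach, not a full solution.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"(?<!\w)\+?\d[\d\s-]{8,}\d\b"),
}

def redact(text: str) -> str:
    """Replace each match of each pattern with a labelled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text
```

For example, `redact("Email jane.smith@example.com or call +44 113 4960000.")` would return the text with the address and number replaced by `[EMAIL REDACTED]` and `[PHONE REDACTED]` placeholders.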
The process of training and fine-tuning an AI model must be subject to appropriate security controls to prevent unauthorised access to data. Model deployment and ongoing monitoring must also be achieved using appropriately secure methods.
Under no circumstances should the intellectual property of [Organization Name] be uploaded or otherwise added to an AI-based system to which there is public access, unless specific permission has been given for this to happen by the intellectual property owner. To do so may make our intellectual property part of the model and so accessible by a public user without constraints on issues such as copyright.
Similarly, care must be taken when using the output of public models to ensure that our use complies with intellectual property law and respects the rights of other parties.
[Organization Name] shall stay informed about relevant existing and planned laws and regulations regarding the application and use of AI (such as the European Union AI Act) and ensure that these are complied with. The relevant provisions of laws affecting the collection and use of data that is then used within an AI system must also be considered, such as privacy and intellectual property legislation.
Care must also be taken to ensure that [Organization Name]’s use of AI remains within the boundaries of acceptable ethical practice.
If any employee believes that information may have been leaked through the use of AI models, it shall be reported immediately.
The [Organization Name] incident management process shall be used for reporting any incidents related to the misuse of AI, privacy breaches, or intellectual property concerns.
[Organization Name] shall provide regular training to employees on AI security best practices, privacy protection, and intellectual property management.
Such training will include the dangers of exposing personal data and sensitive company information to AI models.