Resolver’s Use of AI Tools In Our Products and Services
Version: 1.0
Last updated: March 1, 2024
Resolver is constantly looking for opportunities to enhance the benefits of our products and services for our customers. That includes integrating selected, industry-leading Artificial Intelligence (“AI”) features and tools (“AI Tools”) into our Software to help you navigate and understand the complex world of risk intelligence. Below we discuss the nature of those AI Tools, how we use them, and any features or risks of AI of which you should be aware.
Index
- What Is Artificial Intelligence?
- How Does Resolver Use AI Tools In Our Software?
- What Are the Risks of Artificial Intelligence?
- How Should You Use AI Responsibly?
- Resolver’s Terms of Use for Artificial Intelligence Tools
What Is Artificial Intelligence?
Artificial intelligence, or AI, is technology that enables computers and machines to simulate human intelligence and problem-solving capabilities. On its own or combined with other technologies (e.g., sensors, geolocation, robotics), AI can perform tasks that would otherwise require human intelligence or intervention. Digital assistants, GPS guidance, autonomous vehicles, and generative AI tools are just a few examples of AI in the daily news and our daily lives.
Deep Learning. As a field of computer science, deep learning involves the development of AI algorithms modeled after the decision-making processes of the human brain. These algorithms have demonstrated the ability to ‘learn’ from huge amounts of available “training” data and make increasingly accurate classifications or predictions over time. Deep learning is made possible by the use of “neural networks”: layers of interconnected “nodes” that can autonomously extract features from large, unlabeled, and unstructured data sets to make predictions about what the data represents. As those systems iteratively check and refine their predictions against real-world data, their outputs become increasingly accurate and reliable.
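To make the idea of interconnected nodes iteratively refining their predictions more concrete, the following minimal sketch (in Python with NumPy, purely illustrative and not part of the Resolver Software) trains a tiny two-layer network on a toy problem. Real deep-learning systems differ mainly in scale, with vastly larger networks and training sets.

```python
# Minimal, illustrative sketch of a tiny neural network "learning" from
# training data by repeatedly checking and refining its predictions.
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: the XOR function, which a single node cannot represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 4 nodes and one output node; weights start as random guesses.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

learning_rate = 1.0
for _ in range(5000):
    # Forward pass: each layer transforms the output of the previous one.
    h = sigmoid(X @ W1 + b1)
    pred = sigmoid(h @ W2 + b2)

    # Backward pass: measure the prediction error and nudge every weight
    # slightly in the direction that reduces it (gradient descent).
    err_out = (pred - y) * pred * (1 - pred)
    err_hid = (err_out @ W2.T) * h * (1 - h)
    W2 -= learning_rate * h.T @ err_out
    b2 -= learning_rate * err_out.sum(axis=0, keepdims=True)
    W1 -= learning_rate * X.T @ err_hid
    b1 -= learning_rate * err_hid.sum(axis=0, keepdims=True)

print(np.round(pred, 2))  # predictions move toward [0, 1, 1, 0] as training progresses
```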
Generative AI. Generative AI refers specifically to deep-learning models that can take raw training data and “learn” to generate new but statistically similar outputs. At a high level, generative models encode a simplified representation of their training data and draw from it to create a new work that’s similar, but not identical, to the original item(s). Generative models have been used in statistics to analyze numerical data, but are now also being used to generate images, speech, and other complex data types.
Large Language Models. Large Language Models (LLMs) are Generative AI systems specifically designed to process and analyze natural human language, a field known as Natural Language Processing (NLP). NLP systems are able to overcome many of the challenges posed by human languages, such as complicated syntax, ambiguity, context-dependent meanings, and personal variations in natural expression. They can infer meaning from context, generate coherent and contextually relevant responses, translate text into other languages, summarize text, and even answer questions. LLMs have revolutionized applications across various fields, including chatbots, virtual assistants, content generation, research assistance, and language translation.
AI Regulation. Different regions around the world, including Canada [1] and the European Union [2], have recognized the power of AI, both for good and for harm, and are generating new regulations and standards for the development and deployment of AI Tools globally. Resolver is committed to complying with those emerging regulatory regimes as we make new and emerging AI Tools available with our Software.
How Does Resolver Use AI Tools In Our Software?
Resolver offers only “limited risk” AI systems in its Software, and does not develop, deploy, or distribute AI Tools in its product offerings that would be considered “high-impact systems” under the Canadian AIDA [3], or that would pose either an “unacceptable risk” or a “high risk” under the EU AI Act [4].
Product Descriptions
For a detailed description of the AI Tools and features offered with the Resolver Software, see here.
Data Security
Resolver implements appropriate technical and organizational measures to protect Your data from accidental or unlawful destruction, loss, alteration, as well as unauthorized disclosure, access or use, in accordance with Our security standards as set forth on our Trust Page, located here. Those security standards will continue to apply to Your data throughout its processing at Your request by the third-party providers of the AI Tools which We offer with Our Software. However, You should be aware that the transfer and temporary processing of Your data to that third-party provider will carry certain inherent risks of interception or loss which may not be fully mitigated by Our existing security measures.
You and Your Users will have sole discretion as to which data You wish to submit to an AI Tool for processing. To the extent that Your data includes proprietary or highly sensitive information (such as personal data revealing a person’s racial or ethnic origin, political opinions, religious or philosophical beliefs, genetic or biometric data, medical or health information, or information about a person’s sex life or sexual orientation), You should carefully weigh the additional risks of submitting such information to the AI Tool, and not do so unnecessarily or without the informed consent of the persons involved.
Consent to Data Processing
When You or Your organization set up Your account with Resolver, You also directed Resolver to host, store, and process Your Customer Data in a specific region (e.g., the U.S., Canada, the EU (Germany), the UK, or Australia) (“Selected Region”) where AWS operates a secure datacentre as our supplier of hosted services. When You activate one of the AI Tools in the Resolver Software, You will be prompted to acknowledge that You are activating an AI Tool, and that You are sending the selected text to another AWS datacentre for temporary processing, which may be located outside of Your Selected Region. Note that such processing would be limited to the time required for the AI Tool to process Your request, and that no copy of that selected text would remain at the alternative location or be used for any other purpose. The chart below shows where Your data would go for processing by the AI Tool.
| Selected Region | Temporary AI Processing Location |
| --- | --- |
| Americas (Canada, U.S., Central and South America) | AWS U.S. (North Virginia) |
| United Kingdom | AWS Europe (Frankfurt, Germany) |
| Europe (EEA and Eastern Europe) (where available) | AWS Europe (Frankfurt, Germany) |
| Middle East and Africa (where available) | AWS Europe (Frankfurt, Germany) |
| Australia and Oceania | AWS Europe (Frankfurt, Germany) |
| China and the Asia-Pacific Region (where available) | AWS Europe (Frankfurt, Germany) |
By activating the AI Tool provided within the Software, You are directing and authorizing Resolver to deliver that text to the AI provider in the above-indicated region, and You consent to that data being temporarily processed by Resolver’s supplier in that region for the purpose of providing You with the requested AI service. If You do not wish to give that direction, or do not have the authority to grant that consent, then DO NOT ACTIVATE that AI Tool. You, or the entity on whose behalf You have given consent, understand and agree that You are solely responsible for that decision.
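For readers who prefer a programmatic view, the chart above amounts to a simple lookup from Selected Region to temporary processing location. The sketch below (Python) is illustrative only; the region labels mirror the chart, while the dictionary and function names are our own and not part of the Resolver Software.

```python
# Illustrative lookup of the routing shown in the chart above; not Resolver code.
AI_PROCESSING_LOCATION = {
    "Americas": "AWS U.S. (North Virginia)",
    "United Kingdom": "AWS Europe (Frankfurt, Germany)",
    "Europe": "AWS Europe (Frankfurt, Germany)",
    "Middle East and Africa": "AWS Europe (Frankfurt, Germany)",
    "Australia and Oceania": "AWS Europe (Frankfurt, Germany)",
    "China and the Asia-Pacific Region": "AWS Europe (Frankfurt, Germany)",
}

def processing_location(selected_region: str) -> str:
    """Return the temporary AI processing location for a given Selected Region."""
    try:
        return AI_PROCESSING_LOCATION[selected_region]
    except KeyError:
        raise ValueError(f"Unknown Selected Region: {selected_region}") from None

print(processing_location("United Kingdom"))  # AWS Europe (Frankfurt, Germany)
```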
What Are the Risks of Artificial Intelligence?
AI Tools have become powerful assistants in performing routine tasks that involve the assimilation and processing of large amounts of information. However, given the current state of those rapidly emerging technologies, their use does pose some risks. Below is a discussion of some of the known AI risks.
Accuracy. AI Tools are still “learning” about the real world. As such, they may, in some situations, produce outputs that do not accurately reflect real people, places, or facts. You should not rely on AI outputs as a sole source of truth or factual information, or as a substitute for professional advice. Your judgment still matters! You must evaluate those outputs for accuracy and appropriateness for your use case, including using human review as appropriate, before using or sharing that output. You must not use any output relating to an individual for any purpose that could have a legal or material impact on that person, including but not limited to making credit, educational, employment, housing, insurance, legal, medical, or other important decisions about them. AI Tools may provide incomplete, incorrect, or offensive outputs. If an output references any third-party products or services, that does not mean the third party endorses or is affiliated with Resolver.
Bias and Fairness. AI systems can inherit and perpetuate biases present in the data used to train them. If the training data is biased, the AI model may make biased decisions, leading to unfair outcomes and discrimination.
Data security and privacy. AI systems rely on large data sets for training. If these data sets contain sensitive or personally identifiable information (PII), there could be a risk that such information is misused in the generation of outputs.
Deployment and integration issues. Integrating AI tools with your existing systems may introduce security gaps if the integration is poorly managed or proper security protocols and controls are not followed.
Lack of explainability. Many AI models are considered “black boxes” because it can be difficult to interpret their decision-making processes. Without a clear understanding of why an AI system makes a particular decision, it can be difficult to evaluate the objective value of that output.
Ethical Concerns. AI applications may raise ethical dilemmas, especially in areas like autonomous vehicles, healthcare, and military applications. Decisions made by AI systems may conflict with human values, and establishing universally accepted ethical guidelines is a complex challenge.
How Should You Use AI Responsibly?
If you use the AI Tools to make consequential decisions, you must evaluate the potential risks of your use case and implement appropriate human oversight, testing, and other use-case-specific safeguards to mitigate such risks. Consequential decisions include those impacting a person’s fundamental rights, health, or safety (e.g., medical diagnosis, judicial proceedings, access to critical resources like housing or government benefits, opportunities like education, decisions to hire or terminate employees, access to lending/credit, and the provision of legal, financial, or medical advice). You and your end users are responsible for all decisions made, advice given, actions taken, and failures to take action based on your use of the AI Tools. AI Tools use machine learning models that generate predictions based on patterns in data. Output generated by a machine learning (ML) model is probabilistic, and AI Tools may produce inaccurate or inappropriate content. Outputs should be evaluated for accuracy and appropriateness for your use case.
You acknowledge that the AI outputs are provided for convenience only. By activating the AI Tool provided within the Software, You UNDERSTAND AND ACCEPT THE ABOVE RISKS. You recognize that AI Tools may produce inaccurate or inappropriate content, and You will institute appropriate safeguards to protect against such risks. You understand and agree that You are solely responsible for any actions which You may take in reliance upon that output.
Resolver’s Terms of Use for Artificial Intelligence Tools
Your use of the AI Tools in our Software is already subject to the terms and conditions of your Terms of Service agreement with Resolver, in which, among other things, you agree not to use the Software contrary to the specific rights of use granted therein. Resolver reserves the right to suspend or terminate your use of the AI Tools and/or the Software if you violate those terms.
In the context of AI, a fundamental “do no harm” requirement includes the understanding that You may not use, or facilitate or allow others to use, the AI Tools to do the following:
Don’t break the law – You will not promote or engage in any illegal activity, including the promotion of harm or economic loss to individuals, especially to vulnerable persons. That includes engaging in activities to defraud, scam, spam, mislead, bully, harass, or defame other persons, or for any other malicious purpose. You will not facilitate the exploitation or harm of children, such as by grooming or child sexual exploitation. You will not use the AI Tools to develop or distribute illegal materials, goods, or services; to violate national export/import laws; to develop or use weapons, injure others, or destroy property; or to engage in unauthorized activities that violate the security of any service or computer system. You will not compromise the privacy of others, including by unlawful tracking, monitoring, or identification. You will not engage in a regulated activity without complying with applicable regulations.
Don’t use our service to harm yourself or others – You will not use the AI Tools to promote violence, hatred or the suffering of others, or to harass, harm, or encourage the harm of individuals or specific groups, including by targeting for violence or promoting suicide or self-harm. You will not use the AI Tools for purposes of intentional disinformation or deception, such as to depict a person’s voice or likeness without their consent; or to develop non-consensual imagery (particularly of a sexual nature).
Respect our safeguards – You will not use the AI Tools to circumvent safeguards and filters or other safety mitigations which we have built into our Software, including to harm Us, Our systems or technology, or Our other customers and other users of the Software.
Resolver offers these AI Tools to you to enhance the benefits of your use of our Software. That usage is also subject to certain conditions:
- You acknowledge that your use of an AI Tool in the Resolver Software is subject to inherent limitations and uncertainties. The AI system’s outputs are based on available data and algorithms, may not be error-free, and may include inaccuracies. The AI’s results are provided ‘as is’ without any warranties, express or implied, and Resolver does not guarantee the completeness, accuracy, fitness for a particular purpose, or reliability of any AI-generated content. Reliance on AI-generated content is at your own risk.
- Resolver expressly disclaims any liability for decisions made, actions taken, or consequences based on such AI-generated content, and in no event shall Resolver or our suppliers be liable for any direct, indirect, incidental, special, or consequential damages arising out of or in any way connected with the use of the AI system.
- Furthermore, you understand that the information provided by the AI system does not replace professional advice. You are encouraged to seek relevant guidance, verification and validation of AI outputs before making critical decisions. Resolver is committed to data security, but is not responsible for unauthorized access or alterations of your data within the AI system.
- Notwithstanding any other consent provided by you with respect to the processing of your data, to the extent that the AI system will have access to personal data which you have input into the Software, you acknowledge that the AI system may process such PII from other locations around the world, and you hereby consent to (or have obtained informed consent from the data subject for) such personal data being processed in such jurisdiction(s).
Notes
1. The proposed Canadian Artificial Intelligence and Data Act (“AIDA”) defines an “artificial intelligence system” as a technological system that, autonomously or partly autonomously, processes data related to human activities through the use of a genetic algorithm, a neural network, machine learning or another technique in order to generate content or make decisions, recommendations or predictions. To learn more about the Canadian AIDA, see here.
2. The European Union’s Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (“AI Act”) defines an “AI system” as a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. To learn more about the EU AI Act, see here.
3. Under the Canadian AIDA, a “high-impact system” will be defined by regulation, but is expected to include any artificial intelligence system which could cause significant harm, where “harm” means physical or psychological harm to an individual; damage to an individual’s property; or economic loss to an individual. Only high-impact systems will be regulated under the proposed Canadian law.
4. Under the EU AI Act, AI systems will be subject to different rules depending on an assessment of the level(s) of risk they may pose. Subject to some exceptions, all Unacceptable Risk AI systems are considered a threat to people and will be banned. They include: cognitive behavioural manipulation of people or specific vulnerable groups; social scoring (classifying people based on behaviour, socio-economic status or personal characteristics); biometric identification and categorization of people; and real-time and remote biometric identification systems, such as facial recognition. High Risk AI systems are those which could negatively affect safety or fundamental rights, and will be divided into two categories: (1) AI systems that are used in products falling under the EU’s product safety legislation; or (2) AI systems falling into specific areas that will have to be registered in an EU database, such as management and operation of critical infrastructure; education and vocational training; employment, worker management and access to self-employment; access to and enjoyment of essential private services and public services and benefits; law enforcement; migration, asylum and border control management; and assistance in legal interpretation and application of the law. Limited Risk AI systems would be required to comply with minimal transparency requirements allowing users to make informed decisions; after interacting with an application, the user can then decide whether they want to continue using it. Users should be made aware when they are interacting with AI, including AI systems that generate or manipulate image, audio or video content, for example deepfakes.
Contact Us
For any inquiries or concerns regarding the information on this page, please contact Resolver by email at legal@resolver.com, or by mail at:
Resolver Inc.
111 Peter Street, Suite 804,
Toronto, Ontario, M5V 2H1 Canada
Attention: Law Department