CyberArk Responsible AI

CyberArk Responsible AI Policy

This CyberArk Responsible AI Policy, including the CyberArk AI Features FAQs, applies to use of those artificial intelligence services, features, and functionality that we provide within the following products – Identity Security Platform Shared Services (ISPSS), Privilege Cloud, Dynamic Privileged Access (DPA), Secure Web Sessions (SWS), Secrets Hub, and Endpoint Privilege Manager (EPM) (collectively, “AI Features”). This policy supplements CyberArk’s SaaS Terms of Service. Reference to “you” or “your” in this Policy means the “Customer”, as such term is defined in CyberArk’s SaaS Terms of Service.

CyberArk is committed to developing safe, fair, and accurate AI Features and providing you with tools designed to enhance your security and user experience. An AI Impact Assessment is conducted for AI Features in accordance with standard market practice; the assessment is designed to avoid bias or unethical use of AI and to address transparency, fairness, accountability, and privacy and security by design.

Certain AI Features use third-party Generative AI services (currently Azure OpenAI and Anthropic Claude through AWS Bedrock), and some AI Features may be provided from a region different from the one where your SaaS Products are hosted. The third-party Generative AI services that CyberArk uses do not store your data, nor do they use your data to train their models. CyberArk or the third-party Generative AI services may detect and mitigate instances of recurring content and/or behaviors that suggest use of the product in a manner that may violate this Policy or other applicable product terms.

AI Features utilize models to generate predictions or recommendations based on data patterns, producing probabilistic outputs. AI introduces the potential for inaccurate or inappropriate content. It is crucial to evaluate these outputs for accuracy and suitability in your specific use case. Both you and your end users are accountable for decisions, advice, actions, and failures to act resulting from the use of AI Features.

Please refer to the specific AI Feature page in CyberArk’s documentation for detailed information about each feature.

You may not use, or facilitate or allow others to use, the AI Features:

  • for any illegal or fraudulent activity;
  • for intentional disinformation or deception;
  • to violate the rights of others, including privacy rights, or to engage in unlawful tracking, monitoring, or identification;
  • to harass, harm, or encourage the harm of individuals or specific groups;
  • to intentionally circumvent safety filters and functionality, or to prompt models to act in a manner that violates law, regulation, or this Policy;
  • to violate the security, integrity, or availability of any user, network, computer or communications system, software application, or network or computing device.

If you do not wish to receive the AI Features, your authorized system admin may opt out of the AI Features on behalf of your organization by contacting CyberArk support. The opt-out will typically take effect within 2-3 business days of your instruction to opt out.

CyberArk may update this policy from time to time without prior notice.

CyberArk AI Features FAQs

These CyberArk AI Features FAQs apply to use of those artificial intelligence services, features, and functionality that we provide within the following products – Identity Security Platform Shared Services (ISPSS), Privilege Cloud, Dynamic Privileged Access (DPA), Secure Web Sessions (SWS), Secrets Hub, and Endpoint Privilege Manager (EPM) (collectively, “AI Features”).

Q: Does CyberArk have a Responsible AI Policy?

A: Yes. CyberArk’s Responsible AI Policy applies to the use of artificial intelligence services, features, and functionality provided within certain CyberArk products. The policy is designed to ensure the safe, fair, and accurate use of AI Features.

Q: How does CyberArk ensure responsible use of AI and bias mitigation?

A: CyberArk conducts AI Impact Assessments for AI Features in accordance with standard market practice. This is designed to avoid biases or unethical use of AI and to address transparency, fairness, privacy and security by design, and accountability.

Q: Can I opt out of using AI Features?

A: Yes, if you do not wish to receive the AI Features, your authorized system admin may opt out on behalf of your organization by contacting CyberArk support. The opt-out will typically take effect within 2-3 business days of your instruction to opt out.

Q: Where can I find detailed information about each AI Feature?

A: Please refer to the specific AI Feature page in CyberArk’s documentation for detailed information about each feature.

Q: Will I know when AI technology is used within the CyberArk product?

A: CyberArk will inform users when AI is used to produce results or recommendations, by indicating this in-product as well as in the supporting documentation. Please note that EPM ARA does not currently include an in-product notification.

Q: Do the AI Features use any third-party Generative AI services?

A: Yes, certain AI Features use third-party Generative AI services, currently Azure OpenAI or Anthropic Claude through AWS Bedrock, as further detailed in each specific AI Feature page. CyberArk’s use of these services within the products is subject to CyberArk’s contractual agreements with the respective vendors of these services.

Q: Does CyberArk or any third party monitor my use of the AI Feature?

A: CyberArk or the third-party Generative AI services may detect and mitigate instances of recurring content and/or behaviors that suggest use of the product in a manner that may violate CyberArk’s Responsible AI Policy or other applicable product terms.

Q: Will CyberArk process personal data for the purposes of providing AI Features? 

A: CyberArk will process personal data for the purpose of providing AI Features to you in accordance with your Data Processing Agreement with CyberArk.

Q: Is human oversight required prior to applying the AI Feature’s recommendation?

A: Yes. AI Features utilize models to generate predictions or recommendations based on data patterns, producing probabilistic outputs. AI introduces the potential for inaccurate or inappropriate content. It is crucial to evaluate these outputs for accuracy and suitability in your specific use case. Both you and your end users are accountable for decisions, advice, actions, and failures to act resulting from the use of AI Features.

Last updated March 24, 2024