Usage Policies

Date of Last Revision: August 20, 2024

Our Usage Policy (also referred to as our “Acceptable Use Policy” or “AUP”) applies to anyone who uses products and services provided by RiskOn International, Inc., through its askROI.com website (“askROI”), and is intended to help our users stay safe and ensure our products and services are being used responsibly. The Usage Policy is categorized according to who can use our products and for what purposes. We will update our policy as our technology and the associated risks evolve or as we learn about unanticipated risks from our users.

Universal Usage Standards

Our Universal Usage Standards (set forth below) apply to all users, including individuals, developers, and businesses.

High-Risk Use Case Requirements

Our High-Risk Use Case Requirements (set forth below) apply to specific use cases that pose an elevated risk of harm.

Disclosure Requirements

Our Disclosure Requirements (set forth below) apply to specific use cases where it is especially important for users to understand that they are interacting with an artificial intelligence (“AI”) system.

askROI’s Trust and Safety Team will implement detection and monitoring to enforce our Usage Policy, so please review these policies carefully before using our products and services. If we learn that you have violated our Usage Policy, we may throttle, suspend, or terminate your access to our products and services. If you discover that our model outputs are inaccurate, biased, or harmful, please notify us at usersafety@askroi.com.

Universal Usage Standards

Do Not Compromise Children’s Safety

  • Create, distribute, or promote child sexual abuse material. We strictly prohibit any content that exploits or abuses minors and, where appropriate, will report it to relevant authorities and organizations.
  • Facilitate the trafficking, sextortion, or any other form of exploitation of a minor.
  • Facilitate minor grooming, including generating content designed to impersonate a minor.
  • Facilitate or depict child abuse of any form, including instructions for how to conceal abuse.
  • Promote or facilitate pedophilic relationships, including via roleplay with the model.
  • Fetishize minors.

Do Not Compromise Critical Infrastructure

  • Facilitate the destruction or disruption of critical infrastructure such as power grids, water treatment facilities, telecommunication networks, or air traffic control systems.
  • Obtain unauthorized access to critical systems such as voting machines, healthcare databases, and financial markets.
  • Interfere with the operation of military bases and related infrastructure.

Do Not Incite Violence or Hateful Behavior

  • Incite, facilitate, or promote violent extremism, terrorism, or hateful behavior.
  • Depict support for organizations or individuals associated with violent extremism, terrorism, or hateful behavior.
  • Facilitate or promote any act of violence or intimidation targeting individuals, groups, animals, or property.
  • Promote discriminatory practices or behaviors against individuals or groups on the basis of one or more protected attributes such as race, ethnicity, religion, nationality, gender, sexual orientation, or any other identifying trait.

Do Not Compromise Someone’s Privacy or Identity

  • Compromise security or gain unauthorized access to computer systems or networks, including spoofing and social engineering.
  • Violate the security, integrity, or availability of any user, network, computer, device, communications system, or software application.
  • Violate any person’s privacy rights as defined by applicable privacy laws, such as sharing personal information without consent.
  • Misuse, collect, solicit, or gain access to private information without permission.
  • Impersonate a human by presenting results as human-generated or by using results in a manner intended to convince a natural person that they are communicating with a natural person when they are not.

Do Not Create or Facilitate the Exchange of Illegal or Highly Regulated Weapons or Goods

  • Produce, modify, design, market, or distribute weapons, explosives, dangerous materials, or other systems designed to cause harm to or loss of human life.
  • Engage in or facilitate any illegal activity, such as the use, acquisition, or exchange of illegal and controlled substances, or the facilitation of human trafficking and prostitution.

Do Not Create Psychologically or Emotionally Harmful Content

  • Facilitate or conceal any form of self-harm, including disordered eating and unhealthy or compulsive exercise.
  • Engage in behaviors that promote unhealthy or unattainable body image or beauty standards.
  • Shame, humiliate, intimidate, bully, harass, or celebrate the suffering of individuals.
  • Coordinate the harassment or intimidation of an individual or group.
  • Generate content depicting sexual violence.
  • Generate content depicting animal cruelty or abuse.
  • Generate violent or gory content that is inspired by real acts of violence.
  • Promote, trivialize, or depict graphic violence or gratuitous gore.
  • Develop a product, or support an existing service, that facilitates deceptive techniques with the intent of causing emotional harm.

Do Not Spread Misinformation

  • Create and disseminate deceptive or misleading information about a group, entity or person.
  • Create and disseminate deceptive or misleading information about laws, regulations, procedures, practices, or standards established by an institution, entity or governing body.
  • Create and disseminate deceptive or misleading information with the intention of targeting specific groups or persons.
  • Create and advance conspiratorial narratives meant to target a specific group, individual or entity.
  • Impersonate real entities or create fake personas to falsely attribute content or mislead others about its origin without consent or legal right.
  • Provide false or misleading information related to medical, health, or science issues.

Do Not Create Political Campaigns or Interfere in Elections

  • Promote or advocate for a particular political candidate, party, issue or position, including soliciting votes, financial contributions, or public support for a political entity.
  • Engage in political lobbying to actively influence the decisions of government officials, legislators, or regulatory agencies on legislative, regulatory, or policy matters.
  • Engage in campaigns, including political campaigns, that promote false or misleading information to discredit or undermine individuals, groups, entities, or institutions.
  • Incite, glorify, or facilitate the disruption of electoral or civic processes.
  • Generate false or misleading information on election laws, procedures and security.

Do Not Use for Criminal Justice, Law Enforcement, Censorship, or Surveillance Purposes

  • Make determinations on criminal justice applications, including decisions about eligibility for parole or sentencing.
  • Target or track a person’s physical location, emotional state, or communication without their consent.
  • Use our products or services to assign scores or ratings to individuals based on an assessment of their trustworthiness or social behavior.
  • Build or support emotion recognition systems or techniques that are used to infer people’s emotions.
  • Analyze or identify specific content to censor on behalf of a government organization.
  • Use our products or services as part of any biometric categorization system that classifies people based on their biometric data.

Do Not Engage in Fraudulent, Abusive, or Predatory Practices

  • Facilitate the production, acquisition, or distribution of counterfeit or illicitly acquired goods.
  • Promote or facilitate the generation or distribution of spam.
  • Generate content for fraudulent activities, schemes, scams, phishing, or malware that can result in direct financial or psychological harm.
  • Generate deceptive or misleading digital content such as fake reviews, comments, or media.
  • Engage in or facilitate multi-level marketing, pyramid schemes, or other deceptive business models.
  • Promote or facilitate payday loans, title loans, or other high-interest, short-term lending practices that exploit vulnerable individuals.

Do Not Abuse Our Platform

  • Coordinate malicious activity across multiple accounts.
  • Use automation to create accounts or to engage in spammy behavior.
  • Circumvent a ban through the use of a different account.
  • Facilitate or provide access to our products and services to persons or entities located in unsupported locations.
  • Intentionally bypass capabilities or restrictions established within our products.
  • Use prompts and completions to train an AI model without authorization.

Do Not Generate Sexually Explicit Content

  • Depict or request sexual intercourse or sex acts.
  • Generate content related to sexual fetishes or fantasies.
  • Facilitate, promote, or depict incest or bestiality.
  • Engage in erotic chats.

High-Risk Use Case Requirements

Some integrations (meaning use cases that involve our products and services) pose an elevated risk of harm because they influence domains that are vital to public welfare and social equity. “High-Risk Use Cases” include:

  • Legal: Integrations related to legal interpretation, legal guidance, or decisions with legal implications.
  • Healthcare: Integrations affecting healthcare decisions, medical diagnosis, patient care, or medical guidance.
  • Insurance: Integrations related to health, life, property, disability, or other types of insurance underwriting.
  • Finance: Integrations related to financial decisions, including investment advice, loan approvals, and determining creditworthiness.
  • Employment and housing: Integrations related to decisions about employability or eligibility for housing.
  • Academic testing: Integrations related to standardized testing and admissions evaluations.
  • Media: Integrations that use our products to automatically generate content for external consumption.

Additional Safety Measures

If your integration is listed above, we require that you implement the following additional safety measures:

  • Human-in-the-loop: When you use our products to provide advice, a qualified professional must review the content or decision prior to dissemination.
  • Disclosure: You must disclose to your customers that you are using our products to help inform your decisions.

Disclosure Requirements

For the use cases below, regardless of whether they are High-Risk Use Cases, you must disclose to your users that they are interacting with an AI system rather than a human:

  • All customer-facing chatbots, including any external-facing or interactive AI agents.
  • Products serving minors that allow direct interaction with AI systems.