AI Acceptable Use Policy

Governing the responsible use of all AI models, APIs, and services developed by Deep Cognition Labs.

Effective Date: April 5, 2026
Version: 1.0
Entity: Deep Cognition Labs (DeepCog.ai)

1 Purpose and Scope

This Acceptable Use Policy ("AUP") governs the use of all artificial intelligence models, APIs, platforms, tools, and services ("Services") developed, maintained, and distributed by Deep Cognition Labs ("DeepCog.ai," "we," "us," or "our") under the DeepCog.ai brand, including OpenBioLLM, GenomicLLM, ClinicalReasoner, DrugDiscovery-LLM, PathologyVision-LLM, and all related infrastructure.

This AUP applies to all individuals and organizations ("Users") who access or use our Services, whether through direct API access, the DeepCog.ai platform, HuggingFace repositories, enterprise licensing, research partnerships, or any other means of access.

Agreement

By accessing or using our Services, you agree to comply with this AUP in addition to our Terms of Service and Privacy Policy. Violation of this AUP may result in immediate suspension or termination of access.

2 Core Principles

DeepCog.ai builds specialized medical AI with the following non-negotiable principles governing all permitted use:

🛡️ Patient Safety First

Our models are designed to augment clinical expertise, not replace it. Any use that creates risk of harm to patients, research participants, or the general public is strictly prohibited.

👁️ Human Oversight

AI outputs must remain under meaningful human review and control, particularly in clinical, diagnostic, or treatment contexts. Fully autonomous AI decision-making in patient care is not permitted.

📢 Transparency

Users must not misrepresent AI-generated outputs as the independent conclusions of a licensed human professional, nor conceal AI involvement from those who have a right to know.

✅ Accuracy and Honesty

Users must not knowingly deploy, distribute, or act upon outputs they have reason to believe are incorrect, hallucinated, or misleading.

3 Permitted Uses

3.1 Clinical and Healthcare

  • Supporting licensed clinicians in differential diagnosis generation as a decision support tool, with mandatory human review before any clinical action
  • Assisting in literature synthesis, clinical documentation, and medical coding workflows
  • Generating draft radiology, pathology, or clinical reports for review and sign-off by qualified professionals
  • Supporting clinical trial protocol design, eligibility screening assistance, and adverse event literature review
  • Medical education and training simulations for healthcare professionals

3.2 Biomedical Research

  • Hypothesis generation and literature synthesis for peer-reviewed research
  • Genomic variant interpretation and annotation in research settings
  • Drug target identification, ADMET property prediction, and lead compound analysis
  • Bioinformatics pipeline integration for genomic sequence analysis
  • Generation of structured research reports and scientific summaries

3.3 Healthcare Technology Development

  • Integration into healthcare software products under appropriate regulatory frameworks (e.g., FDA AI/ML Software as a Medical Device guidance)
  • Building clinical decision support tools with appropriate human oversight mechanisms
  • Development of patient-facing health information tools where outputs are clearly labeled as AI-generated

3.4 Education and Training

  • Medical and life sciences education at accredited institutions
  • Healthcare professional continuing education and simulation
  • AI literacy training for clinicians and health informaticists

4 Prohibited Uses

Strictly Prohibited

The following uses are prohibited under all circumstances. Violation may result in immediate termination of access and referral to relevant authorities.

4.1 Patient Safety and Clinical Harm

  • Using model outputs as the sole or final basis for diagnosis, treatment decisions, surgical planning, or medication dosing without review by a licensed healthcare professional
  • Deploying our models in emergency or critical care settings without qualified human oversight
  • Using our models to make autonomous treatment decisions in any setting
  • Providing AI-generated medical advice directly to patients and representing it as clinical guidance from a licensed professional

4.2 Deception and Misrepresentation

  • Presenting AI-generated outputs as the independent conclusions of a human expert without disclosure
  • Fabricating or falsifying research data, clinical trial results, or scientific findings using our models
  • Generating fake peer review, fraudulent scientific literature, or counterfeit regulatory submissions
  • Using our models to impersonate licensed medical professionals or regulatory bodies
  • Misrepresenting model capabilities, accuracy rates, or validation status to patients, clients, or regulators

4.3 Privacy and Data Protection

  • Processing individually identifiable patient health information (PHI/PII) through public APIs without a signed Business Associate Agreement (BAA)
  • Using patient data obtained without proper consent or legal authorization as model inputs
  • Attempting to re-identify de-identified datasets using our models
  • Extracting or inferring private patient information through prompt injection or adversarial techniques

4.4 Harmful and Malicious Applications

  • Generating content designed to facilitate self-harm, suicide, eating disorders, or substance abuse
  • Using our models to develop biological, chemical, radiological, or nuclear weapons, or to enhance the lethality or transmissibility of pathogens
  • Creating disinformation campaigns related to vaccines, public health, treatments, or medical products
  • Generating fraudulent insurance claims, billing records, or healthcare documentation
  • Using our models to circumvent drug safety regulations or generate documentation for unapproved or counterfeit pharmaceuticals

4.5 Discrimination and Bias Exploitation

  • Deliberately using or amplifying known model biases to discriminate against patients based on race, ethnicity, sex, gender, disability, age, or any other protected characteristic
  • Using our models in insurance underwriting, employment screening, or credit decisions in ways that violate applicable anti-discrimination laws
  • Designing systems that systematically disadvantage vulnerable populations in access to healthcare or medical information

4.6 Security and Infrastructure

  • Attempting to reverse-engineer, extract, or reconstruct model weights, training data, or proprietary architectures
  • Using our Services to conduct adversarial attacks against other AI systems or healthcare infrastructure
  • Probing for or exploiting security vulnerabilities in our APIs, platforms, or infrastructure
  • Using automated scraping, bulk downloading, or denial-of-service techniques against our Services

4.7 Legal and Regulatory Violations

  • Any use that violates HIPAA, GDPR, CCPA, the EU AI Act, FDA regulations, or any other applicable law or regulation
  • Using our models in jurisdictions where such use is prohibited by local law
  • Violating the terms of any open-source license under which our models are distributed

5 High-Risk Use Requirements

Certain applications are permitted but require additional safeguards. Users deploying our Services in the following contexts must implement the controls described:

Clinical Decision Support Deployments

  • Written clinical validation study demonstrating performance on the intended patient population
  • Documented human oversight workflow with named responsible clinician(s)
  • Patient disclosure mechanism where clinically appropriate
  • Incident reporting procedure for adverse outcomes potentially linked to AI assistance
  • Regular performance monitoring and drift detection
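The monitoring requirement above could be implemented in many ways; one minimal sketch is to track the rolling rate at which clinicians confirm AI-assisted results and alert when it falls below the validated baseline. The window size and thresholds below are illustrative assumptions, not values mandated by this policy.

```python
from collections import deque

def drift_alert(outcomes, window=100, baseline=0.90, tolerance=0.05):
    """Flag possible performance drift.

    `outcomes` is a sequence of booleans recording whether each
    AI-assisted result was confirmed by the reviewing clinician.
    All numeric thresholds here are illustrative assumptions.

    Returns True if the agreement rate over the most recent `window`
    results falls more than `tolerance` below the validated baseline.
    """
    recent = deque(outcomes, maxlen=window)
    if len(recent) < window:
        return False  # not enough data yet to judge drift
    rate = sum(recent) / len(recent)
    return rate < baseline - tolerance
```

In practice such a check would feed the incident reporting procedure described above rather than act autonomously.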

Genomic Interpretation in Clinical Settings

  • Mandatory genetic counselor or clinical geneticist review of AI-generated variant interpretations before disclosure to patients
  • Clear documentation that variant classifications are AI-assisted and subject to human confirmation
  • Compliance with applicable clinical laboratory regulations (CLIA, CAP, or equivalent)

Research Involving Human Subjects

  • IRB or equivalent ethics board approval for studies using our models with human participant data
  • Informed consent processes that disclose AI involvement where required
  • Data minimization practices consistent with the research purpose

Pediatric and Vulnerable Populations

  • Heightened validation requirements for any deployment affecting pediatric, geriatric, pregnant, or otherwise vulnerable patient populations
  • Additional human oversight and review layers beyond standard clinical use requirements

6 User Responsibilities

6.1 All Users Must

  • Read, understand, and comply with this AUP, our Terms of Service, and all applicable laws before accessing our Services
  • Implement appropriate access controls to prevent unauthorized use of API keys and credentials
  • Promptly report suspected misuse, security incidents, or safety concerns to [email protected]
  • Ensure that personnel using our Services on their behalf are made aware of and bound by this AUP
  • Maintain records sufficient to demonstrate compliance with this AUP upon request
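As one small sketch of the access-control requirement above, API credentials can be loaded from the environment at startup rather than hardcoded or committed to version control. The variable name `DEEPCOG_API_KEY` is hypothetical, used only for illustration.

```python
import os

def load_api_key() -> str:
    """Load the API key from the environment rather than source code.

    The variable name DEEPCOG_API_KEY is illustrative. Hardcoding keys
    in code or committing them to version control makes unauthorized
    use far more likely.
    """
    key = os.environ.get("DEEPCOG_API_KEY")
    if not key:
        raise RuntimeError("DEEPCOG_API_KEY is not set; refusing to start.")
    return key
```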

6.2 Enterprise and API Users Must Additionally

  • Conduct appropriate due diligence on downstream use cases before deploying our models in production
  • Implement rate limiting and abuse prevention mechanisms appropriate to their deployment context
  • Not resell or sublicense access to our Services without explicit written authorization from DeepCog.ai
  • Execute a Business Associate Agreement (BAA) before processing any PHI through our Services
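The rate-limiting obligation above can take many forms depending on deployment context; a token bucket is one common pattern. The capacity and refill rate below are arbitrary example values, not DeepCog.ai requirements.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter (illustrative sketch only).

    Requests draw one token each; tokens refill continuously at
    `refill_per_sec` up to `capacity`. Callers should reject or queue
    requests when allow() returns False.
    """

    def __init__(self, capacity: float, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = capacity
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

Production deployments would typically pair a limiter like this with per-key quotas and anomaly detection appropriate to their context.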

6.3 Research Users Must

  • Acknowledge DeepCog.ai and the specific models used in any publications, preprints, or presentations arising from use of our Services
  • Share safety-relevant findings — including identified failure modes, biases, or hallucinations — with DeepCog.ai through our responsible disclosure process

7 Children and Minors

Our Services are not directed at individuals under the age of 18. Users must not knowingly collect or process data from minors in connection with our Services except in institutional research or clinical contexts governed by appropriate parental consent and institutional oversight mechanisms.

8 Enforcement

8.1 Monitoring

DeepCog.ai reserves the right to monitor usage patterns for compliance with this AUP. We do not review the content of individual queries for general API users, but we may investigate usage flagged by automated systems, third-party reports, or user complaints.

8.2 Consequences of Violation

Violations of this AUP may result in, at our sole discretion:

  • Immediate suspension of API keys and account access
  • Permanent termination of all DeepCog.ai Services
  • Removal from HuggingFace repositories and research collaboration programs
  • Notification of relevant regulatory, professional licensing, or law enforcement authorities where legally required or appropriate
  • Legal action for damages where violations cause harm

8.3 Appeals

Users who believe their access was suspended in error may submit an appeal to [email protected] within 30 days of suspension. Appeals will be reviewed within 15 business days.

9 Reporting Violations

If you become aware of a violation of this AUP or a safety concern related to our Services, please report it. We take all reports seriously and investigate promptly. Reports made in good faith will be treated confidentially to the extent permitted by law.

  • Safety incidents: [email protected]
  • Security vulnerabilities: [email protected]
  • AUP violations: [email protected]
  • General compliance: [email protected]

10 Relationship to Other Policies

This AUP supplements and should be read alongside:

  • DeepCog.ai Terms of Service — governing the contractual relationship between users and DeepCog.ai
  • DeepCog.ai Privacy Policy — governing the collection and processing of personal data
  • Model Cards — per-model documentation of intended use, known limitations, and evaluation results, published on HuggingFace and deepcog.ai
  • Open Source Licenses — Apache 2.0 or other licenses applicable to specific model releases, which impose additional conditions on redistribution and modification

Where a conflict exists between this AUP and any open-source license, the more restrictive provision applies to the extent permissible.

11 Policy Updates

DeepCog.ai may update this AUP from time to time to reflect changes in our Services, applicable law, or best practices in responsible AI deployment. We will provide at least 30 days' notice of material changes via email to registered users and by posting an updated version at deepcog.ai/legal/aup.

Continued use of our Services following the effective date of any update constitutes acceptance of the revised AUP.

12 Contact

For questions about this policy, please contact: