Responsible AI Policy

Last Updated: December 24, 2024

1. Introduction

The National Research Institute for Democratized Learning ("NRIDL," "we," "us," or "our") is a nonprofit organization dedicated to making AI and digital technologies accessible to all, thereby fostering equitable education and bridging the digital divide. Our AI solutions and services ("AI Products") are designed to empower communities, educators, small businesses, and learners with responsibly developed, transparent, and mission-aligned tools. This Policy outlines our approach to developing, deploying, and using AI technologies ethically and in line with our core values.

2. Scope & Purpose

This AI Policy applies to all AI-driven features, tools, and initiatives provided or supported by NRIDL through our websites, platforms, or partnerships (collectively, the "Services"). It governs how we design, use, and manage AI to ensure that our solutions are developed and employed responsibly, transparently, and for the public good.

You may encounter additional offerings or integrations from third parties via our platforms or programs. Those services are not covered by this Policy and remain subject to the terms and conditions of their respective providers.

3. Our Guiding Principles

Democratized Access & Equity

We strive to ensure that our AI Products promote access for underserved groups, bridging socio-economic, geographical, and cultural divides.

Transparency & Trust

We prioritize clear communication about when and how AI is used, ensuring that stakeholders—learners, educators, community partners—understand the nature of AI-generated outputs and potential limitations.

Human-Centered Design & Ethics

We embed ethical considerations in every stage of AI development and deployment, placing human well-being, dignity, and autonomy at the forefront.

Data Privacy & Security

We uphold strict policies to protect user data. We aim to use personal information only as necessary to enhance AI capabilities responsibly, while maintaining confidentiality and security.

Accountability & Continuous Improvement

We regularly assess and refine our AI Products, striving to minimize unintended harm, reduce biases, and respond proactively to community feedback.

4. AI Products and Your Data

4.1 Types of AI Products

Our AI Products may include:

4.2 Data Collection & Use

User Input: When you interact with our AI Products—e.g., input text, prompts, questions, or other content—we may use this data to generate outputs and improve model performance.

Aggregated & Anonymized Data: We may collect and aggregate user interactions for internal research and to refine our AI Products. All personally identifiable information ("PII") is removed or anonymized when used for training or analysis.

Training Models & Improving Services: NRIDL may use your inputs to enhance the accuracy and relevance of our AI solutions. However, we do not use private or proprietary user content (e.g., uploaded documents) to train third-party models unless explicitly authorized by the user.

5. Acceptable Use

5.1 Prohibited Uses

Our AI Products are intended for educational, humanitarian, and socially constructive purposes. Accordingly, you shall not use them for:

5.2 High-Risk Areas

Users must not deploy our AI Products in contexts deemed "high risk" under applicable AI legislation—such as the high-risk use categories defined by the EU AI Act—unless explicitly approved in writing by NRIDL. This includes any use that may threaten individual rights, health, or public safety.

6. Compliance & Enforcement

6.1 Reporting Misuse

If you notice any misuse of our AI Products—such as generating harmful content, infringing on privacy, or otherwise violating these standards—please contact us at support@nridl.org. We are committed to investigating and taking appropriate corrective action.

6.2 Consequences of Violation

Violations of this Policy or our other terms may lead to suspension or termination of access to our Services. We reserve the right to take additional legal measures if warranted.

7. Use of Third-Party Service Providers

NRIDL may integrate third-party AI services to enhance features such as language translation, sentiment analysis, or generative text. These third-party providers remain subject to their own terms of use and policies, which we encourage you to review independently.

We strive to work only with partners who share our values of data privacy, safety, and ethical development.

8. Security, Privacy, and Trust

8.1 Security Measures

We employ technical and organizational safeguards to protect the confidentiality and integrity of data processed by our AI Products. Although no system is entirely immune to security threats, we regularly review and update our protocols to mitigate risks.

8.2 Ethical & Safety Reviews

Before releasing major updates or new AI features, we conduct ethical and safety reviews to identify and address potential biases, harmful outcomes, or compliance issues. We encourage users to exercise caution and judgment when relying on AI-generated content, especially in critical domains such as public health or financial advice.

8.3 Transparency & Disclosure

Features that rely on third-party AI platforms will be clearly marked, and we will provide appropriate disclosures indicating the nature of the AI being used. We likewise encourage users to disclose when content they share is AI-generated, to maintain transparency and trust.

9. Responsible AI Development and Deployment

Human Oversight and Accountability

NRIDL maintains an AI Ethics and Compliance Board (AIECB) comprising cross-functional members who oversee ethical impact assessments, privacy reviews, and risk management for all AI projects.

Cybersecurity Measures

All data used for AI training and inference is encrypted both at rest and in transit. We implement strict access controls, multifactor authentication, and regular security assessments.

Fairness and Equity

We actively test for and mitigate biases within AI models by employing diverse training datasets, applying fairness metrics, and using debiasing techniques to minimize disparate impacts on marginalized communities.

Safety and Reliability

Prior to deployment, AI systems undergo rigorous testing, including adversarial testing and scenario-based evaluations. Post-deployment monitoring ensures ongoing reliability and adherence to ethical standards.

10. Compliance, Audits, and Certification

NRIDL conducts periodic internal audits to evaluate compliance with this policy and the effectiveness of data protection, cybersecurity measures, and fairness mechanisms. We may also engage accredited third parties for external audits or seek relevant certifications to enhance transparency and trust.

11. Updates to this Policy

NRIDL reserves the right to update or modify this AI Policy at any time. Significant changes will be announced on our website or via other relevant channels, and the updated version will be effective immediately upon posting. Your continued use of our AI Products after such changes constitutes acceptance of the revised terms.

12. Contact Us

If you have questions or concerns regarding this AI Policy or our AI practices, please reach out to us.

Email: support@nridl.org

We value your feedback and remain committed to improving our AI Products in alignment with our mission of democratizing learning, closing the digital divide, and fostering inclusive, equitable innovation.