Responsible AI Policy
Last Updated: December 24, 2024
1. Introduction
The National Research Institute for Democratized Learning (“NRIDL,” “we,” “us,” or “our”) is a nonprofit organization dedicated to making AI and digital technologies accessible to all, thereby fostering equitable education and bridging the digital divide. Our AI solutions and services (“AI Products”) are designed to empower communities, educators, small businesses, and learners with responsibly developed, transparent, and mission-aligned tools. This Policy outlines our approach to developing, deploying, and using AI technologies ethically and in line with our core values.
2. Scope & Purpose
This AI Policy applies to all AI-driven features, tools, and initiatives provided or supported by NRIDL through our websites, platforms, or partnerships (collectively, the “Services”). It governs how we design, use, and manage AI to ensure that our solutions are developed and employed responsibly, transparently, and for the public good.
You may encounter additional offerings or integrations from third parties via our platforms or programs. Those services are not covered by this Policy and remain subject to the terms and conditions of their respective providers.
3. Our Guiding Principles
Democratized Access & Equity
We strive to ensure that our AI Products promote access for underserved groups, bridging socio-economic, geographical, and cultural divides.
Transparency & Trust
We prioritize clear communication about when and how AI is used, ensuring that stakeholders—learners, educators, community partners—understand the nature of AI-generated outputs and potential limitations.
Human-Centered Design & Ethics
We embed ethical considerations in every stage of AI development and deployment, placing human well-being, dignity, and autonomy at the forefront.
Data Privacy & Security
We uphold strict policies to protect user data. We aim to use personal information only as necessary to enhance AI capabilities responsibly, while maintaining confidentiality and security.
Accountability & Continuous Improvement
We regularly assess and refine our AI Products, striving to minimize unintended harm, reduce biases, and respond proactively to community feedback.
4. AI Products and Your Data
4.1 Types of AI Products
Our AI Products may include:
Generative AI Tools: Automated assistants that provide advice, draft text, develop lesson plans, or perform other generative tasks to support educators, small businesses, and community projects.
Analytics & Insights: Machine-learning models that analyze user-generated data to offer insights on learning outcomes, operational efficiency, or community engagement.
Personalized Learning Platforms: Adaptive educational technologies that tailor content to a learner’s progress and needs.
4.2 Data Collection & Use
User Input: When you interact with our AI Products—e.g., input text, prompts, questions, or other content—we may use this data to generate outputs and improve model performance.
Aggregated & Anonymized Data: We may collect and aggregate user interactions for internal research and to refine our AI Products. All personally identifiable information (“PII”) is removed or anonymized before such data is used for training or analysis (see the illustrative sketch after this list).
Training Models & Improving Services: NRIDL may use your inputs to enhance the accuracy and relevance of our AI solutions. However, we do not use private or proprietary user content (e.g., uploaded documents) to train third-party models unless explicitly authorized by the user.
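For illustration only, the sketch below shows one way user interactions might be stripped of direct identifiers and pseudonymized before aggregation. The field names, salt handling, and helper function are hypothetical, not a description of NRIDL’s actual pipeline, and salted hashing is pseudonymization rather than full anonymization; production systems would use stronger techniques and secret key management.

```python
import hashlib

# Hypothetical illustration of stripping PII and pseudonymizing a user ID
# before interaction data is aggregated for analysis. Field names and salt
# handling are assumptions for this example only.

PII_FIELDS = {"name", "email", "phone"}   # assumed direct identifiers
SALT = "rotate-me-per-dataset"            # placeholder; a real salt must stay secret

def anonymize_interaction(record: dict) -> dict:
    """Drop direct identifiers and replace the user ID with a salted hash."""
    cleaned = {k: v for k, v in record.items() if k not in PII_FIELDS}
    if "user_id" in cleaned:
        digest = hashlib.sha256((SALT + str(cleaned["user_id"])).encode()).hexdigest()
        cleaned["user_id"] = digest[:16]  # pseudonym; not reversible without the salt
    return cleaned

example = {"user_id": 42, "email": "learner@example.org", "prompt": "Plan a lesson"}
print(anonymize_interaction(example))  # email dropped, user_id pseudonymized
```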
5. Acceptable Use
5.1 Prohibited Uses
Our AI Products are intended for educational, humanitarian, and socially constructive purposes. Accordingly, you shall not use them for:
Illegal or Harmful Purposes: Any activity that violates local, national, or international laws, or that promotes violence, terrorism, or abuse.
Harassment or Discrimination: Content that is threatening, harassing, defamatory, hateful, or otherwise discriminatory toward individuals or groups.
Misinformation or Manipulation: Creating deceptive materials, deepfakes, political propaganda, or fraudulent content.
Spam or Malicious Content: Disseminating large-scale spam, malicious code, or content-farming materials.
Privacy Violations: Submitting others’ personal information without consent, or infringing upon data protection laws and regulations.
Intellectual Property Infringement: Violating copyright, trademark, or other intellectual property rights.
Prompt Injection or Exploits: Attempting to manipulate model behavior through crafted inputs, or to discover or exploit the underlying prompts, source code, or logic in unauthorized ways.
5.2 High-Risk Areas
Users must not deploy our AI Products in contexts deemed “high risk” under applicable AI legislation, such as the high-risk categories defined in the EU AI Act, unless explicitly approved in writing by NRIDL. This includes any use that may threaten individual rights, health, or public safety.
6. Compliance & Enforcement
6.1 Reporting Misuse
If you notice any misuse of our AI Products—such as generating harmful content, infringing on privacy, or otherwise violating these standards—please contact us at [Contact Email]. We are committed to investigating and taking appropriate corrective action.
6.2 Consequences of Violation
Violations of this Policy or our other terms may lead to suspension or termination of access to our Services. We reserve the right to take additional legal measures if warranted.
7. Use of Third-Party Service Providers
NRIDL may integrate third-party AI services to enhance features such as language translation, sentiment analysis, or generative text. These third-party providers remain subject to their own terms of use and policies, which we encourage you to review independently.
We strive to work only with partners who share our values of data privacy, safety, and ethical development.
8. Security, Privacy, and Trust
8.1 Security Measures
We employ technical and organizational safeguards to protect the confidentiality and integrity of data processed by our AI Products. Although no system is entirely immune to security threats, we regularly review and update our protocols to mitigate risks.
8.2 Ethical & Safety Reviews
Before releasing major updates or new AI features, we conduct ethical and safety reviews to identify and address potential biases, harmful outcomes, or compliance issues. We encourage users to exercise caution and judgment when relying on AI-generated content, especially in critical domains such as public health or financial advice.
8.3 Transparency & Disclosure
Features that rely on third-party AI platforms will be clearly marked, and we will provide appropriate disclosures indicating the nature of the AI being used. We encourage users to clearly identify content as AI-generated to maintain transparency and trust.
9. Updates to this Policy
NRIDL reserves the right to update or modify this AI Policy at any time. Significant changes will be announced on our website or via other relevant channels, and the updated version will be effective immediately upon posting. Your continued use of our AI Products after such changes implies acceptance of the revised terms.
10. Contact Us
If you have questions or concerns regarding this AI Policy or our AI practices, please reach out to us.
We value your feedback and remain committed to improving our AI Products in alignment with our mission of democratizing learning, closing the digital divide, and fostering inclusive, equitable innovation.
Responsible AI Development and Deployment Policy
Last Updated: December 16, 2024
1. Purpose and Scope
The National Research Institute for Democratized Learning (NRIDL) is committed to developing and deploying Artificial Intelligence (AI) technologies ethically, responsibly, and in a manner that respects human rights, privacy, and the broader public interest. This policy establishes governance frameworks, human oversight mechanisms, cybersecurity protections, and data stewardship principles for all AI-related initiatives undertaken by NRIDL.
This policy:
Defines measures to ensure compliance with relevant Canadian legislation, including Bill C-27 (which encompasses the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act, and the Artificial Intelligence and Data Act), as well as the Personal Information Protection and Electronic Documents Act (PIPEDA).
Sets forth standards for human oversight and accountability, privacy and data governance, fairness and equity, safety, and cybersecurity.
Applies to all NRIDL AI projects, research activities, partnerships, contractors, vendors, and collaborators who work with or on behalf of NRIDL.
2. Alignment with Canadian Legislation and Standards
NRIDL’s AI initiatives will comply with Canadian privacy and AI-related regulatory frameworks, including:
Bill C-27:
Consumer Privacy Protection Act (CPPA): Ensures personal information is handled lawfully, with meaningful consent, data minimization, and transparency measures in place.
Personal Information and Data Protection Tribunal Act: Acknowledges the role of an independent tribunal in addressing complaints and ensuring accountability in data protection.
Artificial Intelligence and Data Act (AIDA): Guides the responsible design, development, and deployment of AI, especially for high-impact systems.
PIPEDA: Continues to govern the fair handling of personal data, ensuring adherence to its principles such as accountability, identifying purposes, consent, and limiting use, disclosure, and retention.
3. Human Oversight and Accountability
Governance Structure:
NRIDL will establish an AI Ethics and Compliance Board (AIECB) comprising cross-functional members (legal, data protection, engineering, policy, ethics, community representatives). The AIECB will:
Review AI projects at key lifecycle stages (design, testing, deployment, post-deployment monitoring).
Oversee ethical impact assessments and privacy impact assessments to ensure compliance with CPPA, AIDA, and PIPEDA.
Provide guidance on risk management, address stakeholder concerns, and resolve internal disputes related to AI ethics.
Responsibility Assignments:
NRIDL will designate accountable roles for each project phase:
Project Lead: Ensures compliance with this policy and oversees day-to-day activities.
Data Privacy Officer: Validates that data handling aligns with CPPA, PIPEDA, and internal privacy frameworks.
Security Officer: Ensures cybersecurity standards and practices are upheld.
Legal Counsel: Confirms compliance with applicable legislation and regulatory standards.
Human-in-the-Loop (HITL):
For high-risk AI applications (e.g., those impacting individual rights, well-being, or educational opportunities), outputs will be subject to human review before final decisions are implemented. This provides a fail-safe check and preserves accountability, preventing automated decisions that could produce unjust outcomes (a minimal illustration appears at the end of this section).
Transparent Documentation:
NRIDL will maintain documentation detailing the purpose, design choices, data sources, model architectures, and validation methodologies for each AI system. This supports audits, external reviews, and legal compliance.
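As a simplified illustration of the human-in-the-loop principle above, the sketch below routes high-risk outputs to a human review queue rather than applying them automatically. The risk score, threshold, and queue structure are assumptions for this example, not NRIDL’s actual review workflow.

```python
from dataclasses import dataclass

# Hypothetical HITL gate: outputs flagged as high-risk are held for
# reviewer sign-off instead of being auto-applied. The risk heuristic
# and queue are illustration-only assumptions.

@dataclass
class AIOutput:
    content: str
    risk_score: float  # assumed to come from an upstream risk classifier

REVIEW_THRESHOLD = 0.5
review_queue: list[AIOutput] = []

def dispatch(output: AIOutput) -> str:
    """Auto-apply low-risk outputs; hold high-risk ones for human review."""
    if output.risk_score >= REVIEW_THRESHOLD:
        review_queue.append(output)
        return "held for human review"
    return "applied automatically"

print(dispatch(AIOutput("Suggested reading list", risk_score=0.1)))
print(dispatch(AIOutput("Grade override recommendation", risk_score=0.9)))
```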
4. Cybersecurity Measures
Data Security and Encryption:
All data used for AI training and inference, especially personal or sensitive data, will be encrypted both at rest (e.g., AES-256) and in transit (e.g., TLS). Access to data will be restricted to authorized personnel with a legitimate operational need (a minimal encryption illustration appears at the end of this section).
Identity and Access Management (IAM):
NRIDL will implement strict access controls, enforce multi-factor authentication (MFA), and adopt the principle of least privilege to prevent unauthorized access. User accounts and permissions will be reviewed regularly.
Secure Software Development Lifecycle (SSDLC):
Security best practices will be integrated throughout the AI development process, including code reviews, automated vulnerability scanning, and adherence to secure coding standards. AI components will be regularly patched and updated.
Penetration Testing and Vulnerability Assessments:
NRIDL’s IT security team or appointed third parties will conduct periodic penetration tests and vulnerability assessments on AI systems and associated infrastructure. Findings will be addressed promptly, and remediation steps documented.
Incident Response and Breach Notification:
In the event of a security incident, NRIDL will follow a formal incident response plan, aiming for rapid containment, eradication of threats, and system recovery. Any reportable breaches will be disclosed to affected parties and relevant authorities in accordance with CPPA and PIPEDA notification requirements.
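To make the encryption-at-rest requirement concrete, here is a minimal Python sketch using the widely available `cryptography` package (AES-256-GCM). Generating the key inline is for illustration only; in production the key would come from a key-management service and nonce uniqueness would be enforced.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Minimal sketch: AES-256-GCM encryption of a sensitive record at rest.
# In practice the key would come from a KMS, never be generated inline,
# and nonces would be tracked to guarantee they are never reused.

key = AESGCM.generate_key(bit_length=256)   # 256-bit key (AES-256)
aesgcm = AESGCM(key)

record = b"learner_id=42; progress=0.83"    # hypothetical sensitive payload
nonce = os.urandom(12)                      # unique 96-bit nonce per encryption
ciphertext = aesgcm.encrypt(nonce, record, associated_data=None)

# Store the nonce alongside the ciphertext; decrypt with the same key and nonce.
assert aesgcm.decrypt(nonce, ciphertext, None) == record
```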
5. Data Privacy and Governance
Data Minimization and Purpose Limitation:
NRIDL will collect and use only the minimum amount of personal data necessary for AI training and operations. Purpose specification is a guiding principle: personal information will only be used for the reasons stated at the time of collection, as required by CPPA and PIPEDA.
Consent and Transparency:
Where personal data is involved, individuals will be informed about data usage in AI systems, including the intended purposes, potential risks, and safeguards. NRIDL will provide clear, accessible privacy notices and obtain meaningful consent where required.
Data Stewardship and Integrity:
Data stewards will ensure that datasets are accurate, relevant, and current. Regular data quality checks and updates, alongside strict version control and logging, help safeguard data integrity.
6. Fairness and Equity
Bias Mitigation Measures:
NRIDL will actively test for and mitigate biases within AI models by:
Employing diverse and representative training datasets.
Applying fairness metrics (e.g., demographic parity, equalized odds) and conducting bias audits (see the illustrative sketch after this list).
Using debiasing techniques (e.g., re-weighting, adversarial de-biasing) to minimize disparate impacts on marginalized communities.
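As a concrete illustration of the fairness metrics named above, the following sketch computes a demographic parity difference and an equalized-odds gap on synthetic binary data. The data and group attribute are assumptions for this example; real audits would use held-out evaluation data and the actual group definitions relevant to each system.

```python
import numpy as np

# Illustrative fairness check, assuming binary predictions `y_pred`,
# true labels `y_true`, and a binary group attribute `group`.
# Metric definitions are standard; the data below is synthetic.

def demographic_parity_diff(y_pred, group):
    """Difference in positive-prediction rates between the two groups."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

def equalized_odds_gap(y_true, y_pred, group):
    """Max gap in true-positive and false-positive rates across groups."""
    gaps = []
    for label in (1, 0):  # TPR gap when label == 1, FPR gap when label == 0
        mask = y_true == label
        r_a = y_pred[mask & (group == 0)].mean()
        r_b = y_pred[mask & (group == 1)].mean()
        gaps.append(abs(r_a - r_b))
    return max(gaps)

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
group = rng.integers(0, 2, 1000)
y_pred = rng.integers(0, 2, 1000)

print("demographic parity diff:", demographic_parity_diff(y_pred, group))
print("equalized odds gap:", equalized_odds_gap(y_true, y_pred, group))
```

Values near zero indicate parity on these metrics; persistent gaps would trigger the debiasing and retraining steps described in this section.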
Stakeholder Engagement and Inclusivity:
NRIDL will engage external experts, community organizations, and individuals potentially affected by AI systems. Feedback from these engagements will inform model design, feature selection, and evaluation criteria to support equitable outcomes.
Continuous Improvement:
NRIDL will continuously monitor AI outcomes and user feedback to identify new sources of bias or inequity. The AIECB will recommend adjustments, retraining, or alternative methods if disparities persist.
7. Safety and Reliability
Robust Testing and Validation:
Prior to deployment, AI systems will undergo rigorous testing, including adversarial testing, stress testing, and scenario-based evaluations. Safety benchmarks will be established, and fail-safes built into critical systems to handle malfunctions or unexpected behaviors gracefully.
Explainability and Interpretability:
NRIDL will strive to use explainable AI methods, especially for high-stakes decisions affecting learners, educators, or the public. Clear explanations of AI-driven outcomes will foster trust and make it easier to identify and correct errors.
Monitoring and Maintenance:
Post-deployment, AI systems will be monitored to detect performance degradation, anomalies, or unsafe outputs. Maintenance protocols, including periodic model updates and retraining, will ensure ongoing reliability and adherence to evolving legal and ethical standards.
8. Compliance, Audits, and Certification
Training and Education:
NRIDL will provide regular training for all staff involved in AI development and management, ensuring familiarity with legal requirements under Bill C-27, PIPEDA, and internal ethical standards.
Internal Audits:
Periodic internal audits will evaluate compliance with this policy, as well as the effectiveness of data protection, cybersecurity measures, and fairness mechanisms. Audit results will inform continuous improvements.
External Reviews and Certifications:
NRIDL may engage accredited third parties to conduct external audits or seek relevant certifications (e.g., ISO/IEC 27001 for information security, ISO/IEC 27701 for privacy information management). Such certifications and reviews enhance transparency and trust with stakeholders.
9. Policy Review and Updates
This policy will be reviewed at least annually, or more frequently as needed, to adapt to technological changes, emerging best practices, and evolving legal requirements (including updates to Bill C-27 and PIPEDA). Amendments will be communicated promptly to all relevant parties.
10. Public Accountability and Transparency
NRIDL will publish high-level summaries of its Responsible AI practices, impact assessments, and key metrics to inform the public, stakeholders, and regulators. This transparency aligns with NRIDL’s mission to democratize learning and foster public trust in AI-driven innovations.