Scoring Model
Every tool in the District AI Index is evaluated across four dimensions. Each dimension is scored from 0 to 10 by our editorial team.
Ease of Use
Weight: 20%. How quickly can a teacher start using this tool productively? We evaluate onboarding friction, interface clarity, learning curve, and whether the tool fits into existing workflows without significant training.
Specific factors evaluated:
- Time to first productive use
- Interface clarity and navigation
- SSO / LMS integration simplicity
- Quality of onboarding materials
Instructional Value
Weight: 40% (highest). Does this tool genuinely improve teaching and learning outcomes? We evaluate alignment with pedagogical best practices, standards support, differentiation capabilities, and whether the tool enhances or replaces teacher judgment.
Specific factors evaluated:
- Alignment to learning science / pedagogy
- Standards and curriculum alignment
- Differentiation and scaffolding support
- Assessment and feedback quality
- Student engagement and active learning
Data Privacy
Weight: 20%. How responsibly does this tool handle data? We evaluate FERPA and COPPA compliance, data collection minimization, DPA availability, transparency of privacy policies, and whether student data is used for model training.
Specific factors evaluated:
- FERPA compliance documentation
- COPPA compliance (especially for K–8 tools)
- Data Processing Agreement availability
- SOC 2 Type II certification
- Student data used for AI model training (red flag if yes)
- Clarity and accessibility of privacy policy
Accessibility
Weight: 20%. Can all educators and students use this tool effectively? We evaluate WCAG conformance, screen reader compatibility, keyboard navigation, and VPAT/ACR publication.
Specific factors evaluated:
- WCAG 2.1 AA conformance
- VPAT/ACR availability
- Keyboard navigation support
- Screen reader compatibility
- Mobile responsiveness
- Language/translation support
Overall Score Calculation
Overall Score =
(Ease of Use × 0.20)
+ (Instructional Value × 0.40)
+ (Data Privacy × 0.20)
+ (Accessibility × 0.20)
Instructional Value carries the highest weight because our primary audience of educators and district leaders prioritizes tools that genuinely improve teaching and learning outcomes.
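The weighted sum above can be sketched in a few lines. The weights are the ones published in this methodology; the function and dictionary-key names are illustrative, not part of any actual Index codebase.

```python
# Weights from the scoring model: Instructional Value 40%, all others 20%.
WEIGHTS = {
    "ease_of_use": 0.20,
    "instructional_value": 0.40,
    "data_privacy": 0.20,
    "accessibility": 0.20,
}

def overall_score(scores: dict) -> float:
    """Each dimension score is 0-10; the weighted result is also 0-10."""
    return round(sum(scores[dim] * w for dim, w in WEIGHTS.items()), 2)

print(overall_score({
    "ease_of_use": 8.0,
    "instructional_value": 9.0,
    "data_privacy": 7.0,
    "accessibility": 6.0,
}))  # 7.8
```

Because Instructional Value is weighted at 0.40, a one-point change there moves the overall score twice as much as a one-point change in any other dimension.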
Privacy Flag Logic
Each tool receives one of three privacy classifications based on its assessed privacy level:
District Ready
Privacy Level = “High” — Tool has strong, verified privacy protections suitable for district-wide deployment. FERPA compliant, DPA available, no student data used for model training.
Teacher Use Only
Privacy Level = “Medium” — Tool has reasonable privacy practices but may lack full compliance documentation or have gaps that require individual teacher judgment rather than district-wide mandated deployment.
Use Caution
Privacy Level = “Low” — Tool has significant privacy concerns or insufficient documentation. Not recommended for use with student data without additional district-level review and safeguards.
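The three-way mapping above is a straightforward lookup; this sketch uses the level strings and labels from the methodology, while the function name and error handling are illustrative assumptions.

```python
# Privacy level -> displayed classification, per the flag logic above.
PRIVACY_FLAGS = {
    "High": "District Ready",
    "Medium": "Teacher Use Only",
    "Low": "Use Caution",
}

def privacy_flag(level: str) -> str:
    try:
        return PRIVACY_FLAGS[level]
    except KeyError:
        # Rejecting unknown levels is an assumption; a real pipeline
        # might instead default to the most conservative label.
        raise ValueError(f"Unknown privacy level: {level!r}")

print(privacy_flag("High"))  # District Ready
```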
Monthly Rankings Methodology
“Top Tools This Month” rankings use a weighted formula:
Monthly Ranking Score =
(Overall Score × 0.50)
+ (Recency Score × 0.25)
+ (Engagement Score × 0.15)
+ (Completeness Score × 0.10)
- Recency: Recently reviewed tools receive a boost (reviewed today = 1.0, 180+ days = 0.0)
- Engagement: Percentile rank of click-through volume in the past 30 days
- Completeness: Percentage of optional profile fields filled (privacy notes, accessibility notes, VPAT, etc.)
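The ranking formula and its components can be sketched as follows. The weights and the recency endpoints (reviewed today = 1.0, 180+ days = 0.0) come from the methodology; the linear decay between those endpoints, the normalization of Overall Score onto a 0-1 scale, and all names are assumptions for illustration.

```python
def recency_score(days_since_review: int) -> float:
    """Linear decay from 1.0 (reviewed today) to 0.0 at 180+ days.
    The linear shape between the published endpoints is an assumption."""
    return max(0.0, 1.0 - days_since_review / 180)

def monthly_ranking_score(overall: float, days_since_review: int,
                          engagement: float, completeness: float) -> float:
    """overall is 0-10; engagement (percentile rank) and completeness
    (fraction of optional fields filled) are 0-1."""
    return ((overall / 10) * 0.50        # assumed 0-1 normalization
            + recency_score(days_since_review) * 0.25
            + engagement * 0.15
            + completeness * 0.10)
```

Under these assumptions, a tool with a 7.8 overall score, reviewed 30 days ago, in the 90th engagement percentile, with 75% of optional fields filled, scores about 0.81 on a 0-1 scale.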
Compliance Documentation — Important Disclaimer
What our compliance signals mean — and what they don't mean:
The District AI Index editorial team reviews publicly available vendor documentation to report compliance documentation status. We do not certify compliance. Vendors are responsible for their own legal claims.
When we mark a tool as “FERPA Compliant,” “COPPA Compliant,” “SOC 2 Type II,” “DPA Available,” or “VPAT/ACR Available,” we mean one of the following:
- “Available” — The vendor has published documentation publicly accessible via URL (linked on the tool profile)
- “Partial” — The vendor references the standard but documentation is incomplete, gated, or outdated
- “Unavailable” — The vendor has explicitly stated the standard is not met, or no documentation exists
- “Unknown” — We have not been able to verify documentation either way
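The four documentation statuses above form a closed vocabulary, which a tool profile could model as an enumeration. This is an illustrative sketch, not the Index's actual schema.

```python
from enum import Enum

class DocStatus(Enum):
    AVAILABLE = "Available"      # published, publicly accessible via URL
    PARTIAL = "Partial"          # referenced, but incomplete/gated/outdated
    UNAVAILABLE = "Unavailable"  # explicitly not met, or no documentation
    UNKNOWN = "Unknown"          # could not be verified either way

# Hypothetical example: one status per compliance signal on a profile.
signals = {
    "FERPA": DocStatus.AVAILABLE,
    "COPPA": DocStatus.PARTIAL,
    "SOC 2 Type II": DocStatus.UNKNOWN,
}
```

An enum keeps the vocabulary closed: a typo like "Availible" fails loudly at parse time instead of silently becoming a fifth status.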
These signals reflect documentation status as of the last editorial review date. Compliance landscapes change. Vendor claims may change. Documents may be removed or updated.
Districts must independently verify all compliance claims directly with vendors as part of their procurement process. District AI Index provides a starting point for evaluation, not a substitute for legal, privacy, or accessibility review.
Editorial Independence
Our non-negotiable rules:
- Editorial scores are never influenced by paid relationships, affiliate partnerships, or listing tier.
- The monthly ranking algorithm explicitly excludes featured_flag and is_sponsored from its computation.
- Sponsored and featured listings are always visually labeled — never disguised as editorial picks.
- Vendors cannot pay to improve their editorial score or ranking position.
- We will decline or remove listings that misrepresent their privacy, compliance, or capabilities.
- All compliance documentation links are independently verified — we link to vendor pages, not vendor claims.
Corrections & Disputes
If a vendor or reader believes our evaluation contains an error:
- Email editorial@districtaiindex.com with the specific claim and supporting evidence.
- Our editorial team will review the claim within 5 business days.
- If the claim is verified, we will update the tool profile and note the correction.
- If the claim is disputed, we will respond with our reasoning.
We maintain a correction log. All significant changes to tool scores or classifications are documented with dates and reasons.
Questions about our methodology? Contact our editorial team.