The use of artificial intelligence (AI) in educational technology is growing rapidly (Grand View Research, 2025), and schools are being encouraged to adopt AI by a range of actors from states (Dwyer, 2025) to the federal government (White House, 2025) to industry (Gulezian, 2025; Microsoft, 2025). Used well, AI may provide significant educational benefits. But the use of AI in a high-stakes context such as K–12 education also raises concerns about how well these systems will work, whether they will be worth the investment, whether they will introduce bias or other harms, and more (Quay-de la Vallee, 2025). One commonly recommended, broadly endorsed approach to support the responsible use of edtech products that incorporate AI is transparency from the developers and designers of these systems.1
Transparency about edtech products that incorporate AI can help school administrators (anyone at a school or local education agency who is making decisions about whether and how AI is used) select systems that will provide them with the functionality they need; ensure that those systems are suited for their contexts; identify potential risks and ways to mitigate them; and push the field at large toward improved development practices, outcomes, equity, and more. Recognizing these potential benefits, different edtech industry associations have begun pushing their members to commit to transparency about the use of AI in their products. However, little information is available about what constitutes meaningful transparency in this context and to what extent edtech companies are meeting the needs of schools, educators, students, and parents.

To better assess the current state of transparency in edtech products in the K–12 context that incorporate AI, the Center for Democracy & Technology (CDT) has developed a rubric of eight key elements of transparency that contribute to a full picture of an edtech product and can be used to inform decisions such as whether to procure the product and how to use it:
- Use and Context Limitations: AI systems are often designed for a specific context or task, and using them outside these confines can lead to unexpected and potentially harmful behavior. Being able to assess these parameters is critical for educators to determine which tool to select and the appropriate confines and safeguards needed.
- Underlying AI Models: Many edtech products that incorporate AI are built on top of pre-existing models. The strengths and weaknesses of an underlying AI model may be present in the end product, contributing to its efficacy and risks. Knowing which model was used and how it affects the final tool is an important part of assessing how the tool will work in a given deployment.
- Training Data and Methodologies: How a model is trained, including the data used, affects how an AI system performs as the model will reflect any underlying assumptions, biases, gaps, and data quality issues. Decision-makers will need to assess whether the training data and methods make the system likely to perform well for their specific needs.
- Domain Adaptation: For products incorporating a pre-existing model, fine-tuning the system for an education context can help ensure that the product is effective (e.g., results and outputs reflect best practices in teaching and learning, not just a synthesis of any available information on a given topic) and mitigate risks that are of particular concern for educational use. As a result, decision-makers need to know what adaptations were made to assess if those changes are sufficient for their needs.
- Testing and Evaluation: Testing and evaluating products that incorporate AI is important to ensure that they are performing as expected, producing the intended results, and not causing unintended consequences. Providing decision-makers with access to a description of the tests and evaluations performed and to the results of these analyses supports informed choices and effective product adoption.
- Data Governance: AI systems consume and produce considerable data that, if not protected, can expose students and schools to privacy and other harms. Understanding how data is kept secure and managed appropriately is crucial not only for fulfilling legal obligations but also for keeping students safe.
- Governance Structures: Governing AI systems is a complex process, incorporating steps such as determining data use policies, ensuring that staff are trained, communicating with stakeholders, and more. Companies should have frameworks in place for assessing their products from an ethical standpoint and ensuring that their products are used in an ethical manner. Companies should be clear about how they make these decisions, including who is involved and what decision-making structures they have in place, to ensure that their systems and data are appropriately managed.
- Information Accessibility: The previous elements are critical for decision-makers trying to select appropriate AI systems, and the information needs to be made available to them in an accessible and comprehensive way, such as by ensuring that the information is available in one prominent place; is written in plain, accessible language; and is well organized.
Using this rubric, CDT reviewed publicly available information for more than 100 edtech companies and found that:
- On average, edtech companies offer little transparency about their products, with companies receiving an average transparency score of 4 out of a possible 16.
- When edtech companies attempt to be transparent, the information is often focused on select categories, namely Use and Context Limitations and Information Accessibility.
- Even companies with middling overall transparency tend to make information about their use of AI easy to access.
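The figures above imply a simple tally: eight rubric elements contributing to a 16-point maximum. As an illustrative sketch only — the 0–2 scale per element is an assumption inferred from those figures, and CDT's actual scoring methodology may differ — such a score could be computed as:

```python
# Hypothetical tally of a rubric-based transparency score.
# Assumption: each of the eight elements is scored 0-2, yielding the
# 16-point maximum described in the report. This is an illustration,
# not CDT's actual methodology.

RUBRIC_ELEMENTS = [
    "Use and Context Limitations",
    "Underlying AI Models",
    "Training Data and Methodologies",
    "Domain Adaptation",
    "Testing and Evaluation",
    "Data Governance",
    "Governance Structures",
    "Information Accessibility",
]

def transparency_score(scores: dict) -> int:
    """Sum per-element scores (0-2 each); elements not scored count as 0."""
    for element, value in scores.items():
        if element not in RUBRIC_ELEMENTS:
            raise ValueError(f"Unknown rubric element: {element}")
        if not 0 <= value <= 2:
            raise ValueError(f"Score for {element} must be 0-2, got {value}")
    return sum(scores.get(e, 0) for e in RUBRIC_ELEMENTS)

# Example: a vendor that documents only the two most commonly
# addressed categories would land at the reported average of 4.
example = {
    "Use and Context Limitations": 2,
    "Information Accessibility": 2,
}
print(transparency_score(example))  # 4 out of a possible 16
```

A tally like this makes the headline finding concrete: a vendor can reach the 4-point average by addressing only two of the eight categories.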
A lack of robust transparency from edtech vendors hinders the ability of educators and educational institutions to find effective, safe, and equitable AI-based edtech products. The rubric CDT sets out in this report provides guidance to vendors on what information they should provide about their products that incorporate AI, as well as what school administrators should demand and expect before they purchase and use such products.
