Dr. Ying Xu, Assistant Professor of Learning Sciences & Technology, University of Michigan: Evidence Tier 4 & Tier 1

In our interview with Dr. Ying Xu, Assistant Professor of Learning Sciences and Technology at the University of Michigan, she reflects on the following questions: “As an education researcher, can you describe the co-design elements of your partnership-based work with children, parents, and schools? How have you engaged with participants to investigate and build evidence-based educational technologies for kids?”

Dr. Ying Xu’s team, the Intelligent Media Lab at the University of Michigan, researches children’s informal STEM learning and engagement with interactive conversational agents in educational digital media. Dr. Xu spoke with us about the Tier 4 and Tier 1 evidence-building activities her team engages in.

The project is funded through the NSF Advancing Informal STEM Learning (AISL) program and involves developing interactive conversational versions of PBS KIDS STEM education videos by incorporating an intelligent tutor. The intelligent tutor uses speech recognition and natural language processing to interact with students as they watch the STEM video content, asking questions and providing real-time tailored feedback.
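The article does not describe how the tutor is implemented, but the interaction pattern it outlines (pause the video at a checkpoint, ask a question, interpret the child’s spoken answer, and respond with tailored feedback) can be illustrated with a minimal sketch. The checkpoint structure, the keyword matching, and the `transcribe_speech` placeholder below are illustrative assumptions, not the team’s actual system.

```python
from dataclasses import dataclass

@dataclass
class Checkpoint:
    """A point in the video where the tutor pauses and asks a question."""
    time_sec: int
    question: str
    target_keywords: list[str]   # concepts an adequate answer should mention
    feedback_correct: str
    feedback_hint: str

def transcribe_speech() -> str:
    """Placeholder for a speech-recognition call; here we simply read typed text."""
    return input("Child's answer: ")

def mentions_target_concept(answer: str, keywords: list[str]) -> bool:
    """Very simple stand-in for NLP: keyword matching on the transcribed answer."""
    answer = answer.lower()
    return any(keyword in answer for keyword in keywords)

def run_tutor(checkpoints: list[Checkpoint]) -> None:
    """Walk through the video's checkpoints, asking questions and giving feedback."""
    for cp in checkpoints:
        print(f"[video pauses at {cp.time_sec}s]")
        print("Tutor:", cp.question)
        answer = transcribe_speech()
        if mentions_target_concept(answer, cp.target_keywords):
            print("Tutor:", cp.feedback_correct)
        else:
            print("Tutor:", cp.feedback_hint)
        print("[video resumes]\n")

if __name__ == "__main__":
    demo = [
        Checkpoint(
            time_sec=95,
            question="Why do you think the ice melted faster in the sun?",
            target_keywords=["heat", "warm", "sun", "temperature"],
            feedback_correct="That's right! The sun's heat warms the ice and melts it faster.",
            feedback_hint="Good try! Think about what the sunlight adds to the ice.",
        )
    ]
    run_tutor(demo)
```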

Dr. Xu’s research team carries out research in two parts: development of the interactive videos, followed by evaluation of the videos. Dr. Xu explains:

We engage in different forms of evidence-building processes with our partners depending on the stage and nature of our project.

During the development phase, Dr. Xu’s team seeks formative feedback from media partners and users to ensure the design aligns with industry practices and guidelines. At this stage, the team engages in Tier 4 activities with a small number of participants, gathering feedback quickly so that the team can compare design options and settle on optimal solutions.

This process includes inviting teachers and students from a variety of language backgrounds, including Spanish, English, and Mandarin, to test whether the intelligent tutor’s questions are clear, as well as conducting on-site field testing to ensure the application runs smoothly on school Wi-Fi and in noisy classroom environments. The team then uses formative feedback from these Tier 4 activities to rapidly update and refine the design.

Once Dr. Xu’s team has a relatively stable version of the design, they move to the evaluation stage. At this stage, the team engages in Tier 1 evidence-building activities, working with a larger pool of hundreds of multilingual students to evaluate whether the interactive videos with intelligent tutors enhance students’ understanding of scientific concepts.

These evaluations are typically carried out using experimental methods, particularly randomized controlled trials. For example, in one evaluation, students were randomly divided into two groups: one group watched STEM educational videos in the interactive format accompanied by the intelligent tutor, while the other group watched the videos in their original, non-interactive format. The team then compared students’ understanding of scientific concepts presented in the videos across the two formats.
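As an illustration only, the logic of such a comparison can be sketched as below. The group labels, the simulated post-test scores, and the use of a Welch t-test are assumptions made for demonstration, not Dr. Xu’s team’s actual design or analysis.

```python
import random
from scipy import stats

random.seed(0)

# Randomly assign a pool of students to the two video formats.
students = [f"student_{i}" for i in range(200)]
random.shuffle(students)
interactive_group = students[:100]   # interactive videos with the intelligent tutor
control_group = students[100:]       # original, non-interactive videos

# Hypothetical post-test scores of science-concept understanding (illustrative data only).
interactive_scores = [random.gauss(7.2, 1.5) for _ in interactive_group]
control_scores = [random.gauss(6.5, 1.5) for _ in control_group]

# Compare mean understanding across the two formats with a Welch t-test.
result = stats.ttest_ind(interactive_scores, control_scores, equal_var=False)
print(f"Interactive mean: {sum(interactive_scores) / len(interactive_scores):.2f}")
print(f"Control mean:     {sum(control_scores) / len(control_scores):.2f}")
print(f"Welch t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```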

Dr. Xu’s team has found that students who engage with STEM educational content through the interactive intelligent tutor perform better than those who watch the non-interactive video versions of the same content.

While these findings provide some evidence that the interactive videos improve students’ learning, Dr. Xu closed with the caveat that positive results from multiple studies are needed to claim a strong evidence base. As such, her team is working to replicate these studies with students from diverse multilingual and socioeconomic backgrounds, and in different contexts, to ascertain whether the same outcomes hold.

Recognizing the inherent challenges for individual research groups undertaking such wide-ranging research, Dr. Xu emphasized the need for Open Science, a set of principles and practices that aim to make scientific research from all fields accessible to all:

We advocate for Open Science and share the source code of our program, and also the anonymized research data. This will allow other research groups to contribute to the evidence-building process collectively.

The ultimate goal of these Open Science-supported replication efforts is to achieve evidence-based integration of cutting-edge natural language processing (NLP) technologies into children’s STEM media, in a manner that allows children to interact with educational media in meaningful and educationally beneficial ways.