In 2018, Secretary of State for Education Damian Hinds issued a challenge to the education technology industry to help solve some of the most pressing problems facing education. The issues he outlined included developing more inclusive teaching practices and improving access to flexible teacher professional development, as well as improving assessment and providing support to school administrators.
At UCL’s EDUCATE programme, we have worked with more than 150 small companies that seek to address these challenges with cutting-edge technology, ranging from built-in AI chatbots that can provide real-time support to learners, to assessment tools that enable teachers to use criterion-based marking to capture students’ progress over time. However, education technology suppliers can’t solve these problems alone (Luckin, 2016). Industry, academia and schools need to work together to co-develop high-quality technology that has a positive impact on its users.
We aim to raise the level of debate concerning the efficacy of education technology across the educational community. Schools that are informed consumers of education technology can ensure that the products they purchase make the best use of their increasingly limited budget and time and provide a real benefit to learners. According to a recent survey, however, only 44 per cent of primary and 31 per cent of secondary schools in England report that the implementation of education technology has helped them to achieve their original objectives (BESA, 2017). This article aims to help schools to use technology to achieve their intended goals by providing advice and guidance on conducting their own pilot study or evaluation of technology.
Grounded in research evidence
As a research institution, evidence forms the foundation of the work we do. In our work with EDUCATE companies, one of the first learning activities involves identifying different types of evidence that might influence the design, marketing claims or use of their product. Not all companies use evidence in this way, so it becomes equally important for teachers and schools to understand the nature of the available evidence base for education technology, in order to make informed decisions about what might be suitable for their context and needs. The range of evidence types includes:
- Anecdotal evidence, which regularly takes the form of a testimonial. It is subjective, and the sample size is often small. Anecdotal evidence can inform you about a product that has worked for a trusted colleague or another school like yours. However, it would be unwise to base your purchasing decision on this alone.
- Descriptive evidence may be qualitative or quantitative and is useful in providing data about the characteristics of users of a particular product or the environment in which product implementation took place. Descriptive evidence is useful for schools trying to identify products that have worked in schools with similar student populations or contexts.
- Correlational evidence establishes that two measurable variables are related, but one does not necessarily cause the other. This type of evidence is common in social science research, in which causality is so difficult to substantiate.
- Causal evidence is often thought to be the ‘gold standard’ for scientific research because it demonstrates that something causes something else to happen. It is extremely difficult to establish causality outside laboratory research, in which the intervention can be isolated as the only variable that could have caused the observed outcome.
The most important point about research evidence is that more than one type is usually needed to make an informed decision. For example, even if a research study is able to establish causality, this still might not provide you with all of the information you need to understand whether the product is appropriate for your context.
How to be a discerning consumer
The relatively small number of schools that report that education technology has helped them achieve their original objectives may suggest that much of the technology in use by schools either does not perform in the way its designers claim or was not an appropriate choice to address these schools’ problems. The following steps can help to avoid these issues.
Can your issues be addressed by technology?
The first step is to identify and prioritise problems facing your school that might be solved using technology (not all can be!). The best way to do this is to conduct a needs assessment: a complete evaluation of your school’s needs in relation to the priorities for its development, as outlined in the school development plan. This exercise should include all relevant members of school staff, not just the senior leadership team and/or technology lead. A needs assessment is more about the school’s objectives than it is about the technology itself.
After gathering the needs from your school community, conduct an inventory of the technology in your school. Many schools have cupboards or networks awash with previously purchased resources and it is possible that an existing technology resource might be used or adapted to address your needs. This inventory should also include human resources. You might want to survey your staff to understand their capacity – and willingness – to adopt new technology. The results of such a survey could help you to ascertain how much professional development staff might need before piloting or adopting new technology.
Finding the right technology to meet your needs
Once you know you need technology, how do you find the best product to address your needs? Schools are likely to consult other teachers when inquiring about the efficacy of a particular product (BESA, 2017); as discussed previously, this kind of anecdotal evidence does have its value but should not be the only evidence consulted. A company’s website, which is a common source of information for schools, might provide a lot of information about the product but it is rare that any of the information presented will be negative. It is worth searching Google Scholar to see whether any research has been conducted on the product. Some companies may have commissioned formal evaluations and may have even posted findings on their websites – ask them to send you the original research report! This can help answer the following questions:
- Does the product do what the company claims it does?
- What are the required conditions for the product to be successful?
- Are the findings consistent with the data that was collected?
- Were the schools or learners that formed the sample similar to yours?
This might help to determine whether the findings presented in the company’s marketing materials are both reliable and relevant for your context – or, more importantly, give you a set of questions to ask the company or other schools using that technology.
Try before you buy
Today there are platforms that allow schools the opportunity to pilot technology – free of charge – before making a purchasing decision. Piloting education technology is not a quick, two-week project, however. Your school will need to organise a team to plan the pilot, determine the sample and time period for the pilot and organise the collection and storage of data.
In our experience, successful pilots tend to follow the steps below.
- Write a list of questions that you need the pilot to answer. What are you hoping to learn from it? These are the research questions you hope will be answered by the data you collect during the trial of the product.
- Determine the timing and schedule for the pilot. The best timing for those who will be involved in the pilot might be influenced by when you need to make a purchasing decision, or by your school timetable or exam schedule. In addition, the supplier should provide guidance as to how long the product should be used to achieve the stated outcomes.
- Draw a sample of teachers and students to participate in the pilot. Justify why you have made this selection.
- Collect your data. This includes data on your users’ experiences, as well as data that will help you answer your research questions. The nature of the data that you collect to understand the learning impact will vary depending on the technology. For example, evaluating the impact of primary pupils’ use of a programmable toy might use systematic observations and interviews with children. If you want to understand whether a product influences a student’s knowledge or skills, you will want a baseline measure of these before the student interacts with the product and another measure after they have used it.
- Review the results of your pilot. Understand how your school’s context or special circumstances may have influenced your results. Additionally, if you altered the implementation of the product from what was recommended by the supplier, did that affect the intended outcome?
- Make a purchasing decision. Revisit your original research questions and determine whether the product trialled has actually met your school’s needs. If so, determine whether you will purchase the product.
- Make an implementation plan, which must consider how teachers and students learn to become confident users of the technology. All teachers (and/or students) will have a ‘first lesson’, which must be carefully planned, implemented and evaluated – or there is unlikely to be a second! It is rare for whole groups of teachers to share the same enthusiasm as those who chose the product, so be realistic in your initial expectations.
- Evaluate the implementation against your objectives. Make sure you continue to seek feedback from teachers and learners – and, as the technology gains new functionality or content, update your implementation plans to match.
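For schools comfortable with a little scripting, the baseline-and-follow-up comparison in the data-collection step above can be sketched in a few lines of Python. Every name and score here is invented for illustration; a real pilot would need a proper analysis and, ideally, a comparison group.

```python
# Hypothetical sketch of a baseline vs. post-pilot comparison.
# All scores below are invented for illustration, not real pilot data.
from statistics import mean

def mean_gain(baseline, post):
    """Average per-student gain between the baseline and post-pilot measures."""
    return mean(after - before for before, after in zip(baseline, post))

# Illustrative scores for five pilot students (e.g. marks out of 20),
# listed in the same student order for both measures.
baseline_scores = [11, 9, 14, 10, 12]
post_scores = [13, 12, 15, 10, 14]

print(f"Mean baseline score: {mean(baseline_scores):.1f}")
print(f"Mean post-pilot score: {mean(post_scores):.1f}")
print(f"Mean gain: {mean_gain(baseline_scores, post_scores):.1f}")
```

Note that a summary like this only describes the pilot group itself: without a comparison group it cannot show that the product caused any gain, which is exactly the distinction between correlational and causal evidence drawn earlier in this article.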
For more information about the EDUCATE programme, see: educate.london