It’s assessment time. You’ve just finished a unit on skip counting and you want to know how your students have progressed. So you plan out the assessment, get your assessment tool ready and pencil in a time to run it with your class. In reality, teachers know too well this isn’t as easy as it sounds. And the work doesn’t disappear once the assessment is complete. Because then we need to mark, input data, output data and analyse the results to achieve our ultimate goal of understanding what our students actually know.
But one of the most frustrating things about assessment is that our tools often don’t give us the information we need to inform our practice. The data isn’t granular enough, and it can be really difficult to interpret.
While assessment tools are designed to help us gather information and insights about our students, a lot of the time they actually result in extra admin work, unreliable or unusable data and frustration.
So, what do we need from our assessment tools? What features will give us data we can actually use without adding to our ever-growing admin lists? Let’s take a look.
Assessments should only exist so that learning can take place. Sounds like common sense, right? But think about it. So many of our assessments are implemented so we can meet reporting requirements. And a lot of assessment tools are designed with this in mind.
What if instead your assessment tool was centred on giving you the information that you need to increase your impact in the classroom? That would mean you’d be empowered with relevant information to inform your practice.
You could use the tool to build a post-assessment for the topic you’ve just finished teaching to determine student progress, plus add in pre-assessment questions to identify what your students already know of the next topic. This level of flexibility allows you to go back to cover off gaps in the previous topic, and enter the new topic knowing where you’ll need to spend more or less time with your class.
Again, this seems obvious. Of course your assessment tool should be able to do this. But most don’t. Most take a singular post-assessment view that gives you little more than a snapshot of how students went with the last topic. And this information is so high-level that you can’t pinpoint the exact content that they missed. Which doesn’t help you help them.
That’s why granular, trustworthy data is the next important feature of a great assessment tool.
Teachers can use data to achieve great outcomes for their students. But thorough, useful data can be difficult to collect. And sometimes the data we have can be really hard to interpret or not practical to use in the classroom.
While the goal of most of the assessments we undertake will be to show student progress, we also need post-tests to identify gaps from previous topics and pre-tests to provide an overview of student understanding before moving on to new topics.
This data informs teacher practice, allowing us to increase our impact with our students by focusing on what they’re actually ready to learn. We can centre our lesson plans on areas our class, or specific students, are weaker in — rather than giving every aspect of the unit equal weight. Not to mention that with this data, we can go back and fill gaps from the previous topic where needed.
But how do you get data that is granular enough to be used in this way? Firstly, you need a curriculum-aligned tool that provides data detailed enough to show which aspects of a topic a student has mastered and which they still need to work on.
Secondly, you need that data to be really easy to access. There’s no point having really granular data that takes hours to output, analyse or even find amongst multiple spreadsheets. Your assessment tool shouldn’t just have the capability to provide detailed information, it should display it in real time, in an easy-to-understand dashboard. That way, you can get the information you need and move into planning quickly, without needing to dedicate time to inputting, outputting or analysing information. Because a tool that adds to your workload isn’t really a ‘tool’ at all. It’s a burden, and one that you don’t have time for.
What underpins all of this, however, is the need for the data you have to be accurate.
While granular data can have a positive impact on our teaching practice, it can have the opposite effect when it’s not accurate. We can waste our time with incorrect information that has us focusing where we shouldn’t.
The assessment tool you use isn’t worth a thing unless the data it provides is accurate and trustworthy. But how can we be sure the tool that we use will give us information we can trust?
The first thing you should look for is a tool that’s been extensively tested. You want the tool to be proven to work with students from different backgrounds, at different levels and in different settings. If the tool was designed with thorough testing as part of the process, then you can be sure the creators have prioritised data accuracy.
Another thing to look for is the question types within the assessment. Does the tool have lots of different question types to allow students to show their understanding of a concept in different ways? As we know, young students should be assessed with activities that are age-appropriate, something some tools neglect because they are adapted from platforms used with older students. A tool that accounts for this will provide more accurate data.
Data can’t truly be accurate if the assessment tool you're using isn’t specifically designed for F-4 students. This is important for two reasons. Firstly, because you need the tool you use to take into account developing language and motor skills. And secondly, you want your students to be engaged in the assessment. Assessments can be tough to run at the best of times, so using a tool that feels more like an activity than a test will ensure students enjoy completing it.
A tool with an interface designed for young students, featuring big fonts and graphics, colours and gamification, will ensure students can accurately complete the assessment, and have fun while doing it.
A tool like Early Insights also includes features that reduce literacy demands, like questions being automatically read out to students, and students being able to answer questions by speaking to the device. This means teachers don’t need to spend time explaining questions to students or showing them how to answer. It has also been extensively tested with young students and EAL/D students to make sure it’s intuitive and accurately assesses understanding.
Teacher judgement remains vitally important in many types of assessment, which is why your tool needs to provide a place for this input.
The value of teacher observations cannot be overstated. Every day we see our students learn, grow and develop, in both big and small ways. The insights we have and the judgements we make can’t be replicated by any assessment tool, no matter how sophisticated. That’s why it’s so important to use a tool that values teacher observations and includes interviews for concepts that require teacher judgement.
The Early Insights F-4 assessment tool, for example, includes interviews for certain topics that benefit from teacher judgment. Interview questions are built into the tool, so you don’t have to create your own questions. During the interview, you are able to select whether the student has mastered or is yet to master the concepts observed. The system then adds this information to your data dashboard in real time, so you can see the full picture of where each student is at.