The final stage of the ADDIE model measures the success of the project and the rollout of the course. Planning for evaluation at the analysis stage is important to ensure a successful outcome: what is the problem that needs to be solved? What is the overall goal of the learning? What will success look like? The answers to these questions should be threaded throughout the stages of ADDIE.
For this project, elements of all three types of evaluation will be included:
Goal-based evaluation - this focuses on how the learning experience addresses the business problem. Does this learning fill the gap in current practice in reading, interpreting, reporting and reviewing non-18FDG scans? Does it equip radiologists with the skills needed to read and report on PSMA PET-CT scans?
Process-based evaluation - each phase of ADDIE, from analysis to implementation, will be evaluated. Feedback from all stakeholders on Analysis, Design, Development and Implementation will be collected with a view to identifying what can be improved for future development projects.
Outcomes-based evaluation - by assessing learning objectives and looking at the application of skills on the job, the impact of the digital learning experience on the learner's knowledge, skills and behaviours can be evaluated.
Although there are other models of evaluation, for this project the Kirkpatrick Model will be used, incorporating its four levels:
Level 1 measures learner experience and engagement with the course; it is reviewed on an ongoing basis as learners move through the experience:
Learning analytics from the LearnDash LMS will provide data on:
Course enrolment numbers
Course completion data (What percentage of learners have completed? How long did the learning take? Were there parts of the course that took significantly longer than others? If they have not completed, where did they stop within the learning experience?)
Learner feedback survey
A feedback survey will be embedded into the learning platform and become available to the learner upon completion of the course
Level 2 is an integral part of the training experience and evaluates whether the course achieved the intended learning outcomes. It evaluates the transfer of learning to participants, measuring knowledge, skills and expertise through the end-of-course assessment; this is only effective if the assessment questions align closely with the course content and learning outcomes. Again, the LearnDash platform will provide the tools to do this, and this evaluation will occur on an ongoing basis as learners access and complete the course:
LMS reports on quiz completion, number of attempts to pass, and average scores (see the data sketch below)
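To illustrate how this Level 1 and Level 2 data could be worked with outside the LMS, the following is a minimal sketch in Python, assuming the quiz report can be exported as a CSV file. The file name and column names (user_id, quiz, score_pct, attempts, passed) are illustrative assumptions rather than actual LearnDash export fields.

```python
# A minimal sketch of summarising an exported quiz report outside the LMS.
# The file name and columns (user_id, quiz, score_pct, attempts, passed)
# are illustrative assumptions, not actual LearnDash export fields.
import pandas as pd

report = pd.read_csv("quiz_report_export.csv")  # hypothetical export

summary = report.groupby("quiz").agg(
    learners=("user_id", "nunique"),        # how many learners attempted
    average_score=("score_pct", "mean"),    # average quiz score (%)
    average_attempts=("attempts", "mean"),  # average attempts to pass
    pass_rate=("passed", "mean"),           # proportion of learners passing
)

print(summary.round(2))
```

A summary like this makes it easy to spot quizzes with unusually low average scores or high numbers of attempts.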
Level 3 evaluation will take place a few months after the rollout of the training course. Behaviour changes take a while to take effect within an organisation, so Level 3 evaluations are typically conducted 3-6 months after the training. This evaluation is designed to determine whether the newly acquired skills, knowledge or expertise are being used in the learner's everyday environment. To be most effective, the Level 3 approach will be a twice-yearly review of relevant metrics and performance review data. Evaluation will be in the form of:
Manager's survey
Performance reviews - is the number of radiologists reporting accurately on PSMA PET-CTs increasing?
Level 4 data measures how the training programme impacts and contributes to the success of the organisation as a whole. For this project, the leading indicators of business impact are:
Improved patient experiences and better clinical outcomes for patients.
An increase in the number of radiologists skilled and equipped to report on PSMA PET-CT.
An increase in the number of PSMA PET-CT scans reported.
Peter Morville's UX Honeycomb model provides a good framework for evaluating the user experience of the learning. When evaluating the PSMA course, each of the segments provides us with the following measurements:
The commentary from Part Two Screen Plan and Storyboard goes into specifics and answers the following:
Has the course been designed to minimise cognitive overload? (Chunking of content has been considered and content has been separated into lessons and modules)
Have Mayer's multimedia principles been addressed? (The course contains images/interactions/examples of text formatting to bring the design to life. Content has been thought through in line with Mayer's principles.)
This incorporates visceral, behavioural, and reflective design:
The course has been designed using Rise 360 templates to contain images and text layouts to visually appeal to the learner
Real case-based scenarios are used throughout the course to connect the learner with the material
The course is designed with Universal Design Principles in mind:
Multiple means of engagement through different learning activities and the extent to which the course is accessible for all learners (see UDL section)
The learning materials are easy to find, all interactions work and have been tested
Assessments are easy to find and cross-referenced with scan images that need to be accessed through the internal PACS system. They have been fully tested and instructions are clearly set out for learners
All content has been validated by leading experts in the field, any research papers that the content refers to have also been validated as peer-reviewed and credible
Instructional design principles have been fully adhered to:
Learning outcomes are clearly defined and articulated, content and assessments are aligned with those outcomes, and a discussion forum is included for learning reflections
Quality control processes are in place:
Content editing and QA/peer review ensure that images and text are referenced correctly, spelling and grammar are checked, and consistency standards are applied
The Digital Learning Institute's UCDL Honeycomb Model Checklist (see Appendix 1) is a tool to measure and evaluate against the segments of the honeycomb listed above. The team will use this tool to evaluate the course and put a plan in place for any improvements needed. It will be the job of the Learning Developer/ID to complete the form. As this course is asynchronous eLearning, some parts of the checklist relating to Communication and Connections and Building a Community are not fully applicable. The course does contain a discussion forum, and it is hoped that in future course development projects we can use more of the points on the checklist as a guide for building better communication, connections and community. As we look to include more social learning and networking opportunities within our course offerings in our LMS, our Marketing & Events Coordinator will be responsible for evaluating against these two areas of the checklist.
Universal Design for Learning (UDL) is a framework to improve and optimize teaching and learning for all people based on scientific insights into how humans learn (www.cast.org). Using the Digital Learning Institute's UDL checklist (see Appendix 2) will help to ensure the learning from the course is inclusive and transformative for everyone and provide insights into how the course and learning experience could be improved. Transforming the data from the UDL and UX evaluations into an action plan for improvements would be the next step in the process. Currently, different members of the team take on the role of Quality Assurance/Content Editor for any new course developments; it is their responsibility to check for aspects of all three areas of the UDL framework.
When evaluating the course, the following are some examples of what the course does well with regard to UDL:
Recruiting interest - real case-based scenarios are used throughout the course, allowing learners to relate the content to their own work and life experience. Various multimedia activities engage learners and allow them to interact with the content.
Sustaining effort and persistence - learners are encouraged to post to a discussion area where social learning can take place to foster collaboration and community.
Self-regulation - self-reflection is encouraged at the end of each module.
Perception - the LMS offers opportunities for the user to customise how they view the course. The course contains multiple ways of viewing visual information, e.g., infographics, tables, images and text.
Language - the course uses language and terminology that radiologists use on a daily basis in their role
Comprehension - the course content draws parallels to what learners do in their daily practice and the skillset they already have. Transfer is maximised through opportunities to review and practice learning through case-based quizzes.
Physical action - learners can navigate directly to lesson content, skipping the sidebar navigation, when tabbing through a course with a screen reader. Rise 360 also allows learners to use keyboard-accessible shortcuts for alternate navigation.
Expression and communication - the course design uses multimedia, offering multiple types of media throughout its construction.
Executive functions - The course takes a scaffolded approach to learning and has embedded prompts to "stop and think" before acting on assessment questions.
For the collection of learning analytics data, we will rely solely on the LMS to provide the following information (brought together in the sketch after this list):
Number of learners enrolled
Completion/non-completion data
Assessment pass/fail data
Average time spent on learning
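As a minimal sketch of how these four headline figures could be pulled together, the snippet below assumes a learner-progress report exported from the LMS as a CSV; the file name and columns (status, assessment_result, minutes_spent) are illustrative assumptions only, not actual LearnDash fields.

```python
# A minimal sketch of the four headline metrics listed above, computed from
# a learner-progress export. The file name and columns (status,
# assessment_result, minutes_spent) are illustrative assumptions only.
import pandas as pd

progress = pd.read_csv("course_progress_export.csv")  # hypothetical export

metrics = {
    "learners_enrolled": len(progress),
    "completion_rate": (progress["status"] == "completed").mean(),
    "assessment_pass_rate": (progress["assessment_result"] == "pass").mean(),
    "average_minutes_spent": progress["minutes_spent"].mean(),
}

for name, value in metrics.items():
    print(f"{name}: {round(value, 2)}")
```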
Data collected will help answer questions such as: are there areas of the course where learners are stopping and not moving forward? Are there stumbling blocks to course completion? Are learners successful in the assessments? How many attempts at the assessments are learners taking? Are there specific quiz questions that are causing problems for learners? For example:
Learners are stopping and not moving forward / stumbling blocks to completion:
Do we need to change the course expectations set out at the beginning of the learning? Are learners running out of time because we have anticipated the learning taking less time than it is in reality, and therefore need to adjust the time requirements for completing the course? Do we need to implement protected study time for our learners?
Is there a particular module where learners are stopping? If so, we need to look at the content and check that it is relevant and that the outcomes for this part of the learning are achievable (see the drop-off sketch after these examples).
Unsuccessful assessments:
If there is a high rate of failure, have we set the assessment at the appropriate level? Do the assessment questions refer back to the learning content and the learning outcomes?
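To show how the drop-off question above might be answered with data, the following is a minimal sketch assuming a per-lesson progress export with one row per learner per lesson; the columns (user_id, lesson_order, lesson_title, completed) are illustrative assumptions about what such an export might contain.

```python
# A minimal sketch for locating where learners stop, assuming a per-lesson
# progress export with one row per learner per lesson. The columns
# (user_id, lesson_order, lesson_title, completed) are illustrative
# assumptions about what such an export might contain.
import pandas as pd

lessons = pd.read_csv("lesson_progress_export.csv")  # hypothetical export

# Proportion of enrolled learners completing each lesson, in course order.
funnel = (
    lessons.groupby(["lesson_order", "lesson_title"])["completed"]
    .mean()
    .sort_index()
)
print(funnel.round(2))

# The largest fall between consecutive lessons is the first place to review
# content relevance, expectations and workload.
largest_drop = funnel.diff().idxmin()
print("Largest drop-off occurs at:", largest_drop)
```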
Accessing and analysing this data will help promote data-driven design and identify areas of improvement to the course through a cyclical, continuous process of monitoring, gaining insights and taking action.
Overall, I feel that a robust evaluation strategy has been put in place for this course development by integrating UX and UDL evaluation within the levels of the Kirkpatrick model. As Kirkpatrick is the evaluation model I'm most familiar with, I was interested to read and learn more about alternatives (LTEM and Kaufman's five levels of evaluation), but went with the tried and tested model that I know well and felt was the most appropriate for this project. Regarding Kirkpatrick, I have come to realise that a sound ADDIE process of course development allows the Kirkpatrick levels to be considered in reverse order from the outset of the project. In Part One of the project, what will be evaluated at Level 4 holds the stakeholders in the project (SMEs, IDs and so on) accountable to the overall organisational goals of the training. With this at the forefront, we can then consider Level 3 and ask what behaviours need to change and how we will design the learning to change them. Then, in Part Two, we consider Level 2 learning: what are the skills and knowledge the learners need, and how do we design assessments to measure them? After this, we can design and develop the course itself: what design principles should we use to relay the content, engage learners and give them the best learning experience possible?
I found the UX and UDL evaluation checklists provided by the Digital Learning Institute to be an excellent way of evaluating the learning experience design and the accessibility of the course. In the past, I have often relied on the fact that authoring tools have many of these considerations already built into their templates; however, by using the checklists I was able to highlight areas that needed improvement.
Learning analytics - our current LMS (Moodle) provides opportunities to analyse data and help inform data-driven learning design (DDLD); however, in our own case, the system has numerous unresolved bugs that make producing reports very messy. In terms of current DDLD, we rely on user evaluation feedback surveys to help inform any changes we may need to make to a course, but these surveys can only provide rudimentary information. I am excited to work with our developers to build bespoke reports into our LearnDash LMS to improve our data analytics and inform our design process further. The integration of AI within our new LMS will also aid this process and hopefully give us more meaningful insights.
CAST. (n.d.). The UDL Guidelines. https://udlguidelines.cast.org/
Digital Learning Institute. (2023). CDL Honeycomb Model Checklist. Retrieved 20 February 2024 from Module 10 of the Professional Diploma in Digital Learning Design.
Digital Learning Institute. (2023, September). UDL Checklist. Retrieved 20 February 2024 from Module 10 of the Professional Diploma in Digital Learning Design.
LearnDash. (n.d.). Professional tools for instructional designers. https://learndash.com
Peck, D. (2023, May 5). The Kirkpatrick Model of Training Evaluation (with Examples). https://www.devlinpeck.com/content/kirkpatrick-model-evaluation