Industries that use virtual reality to train personnel can gain big returns for a small additional investment in certification testing.
VR has proved an effective tool to improve overall outcomes in many settings. The role of certification is to assess individual competency.
Individuals trained in VR then use the same VR tools to demonstrate how well they have mastered the skills their work requires.
We developed the VR testing process shown here through our work on VR programs covering construction hazard identification, crane operation, scissor lifts, and boom lifts.
If you’re interested in exploring VR-based certification for your training program, please complete the form on this page.
Building an Accredited VR Test
In February 2022 the Construction Hazards Identification exam, developed by ITI (Industrial Training International), became the first VR exam to be awarded accreditation by the American National Standards Institute (ANSI) National Accreditation Board. It was accredited under ISO/IEC 17024, the standard for personnel certification bodies.
In an article, “Certification in the Metaverse,” Wallace Judd, Ph.D., describes the challenges that had to be overcome to make sure the VR exam contributed to a reliable, valid, and fair certification.
Validating VR Testing
Some of our work in VR testing has involved validating the results of VR tests by comparing them to the results of hands-on performance tests. Read more here, and watch part of a webinar on the findings here.
Our VR Testing Process
Define Target Performance
What do you want to certify? How can you establish that the certification works? What is the return on investment?
Review Course Elements
Select elements that match the target performance. Elements include functions, user interface components, obstacles, actors, and relationships.
Select Variable Elements
Define viable variants of each element: change avatar attributes, adjust gauge settings, vary vehicle speeds, and alter the state of environmental objects that can take on different attributes.
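Variant selection like the above can be automated so that each generated test form draws a different combination of element states. A minimal Python sketch, with hypothetical element names and variant pools chosen purely for illustration:

```python
import random

# Hypothetical variant pools for a few element types. Each test form
# draws one variant per element, so generated forms differ while staying
# within the defined set of viable variants.
VARIANTS = {
    "avatar": [{"hard_hat": True}, {"hard_hat": False}],
    "gauge": [{"psi": 80}, {"psi": 120}, {"psi": 150}],
    "vehicle": [{"speed_mph": 5}, {"speed_mph": 10}],
}

def generate_form(seed=None):
    """Build one test form by picking a variant for each element.

    A seed makes the form reproducible, which is useful for auditing
    which variant combination a candidate actually saw.
    """
    rng = random.Random(seed)
    return {element: rng.choice(pool) for element, pool in VARIANTS.items()}
```

A fixed seed per administration lets the certification body reconstruct exactly which variants appeared on any candidate's form.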
Define Scoring for Elements
Quantify each element's difficulty and criticality. Scoring should reflect the speed of response, the value of a correct solution, and the importance of getting it right.
Include irrelevant, unscored elements. Not every component presented should be operated on.
Define the minimum acceptable performance in terms of the aggregate score for all elements, with deductions for distractors that were acted upon. The cutscore may also include an aggregate time limit as well as timing criteria for individual elements.
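The scoring rule described above can be sketched in a few lines of Python. The field names, weights, and penalty scheme here are illustrative assumptions, not ITI's actual scoring model:

```python
from dataclasses import dataclass

@dataclass
class ElementResult:
    correct: bool        # did the candidate handle the element correctly?
    weight: float        # criticality / importance of getting it right
    response_time: float # seconds the candidate took on this element
    time_limit: float    # per-element timing criterion

def score_attempt(results, distractor_penalty, cutscore, total_time_limit):
    """Aggregate weighted element scores, deduct for distractors acted
    upon, and apply the cutscore plus aggregate and per-element timing
    criteria. (Illustrative sketch only.)"""
    total = 0.0
    elapsed = 0.0
    for r in results:
        elapsed += r.response_time
        # Credit is earned only when the element is handled correctly
        # within its timing criterion.
        if r.correct and r.response_time <= r.time_limit:
            total += r.weight
    total -= distractor_penalty  # deduction for unscored elements acted upon
    passed = total >= cutscore and elapsed <= total_time_limit
    return total, passed
```

Note that a correct response that exceeds its per-element time limit earns no credit, which is one way to fold the "speed of response" criterion into the cutscore.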
Conduct Beta Test
Measure test-retest reliability. Compute psychometric measures of item efficiency. Revise unreliable components. Parameterize trial items for possible inclusion in later test versions.
To what extent does the VR test predict target behavior?
Verify score reporting and pass/fail calculations.