TotSpot AR
Industry
Augmented Reality
Date
January 2025
The TotSpot AR game concept was developed in collaboration with a partner during my time at the WKU XR Lab. We began by exploring the idea of an augmented reality video game targeted at young children. As the UX Researcher on this project, I was responsible for conducting background research on how the game could be educational, understanding how users would interact with it, and creating the research poster.
Abstract
The TotSpot AR Game project aims to develop an interactive augmented reality (AR) game that helps toddlers build cognitive skills through object identification. By projecting visuals onto a car window, the game overlays colorful borders on objects passing outside and encourages players to touch the window to identify them. A simple interface of audio and visual cues suits a young audience and provides an engaging learning experience during car trips. We plan to model a custom container for all of TotSpot’s physical hardware in 3D modeling software; the container will be designed to mount to the back of a car seat, allowing guardians to set up the TotSpot in their car easily. By highlighting real objects through AR, TotSpot adds a layer of immersion to the identification game. This project strives to turn the often monotonous environment of a car into something fun and interactive for children, connecting them to the outside world while also being educational.
Motivation
“Object recognition is the process by which humans organize the visual world into meaningful perceptual units” (Ayzenberg & Behrmann, 2024). Gaining object recognition skills is an important developmental milestone, and TotSpot AR aims to help children reach it through a fun, immersive experience. TotSpot AR will not only help young children recognize objects; it will also teach them how to spell. The game will use auditory learning to help young children learn to spell the names of the objects they see outside the car window.
Approach
We are using the Unity game engine to develop the TotSpot software, along with an object detection model to identify objects entering the camera’s view and an object tracking model to follow them between frames. To integrate these models with Unity, we use the OpenCV for Unity plugin, which provides useful functions along with examples of how to use them in Unity projects. The plugin not only helps jumpstart model integration; it also exposes objects and functions from the OpenCV library itself, such as drawing rectangles and converting Unity textures into processable images. All the physical components used to run the TotSpot game will be housed in a custom 3D-printed container, which will first be designed in 3D modeling software, likely Blender or Maya.
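To make the detect-and-highlight loop described above concrete, the sketch below uses OpenCV’s standard Python API rather than the C# OpenCV for Unity plugin the project actually uses. It only illustrates the same operations: reading a camera frame as a processable image, locating an object, and drawing a colored border and label over it. The detect_objects stub and its hard-coded bounding box are hypothetical placeholders for whatever detection model is plugged in, and in Unity the frame would first have to be converted from a texture into an OpenCV image.

import cv2

def detect_objects(frame):
    # Hypothetical detector stub: return a list of (label, (x, y, w, h)).
    # In the real pipeline this would call the object detection model.
    return [("tree", (120, 80, 200, 260))]

def highlight_frame(frame):
    # Draw a colorful border and a label above each detected object.
    for label, (x, y, w, h) in detect_objects(frame):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 255), 4)
        cv2.putText(frame, label, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 255), 2)
    return frame

if __name__ == "__main__":
    capture = cv2.VideoCapture(0)   # camera facing out the car window
    while True:
        ok, frame = capture.read()  # frame arrives as a NumPy image here;
        if not ok:                  # a Unity texture would need conversion
            break
        cv2.imshow("TotSpot preview", highlight_frame(frame))
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    capture.release()
    cv2.destroyAllWindows()

In the actual game, the highlighted frame would be projected back onto the window rather than shown on a screen, and a tracking model would keep each border attached to its object between detection passes.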
Future Process
Once the TotSpot game reaches the alpha stage of development, with touch input and audio implemented, we will conduct closed user testing focused on the software’s efficacy. A second round of user testing will be conducted once all of the physical hardware, including the Raspberry Pi, mini projector, camera, speaker, and 3D-printed box, has been assembled and can run the TotSpot game. This phase of testing will focus on the hardware’s efficacy within the car environment, including whether the physical TotSpot device is suitable and safe for the user demographic. The resulting product will be a compact device that gives toddlers a fun AR experience while also teaching them the important developmental skills of object recognition and spelling.
References
Ayzenberg, V., & Behrmann, M. (2024). Development of visual object recognition. Nature Reviews Psychology, 3(2), 73–90. https://doi.org/10.1038/s44159-023-00266-w






