Very Big Things
Dan Marino Foundation - ViTA Project
Employment / Nominee
For many people with developmental disabilities or autism, the job interview is the biggest obstacle to employment. This web-based virtual-reality interview training platform allows clients to practice interviewing as many times as they want to, from the comfort of their own homes.
- Very Big Things Team
Q: Talk about your initial prototypes. How did those ideas change throughout design and execution?
A: There were several archetypes we were solving for, not only those in the ASD community. We encouraged the creation of as many concepts as possible. We block-framed and wireframed quickly, before considering UI, colors, or typography, and created prototypes to validate the best ideas, which evolved as new data presented itself. We based the mood board on both the existing DMF standards and our newly created visuals. We treated the simulations separately, as they needed to emulate real-world interviews to desensitize social interactions. We were selective with colors, in-room objects, model mannerisms, and other elements to make the simulation feel distinct from the rest of the platform.

Q: What influenced your chosen technical approach, and how did it go beyond past methods?
A: The engineering and design were heavily influenced by making a product that worked for this specific market. For many people with developmental disabilities or autism, the job interview is the biggest obstacle to employment. This web-based virtual-reality interview training platform allows clients to practice interviewing as many times as they want to, from the comfort of their own homes. This not only builds their confidence in interviewing, but also builds key interpersonal communication skills, such as the ability to make eye contact and project expected mannerisms.

Q: What web technologies, tools, or resources did you use to develop this?
A: We accomplished this engineering breakthrough through cutting-edge graphics libraries, advanced engineering, and attention to the user experience. We used Three.js, a cross-browser JavaScript library, to create 3D computer graphics. We also used custom 3D frameworks for WebGL, creating skeletons for models inside of the web browser and animating them with lifelike movements such as blinks and looking around, as well as effects such as subsurface scattering, which mimics the natural appearance of skin – like light passing through a candle – and other light-based visuals, including soft shadows.
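As a rough illustration of two of the techniques mentioned above, the sketch below enables soft shadow mapping in a Three.js renderer and drives a periodic blink through a morph target on a rigged glTF character. This is a minimal sketch under assumptions, not the team's actual code: the 'interviewer.glb' file and the 'blink' morph target name are hypothetical, and subsurface scattering would require custom shader work beyond what is shown here.

```js
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

// Scene, camera, and a renderer with soft shadow mapping enabled.
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(45, innerWidth / innerHeight, 0.1, 100);
camera.position.set(0, 1.6, 2.5);

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(innerWidth, innerHeight);
renderer.shadowMap.enabled = true;
renderer.shadowMap.type = THREE.PCFSoftShadowMap; // soft shadows
document.body.appendChild(renderer.domElement);

// A key light that casts the shadow, plus ambient fill.
const key = new THREE.DirectionalLight(0xffffff, 1.2);
key.position.set(2, 4, 3);
key.castShadow = true;
scene.add(key, new THREE.AmbientLight(0xffffff, 0.4));

// Floor that receives the interviewer's shadow.
const floor = new THREE.Mesh(
  new THREE.PlaneGeometry(10, 10),
  new THREE.MeshStandardMaterial({ color: 0x888888 })
);
floor.rotation.x = -Math.PI / 2;
floor.receiveShadow = true;
scene.add(floor);

// 'interviewer.glb' and the 'blink' morph target are hypothetical names.
let mixer;
let head;
new GLTFLoader().load('interviewer.glb', (gltf) => {
  gltf.scene.traverse((obj) => {
    if (obj.isSkinnedMesh) {
      obj.castShadow = true;
      if (obj.morphTargetDictionary && 'blink' in obj.morphTargetDictionary) head = obj;
    }
  });
  scene.add(gltf.scene);
  mixer = new THREE.AnimationMixer(gltf.scene);
  gltf.animations.forEach((clip) => mixer.clipAction(clip).play());
});

// Render loop: advance animations and pulse a brief blink about once a second.
const clock = new THREE.Clock();
(function animate() {
  requestAnimationFrame(animate);
  const dt = clock.getDelta();
  if (mixer) mixer.update(dt);
  if (head) {
    const i = head.morphTargetDictionary.blink;
    // A high even power of a sine wave gives short spikes: mostly 0, briefly 1.
    head.morphTargetInfluences[i] = Math.sin(clock.elapsedTime * Math.PI) ** 32;
  }
  renderer.render(scene, camera);
})();
```

Q: What breakthrough or “a-ha” moment did you experience when concepting or executing this project?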
A: Our a-ha moment with this project came when we figured out how we were going to make the interactions with these avatars as realistic as possible. We began by creating a 3D model of the interviewer and the environment, including the mesh and textures, gesture animations, and lighting applications to soften both models’ looks. Four interviews were then created in the 3D environment; we added questions and synchronized the models’ voices and gestures. Finally, we developed the web service, integrating the 3D application inside the web application, creating user roles and management features, and building and testing interview recording and reviewing.
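The voice-and-gesture synchronization step could look something like the sketch below, which starts a question's audio and a gesture animation clip of the same name together and resolves when the spoken question ends. The askQuestion helper, the questions/ asset path, and the name-matching convention are all assumptions for illustration, not the team's actual implementation.

```js
import * as THREE from 'three';

// Hypothetical helper: play one interview question's audio while the avatar
// performs the gesture clip authored under the same name.
export function askQuestion(mixer, clips, questionId) {
  const audio = new Audio(`questions/${questionId}.mp3`); // assumed asset path

  // Look up the gesture animation that matches this question.
  const clip = THREE.AnimationClip.findByName(clips, questionId);
  const action = mixer.clipAction(clip);
  action.reset().setLoop(THREE.LoopOnce, 1);
  action.clampWhenFinished = true; // hold the final pose instead of snapping back

  // Start the gesture and the voice together so they stay in sync.
  action.play();
  const finished = new Promise((resolve) =>
    audio.addEventListener('ended', resolve, { once: true })
  );
  audio.play();
  return finished; // resolves when the spoken question ends
}
```

An interview then reduces to a sequence of awaited askQuestion calls, with the client's recorded response captured between questions.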