Oscar Wang

UX + Creative Technology + Accessibility

Hi, I am Oscar Wang,

a user experience designer specializing in using creative technology along with physical devices to find design opportunities for underserved populations.


Type

VR, Accessible Design, Hand Gesture

Date

2024.06

Participant

Personal Project

Hand Voice

Project Introduction

Inspired by Duolingo, I wondered whether there could be a way to engage people with disabilities and those around them through play, and to help anyone interested in sign language learn it quickly.

What is ASL?

American Sign Language (ASL) is a complete, natural language that serves as the predominant sign language of Deaf communities in the United States and most of Anglophone Canada. It uses hand shapes, facial expressions, and body movements to convey meaning.

Why Use VR?

Learning a language is extremely difficult. Pairing words with imagery helps users anchor them in memory and learn faster. Learning in VR not only deepens memory but also makes the process more interactive and more fun.


Through its cameras, a VR headset can also track the user's hands and accurately determine whether a gesture is formed correctly.

How Does Hand Voice Work?

In each learning session, every sign language word is tied to an object in the scene: the object appears when the word is spelled out, and the user can interact with it. This helps users remember items and their associated gestures while playing.
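To make that mechanic concrete, here is a minimal Unity C# sketch of a word-to-object mapping. This is my own illustration, not the project's actual code; the class and member names (WordObjectLibrary, SpawnObjectFor) are hypothetical.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical sketch: maps each sign-language word to a prefab that
// appears in the scene once the user finishes spelling that word.
public class WordObjectLibrary : MonoBehaviour
{
    [System.Serializable]
    public struct WordEntry
    {
        public string word;       // e.g. "apple"
        public GameObject prefab; // object shown when the word is spelled
    }

    [SerializeField] private List<WordEntry> entries = new List<WordEntry>();
    private Dictionary<string, GameObject> lookup;

    private void Awake()
    {
        // Build a fast lookup from the Inspector-configured list.
        lookup = new Dictionary<string, GameObject>();
        foreach (var e in entries)
            lookup[e.word.ToLowerInvariant()] = e.prefab;
    }

    // Called by the spelling logic when a word is completed.
    public void SpawnObjectFor(string word, Vector3 position)
    {
        if (lookup.TryGetValue(word.ToLowerInvariant(), out var prefab))
            Instantiate(prefab, position, Quaternion.identity);
    }
}
```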

Design Outcome

Each time you learn a sentence, there are three stages.


Stage 1 asks the user to successfully spell a word.


Stage 2 asks the user to spell a series of related words.


Stage 3 asks the user to spell out an entire sentence.


The scene guides the user through each spelling, and the headset's cameras can accurately determine whether a gesture is formed correctly.


Each sign word is associated with an object within the scene that the user can interact with.

Stage 1

The user is asked to spell the word “Grocery Store”.

Stage 2

The user is asked to spell a series of foods: “Apple, Bread, Egg”.

Stage 3

The user is asked to spell a full sentence: “Go to grocery store to buy apple, bread and egg”.
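As an illustration, the three stages could be authored as a small lesson asset like the sketch below. This is an assumption about how such data might be structured in Unity, not the shipped implementation.

```csharp
using UnityEngine;

// Hypothetical lesson asset covering the three stages.
// A ScriptableObject lets each lesson be authored in the Unity editor.
[CreateAssetMenu(menuName = "HandVoice/Lesson")]
public class Lesson : ScriptableObject
{
    [Tooltip("Stage 1: a single word, e.g. \"grocery store\"")]
    public string singleWord;

    [Tooltip("Stage 2: a series of related words, e.g. apple, bread, egg")]
    public string[] relatedWords;

    [Tooltip("Stage 3: the full sentence built from the words above")]
    public string sentence;
}
```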

Design Process

As a first step, I used the Meta Quest Interaction SDK to author a single hand gesture in the scene and connect that gesture to an object: whenever the gesture was successfully detected, the item appeared.
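The glue code for that step could look like the sketch below, assuming the SDK's pose-detection component exposes a "pose recognized" UnityEvent that can be wired to OnPoseDetected in the Inspector; the script and its names are illustrative, not the project's code.

```csharp
using UnityEngine;

// Hypothetical glue script: wire the Interaction SDK's
// "pose recognized" event to OnPoseDetected in the Inspector.
// When the authored hand pose is recognized, the linked item appears.
public class GestureSpawnsObject : MonoBehaviour
{
    [SerializeField] private GameObject item;       // object tied to this gesture
    [SerializeField] private Transform spawnPoint;  // where it should appear

    public void OnPoseDetected()
    {
        // Show the item the first time the pose is made.
        if (!item.activeSelf)
        {
            item.transform.position = spawnPoint.position;
            item.SetActive(true);
        }
    }
}
```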

Next, I tried tracking both hands at the same time to form a complete two-handed gesture. I placed two squares in the scene; whenever a hand made the correct gesture, its square changed color to show that hand had done the right thing, and the scene changed accordingly.
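A minimal sketch of that two-square feedback, assuming each hand's pose detector fires its own correct/incorrect events; every name here is my own illustration.

```csharp
using UnityEngine;

// Hypothetical sketch of the two-square feedback: each hand's pose
// detector calls SetLeftCorrect / SetRightCorrect; the matching square
// turns green, and when both hands are correct the scene reacts.
public class TwoHandGestureFeedback : MonoBehaviour
{
    [SerializeField] private Renderer leftSquare;
    [SerializeField] private Renderer rightSquare;
    [SerializeField] private GameObject sceneReaction; // e.g. the object to reveal

    private bool leftCorrect, rightCorrect;

    public void SetLeftCorrect(bool correct)  { leftCorrect = correct;  Refresh(); }
    public void SetRightCorrect(bool correct) { rightCorrect = correct; Refresh(); }

    private void Refresh()
    {
        leftSquare.material.color  = leftCorrect  ? Color.green : Color.gray;
        rightSquare.material.color = rightCorrect ? Color.green : Color.gray;

        // Only when both hands form the gesture does the scene change.
        if (leftCorrect && rightCorrect)
            sceneReaction.SetActive(true);
    }
}
```

Each square gives immediate per-hand feedback, which is what the colored squares in the prototype conveyed.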


To help users observe and learn each motion, I also placed a demonstration hand in the scene so they could watch the gesture again.

During user testing, however, I found that neither the indicator squares nor the large demonstration hand worked very well.

I then studied ASL carefully and chose a simple gesture, "apple", as the first word to learn. I built it in Unity and recorded a video of the hand performing it. I also tried displaying the left and right hands in different colors, hoping users could read the demonstration hands more easily. In user testing the colored hands made little difference, but the recorded hand video was unanimously praised.

Next, I associated the word with an actual object: every time the word "apple" was spelled out, an apple dropped into the scene.
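Spelling a word letter by letter amounts to checking a sequence of recognized letter poses. The sketch below is one way that check could work; the class and method names (FingerspellingChecker, OnLetterSigned) are hypothetical.

```csharp
using UnityEngine;

// Hypothetical fingerspelling checker: each recognized letter pose
// calls OnLetterSigned(letter). When the letters spell the target
// word in order, the associated object drops into the scene.
public class FingerspellingChecker : MonoBehaviour
{
    [SerializeField] private string targetWord = "apple";
    [SerializeField] private GameObject rewardPrefab;  // e.g. an apple with a Rigidbody
    [SerializeField] private Transform dropPoint;      // spawn point above the scene

    private int progress; // index of the next expected letter

    public void OnLetterSigned(char letter)
    {
        if (char.ToLowerInvariant(letter) == targetWord[progress])
        {
            progress++;
            if (progress == targetWord.Length)
            {
                // Word complete: let physics drop the apple into the scene.
                Instantiate(rewardPrefab, dropPoint.position, Quaternion.identity);
                progress = 0;
            }
        }
        else
        {
            // Wrong letter: restart the word.
            progress = 0;
        }
    }
}
```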

Finally, I designed a system in Unity that guides the user through the gestures they should make.
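One plausible sketch of that guidance, building on the recorded hand videos above, is to play the demonstration clip for the current word and advance when the spelling checker reports success; this is an assumption about the flow, not the exact system I shipped.

```csharp
using UnityEngine;
using UnityEngine.Video;

// Hypothetical guidance flow: for each word in the lesson, loop the
// recorded demonstration hand video, then move on once the spelling
// logic reports that the word was signed correctly.
public class GestureGuide : MonoBehaviour
{
    [SerializeField] private VideoPlayer demoPlayer; // shows the recorded hand
    [SerializeField] private VideoClip[] wordClips;  // one demo clip per word

    private int current;

    private void Start() => ShowCurrentDemo();

    private void ShowCurrentDemo()
    {
        demoPlayer.clip = wordClips[current];
        demoPlayer.isLooping = true;
        demoPlayer.Play();
    }

    // Called by the spelling checker when the current word is done.
    public void OnWordCompleted()
    {
        if (current < wordClips.Length - 1)
        {
            current++;
            ShowCurrentDemo();
        }
        else
        {
            demoPlayer.Stop(); // lesson finished
        }
    }
}
```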


I then chose a series of gestures, recorded them, built the Unity scene, and finished the prototype.


Full Process Video



© 2025 Oscar Wang