During my senior year, I took a class in Human Augmentics, where I had the opportunity to design a product that would “augment” or aid a human’s life with technology. I have always been fascinated by sign language interpreters; watching them keep pace with a lecturer in the classroom inspired me to think of ways to improve communication between students who are deaf or hard of hearing and their classmates. After researching the efficacy of speech translation tools in the classroom, I decided to develop a real-time speech-to-text transcriber that would let students communicate with classmates who are hard of hearing without having to learn another language.

My team and I combined a directional microphone and a head-mounted display with Python code built on the Google Cloud Speech API. By the end of the project, we had a working prototype that transcribed speech recorded in five-second intervals and displayed the resulting text on the head-mounted display. According to eye-tracking research I found on language tools for deaf and hard-of-hearing students in academic settings, this design would let students follow a lecture without shifting their gaze among the lecturer, the presentation material, and the ASL interpreter.
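The capture-and-display loop described above can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: the function names are hypothetical, and the call to the Google Cloud Speech API is stubbed out with a placeholder.

```python
# Sketch of a five-second capture-and-transcribe loop (hypothetical names;
# the real prototype used the Google Cloud Speech API for transcription,
# stubbed out here so the example is self-contained).

def chunk_samples(samples, sample_rate, seconds=5):
    """Split a stream of audio samples into fixed-length chunks."""
    size = sample_rate * seconds
    return [samples[i:i + size] for i in range(0, len(samples), size)]

def transcribe_chunk(chunk):
    # Placeholder standing in for the cloud speech-to-text call.
    return f"<transcript of {len(chunk)} samples>"

def run_display_loop(samples, sample_rate, display):
    """Transcribe each chunk and push the text to the head-mounted display."""
    for chunk in chunk_samples(samples, sample_rate):
        display(transcribe_chunk(chunk))

# Example: 12 seconds of silent audio at 16 kHz yields three chunks
# (5 s, 5 s, and a final 2 s remainder), so three lines reach the display.
fake_audio = [0] * (16_000 * 12)
lines = []
run_display_loop(fake_audio, 16_000, lines.append)
print(len(lines))  # -> 3
```

Chunking at a fixed interval keeps latency predictable: the wearer sees new text every few seconds rather than waiting for a full utterance to finish.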


This was an incredibly fun project, and it inspired me to pursue meaningful work that enhances learning opportunities for all.