Indian Sign Language Interpreter


The Indian Sign Language synthesis project aims to convert “what is spoken” in public presentations into “sign language animations”, similar to what a human sign language interpreter would produce. The full pipeline comprises: automatic speech recognition (using APIs), sentence simplification, sentence translation (English to ISL), and gesture and expression generation on avatars. While sentence simplification and English-to-ISL translation are text problems, we are exploring vision-based approaches to the graphics problem of gesture and expression generation.
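The four stages above can be sketched as a simple function pipeline. This is only an illustrative outline of the data flow; every function name and return value below is a hypothetical placeholder, not the project's actual API.

```python
# Illustrative sketch of the speech-to-sign pipeline.
# All names and outputs are placeholders, not the project's real implementation.

def recognize_speech(audio_path: str) -> str:
    """Stage 1: automatic speech recognition (the project uses external ASR APIs)."""
    # Placeholder: a real implementation would call an ASR service here.
    return "the professor explained the theorem which was difficult"

def simplify(sentence: str) -> str:
    """Stage 2: sentence simplification, to make translation easier."""
    return sentence  # placeholder: no-op

def translate_to_isl_gloss(sentence: str) -> list[str]:
    """Stage 3: English-to-ISL translation, producing a sign gloss sequence.
    ISL grammar differs from English, so word order would change in practice."""
    return sentence.upper().split()  # placeholder gloss sequence

def animate(glosses: list[str]) -> str:
    """Stage 4: gesture and expression generation on an avatar."""
    return f"avatar animation for {len(glosses)} signs"

def speech_to_sign(audio_path: str) -> str:
    """End-to-end pipeline: audio in, avatar animation out."""
    return animate(translate_to_isl_gloss(simplify(recognize_speech(audio_path))))
```

Each stage can be developed and evaluated independently, since the interfaces between them are plain text (a transcript, a simplified sentence, a gloss sequence).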

Expected Social Impact:

Currently, for the estimated 10 to 20 lakh hearing-impaired people in our country, the number of ISL interpreters is in the low hundreds. Spoken content, be it lectures, meetings, or speeches, is therefore inaccessible to them. We expect our solution to help level the playing field by making such content accessible to this section of the population, ensuring easier access to education and social interaction.

Team Members:

Prof. Dinesh Babu J (PI),

Shyam Krishna (Full time MS student),

2 animation interns, 1 Research Associate,

2 project students (MTech)