My professional work and research focus on building scalable machine learning models that are robust against domain and category shifts with minimal-to-no extra label information. To that end, I primarily work with deep domain adaptation; unsupervised, self-supervised, adversarial, and disentangled representation learning; and learnable data augmentation techniques, with practical text, audio, and video applications. I’m interested in discovering the optimal transferability of representations between domains, tasks, and modalities, and in solving real-world ML problems with these ideas.
At Amazon, my team develops the end-to-end neural machine translation pipeline that powers Amazon's next-generation customer service experience. There, as an Applied Scientist, I improved the robustness of the NMT models under noisy, out-of-domain inputs using some of the ideas above. In 2021, I also interned with the Audio and Acoustics Research Group at Microsoft Research, where I developed novel deep neural architectures to estimate the performance of various types of deep noise suppression models. I received my Ph.D. in Information Systems from the University of Maryland, Baltimore County under the supervision of Dr. Nirmalya Roy in the Mobile, Pervasive, and Sensor Computing (MPSC) Lab.
Before returning to graduate school, I spent around 8 years in industry building distributed, scalable back-ends that served millions of users, and later assembling & leading the teams that built them. Between 2009 and 2013, I was also an active contributor to a few open-source NLP/ML projects through the Google Summer of Code program, first as a participant and later in mentoring roles.