My Story
Pranav Venkit is a final-year Ph.D. candidate in the Informatics program within the College of Information Sciences and Technology at Pennsylvania State University. He is a research assistant in the Human Language Technologies Lab, led by Dr. Shomir Wilson. Pranav's research examines language models from a sociotechnical perspective, through the lenses of trust and ethics, with a particular focus on identifying and addressing sociodemographic biases in NLP models and understanding their societal impacts. His interdisciplinary work spans the domains of Human-Computer Interaction (HCI), Social Informatics, and Privacy in NLP.
Pranav has authored multiple papers published in leading NLP and AI Ethics conferences such as ACL, EMNLP, and FAccT. His research has garnered recognition at major venues and has been featured in prominent media outlets, including Fast Company, AIHub, The Hill, and VentureBeat. You can find more of the media coverage of his research here.
During his internship at Salesforce AI Research, under the mentorship of Jason Wu and Philippe Laban, Pranav worked on improving Answer Engines and Retrieval-Augmented Generation (RAG) systems, focusing on evaluation methods and designing safer AI systems. He was also part of the Primed to (re)act project, led by Dr. Christopher Graziul, a collaboration with the University of Chicago investigating language and behavior in broadcast police communications, particularly in relation to minority populations. Pranav has actively collaborated with numerous researchers from both academia and industry to advance the field of Trustworthy Human Language Technology. Notable collaborations include his work with Dr. Aylin Caliskan at the University of Washington, exploring the cultural implications of harm in NLP models, and with Dr. Koustava Goswami of Adobe Research, examining ethical frameworks for hallucination in language models. His other collaborations include researchers from leading institutions and organizations such as Hugging Face, Meta, the Georgia Institute of Technology, and IBM Research.
Pranav has held several prominent leadership roles in the research community, including serving as an Area Chair for ACL 2025 and as a Program Committee member for FAccT 2024, FAccT 2023, and TrustNLP 2023. Beyond his organizational contributions, he has also been an active mentor, guiding graduate students through impactful research initiatives. Notably, he served as a Project Mentor for the NSF Careers Program, leading a project titled "Bias and Fairness in Machine Learning: Mitigating Bias in Unstructured Data." In this role, he helped students develop and implement models to identify and mitigate bias in language systems.
Before joining Penn State, Pranav worked at Honeywell Technology Services, contributing to automation, process bots, and application virtualization. He earned his B.Tech in Computer Science and Engineering from Amrita School of Engineering, where he undertook projects in full-stack development and conversational automation systems.
Check out his Scholar profile here!
Follow him on Twitter (now X) for more updates about his work and research!
March 2025: I was awarded the IST 2024 Travel Award for my presentation at EMNLP 2024 in Miami!
February 2025: Selected to be an Area Chair for ACL 2025 (ARR February cycle)!
February 2025: I have successfully defended my thesis! Officially Dr. Venkit now!
January 2025: I will be part of the Program Committee for FAccT 2025!
November 2024: It felt great being recognized as an 'Outstanding Reviewer' for EMNLP 2024! Looking forward to giving back to the research community further.
November 2024: My paper titled "An Audit on the Perspectives and Challenges of Hallucinations in NLP" was accepted and published at EMNLP 2024 in Miami, USA!
November 2024: My work on broadcast police communications was featured in Penn State News!
November 2024: My work on the analysis of broadcast police communications received the Diversity, Equity, and Inclusion (DEI) Recognition at CSCW 2024 for representing strong examples of work that focuses on or serves minorities and otherwise excluded individuals or populations, or that intervenes in systemic structures of inequality.
November 2024: My paper titled "Race and Privacy in Broadcast Police Communications" was accepted and published at CSCW 2024 in San Jose, Costa Rica.
October 2024: My work titled "Do Generative AI Models Output Harm while Representing Non-Western Cultures: Evidence from A Community-Centered Approach" was accepted and presented at the AIES 2024 conference in San Jose, California!
October 2024: Our work titled "CALM: A Multi-task Benchmark for Comprehensive Assessment of Language Model Bias" was accepted to the first-ever COLM conference in Philadelphia, Pennsylvania!
August 2024: Wrapped up my internship at Salesforce AI Research, where I worked on creating evaluation benchmarks for Answer Engines such as Perplexity AI and Bing Copilot!
You can find more in my Resume/CV!