My Profile Picture

My Story


I am a doctoral student in the IST department at Pennsylvania State University, pursuing my PhD in Informatics. I currently work as a research assistant in the Human Language Technologies Lab at Penn State, led by Dr. Shomir Wilson. My research focuses on identifying and mitigating sociodemographic biases in NLP models. I am part of the Primed to (re)act project, a collaboration with the University of Chicago that studies the language and behavior of broadcast police communications toward minority populations. Previously, through Project iOn, I worked on HCI and data visualization projects examining the effectiveness of computational notebooks in educational platforms. That project was headed by Dr. Patrick Dudas and Dr. Josephine Wee.


Prior to joining Pennsylvania State University, I worked at Honeywell Technology Services for two years, focusing on automation, process bots, and application virtualization. I completed my B.Tech in Computer Science Engineering at Amrita School of Engineering, where my projects focused on full-stack development and conversation-based automation systems.

Publications and News


Peer Reviewed Conference and Journal Papers

Venkit, P., Srinath, M., Gautam, S., Venkataraman, S., Gupta, M., Passonneau, R., Wilson, S. (2023). The Sentiment Problem: A Critical Survey towards Deconstructing Sentiment Analysis
[Outstanding Paper Award]
Conference on Empirical Methods in Natural Language Processing (EMNLP) 2023

Venkit, P., Gautam, S., Panchanadikar, R., Huang, K., Wilson, S. (2023). Unmasking Nationality Bias: A Study of Human Perception of Nationalities in AI-Generated Articles
6th AAAI/ACM Conference on Artificial Intelligence, Ethics and Society (AIES) 2023
Venkit, P., Srinath, M., Wilson, S. (2023). Automated Ableism: An Exploration of Explicit Disability Biases in Sentiment and Toxicity Analysis Models
[Best Paper Award]
TrustNLP 2023, at ACL 2023
Srinath, M., Sundareswara, S., Venkit, P., Giles, C., Wilson, S. (2023). Privacy Lost and Found: An Investigation at Scale of Web Privacy Policy Availability 
[Best Paper Award]
ACM Conference on Document Engineering (DocEng) 2023
Srinath, M., Matheson, L., Venkit, P., Zanfir-Fortuna, G., Schaub, F., Giles, L., Wilson, S. (2023). Privacy Now or Never: Large-Scale Extraction and Analysis of Dates in Privacy Policy Text
ACM Conference on Document Engineering (DocEng) 2023
Venkit, P. (2023). Towards a Holistic Approach: Understanding Sociodemographic Biases in NLP Models using an Interdisciplinary Lens
6th AAAI/ACM Conference on Artificial Intelligence, Ethics and Society (AIES) 2023
Venkit, P., Gautam, S., Panchanadikar, R., Huang, K., Wilson, S. (2023). Nationality Bias in Text Generation
17th Conference of the European Chapter of ACL (EACL) 2023
Venkit, P., Wilson, S. (2022). A Study of Implicit Language Model Bias Against People with Disabilities
International Conference on Computational Linguistics (COLING) 2022
____

Peer Reviewed Symposium Papers

Venkit, P., Graziul, C., Goodman, M., Kenny, S., Wilson, S. (2022). An Exploratory Analysis of Broadcast Police Communications in Chicago.
Social Thought Symposium - ST '22

Venkit, P., Dudas, P. (2021). A Jupyter Book Approach to Latent Dirichlet Allocation Understanding.
VIS 2021, VISxAI Workshop

Venkit, P., Chou, H., Tyagi, H., Dudas, P. (2021). Project iOn: Utilization of Computational Notebooks for Education.
TLT 2021 Symposium

Venkit, P., Dudas, P., Billah, S. (2020). Wayfinding and Navigation in Virtual Environment: Learning from Audio Games.  
Think Within 2020 Symposium - TW '20
____

arXiv Preprints

Gupta, V., Venkit, P., Wilson, S., Passonneau, R. (2023). CALM: A Multi-task Benchmark for Comprehensive Assessment of Language Model Bias
arXiv preprint, 2023
Gupta, V., Venkit, P., Wilson, S., Passonneau, R. (2023). Survey on Sociodemographic Bias in Natural Language Processing.
arXiv preprint, 2023
Venkit, P., Karishma, Z., Hsu, C., Katiki, R., Huang, K., Wilson, S., Dudas, P. (2021). A ‘Sourceful’ Twist: Emoji Prediction Based on Sentiment, Hashtags and Source.
arXiv preprint, 2021
____

Awards and Recognition

Outstanding Paper Award (2023): The Sentiment Problem: A Critical Survey towards Deconstructing Sentiment Analysis
Conference on Empirical Methods in Natural Language Processing, 2023

Best Paper Award (2023): Privacy Lost and Found: An Investigation at Scale of Web Privacy Policy Availability
ACM Conference on Document Engineering 2023, 28 August, 2023

Best Paper Award (2023): Automated Ableism: An Exploration of Explicit Disability Biases in Sentiment and Toxicity Analysis Models
TrustNLP 2023 (an ACL 2023 Workshop), 9 July, 2023

Research Showcase (2023): Nominated to present my research at the IST department Research Showcase
Pennsylvania State University IST Research Showcase, 27 Feb, 2023

Hackathon Award (2023): ChronoNews.ai, Nittany AI Challenge Semi-Finalist, 15 March, 2023
Hackathon Award (2020): BeyondTweets, HackPSU Second Place & Best SaaS Application Award
Hackathon Award (2019): Health Protocol System, First Place at the Honeywell Innovation Hackathon
____

News and Media Mentions

News Mention (2024): Fast Company. "How can we make AI less biased against disabled people",
Fast Company Tech, 11 March, 2024
News Mention (2023): Mary Fetzer. "Trained AI models exhibit learned disability bias, IST researchers say.",
Pennsylvania State University News, 30 November, 2023
News Mention (2023): Lucy Smith. "A Critical Survey Towards Deconstructing Sentiment Analysis: Interview with Pranav Venkit and Mukund Srinath.",
AIHub News, 2 November, 2023
News Mention (2023): Mary Fetzer. "Most websites do not publish privacy policies, researchers say.",
Pennsylvania State University News, 25 October, 2023
News Mention (2023): Francisco Tutella. "Positive triggering method reduces nationality bias in large text generators.",
Pennsylvania State University News, 26 April, 2023
Media Mention (2023): Dr. Roman Klinger. "Reports and Insights of EACL 2023.",
Dr. Roman Klinger Blog, 2023
News Mention (2022): Jessica Hallman. "AI language models show bias against people with disabilities, study finds.",
Pennsylvania State University News, 14 Oct, 2022
News Mention (2022): Gianna Melillo. "Common AI language models show bias against people with disabilities",
The Hill, 14 Oct, 2022
News Mention (2021): Kyle Wiggers. "How Bias Creeps into the AI Designed to Detect Toxicity.", VentureBeat, 9 Dec. 2021
News Mention (2021): Jessica Hallman. "Study of police language aims to find patterns that may lead to tragic outcomes.",
Pennsylvania State University News, 25 May, 2021
____

Experience

Penn State
Research Assistant
Aug 2019 — present
Salesforce AI
AI Research Intern
May 2024 — Aug 2024
Honeywell
Application Engineer
Jan 2017 — Jul 2019
L&T Tech
Application Intern
Dec 2015 — Jan 2016