SoDa Symposium for Privacy Week: Two Presentations with Q&A

Research Talks/Events

Date/Time: Tuesday, January 28, 2025 12:00 pm - 1:00 pm

Location: Virtual (Zoom); times are Eastern (EST)


UMD students, faculty, staff, alumni, and friends—join us for the next SoDa Symposium!

Presentation 1:
People, Language, Large Language Models, and Privacy: A Technical Problem or a Fundamental Puzzle?
Presented by Prof. Ivan Habernal

In this talk, I will most likely ask more questions than give answers. We know that people have a fundamental right to privacy, and we know that differential privacy gives us formal guarantees to protect the privacy of people in a database and to do machine learning. But humans also communicate through language: we use written text to do natural language processing, and we train large language models on the entire internet. So how does this work with privacy? From a technical perspective, it looks like a matter of finding faster techniques or better models; here I will talk about rewriting texts and generating synthetic data under differential privacy. But from a privacy perspective, things quickly get messy when we start asking fundamental questions, and we may be faced with a conundrum: what is privacy in language in the first place?
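For readers unfamiliar with the formal guarantee the abstract refers to, the classic Laplace mechanism is the simplest example of differential privacy in action. The sketch below (an illustration only; the talk itself concerns more elaborate mechanisms for rewriting text and generating synthetic data) shows how calibrated noise masks any single individual's contribution to a released statistic:

```python
import math
import random

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value with epsilon-differential privacy.

    Noise is drawn from Laplace(0, sensitivity / epsilon): the smaller
    epsilon (stronger privacy), the larger the added noise.
    """
    scale = sensitivity / epsilon
    # Sample Laplace noise via the inverse CDF of a Uniform(-0.5, 0.5) draw.
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

# Example: a count query over a database. Adding or removing one person
# changes a count by at most 1, so the sensitivity is 1.
true_count = 42
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
```

With a moderate epsilon the reported count is typically within a few units of the true one, yet no single person's presence in the database can be confidently inferred from the output; the open question the talk raises is what the analogous guarantee should even mean for free-form language.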

Presentation 2:
From Dialogue to Data: How Statisticians Can Safeguard Privacy and Promote Trust in LLMs
Presented by Prof. Rochelle E. Tractenberg

Generative AI and large language models (LLMs) are unleashing unprecedented capabilities in data processing, but they also raise serious questions about protecting the individuals and organizations whose data fuel these models. As LLMs blur the boundaries between data, dialogue, and personal expression, statisticians play a pivotal role in shaping responsible AI practices. This talk explores how ethical frameworks and principles from statistics, computing, and mathematics can help safeguard privacy in language-based systems and cultivate deeper public trust, ensuring that privacy is more than a buzzword. Rather than prescribing a one-size-fits-all formula, we will reflect on a key question: how can we balance transparency, accountability, and intellectual freedom in an environment where AI may inadvertently expose sensitive information? Drawing on real-world examples, we will highlight the potential for statisticians to steer LLM development in ways that honor human autonomy, encourage fairness, and uphold the spirit of confidentiality, ultimately fostering more trustworthy AI. By combining methodological rigor with ethical deliberation, statisticians have a unique role to play in defending privacy norms, cultivating transparency, and embedding responsible AI practices across research and industry.

Additional Information:
Please contact infoevents@umd.edu at least one week prior to the event to request disability accommodations. In all situations, a good faith effort (up until the time of the event) will be made to provide accommodations.

Speaker(s): Ivan Habernal, Head of Trustworthy Human Language Technologies Group; Rochelle E. Tractenberg, Director, Collaborative for Research on Outcomes and Metrics

Register