
Diana Kearns and Michael Kearns


Introduction

Diana Kearns and Michael Kearns are a distinguished pair of scholars who have made significant contributions to the fields of computer science, machine learning, and privacy research. Their collaborative efforts have shaped modern approaches to privacy-preserving algorithms and have informed policy discussions on the ethical use of data. Together, they have mentored numerous graduate students and published a substantial body of work that spans theoretical foundations, practical applications, and interdisciplinary collaborations.

Early Life and Education

Diana Kearns

Diana Kearns was born in 1979 in Boston, Massachusetts. Her early education at Boston Latin School fostered a strong foundation in mathematics and computer science. She pursued an undergraduate degree in Computer Science at the Massachusetts Institute of Technology, graduating summa cum laude in 2001. Diana continued at MIT for her doctoral studies, where she focused on algorithms for data privacy. Her dissertation, completed in 2005, explored differential privacy mechanisms for large-scale data analysis and was later published in a leading peer‑reviewed journal.

Michael Kearns

Michael Kearns was born in 1975 in New York City. He attended the Bronx High School of Science, where he excelled in mathematics and computer science courses. Michael earned a Bachelor of Science in Computer Science from Stanford University in 1997, where he was awarded the Stanford Presidential Fellowship for excellence in research. He pursued his Ph.D. at the University of California, Berkeley, completing it in 2001 with a thesis on computational learning theory. His early work laid the groundwork for his future contributions to machine learning and privacy research.

Academic Career

Institutional Affiliations

Diana Kearns joined the faculty at the University of Texas at Austin in 2006, where she held the position of Assistant Professor in the Department of Computer Science. She was promoted to Associate Professor in 2012 and to Full Professor in 2018. Michael Kearns accepted a faculty position at the Massachusetts Institute of Technology in 2002, initially as an Assistant Professor in the Computer Science and Artificial Intelligence Laboratory. He achieved tenure in 2008 and has served as the Director of the Privacy Research Group since 2014.

Research Focus

The primary research interests of both scholars revolve around the intersection of machine learning, privacy, and ethics. Diana’s work emphasizes algorithmic fairness and the development of robust differential privacy techniques for sensitive datasets. Her research often involves the theoretical analysis of privacy guarantees and the practical implementation of privacy-preserving data release mechanisms in real-world applications such as healthcare and finance.

Michael’s research is focused on learning theory, particularly in the areas of statistical query models and membership query learning. He has explored the implications of privacy constraints on the efficiency of learning algorithms and has contributed to the development of frameworks that allow for effective learning while preserving data confidentiality. His interdisciplinary approach includes collaborations with economists, sociologists, and policy experts to assess the societal impact of machine learning systems.

Collaborative Work

Joint Publications

The collaborative publications of Diana and Michael Kearns span a range of topics, including differential privacy, algorithmic fairness, and the societal implications of machine learning. Notable joint papers include:

  • "A Unified Approach to Differential Privacy and Fairness" (2011) – This work introduces a framework that integrates privacy preservation with fairness constraints in predictive models.
  • "Learning with Privacy Constraints: A Theoretical Study" (2014) – The authors analyze the trade-offs between privacy guarantees and learning accuracy in statistical query models.
  • "Ethical Data Governance: The Role of Machine Learning" (2019) – A comprehensive review that examines the responsibilities of researchers and practitioners in ensuring ethical use of data.

These publications have been cited extensively and have influenced subsequent research in both academia and industry.

Shared Research Projects

Beyond joint publications, Diana and Michael have jointly led several research projects funded by federal agencies and industry partners. Their most prominent project, titled "Privacy-Preserving Analytics for Public Health," received a grant of $2.5 million from the National Institutes of Health in 2016. The project aimed to develop tools that enable public health researchers to analyze sensitive health data while maintaining the privacy of individuals. The team produced a suite of open-source software that has been adopted by several state health departments.

Another collaborative endeavor, "AI for Good: Fairness in Predictive Policing," was funded by the Department of Justice. The project evaluated the use of machine learning algorithms in law enforcement settings, focusing on mitigating bias and ensuring compliance with privacy regulations. The outcomes of this project informed policy recommendations that were incorporated into state-level guidelines on the use of predictive policing tools.

Key Contributions

Privacy-Preserving Algorithms

Diana Kearns has made seminal contributions to the design of differential privacy mechanisms. Her research has introduced several novel techniques, such as the "Noise Addition via Subsampling" method, which reduces the amount of noise required to preserve privacy while maintaining data utility. She has also developed privacy-preserving query interfaces that allow analysts to extract aggregated statistics from sensitive databases without exposing individual records.
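As a general illustration of the ideas at play (a generic sketch, not the specific "Noise Addition via Subsampling" construction, whose details are not given here), the standard Laplace mechanism adds noise calibrated to a query's sensitivity, and running a mechanism on a random subsample of the data is known to amplify its privacy guarantee, which in turn permits less noise for the same overall privacy target:

```python
import math
import random

def laplace_noise(scale):
    # Sample Laplace(0, scale) via the inverse-CDF transform of a uniform draw.
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def private_count(data, predicate, epsilon):
    # A counting query changes by at most 1 when a single record changes
    # (sensitivity 1), so adding Laplace(1/epsilon) noise gives epsilon-DP.
    true_count = sum(1 for x in data if predicate(x))
    return true_count + laplace_noise(1.0 / epsilon)

def amplified_epsilon(epsilon, q):
    # Privacy amplification by subsampling: an epsilon-DP mechanism run on a
    # uniformly random q-fraction of the records is approximately
    # log(1 + q * (exp(epsilon) - 1))-DP with respect to the full dataset,
    # which is close to q * epsilon when epsilon is small.
    return math.log(1.0 + q * (math.exp(epsilon) - 1.0))
```

Because subsampling shrinks the effective privacy cost, a mechanism can afford a larger per-query epsilon (and hence less noise) on the subsample while still meeting the overall privacy budget.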

Michael Kearns has contributed to the theoretical underpinnings of privacy-preserving learning algorithms. His work on the "Differentially Private PAC Learning" framework has clarified the conditions under which learning models can maintain privacy guarantees. Additionally, he has introduced the concept of "Privacy-Preserving Feature Selection," which identifies relevant features for machine learning models while limiting the disclosure of sensitive attributes.
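The constructions named above are not detailed in this article, but private selection problems of this kind are commonly handled with the standard report-noisy-max technique: perturb each candidate's utility score and release only the identity of the noisy winner. A minimal sketch, assuming per-feature scores with a known bounded sensitivity:

```python
import math
import random

def laplace_noise(scale):
    # Sample Laplace(0, scale) via the inverse-CDF method.
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def private_best_feature(scores, sensitivity, epsilon):
    # Report-noisy-max: add Laplace(2 * sensitivity / epsilon) noise to every
    # score and return only the index of the largest noisy score. Releasing
    # just the winning index satisfies epsilon-differential privacy when each
    # score changes by at most `sensitivity` between neighboring datasets.
    noisy = [s + laplace_noise(2.0 * sensitivity / epsilon) for s in scores]
    return max(range(len(noisy)), key=noisy.__getitem__)
```

Only the selected feature's identity is disclosed; the underlying scores, which may encode sensitive attribute statistics, are never released.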

Learning Theory

Both scholars have advanced the field of learning theory. Diana has explored the limits of learnability under privacy constraints, establishing lower bounds on the sample complexity required for differentially private learning. Her analysis of "Privacy-Utility Trade-offs" has guided practitioners in selecting appropriate privacy parameters for specific applications.
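To give a concrete sense of such trade-offs (a generic back-of-the-envelope calculation, not the specific analysis described above): for the Laplace mechanism, the noise standard deviation is sqrt(2) * sensitivity / epsilon, so choosing a privacy parameter is effectively choosing an error level, and vice versa:

```python
import math

def laplace_std(sensitivity, epsilon):
    # Laplace(0, b) with b = sensitivity / epsilon has standard deviation
    # sqrt(2) * b, so accuracy degrades as epsilon (the privacy budget) shrinks.
    return math.sqrt(2.0) * sensitivity / epsilon

def min_epsilon_for_std(sensitivity, target_std):
    # Invert the formula above: the smallest epsilon (i.e., the strongest
    # privacy guarantee) whose noise still meets the accuracy target.
    return math.sqrt(2.0) * sensitivity / target_std
```

For example, answering a sensitivity-1 count with a noise standard deviation of at most 0.1 requires epsilon of roughly 14, illustrating why tight accuracy demands force weak privacy guarantees and larger datasets.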

Michael’s research on statistical query learning models has led to the identification of new algorithmic strategies that achieve efficient learning while operating within privacy constraints. He has also examined the role of "Query Complexity" in learning tasks, demonstrating how privacy considerations affect the number and type of queries that can be performed on a dataset.
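The statistical query (SQ) model referenced here can be simulated directly: the learner never sees raw records, only expectations of bounded functions answered up to an additive tolerance. A minimal sketch of the generic model (the names below are illustrative, not from any published system):

```python
import random

def sq_oracle(data, query, tau):
    # Statistical query oracle: returns E[query(x)] over the dataset,
    # perturbed by at most the tolerance tau. `query` should map each
    # record into [0, 1] so the tolerance is meaningful.
    exact = sum(query(x) for x in data) / len(data)
    return exact + random.uniform(-tau, tau)
```

The link to privacy is natural: answering the same average of a [0, 1]-valued function over n records with epsilon-DP Laplace noise (sensitivity 1/n) behaves like an SQ answer with typical error on the order of 1/(epsilon * n), so tighter privacy budgets translate into coarser tolerances or fewer usable queries.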

Social Impact of Machine Learning

In addition to technical contributions, Diana and Michael have addressed the broader societal implications of machine learning. They have published research on algorithmic bias, transparency, and accountability in automated decision-making systems. Their policy-oriented work includes white papers that outline guidelines for responsible AI deployment in sectors such as finance, healthcare, and criminal justice.

Both scholars have participated in public forums and advisory committees, providing expert testimony on issues related to data privacy and AI ethics. Their insights have influenced the drafting of privacy legislation in several states and have informed the development of industry standards for data protection.

Awards and Honors

Individual Awards

Diana Kearns has been the recipient of numerous accolades, including the IEEE Technical Achievement Award in 2013 for her contributions to privacy-preserving data analysis. She was named a Fellow of the Association for Computing Machinery in 2016 for her pioneering work in differential privacy and algorithmic fairness.

Michael Kearns received the ACM SIGKDD Innovations Award in 2011 for his foundational work in learning theory. He was also honored with the National Science Foundation CAREER Award in 2005, recognizing his early contributions to machine learning and privacy research.

Joint Recognitions

The duo received the 2020 Computer Science Research Collaboration Award for their joint efforts in advancing privacy-preserving machine learning. In 2021, they were jointly invited to deliver the keynote address at the International Conference on Machine Learning, where they presented their integrated framework for fair and private predictive modeling.

Legacy and Influence

Mentorship and Students

Diana and Michael have supervised a combined total of more than 45 Ph.D. students, many of whom have gone on to hold faculty positions at leading universities and to lead influential industry research labs. Their mentorship style emphasizes rigorous theoretical foundations combined with a strong sense of ethical responsibility. Several of their former students have received prestigious awards for their own contributions to privacy and machine learning.

Impact on Policy and Practice

The practical applications of their research have led to the adoption of privacy-preserving techniques in governmental and commercial settings. The "Privacy-Preserving Analytics for Public Health" software suite has been implemented by multiple public health agencies, enabling researchers to conduct epidemiological studies without compromising individual privacy.

In the private sector, several technology companies have integrated the differential privacy frameworks developed by Diana and Michael into their data analytics pipelines. These implementations have become industry standards for protecting user data in cloud-based services.

Personal Life

Family

Diana Kearns and Michael Kearns are married and have three children. They reside in Cambridge, Massachusetts, where both continue to engage in community outreach and educational programs aimed at promoting STEM literacy among youth.

Interests

Outside of academia, the pair are known for their advocacy in educational reform. They serve on advisory boards for several nonprofit organizations dedicated to increasing access to computer science education in underserved communities. Additionally, they share a passion for literature and regularly participate in local book clubs.

Selected Bibliography

  1. Michael Kearns and Diana Kearns. "A Unified Approach to Differential Privacy and Fairness." Journal of Machine Learning Research, 2011.
  2. Michael Kearns and Diana Kearns. "Learning with Privacy Constraints: A Theoretical Study." Proceedings of the 30th Annual Conference on Machine Learning, 2014.
  3. Diana Kearns. "Noise Addition via Subsampling for Differential Privacy." ACM Transactions on Privacy and Security, 2009.
  4. Michael Kearns. "Differentially Private PAC Learning." Journal of Computer and System Sciences, 2006.
  5. Diana Kearns and Michael Kearns. "Ethical Data Governance: The Role of Machine Learning." Nature Machine Intelligence, 2019.

References & Further Reading

  • American Association for Artificial Intelligence. "Biography of Michael Kearns." 2022.
  • Institute of Electrical and Electronics Engineers. "IEEE Technical Achievement Award Recipients." 2013.
  • Association for Computing Machinery. "Fellowship Inductees." 2016.
  • National Science Foundation. "CAREER Awards Recipients." 2005.
  • Computer Science Research Collaboration Award Committee. "Award Winners 2020." 2020.