Abbhinav (123 of AI) and co-host Debayan (Microsoft) are joined by AI researcher Kritika Prakash, now pursuing her Ph.D. at the University of Chicago.
The discussion revolves around Kritika’s academic journey before and at IIIT-Hyderabad, differential privacy in AI, and her thoughts on the need for regulations to ensure a machine learning model's trustworthiness. We discuss the privacy issues in machine learning, including the challenges of protecting textual data and the impact of large language models on a user’s privacy.
“I did have exposure... where I can appreciate... lot of South Indian cultures.”
“I found electronics too hard... shifted to Computer science.”
“Repeating an extra year: It felt like I am not really losing time... just learning more.”
“Talking to him [father]... understanding his perspective on things... He urged me... go for this risky thing.”
“I got to do computer science, but for my first 3 semesters I was just so bored.”
“Discovered that there is this element of strategy & games that you can work with.”
“Explored various research areas before delving into differential privacy.”
“You might not have anything to hide. But, you do have something to protect”
“Adding noise during training [doesn’t just] reduce your accuracy; it actually helps improve generalization.”
“A smaller epsilon value indicates tighter or stronger [differential] privacy.”
“Real-world applications include healthcare research... where preserving individual [data] privacy is crucial.”
“Differential privacy is not going to cover all kinds of cases; we are looking at [it] because this is the worst-case guarantee.”
“The text itself doesn't have clear distinctions between people's data and the internet’s data; it's a huge mess just because that's how the text domain is.”
“Privacy will always be important no matter where the machine learning field is headed.”
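The epsilon intuition in the quotes above can be illustrated with a minimal sketch of the classic Laplace mechanism for a counting query. This is not from the episode; the function name and values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(true_value, sensitivity, epsilon, rng=rng):
    """Release true_value with Laplace noise of scale sensitivity/epsilon.

    A counting query has sensitivity 1: adding or removing one person
    changes the count by at most 1.
    """
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Smaller epsilon -> larger noise -> stronger privacy, less accuracy.
count = 42  # hypothetical true count
for eps in (0.1, 1.0, 10.0):
    noisy = laplace_mechanism(count, sensitivity=1.0, epsilon=eps)
    print(f"epsilon={eps}: noisy count = {noisy:.2f}")
```

The trade-off Kritika describes is visible directly in the scale term: halving epsilon doubles the expected noise, so the released count is more private but less accurate.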
123 of AI’s official website: https://www.123ofai.com/contact
To get insights into job hiring: Behind the scenes of the recruitment market - Raj Patel of Reczee
For interview preparation and hands-on practice: QnA Lab
123 of AI’s CDEL ignites curiosity with gamified AI learning: real-world projects by Week 4, QnA Lab support, and adaptive, concept-driven mastery.
Discover why Curiosity Driven Effective Learning (CDEL) is revolutionising machine learning upskilling to foster deeper understanding and long-term retention, essential for mastering ML and excelling in technical interviews.
Ever wondered why your AI model makes certain predictions? From LIME and SHAP to Grad-CAM and Attention Maps, this guide demystifies model explainability—helping you uncover both local and global insights for transparent, trustworthy AI. 🚀