
Dr. Abhilasha Bhargav-Spantzel
Technology Talk
“From Firewalls to Fairness: Securing the Future with Threat Modeling and Ethics”
Abstract:
In an era where Artificial Intelligence (AI) is transforming industries and societies, the importance of robust security foundations and ethical considerations has never been more critical. This talk will explore the evolving landscape of threat modeling in the age of AI, emphasizing the need to build, and strengthen foundational security principles while integrating evolving ethical frameworks.
This session will explore advanced threat modeling techniques, emphasizing a holistic approach that integrates traditional security and privacy principles, particularly for emerging AI solutions such as agents, skills and plugins. Special focus will be given to Responsible AI (RAI) principles, including transparency, accountability, fairness, inclusiveness, reliability, and safety, which guide the development and deployment of AI technologies.
Concrete examples will demonstrate how AI impacts critical security vectors:
- Identity: Strong authentication methods such as Multi-Factor Authentication (MFA), Zero Trust principles, and AI-driven adaptive and inclusive authentication are essential to secure identity flows and authorization pathways.
- Data Flows: Protecting data through classification, encryption, secure APIs, and monitoring for exfiltration and misuse is crucial. Privacy-preserving techniques continue to show value as we face advanced attacks in the AI world such as XPIA (cross prompt injection attacks).
- Types of Threats: Understanding and mitigating AI-generated attacks, supply chain threats, and real-world incidents through proactive strategies such as attack surface reduction and AI-driven threat hunting.
- Recovery Mechanisms: Ensuring resilience through rapid detection, rollback strategies, and AI-powered anomaly detection, along with securing the supply chain and maintaining patch velocity.
These security considerations remain essential for designing resilient architectures with appropriate security hooks. Participants will gain practical strategies for implementing these principles in real-world scenarios, ensuring that AI systems are not only secure but also ethically sound. Grounded in security foundations and ethical AI, this talk aims to engage security researchers and professionals in collaborative discussions to address the complexities of AI-driven threats.
Together, we can explore how we can step up our threat modeling efforts, remember the basics, and prioritize responsible AI to build a secure and ethical digital future.
Bio:
Abhilasha Abhilasha Bhargav-Spantzel is a Partner Security Architect in the Office Product group focused on Identity and AI Copilot security and safety. Previously she was responsible for security architecture for Microsoft Security Response Center (MSRC). Prior to joining Microsoft she was at Intel for 14 years, focusing on hardware-based identity and security product architecture. She completed her doctorate from Purdue University, which focused on identity and privacy protection using cryptography and biometrics. Abhilasha drives thought leadership and the future evolution of cybersecurity platforms through innovation, architecture, and education. She has given numerous talks at conferences and universities as part of distinguished lecture series and workshops. She has written 5 book chapters and 30+ ACM and IEEE articles and has 40+ patents. Abhilasha leads multiple D&I and actively drives the retention and development of women in technology. She is passionate about STEM K-12 cybersecurity education initiatives, as well as co-organizes regular camps and workshops for the same.