Nahid Ghalaty
Additional Info
| Field | Value |
| --- | --- |
| Nominee’s Name | Nahid Farhady Ghalaty |
| Nominee’s Job Title or Role | Principal Security Engineer |
| Company / Organization | Microsoft |
| Company size | 30,000 or more employees |
| Country | United States |
| World Region | North America |
| Website | https://www.linkedin.com/in/nahid-farady/ |
NOMINATION HIGHLIGHTS
My innovation advances privacy-aware synthetic data as a foundational capability for AI security and Responsible AI (RAI), addressing a critical gap in how organizations evaluate and protect modern AI systems. As AI becomes embedded in high-stakes workflows, risks increasingly arise from data exposure, model misuse, and insufficient safety testing. My work reframed synthetic data from a research artifact into a practical security control, enabling rigorous evaluation without exposing sensitive or regulated information.
The importance of this contribution lies in shifting security earlier in the AI lifecycle. Rather than relying solely on post-deployment monitoring, I developed methodologies to generate high-fidelity synthetic datasets that preserve complex statistical patterns and rare edge cases while enforcing measurable privacy guarantees through formal privacy budgets. This approach allows teams to safely simulate adversarial scenarios, evaluate robustness, and quantify risks such as over-blocking, bias, and potential information leakage — all without accessing production data.
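The nomination does not name the mechanism behind these "formal privacy budgets," but differential privacy is the standard formalism for measurable privacy guarantees. The sketch below (all function names and data are hypothetical, not the nominee's actual tooling) illustrates the general idea: release DP-noised category counts, then sample synthetic records from them.

```python
import random
from collections import Counter

def dp_noisy_counts(records, epsilon):
    """Release per-category counts with Laplace noise.

    A single count query has sensitivity 1 (one individual changes one
    count by at most 1), so Laplace noise with scale 1/epsilon makes
    the release epsilon-differentially private.
    """
    noisy = {}
    for category, count in Counter(records).items():
        # Laplace(0, 1/epsilon) sampled as a random-signed Exponential(epsilon).
        noise = random.choice((-1, 1)) * random.expovariate(epsilon)
        noisy[category] = max(0.0, count + noise)  # clip impossible negatives
    return noisy

def synthesize(noisy_counts, n):
    """Draw n synthetic records from the noisy marginal distribution."""
    categories = list(noisy_counts)
    weights = [noisy_counts[c] for c in categories]
    return random.choices(categories, weights=weights, k=n)

# Usage: evaluate filters or models against the synthetic column,
# never against the real, sensitive one.
real_labels = ["benign"] * 900 + ["malicious"] * 100  # stand-in for sensitive data
synthetic_labels = synthesize(dp_noisy_counts(real_labels, epsilon=1.0), n=1000)
```

Because synthesis only post-processes the already-noised counts, it consumes no privacy budget beyond the epsilon spent on the release.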
The innovation introduced three key advances. First, I operationalized privacy budgets as engineering guardrails, embedding privacy accounting directly into data generation and experimentation workflows so that risk is measurable and transparent. Second, I designed RAI-focused synthetic benchmarks tailored to specific failure modes, enabling precise and repeatable evaluation instead of ad-hoc testing. Third, I established security-driven evaluation loops that integrate synthetic data into development pipelines, aligning model iteration with enterprise risk management and governance practices.
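The "privacy accounting" embedded into workflows can be pictured as a small accountant object that enforces a total epsilon budget under basic sequential composition, refusing any query that would overspend. This is a hypothetical sketch of the pattern, not the actual internal tooling:

```python
class PrivacyBudget:
    """Minimal epsilon-budget accountant (basic sequential composition).

    Each differentially private mechanism run "spends" some epsilon;
    the accountant rejects any request that would exceed the total,
    turning the privacy guarantee into an enforceable guardrail.
    """

    def __init__(self, total_epsilon):
        self.total = total_epsilon
        self.spent = 0.0

    def spend(self, epsilon):
        if self.spent + epsilon > self.total:
            raise RuntimeError(
                f"Privacy budget exhausted: {self.spent:.2f} spent, "
                f"{epsilon:.2f} requested, {self.total:.2f} total"
            )
        self.spent += epsilon

    @property
    def remaining(self):
        return self.total - self.spent

# Usage: every data-generation step must pass through the accountant.
budget = PrivacyBudget(total_epsilon=2.0)
budget.spend(0.5)  # e.g. release noisy marginals
budget.spend(1.0)  # e.g. train a DP synthesizer
print(budget.remaining)  # 0.5
```

In practice production accountants use tighter composition theorems than simple addition, but the guardrail structure, spend-then-check before every release, is the same.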
The impact has been both technical and organizational. Technically, teams gained the ability to test sensitive scenarios at scale, accelerate experimentation, and improve model reliability without compromising confidentiality. Organizationally, the work created a repeatable blueprint for privacy-by-design, strengthening collaboration across engineering, security, and policy teams while increasing stakeholder trust in AI deployments.
By demonstrating that strong privacy guarantees, rigorous safety evaluation, and high model utility can coexist, this innovation elevates synthetic data from a convenience to a core pillar of AI security. It enables organizations to move from reactive risk mitigation to proactive assurance, ensuring AI systems are not only performant but also secure, trustworthy, and aligned with responsible innovation principles.
This combination of technical depth, measurable risk reduction, and scalable impact makes the contribution a meaningful advancement in the practice of AI security.
Community Choice Award
Vote for This Nominee
Share this page on social media to cast your vote. Each completed social post counts as one vote for this nomination.
Voting closes July 18, 2026 — winners announced ahead of Black Hat USA
What is the Community Choice Award?
The Community Choice Award is a separate recognition decided entirely by public votes — not by the judging panel. Every nominee is eligible for both.
