Nahid Ghalaty

Recognized in the Category:

Additional Info

Nominee’s Name: Nahid Farhady Ghalaty
Nominee’s Job Title or Role: Principal Security Engineer
Company / Organization: Microsoft
Company size: 30,000 or more employees
Country: United States
World Region: North America
Website: https://www.linkedin.com/in/nahid-farady/

NOMINATION HIGHLIGHTS

My innovation advances privacy-aware synthetic data as a foundational capability for AI security and Responsible AI (RAI), addressing a critical gap in how organizations evaluate and protect modern AI systems. As AI becomes embedded in high-stakes workflows, risks increasingly arise from data exposure, model misuse, and insufficient safety testing. My work reframed synthetic data from a research artifact into a practical security control, enabling rigorous evaluation without exposing sensitive or regulated information.

The importance of this contribution lies in shifting security earlier in the AI lifecycle. Rather than relying solely on post-deployment monitoring, I developed methodologies to generate high-fidelity synthetic datasets that preserve complex statistical patterns and rare edge cases while enforcing measurable privacy guarantees through formal privacy budgets. This approach allows teams to safely simulate adversarial scenarios, evaluate robustness, and quantify risks such as over-blocking, bias, and potential information leakage — all without accessing production data.
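The "measurable privacy guarantees through formal privacy budgets" described above can be pictured as a running account of privacy loss that each data release must draw against. The sketch below is purely illustrative: the class name, the fixed epsilon values, and the use of basic sequential composition are assumptions for the example, not the nominee's actual implementation.

```python
# Minimal sketch of privacy-budget accounting for synthetic data
# generation. Basic sequential composition (epsilons simply add up)
# is assumed here for clarity; real systems often use tighter accountants.

class PrivacyBudget:
    """Tracks cumulative privacy loss against a fixed total epsilon."""

    def __init__(self, total_epsilon: float):
        self.total_epsilon = total_epsilon
        self.spent = 0.0

    def charge(self, epsilon: float) -> None:
        """Record the cost of one differentially private operation,
        refusing the operation if it would exceed the budget."""
        if self.spent + epsilon > self.total_epsilon:
            raise RuntimeError("Privacy budget exhausted: release blocked")
        self.spent += epsilon

    @property
    def remaining(self) -> float:
        return self.total_epsilon - self.spent


budget = PrivacyBudget(total_epsilon=1.0)
budget.charge(0.25)  # e.g., fitting noisy marginals for generation
budget.charge(0.5)   # e.g., releasing a synthetic benchmark slice
print(budget.remaining)  # 0.25 of the budget left for future releases
```

Making the budget an explicit object that can refuse an operation is what turns privacy accounting from a paper exercise into an engineering guardrail: risk stays measurable and over-spending is blocked mechanically rather than by convention.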

The innovation introduced three key advances. First, I operationalized privacy budgets as engineering guardrails, embedding privacy accounting directly into data generation and experimentation workflows so that risk is measurable and transparent. Second, I designed RAI-focused synthetic benchmarks tailored to specific failure modes, enabling precise and repeatable evaluation instead of ad-hoc testing. Third, I established security-driven evaluation loops that integrate synthetic data into development pipelines, aligning model iteration with enterprise risk management and governance practices.
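The second advance above, synthetic benchmarks tailored to specific failure modes, can be sketched as a small evaluation loop that scores a safety filter against labeled synthetic cases. Everything here is hypothetical (the toy filter, the case data, and the metric name); it only illustrates the pattern of measuring a failure mode such as over-blocking in a repeatable way.

```python
# Illustrative sketch of an RAI-style evaluation loop over synthetic
# benchmark cases. The toy keyword filter and the hand-written cases
# stand in for a real model and a real synthetic benchmark.

from dataclasses import dataclass


@dataclass
class SyntheticCase:
    text: str
    is_benign: bool  # ground-truth label baked into the synthetic case


def toy_filter(text: str) -> bool:
    """Stand-in safety filter: True means the content is blocked."""
    return "attack" in text.lower()


def over_blocking_rate(cases: list[SyntheticCase]) -> float:
    """Fraction of benign synthetic cases the filter wrongly blocks."""
    benign = [c for c in cases if c.is_benign]
    blocked = [c for c in benign if toy_filter(c.text)]
    return len(blocked) / len(benign) if benign else 0.0


cases = [
    SyntheticCase("Discussing attack surface reduction", True),
    SyntheticCase("Routine customer support request", True),
    SyntheticCase("Step-by-step attack instructions", False),
]
print(over_blocking_rate(cases))  # 0.5: one of two benign cases blocked
```

Because the cases are synthetic, the loop can probe sensitive or rare scenarios at scale without touching production data, and rerunning it after each model iteration gives the precise, repeatable evaluation the paragraph above contrasts with ad-hoc testing.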

The impact has been both technical and cultural. Technically, teams gained the ability to test sensitive scenarios at scale, accelerate experimentation, and improve model reliability without compromising confidentiality. Organizationally, the work created a repeatable blueprint for privacy-by-design, strengthening collaboration across engineering, security, and policy teams while increasing stakeholder trust in AI deployments.

By demonstrating that strong privacy guarantees, rigorous safety evaluation, and high model utility can coexist, this innovation elevates synthetic data from a convenience to a core pillar of AI security. It enables organizations to move from reactive risk mitigation to proactive assurance, ensuring AI systems are not only performant but also secure, trustworthy, and aligned with responsible innovation principles.

This combination of technical depth, measurable risk reduction, and scalable impact makes the contribution a meaningful advancement in the practice of AI security.