Senior Applied Scientist
United States, Washington, Redmond
Overview

Security is among the most critical priorities for our customers in a world awash in digital threats, regulatory scrutiny, and estate complexity. Microsoft Security aspires to make the world a safer place for all. We want to reshape security and empower every user, customer, and developer with a security cloud that protects them with end-to-end, simplified solutions. The Microsoft Security organization accelerates Microsoft's mission and bold ambitions to ensure that our company and industry are securing digital technology platforms, devices, and clouds in our customers' heterogeneous environments, as well as ensuring the security of our own internal estate. Our culture is centered on embracing a growth mindset, a theme of inspiring excellence, and encouraging teams and leaders to bring their best each day. In doing so, we create life-changing innovations that impact billions of lives around the world.

The Microsoft Security AI Research team develops advanced AI-driven security solutions to protect Microsoft and its customers. Our team combines expertise in large-scale AI, knowledge graphs, and generative models to address evolving security challenges across Microsoft's complex digital environment. Defending this environment provides a unique opportunity to build and evaluate autonomous defense and offense through emerging generative AI capabilities. By leveraging rich security telemetry and operational insights from Microsoft's Threat Intelligence Center and Red Team, you will have access to a one-of-a-kind environment for innovation at scale.

As a Senior Applied Scientist, you will focus on applying advanced graph algorithms and large language models (LLMs) to automate and enhance red-teaming operations. Deep expertise in both graph theory/graph machine learning and large language models is essential for this role.
You will be responsible for designing and building AI systems that combine knowledge graphs and LLMs for adversarial simulation, attack path discovery, and threat modeling in a production environment. While cybersecurity experience is preferred, it is not required.

Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees, we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.
Responsibilities

- Research, design, and develop advanced graph-based and LLM-powered AI systems to automate red-teaming and adversarial simulation.
- Build and maintain large-scale knowledge graphs, and leverage LLMs for representing, reasoning about, and simulating attack paths, threat relationships, and mitigation strategies within Microsoft's cloud and enterprise environments.
- Apply state-of-the-art graph algorithms, graph neural networks, and LLM techniques to real-world security data.
- Collaborate with security researchers, applied scientists, and engineers to design autonomous agents and multi-agent frameworks for security testing and incident response.
- Integrate data and insights from Microsoft's Threat Intelligence Center, Red Team, and security telemetry to inform graph and LLM modeling and simulation.
- Contribute to research prototypes and their operationalization in production systems, with a focus on scalability and robustness.
- Although this is an individual contributor (IC) role, the Senior Applied Scientist is expected to provide technical leadership, mentor and support staff on technical aspects, and foster a collaborative, team-oriented environment.
- Develop and deploy state-of-the-art graph AI models to enhance red-teaming automation.
- Embody our culture and values.