Technical Program Manager II, Detection and Response, Core
at Google
Location
Boulder, CO, USA
Compensation
$138k–$198k USD
Type
Full-time
Posted
3 days ago
Job description
A problem isn’t truly solved until it’s solved for all. That’s why Googlers build products that help create opportunities for everyone, whether down the street or across the globe. As a Technical Program Manager at Google, you’ll use your technical expertise to lead complex, multi-disciplinary projects from start to finish. You’ll work with stakeholders to plan requirements, identify risks, manage project schedules, and communicate clearly with cross-functional partners across the company. You're equally comfortable explaining your team's analyses and recommendations to executives as you are discussing the technical tradeoffs in product development with engineers.
The mission of the security and privacy organization is to protect, respect, and defend our users, Googlers, and the internet. Users trust Google with large quantities of highly sensitive data and expect it to be protected from illicit access. Increasingly sophisticated actors attempt to threaten the security of this data and the privacy of our users. The Detection and Response team's mission is to understand these threats, detect them, and respond with equal vigor. We are a team of Technical Program Managers (TPgMs) supporting Detection and Response teams and their 24/7 security operations.
The Core team builds the technical foundation behind Google’s flagship products. We are owners and advocates for the underlying design elements, developer platforms, product components, and infrastructure at Google. These are the essential building blocks for excellent, safe, and coherent experiences for our users and drive the pace of innovation for every developer. We look across Google’s products to build central solutions, break down technical barriers and strengthen existing systems. As the Core team, we have a mandate and a unique opportunity to impact important technical decisions across the company.
The US base salary range for this full-time position is $138,000-$198,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.
Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.
Responsibilities
- Collaborate with team members and stakeholders to understand and identify defined work problems and program goals, prioritize deliverables, discuss program impact, and review key metrics.
- Identify, communicate, and collaborate with relevant stakeholders within one or more teams to drive impact and work toward mutual goals.
- Partner with engineering, research, and product to mitigate AI-specific risks (e.g., model exfiltration, data poisoning) across the AI lifecycle.
- Own the technical roadmap for agentic threats. Develop machine-speed response protocols and automate defenses against malicious agent behaviors, prompt injections, and jailbreaks.
- Drive the transformation of legacy incident response into scalable, automated workflows. Apply operational principles to ensure real-time, global collaboration during high-stakes security incidents.
Minimum qualifications:
- Bachelor's degree in a technical field, or equivalent practical experience.
- 2 years of experience in program management.
- Experience in cybersecurity, with a focus on intrusion detection or incident response.
- Experience in agentic threat detection.
Preferred qualifications:
- 2 years of experience managing cross-functional or cross-team projects.
- Experience with Python, R, or other scripting languages.
- Experience working with AI/ML technologies or programs, including an understanding of AI-specific security risks.
- Experience partnering with software/security teams.
- Experience in cyber threat intelligence.
The application window will be open until at least May 21, 2026. This posting may remain online longer or close earlier based on business needs.