Founded in 2015, Shield AI is a venture-backed deep-tech company with the mission of protecting service members and civilians with intelligent systems. Its products include the V-BAT and X-BAT aircraft and the Hivemind Enterprise and Hivemind Vision product lines. With offices and facilities across the U.S., Europe, the Middle East, and the Asia-Pacific, Shield AI’s technology actively supports operations worldwide. For more information, visit www.shield.ai. Follow Shield AI on LinkedIn, X, Instagram, and YouTube.
Job Description:
Shield AI builds autonomy systems for defense applications, including air, maritime, and space platforms operating in complex and contested environments.
We are establishing a centralized AI and Data Platform organization responsible for the infrastructure that underpins autonomy development across Hivemind and other programs. This team owns the systems used to train models, run simulation, manage data, and deploy models to operational environments.
We are seeking a Principal Engineer who will scale an initial architecture into a platform that supports multiple autonomy programs.
Success in this role requires disciplined execution: delivering fast iteration for engineering teams while maintaining reliability, cost control, and architectural consistency as the system scales.
The Principal Engineer is accountable for ensuring engineers can move efficiently from idea to trained model to deployed capability, and that infrastructure decisions reflect the realities of the domain, including simulation-driven development, continuously evolving multi-modal sensor data, and deployment to constrained and reliability-critical systems.
This role spans the full lifecycle of autonomy development: training foundation models, running large-scale, multi-fidelity simulation, managing training data, evaluating models, and deploying optimized models to edge systems.
A key part of this role is defining how these capabilities extend beyond internal use. This includes establishing how Shield AI delivers AI infrastructure in customer environments across on-premise, cloud, hybrid, and sovereign or nationally constrained environments.
What you'll do:
Platform Ownership: Define and operate the core AI and data platform across training, simulation, data management, evaluation, and deployment.
Compute Strategy and Infrastructure: Own where and how workloads run across on-premise, cloud, and hybrid environments. Drive capacity planning, utilization, and cost-per-compute decisions, including support for classified and air-gapped systems.
Training and Simulation Systems: Build infrastructure for distributed training (supervised learning, RL/MARL, foundation models) and large-scale, multi-fidelity simulation. Ensure training and simulation systems operate together without bottlenecks.
Data Platform: Ingest and manage multi-modal sensor data (EO, IR, radar, EW, IMU). Establish dataset versioning, data lineage, feature storage, data cataloging, and classification-aware storage and access controls.
MLOps, Evaluation, and Model Lifecycle: Establish a consistent workflow for experiment tracking, model registry, artifact provenance, and automated validation. Implement evaluation and V&V gates so models meet defined standards before deployment.
Deployment and Operational Feedback: Own the pipeline from training to deployment, including model optimization (e.g., distillation, quantization, pruning), deployment to edge systems, monitoring, drift detection, and retraining triggers.
Customer AI Infrastructure: Define how AI infrastructure is deployed in customer environments across on-premise, cloud, hybrid, and sovereign settings. Establish a consistent approach that avoids one-off solutions while adapting to operational constraints.
Platform Standardization: Define common tools, interfaces, and workflows across teams. Reduce duplication while maintaining flexibility where needed.
Cross-Team Partnership: Work directly with Hivemind and other autonomy teams to ensure the platform supports real workloads and evolves with program needs.
Key Outcomes:
Faster iteration from idea to trained model to evaluated result
High utilization of compute resources with clear visibility into usage and cost
Simulation capacity that supports large-scale training without bottlenecks
Consistent end-to-end lifecycle: development, evaluation, deployment, monitoring, and retraining
Repeatable data loop: telemetry, scenario extraction, retraining, and redeployment
Reliable deployment of optimized models to edge systems
Broad platform adoption across autonomy programs
Repeatable approach for deploying AI infrastructure in customer environments
Representative performance targets:
Training iteration cycles measured in days, not weeks
Sustained high utilization of GPU resources under production workloads
Required qualifications:
Experience building and operating ML infrastructure at scale (100+ GPU clusters, distributed systems)
Experience defining compute strategy, including on-premise vs cloud tradeoffs, capacity planning, and cost management
Strong understanding of ML workloads, including foundation models, RL/MARL, simulation-based training, and fine-tuning
Experience building data platforms with dataset versioning, lineage, and cataloging
Hands-on ability to debug and resolve production system issues across the stack when needed
Preferred qualifications:
Experience in defense or classified environments (e.g., air-gapped systems, SCIFs)
Experience with simulation-heavy ML systems (robotics, autonomy, or similar domains)
Experience deploying and optimizing models for edge hardware
Familiarity with HPC systems (schedulers, parallel storage, high-speed networking)
Why Join Us:
You will define the infrastructure that supports the development and deployment of autonomy systems across Shield AI.
This role establishes the foundation for how models are trained, evaluated, and deployed, and directly impacts how quickly new capabilities are delivered into operational environments.
You will have ownership over systems and decisions that are often distributed across multiple teams at other organizations, with the opportunity to shape how AI infrastructure is built and used both internally and in customer environments.
Full-time regular employee offer package:
Pay within range listed + Bonus + Benefits + Equity
Temporary employee offer package:
Pay within range listed above + temporary benefits package (applicable after 60 days of employment)
Salary compensation is influenced by a wide array of factors including but not limited to skill set, level of experience, licenses and certifications, and specific work location. All offers are contingent on a cleared background check and a possible reference check. Military fellows and part-time employees are not eligible for benefits. Please speak to your talent acquisition representative for more information.
Shield AI is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, marital status, disability, gender identity or Veteran status. If you have a disability or special need that requires accommodation, please let us know.