




**Summary**

We are seeking a Middle DevOps Engineer to strengthen Kubernetes and Linux reliability for AI and research compute platforms, with a focus on GPU workload orchestration and automation.

**Highlights**

1. Strengthen Kubernetes and Linux reliability for AI and research compute
2. Improve GPU workload orchestration with Kubernetes and Volcano
3. Automate operations with Python and UNIX Shell

We are adding a Middle DevOps Engineer to strengthen Kubernetes and Linux reliability for AI and research compute platforms. You will improve GPU workload orchestration with Kubernetes and Volcano, manage scheduling and quotas, and automate operations with Python and UNIX Shell while working directly with clients. Apply to help teams run scalable GPU compute smoothly.

**Responsibilities**

* Maintain GPU-enabled Kubernetes clusters and standalone Linux compute environments to sustain efficient scheduling and strong performance
* Set up and troubleshoot Volcano job scheduling, including queue configuration, Pod execution, GPU allocation, and namespace quota enforcement
* Oversee Kubernetes administration across the stack, including namespaces, RBAC, resource quotas, and workload isolation approaches
* Develop and maintain Python and Shell automation to simplify job submission, resource provisioning, and system reporting
* Partner with orchestration, optimization, and observability teams to improve scheduling efficiency, capacity utilization, and researcher workflows
* Monitor platform health and resource usage, delivering data and feedback to meet optimization and reporting needs
* Identify and recommend enhancements to infrastructure, tooling, and automation workflows to improve performance, scalability, and usability
* Keep day-to-day operations running smoothly for researchers with diverse AI and computational workloads

**Requirements**

* 2+ years of hands-on experience in DevOps or infrastructure engineering roles supporting complex, large-scale environments
* Expert-level knowledge of Kubernetes administration and orchestration, including namespaces, Pod scheduling/distribution, PVCs, NFS, and resource quota management
* Practical experience with the Volcano scheduler for GPU job execution, queue configuration, workload prioritization, and Kubernetes integration
* Proven background managing GPU cluster environments in Kubernetes and on standalone Linux compute nodes
* Advanced Python scripting skills for infrastructure automation, plus proficiency with UNIX Shell scripting (e.g., Bash)
* Strong Linux system administration skills, including troubleshooting, performance tuning, and configuration management
* Solid understanding of infrastructure automation and orchestration concepts and related tooling
* Fluent English communication skills (spoken and written) for direct client interaction

**Nice to have**

* Helm for Kubernetes application package management
* Monitoring and observability tooling, especially Prometheus, Grafana, and Loki
* Infrastructure as Code tools such as Terraform
* Multi-cloud Kubernetes exposure, including Amazon EKS and Google GKE
* Azure networking knowledge, including VPN, ExpressRoute, and network security
* Familiarity with AI-assisted coding tools (e.g., GitHub Copilot, ChatGPT, Claude)
* Experience with hybrid (cloud + on-premises) scheduling and resource optimization
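As context for candidates new to Volcano: the queue-based GPU job submission this role involves can be sketched as a small Python helper that builds a `batch.volcano.sh/v1alpha1` Job manifest. All specifics here (the queue name `research-gpu`, the image, the GPU count, and the helper itself) are illustrative assumptions, not details of this position.

```python
# Illustrative sketch of a Volcano GPU Job manifest builder.
# Queue name, image, and GPU count are hypothetical values.

def volcano_job_manifest(name: str, queue: str, image: str, gpus: int) -> dict:
    """Return a batch.volcano.sh/v1alpha1 Job manifest as a plain dict.

    The dict can be serialized to YAML for `kubectl apply -f -`
    or submitted through a Kubernetes API client.
    """
    return {
        "apiVersion": "batch.volcano.sh/v1alpha1",
        "kind": "Job",
        "metadata": {"name": name},
        "spec": {
            "schedulerName": "volcano",  # hand Pods to the Volcano scheduler
            "queue": queue,              # queue enforces sharing and quotas
            "minAvailable": 1,           # gang-scheduling threshold
            "tasks": [
                {
                    "replicas": 1,
                    "name": "worker",
                    "template": {
                        "spec": {
                            "restartPolicy": "Never",
                            "containers": [
                                {
                                    "name": "worker",
                                    "image": image,
                                    "resources": {
                                        # request GPUs via the NVIDIA device plugin
                                        "limits": {"nvidia.com/gpu": gpus}
                                    },
                                }
                            ],
                        }
                    },
                }
            ],
        },
    }


job = volcano_job_manifest("train-demo", "research-gpu", "pytorch:latest", 2)
```

In day-to-day work this kind of helper would sit behind the job-submission automation mentioned in the responsibilities, with the queue field mapping workloads onto per-team quotas.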


