Research (Systems) Engineer
MicroTECH Global Ltd
Posted 2 days ago
- Midlothian, Scotland
- Full Time, Permanent
- £99,000 - £100,000 per annum
Job Description:
We are seeking Systems Research Engineers with a strong interest in computer systems, distributed AI infrastructure, and performance optimization. These roles are ideal for recent PhD graduates or exceptional BSc/MSc engineers looking to build research-driven engineering experience in areas such as operating systems, distributed systems, AI model serving, and machine learning infrastructure. You will work closely with senior architects on real-world projects, helping to prototype and optimize next-generation AI infrastructure.
Required Qualifications and Skills:
· Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, or related field.
· Strong knowledge of distributed systems, operating systems, machine learning systems architecture, inference serving, and AI infrastructure.
· Hands-on experience with LLM serving frameworks (e.g., vLLM, Ray Serve, TensorRT-LLM, TGI) and distributed KV cache optimization.
· Proficiency in C/C++, with additional experience in Python for research prototyping.
· Solid grounding in systems research methodology, distributed algorithms, and profiling tools.
· Team-oriented mindset with effective technical communication skills.
Desired Qualifications and Experience:
· PhD in systems, distributed computing, or large-scale AI infrastructure.
· Publications in top-tier systems or ML conferences (NSDI, OSDI, EuroSys, SoCC, MLSys, NeurIPS, ICML, ICLR).
· Understanding of load balancing, state management, fault tolerance, and resource scheduling in large-scale AI inference clusters.
· Prior experience designing, deploying, and profiling high-performance cloud or AI infrastructure systems.
Job number 3387286