MLIR Compiler Backend Developer

d-Matrix · South Bay

Company

d-Matrix

Location

South Bay

Type

Full Time

Job Description

THE ROLE: COMPILER BACKEND

About us

If you are following the evolution of deep-learning-powered AI, the renaissance in NLP, and the next disruption in computer vision, you likely know it is all about Transformer-based models. They power neural nets with billions to trillions of parameters, and existing silicon architectures (including the plethora of AI accelerators) are struggling to varying degrees to keep up with exploding model sizes and their performance requirements. More importantly, TCO considerations for running these models at scale are becoming a bottleneck to meeting exploding demand. Hyperscalers are keen to gain COGS efficiencies on the trillions of AI inferences per day they already serve, and even more so to address the steep demand ramp they anticipate over the next couple of years.

d-Matrix is addressing this problem head-on by developing a fully digital in-memory computing accelerator for AI inference, highly optimized for the computational patterns in Transformers. The fully digital approach avoids some of the difficulties of the analog techniques touted in most other in-memory computing AI inference products. d-Matrix's AI inference accelerator has also been architected as a chiplet, enabling both scale-up and scale-out solutions with flexible packaging options.

The d-Matrix team has a stellar track record of developing and commercializing silicon at scale as senior executives at the likes of Inphi, Broadcom, and Intel. Notably, they recognized early the critical role of programmability and the software stack, and have been thoughtfully building the team in this area since before their Series A. The company has raised $44M in funding so far and has 70+ employees across Silicon Valley, Sydney, and Bengaluru.

Why d-Matrix

We want to build a company and a culture that stands the test of time. We offer candidates a unique opportunity to express themselves and become future leaders in an industry that will have a huge global influence. We strive to build a culture of transparency, inclusiveness, and intellectual honesty, while ensuring all our team members are always learning and having fun on the journey. We have built the industry's first highly programmable in-memory computing architecture, applicable to a broad class of applications from cloud to edge. The candidate will get to work on a path-breaking architecture with a highly experienced team that knows what it takes to build a successful business.

The team and the role

The d-Matrix compiler team is looking for exceptional candidates to help develop the compiler backend, specifically the problem of assigning hardware resources in a spatial architecture to execute low-level instructions. The successful candidate will be motivated, capable of solving algorithmic compiler problems, and interested in learning the intricate details of the underlying hardware and software architectures. They will join a team of experienced compiler developers, who will guide them through a quick ramp-up on the compiler infrastructure so they can attack the important problem of mapping low-level instructions to hardware resources. We have opportunities specifically in the following areas:

- Model partitioning (pipeline, tensor, model, and data parallelism), tiling, resource allocation, memory management, scheduling, and optimization (for latency, bandwidth, and throughput).

Qualifications

Minimum:

- Bachelor's degree in a relevant field.
- Ability to deliver production-quality code in modern C++.
- Experience with modern compiler infrastructures, for example LLVM or MLIR.
- Experience with machine learning frameworks and interfaces, for example ONNX, TensorFlow, and PyTorch.
- Experience in production compiler development.

Desired:

- Algorithm design ability, from high-level conceptual design to actual implementation.
- Experience with relevant open-source ML projects such as Torch-MLIR, ONNX-MLIR, Caffe, or TVM.
- Passion for thriving in a fast-paced and dynamic startup culture.

Date Posted

09/29/2023
