Silicon Architecture Engineer, PhD, University Graduate

Google · Bangalore, India

Company

Google

Location

Bangalore, India

Type

Full Time

Job Description

Minimum qualifications:

  • PhD in Electronics and Communication Engineering, Electrical Engineering, Computer Engineering or related technical field, or equivalent practical experience
  • Experience with accelerator architectures and data center workloads
  • Proficiency in programming languages (e.g., C++, Python)
Preferred qualifications:
  • Experience with performance modeling tools
  • Knowledge of arithmetic units, bus architectures, accelerators, or memory hierarchies
  • Knowledge of high performance and low power design techniques

About the job

As a Silicon Architecture Engineer, you'll work to shape the future of AI/ML hardware acceleration. You will have an opportunity to drive Tensor Processing Unit (TPU) technology that powers Google's AI/ML applications. You will work with hardware and software architects to model, analyze, and define next-generation TPUs. You will have dynamic, multi-faceted responsibilities in areas such as product definition, design, and implementation, collaborating with the Engineering teams to drive the optimal balance between performance, power, features, schedule, and cost.


Behind everything our users see online is the architecture built by the Technical Infrastructure team to keep it running. From developing and maintaining our data centers to building the next generation of Google platforms, we make Google's product portfolio possible. We're proud to be our engineers' engineers and love voiding warranties by taking things apart so we can rebuild them. We keep our networks up and running, ensuring our users have the best and fastest experience possible.

Responsibilities

  • Work on Machine Learning (ML) workload characterization and benchmarking. Propose capabilities and optimizations for next-generation TPUs.
  • Develop architectural and microarchitectural power/performance models and perform quantitative and qualitative performance and power analysis.
  • Collaborate with the architecture team on power, performance, area (PPA) trade-off analysis as part of product definition.
  • Collaborate with partners in hardware design, software, compiler, ML model, and research teams for effective hardware/software codesign, creating high performance hardware/software interfaces.
  • Develop architecture specifications that meet current and future computing requirements for the AI/ML roadmap.

Apply Now

Date Posted

01/21/2025

