Military Spouse Connection Jobs


Job Information

Amazon SoC Modeling Engineering Team Lead, Annapurna Labs, Machine Learning Accelerators in Austin, Texas

Description

Custom SoCs (system-on-chips) are the brains behind AWS’s Machine Learning servers. Our team builds C++ models of these accelerator SoCs for use by internal partner teams. The modeling team currently develops functional models, but we’re expanding to include performance modeling for key SoC components. We’re looking for a Sr. Modeling Engineer to lead the team in architecting and then delivering new functional and performance models, infrastructure, and tooling for our customers.

As the ML accelerator modeling team lead, you will:

  • Chart the technical direction for the modeling team

  • Build and manage a small, strong team of 3-5 modeling engineers

  • Develop and own functional and performance models end-to-end, including model architecture, integration with other model or infrastructure components, testing, correlation, and debug

  • Drive model and modeling infrastructure performance improvements

  • Work closely with architecture, RTL design, design-verification, emulation, and software teams

  • Innovate on the tooling you provide to customers, making it easier for them to use our SoC models

  • Develop software which can be maintained, improved upon, documented, tested, and reused

Annapurna Labs, our organization within AWS, designs and deploys some of the largest custom silicon in the world, with many subsystems that must all be modeled, tested, and correlated with high quality. The model is a critical piece of software used in both our SoC and SW stack development processes. You’ll collaborate with many internal customers who depend on your models to be effective themselves, and you'll work closely with these teams to push the boundaries of how we're using modeling to build successful products.

You will thrive in this role if you:

  • Are an expert in performance modeling for SoCs, ASICs, GPUs, or CPUs

  • Enjoy building, managing, and leading small teams

  • Are comfortable modeling in C++ and familiar with Python

  • Enjoy learning new technologies, building software at scale, moving fast, and working closely with colleagues as part of a small team within a large organization

  • Want to jump into an ML role, or get deeper into the details of ML at the system-level

Although we are building machine learning chips, no machine learning background is needed for this role. This role spans modeling of the ML and management regions of our chips, and you’ll dip your toes into both. You’ll be able to ramp up on ML as part of this role, and any ML knowledge that’s required can be learned on-the-job.

This role can be based in either Cupertino, CA or Austin, TX. The team is split between the two sites, with no preference for one over the other.

This is a fast-paced role where you'll work with thought-leaders in multiple technology areas. You'll have high standards for yourself and everyone you work with, and you'll be constantly looking for ways to improve your software, as well as our products' overall performance, quality, and cost.

We're changing an industry. We're searching for individuals who are ready for this challenge, who want to reach beyond what is possible today. Come join us and build the future of machine learning!

Basic Qualifications

  • 5+ years of non-internship professional experience writing functional or performance models

  • Experience programming with C++

  • Familiarity with SoC, CPU, GPU, and/or ASIC architecture and micro-architecture

Preferred Qualifications

  • 5+ years of experience with the full software development life cycle, including coding standards, code reviews, source control management, build processes, and testing

  • Experience developing and calibrating performance models for custom silicon chips

  • Experience with writing benchmarks and analyzing performance

  • Experience with PyTest and GoogleTest

  • Familiarity with modern C++ (11, 14, etc.)

  • Experience in multi-threaded programming, vector extensions, HPC, and QEMU

  • Experience with machine learning accelerator hardware and/or software

Amazon is committed to a diverse and inclusive workplace. Amazon is an equal opportunity employer and does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status. For individuals with disabilities who would like to request an accommodation, please visit https://www.amazon.jobs/en/disability/us.

Our compensation reflects the cost of labor across several US geographic markets. The base pay for this position ranges from $151,300/year in our lowest geographic market up to $261,500/year in our highest geographic market. Pay is based on a number of factors including market location and may vary depending on job-related knowledge, skills, and experience. Amazon is a total compensation company. Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits. For more information, please visit https://www.aboutamazon.com/workplace/employee-benefits. This position will remain posted until filled. Applicants should apply via our internal or external career site.
