The Data Infrastructure team within the software organization powers analytics, experimentation, and ML feature engineering for the software customers love on their devices. The org's mission is to provide cutting-edge, reliable, and easy-to-use infrastructure for ingesting, storing, processing, and interacting with data, and to help teams that build data-intensive applications succeed. The Data Infrastructure team is looking for engineers who want to bring their passion for infrastructure to building world-class infrastructure products. You will build and lead our data infrastructure team and help its members grow both technically and professionally. You will work with many cross-functional teams and lead the planning, execution, and success of technical projects, with the ultimate purpose of improving the software experience for customers.
The ideal candidate will have outstanding communication skills, proven data infrastructure design and implementation capabilities, strong business acumen, and an innate drive to deliver results.
They will be a self-starter, comfortable with ambiguity, and will enjoy working in a fast-paced, dynamic environment.
Build and operate the company’s largest data infrastructure supporting millions of users at 100+ PB scale.
Scale and operationalize big data technologies such as Spark, Kafka, Presto, Flink, and Hadoop in both on-premises and AWS environments.
Ensure the data infrastructure delivers reliable, high-quality data with consistent SLAs, backed by strong monitoring, alerting, and incident response, and continually invest in reducing tech debt.
Write code and documentation, participate in code reviews, and mentor other engineers.
3+ years of experience scaling and operating distributed systems such as big data processing engines (e.g., Apache Hadoop, Apache Spark), streaming systems (e.g., Apache Flink, Apache Kafka), or resource management systems (e.g., Apache Mesos, AWS, and Kubernetes).
Fluency in Java, Scala or a similar language.
Ability to debug complex issues in large scale distributed systems.
Passion for building infrastructure that is reliable, easy to use and easy to maintain.
Experience with Apache Spark programming, ETL, and data warehousing environments is helpful but not required.
B.S. degree in Computer Science, Computer Engineering, or equivalent practical experience.
M.S. or Ph.D. in Computer Science, Computer Engineering, or equivalent practical experience.
Duration: 6 months
Location: Seattle, WA or Santa Clara Valley, CA (on-site)
Submit resume to jobs@OSIengineering.com