Lead Data Engineer - Architect - req346677
Posted 30 days ago
| Hours | Full-time, Part-time |
| --- | --- |
| Location | Druid Hills, Georgia |
About this job
https://careers.honeywell.com/us/en/job/req346677/
715 Peachtree Street, N.E., Atlanta, GA 30308 – preferred location
1944 E Sky Harbor Circle, Phoenix, AZ 85034
Honeywell is charging into the Industrial IoT revolution with the establishment of Honeywell Connected Enterprise (HCE), building on our heritage of invention and deep, on-the-ground industry expertise. HCE is the leading industrial disruptor, building and connecting software solutions to streamline and centralize the assets, people and processes that help our customers make smarter, more accurate business decisions. Moving at the speed of software, we are creating, innovating and delivering solutions fast, challenging the way things have always been done, piloting new ways for all of us to work, and expecting our successes to set new standards for our customers and for Honeywell.
JOB ACTIVITIES
Deliver contemporary analytics solutions for all Honeywell business groups and functions.
Develop solutions on various database systems, such as Databricks, Hive, Hadoop, and PostgreSQL, working with big data, IoT data, SQL, Azure, and AWS.
Drive transformation of legacy solutions to contemporary solutions using cloud PaaS, IaaS, and SaaS.
Create reusable cloud services frameworks for cloud application adoption.
Work with scrum masters, product owners, data architects, data engineers, data scientists, and DevOps.
Create a new platform using your experience with APIs, microservices, and platform development.
Build products from the idea phase through launch and beyond.
Drive application cloud adoption goals in collaboration with the Database, Data Center, and Cloud Infrastructure teams.
Define and govern cloud architectural standards/principles, global product security guidelines, and cloud governance.
Identify and implement process improvements and automate processes where possible.
Ensure cloud offerings are highly available, redundant, fault-tolerant, diverse, and affordable.
Conduct proofs of concept (POCs) and cloud placement/cloud fit studies to support better business decisions.
Diagnose and resolve cloud-native technical application deployment issues.
Facilitate cloud technology decisions within the organization and drive a "Cloud First" strategy.
YOU MUST HAVE
8 years of data engineering experience
2+ years working experience with Microsoft Azure
4 years of experience using Azure Storage (ADLS Gen2), Private Link, and private endpoints
4 years of experience using Kubernetes and Docker
4 years of design, development and deployment of applications to process very large amounts of data - structured and unstructured
4 years of hands-on experience with Spark and Hive/Databricks/Cloudera
4+ years of experience in writing complex SQL statements
Bachelor's degree from an accredited college or university
WE VALUE
Azure certification – Azure Administrator Associate
Experience in design, development and deployment of applications to process very large amounts of data - structured and unstructured
Hands-on experience with Databricks, Cloudera, Hortonworks, and/or cloud-based Hadoop distributions (AWS EMR, Azure Data Lake Storage)
Experience with Azure components such as Virtual Machines, Azure PaaS database services, VNet, Subnet, ExpressRoute, Azure DNS, Application Insights, Azure Active Directory/role-based access control, Azure Key Vault, Azure Web Apps, Private Link, and private endpoints
Strength with Kubernetes and Docker in cloud-based deployments; understanding of containers and container orchestration (Swarm or Kubernetes)
Experience working with NoSQL systems (HBase, Cassandra, MongoDB, etc.)
Experience leading, guiding, or mentoring a team of data engineers on various big data technologies
Experience working in Agile methodologies and Scrum
Experience working with remote and global teams and cross team collaboration
Experience with one or more visualization tools such as Tableau, QlikView, AngularJS, or D3.js
Experience with dimensional modeling, data warehousing and data mining
Good understanding of branching, build, deployment, and CI/CD tooling such as Octopus Deploy and Bamboo
Understanding of best-in-class model and data configuration and development processes