The job below is no longer available.

Estimated Pay: $60.34 per hour
Hours: Full-time, Part-time
Location: Columbus, OH

Compare Pay

We estimate that this job pays $60.34 per hour based on our data (range: $35.74 to $77.88).


About this job

Job Description


MUST-have skills (min. of 5 years):
- Data Engineer
- Heavy SQL/Oracle
- AWS (1-2 years)
MUST-have skills (min. of 3 years):
- Microservices
- Data pipelines
- Python
- AWS

NICE-to-have skills:
- AWS
- ETL/data pipeline tools: Ab Initio, Informatica, Hadoop

Preferred: 2 to 7 years of Spark on Cloud development experience.
Required: 4 to 7 years of strong SQL skills; Teradata preferred, but experience with any other RDBMS is acceptable.
Experience understanding requirements for extraction, transformation, and loading (ETL) of data using Spark on Cloud.
Formal training or certification in software engineering concepts and 3+ years applied experience.
Ability to independently design, build, test, and deploy code. Should be able to lead by example and guide the team with their technical expertise.
Ability to identify risks/issues for the project and manage them accordingly.
Hands-on development experience and in-depth knowledge of Java/Python, Microservices, Containers/Kubernetes, Spark, and SQL.
Hands-on practical experience in system design, application development, testing, and operational stability
Experience in developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages.
Proficient in coding in one or more programming languages
Experience across the whole Software Development Life Cycle
Proven understanding of agile methodologies such as CI/CD, Application Resiliency, and Security
Proven knowledge of software applications and technical processes within a technical discipline (e.g., cloud, artificial intelligence, machine learning, mobile, etc.)
Preferred qualifications, capabilities, and skills
Knowledge of data warehousing concepts.
Experience with Agile-based project methodology.
Knowledge of or experience with ETL technologies such as Informatica or Ab Initio is preferred.

Preferred Qualifications:
1-3 years of Spark on Cloud development experience.
1-3 years of strong SQL skills; Teradata preferred, but experience with any other RDBMS is acceptable.
Formal training or certification on software engineering concepts and 3+ years applied experience.
Hands-on development experience and in-depth knowledge of Java/Python, Microservices, Containers/Kubernetes, Spark, and SQL
Hands-on practical experience in system design, application development, testing, and operational stability
Experience in developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages.
Proficient in coding in one or more programming languages
Any experience with Ab Initio, Informatica or other ETL/Data Pipeline Tools
Exposure to agile methodologies such as CI/CD, Application Resiliency, and Security
Emerging knowledge of software applications and technical processes within a technical discipline (e.g., cloud, artificial intelligence, machine learning, mobile, etc.)
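The core skill the posting asks for, building SQL-based extract, transform, load (ETL) pipelines, can be sketched in plain Python. This is an illustrative example only: it uses the stdlib sqlite3 module as a stand-in for the Spark/Teradata/Oracle stack named above, and every table and column name is hypothetical.

```python
import sqlite3

# Hypothetical ETL sketch; sqlite3 stands in for the Spark/RDBMS
# stack named in the posting, and all names are made up.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Extract: a raw source table of order events.
cur.execute("CREATE TABLE raw_orders (id INTEGER, amount_cents INTEGER, status TEXT)")
cur.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?)",
    [(1, 1250, "shipped"), (2, 430, "cancelled"), (3, 999, "shipped")],
)

# Transform: keep only shipped orders and convert cents to dollars.
rows = cur.execute(
    "SELECT id, amount_cents / 100.0 FROM raw_orders WHERE status = 'shipped'"
).fetchall()

# Load: write the cleaned rows into a reporting table.
cur.execute("CREATE TABLE orders_clean (id INTEGER, amount_usd REAL)")
cur.executemany("INSERT INTO orders_clean VALUES (?, ?)", rows)
conn.commit()

print(cur.execute("SELECT COUNT(*) FROM orders_clean").fetchone()[0])
```

In a Spark on Cloud setting the same extract/transform/load shape would be expressed with DataFrame reads, SQL transformations, and writes to the warehouse, but the pipeline structure is the same.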