Verstand AI (www.verstand.ai) is seeking Data Engineers with strong SQL expertise, Python development skills, and Apache Kafka experience. These engineers will be instrumental in major initiatives to transform all aspects of data management services for commercial and public sector clients. The work involves revised approaches to enterprise data warehousing, business intelligence, and data wrangling/ELT/ETL, and leverages the cloud to apply advanced analytics and data mining capabilities. Verstand Data Engineers will play a significant role in the implementation, maintenance, and continuous improvement of enterprise data platforms, working closely with business stakeholders, software development and support teams, and cloud DevOps. Most importantly, Verstand AI's data engineers will have the opportunity to work with cutting-edge technologies as part of data teams that help clients deliver end-to-end data science programs.
Responsibilities:
- Set up and operate data pipelines and data wrangling procedures using Python and/or SQL
- Collaborate with engineers and business customers to understand data needs (batch and real-time/event streaming), capture requirements, and deliver complete BI solutions
- Design and build data extraction, transformation, and loading processes by writing custom data pipelines
- Design, implement, and support platforms that can provide ad hoc access to large datasets and unstructured data
- Model data and metadata to support ad hoc and pre-built reporting
- Tune application and query performance using performance profiling tools and SQL
- Build data expertise and own data quality for assigned areas of ownership
Minimum Experience, Skills and Education:
- 5+ years of experience using SQL and databases in a business environment
- Experience with cloud environments, distributed systems, system automation, and real-time platforms
- Experience with Apache Kafka (Confluent a plus)
- Experience with cloud technologies such as Google Cloud Platform (GCP), Azure, and Amazon Web Services (AWS), including Redshift
- Experience in custom ETL design, implementation, and maintenance
- Experience with data warehouse schema design and data modeling
- Production level experience with Python, SQL, and shell scripting
- Experience with batch and stream processing
- Experience with building large scale data processing systems
- Solid understanding of data design patterns and best practices
- Working knowledge of data visualization tools such as Tableau and Power BI
- Experience in analyzing data to identify deliverables, gaps, and inconsistencies
- Familiarity with agile software development practices and a drive to ship quickly
- Experience leading change, taking initiative, and driving results
- Effective communication skills and strong problem-solving skills
- Proven ability and desire to mentor others in a team environment
- Bachelor's degree from a four-year college or university in Computer Science, Technology, or a related field
Experience That Sets You Apart:
- Experience with microservice platforms, API development, and containers
- Experience with Apache Airflow
- Retail vertical production experience
Verstand AI is a fast-growing firm that believes in ongoing training and development for its staff. The firm's mission is to help its commercial and public sector clients resolve data management challenges and deliver insight and benefits to stakeholders, customers, and constituents.
Based in Tysons Corner, VA, Verstand does business across the United States and is expanding into Europe and Asia. If you have a desire to tackle challenging data problems, we welcome your interest and encourage you to apply.
Posting ID: 606983753
Posted: 2021-03-06