Data Engineer (QART)
| Estimated Pay | $58 per hour (based on similar jobs in your market) |
|---|---|
| Hours | Full-time |
| Location | Athens, Attica |
About this job
We are Qualco Group, a leading fintech organisation with over 25 years of experience delivering innovative technology solutions to banks and financial institutions. Serving clients in over 30 countries, we leverage advanced technologies, such as AI and analytics, to develop proprietary software and platforms that accelerate digital transformation and generate lasting value for businesses, society, and the broader economy.
Qualco Group is a Greek technology organisation listed on the Athens Stock Exchange, with a growing footprint in defence, security, civil protection, and critical infrastructure operations. Through its specialised in-house R&D department, the Applied Research & Technology Centre, the Group develops advanced, dual-use solutions and products and provides services powered by Artificial Intelligence, Machine Learning, multi-sensor fusion, and cloud-to-edge architectures.

Our mission is to deliver modular, scalable, and mission-ready platforms that support real-time decision-making, autonomous systems, common operational picture and situational awareness, and operational continuity across air, land, sea, and urban environments. Built on robust data architectures and interoperability standards, our solutions integrate with existing systems and evolve alongside operational needs.
We are seeking a Data Engineer to design, build, and maintain scalable data pipelines and storage systems. The ideal candidate will work across ingestion, transformation, and delivery layers, ensuring data is reliable, efficient, and optimized for analytics and operational use.
Key Responsibilities:
- Design and implement data ingestion pipelines using both streaming and batch processing patterns;
- Develop and maintain PostgreSQL databases including schema design, indexing, performance tuning, and query optimization;
- Build and operate Apache Kafka pipelines for real‑time streaming, event processing, and message-driven architectures;
- Work with Redis as a message broker or caching layer to support high‑throughput, low‑latency data flows;
- Integrate with internal and external APIs for data collection, synchronization, and automated data ingestion;
- Implement ETL/ELT workflows using modern orchestration tools and best practices;
- Ensure data quality, validation, and observability across all stages of the pipeline;
- Manage Git repositories and workflows using GitHub, GitLab, or Bitbucket, applying best practices in branching, versioning, and CI integration;
- Optimize storage and retrieval strategies for structured and semi‑structured data;
- Collaborate with data science, analytics, and backend teams to deliver clean, well‑structured datasets;
- Maintain documentation and data models to support transparency and long‑term maintainability;
- Ensure that all activities and duties are carried out in full compliance with regulatory requirements, and support the continued implementation of the Group Anti-Bribery and Corruption Policy.
Requirements
- Bachelor’s degree in Computer Science, Engineering, or a related field (required);
- Master’s degree in Computer Science or a related field (a plus);
- 3+ years of experience in data engineering or similar roles;
- Strong expertise in PostgreSQL including performance tuning, replication, and advanced SQL;
- Hands‑on experience with Apache Kafka (producers, consumers, schema management, stream processing);
- Experience with Redis as a message broker or caching system;
- Proficiency in building and consuming APIs (REST, GraphQL, or gRPC);
- Experience with ETL/ELT tools and data transformation frameworks;
- Solid programming skills in Python, Java, or similar languages;
- Experience with Git tools and GitOps‑style workflows;
- Familiarity with containerized environments (Docker, Kubernetes);
- Understanding of data modeling principles for OLTP and OLAP systems;
- Highly proficient in both spoken and written English.
Desirable Skills:
- Experience with cloud data platforms (AWS/GCP/Azure);
- Knowledge of workflow orchestration tools such as Airflow, Dagster, or Prefect;
- Familiarity with schema registries and serialization formats (Avro, Protobuf, Parquet);
- Background in distributed systems and high‑throughput data processing;
- Participation in open‑source projects or data engineering communities.
Benefits
Your Life @ Qualco
As a #Qmember, you'll embody our values every day, fostering a culture of teamwork & integrity, passion for results, quality & excellence, client focus, and agility & innovation. Within a truly human-centred environment built on mutual respect and trust, your dedication to our shared vision will not only be recognized but also celebrated, offering boundless opportunities for your personal and professional growth.
Find out more about #LifeatQualco Group 👉🏼 qualco.group/life_at_qualco_group
Join the #Qteam and enjoy:
💸 Competitive compensation, Meal vouchers, and annual bonus programs.
💻 Cutting-edge IT equipment, mobile phone, and data plan.
🏢 Modern facilities, free coffee, beverages, indoor parking, and in-house restaurant.
👨 Private health insurance, occupational doctor and nutritionist.
🤸 Onsite gym, wellness facilities, and ping pong room.
💡 Career and talent development tools.
🎓 Mentoring, coaching, personalized annual learning, and development plan.
🌱 Employee referral bonus, regular wellbeing, ESG, and volunteering activities.
At QUALCO, we value diversity and inclusivity. Your race, gender identity and expression, age, ethnicity, or disability make no difference at Qualco. We want to attract, develop, promote, and retain the best people based only on their ability and behavior.
Application Note: All CVs and application materials should be submitted in English.
Disclaimer: QUALCO collects and processes personal data in accordance with the EU General Data Protection Regulation (GDPR). We use the information provided in your job application for recruitment purposes only and do not share it with any third parties. For more details on the processing of your personal data during the recruitment procedure, please review the relevant privacy notice before submitting your application.