Join us at Provectus to be a part of a team that is dedicated to building cutting-edge technology solutions that have a positive impact on society. Our company specializes in AI and ML technologies, cloud services, and data engineering, and we take pride in our ability to innovate and push the boundaries of what's possible.
We are seeking a talented and experienced Senior Data Engineer. Working alongside our practices in Data, Machine Learning, DevOps, Application Development, and QA, you will collaborate with a multidisciplinary team of data engineers, machine learning engineers, and application developers. You will tackle a wide range of technical challenges, contribute to exciting open-source projects, build internal solutions, and engage in R&D activities, all in an excellent environment for professional growth.
Let's work together to build a better future for everyone!
Requirements:
- 5+ years of experience in data engineering
- Experience working with Cloud Solutions (preferably AWS; possibly GCP or Azure)
- Experience with Cloud Data Platforms (preferably Databricks)
- Experience with Infrastructure as Code (IaC) technologies such as Terraform or AWS CloudFormation
- Experience handling real-time and batch data flows and data warehousing with tools such as Airflow (or Dagster), Kafka, Spark, and dbt
- Proficiency in the programming languages relevant to data engineering: Python and SQL
- Experience building scalable APIs (FastAPI or Flask)
- Familiarity with Data Governance aspects such as Quality, Discovery, Lineage, Security, Business Glossary, Modeling, Master Data, and Cost Optimization
- Advanced or Fluent English skills
- Strong problem-solving skills and the ability to work collaboratively in a fast-paced environment

Nice to Have:
- Relevant AWS, GCP, Azure, or Databricks certifications
- Knowledge of BI tools (Power BI, QuickSight, Looker, Tableau, etc.)
- Experience building data solutions in a Data Mesh architecture
- Familiarity with classical Machine Learning tasks and tools (e.g., OCR, AWS SageMaker, MLflow)

Responsibilities:
- Collaborate closely with clients to deeply understand their existing IT environments, applications, business requirements, and digital transformation goals
- Collect and manage large volumes of varied data sets
- Work directly with Data Scientists and ML Engineers to create robust and resilient data pipelines that feed Data Products
- Define data models that integrate disparate data across the organization
- Design, implement, and maintain ETL pipelines
- Develop and continuously test data-driven solutions
Tags
python
sql
data engineering