
Teachable

São Paulo, São Paulo, Brazil

Posted on: 25 June 2024

Experience: n/a

Work: n/a

Employee Type: n/a

Salary Range: n/a

Analytics Engineer

About You

We are seeking a skilled Analytics Engineer to join our dynamic Data Team. The ideal candidate will have a comprehensive understanding of the data lifecycle from ingestion to consumption, with a particular focus on data modeling. This role will support various business domains, predominantly Finance, by organizing and structuring data to support robust analytics and reporting.


This role will be part of a highly collaborative team made up of US and Brazil-based Teachable and Hotmart employees.


What You’ll Do

  • Data Ingestion to Consumption: Manage the flow of data from ingestion to final consumption. Organize data, understand modern data structures and file types, and ensure proper storage in data lakes and data warehouses.
  • Data Modeling: Develop and maintain entity-relationship models. Relate business and calculation rules to data models to ensure data integrity and relevance.
  • Pipeline Implementation: Design and implement data pipelines, preferably using SQL or Python, to ensure efficient data processing and transformation.
  • Reporting Support: Collaborate with business analysts and other stakeholders to understand reporting needs and ensure that data structures support these requirements.
  • Documentation: Maintain thorough documentation of data models, data flows, and data transformation processes.
  • Collaboration: Work closely with other members of the Data Team and cross-functional teams to support various data-related projects.
  • Quality Assurance: Implement and monitor data quality checks to ensure accuracy and reliability of data.
  • Cloud Technologies: While the focus is on data modeling, familiarity with cloud technologies and platforms (e.g., AWS) is a plus.
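The ingestion-to-consumption and pipeline duties above can be sketched as a toy "ETL as code" example. The table names, columns, and business rule here are invented for illustration, and Python's built-in sqlite3 stands in for a real warehouse such as Redshift:

```python
import sqlite3

# Hypothetical raw input: (sold_at, product, amount). Illustrative only.
RAW_ROWS = [
    ("2024-06-01", "course", 49.00),
    ("2024-06-01", "coaching", 120.00),
    ("2024-06-02", "course", 49.00),
]

def run_pipeline(conn):
    """Ingest raw rows into staging, then model a reporting-friendly mart table."""
    cur = conn.cursor()
    # Ingestion: land raw data in a staging table.
    cur.execute("CREATE TABLE stg_sales (sold_at TEXT, product TEXT, amount REAL)")
    cur.executemany("INSERT INTO stg_sales VALUES (?, ?, ?)", RAW_ROWS)
    # Modeling/consumption: aggregate into a daily-revenue mart for reporting.
    cur.execute("""
        CREATE TABLE mart_daily_revenue AS
        SELECT sold_at AS day, SUM(amount) AS revenue
        FROM stg_sales
        GROUP BY sold_at
        ORDER BY sold_at
    """)
    conn.commit()
    return cur.execute("SELECT * FROM mart_daily_revenue").fetchall()

conn = sqlite3.connect(":memory:")
print(run_pipeline(conn))  # [('2024-06-01', 169.0), ('2024-06-02', 49.0)]
```

In practice a tool like Airflow would schedule each step as a task, but the staging-then-mart shape is the same.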

What You’ll Bring

  • 3+ years of experience in data engineering, analytics engineering, or similar functions.
  • Experience collaborating with business stakeholders to build and support data projects.
  • Experience with database languages, indexing, and partitioning to handle large volumes of data and create optimized queries and databases.
  • Experience manipulating and organizing data files in formats such as Parquet.
  • Experience with the "ETL/ELT as code" approach for building Data Marts and Data Warehouses.
  • Experience with cloud infrastructure and knowledge of solutions like Athena, Redshift Spectrum, and SageMaker.
  • Experience with Apache Airflow for creating and orchestrating DAGs.
  • Critical thinking to evaluate context and decide on delivery formats that meet the company’s needs (e.g., materialized views).
  • Knowledge of development languages, preferably Python or Spark.
  • Knowledge of SQL.
  • Knowledge of S3, Redshift, and PostgreSQL.
  • Experience developing highly complex historical transformations; working with event data is a plus.
  • Experience with ETL orchestration and updates.
  • Experience with error and inconsistency alerts, including detailed root cause analysis, correction, and improvement proposals.
  • Experience with documentation and process creation.
  • Knowledge of data pipeline and LakeHouse technologies is a plus.
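As one illustration of the quality-assurance and error-alerting items above, a simple data-quality check might look like the following sketch (the rules and field layout are invented for the example):

```python
def check_quality(rows):
    """Return (row, reason) pairs for rows failing basic integrity rules.

    Each row is a hypothetical (day, product, amount) tuple; a real check
    would run against warehouse tables and feed an alerting system.
    """
    problems = []
    for row in rows:
        day, product, amount = row
        if amount is None or amount < 0:
            problems.append((row, "missing or negative amount"))
        if not product:
            problems.append((row, "missing product"))
    return problems

sales = [
    ("2024-06-01", "course", 49.00),
    ("2024-06-02", "", 15.00),       # missing product
    ("2024-06-03", "coaching", -5),  # negative amount
]
print(check_quality(sales))  # flags the last two rows
```

Checks like this are typically scheduled alongside the pipeline so inconsistencies are caught, root-caused, and corrected before they reach reporting.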



Tags

design
python
support
cloud
analytics
reliability
engineer
engineering
apache
digital nomad