Addepar

Pune, Maharashtra, India

Posted on: 01 November 2023

Data Engineer

The Role

Portfolio Data Integration is part of the broader Addepar Platform team. The Addepar Platform provides a single source of truth, a “data fabric” used across the Addepar product set, including a centralized, self-describing repository (a.k.a. the Data Lake), a set of API-driven data services, an integration pipeline, analytics infrastructure, warehousing solutions, and operating tools. The team is responsible for all data acquisition, conversion, cleansing, disambiguation, modeling, tooling, and infrastructure related to the integration of client portfolio data.

Addepar’s core business relies on the ability to quickly and accurately ingest data from a variety of sources, including third-party data providers, custodial banks, data APIs, and even direct user input. Portfolio data integrations and feeds are a highly critical cross-section of this set, bringing automatically updated and reconciled information on our users’ latest holdings onto the platform.

As a data engineer on this team, you will develop new data integrations and maintain existing processes to expand and improve our data platform. You’ll add automation and functionality to our distributed data pipelines by writing PySpark code and integrating it with our Databricks Data Lake. As you gain experience, you’ll take on increasingly challenging engineering projects within our platform, with the ultimate goal of dramatically increasing the throughput of data ingestion for Addepar. This is a crucial, highly visible role within the company: your team is a key part of growing and serving Addepar’s client base with minimal manual effort required from our clients or our internal data operations team.
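For a concrete, purely illustrative sense of this work, the sketch below shows a minimal PySpark ingestion step of the kind described above, running against a Databricks-style Data Lake. The feed path, column names, and table name are hypothetical and do not reflect Addepar’s actual schema.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # On Databricks a SparkSession is already available as `spark`;
    # this line keeps the sketch runnable elsewhere too.
    spark = SparkSession.builder.appName("portfolio-ingestion-sketch").getOrCreate()

    # Read a hypothetical daily positions feed delivered by a custodian as CSV.
    raw = (
        spark.read
        .option("header", True)
        .csv("/mnt/feeds/custodian_x/positions/2023-11-01.csv")
    )

    # Normalize types, drop rows that cannot be reconciled, and de-duplicate.
    positions = (
        raw
        .withColumn("quantity", F.col("quantity").cast("double"))
        .withColumn("as_of_date", F.to_date("as_of_date", "yyyy-MM-dd"))
        .dropna(subset=["account_id", "security_id", "quantity"])
        .dropDuplicates(["account_id", "security_id", "as_of_date"])
    )

    # Append the cleaned feed to a Delta table in the data lake.
    positions.write.format("delta").mode("append").saveAsTable("portfolio.positions")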

What You’ll Do

  • Own individual project priorities, deadlines, and deliverables
  • Build pipelines that support the ingestion, analysis, and enrichment of financial data in partnership with business data analysts
  • Improve the existing pipeline to increase the throughput and accuracy of data
  • Develop and maintain efficient process controls and accurate metrics to ensure quality standards and organizational expectations are met (a simple example of such a check follows this list)
  • Partner with members of Product and Engineering to design, test, and implement new processes and tooling features that improve data quality as well as increase operational efficiency
  • Identify areas of automation opportunities and implement improvements
  • Understand data models and schemas, and work with other engineering teams to recommend extensions and changes
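
As a rough illustration of the process controls and quality metrics mentioned in the list above (hypothetical table, column, and threshold; not an actual Addepar check), such a metric can be computed and enforced directly in PySpark:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("positions-quality-check").getOrCreate()

    # Load the (hypothetical) cleaned positions table produced by the ingestion step.
    positions = spark.read.table("portfolio.positions")

    total = positions.count()
    missing = positions.filter(F.col("market_value").isNull()).count()

    # A simple completeness metric: stop the run if too many rows lack a market value.
    missing_rate = missing / total if total else 1.0
    if missing_rate > 0.01:
        raise ValueError(f"market_value missing rate {missing_rate:.2%} exceeds the 1% threshold")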

Who You Are

  • A computer science degree or equivalent experience
  • 2-6 years of professional software engineering experience
  • Competency with relevant programming languages (Java, Python)
  • Familiarity with relational databases and data pipelines
  • Experience or interest in data modeling, visualization, and ETL pipelines
  • Knowledge of financial concepts (e.g., stocks, bonds) is encouraged but not required
  • Passion for the finance and technology space and solving previously intractable problems at the heart of investment management


Tags

support
software
code
financial
investment
finance
operations
operational
analytics
engineer
engineering