Machine Learning Data Engineer
About Darwin Homes
At Darwin Homes, we fundamentally believe that the rental experience is broken. Too often, property management, the middleman between investors and residents, means shoddy service, hidden fees, and inefficient processes that shortchange everyone involved. Darwin was built to make residents' and owners' lives easier through an innovative ecosystem of technologies. We have created the best product on the market for residents to discover, tour, and lease their future home, and for owners to have complete peace of mind through our modern management and leasing services, built around our core values of transparency and professionalism. Darwin Homes is the destination for single-family rental services for property owners and residents.
The Darwin Homes team is composed of a diverse set of alumni from DoorDash, Square, Facebook, Apple, LinkedIn, and other top technology companies. The founders and executive team have more than 30 years of combined experience scaling disruptive technology and operations-focused businesses.
Darwin Homes was backed by top Silicon Valley venture capital (Khosla, Fifth Wall) and was acquired by Pagaya Technologies, a publicly traded company, in early 2023. Pagaya is an AI/ML data technology company with offices in Tel Aviv, New York, and Austin.
We're looking for a Data Engineer at Darwin Homes who is excited about the opportunity to design data infrastructure, build clean pipelines, and maintain data products relied on by business units across the company. This role is responsible for designing, architecting, and building highly scalable, reliable data pipelines, along with the data storage, transformation, and infrastructure needed to develop reporting and analytical capabilities in support of our business strategy. Strong written and verbal communication is expected, as is the ability to juggle multiple projects on tight timelines. This role is remote-friendly.
What You'll Do:
- Design and build scalable and robust data pipelines to collect, process, and distribute large volumes of data efficiently
- Build ETL pipelines that will enable and streamline regular reporting on SLAs
- Provide clear documentation on data models, including source, description, and field definitions, for better collaboration, maintainability, and usability
- Provide recommendations and guidance on data integration strategy to meet future analytics and model needs
- Make our data more discoverable and easier to use for analytics and machine learning modeling
- Modify and improve data engineering processes to handle larger volumes, greater complexity, and more types of data sources and pipelines
- Work with AI/ML teams to understand their business and technical problems and deliver the data that supports solutions to those problems
- Incorporate core data management competencies, including data governance, data security, and data quality
- Test data movement, transformation code, and data components
What You'll Have:
- Bachelor’s degree in a STEM-related field or equivalent
- Three years of data engineering or equivalent experience
- Proven experience designing and implementing scalable, performant data pipelines, data services, and data products
- Experience building and maintaining data warehouses with ETL/ELT pipelines
- Experience with data modeling, data lakes, and warehouse design
- Experience with various databases and data warehouses, such as Redshift and Snowflake
- Background in ETL and data processing, with familiarity transforming data to meet business goals
- Experience working with Fivetran and DBT is a bonus
- Strong knowledge of SQL and Python
- Strong problem-solving skills, with a focus on building systems for longevity and finding innovative ways to resolve issues
- Strong communication skills to collaborate with cross-functional partners and drive projects