Data Engineer

The Corporate
Location: Boston, MA, USA
Published: 6/14/2022
Technology
Full Time

Job Description

Job Title: Data Engineer

About the Role

We are seeking a skilled Data Engineer to join our team and help design, build, and optimize data pipelines and architectures that power business intelligence, analytics, and data-driven decision-making. This role is fully remote within the United States.

The ideal candidate has strong experience in building scalable data systems, working with large datasets, and collaborating with cross-functional teams including analysts, data scientists, and software engineers.

Key Responsibilities

  • Design, develop, and maintain scalable ETL/ELT pipelines for structured and unstructured data.

  • Build and optimize data architectures to support analytics, reporting, and machine learning initiatives.

  • Work with stakeholders to define data requirements and deliver solutions that meet business needs.

  • Ensure data quality, integrity, and governance across systems and pipelines.

  • Implement best practices for data modeling, warehousing, and storage solutions.

  • Monitor, troubleshoot, and optimize data workflows for performance and reliability.

  • Collaborate with engineering teams to integrate data solutions into applications and platforms.

  • Stay updated on emerging data technologies and recommend improvements to existing processes.

Qualifications

Required:

  • Bachelor’s degree in Computer Science, Information Systems, Engineering, or a related field (or equivalent experience).

  • 3+ years of experience as a Data Engineer or in a related data-focused role.

  • Proficiency in SQL and data modeling.

  • Hands-on experience with cloud platforms (AWS, Azure, or GCP).

  • Strong knowledge of data warehousing tools (e.g., Snowflake, Redshift, BigQuery).

  • Experience with ETL tools (e.g., dbt, Airflow, Talend, Informatica).

  • Proficiency in at least one programming language (Python, Java, or Scala).

  • Familiarity with big data frameworks (Spark, Hadoop, Kafka).

Preferred:

  • Experience with containerization (Docker, Kubernetes).

  • Knowledge of data governance and security best practices.

  • Exposure to machine learning pipelines and data science workflows.

  • Strong communication skills with the ability to work across technical and non-technical teams.
