Data Engineer

Neon Redwood

San Francisco, CA
Full-Time
Paid
    The Company

    Neon Redwood is a data services consulting company working on cutting-edge AI and data-driven solutions. We are a team of passionate engineers and data experts, and we are currently looking for a Data Engineer to join us and help develop and expand our data infrastructure and analytics capabilities.

    The Role

    We are seeking an experienced Data Engineer with a passion for working with large-scale data sets to help us develop and expand our data infrastructure and analytics capabilities.

    The ideal candidate will have at least 2 years of professional experience and a solid understanding of Python, BigQuery, and Google Cloud Platform (GCP) or similar technologies. This full-time role will involve working closely with our CTO and other team members to design, develop, and maintain data pipelines, ETL processes, and data warehousing solutions.

    Responsibilities

    • Collaborate with the CTO and other team members to design, develop, and maintain data pipelines and ETL processes.
    • Write clean, efficient, and maintainable code in Python and other relevant technologies.
    • Implement and optimize data storage and processing solutions using BigQuery and Google Cloud Platform (GCP).
    • Ensure data quality and integrity through proper data validation and monitoring techniques.
    • Stay up to date with the latest industry trends and technologies to ensure our data infrastructure remains competitive.
    • Assist in the development and launch of new data-driven tools and products.
    • Mentor and guide junior engineers, fostering a culture of continuous learning and improvement.

    Requirements

    • Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent work experience.
    • 2+ years of professional data engineering experience.
    • Proficiency in Python, BigQuery, and Google Cloud Platform (GCP) or equivalent technologies.
    • Experience with data pipeline and ETL process design and development.
    • Excellent problem-solving skills and the ability to work independently or as part of a team.
    • Strong communication and collaboration skills, and a desire to develop leadership abilities.
    • Passion for working with large-scale data sets and staying current with industry trends.

    Additional Skills (Nice to Have)

    • Experience with other data processing technologies and platforms (e.g., Apache Beam, Dataflow, Hadoop, Spark).
    • Experience with data visualization tools and libraries (e.g., Looker, Sigma, D3.js).
    • Knowledge of machine learning and AI concepts.
    • Experience with real-time data processing and streaming technologies (e.g., Kafka, Pub/Sub).

    Benefits

    • Fully remote team (and we intend to remain this way), with plenty of flexibility and frequent get-togethers

    • Fully covered Health Insurance for employees and their eligible dependents

    • Fully covered Vision & Dental for employees and their eligible dependents

    • Unlimited Time Off

    • Company 401(k) plan with employer contributions

    • Supplemental monthly health and wellness stipend