Distributed Systems Database Performance Engineer

Infotech Sourcing

Chicago, IL
Full Time
Paid
  • Responsibilities

    Job Title: Distributed Systems / Database Performance Engineer 

    Location: Chattanooga, TN; Chicago, IL; San Francisco, CA (in order of priority)

    Job Type: Full-Time, On-site (W2 Only)

    About the Role:

    We are seeking a highly skilled Distributed Systems / Database Performance Engineer to join our team. This role is critical to optimizing our database infrastructure, enhancing data resiliency, and ensuring the efficient processing of large datasets. The ideal candidate will have a deep understanding of DynamoDB, distributed systems, and performance optimization.

    Key Responsibilities:

    DynamoDB Optimization:

    Implement congestion control strategies (e.g., exponential backoff) to manage high read/write workloads and prevent unhandled throttling exceptions in hot-partition scenarios.

    Implement API-side accounting, i.e., read/write throughput tracking within a sliding window backed by Redis, so that a load-balanced, horizontally scalable web frontend stays within throughput budgets (see the sketch below).
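
    For illustration, a minimal sketch of how these two pieces could fit together, assuming the AWS SDK v3 DynamoDB client and ioredis; the window length, per-window budget, and key names are placeholder assumptions, not values from our stack:

    ```typescript
    // Sketch: Redis-backed sliding-window accounting plus exponential backoff
    // with full jitter around DynamoDB writes. Limits and key names are assumed.
    import { DynamoDBClient, PutItemCommand } from "@aws-sdk/client-dynamodb";
    import Redis from "ioredis";

    const ddb = new DynamoDBClient({});
    const redis = new Redis();

    const WINDOW_MS = 1_000;                // sliding-window length (assumed)
    const MAX_WRITES_PER_WINDOW = 500;      // per-window write budget (assumed)

    // Shared sliding-window counter so every load-balanced frontend node
    // draws from the same write budget.
    async function underWriteBudget(key: string): Promise<boolean> {
      const now = Date.now();
      await redis.zremrangebyscore(key, 0, now - WINDOW_MS); // drop expired entries
      if ((await redis.zcard(key)) >= MAX_WRITES_PER_WINDOW) return false;
      await redis.zadd(key, now, `${now}-${Math.random()}`);
      await redis.expire(key, Math.ceil(WINDOW_MS / 1000) + 1);
      return true;
    }

    const jitter = (attempt: number) => Math.random() * Math.min(100 * 2 ** attempt, 5_000);
    const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));

    // Exponential backoff with full jitter around a single PutItem call.
    async function putWithBackoff(table: string, item: Record<string, any>, maxRetries = 5) {
      for (let attempt = 0; attempt <= maxRetries; attempt++) {
        if (!(await underWriteBudget(`writes:${table}`))) {
          await sleep(jitter(attempt));       // over budget: wait out the window
          continue;
        }
        try {
          return await ddb.send(new PutItemCommand({ TableName: table, Item: item }));
        } catch (err: any) {
          const throttled =
            err?.name === "ProvisionedThroughputExceededException" ||
            err?.name === "ThrottlingException";
          if (!throttled || attempt === maxRetries) throw err;
          await sleep(jitter(attempt));       // back off before hitting a hot partition again
        }
      }
      throw new Error("write budget exhausted after retries");
    }
    ```

    Full jitter keeps a fleet of frontend nodes from retrying in lockstep against the same hot partition.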

    Read Resiliency:

    Develop and implement read resiliency mechanisms for distributed data stores.

    Ensure that incomplete data from failed reads (e.g., spotty cellular connectivity) is reported back to the application, triggering a retry mechanism that preserves data integrity (see the sketch below).
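
    As an illustration, a minimal sketch of a resilient read wrapper, assuming the AWS SDK v3 BatchGetItem API; the complete flag and retry budget are assumptions about how the application layer consumes partial results:

    ```typescript
    // Sketch: surface incomplete reads to the caller and retry unprocessed keys,
    // rather than silently returning partial data. Table and key shapes are assumed.
    import { DynamoDBClient, BatchGetItemCommand } from "@aws-sdk/client-dynamodb";

    const ddb = new DynamoDBClient({});

    interface ReadResult {
      items: Record<string, any>[];
      complete: boolean;  // false => application should flag partial data and retry
    }

    async function resilientBatchGet(
      table: string,
      keys: Record<string, any>[],
      maxRetries = 3,
    ): Promise<ReadResult> {
      const items: Record<string, any>[] = [];
      let pending = keys;

      for (let attempt = 0; attempt <= maxRetries && pending.length > 0; attempt++) {
        try {
          const out = await ddb.send(
            new BatchGetItemCommand({ RequestItems: { [table]: { Keys: pending } } }),
          );
          items.push(...(out.Responses?.[table] ?? []));
          // DynamoDB reports keys it could not read; they become the next retry set.
          pending = out.UnprocessedKeys?.[table]?.Keys ?? [];
        } catch {
          // Transient failure (e.g., flaky connectivity): keep the pending set and retry.
        }
        if (pending.length > 0) await new Promise((r) => setTimeout(r, 100 * 2 ** attempt));
      }

      // complete=false tells the application the result is partial, so it can retry later.
      return { items, complete: pending.length === 0 };
    }
    ```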

    LRU Cache Updates:

    Implement event listeners at the LRU caching layer to synchronize updates with in-memory instance classes.

    Address concurrency and race-condition issues to prevent overwriting user changes during synchronized updates (see the sketch below).
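
    A minimal sketch of this synchronization pattern, assuming Node's EventEmitter and the lru-cache package; the version/dirty fields and event name are illustrative assumptions rather than our actual schema:

    ```typescript
    // Sketch: an event-driven LRU layer that pushes store updates into in-memory
    // instances, but never clobbers newer local edits (a version check plus a
    // dirty flag stands in for whatever conflict rule the real system uses).
    import { EventEmitter } from "node:events";
    import { LRUCache } from "lru-cache";

    interface CachedRecord {
      id: string;
      version: number;    // monotonically increasing per write
      dirty: boolean;     // true while the user has unsynced local edits
      data: unknown;
    }

    const cache = new LRUCache<string, CachedRecord>({ max: 1000 });
    const storeEvents = new EventEmitter(); // emitted by the persistence layer (assumed)

    // Listener that keeps cached in-memory instances in sync with the store.
    storeEvents.on("record:updated", (incoming: CachedRecord) => {
      const current = cache.get(incoming.id);

      // Race-condition guards: never overwrite local edits or apply stale events.
      if (current?.dirty) return;                                   // local change wins until flushed
      if (current && current.version >= incoming.version) return;   // out-of-order event

      cache.set(incoming.id, { ...incoming, dirty: false });
    });

    // Local edit path: mark the instance dirty so concurrent store events skip it.
    function applyLocalEdit(id: string, data: unknown): void {
      const current = cache.get(id);
      cache.set(id, { id, version: (current?.version ?? 0) + 1, dirty: true, data });
    }
    ```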

    Chunking for Large Writes:

    Develop a chunking abstraction for handling large inbound writes that exceed DynamoDB’s 400 KB item size limit.

    Implement mechanisms to reassemble chunks during data reads to preserve data integrity and performance (see the sketch below).
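
    A minimal sketch of the chunking abstraction, with assumed item shapes (the logical item's partition key plus a chunk#<index> sort key) and a conservative chunk size to leave headroom under the 400 KB limit:

    ```typescript
    // Sketch: split a large payload into sub-items that each stay under the
    // DynamoDB item size limit, and reassemble them in order on read.
    const MAX_CHUNK_BYTES = 350 * 1024; // headroom under 400 KB for keys/attributes (assumed)

    interface Chunk {
      pk: string;          // same partition key as the logical item
      sk: string;          // "chunk#<index>" sort key keeps chunks together and ordered
      totalChunks: number;
      payload: Buffer;
    }

    function toChunks(pk: string, value: unknown): Chunk[] {
      const bytes = Buffer.from(JSON.stringify(value), "utf8");
      const totalChunks = Math.max(1, Math.ceil(bytes.length / MAX_CHUNK_BYTES));
      return Array.from({ length: totalChunks }, (_, i) => ({
        pk,
        sk: `chunk#${String(i).padStart(6, "0")}`,
        totalChunks,
        payload: bytes.subarray(i * MAX_CHUNK_BYTES, (i + 1) * MAX_CHUNK_BYTES),
      }));
    }

    function fromChunks(chunks: Chunk[]): unknown {
      if (chunks.length === 0) throw new Error("no chunks found");
      if (chunks.length !== chunks[0].totalChunks) {
        throw new Error("incomplete chunk set; caller should retry the read");
      }
      const ordered = [...chunks].sort((a, b) => a.sk.localeCompare(b.sk));
      return JSON.parse(Buffer.concat(ordered.map((c) => c.payload)).toString("utf8"));
    }
    ```

    Writing the chunks under one partition key lets a single Query fetch them all, and the zero-padded index keeps reassembly order stable.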

    Query Optimization:

    Optimize the processing of large datasets within our proprietary database query engine.

    Develop custom indexes or leverage existing tools (e.g., sqlite3) to enhance the performance of searching, sorting, and filtering graph data.

    Optimize path-based query evaluation logic to minimize network calls and improve efficiency (see the sketch below).
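
    A minimal sketch of the custom-index route (the sqlite3 option would look similar, just with SQL indexes instead of Maps); the edge shape and label-path semantics are assumptions for illustration:

    ```typescript
    // Sketch: a custom in-memory secondary index for graph data. Edges are
    // indexed by source node, so path evaluation becomes a series of Map
    // lookups instead of repeated scans or per-hop network calls.
    interface Edge {
      from: string;
      to: string;
      label: string;
    }

    class AdjacencyIndex {
      private bySource = new Map<string, Edge[]>();

      constructor(edges: Edge[]) {
        for (const e of edges) {
          const bucket = this.bySource.get(e.from) ?? [];
          bucket.push(e);
          this.bySource.set(e.from, bucket);
        }
      }

      // Evaluate a label path (e.g., ["owns", "contains"]) from a start node.
      // Each hop is an index lookup plus a filter, with no network round trips.
      evaluatePath(start: string, labels: string[]): string[] {
        let frontier = [start];
        for (const label of labels) {
          const next: string[] = [];
          for (const node of frontier) {
            for (const e of this.bySource.get(node) ?? []) {
              if (e.label === label) next.push(e.to);
            }
          }
          frontier = next;
        }
        return frontier;
      }
    }
    ```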

    Efficient Database Migrations:

    Streamline the bootup process by optimizing database migrations.

    Implement fine-grained control over which migrations are necessary and which data needs to be bootstrapped, reducing computational overhead and improving startup times (see the sketch below).
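
    A minimal sketch of fine-grained migration control; the requiredAtBoot flag, bootstrap hook, and applied-ID storage are illustrative assumptions:

    ```typescript
    // Sketch: each migration declares whether it must run at boot and whether it
    // bootstraps data, so startup only does the pending work it actually needs.
    interface Migration {
      id: string;
      requiredAtBoot: boolean;          // false => defer to a background pass after startup
      up: () => Promise<void>;
      bootstrap?: () => Promise<void>;  // optional data-seeding step
    }

    async function runStartupMigrations(
      migrations: Migration[],
      appliedIds: Set<string>,                     // read from a migrations table (assumed)
      markApplied: (id: string) => Promise<void>,  // persists the applied ID (assumed)
    ): Promise<void> {
      for (const m of migrations) {
        if (appliedIds.has(m.id)) continue;   // already applied: skip entirely
        if (!m.requiredAtBoot) continue;      // non-critical: run later, not at boot
        await m.up();
        await m.bootstrap?.();                // seed only the data this migration needs
        await markApplied(m.id);
      }
    }
    ```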

    Qualifications:

    Proven experience working with DynamoDB and distributed data stores.

    Strong knowledge of congestion control strategies, read resiliency, and cache synchronization.

    Expertise in handling large datasets and optimizing database performance.

    Proficiency in JavaScript, with a preference for experience in C++ or WASM for long-term database engine optimization.

    Experience with React Native and knowledge of front-end/backend integrations.

    Skills:

    Distributed Systems: In-depth understanding of distributed architectures, data consistency, and system design.

    Database Performance Optimization: Expertise in database tuning, query optimization, and performance benchmarking.

    DynamoDB: Proficiency in DynamoDB, including handling large-scale read/write operations, hot partitions, and chunking strategies.

    Programming Languages: Strong skills in JavaScript, with additional experience in C++ or WASM preferred.

    Caching: Knowledge of LRU cache management, event listeners, and cache synchronization techniques.

    Concurrency Management: Ability to address concurrency and race condition issues in distributed environments.

    Data Resiliency: Experience in developing mechanisms to ensure data integrity and resiliency in the face of read/write failures.

    Problem-Solving: Strong analytical and troubleshooting skills to diagnose performance bottlenecks and implement effective solutions.

    Database Migration: Experience optimizing database migration processes, reducing startup times, and controlling migration granularity.

    Collaboration: Excellent communication and teamwork skills to work effectively with cross-functional teams.