CAREERS

Database Kernel Engineer at Splice Machine
San Francisco, CA, US / St. Louis, MO, US
Use your database internals programming experience to maximum effect by working on Splice Machine’s ground-breaking Hadoop-based relational database.

Company Description

Named one of the 20 Red Hot, Pre-IPO Companies in 2015 B2B Tech by IDG, Splice Machine is disrupting the $30 billion traditional database market with the first dual-engine database on Hadoop and Spark. Leveraging in-memory technology from Spark and scale-out capabilities from Hadoop, Splice Machine can replace Oracle® and MySQL™ databases while increasing performance by 10-20 times at one-fourth the cost. We are headquartered in the South of Market (SOMA) neighborhood of San Francisco.

Job Description

As a member of the Product Development team focused on database internals (code generator, optimizer, executor, indexing, statistics, transactions, etc.), you will help build out Splice Machine's Hadoop-based Relational Database Management System. Splice Machine's RDBMS is ACID-compliant and supports analytical, transactional, and mixed workloads.

Responsibilities:

Design and develop key features for the RDBMS, ensuring your work is performant and scalable in a concurrent, multi-node Hadoop execution environment
Focus specifically on key database internal technologies: plan generation, use and management of statistics, join order and strategy selection, index selection, etc.
Work with Splice Machine’s CTO to set strategic direction for the database kernel
Collaborate with other engineers throughout the product lifecycle, including architecture, product support issues, beta testing, bug fixes, etc.
Ensure your work promotes product stability, reliability, and maintainability

Qualifications:

B.S./M.S./Ph.D. in Computer Science or equivalent
7+ years developing the internals of a commercially available or open source database, with deep knowledge of the mechanisms for query optimization
Strong knowledge of today’s competing databases and the technologies that differentiate them
Experience with, or strong familiarity with, both analytical and transactional database architectures, ACID semantics, etc.
10+ years developing commercial products
5+ years developing Java products
Strong concurrent programming experience
Hadoop ecosystem programming experience highly desirable, especially Apache Spark and Apache HBase