HBase / Spark Software Developer at Splice Machine
San Francisco, CA, US / St. Louis, MO, US
Named one of the 20 Red Hot, Pre-IPO Companies in 2016 B2B Tech by IDG, Splice Machine is disrupting the $30 billion traditional database market with the first dual-engine database on Hadoop and Spark.

Leveraging in-memory technology from Spark and the scale-out capabilities of Hadoop, Splice Machine can replace Oracle® and MySQL™ databases while increasing performance by 10-20 times at one-fourth the cost. We are headquartered in the South of Market (SOMA) neighborhood of San Francisco.

As a Database Kernel Engineer, you'll focus on database internals, including the code generator, optimizer, executor, indexing, statistics, and transactions. You will be a key contributor to building Splice Machine's Hadoop-based relational database management system. Splice Machine's RDBMS is ACID-compliant and supports analytical, transactional, and mixed workloads.

Responsibilities:
Design and develop key features for the RDBMS, ensuring your work is performant and scalable in a concurrent multi-node Hadoop execution environment
Focus specifically on key database internal technologies: plan generation, use and management of statistics, join order and strategy selection, index selection, etc.
Work with Splice Machine’s CTO to set strategic direction for the database kernel
Collaborate with other engineers through the product lifecycle, including architecture, product support issues, beta testing, bug fixes, etc.
Ensure your work promotes product stability, reliability, and maintainability

Requirements:
B.S./M.S./Ph.D. in Computer Science or equivalent
7+ years developing the internals of a commercially available or open source database, with deep knowledge of the mechanisms for query optimization
Strong knowledge of today’s competing databases and the technologies that differentiate them
Experience with, or strong familiarity with, both analytical and transactional database architectures, ACID semantics, etc.
10+ years developing commercial products
5+ years developing Java products
Strong concurrent programming experience
Hadoop ecosystem programming experience highly desirable, especially Apache Spark and Apache HBase