CAREERS

Java Engineer at Splice Machine
San Francisco, CA, US / St. Louis, MO, US
Named one of the 20 Red Hot, Pre-IPO Companies in B2B Tech for 2016 by IDG, Splice Machine is disrupting the $30 billion traditional database market with the first open-source smart database that simultaneously powers OLAP and OLTP workloads.

As companies around the globe embrace artificial intelligence and machine learning applications, they are turning to Splice Machine's transactional and analytical smart database to power their big data. Splice Machine's lambda-architecture-in-a-box (an open-source RDBMS powered by HBase and Spark) ingests high volumes of data at velocity while simultaneously performing analytics and serving concurrent queries.

As a Java Engineer on the Product Development team, you will build out Splice Machine's Hadoop- and Spark-based Relational Database Management System. You will focus on strengthening Splice Machine's RDBMS, leveraging its ACID compliance to support analytical, transactional, and mixed workloads.

This team frequently works across different layers of the Splice Machine stack, building the fundamental infrastructure components and capabilities that everything else relies on. You will have the unique opportunity to work on a variety of open-source and proprietary technologies that significantly impact our product and business.

About you

You have expert knowledge of distributed computing, parallel programming, concurrency control, transaction processing and databases.
You optimize and refactor other people's code, as well as your own, in Java.
You make pragmatic engineering decisions in a short amount of time while ensuring your work promotes product stability, reliability, and maintainability.
You build systems that manage and process large data sets distributed across multi-server, cloud-based environments, from inception to execution.
You use, or are at least familiar with, open-source technologies that solve big data problems, such as Apache HBase, Apache Spark, Apache Calcite, Orca, Apache Arrow, Presto, Apache Parquet, Vertica, or Apache Kudu.
About What You’ll Work On

Design and build a disaster recovery architecture for zero data loss with transactional integrity.
Create a dual storage representation that combines row-based and columnar techniques, using Apache Calcite or Orca.
Lead our query optimization team, applying state-of-the-art sketching algorithms to produce the most efficient query plans.
The opportunity to go beyond your normal duties: work on our blog, attend hackathons and conferences, speak at events, contribute to Stack Overflow and open-source development, and pursue anything else you're interested in that adds to our community.
Requirements:

B.S./M.S. in Computer Science or equivalent
Experience developing commercial products
Expertise in Java and SQL
Strong concurrent programming experience
Hadoop ecosystem programming experience highly desirable, especially Apache Spark, Apache HBase, or Apache Arrow
Database development experience highly desirable
Splice Machine is headquartered in downtown San Francisco, convenient to all public transportation. Our people enjoy access to the best tools available, an open and collaborative work environment, and a supportive culture that inspires them to do their very best. We offer great salaries, generous equity, employee health coverage, flexible time off, delicious catered meals, and an environment that gives you the flexibility to seize moments of inspiration, among other perks.

More about Splice Machine:

Co-Founder and CEO Monte Zweben is recognized for his success as a serial entrepreneur, including a $2.9B IPO for Blue Martini and a $225M sale of Red Pepper Software to PeopleSoft.
An advisory board representing some of the most experienced minds in databases and Big Data: Roger Bamford, the father of Oracle RAC; Michael J. Franklin, Director of the AMPLab at UC Berkeley; Abhinav Gupta, Co-Founder and VP of Engineering at Rocket Fuel; Marie-Anne Neimat, Co-Founder of TimesTen; and Ken Rudin, Head of Growth and Analytics for Google Search.
A robust partner ecosystem and a growing development community that best meet the needs of our customers, with partners including Cloudera, Hortonworks, MapR, AWS, Redpoint, Tableau, and LucidWorks.
We encourage you to learn more about working here!