CAREERS

Distributed Systems Engineer at Splice Machine
Spain
Splice Machine, a predictive AI platform startup, is looking for a Distributed Systems Engineer with deep experience working in highly complex distributed environments using Spark. Work from anywhere.
 
Splice Machine modernizes applications so companies don't have to rewrite them, making them data-rich and intelligent with distributed SQL and in-database machine learning. No matter where an enterprise is in its maturity on the AI curve, Splice Machine provides powerful starting points and a clear path to becoming an AI-powered company. The Splice Machine Operational AI Data Platform combines a SQL RDBMS, data warehouse, and ML platform in one, delivering better business outcomes faster.
 
At Splice Machine you'll work on solutions that matter in a culture that will inspire you to do your very best. Our distributed teams work with the latest technology and tools in an open and collaborative flexible-work environment. We offer competitive salaries, generous equity, and wellness coverage, as well as the opportunity to seize moments of inspiration from either your home office or shared workspace, anywhere in the world.
 
Splice Machine’s CEO and Co-Founder, Monte Zweben, is a serial entrepreneur in AI, selling his first company, Red Pepper, to PeopleSoft/Oracle for $225M and taking his second company, Blue Martini, through one of the largest IPOs of the early 2000s ($2.9B). He started Splice Machine to disrupt the $30 billion traditional database market with the first open-source dual-engine database and predictive platform to power Big Data, AI, and Machine Learning applications.
 
Splice Machine has recruited a team of legendary Big Data advisors, including Roger Bamford, “Father of Oracle RAC”; Michael Franklin, former Director of the AMPLab at UC Berkeley; Ken Rudin, Head of Growth and Analytics for Google Search; Andy Pavlo, Assistant Professor of Computer Science at Carnegie Mellon University; and Ray Lane, former COO of Oracle, to collaborate with the Splice Machine team as we blaze new trails in Big Data.
 
As a Distributed Systems Engineer on the Product Development team, you will build out Splice Machine's Spark compute engines while leveraging Splice Machine's ACID-compliant RDBMS for analytical, transactional, and mixed workloads. This team frequently works across the layers of the Splice Machine stack, building the fundamental infrastructure components and capabilities that everything else relies on, as well as new features like Splice Machine's Native Spark DataSource (sketched below). You will have the unique opportunity to work on a variety of open source and proprietary technologies that will significantly impact our product and business.
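
For a feel of this side of the product, here is a minimal Scala sketch of what the Native Spark DataSource looks like from an application's perspective. The SplicemachineContext API follows Splice Machine's public documentation; the JDBC URL, schema, and table names are placeholders.

    import org.apache.spark.sql.SparkSession
    import com.splicemachine.spark.splicemachine.SplicemachineContext

    object NativeDataSourceSketch extends App {
      // A Spark session must be active; SplicemachineContext attaches to it.
      val spark = SparkSession.builder.appName("splice-sketch").getOrCreate()

      // Placeholder JDBC URL for a local Splice Machine instance.
      val splice = new SplicemachineContext(
        "jdbc:splice://localhost:1527/splicedb;user=app;password=app")

      // Run SQL in the database and get the result back as a Spark DataFrame,
      // moving data between the engines without a JDBC serialization round trip.
      val df = splice.df("SELECT * FROM SPLICE.MY_TABLE")

      // Write a DataFrame back into a Splice Machine table transactionally.
      splice.insert(df, "SPLICE.MY_TABLE_COPY")

      spark.stop()
    }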

About You

    • You live and breathe distributed computing and parallel programming, constantly considering the impact of network bandwidth and data locality, and dreaming of ways to minimize data hops within and across servers.
    • You know that platform matters. Our distributed systems run on bare-metal clusters as well as on virtual servers in the cloud. As you design and implement, you consider the configuration.
    • You optimize other people’s code as well as your own, using a variety of techniques and programming languages. We work mostly in Java but sometimes in Scala or Python.
    • You understand the importance of reliability and resiliency when it comes to keeping data available at all times.
    • Data flow is your jam, whether it’s data flowing in from a backup or import command, or data streaming out from a large query. You can switch between handling many concurrent small I/Os and streaming large parallel queries, often at the same time.
    • In addition to having deep Apache Spark distributed systems experience, you’re familiar with the services provided by other open source technologies, such as Apache HBase, Apache Calcite, Orca, Apache Arrow, Presto, or Apache Parquet. You may not have used all of these before, but you understand their use cases and can decide how and when to use them.
    • You can make pragmatic engineering decisions in a short amount of time while ensuring that your work meets product stability, reliability, and maintainability standards.

About What You'll Work On

    • Building system components that manage and process petabyte-scale data sets, and developing components and subsystems for a multi-server, cloud-based platform built with Kubernetes.
    • Implementing installation and manageability solutions that collect and organize metrics for end users.
    • Expanding our security strategy using Apache Ranger, integrated with standard security practices.
    • Developing resource isolation building blocks to enable multi-tenancy in database components.
    • Designing and building a disaster recovery architecture for zero data loss with transactional integrity.
    • Creating a dual storage representation system using both row-based and columnar storage techniques.

Requirements

    • B.S./M.S./PhD in Computer Science or equivalent
    • Experience developing distributed, data-intensive commercial software products, ideally using Apache Spark    
    • Expertise in at least one programming language. We mainly use Java.
    • Strong concurrent programming experience
    • Database engine development experience is a plus

Splice Machine is proud to be an Equal Opportunity Employer building a diverse and inclusive workforce.