DevOps Engineer

Job description

Running a flexible Machine Learning engine at scale is hard. We must ingest and process large volumes of data without interruption and store it in a scalable manner. The data needs to be prepared and served to hundreds of models continuously. All model predictions, as well as the output of other data pipelines, must be stored and reachable so that our web applications can present the generated insights to our customers.
We work on the system that delivers this functionality, allowing the Machine Learning engineers to deliver new and improved models with ease, manage existing models, and monitor them, among other things crucial to day-to-day operations.
You will work with a wide array of technologies that constitute Jungle's core systems (data handling/processing, serving ML models, etc.) and manage and develop the infrastructure needed to run our workloads. You will have the opportunity to work with multiple cloud vendors as well as open-source technologies to help keep Jungle looking like a well-groomed garden.

Who we are

Jungle develops and applies Artificial Intelligence to increase the uptime and performance of renewable energy sources. Built on existing sensors and data streams, the company’s technology enables solar and wind energy owners to squeeze more out of their assets, accelerating the world’s transition to renewable energy sources.

We have productised our services into a web application and are continuously improving it to ensure that our best analyses and visualisations help our users get the maximum energy out of their assets. We operate at a large scale - millions of data points per day - providing always-on predictive models, alarms and metrics visualisations for some of the largest and most sophisticated customers in the global renewable energy space. This is not your average dashboard: we’re talking about intelligently handling large quantities of data to drive performant visualisations and functionality.

Why do we need you?

  • You’ll be working with/on a set of technologies that support our complex Machine Learning pipeline system and improve usability, performance and robustness of our internal system.
  • Our current stack runs on Kubernetes on top of AWS and we need someone to help us integrate more and more functionality into our stack.
  • You’ll work together with the engineering team to maintain and improve existing systems, build new ones, and overcome difficulties that arise as we scale our systems to handle more and more data. Some examples:
    • Contribute to improving the production infrastructure that efficiently serves large amounts of data to our product, as well as the on-premises development infrastructure where our services and models are built.
    • Help us improve our CI/CD pipelines for our tools and services.
    • Make use of modern open-source technologies to improve usability, performance and robustness of our tools and services.
    • Improve the observability of our systems to make sure they are running smoothly, and that the team is notified as early as possible when they aren't.
    • Make architectural decisions on how to solve our engineering challenges and keep us future-proof.

Why work with us?

  • Join a funded start-up.
  • Work with modern technologies (both in ML and software engineering).
  • You'll work on the last-mile delivery of our products, ensuring they have the greatest impact for our customers.
  • You have the opportunity to use your skills to create a meaningful change in this world.
  • Become part of a warm and skilled group of people, committed to each other's success.
  • We care about your growth and assign you a personal mentor to help you achieve this.
  • As a remote-first company, we offer you a flexible work schedule, holiday policy, and work location.

Requirements

  • At least 3 years' professional experience as a Linux sysadmin, wielding your shell as a weapon that strikes fear into GUI users.
  • Demonstrable experience (2+ years) in a DevOps position, using code to build and maintain large, complex and scalable system infrastructures.
  • At least 2 years' professional experience working with cloud service providers such as AWS, Azure or GCP.
  • Experience with designing, developing, implementing and maintaining CI/CD pipelines (Jenkins, GitLab CI, GitHub Actions, etc).
  • You have experience working with containerization frameworks and tools (Docker, Podman, rkt, etc).
  • Proficiency in at least one scripting language (Python, Bash, Golang).

Preferred Requirements (extra)

  • (Preferred) You have hands-on experience with monitoring and logging systems (Prometheus, Grafana, Zabbix, Fluent Bit).
  • (Preferred) You have professional experience in database administration (backups, restores, access rights) with either RDBMSs (PostgreSQL, MySQL) or non-relational databases (Cassandra, MongoDB, Redis).
  • (Preferred) Your knowledge of networking goes beyond the basics, understanding routing, proxies, VPNs, NAT, TLS/SSL, service meshes and other more complex networking concepts.
  • (Preferred) You have experience with configuration management tools like Ansible, Chef, Puppet, Saltstack, or similar.
  • (Bonus) Knowledge in working with RESTful APIs, stream-based and general service and event/message-queue oriented architectures.
  • (Bonus) You have experience with Kubernetes (EKS, GKE, AKS).
  • (Bonus) You have experience using Terraform.
  • (Bonus) You have experience with Proxmox.
  • (Bonus) You understand what a Machine Learning model is and have experience with MLOps.

About you

  • You work meticulously. People around you trust your results, and rightly so.
  • You're pragmatic; you know when to trade a deep dive for a quick fix.
  • You’re eager to learn new technologies and expand your horizon.