Big data infrastructure internship | Adaltas

Job description

Big data and distributed computing are at the core of Adaltas. We support our partners in the deployment, maintenance, and optimization of some of the largest clusters in France. More recently, we have also been providing support for day-to-day operations.

As a strong advocate and active contributor to open source, we are at the forefront of the data platform initiative TDP (TOSIT Data Platform).

During this internship, you will contribute to the development of TDP, its industrialization, and the integration of new open source components and new features. You will be accompanied by the Alliage expert team in charge of TDP editor support.

You will also work with the Kubernetes ecosystem and the automation of Onyxia datalab deployments, which we want to make available to our customers as well as to students as part of our training modules (DevOps, big data, etc.).
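As an illustration only, here is a minimal Python sketch of what such an automation could look like, assuming the datalab is installed as a Helm chart on Kubernetes; the repository URL, release name, namespace, and values file are placeholders, not details taken from this posting:

```python
"""Minimal sketch of automating an Onyxia datalab deployment with Helm.

Assumptions (not from the posting): chart repository URL, release name
"onyxia", namespace "datalab", and the values file are placeholders.
"""
import subprocess


def helm(*args: str) -> None:
    # Run a helm command and fail loudly if it errors.
    subprocess.run(["helm", *args], check=True)


def deploy_onyxia(values_file: str = "values-datalab.yaml") -> None:
    # Register the chart repository (placeholder URL), then install or
    # upgrade the release idempotently so the script can run from CI.
    helm("repo", "add", "onyxia", "https://example.org/onyxia-charts")
    helm("repo", "update")
    helm(
        "upgrade", "--install", "onyxia", "onyxia/onyxia",
        "--namespace", "datalab", "--create-namespace",
        "--values", values_file,
    )


if __name__ == "__main__":
    deploy_onyxia()
```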

Your skills will help extend the services of Alliage's open source support offering. Supported open source components include TDP, Onyxia, ScyllaDB, … For those who would like to do some web work in addition to big data, we already have a quite functional intranet (ticket management, time management, advanced search, mentions and related articles, …), but other nice features are expected.

You will implement GitOps release chains and write articles.
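For context, a hedged sketch of what one step of such a release chain could look like, assuming releases are made by committing a manifest change to a configuration repository watched by a GitOps agent such as Argo CD or Flux; the repository location, file path, and image name below are hypothetical:

```python
"""Hedged GitOps release sketch: bump an image tag in a manifest and push it.

All paths, the repository location, and the image name are placeholders; a
GitOps agent is assumed to reconcile the repository after the push.
"""
import pathlib
import subprocess


def release(repo_dir: str, manifest: str, new_image: str) -> None:
    path = pathlib.Path(repo_dir) / manifest
    updated = []
    for line in path.read_text().splitlines():
        if line.strip().startswith("image:"):
            # Keep the original indentation, only swap the image reference.
            indent = line[: len(line) - len(line.lstrip())]
            updated.append(f"{indent}image: {new_image}")
        else:
            updated.append(line)
    path.write_text("\n".join(updated) + "\n")
    # Committing to the watched repository is the release: the agent applies it.
    subprocess.run(["git", "-C", repo_dir, "commit", "-am", f"release: {new_image}"], check=True)
    subprocess.run(["git", "-C", repo_dir, "push"], check=True)


# Example with placeholder values:
# release("/srv/config-repo", "apps/datalab/deployment.yaml",
#         "registry.example.org/datalab:1.4.2")
```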

You will work in a team with senior consultants as mentors.

Company presentation

Adaltas is a consulting company led by a team of open source experts specializing in data management. We deploy and operate storage and computing infrastructures in collaboration with our customers.

A partner of Cloudera and Databricks, we are also open source contributors. We invite you to browse our website and our many technical publications to learn more about the company.

Skills required and to be acquired

Automating the deployment of the Onyxia datalab requires knowledge of Kubernetes and cloud native technologies. You should be comfortable with the Kubernetes ecosystem, the Hadoop ecosystem, and the distributed computing model. You will master how the main components (HDFS, YARN, object storage, Kerberos, OAuth, etc.) work together to meet the uses of big data.
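As a hedged illustration of these components working together, the following Python sketch lists an HDFS directory through WebHDFS on a Kerberos-secured, TLS-enabled cluster; the host name, port, path, and CA bundle are placeholders, and it assumes the requests and requests-kerberos packages plus a valid Kerberos ticket (kinit):

```python
"""Minimal sketch: list an HDFS directory via WebHDFS with Kerberos over TLS.

Host name, port, path, and CA bundle are placeholders; requires `requests`,
`requests-kerberos`, and a valid ticket obtained with `kinit`.
"""
import requests
from requests_kerberos import HTTPKerberosAuth, OPTIONAL


def list_hdfs_dir(path: str) -> list[str]:
    # SPNEGO/Kerberos authentication against the secured NameNode.
    auth = HTTPKerberosAuth(mutual_authentication=OPTIONAL)
    url = f"https://namenode.example.org:9871/webhdfs/v1{path}"
    resp = requests.get(
        url,
        params={"op": "LISTSTATUS"},
        auth=auth,
        verify="/etc/ssl/certs/cluster-ca.pem",  # trust the cluster CA for TLS
        timeout=30,
    )
    resp.raise_for_status()
    statuses = resp.json()["FileStatuses"]["FileStatus"]
    return [entry["pathSuffix"] for entry in statuses]


# Example: print(list_hdfs_dir("/user/alice"))
```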

A good knowledge of Linux and the command line is essential.

During the internship, you will learn:

  • The Kubernetes/Hadoop ecosystem in order to contribute to the TDP project
  • Securing clusters with Kerberos and SSL/TLS certificates
  • High availability (HA) of services
  • The distribution of resources and workloads
  • Monitoring of services and hosted applications
  • Fault-tolerant Hadoop clusters with recovery of lost data on infrastructure failure
  • Infrastructure as Code (IaC) through DevOps tools such as Ansible and [Vagrant](/en/tag/hashicorp-vagrant/) (see the sketch after this list)
  • The architecture and operation of a data lakehouse
  • Code collaboration with Git, GitLab and GitHub
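As a hedged illustration of the IaC item above, this Python sketch brings up a local Vagrant lab and converges it with an Ansible playbook; the inventory and playbook paths are placeholders and the exact tooling used by TDP may differ:

```python
"""Minimal IaC sketch: provision a Vagrant lab and converge it with Ansible.

Inventory and playbook paths are placeholders, not part of the posting.
"""
import subprocess


def provision(inventory: str = "inventory/lab.ini",
              playbook: str = "playbooks/cluster.yml") -> None:
    # Bring the local VMs up (no-op if they are already running), then
    # converge them with Ansible so repeated runs stay idempotent.
    subprocess.run(["vagrant", "up"], check=True)
    subprocess.run(
        ["ansible-playbook", "-i", inventory, playbook, "--diff"],
        check=True,
    )


if __name__ == "__main__":
    provision()
```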

Responsibilities

  • Become familiar with the architecture and configuration techniques of the TDP distribution
  • Deploy and test secure and highly available TDP clusters
  • Contribute to the TDP knowledge base with troubleshooting guides, FAQs and articles
  • Actively contribute ideas and code to make iterative improvements to the TDP ecosystem
  • Research and compare the differences between the main Hadoop distributions
  • Update Adaltas Cloud using Nikita
  • Contribute to the development of a tool to collect customer logs and metrics on TDP and ScyllaDB
  • Actively contribute ideas to build our support solution

Additional information

  • Location: Boulogne-Billancourt, France
  • Languages: French or English
  • Starting date: March 2023
  • Duration: 6 months

Much of the digital world runs on open source software and the big data industry is booming. This internship is an opportunity to gain valuable experience in both domains. TDP is today the only truly open source Hadoop distribution, and its momentum is great. As part of the TDP team, you will have the chance to learn one of the core big data processing models and participate in the development and future roadmap of TDP. We believe this is an exciting opportunity and that, on completion of the internship, you will be ready for a successful career in big data.

Equipment available

A laptop with the following characteristics:

  • 32GB RAM
  • 1TB SSD
  • 8c/16t CPU

A cluster made up of:

  • 3x 28c/56t Intel Xeon Scalable Gold 6132
  • 3x 192GB RAM DDR4 ECC 2666MHz
  • 3x 14 SSD 480GB SATA Intel S4500 6Gbps

A Kubernetes cluster and a Hadoop cluster.

Remuneration

  • Salary 1200 € / month
  • Restaurant tickets
  • Transportation pass
  • Participation in one international conference

In the past, the conferences we have attended include KubeCon organized by the CNCF foundation, the Open Source Summit from the Linux Foundation, and FOSDEM.

For any request for additional information and to submit your application, please contact David Worms:
