
Azure Batch: Task in Containers

In today’s post I’ll be talking about how to send tasks to Microsoft Azure Batch so that they run in containers. The task I want to solve in this example is calculating whether a number is prime or not. This Python code does the work for us. Then I’ve written a Dockerfile that adds the script to the image, so we’re able to run it from a Docker container.
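
The script itself isn’t reproduced here, but based on the output shown later in this post (argparse with a single number argument), it looks roughly like the following sketch; the exact code in the repository may differ:

# is_prime.py -- rough reconstruction of the script in the repo,
# based on the output shown later in this post
import argparse


def is_prime(n):
    """Trial-division primality test."""
    if n < 2:
        return False
    divisor = 2
    while divisor * divisor <= n:
        if n % divisor == 0:
            return False
        divisor += 1
    return True


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('number', type=int)
    args = parser.parse_args()

    if is_prime(args.number):
        print('The number: {} is prime'.format(args.number))
    else:
        print('The number: {} is not prime'.format(args.number))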

Moving on to Azure Batch: you need to create a Docker registry where you’ll push the Docker image, and an Azure Batch account.

Docker Registry

Log in to your Azure account and go to All resources, click Add and look for Container Registry. Then click Create. Fill in the name of the registry and the resource group, choose a location close to you, enable Admin user so you can push to the repo, and pick the SKU (Standard here).

Then click Create.

In a few seconds the registry will be ready. Go to the Dashboard and click on the registry name (the one you chose before), then open Settings -> Access keys. There you’ll find the credentials you need to manage the registry.

Batch Account

From All resources, look for Batch Service. Fill in the Account name and Location; Subscription and Resource group should already be set.

Click Review + create and then Create. In a few seconds the service should be ready.

Building the Container

Clone the repo

git clone https://github.com/nordri/microsoft-azure

and build and push the container

cd batch-containers
docker build -t YOUR_REGISTRY_SERVER/YOUR_REGISTRY_NAME/YOUR_IMAGE_NAME .
# for example:
docker build -t pythonrepo.azurecr.io/pythonrepo/is_prime:latest .
# Check the image works:
docker run -ti --rm pythonrepo.azurecr.io/pythonrepo/is_prime python is_prime.py 7856
The number: 7856 is not prime
docker run -ti --rm pythonrepo.azurecr.io/pythonrepo/is_prime python is_prime.py 2237
The number: 2237 is prime
# login first
docker login pythonrepo.azurecr.io
Username: pythonrepo
Password: 
# Push
docker push pythonrepo.azurecr.io/pythonrepo/is_prime

Azure Batch

Now it’s time to send the task to Azure Batch. To do this, I’ve built on this Python script, which creates a pool, a job and three tasks that upload files to Azure Storage. I’ve made some modifications to fit it to my needs.

Creating the Pool

I need the pool to be created with instances that are able to run containers:

...
def create_pool(batch_service_client, pool_id):
    print('Creating pool [{}]...'.format(pool_id))

    image_ref_to_use = batch.models.ImageReference(
        publisher='microsoft-azure-batch',
        offer='ubuntu-server-container',
        sku='16-04-lts',
        version='latest'
    )

    # Specify a container registry
    # We got the credentials from config.py
    containerRegistry = batchmodels.ContainerRegistry(
        user_name=config._REGISTRY_USER_NAME, 
        password=config._REGISTRY_PASSWORD, 
        registry_server=config._REGISTRY_SERVER
    )

    # The instance will pull the images defined here
    container_conf = batchmodels.ContainerConfiguration(
        container_image_names=[config._DOCKER_IMAGE],
        container_registries=[containerRegistry]
    )

    new_pool = batch.models.PoolAddParameter(
        id=pool_id,
        virtual_machine_configuration=batchmodels.VirtualMachineConfiguration(
            image_reference=image_ref_to_use,
            container_configuration=container_conf,
            node_agent_sku_id='batch.node.ubuntu 16.04'),
        vm_size='STANDARD_A2',
        target_dedicated_nodes=1
    )

    batch_service_client.pool.add(new_pool)
...

The key is the ImageReference, where we tell the instances to run an OS capable of running Docker. You must also set the registry credentials and the Docker image that will be pulled when the instance boots.
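
The job itself doesn’t need any container-specific settings, so the create_job function from the quickstart can stay essentially as it is; roughly something like this:

...
def create_job(batch_service_client, job_id, pool_id):
    print('Creating job [{}]...'.format(job_id))

    # The job just binds the tasks to the pool created above
    job = batch.models.JobAddParameter(
        id=job_id,
        pool_info=batch.models.PoolInformation(pool_id=pool_id)
    )

    batch_service_client.job.add(job)
...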

Creating the Task

I’ve also changed the task for the same reason: this one launches a container on the instance.

...
def add_tasks(batch_service_client, job_id, task_id, number_to_test):
    print('Adding tasks to job [{}]...'.format(job_id))

    # This is the user who runs the command inside the container:
    # an unprivileged one.
    user = batchmodels.AutoUserSpecification(
        scope=batchmodels.AutoUserScope.task,
        elevation_level=batchmodels.ElevationLevel.non_admin
    )

    # This is the docker image we want to run
    task_container_settings = batchmodels.TaskContainerSettings(
        image_name=config._DOCKER_IMAGE,
        container_run_options='--rm'
    )
    
    # The container needs this argument in order to be executed
    task = batchmodels.TaskAddParameter(
        id=task_id,
        command_line='python /is_prime.py ' + str(number_to_test),
        container_settings=task_container_settings,
        user_identity=batchmodels.UserIdentity(auto_user=user)
    )

    batch_service_client.task.add(job_id, task)
...

You can see how I’ve defined the user inside the container as a non-admin user, the Docker image we want to use, and the arguments we need to pass on the command line. Remember, we launch the container like this:

docker ... imagename python /is_prime.py number

Launching the Script

Configure

In order to launch the script we need to fill in some configuration. Open the config.py file and set all the credentials needed. Remember, all the credentials are in the Access keys section.
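
The registry variables used above (_REGISTRY_USER_NAME, _REGISTRY_PASSWORD, _REGISTRY_SERVER, _DOCKER_IMAGE) live there, alongside the usual Batch account credentials from the quickstart. A rough sketch of what config.py ends up looking like (the Batch variable names are the quickstart’s; adapt them if the repo differs):

# config.py (sketch; fill in with your own values)

# Batch account credentials (Keys section of the Batch account)
_BATCH_ACCOUNT_NAME = 'mybatchaccount'
_BATCH_ACCOUNT_KEY = '...'
_BATCH_ACCOUNT_URL = 'https://mybatchaccount.westeurope.batch.azure.com'

# Container registry credentials (Access keys section of the registry)
_REGISTRY_SERVER = 'pythonrepo.azurecr.io'
_REGISTRY_USER_NAME = 'pythonrepo'
_REGISTRY_PASSWORD = '...'

# Image pulled by the pool and run by the tasks
_DOCKER_IMAGE = 'pythonrepo.azurecr.io/pythonrepo/is_prime:latest'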

Installing Dependencies

You need the Azure Python SDK installed to run the script:

pip install -r requirements.txt

Let’s go

Now we’re ready to launch the script:

python batch_containers.py 89
Sample start: 2018-11-10 10:11:11

Creating pool [ContainersPool]...
No handlers could be found for logger "msrest.pipeline.requests"
Creating job [ContainersJob]...
Adding tasks to job [ContainersJob]...
Monitoring all tasks for 'Completed' state, timeout in 0:30:00.....................................................................................................................................................................
  Success! All tasks reached the 'Completed' state within the specified timeout period.
Printing task output...
Task: ContainersTask
Standard output:
The number: 89 is prime

Standard error:


Sample end: 2018-11-10 10:14:31
Elapsed time: 0:03:20

Delete job? [Y/n] y
Delete pool? [Y/n] y

Press ENTER to exit...

If there’s a problem with the script, we’ll see the error in stderr.txt:

Sample start: 2018-11-10 11:29:56

Creating pool [ContainersPool]...
No handlers could be found for logger "msrest.pipeline.requests"
Creating job [ContainersJob]...
Adding tasks to job [ContainersJob]...
Monitoring all tasks for 'Completed' state, timeout in 0:30:00..................................................................................................................................................................
  Success! All tasks reached the 'Completed' state within the specified timeout period.
Printing task output...
Task: ContainersTask
Standard output:

Standard error:
usage: is_prime.py [-h] number
is_prime.py: error: argument number: invalid int value: 'o'


Sample end: 2018-11-10 11:33:10
Elapsed time: 0:03:14

Delete job? [Y/n] y
Delete pool? [Y/n] y

Press ENTER to exit...
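
For reference, those stdout.txt and stderr.txt contents are fetched through the Batch file API. In the quickstart, the helper that prints them looks roughly like this (the repo’s version may differ slightly):

...
def print_task_output(batch_service_client, job_id):
    # Iterate over every task in the job and dump its stdout.txt
    tasks = batch_service_client.task.list(job_id)

    for task in tasks:
        stream = batch_service_client.file.get_from_task(
            job_id, task.id, 'stdout.txt')

        # The call returns a stream of byte chunks
        file_text = ''.join(chunk.decode('utf-8') for chunk in stream)

        print('Task: {}'.format(task.id))
        print('Standard output:')
        print(file_text)
...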

Remember to delete the resources at the end so that they don’t incur costs.

References

batch-python-quickstart
Run container applications on Azure Batch

Kubernetes Pipeline

Let’s go through an easy way to build a continuous integration (CI) pipeline on Minikube.

Launch Minikube

If you don’t have Minikube running on your system, start it with:

$ minikube start --memory 4000 --cpus 2

Wait a few minutes and you’ll see something like this:

Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.

Installing Helm

Helm is “The Kubernetes Package Manager”; it helps you deploy services on Kubernetes.

$ wget https://storage.googleapis.com/kubernetes-helm/helm-v2.10.0-linux-amd64.tar.gz -O helm.tar.gz
$ tar zxf helm.tar.gz
$ sudo cp linux-amd64/helm /usr/local/bin/helm
$ sudo chmod +x /usr/local/bin/helm

Applying the RBAC policy

$ kubectl create -f https://raw.githubusercontent.com/nordri/kubernetes-experiments/master/Pipeline/ServiceAccount.yaml

and then initialize Helm:

$ helm init --service-account tiller

Checking

$ helm version
Client: &version.Version{SemVer:"v2.10.0", GitCommit:"9ad53aac42165a5fadc6c87be0dea6b115f93090", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.10.0", GitCommit:"9ad53aac42165a5fadc6c87be0dea6b115f93090", GitTreeState:"clean"}

Deploying Jenkins

I’m using a custom values file for this chart. What I’m adjusting is:

AdminPassword: set to admin1234
ServiceType: set to NodePort (because this is Minikube)
In plugins:
– kubernetes:1.2
– workflow-aggregator:2.5
– workflow-job:2.17
– credentials-binding:1.15
– git:3.7.0

And then the deployment:

$ helm install --name jenkins -f jenkins-helm-values.yaml stable/jenkins

After a few minutes we’ll be able to access Jenkins with:

$ minikube service jenkins

Configuring Jenkins

First, set up the credentials to access Docker Hub, where we’ll push the Docker images. The only field you must keep as-is is the ID, because the pipeline needs it in a later step. Fill in the rest with your information.

Back on the Jenkins main screen, add a new item of type Pipeline.

And finally, configure the pipeline in the Pipeline section.

Save the changes and click Build now.

And that’s it!

The pipeline

Let’s dig into the pipeline.

The head

The pipeline starts by setting the worker ID, so the pod has a different label on each execution.

Then comes the pod definition, where we declare the containers that will run inside the pod. For this example we’ll need:

  1. maven
  2. docker
  3. mysql, this one with environment variables
  4. java, also with environment variables

Then the volumes: we need the Docker socket in order to run Docker in Docker, and a folder to keep the artefacts downloaded from the Internet (it’s a Maven project!) between executions, saving time and bandwidth.

Cloning the repo…

What we do here is clean the workspace and clone the repository. It’s a Spring Boot application with MySQL.

Building…

We build the package using the Maven container.

Testing…

In this stage we launch our app inside the Java container and, after 30 seconds, we check whether it’s online: a simple smoke test. We save the return value in RES to decide whether it’s OK or not; if not, the stage finishes with a failure. As we defined all the containers at the beginning, there’s a MySQL instance already running inside the pod.
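
The actual check in the pipeline is a shell step, but the logic boils down to something like this standalone sketch (the URL and port are assumptions for a default Spring Boot app):

# smoke_test.py -- standalone sketch of the Testing stage logic
import sys
import time

try:
    from urllib.request import urlopen   # Python 3
except ImportError:
    from urllib2 import urlopen          # Python 2

# Give the Spring Boot app some time to boot
time.sleep(30)

try:
    status = urlopen('http://localhost:8080', timeout=5).getcode()
except Exception:
    status = None

if status == 200:
    print('App is up')
    sys.exit(0)

print('Smoke test failed (HTTP status: {})'.format(status))
sys.exit(1)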

Building & Uploading Docker images…

If the testing stage went OK, we can push the image to Docker Hub. To set the tag we use the commit ID cut down to eight characters. To log in to Docker Hub we use withCredentials, which takes a credential by its ID and fills the environment variables.

References

Set Up a Jenkins CI/CD Pipeline with Kubernetes

Repository

GitHub

Elastest & Amazon Dash Button

Here at Universidad Rey Juan Carlos, where we’re working hard to deploy Elastest, we need to run instances on AWS all the time. Of course, we have defined CloudFormation templates to do the work, but we can go further with just the push of a button (a physical one, indeed): every time a team member needs to test something on Elastest, they push the button!

Let’s see how it works

First of all, you need an Amazon Dash Button. I chose Optimum Nutrition, a sports nutrition brand; don’t ask why.

To configure the device, we follow the steps in Amazon’s guide until we have to choose a product. We skip that step because we don’t want to order products each time we push the button. Now we have the device attached to our network, and we need to find out its MAC address.

We’re going to use the Python project Amazon Dash for that. First, we install it:

$ sudo pip install amazon-dash  # and after:
$ sudo python -m amazon_dash.install

Scan the network to find the device:

$ sudo amazon-dash discovery

Launch the command, wait about 10 seconds, then push the button and you’ll see the MAC address. Copy it for the next step.

Amazon Dash runs as a systemd service (which means you must restart it on each configuration change) and it has a YAML configuration file (/etc/amazon-dash.yml). For our needs it’s quite simple and should look like this:

# amazon-dash.yml
# ---------------
settings:
  # On seconds. Minimum time that must pass between pulsations.
  delay: 10
devices:
  fc:65:de:1b:2e:8a:
    name: Elastest-dashbutton
    user: nordri
    cmd: PATH/TO/deploy-elastest.sh

For each device we have, we can define a stanza with a command to run, and the user the command will run as. As you can imagine, the script is an AWS CLI call to CloudFormation, like this one:

#!/bin/bash

DATE_SUFFIX=$(date +%s)

aws cloudformation create-stack \
  --stack-name Elastest-dashbutton-$DATE_SUFFIX \
  --template-url https://s3-eu-west-1.amazonaws.com/aws.elastest.io/cloud-formation-latest.json \
  --parameters '[{ "ParameterKey": "KeyName", "ParameterValue": "kms-aws-share-key" }]' \
  --profile naeva

Feel free to use the script on its own. Just remember to change the RSA key you use to access your instances on AWS and the auth profile.

That’s it: from now on, every push will result in an Elastest instance running on AWS.

Here, one could say

OK, mate, that’s pretty cool, but deploying Elastest takes like 400 seconds, and pushing a button won’t tell me when the instance is ready, nor where I should connect to use the platform.

Fair enough! Let’s configure the notifications.

Amazon Dash has confirmations to report that the button was pushed, but as our friend said before, we should also notify when the instance is ready. To do that, we’re using a Telegram bot: it’s quite simple to ask Telegram’s @BotFather for an auth token and start using it. So our script to deploy Elastest now looks like this:

#!/bin/bash

DATE_SUFFIX=$(date +%s)

# Telegram data
USERID="...."
KEY="...."
TIMEOUT="10"
URL="https://api.telegram.org/bot$KEY/sendMessage"

aws cloudformation create-stack \
  --stack-name Elastest-dashbutton-$DATE_SUFFIX \
  --template-url https://s3-eu-west-1.amazonaws.com/aws.elastest.io/cloud-formation-latest.json \
  --parameters '[{ "ParameterKey": "KeyName", "ParameterValue": "kms-aws-share-key" }]' \
  --profile naeva

aws cloudformation wait stack-create-complete --stack-name Elastest-dashbutton-$DATE_SUFFIX --profile naeva
ELASTEST_URL=$(aws cloudformation describe-stacks --stack-name Elastest-dashbutton-$DATE_SUFFIX --profile naeva | jq --raw-output '.Stacks[0] | .Outputs[0] | .OutputValue')

TEXT="New Elastest deployed on $ELASTEST_URL"
curl -s --max-time $TIMEOUT -d "chat_id=$USERID&disable_web_page_preview=1&text=$TEXT" $URL > /dev/null

And let’s also configure the confirmation, so we’ll know when the button is pushed. To do so, see this Amazon Dash configuration file:

devices:
  fc:65:de:1b:2e:8a:
    name: Elastest-dashbutton
    user: nordri
    cmd: PATH/TO/deploy-elastest.sh
    confirmation: Telegram
confirmations:
  Telegram:
    service: telegram
    token: '...'
    to: ...
    is_default: true

Now, every time one of the team members needs an AWS instance to test Elastest, we’ll receive a notification at the beginning and another as soon as it’s ready.

The Dash Button cost 5€ and kept me busy for a day; not a bad investment at all 😉

Kubernetes Network Policy

In this post, I will show how you can isolate services within the same namespace in Kubernetes.

Why would you want to do that? Think of it as testing your app behind a firewall.

To achieve this, Kubernetes provides NetworkPolicy, which allows us to control who can connect to a service. So the first step will be to deny all traffic in our namespace:

---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
  - Ingress

Now, no service you deploy will be reachable from within the cluster.

Let’s now deploy an Apache server.

---
kind: Service
apiVersion: v1
metadata:
  name: apache1
  labels:
    app: web1
spec:
  selector:
    app: web1
  ports:
  - protocol: TCP
    port: 80
    name: http
---
apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta2
kind: Deployment
metadata:
  name: apache1
  labels:
    app: web1
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: web1
  replicas: 1 
  template: 
    metadata:
      labels:
        app: web1
    spec:
      containers:
      - name: apache-container
        image: httpd:2.4
        ports:
        - containerPort: 80

And a simple container to test the connection:

---
apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: ubuntu1
  labels:
    allow-access: "true"
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      allow-access: "true"
  replicas: 1 
  template: 
    metadata:
      labels:
        allow-access: "true"
    spec:
      containers:
      - name: nordri-container
        image: nordri/nordri-dev-tools
        command: ["/bin/sleep"]
        args: ["3600"]

Here you can see the label allow-access: “true”; we’ll use it to grant access to the service.

And, finally, the Network Policy.

---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: net1
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web1
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          allow-access: "true"
    ports:
    - protocol: TCP
      port: 80

As you can see, we protect the web1 Service by allowing access only from those pods with the label allow-access: “true”.

At this point you have something like this:

NAME                         READY     STATUS    RESTARTS   AGE
po/apache1-5565b647c-dt6kk   1/1       Running   0          38s
po/ubuntu1-c8cffc57b-q62tw   1/1       Running   0          38s

NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
svc/apache1      ClusterIP   10.111.18.130   <none>        80/TCP    39s
svc/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   7m

NAME             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/apache1   1         1         1            0           38s
deploy/ubuntu1   1         1         1            0           38s

NAME                   POD-SELECTOR   AGE
netpol/default-deny    <none>         59s
netpol/deny-metadata   <none>         6m
netpol/net1            app=web1       38s

And you can test from the container:

kubectl exec ubuntu1-c8cffc57b-q62tw -- curl apache1
<html><body><h1>It works!</h1></body></html>

But from any other pod:

kubectl run nordri-dev-tools --rm -ti --image=nordri/nordri-dev-tools /bin/bash
If you don't see a command prompt, try pressing enter.
root@nordri-dev-tools-766cc58546-kmhbf:/# 
root@nordri-dev-tools-766cc58546-kmhbf:/# curl apache1
curl: (7) Failed to connect to apache1 port 80: Connection timed out
root@nordri-dev-tools-766cc58546-kmhbf:/# 

Cool, isn’t it?

Let’s do something more complex and interesting. Check out this scenario:

What we want is:

Master  -> apache1 OK
Master  -> apache2 OK
Ubuntu1 -> apache1 OK
Ubuntu2 -> apache2 OK
Ubuntu1 -> apache2 Time out
Ubuntu2 -> apache1 Time out

So, clone this GitHub repository

git clone https://github.com/nordri/kubernetes-experiments

And run…

cd NetworkPolicy
./up.sh

This script will deploy all the components of our experiment. Now, there’s something like this in your Kubernetes:

NAME                          READY     STATUS    RESTARTS   AGE
po/apache1-5565b647c-fxcpc    1/1       Running   0          1m
po/apache2-587775d7bd-pjg98   1/1       Running   0          1m
po/master-6df9c89f5b-pqtl2    1/1       Running   0          1m
po/ubuntu1-588f5bdbfc-qkh8t   1/1       Running   0          1m
po/ubuntu2-f689cc5d-ncbzh     1/1       Running   0          1m

NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
svc/apache1      ClusterIP   10.110.39.59    <none>        80/TCP    1m
svc/apache2      ClusterIP   10.109.61.240   <none>        80/TCP    1m
svc/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   23m

NAME             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/apache1   1         1         1            1           1m
deploy/apache2   1         1         1            1           1m
deploy/master    1         1         1            1           1m
deploy/ubuntu1   1         1         1            1           1m
deploy/ubuntu2   1         1         1            1           1m

NAME                   POD-SELECTOR   AGE
netpol/default-deny    <none>         1m
netpol/deny-metadata   <none>         22m
netpol/etm-net2        app=web2       1m
netpol/net1            app=web1       1m

And we can check that apache1 and apache2 can only be reached from master and from the container sharing the corresponding Network Policy label.
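
A quick way to verify the whole matrix is to run the same curl test from each ubuntu pod against each service. A rough sketch (pass the two pod names from the output above as arguments; pod names change on every deployment):

# check_matrix.py -- rough sketch, run it from your workstation
# Usage: python check_matrix.py <ubuntu1-pod-name> <ubuntu2-pod-name>
import subprocess
import sys

ubuntu1, ubuntu2 = sys.argv[1], sys.argv[2]


def can_reach(pod, service):
    # curl with a 5-second timeout; a non-zero exit code means unreachable
    result = subprocess.call(
        ['kubectl', 'exec', pod, '--',
         'curl', '-s', '-m', '5', '-o', '/dev/null', service])
    return result == 0


EXPECTED = [
    (ubuntu1, 'apache1', True),
    (ubuntu2, 'apache2', True),
    (ubuntu1, 'apache2', False),
    (ubuntu2, 'apache1', False),
]

for pod, service, expected in EXPECTED:
    reachable = can_reach(pod, service)
    verdict = 'OK' if reachable == expected else 'UNEXPECTED'
    print('{} -> {}: {} ({})'.format(
        pod, service, 'reachable' if reachable else 'timed out', verdict))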

To clean everything up, just run:

cd NetworkPolicy
./down.sh


Launching Jenkins Workers on AWS

In this post we’ll see how easy it is to configure Jenkins to spin up AWS EC2 instances for the CI environment.

1. Create the AMI with Packer

We already have some nodes in the CI environment that are provisioned with Ansible. We use Packer together with the Ansible playbook to build an image with everything the Jenkins jobs need.

The JSON file for Packer:

{
  "variables": {
    "aws_access_key": "",
    "aws_secret_key": ""
  },
  "builders": [{
    "type": "amazon-ebs",
    "access_key": "{{user `aws_access_key`}}",
    "secret_key": "{{user `aws_secret_key`}}",
    "region": "eu-west-1",
    "source_ami": "ami-a8d2d7ce",
    "instance_type": "t2.small",
    "ssh_username": "ubuntu",
    "ami_name": "codeurjc-jenkins-worker {{ timestamp }}"
  }],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "sudo apt-get update",
        "sudo apt-get install -y python"
      ]
    },
    {
      "type": "ansible",
      "playbook_file": "./playbook.yml",
      "user": "ubuntu"
    }]
}

Then build the image:

$ packer build -var 'aws_access_key=ACCESS_KEY' -var 'aws_secret_key=SECRET_KEY' packer-jenkins-worker.json

2. Security configuration

First, we have to create a user to use the API. Go to the AWS console and open IAM; under Users, click Add user, give it a name (for example, jenkins) and under Access type choose Programmatic access. In the next step we grant the user permissions to manage the EC2 service, creating a specific group for that if we don’t have one already. Review and create the user; we’ll then see the access keys. Copy them and store them somewhere safe.

Now let’s create a security group for the instances Jenkins will launch. We have to allow SSH access to the instance, so go to the EC2 console, Security Groups section, create a new one (if it doesn’t already exist) and allow SSH access.

Finally, we have to import the public key that will allow Jenkins to configure the nodes we deploy on AWS. Go to the EC2 console, Key Pairs, and click Import Key Pair; in the dialog, paste the public key we use on the nodes of our own infrastructure and give it a name, for example Jenkins.

3. Jenkins plugin

In Jenkins we need this plugin.

The configuration is simple: go to Manage Jenkins -> Cloud -> New cloud.

Give it a descriptive name.

Add the credentials we created in step 2.

Select the region, in my case eu-west-1.

For the key, since we already have workers in our own infrastructure, we paste the private key that Jenkins uses.

We can now test that the connection to the AWS API works.

Next, the AMI configuration.

Again, a descriptive name.

The ID of the AMI we created with Packer earlier.

The instance type; since we have a heavy load we use T2Large.

Again, the availability zone, eu-west-1a.

Enter the security group we created in AWS so that we can reach the instance over SSH.

For the remote user, we use jenkins.

The AMI type is obviously UNIX; this matters because the connection is made differently depending on the type.

Add the label used to assign jobs to the node.

For usage, set it to only run jobs that match this node.

In Idle termination time we set how many minutes the instance has to sit idle before it’s shut down (stop or terminate).

In Init script we can add any instructions the instance needs at boot time; for example, we could provision the instance by cloning the Ansible repo and running it against the local host. In our case the AMI is already complete, so we don’t need anything here.

Expand more options by clicking Advanced. The options that interest us most are:

Override temporary dir location lets us set a temporary directory where the slave.jar used to run the Jenkins agent is copied.

User data, to pass information to the instance.

Number of Executors depends on the instance type we chose; in our case, 8.

Stop/Disconnect on Idle Timeout: with this option the instance is stopped rather than terminated, which with EBS disks can add extra charges to the bill.

Subnet ID for VPC, to set the network the instance should be connected to.

Instance Cap, the limit on instances launched in AWS; the more instances, the bigger the bill.

Block device mapping: we choose a custom EBS volume because the default 8 GB one is too small. The line ends up like this:

/dev/sda1=:120:true:::

This maps the root disk to a 120 GB device that is created when the instance launches and deleted when the instance is terminated.

We can keep the volume by setting false instead. If we already have a provisioned disk, we can put its name between the equals sign and the colon.

4. Configure the job

Now all that’s left is to configure a job with the label we chose for the AMIs: open the configuration of the job in question and put the label in the Restrict where this project can be run field.