
art.yaml

Overview

When a deployment is initiated, two configuration files are automatically generated:

  • .art/art.yaml
  • .github/workflows/argonaut-<envname>-<appname>-deploy.yaml

This section provides a breakdown of the .art/art.yaml file and explains how it works.

File breakdown

The .art/art.yaml file can be split into two major parts:

  • General configuration.
  • Advanced configuration.

General configuration

```yaml
---
version: "v1"
appName: "battlemeister"
image: "053791121117.dkr.ecr.us-east-1.amazonaws.com/argonaut/battlemeister"
imageTag: "latest"

services:
  - port: 80
    protocol: "tls-terminated" # tls-passthrough, tls-terminated, tcp, http, grpc need to be supported
    external:
      hosts:
        - "battleship.violet.argonaut.live"
      hostPort: 443
      paths: ["/"]

argonaut:
  env: violet
  region: us-east-1
  cluster: violet
  imageRegistry: ecr # corresponding to the image that is to be deployed
  serviceType: "stateless" # One of {stateful, stateless, managed}
  repoName: battleships
  persistentStorage: []

replicas: 1
minReplicas: 1
maxReplicas: 1
resources:
  requests:
    cpu: "500m"
    memory: "512M"
  limits:
    cpu: "1000m"
    memory: "1500M"
```
  • The first section of this part of the file contains details about the Docker image generated from the deployment: the config version, the app name, the image URI, and the image tag.

  • The services object holds some of your app service configuration provided at the point of deployment.

  • The hosts key holds the live link to your app (i.e. to view your deployed app, copy and paste the link into your browser).

  • The argonaut object contains details of your app environment and service.

If you specified that your app needs persistent storage, the storage details go into this array. For example:

```yaml
capacity:
  storage: 10Gi
accessModes:
  - ReadWriteOnce
hostPath:
  path: "/mnt/data"
```

Capacity can be specified in M, Mi, G, or Gi units. The hostPath specifies that the volume lives at /mnt/data on the cluster's node, and the access mode ReadWriteOnce means the volume can be mounted as read-write by a single node.

  • A ReplicaSet maintains the specified number of pod instances running in a cluster, preventing users from losing access to the application when a pod fails or becomes inaccessible.

If your application expects a high volume of traffic or performs critical tasks, it is advisable to run more than two replicas. Specify the number of replicas (min and max) according to your app's needs.

  • The resources object specifies the resource requests and limits for your application.

Advanced configuration

```yaml
#########################################################################################
#           Everything below this is optional and advanced configuration                #
#                       and irrelevant in most scenarios.                               #
#########################################################################################
# Can only do one of the httpGet and exec handler methods for livenessProbe
livenessProbe:
  httpGet:
    path: /
    port: 80
  # exec:
  #   command:
  #     - sh
  #     - -c
  #     - |
  #       #!/usr/bin/env sh
  #       test -f /etc/
  failureThreshold: 5
  initialDelaySeconds: 10
  successThreshold: 3
  periodSeconds: 10
  timeoutSeconds: 5
# Can only do one of the httpGet and exec handler methods for readinessProbe
readinessProbe:
  httpGet:
    path: /
    port: 80
  # # Handler 2
  # exec:
  #   command:
  #     - sh
  #     - -c
  #     - |
  #       #!/usr/bin/env sh
  #       test -f /etc/
  # Common fields
  failureThreshold: 5
  initialDelaySeconds: 10
  successThreshold: 3
  periodSeconds: 10
  timeoutSeconds: 5
externalServices: []
podAnnotations: {}
# iam.amazonaws.com/role: myapp-cluster
# additional labels
labels: {}

# Allows you to load environment variables from a kubernetes secret or config map
envFrom: []
# - secretRef:
#     name: env-secret
# - configMapRef:
#     name: config-map
# A list of secrets and their paths to mount inside the pod
# This is useful for mounting certificates for security
secretMounts: []
# - name: beamd-cert
#   secretName: beamd-cert
#   path: /usr/share/myapp/config/certs
sidecarResources: {}
# limits:
#   cpu: "25m"
#   # memory: "128Mi"
# requests:
#   cpu: "25m"
#   memory: "128Mi"
# networkHost: "0.0.0.0"
# The default value of 1 will make sure that kubernetes won't allow more than 1
# of your pods to be unavailable during maintenance
# maxUnavailable: 25%
updateStrategy: RollingUpdate
# How long to wait for myapp to stop gracefully
terminationGracePeriod: 30
lifecycle: {}
# preStop:
#   exec:
#     command: ["/bin/sh", "-c", "echo Hello from the preStop handler > /usr/share/message"]
# postStart:
#   exec:
#     command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
rbac:
  create: false
  serviceAccountAnnotations: {}
  serviceAccountName: ""
# This is the PriorityClass settings as defined in
# https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
priorityClassName: ""
# By default this will make sure two pods don't end up on the same node
# Changing this to a region would allow you to spread pods across regions
# This doesn't apply if antiAffinity is not set
antiAffinityTopologyKey: "kubernetes.io/hostname"
# "hard" means that by default pods will only be scheduled if there are enough nodes for them
# and that they will never end up on the same node. Setting this to "soft" will do this best effort
antiAffinity: ""
# This is the node affinity settings as defined in
# https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature
nodeAffinity: {}
# The default is to deploy all pods serially. By setting this to parallel all pods are started at
# the same time when bootstrapping the cluster
podManagementPolicy: "Parallel"
podSecurityContext: {}
# fsGroup: 1000
# runAsUser: 1000
securityContext: {}
# capabilities:
#   drop:
#     - ALL
# # readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
## Use an alternate scheduler.
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
schedulerName: ""
nodeSelector: {}
tolerations: []
initContainer:
  enabled: false
  # command: ["echo", "I am an initContainer"]
  # image: nginx
initResources: {}
# limits:
#   cpu: "25m"
#   # memory: "128Mi"
# requests:
#   cpu: "25m"
#   memory: "128Mi"
extraInitContainers: []
# - name: do-something
#   image: busybox
#   command: ['do', 'something']
extraVolumes: []
# - name: extras
#   emptyDir: {}
extraVolumeMounts: []
# - name: extras
#   mountPath: /usr/share/extras
#   readOnly: true
extraContainers: []
# - name: do-something
#   image: busybox
#   command: ['do', 'something']
# Allows you to add any config files in /usr/share/myapp/config/
# as a ConfigMap
extraConfig: []
# - name: configName
#   path: "/path/to/config/folder/"
#   readOnly: true
#   data:
#     pokedex.yaml: |
#       pokemonName: Pikachu
#       pokemonType: Lightning
#     battle.yaml: |
#       pokemon1: Pikachu
#       pokemon2: MewTwo
# - name: configName2
#   path: "/path/to/config/anotherfolder/"
#   readOnly: true
#   data:
#     pokedex.yaml: |
#       pokemonName: Pikachu
#       pokemonType: Lightning
#     battle.yaml: |
#       pokemon1: Pikachu
#       pokemon2: MewTwo
# Extra environment variables to append to this nodeGroup
# This will be appended to the current 'env:' key. You can use any of the kubernetes env
# syntax here
extraEnvs: []
# - name: MY_ENVIRONMENT_VAR
#   value: the_value_goes_here
```
The configuration in this section is optional; in most cases you won't need to change it. Nevertheless, for the occasions that call for an edit, find the details below:

  • livenessProbe and readinessProbe: Liveness and Readiness probes control the health of an application running in a pod's container.

The liveness probe comes in handy when your app is running in a container and, for some reason (e.g. high CPU usage or an application deadlock), stops working. The liveness probe checks the container's health, and if it fails, the container is restarted.

The readiness probe is used when you want your app to be alive but only serve traffic once certain conditions are met (e.g. waiting for other services to become available). Only when the conditions in the readiness probe pass will your application serve traffic. Learn more here.

In this file, you can define the liveness and readiness probes using either the httpGet, or exec method.

  • podAnnotations and labels: Both labels and annotations are ways to add additional metadata to Kubernetes objects. They also both use key/value pairs.

Pod annotations are used for non-identifying pod information, which means they are not used to identify and select pods. It provides additional information about the pod that can be used by tools and libraries such as Prometheus and third-party tools.

Labels are used in conjunction with selectors to identify groups of related resources. Thus, in this instance, they should be used when you want to group a set of resources related to pods.

Read more here.

For example, annotations such as these could be set under podAnnotations:

```yaml
imageregistry: "https://hub.docker.com"
iam.amazonaws.com/role: myapp-cluster
```
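By way of contrast, a hypothetical labels entry might look like the sketch below; the `app` and `tier` keys are illustrative, not values Argonaut requires:

```yaml
labels:
  app: battlemeister
  tier: backend
```

A Service or controller could then group these pods with a matching selector such as `selector: {app: battlemeister}`.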
  • The envFrom array is a list of Secret and ConfigMap sources used to populate environment variables in the container. The referenced Secret must be in the same namespace as the pod. It is important to note that the pod will remain in a pending state until the Secret is available.
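For instance, uncommenting the template's own entries loads every key of a Secret and a ConfigMap as environment variables (the names `env-secret` and `config-map` come from the commented example in the file):

```yaml
envFrom:
  - secretRef:
      name: env-secret
  - configMapRef:
      name: config-map
```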

  • secretMounts: Secrets are namespaced objects that can be mounted as data volumes or environment variables to be used by a container in a pod.
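As a sketch, mounting a TLS certificate Secret into the container might look like this (the names and path follow the commented example in the file):

```yaml
secretMounts:
  - name: beamd-cert
    secretName: beamd-cert
    path: /usr/share/myapp/config/certs
```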

  • lifecycle: The lifecycle of a pod moves from Pending >> Running >> Succeeded or Failed, but if an error in node communication occurs, the pod state can be Unknown. Kubernetes supports the postStart and preStop events: it sends the postStart event immediately after a container is started, and the preStop event immediately before the container is terminated. This means the preStop hook is not triggered if the pod has already completed successfully.
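A minimal sketch of a preStop hook that gives the app time to drain in-flight connections before termination (the sleep duration is illustrative):

```yaml
lifecycle:
  preStop:
    exec:
      command: ["/bin/sh", "-c", "sleep 10"]
```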

  • rbac stands for Role-Based Access Control, an approach Kubernetes uses to secure a cluster by restricting access. In this field, specify whether RBAC resources should be created for your pod.

  • serviceAccountAnnotations & serviceAccountName: Service accounts provide an identity for processes that run in a pod. In these fields, provide your service account name and annotations.
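For example, on EKS you might annotate the service account with an IAM role for service accounts (IRSA); the service account name, AWS account ID, and role name below are placeholders:

```yaml
rbac:
  create: true
  serviceAccountName: "battlemeister-sa"
  serviceAccountAnnotations:
    # Placeholder ARN - substitute your own account ID and role
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/my-app-role
```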

  • priorityClassName: A priority class is used to order the pod scheduling queue. You create a priority class and assign it a name and a value, where the value must be less than or equal to 1,000,000,000 (one billion) and higher values mean higher priority. Enter the priority class name for your pod here. To learn more about priority settings, click here.
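A PriorityClass is a cluster-level object that you (or a cluster admin) would create separately; the name and value below are illustrative:

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000 # higher value = higher priority, max one billion
globalDefault: false
description: "For pods that should preempt lower-priority workloads."
```

With that in place, setting `priorityClassName: "high-priority"` in this file would give the pod that priority.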

  • nodeAffinity: Node affinity is a set of rules used to determine where a pod can be placed by the scheduler. It allows the pod to specify an affinity (or anti-affinity) towards a group of nodes. There are two types of node affinity rules: required and preferred.

Required rules must be met before a pod can be scheduled on a node. Preferred rules specify that, if the rule is met, the scheduler tries to enforce the rules, but does not guarantee enforcement.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: e2e-az-NorthSouth
                operator: In
                values:
                  - e2e-az-North
                  - e2e-az-South
  containers:
    - name: with-node-affinity
      image: docker.io/ocpqe/hello-pod
```
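The example above uses a required rule. A preferred rule uses `preferredDuringSchedulingIgnoredDuringExecution` instead and adds a weight (1-100); this sketch reuses the same illustrative label key:

```yaml
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 50 # higher weight = stronger preference
        preference:
          matchExpressions:
            - key: e2e-az-NorthSouth
              operator: In
              values:
                - e2e-az-North
```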
  • podSecurityContext: This references specific constraints for access and permissions at the level of individual pods, configured at runtime. The goal of these constraints is to limit any given pod's susceptibility to compromise and to limit the blast radius of a potential attack beyond a given set of containers. Check here for the complete list of security context options.
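A common hardening sketch is to run the pod as a non-root user with a locked-down container; the UID/GID values are illustrative and mirror the commented defaults in the file:

```yaml
podSecurityContext:
  runAsUser: 1000 # run processes as this non-root UID
  fsGroup: 1000   # group ownership for mounted volumes
securityContext:
  runAsNonRoot: true
  readOnlyRootFilesystem: true
  capabilities:
    drop:
      - ALL # drop all Linux capabilities
```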

  • schedulerName: Specify the name of the pod scheduler. If no scheduler name is specified, the pod is scheduled automatically using the default scheduler. Learn more about schedulers.

  • nodeSelector: A node selector specifies a map of key/value pairs that are defined using custom labels on nodes and selectors specified in pods. For the pod to be eligible to run on a node, the pod must have the same key/value node selector as the label on the node.
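For example, if your nodes carry a label such as `disktype=ssd` (an illustrative label), the pod can be pinned to them with:

```yaml
nodeSelector:
  disktype: ssd
```

The node itself would be labelled with `kubectl label nodes <node-name> disktype=ssd`.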

  • tolerations: Taints and tolerations allow nodes to control which pods should (or should not) be scheduled on them. A taint allows a node to refuse a pod unless that pod has a matching toleration. The tolerations applied to your pod must match the taint applied to your node. For example:

Taint:

```
Taints: special=true:NoSchedule
```

Toleration:

```yaml
tolerations:
  - key: "special"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
```

There are different ways toleration can match a taint. Read more about taints and tolerations.
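For instance, besides the `Equal` operator shown above, a toleration with `operator: Exists` matches any taint with the given key, regardless of its value:

```yaml
tolerations:
  - key: "special"
    operator: "Exists" # no value needed; matches any value of the "special" taint
    effect: "NoSchedule"
```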

  • initContainers are containers that run before the app containers start. You can use an init container to perform tasks before the rest of the pod is deployed. An init container can also run utilities that, for security reasons, are undesirable to include in the app container image. Learn more about init containers.
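Enabling the file's init container could look like this, following its commented defaults (the echo command and nginx image are the template's own examples):

```yaml
initContainer:
  enabled: true
  command: ["echo", "I am an initContainer"]
  image: nginx
```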

  • extraInitContainers, extraVolumes, extraVolumeMounts, extraContainers and extraEnvs are additional configuration data to add to the pod.

  • extraConfig allows you to add arbitrary config files to the pod. The config files are created as a ConfigMap and mounted into the pod. Note that anything already at the mountPath will be overwritten by the ConfigMap mounted at that location.