K8s YAML Generation Guide
On KIWI's [Service Management] page, you can generate Kubernetes deployment YAML files automatically by selecting options in a form, without writing YAML manually. This guide explains the available resource kinds and their options, along with practical examples of how to configure YAML for real projects.
KIWI automatically generates standard Kubernetes YAML based on the options you enter. You can review the generated YAML in the preview, and edit it directly if needed.
Accessing the YAML Generation Page
- Go to the [Service Management] page and select a service.
- Click the Deploy stage in the pipeline.
- Select K8s deployment in Deploy Environment Settings to see the YAML generation form.
Part 1: Options by Kind
KIWI supports generating 6 types of Kubernetes resources. Combine the Kinds you need based on your project structure.
| Kind | Purpose | Required |
|---|---|---|
| Deployment | Application Pod deployment | ✅ Almost always |
| Service | Network access to Pods | ✅ Almost always |
| Ingress | External domain routing | Optional |
| ConfigMap | Separated configuration management | Optional |
| PVC | Persistent storage | Optional |
| Job | One-time tasks (migration, etc.) | Optional |
How the service is exposed to the outside determines which of these Kinds you need:
- Ingress Controller (recommended): Set Service to ClusterIP and create an Ingress. Best for domain-based routing and the most common approach.
- NodePort (external Nginx): Access via NodePort through an external reverse proxy. No Ingress needed.
- LoadBalancer (external IP): Assigns an external IP directly. Mainly used in cloud environments. No Ingress needed.
Deployment
Defines how application Pods are deployed.
Running containers directly provides no automatic recovery on failure. A Deployment maintains your desired number of Pods, automatically restarts on failure, and handles zero-downtime updates when deploying new versions.
Basic Options
| Option | Required | Default | Description |
|---|---|---|---|
| Name | ✅ | - | Deployment resource name. Example: my-app |
| Namespace | - | default | Target namespace for deployment |
| Replicas | - | 1 | Number of Pods to run simultaneously. 3+ recommended for production |
| imagePullPolicy | - | Always | Image pull policy. Choose from Always, IfNotPresent, Never |
| Labels | - | - | Key-value pairs matching Service selectors. Example: app: my-app |
Containers (at least 1 required)
Defines the containers to run within the Pod.
| Option | Required | Default | Description |
|---|---|---|---|
| Name | ✅ | - | Container identifier. Example: frontend |
| Image | ✅ | - | Container image address. Select from build list or enter manually |
| Port | - | 80 | Port the container listens on |
| Environment Variables | - | - | Supports direct input (key-value), secretKeyRef, configMapKeyRef |
| Command / Args | - | - | Command and arguments to run at startup. Example: ["node", "server.js"] |
| Volume Mounts | - | - | Mount a PVC to a container path. Specify name, mountPath, claimName |
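The three environment-variable forms listed in the table can be sketched as a container fragment. The ConfigMap `app-config` and Secret `db-credentials` referenced here are illustrative names for resources you would create separately:

```yaml
containers:
  - name: backend
    image: harbor.example.com/my-project/backend:v1.0
    env:
      # Direct key-value input
      - name: LOG_LEVEL
        value: info
      # Value pulled from a ConfigMap key
      - name: DATABASE_HOST
        valueFrom:
          configMapKeyRef:
            name: app-config
            key: DATABASE_HOST
      # Value pulled from a Secret key (use for sensitive data)
      - name: DATABASE_PASSWORD
        valueFrom:
          secretKeyRef:
            name: db-credentials
            key: password
```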
Deployment Strategy
Determines how Pods are replaced during version updates.
| Strategy | Description | Additional Options |
|---|---|---|
| RollingUpdate (default) | Gradual replacement. Suitable for zero-downtime deployment | maxSurge (default 1), maxUnavailable (default 0) |
| Recreate | Terminates all Pods before creating new ones. May cause downtime | - |
Resources (optional)
Configures CPU and memory allocation for the Pod.
| Type | Description | Example |
|---|---|---|
| requests | Minimum guaranteed amount. Used for scheduling | cpu: 100m, memory: 128Mi |
| limits | Maximum ceiling. Throttled when exceeded | cpu: 500m, memory: 512Mi |
Health Checks (optional)
Configures probes to check Pod health.
| Probe | Role | Action on Failure |
|---|---|---|
| Liveness Probe | Checks if the Pod is alive | Restarts the Pod |
| Readiness Probe | Checks if the Pod is ready for traffic | Blocks traffic |
Options for each probe:
| Option | Description |
|---|---|
| path | Health check request path. Example: /health |
| port | Health check request port |
| initialDelaySeconds | Wait time (seconds) before the first check after Pod starts |
| periodSeconds | Check interval (seconds) |
| timeoutSeconds | Response timeout (seconds) |
| failureThreshold | Number of consecutive failures allowed |
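Put together, the probe options above map onto a container spec like the fragment below. The `/health` endpoint, port, and timing values are illustrative choices, not defaults:

```yaml
livenessProbe:
  httpGet:
    path: /health          # health check request path
    port: 3000             # health check request port
  initialDelaySeconds: 15  # wait before the first check after Pod start
  periodSeconds: 10        # check interval
  timeoutSeconds: 5        # response timeout
  failureThreshold: 3      # consecutive failures before restart
readinessProbe:
  httpGet:
    path: /health
    port: 3000
  initialDelaySeconds: 10
  periodSeconds: 10
```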
Service
Defines network endpoints for accessing Pods.
Pod IPs change every time they restart. A Service provides a stable address so other Pods and external clients can reliably access your application.
Basic Options
| Option | Required | Default | Description |
|---|---|---|---|
| Name | ✅ | - | Service resource name. Example: my-app-svc |
| Namespace | - | default | Target namespace for deployment |
| Selector | ✅ | - | Labels to select which Pods receive traffic. Must match Deployment labels |
Type
Choose based on the network exposure method.
| Type | Description | Use Case |
|---|---|---|
| ClusterIP (default) | Accessible only within the cluster | Used with Ingress |
| NodePort | Accessible externally through a specific node port | Used with external reverse proxies (Nginx, etc.) |
| LoadBalancer | Assigns an external IP for direct exposure | Primarily used in cloud environments |
Ports (at least 1)
| Option | Required | Default | Description |
|---|---|---|---|
| name | - | - | Port identifier. Example: http |
| protocol | - | TCP | Protocol |
| port | ✅ | - | Port the Service listens on |
| targetPort | ✅ | - | Container port to forward traffic to |
| nodePort | - | - | Port opened on the node for NodePort type |
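For the NodePort approach (external Nginx proxying into the cluster), the generated Service would look roughly like the sketch below. The node port 30080 is an illustrative value within Kubernetes' default 30000–32767 NodePort range:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - name: http
      protocol: TCP
      port: 80          # port the Service listens on inside the cluster
      targetPort: 8080  # container port receiving the traffic
      nodePort: 30080   # opened on every node; the external Nginx proxies here
```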
Ingress
Defines rules for routing external traffic to internal Services.
Services alone cannot handle domain-based routing or HTTPS. An Ingress routes requests coming to domains like app.example.com to the appropriate Services based on path, and also manages TLS certificates.
Basic Options
| Option | Required | Default | Description |
|---|---|---|---|
| Name | ✅ | - | Ingress resource name. Example: my-app-ingress |
| Namespace | - | default | Target namespace for deployment |
| Annotations | - | - | Key-value pairs to control Ingress Controller behavior |
Ingress Class
Select the Ingress Controller to use.
| Ingress Class | Description |
|---|---|
| traefik (recommended) | Supports automatic certificates and middleware chains |
| nginx | Community project. EOL scheduled for March 2026; migration recommended |
| haproxy | High-performance load balancing. Suitable for enterprise environments |
| kong | API Gateway integration with plugin extensions |
| contour | Envoy proxy-based CNCF project |
| alb | AWS Application Load Balancer. For AWS cloud environments only |
Rules (at least 1)
Defines routing rules per domain.
| Option | Description |
|---|---|
| host | Domain name. Example: app.example.com |
| paths | Per-path routing. Specify path, pathType (Prefix), serviceName, servicePort |
TLS (optional)
Enables HTTPS. Specify hosts (domain list) and secretName (certificate Secret).
ConfigMap
Stores application configuration values as key-value pairs.
When you enter environment variables directly in a Deployment, the values are exposed in the YAML. By separating them into a ConfigMap, you can swap configurations per environment and share the same settings across multiple Deployments. For sensitive information like passwords or API keys, use Secret instead of ConfigMap.
| Option | Required | Default | Description |
|---|---|---|---|
| Name | ✅ | - | ConfigMap resource name. Example: app-config |
| Namespace | - | default | Target namespace for deployment |
| Data | ✅ | - | Key-value pairs. Example: DATABASE_HOST: postgres, LOG_LEVEL: info |
| Labels | - | - | Key-value pairs |
PersistentVolumeClaim (PVC)
Requests persistent storage for Pods. Data is preserved even when Pods restart or are deleted.
Pod file systems are ephemeral — data is lost when a Pod restarts. Data requiring permanent retention, such as databases or uploaded files, must be stored on external storage via a PVC.
| Option | Required | Default | Description |
|---|---|---|---|
| Name | ✅ | - | PVC resource name. Example: app-data-pvc |
| Namespace | - | default | Target namespace for deployment |
| Storage Size | - | 10Gi | Requested disk size |
| Storage Class | - | nfs-client | Storage provisioner to use |
Storage Class
Options include nfs-client, local-path, longhorn, ceph-rbd, and default.
Access Modes
| Mode | Description | Use Case |
|---|---|---|
| ReadWriteOnce (default) | Read/write from a single node | Suitable for most cases |
| ReadWriteMany | Simultaneous read/write from multiple nodes | Requires shared storage like NFS |
| ReadOnlyMany | Read-only from multiple nodes | Sharing config files, etc. |
Deleting a PVC may permanently destroy the associated data. Always verify backups before deletion.
Job
Runs one-time or batch tasks. The Pod terminates after the task completes.
Use Jobs for tasks that run once and finish, such as DB migrations, initial data seeding, or one-time script execution. Unlike Deployments, the Pod automatically terminates after the task completes.
Basic Options
| Option | Required | Default | Description |
|---|---|---|---|
| Name | ✅ | - | Job resource name. Example: db-migrate-job |
| Namespace | - | default | Target namespace for deployment |
| Image | ✅ | - | Container image to run in the Job |
| Command / Args | - | - | Command and arguments. Example: ["python"], ["manage.py", "migrate"] |
| Environment Variables | - | - | Enter directly as key-value pairs |
| backoffLimit | - | 3 | Retry count on failure. Job fails after exceeding this count |
Job Type
| Type | Description |
|---|---|
| DB Migration | For database schema changes |
| Initialization | For initial data setup, seed data insertion, etc. |
| Custom | User-defined one-time tasks |
envFrom (optional)
Injects an entire ConfigMap or Secret as environment variables.
| Reference Type | Description |
|---|---|
configMapRef | Loads all keys from a ConfigMap as environment variables |
secretRef | Loads all keys from a Secret as environment variables |
restartPolicy
| Policy | Description |
|---|---|
| OnFailure (default) | Restarts the container within the same Pod on failure |
| Never | Creates a new Pod instead of restarting on failure |
Part 2: YAML Generation Examples
Example 1: Basic Web Service (Frontend + Backend + Database)
A typical web service composed of React frontend + Node.js API backend + PostgreSQL database.
Service Architecture
User → Ingress (app.example.com)
        ├─ /    → frontend-svc(:80)  → frontend Pod (React/Nginx)
        └─ /api → backend-svc(:3000) → backend Pod (Node.js)
                                            ↓
                                     postgres-svc(:5432) → postgres Pod
Generated Resources
- 3 Deployments (frontend, backend, postgres)
- 3 Services (frontend-svc, backend-svc, postgres-svc)
- 1 Ingress (app-ingress)
File Structure
k8s/
├── deployment.yaml ← frontend + backend + postgres Deployments
├── service.yaml ← frontend-svc + backend-svc + postgres-svc
└── ingress.yaml ← app-ingress
YAML is generated with separate files per Kind. When there are multiple resources of the same Kind, they are combined into one file separated by ---.
Frontend — Deployment & Service
KIWI settings summary:
| Option | Value |
|---|---|
| Name | frontend |
| Namespace | my-web-app |
| Replicas | 2 |
| Image | harbor.example.com/my-project/frontend:v1.0 |
| Port | 80 |
| Strategy | RollingUpdate (maxSurge: 1, maxUnavailable: 0) |
| Resources | requests: 100m/128Mi, limits: 300m/256Mi |
| Service Type | ClusterIP |
- The built React app is served via Nginx, so port 80 is used.
- Frontend serves static files, so resource allocation is kept small.
View generated YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: my-web-app
  labels:
    app: frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: harbor.example.com/my-project/frontend:v1.0
          ports:
            - containerPort: 80
          imagePullPolicy: Always
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 300m
              memory: 256Mi
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-svc
  namespace: my-web-app
spec:
  type: ClusterIP
  selector:
    app: frontend
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
Backend — Deployment & Service
KIWI settings summary:
| Option | Value |
|---|---|
| Name | backend |
| Namespace | my-web-app |
| Replicas | 2 |
| Image | harbor.example.com/my-project/backend:v1.0 |
| Port | 3000 |
| Strategy | RollingUpdate (maxSurge: 1, maxUnavailable: 0) |
| Env Vars | DATABASE_HOST, DATABASE_PORT, DATABASE_NAME + Secret ref |
| Resources | requests: 200m/256Mi, limits: 500m/512Mi |
| Health Checks | Liveness + Readiness (/health:3000) |
| Service Type | ClusterIP |
- DATABASE_HOST uses the postgres Service name. It is automatically resolved via DNS within the same namespace.
- The password is fetched from a Secret via secretKeyRef, avoiding direct exposure in YAML.
- Health checks are configured to automatically detect unhealthy Pods.
View generated YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  namespace: my-web-app
  labels:
    app: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: harbor.example.com/my-project/backend:v1.0
          ports:
            - containerPort: 3000
          imagePullPolicy: Always
          env:
            - name: DATABASE_HOST
              value: postgres-svc
            - name: DATABASE_PORT
              value: "5432"
            - name: DATABASE_NAME
              value: myapp
            - name: DATABASE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: password
          resources:
            requests:
              cpu: 200m
              memory: 256Mi
            limits:
              cpu: 500m
              memory: 512Mi
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 15
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
---
apiVersion: v1
kind: Service
metadata:
  name: backend-svc
  namespace: my-web-app
spec:
  type: ClusterIP
  selector:
    app: backend
  ports:
    - name: http
      protocol: TCP
      port: 3000
      targetPort: 3000
Database — Deployment & Service
KIWI settings summary:
| Option | Value |
|---|---|
| Name | postgres |
| Namespace | my-web-app |
| Replicas | 1 |
| Image | postgres:15 |
| Port | 5432 |
| Strategy | Recreate |
| imagePullPolicy | IfNotPresent |
| Env Vars | POSTGRES_DB + Secret ref |
| Service Type | ClusterIP |
- Database is set to replicas 1 for data consistency with a single instance.
- Strategy is set to Recreate because multiple Pods accessing the same volume simultaneously can cause issues.
- Uses ClusterIP since external exposure is not needed.
View generated YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: my-web-app
  labels:
    app: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:15
          ports:
            - containerPort: 5432
          imagePullPolicy: IfNotPresent
          env:
            - name: POSTGRES_DB
              value: myapp
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: password
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-svc
  namespace: my-web-app
spec:
  type: ClusterIP
  selector:
    app: postgres
  ports:
    - name: tcp
      protocol: TCP
      port: 5432
      targetPort: 5432
Ingress
KIWI settings summary:
| Option | Value |
|---|---|
| Name | app-ingress |
| Namespace | my-web-app |
| Ingress Class | traefik |
| Host | app.example.com |
| TLS | app-tls-secret |
| Routing | /api → backend-svc:3000, / → frontend-svc:80 |
- The /api path is defined before /, so API requests are matched by the more specific rule and routed to the backend rather than falling through to the frontend.
- TLS is configured to enable HTTPS.
View generated YAML
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  namespace: my-web-app
spec:
  ingressClassName: traefik
  tls:
    - secretName: app-tls-secret
      hosts:
        - app.example.com
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: backend-svc
                port:
                  number: 3000
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-svc
                port:
                  number: 80
Example 2: Advanced Project (Volume + Job + ConfigMap)
A project composed of Spring Boot API + PostgreSQL with ConfigMap for configuration separation, PVC for persistent data storage, and Job for DB migration.
Service Architecture
[ConfigMap: app-config] ← Environment settings (DB connection, log level, etc.)
        ├── envFrom ──→ [Job: db-migrate] ← Runs migration then exits
        └── envFrom ──→ [Deployment: api-server]
                              └──→ [Service: api-svc] → [Ingress: api-ingress]

[PVC: postgres-data] ← 10Gi persistent storage
        └── mount ──→ [Deployment: postgres]
                            └──→ [Service: postgres-svc]
Generated Resources
- 1 ConfigMap (app-config)
- 1 PVC (postgres-data)
- 1 Job (db-migrate)
- 2 Deployments (api-server, postgres)
- 2 Services (api-svc, postgres-svc)
- 1 Ingress (api-ingress)
File Structure
k8s/
├── configmap.yaml ← app-config
├── pvc.yaml ← postgres-data
├── job.yaml ← db-migrate
├── deployment.yaml ← api-server + postgres Deployments
├── service.yaml ← api-svc + postgres-svc
└── ingress.yaml ← api-ingress
ConfigMap
KIWI settings summary:
| Option | Value |
|---|---|
| Name | app-config |
| Namespace | my-api-project |
| Data | DATABASE_HOST, DATABASE_PORT, DATABASE_NAME, LOG_LEVEL, SERVER_PORT, SPRING_PROFILES_ACTIVE |
- Manages DB connection info, log level, and server settings in one place.
- Both Deployment and Job reference the same ConfigMap, preventing configuration mismatches.
View generated YAML
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: my-api-project
  labels:
    app: app-config
data:
  DATABASE_HOST: postgres-svc
  DATABASE_PORT: "5432"
  DATABASE_NAME: myapi
  LOG_LEVEL: info
  SERVER_PORT: "8080"
  SPRING_PROFILES_ACTIVE: production
PersistentVolumeClaim
KIWI settings summary:
| Option | Value |
|---|---|
| Name | postgres-data |
| Namespace | my-api-project |
| Size | 10Gi |
| Storage Class | nfs-client |
| Access Mode | ReadWriteOnce |
- Set to ReadWriteOnce since the database only needs access from a single node.
- Using nfs-client stores data on an NFS server, preserving data even during node failures.
View generated YAML
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
  namespace: my-api-project
  labels:
    app: postgres-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: nfs-client
  resources:
    requests:
      storage: 10Gi
Job — DB Migration
KIWI settings summary:
| Option | Value |
|---|---|
| Name | db-migrate |
| Namespace | my-api-project |
| Image | harbor.example.com/my-project/api-server:v1.0 |
| Command | java -jar app.jar --spring.flyway.enabled=true --spring.main.web-application-type=none |
| envFrom | app-config (ConfigMap) + db-credentials (Secret) |
| backoffLimit | 3 |
| restartPolicy | OnFailure |
- Uses the same image as the application but only runs Flyway migration without starting the web server.
- envFrom injects the entire ConfigMap and Secret as environment variables, eliminating the need to map individual keys.
- backoffLimit: 3 handles transient failures like DB connection issues.
Jobs are applied at the same time as Deployments, so the application may start before the migration completes. The application should therefore include DB connection retry logic; alternatively, you can use initContainers to wait for the migration to finish.
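One way to implement the initContainers approach mentioned above is an init container that blocks until the Job completes. This is a sketch, not KIWI-generated output: the bitnami/kubectl image is an assumed choice, and the Pod's ServiceAccount needs RBAC permission to read Jobs in its namespace:

```yaml
# Added under the api-server Deployment's Pod spec (spec.template.spec)
initContainers:
  - name: wait-for-migration
    image: bitnami/kubectl:latest   # assumed image; any image bundling kubectl works
    command:
      - kubectl
      - wait
      - --for=condition=complete
      - job/db-migrate
      - --timeout=300s
```

A simpler alternative that needs no RBAC is an init container that polls the database for the schema version the migration is expected to produce.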
View generated YAML
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
  namespace: my-api-project
  labels:
    app.kubernetes.io/part-of: db-migrate
    app.kubernetes.io/component: db-migrate
spec:
  backoffLimit: 3
  template:
    spec:
      restartPolicy: OnFailure
      containers:
        - name: db-migrate
          image: harbor.example.com/my-project/api-server:v1.0
          command:
            - java
          args:
            - -jar
            - app.jar
            - --spring.flyway.enabled=true
            - --spring.main.web-application-type=none
          envFrom:
            - configMapRef:
                name: app-config
            - secretRef:
                name: db-credentials
API Server — Deployment & Service
KIWI settings summary:
| Option | Value |
|---|---|
| Name | api-server |
| Namespace | my-api-project |
| Replicas | 3 |
| Image | harbor.example.com/my-project/api-server:v1.0 |
| Port | 8080 |
| Strategy | RollingUpdate (maxSurge: 1, maxUnavailable: 0) |
| envFrom | app-config (ConfigMap) + db-credentials (Secret) |
| Resources | requests: 300m/512Mi, limits: 1/1Gi |
| Health Checks | Liveness + Readiness (/actuator/health:8080) |
| Service Type | ClusterIP |
- envFrom injects the ConfigMap and Secret in bulk, sharing the same configuration with the Job.
- Uses Spring Boot's Actuator health check endpoint (/actuator/health).
- JVM-based applications start slowly, so initialDelaySeconds is set generously.
View generated YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-server
  namespace: my-api-project
  labels:
    app: api-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api-server
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: api-server
    spec:
      containers:
        - name: api-server
          image: harbor.example.com/my-project/api-server:v1.0
          ports:
            - containerPort: 8080
          imagePullPolicy: Always
          envFrom:
            - configMapRef:
                name: app-config
            - secretRef:
                name: db-credentials
          resources:
            requests:
              cpu: 300m
              memory: 512Mi
            limits:
              cpu: "1"
              memory: 1Gi
          livenessProbe:
            httpGet:
              path: /actuator/health
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /actuator/health
              port: 8080
            initialDelaySeconds: 20
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
---
apiVersion: v1
kind: Service
metadata:
  name: api-svc
  namespace: my-api-project
spec:
  type: ClusterIP
  selector:
    app: api-server
  ports:
    - name: http
      protocol: TCP
      port: 8080
      targetPort: 8080
Database — Deployment & Service (with PVC)
KIWI settings summary:
| Option | Value |
|---|---|
| Name | postgres |
| Namespace | my-api-project |
| Replicas | 1 |
| Image | postgres:15 |
| Port | 5432 |
| Strategy | Recreate |
| Env Vars | POSTGRES_DB, PGDATA + Secret ref |
| Volume Mount | postgres-data PVC → /var/lib/postgresql/data |
| Service Type | ClusterIP |
- volumeMounts and volumes connect the PVC. Data at /var/lib/postgresql/data is preserved even when the Pod restarts.
- The PGDATA environment variable points to a subdirectory (pgdata). This prevents PostgreSQL initialization failure when hidden files exist at the NFS mount root directory.
View generated YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: my-api-project
  labels:
    app: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:15
          ports:
            - containerPort: 5432
          imagePullPolicy: IfNotPresent
          env:
            - name: POSTGRES_DB
              value: myapi
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: password
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: postgres-data
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-svc
  namespace: my-api-project
spec:
  type: ClusterIP
  selector:
    app: postgres
  ports:
    - name: tcp
      protocol: TCP
      port: 5432
      targetPort: 5432
Ingress
KIWI settings summary:
| Option | Value |
|---|---|
| Name | api-ingress |
| Namespace | my-api-project |
| Ingress Class | traefik |
| Host | api.example.com |
| TLS | api-tls-secret |
| Routing | / → api-svc:8080 |
View generated YAML
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  namespace: my-api-project
spec:
  ingressClassName: traefik
  tls:
    - secretName: api-tls-secret
      hosts:
        - api.example.com
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-svc
                port:
                  number: 8080
Related Guides
- K8s Deployment - Execute actual deployment with generated YAML
- Rollback - Restore to a previous version when issues occur
- Domain/SSL Setup - Custom domain and HTTPS configuration
- HPA Autoscaling - Automatic Pod scaling based on traffic