Installing a BlazeMeter Agent using a Helm Chart

You want to install a BlazeMeter Agent for Kubernetes on a server or instance behind your firewall. This article shows you how to deploy a BlazeMeter Private Location to your Kubernetes cluster using Helm, the package manager for Kubernetes. This procedure uses a Helm chart that enables advanced configuration of your BlazeMeter Private Location deployment.

On this page:

  1. Prerequisites

  2. Obtain Location ID, Agent ID, and Auth Token from BlazeMeter

  3. Download the helm chart

  4. Configure the helm chart values

  5. Configure the image override settings

  6. Configure optional settings

  7. Verify the helm chart

  8. Install the helm chart

  9. Verify helm chart installation

  10. Upgrade an existing helm chart

  11. Uninstall the helm chart

Prerequisites

  • A BlazeMeter account

  • A Kubernetes cluster

  • Latest Helm installed

  • Before proceeding with the installation, ensure that your Kubernetes cluster meets the minimum requirements.
    For more information, see Private Location System Requirements.
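The prerequisites above can be checked with a quick pre-flight script. This is only a sketch; the tool list (helm, kubectl, tar) is an assumption based on the commands used later in this article:

```shell
# Pre-flight sketch: confirm the command-line tools used in this article are
# on the PATH. The tool list is an assumption based on the steps below.
missing=""
for tool in helm kubectl tar; do
  command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
if [ -z "$missing" ]; then
  echo "all prerequisite tools found"
else
  echo "missing tools:$missing"
fi
```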

 

Obtain Location ID, Agent ID, and Auth Token from BlazeMeter

For this installation, identify your Private Location ID (formerly Harbour_ID), BlazeMeter Agent ID (formerly Ship_ID), and your BlazeMeter API Auth_token, and store them in a text file in a safe location.


Choose one of the following ways to obtain these values:

Get the Location ID, Agent ID and Auth Token through BlazeMeter web UI

For more information, see Where can I find the Harbor ID and Ship ID?

  1. Log in to BlazeMeter with Workspace Admin permissions.

  2. Create a Private Location.

  3. After the Private Location has been created in BlazeMeter, copy the Private Location ID.

  4. Follow the procedure to Create an Agent:

    1. Give the agent a name.

    2. (Optional) Enter the IP address of the agent.

    3. Click Create.

    4. Go to the Helm chart tab. Do not run any generated docker command.
      Instead, copy the following values into a text file:

      • HARBOR_ID — your Private Location ID

      • SHIP_ID — your Agent ID

      • AUTH_TOKEN — your API authentication token

Get the Location ID, Agent ID and Auth Token through BlazeMeter API

Have your BlazeMeter API key and secret ready to log in.

  1. Create a Private Location using an API call.

  2. Copy the Private Location ID (Harbour ID) from the response.

  3. Create an Agent using an API call.

  4. Copy the Agent ID (Ship_ID) from the response.

  5. Generate the docker command using an API call.

  6. Copy the Auth_token from the response.

Do not run the generated command.

Download the helm chart

  1. Download the latest Helm chart TAR file from the GitHub repository.

  2. Unpack the chart using the following command, replacing version with the chart version you downloaded:

    tar -xvf helm-crane-version.tgz

Configure the helm chart values

The values.yaml file contains the default values for your chart. Customize it now for your requirements.

Setting up the minimum required

Before installing the chart, provide your BlazeMeter harbour_id, ship_id, and authtoken in the values.yaml file. These values are required for the Crane deployment to register and authenticate with BlazeMeter. See Get IDs for how to obtain these values.

  1. Open the values.yaml file in a text editor of your choice.

  2. Add your Location ID, Agent ID, and Auth Token, in quotation marks, to the env: section, using the following format:

    env: 
      authtoken: "your BlazeMeter API Auth token"
      harbour_id: "your Private Location ID"
      ship_id: "your Agent ID"
  3. Replace the example values above with your actual credentials.
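If you script your setup, the same minimal env section can be generated from shell variables. This is a sketch; the BM_* variable names and the env-values.yaml file name are illustrative, not part of the chart:

```shell
# Sketch: write the minimal required env section to a values override file.
# BM_AUTH_TOKEN, BM_HARBOUR_ID, and BM_SHIP_ID are placeholder variable names.
BM_AUTH_TOKEN="your BlazeMeter API Auth token"
BM_HARBOUR_ID="your Private Location ID"
BM_SHIP_ID="your Agent ID"

cat > env-values.yaml <<EOF
env:
  authtoken: "${BM_AUTH_TOKEN}"
  harbour_id: "${BM_HARBOUR_ID}"
  ship_id: "${BM_SHIP_ID}"
EOF

echo "wrote env-values.yaml"
```

You could then pass this file to Helm with an additional -f env-values.yaml flag instead of editing values.yaml directly.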

Using Kubernetes Secrets or External Secret Managers

To keep your Crane credentials secure and not store them directly in the values.yaml file, use one of the following integrations:

  • SecretProviderClass (CSI Driver)

  • ExternalSecrets Operator

When you enable either of these integrations, the env.authtoken, env.harbour_id and env.ship_id values in values.yaml are ignored, and the credentials are sourced from your external secret store.

Only set credentials in one place. If both env and a secret integration are set, the secret integration takes precedence.
  1. Enable one of the integrations:

    • To enable SecretProviderClass integration:

      secretProviderClass:
          enable: yes
          provider: aws
          # ... other configuration...
    • To enable ExternalSecrets Operator integration:

      externalSecretsOperator:
          enable: yes
          # ... other configuration...
  2. If you use a secret manager, ensure your secret keys and environment variable mappings are correct. See Configure the SecretProviderClass and Configure ExternalSecrets Operator sections for details.

Do not commit sensitive values to version controlled files.
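One low-effort safeguard is keeping any file that holds credentials out of Git. A sketch, where secret-values.yaml is a hypothetical file name for your sensitive overrides:

```shell
# Sketch: ignore a hypothetical credentials override file in Git.
echo "secret-values.yaml" >> .gitignore
sort -u .gitignore -o .gitignore   # keep the file de-duplicated if run twice
```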

Setting up deployment settings

The .Values.deployment section in values.yaml controls how the main Crane deployment is created, including the service account, RBAC roles, and the restart policy of the deployment (should it fail).


values.yaml

deployment:
  role:                # (Optional) Name of an existing Role to use in the namespace. If not set, defaults to <releaseName>-role.
  clusterrole:         # (Optional) Name of an existing ClusterRole to use. If not set, defaults to <releaseName>-clusterrole.
  serviceAccount:
    create: false      # Set to true to create a new ServiceAccount, or false to use an existing one.
    name:              # (Optional) Name of the ServiceAccount to use. Leave empty to use the default.
    annotations:       # (Optional) Annotations to add to the ServiceAccount.
      eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/example-role
      custom.annotation/key: custom-value
  restartPolicy:       # (Optional) Pod restart policy. Defaults to "Always".

The deployment section uses the following parameters:

  • role
    Use an existing Kubernetes Role for role-based access control (RBAC).
    Default: Leave empty to let the chart create one.

  • clusterrole
    Use an existing ClusterRole for cluster-wide permissions.
    Default: Leave empty to let the chart create one.

  • serviceAccount.create
    If set to true, the chart creates a new ServiceAccount.
    If set to false, specify an existing ServiceAccount in serviceAccount.name.

  • serviceAccount.name
    Name of the ServiceAccount to use.
    Default: If empty, the default ServiceAccount is used.

  • serviceAccount.annotations
    Optionally, add custom annotations, for example, for IAM roles or workload identity.

  • restartPolicy
    The Pod restart policy accepts the following values: Always, OnFailure, Never.
    Default: Always.

For most installations, you can leave these fields at their defaults unless you have specific security or compliance requirements.

  • If your cluster uses IAM roles for service accounts (IRSA) or workload identity, add the required annotations under serviceAccount.annotations.

  • If you set create: false, the chart does not create or modify the existing ServiceAccount, and the annotations in values.yaml are ignored.

  • To use pre-existing RBAC roles, specify their names in role and clusterrole.

Example for creating a new ServiceAccount with a custom IAM role:

deployment:
    serviceAccount:
        create: true
        name: my-crane-sa
        annotations:
            eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/my-crane-role

Configure the image override settings

You can override the default image settings used for Crane and its components through the imageOverride section in your values.yaml file. Use it to specify custom registries, images, tags, and pull policies for all relevant containers.

If you do not need to override images, leave this section commented or empty, and the chart will use the default images provided by BlazeMeter.

The imageOverride section has the following parameters:

  • docker_registry
    Defines the custom Docker registry for all images.

  • craneImage
    Defines the path to the Crane image.

  • tag
    Defines the image tag to use.

  • auto_update
    Enables or disables automatic updates.

  • auto_update_running_containers
    Controls auto-update for running containers.

  • executorImages
    Defines the map of executor/component images to override.

  • pullPolicy
    Defines the image pull policy (Always, IfNotPresent, etc.).

  • testImage and testTag
    Define the image and tag for the test hook.

Example configuration:

imageOverride:
  docker_registry: "gcr.io/<custom-registry>"
  craneImage: "gcr.io/<custom-registry>/blazemeter/crane"
  tag: "latest-master"
  auto_update: true
  auto_update_running_containers: false
  executorImages:
    taurus-cloud:latest: "pathToYourRepo/taurus-cloud:version"
    torero:latest: "pathToYourRepo/torero:version"
    blazemeter/service-mock:latest: "pathToYourRepo/service-mock:version"
    blazemeter/mock-pc-service:latest: "pathToYourRepo/mock-pc-service:version"
    blazemeter/sv-bridge:latest: "pathToYourRepo/sv-bridge:version"
    blazemeter/doduo:latest: "pathToYourRepo/doduo:version"
  pullPolicy: "Always"
  testImage: "gcr.io/verdant-bulwark-278/cranehook"
  testTag: "latest"

Configure optional settings

Configure the following settings depending on your requirements. Only set the proxy values if your cluster requires outbound traffic to go through a proxy. The no_proxy field helps exclude internal or local addresses.

Configure the proxy

If your environment requires a proxy, configure it now.

The proxy section of values.yaml has the following parameters:

  • enable
    Set this to yes to enable proxies.
    Default: no.
  • http_path
    (Optional) Defines the HTTP proxy URL.

  • https_path
    (Optional) Defines the HTTPS proxy URL.

  • no_proxy
    (Optional) Set this to a comma-separated list of hosts or domains that should bypass the proxy.
    Default: "kubernetes.default,127.0.0.1,localhost"

Your proxy section should look similar to the following example:

proxy:
  enable: yes
  http_path: "http://your_http_proxy:your_port" 
  https_path: "https://your_https_proxy:port"
  no_proxy: "kubernetes.default,127.0.0.1,localhost,yourHostname.com"

Configure certificates

Configure certificates only if required, and only for BlazeMeter Service Virtualization.

To configure your Kubernetes installation to use CA certificates, make changes to the ca_bundle section of the values.yaml file.

The ca_bundle section has the following parameters:

  • enable
    Set this option in ca_bundle to yes.
    Default: no.

  • request_ca_bundle and aws_ca_bundle
    Provide the file name of your CA certificate bundle and your AWS CA certificate bundle, respectively.

    Copy or move these certificate files into the same directory as the Helm chart and specify the file names instead of complete paths.

Example:

ca_bundle:
  enable: yes
  request_ca_bundle: "certificate.crt"
  aws_ca_bundle: "certificate2.crt"
  volume:
    volume_name: "volume-cm"
    mount_path: "/var/cm"
    readOnly: true

Add gridProxy configuration

Configure a gridProxy only if required, and only for GUI functional testing.

Grid Proxy enables you to run Selenium GUI functional tests in BlazeMeter without using a local server. If you plan to configure your crane installation to use a Grid Proxy, make changes to the following section of the values.yaml file.

You can run Grid Proxy over the HTTPS protocol using the following configuration:

gridProxy:
  enable: yes
  a_environment: 'https://your.environment.net'
  tlsKeyGrid: "certificate.key"   # The private key for the domain used to run the BlazeMeter Grid proxy over HTTPS. Value in string format.
  tlsCertGrid: "certificate.crt"   # The public certificate for the domain used to run the BlazeMeter Grid proxy over HTTPS. Value in string format.
  mount_path: "/etc/ssl/certs/doduo"
  doduoPort: 9070          # The user-defined port where to run Doduo (BlazeMeter Grid Proxy). By default, Doduo listens on port 8000.
  volume:
    volume_name: "tls-files"
    mount_path: "/etc/ssl/certs/doduo"
    readOnly: true

The values TLS_CERT_GRID and TLS_KEY_GRID reference the files in the pod where the ConfigMap is mounted.

Deploy non_privilege container - non-root deployment

Non-root deployment requires an additional feature to be enabled at the account level. Contact support to enable this feature.

If you plan to deploy the BlazeMeter Crane as a non-privileged installation, make changes to the non_privilege_container section of the values.yaml file. By default, enable is set to no. Change enable to yes to automatically run the deployment and subsequent pods as non-root/non-privileged.

You can set runAsGroup and runAsUser to any values of your choice. The user and group IDs must be the same for both Crane and child resources.

Example:

non_privilege_container:
  enable: no
  runAsGroup: 1337
  runAsUser: 1337

Configure deployment to support BlazeMeter Service Virtualization

If your Private Location will run Service Virtualization (mock services), enable the service_virtualization section in your values.yaml file. This allows you to expose virtual services using either Istio or NGINX ingress controllers.

Only one ingress type can be enabled at a time. Ensure the corresponding ingress controller (NGINX or Istio) is installed and configured in your cluster.

This section supports the following parameters:

  • enable
    Set to yes to activate BlazeMeter Service Virtualization.

  • ingressType
    Set to either nginx or istio based on your ingress controller.

  • credentialName
    Set to the name of the credential (for example, wildcard certificate) to use.

  • web_expose_subdomain
    Set to the name of the subdomain to expose mock services.

Example:

service_virtualization:
    enable: yes
    ingressType: nginx         # nginx or istio, depending on your cluster setup
    credentialName: "wildcard-credential"
    web_expose_subdomain: "mydomain.local"

For more information, see Installing a BlazeMeter Agent for Kubernetes (Service Virtualization).

Configure Labels for Crane and child resources

You can add custom labels to the main Crane deployment, the Crane pod, and its child resources (such as executor pods) using the following sections in your values.yaml file. These labels are added in addition to any default labels set by the Helm chart and BlazeMeter. This is useful for organizing, tracking, or applying policies to your resources according to your organization’s standards.

Each section has the following parameters:

  • enable: Set to yes to apply the labels. If enable is set to no, labels will not be applied for that resource type.

  • syntax: Provide your labels in JSON format.

Use the section labelsCrane for labels set for crane, and use labelsExecutors for labels set for child pods.

Add labels in a JSON format as shown in the following example:

labelsCrane:
  enable: yes
  syntax: {"purpose": "loadtest", "owner": "devops"}
labelsExecutors:
  enable: yes
  syntax: {"type": "executor", "region": "us-east-1"}

Configure deployment to support tolerations

Use this configuration to specify tolerations for the Crane and child pods. Set enable to yes and add tolerations for the Crane and child resources.

Use the section tolerationCrane to set tolerations for crane. Use the section tolerationExecutors to set tolerations for child pods.

Add tolerations in a JSON format as shown in the following example:

tolerationCrane:
  enable: no
  syntax: [{ "effect": "NoSchedule", "key": "lifecycle", "operator": "Equal", "value": "spot" }]
tolerationExecutors:
  enable: no
  syntax: [{ "effect": "NoSchedule", "key": "lifecycle", "operator": "Equal", "value": "spot" }]

Configure deployment to support node selector for crane and child resources

Use this configuration to specify node selectors for the Crane and child pod resources. Set enable to yes and add node selectors for the Crane and child resources.

Use the section nodeSelectorCrane to set node selectors for crane. Use the section nodeSelectorExecutor to set node selectors for child pods.

Add node selectors in a JSON format as shown in the following example:

nodeSelectorCrane:
  enable: no
  syntax: {"label_1": "label_1_value", "label_2": "label_2_value"}
nodeSelectorExecutor:
  enable: no
  syntax: {"label_1": "label_1_value", "label_2": "label_2_value"}

Configure resource limits and requests for the crane and child resources

If you do not need to set resource limits or requests, you can omit these sections or leave them commented.

If you require CPU, memory, or ephemeral-storage limits and requests to be applied to Crane and its child resources, configure the resourcesCrane or resourcesExecutors values.

  • CPU
    Defines the CPU request or limit. Standard Kubernetes notation (for example, 250m or 1) is supported.

  • MEM
    Defines the memory request or limit. For resourcesExecutors, set the MEM value to an integer in Mi, not a string (that is, 4096, not 4096Mi).

  • storage
    (Optional) Defines the ephemeral storage in MB.

The values in resourcesCrane are applied to the Crane deployment, while the values in resourcesExecutors are applied to the child resources. You can use either one of them or both.

Add the required values to the following section of the values.yaml file.

# Resource requests and limits for the Crane deployment.
resourcesCrane:  
  requests:     
    CPU: 250m
    MEM: 512Mi 
    storage: 100      # Ephemeral storage in MB (optional)
  limits:
    CPU: 1            # Example: 1 core
    MEM: 2Gi
    storage: 1024     # Ephemeral storage in MB (optional)

# Resource requests and limits for child resources (executors/agents).
resourcesExecutors: 
  requests:           
    CPU: 1000m        
    MEM: 4096         # This value should be an integer (Mi), unlike other values that support k8s standard notation.
    storage: 100      # Ephemeral storage in MB (optional)
  limits:
    CPU: 2
    MEM: 8Gi
    storage: 1024

 

Configure the Pod Disruption Budget

A Pod Disruption Budget (PDB) ensures that a minimum number of pods remain available during voluntary disruptions (such as node drains or cluster upgrades). You can configure a PDB for the Crane deployment by enabling the following settings in your values.yaml file.

  • enable
    In the podDisruptionBudget section, set enable to yes to activate the PDB.
    If you do not require a PDB, leave enable as no.

  • minAvailable or maxUnavailable
    Specify either minAvailable (minimum pods that must be available) or maxUnavailable (maximum pods that can be unavailable). If both are set, minAvailable takes precedence.

  • matchLabels
    Specify the labels used to match the pods covered by the PDB.

Example configuration:

podDisruptionBudget:
    enable: yes
    # Only one of minAvailable or maxUnavailable should be set.
    minAvailable: 1
    # maxUnavailable: 1
    matchLabels: {"app": "crane"}

Configure the SecretProviderClass

Use the SecretProviderClass resource with the Secrets Store CSI Driver to mount secrets, keys, or certificates from external secret management systems (such as Azure Key Vault, AWS Secrets Manager, or HashiCorp Vault) into Kubernetes pods as files or Kubernetes secrets.

Customize the parameters and secretObjects fields based on your secrets provider and use case. You can enable and configure SecretProviderClass for the Crane deployment by updating the following section in your values.yaml file:

  • enable
    Set enable in the SecretProviderClass section to yes to activate the integration.
    If you do not require SecretProviderClass integration, leave enable as no.

  • provider
    Specify the external secrets provider (e.g., azure, aws, vault).

  • objects
    Add a list of provider-specific objects (such as secrets, aliases, or keys).

  • secretObjects
    (Optional) Define Kubernetes secrets to be created from the mounted content.

  • envName
    Defines the environment variable that the specific secret replaces or populates. This is not a standard SecretProviderClass parameter; it is specific to this chart.

Example configuration:

secretProviderClass:
  enable: yes
  provider: aws
  # This is JSON, to allow users to configure different specs, like: secretPath, secretKey, objectAlias, etc. 
  objects: [{ "objectName": "arn:aws:secretsmanager:ap-southeast-2:{{AWS ACCOUNT}}:secret:harbour-id-{{dummy}}","objectType": "secretsmanager","objectAlias": "harbour-id-opl"},{"objectName": "arn:aws:secretsmanager:ap-southeast-2:{{AWS ACCOUNT}}:secret:ship-id-{{dummy}}","objectType": "secretsmanager","objectAlias": "ship-id-opl"}]
  secretObjects:  
  # Comment out the below section if you do not plan to create secrets in the namespace. 
    - secretName: auth-token
      type: Opaque
      data:
        - key: auth-token-key
          objectName: auth-token-opl
      envName: AUTH_TOKEN
    - secretName: harbour-id
      type: Opaque
      data:
        - key: harbour-id-key
          objectName: harbour-id-opl
      envName: HARBOR_ID
    - secretName: ship-id
      type: Opaque
      data:
        - key: ship-id-key
          objectName: ship-id-opl
      envName: SHIP_ID
You can specify as many as you need in the same map/slice fashion. The chart is designed to loop over these items.

Configure ExternalSecrets Operator

Use the ExternalSecrets Operator to synchronize secrets from external secret management systems (such as AWS Secrets Manager or Google Cloud Secret Manager) into Kubernetes secrets. This integration is useful if you want your Crane deployment to automatically fetch and manage secrets from your external provider.

Use the data section to map external secrets to Kubernetes secrets and environment variables.

For more details, see the ExternalSecrets Operator documentation.

You can enable and configure the ExternalSecrets Operator for the Crane deployment by updating the following section in your values.yaml file:

  • enable
    In the externalSecretsOperator section, set enable to yes to activate the integration.
    If you do not require ExternalSecrets Operator integration, leave enable as no.

    Only enable the provider you intend to use, either aws or gcpsm. For other providers (such as Azure), please contact support.
  • volume
    (Optional) Configure the volume name, mount path, and readOnly flag for mounting secrets.

  • externalSecret
    Configure the ExternalSecret resource:

    • name: Name of the ExternalSecret resource.

    • refreshInterval: How often the operator should refresh the secret.

    • target.name: Name of the Kubernetes Secret to create.

    • data: List of secrets to fetch, mapping secretKey (Kubernetes key) to remoteRef.key (external secret name) and envName (environment variable to populate).

  • secretStore
    Configure the SecretStore resource:

    • name: Name of the SecretStore.

    • provider: Configure your secrets provider (e.g., AWS or GCP).

      • AWS: Set enable for aws to true, specify service and region.

        • authSecretRef: (Optional) Use this if you want to authenticate to AWS using static credentials (not recommended for production).

          • accessKeyID: Reference to a Kubernetes Secret containing your AWS access key ID.

          • secretAccessKey: Reference to a Kubernetes Secret containing your AWS secret access key.

          • If authSecretRef.enable is false, the chart will use the service account associated with the deployment (recommended).

      • GCP: Set enable for gcpsm to true, specify projectID.

        • secretRef: (Optional) Use this if you want to authenticate to GCP using a static service account key.

          • secretAccessKeySecretRef: Reference to a Kubernetes Secret containing your GCP credentials.

          • If secretRef.enable is false, the chart will use Workload Identity with the service account (recommended).

Example configuration:

externalSecretsOperator:
  enable: yes
  volume: 
    name: 
    readOnly: 
    path: 

  externalSecret: 
    name: blaze-external-secret
    refreshInterval: "15s"
    target:
      name: blazemeter-secrets-store
    data:
      - secretKey: ship-id
        remoteRef:
          key: ship-id
        envName: SHIP_ID
      - secretKey: harbour-id
        remoteRef:
          key: harbour-id
        envName: HARBOR_ID
      - secretKey: auth-token
        remoteRef:
          key: auth-token
        envName: AUTH_TOKEN
  
  secretStore:
    name: blaze-secret-store
    provider:
      aws:
        enable: true
        service: SecretsManager
        region: ap-southeast-2
        # Optionally configure authentication using static credentials:
        authSecretRef:
          enable: false
#  ---- <Rest of the config> ----
The chart will use the service account associated with the deployment for authentication unless authSecretRef (AWS) or secretRef (GCP) is enabled. authSecretRef and secretRef allow you to reference Kubernetes secrets for static credentials, but using IAM roles (AWS) or Workload Identity (GCP) is recommended for production.

Verify the helm chart

After you have configured the values, run the following commands to verify that the values are correctly applied in the Helm chart:

helm lint <path-to-chart>
helm template <path-to-chart>

The helm template command prints the rendered manifests that Helm will use to install this chart.

Check the values and, if something is missing, edit the values.yaml file again according to your requirements.
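The checks above can be combined into one guarded script. This is a sketch; ./helm-crane is a placeholder path for wherever you unpacked the chart:

```shell
# Sketch: lint and render the chart before installing.
# ./helm-crane is a placeholder path; substitute your unpacked chart directory.
CHART=./helm-crane
if command -v helm >/dev/null 2>&1 && [ -d "$CHART" ]; then
  helm lint "$CHART"
  helm template "$CHART" > rendered.yaml
  echo "rendered resources: $(grep -c '^kind:' rendered.yaml)"
  status="checked"
else
  status="skipped"
  echo "helm or the chart directory is missing; complete the earlier steps first"
fi
```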

Install the helm chart

To install the BlazeMeter Agent through the Helm chart, use the following command:

helm install crane /path/to/chart --namespace <your-namespace-name>

Verify helm chart installation

After installing the chart, you can verify both the deployment and the underlying Kubernetes infrastructure using Helm’s built-in test hooks. This chart includes a test pod that checks for essential connectivity and configuration, ensuring your environment is ready for BlazeMeter workloads.

Run the Helm test

To execute the test, run:

helm test <release-name> -n <namespace>

  1. Replace <release-name> with the name you used for your Helm release (e.g., crane).

  2. Replace <namespace> with the namespace where you installed the chart.

What does the test do?

The test pod will:

  • Validate that the cluster resources are suitable to run the Crane and child deployments.

  • Check for required roles and mappings.

  • Verify network connectivity and DNS resolution from within the cluster.

  • Validate that the required Kubernetes resources are deployed to support Crane and its functionality.

Interpreting validation results

Success:

If the test passes, you’ll see output similar to:

NAME: crane
LAST DEPLOYED: Tue Jun  3 20:24:12 2025
NAMESPACE: default
STATUS: deployed
REVISION: 5
TEST SUITE:     cranetesthook
Last Started:   Tue Jun  3 20:24:24 2025
Last Completed: Tue Jun  3 20:24:30 2025
Phase:          Succeeded

This means your chart and infrastructure are ready.

Failure:

If the test fails, review the test pod logs for details; running helm test with the --logs flag prints output that points to the cause of the failure. Common issues include missing secrets, network restrictions, or misconfigured values or specs. Address any reported issues and re-run the test.

Additional tips

You can add the --logs flag to helm test to automatically print the test pod logs as follows:

helm test <release-name> -n <namespace> --logs

If the test pod is stuck or fails to start, check for a Kubernetes scheduler error (possible with third-party admission controllers), image pull errors, or missing configuration.

If you continue to encounter issues, please contact your cloud or DevOps team for assistance.

Upgrade an existing helm chart

To upgrade your existing Helm release to a new version of the chart, use the helm upgrade command. This allows you to apply new chart versions or updated configuration values without uninstalling and reinstalling.

Basic upgrade command

Run the following command:

helm upgrade <release-name> /path/to/newchart -n <namespace>
  1. Replace <release-name> with the name of your Helm release (for example, crane).

  2. Replace /path/to/newchart with the path to the new or updated chart directory or .tgz file.

  3. Replace <namespace> with the namespace where your release is installed.

Upgrading with custom values

If you have a custom values.yaml file, specify it with the -f flag:

helm upgrade <release-name> /path/to/newchart -n <namespace> -f /path/to/values.yaml

You can specify multiple -f flags to merge several values files.

Additional tips

Before upgrading, preview the changes using the helm-diff plugin:

helm diff upgrade <release-name> /path/to/newchart -n <namespace> -f /path/to/values.yaml

If you want to force resource updates (for example, if only config or secrets changed), add --force:

helm upgrade <release-name> /path/to/newchart -n <namespace> --force

After upgrading, verify the deployment and run the Helm test as described in the previous section.

If you encounter issues during upgrade, review the output for errors and consult the Helm upgrade documentation (helm.sh).

Uninstall the helm chart

To uninstall the Helm chart, run:

helm uninstall <release-name> -n <your-namespace-name>