Tanzu Application Services deployments overview

This topic shows you how to deploy a publicly available application to your Tanzu Application Service (TAS, formerly PCF) space by using any deployment strategy in Harness.

note

Currently, this feature is behind the feature flag NG_SVC_ENV_REDESIGN. Contact Harness Support to enable it.

Objectives

You'll learn how to:

  • Install and launch a Harness delegate in your target cluster.
  • Connect Harness with your TAS account.
  • Connect Harness with a public image hosted on Artifactory.
  • Specify the manifest to use for the application.
  • Set up a TAS pipeline in Harness to deploy the application.

Important notes

  • For TAS deployments, Harness supports the following artifact sources. You connect Harness to these registries by using your registry account credentials.
    • Artifactory
    • Nexus
    • Docker Registry
    • Amazon S3
    • Google Container Registry (GCR)
    • Amazon Elastic Container Registry (ECR)
    • Azure Container Registry (ACR)
    • Google Artifact Registry (GAR)
    • Google Cloud Storage (GCS)
    • GitHub Package Registry
    • Azure Artifacts
    • Jenkins
  • Before you create a TAS pipeline in Harness, make sure that you have the Continuous Delivery module in your Harness account. For more information, go to create organizations and projects.
  • Your Harness delegate profile must have CF CLI v7, autoscaler, and Create-Service-Push plugins added to it.

Connect to a TAS provider

You can connect Harness to a TAS space by adding a TAS connector. Perform the following steps to add a TAS connector.

  1. Open a Harness project and select the Deployments module.

  2. In Project Setup, select Connectors, then select New Connector.

  3. In Cloud Providers, select Tanzu Application Service. The TAS connector settings appear.

  4. Enter a connector name and select Continue.

  5. Enter the TAS Endpoint URL. For example, https://api.system.tas-mycompany.com.

  6. In Authentication, select one of the following options.

    1. Plaintext - Enter the username in plain text and the password as a Harness secret. For the password, you can either create a new secret or use an existing one.
    2. Encrypted - Enter both the username and password as Harness secrets. You can create new secrets or use existing ones.
  7. Select Continue.

  8. In Connect to the provider, select Connect through a Harness Delegate, and then select Continue. We don't recommend using the Connect through Harness Platform option here because you'll need a delegate later for connecting to your TAS environment. Typically, the Connect through Harness Platform option is a quick way to make connections without having to use delegates.

    Expand the sections below to learn more about installing delegates.

Use the delegate installation wizard
  1. In your Harness project, select Project Setup.
  2. Select Delegates.
  3. Select Install a Delegate.
  4. Follow the delegate installation wizard.

Use this delegate installation wizard video to guide you through the process.

Install a delegate using the terminal

What is Harness Delegate?

Harness Delegate is a lightweight worker process that is installed on your infrastructure and communicates only via outbound HTTP/HTTPS to the Harness Platform. This enables the Harness Platform to leverage the delegate to execute CI/CD and other tasks on your behalf, without any of your secrets leaving your network.

You can install the Harness Delegate on either Docker or Kubernetes.

Install Harness Delegate

Create a new delegate token

Log in to the Harness Platform and go to Account Settings -> Account Resources -> Delegates. Select the Tokens tab. Select +New Token, and enter a token name, for example firstdeltoken. Select Apply. Harness Platform generates a new token for you. Select Copy to copy and store the token in a temporary file. You will provide this token as an input parameter in the next installation step. The delegate will use this token to authenticate with the Harness Platform.

Get your Harness account ID

Along with the delegate token, you will also need to provide your Harness accountId as an input parameter during delegate installation. This accountId is present in every Harness URL. For example, in the following URL:

https://app.harness.io/ng/#/account/6_vVHzo9Qeu9fXvj-AcQCb/settings/overview

6_vVHzo9Qeu9fXvj-AcQCb is the accountId.

Now you are ready to install the delegate on either Docker or Kubernetes.

Prerequisite

Ensure that you have access to a Kubernetes cluster. For the purposes of this tutorial, we will use minikube.

Install minikube

  • On Windows:
choco install minikube
  • On macOS:
brew install minikube

Now start minikube with the following config.

minikube start --memory 4g --cpus 4

Validate that you have kubectl access to your cluster.

kubectl get pods -A

Now that you have access to a Kubernetes cluster, you can install the delegate using any of the options below.

Install the Helm chart

As a prerequisite, you must have Helm v3 installed on the machine from which you connect to your Kubernetes cluster.

You can now install the delegate using the delegate Helm chart. First, add the harness-delegate Helm chart repo to your local Helm registry.

helm repo add harness-delegate https://app.harness.io/storage/harness-download/delegate-helm-chart/
helm repo update
helm search repo harness-delegate

We will use the harness-delegate/harness-delegate-ng chart in this tutorial.

NAME                                   CHART VERSION   APP VERSION   DESCRIPTION
harness-delegate/harness-delegate-ng   1.0.8           1.16.0        A Helm chart for deploying harness-delegate

Now we are ready to install the delegate. The following example installs or upgrades the firstk8sdel delegate (a Kubernetes workload) in the harness-delegate-ng namespace using the harness-delegate/harness-delegate-ng Helm chart.

To install the delegate, do the following:

  1. In Harness, select Deployments, then select your project.

  2. Select Delegates under Project Setup.

  3. Select Install a Delegate to open the New Delegate dialog.

  4. Select Helm Chart under Install your Delegate.

  5. Copy the helm upgrade command.

  6. Run the command.

The command uses the default values.yaml located in the delegate-helm-chart GitHub repo. If you want to change one or more values persistently instead of overriding them on the command line, you can download and update the values.yaml file as needed, then pass the updated file to the command as shown below.

helm upgrade -i firstk8sdel --namespace harness-delegate-ng --create-namespace \
harness-delegate/harness-delegate-ng \
-f values.yaml \
--set delegateName=firstk8sdel \
--set accountId=PUT_YOUR_HARNESS_ACCOUNTID_HERE \
--set delegateToken=PUT_YOUR_DELEGATE_TOKEN_HERE \
--set managerEndpoint=PUT_YOUR_MANAGER_HOST_AND_PORT_HERE \
--set delegateDockerImage=harness/delegate:23.02.78306 \
--set replicas=1 --set upgrader.enabled=false
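
After the upgrade completes, you can confirm that the delegate workload is up before returning to the Harness UI. This is a minimal sanity check, assuming the harness-delegate-ng namespace from the command above:

# list the delegate pods created by the chart
kubectl get pods -n harness-delegate-ng

# tail the logs of a delegate pod (replace <pod-name> with a name from the previous command)
kubectl logs -f <pod-name> -n harness-delegate-ng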

Deploy using a custom role

During delegate installation, you have the option to deploy using a custom role. To use a custom role, you must edit the delegate YAML file.

Harness supports the following custom roles:

  • cluster-admin
  • cluster-viewer
  • namespace-admin
  • custom cluster roles

To deploy using a custom cluster role, do the following:

  1. Open the delegate YAML file in your text editor.

  2. Add the custom cluster role to the roleRef field in the delegate YAML.

    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: harness-delegate-cluster-admin
    subjects:
      - kind: ServiceAccount
        name: default
        namespace: harness-delegate-ng
    roleRef:
      kind: ClusterRole
      name: cluster-admin
      apiGroup: rbac.authorization.k8s.io
    ---

    In this example, the cluster-admin role is defined.

  3. Save the delegate YAML file.
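
The edited YAML takes effect only after you re-apply it to the cluster. For example (assuming you saved the delegate manifest as harness-delegate.yml; substitute your actual file name):

# re-apply the delegate manifest so the new ClusterRoleBinding is created
kubectl apply -f harness-delegate.yml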

Verify delegate connectivity

Select Continue. After the health checks pass, your delegate is available for you to use. Select Done and verify your new delegate is listed.

Whether you installed the delegate using the Helm chart, Terraform Helm provider, Kubernetes manifest, or Docker, the delegate appears with the status Delegate Available once it connects to the Harness Platform.

You can now route communication to external systems in Harness connectors and pipelines by selecting this delegate via a delegate selector.

Delegate selectors do not override service infrastructure connectors. Delegate selectors only determine the delegate that executes the operations of your pipeline.

Troubleshooting

The delegate installer provides troubleshooting information for each installation process. If the delegate cannot be verified, select Troubleshoot for steps you can use to resolve the problem. This section includes the same information.

Harness asks for feedback after the troubleshooting steps. You are asked, Did the delegate come up?

If the steps did not resolve the problem, select No, and use the form to describe the issue. You'll also find links to Harness Support and to Delegate docs.

Use the following steps to troubleshoot your installation of the delegate using Helm.

  1. Verify that Helm is correctly installed:

    Check for Helm:

    helm

    And then check for the installed version of Helm:

    helm version

    If you receive the message Error: rendered manifests contain a resource that already exists..., delete the existing namespace and retry the Helm upgrade command to deploy the delegate (see the example after these steps).

    For further instructions on troubleshooting your Helm installation, go to Helm troubleshooting guide.

  2. Check the status of the delegate on your cluster:

    kubectl describe pods -n <namespace>
  3. If the pod did not start, check the delegate logs:

    kubectl logs -f <harnessDelegateName> -n <namespace>

    If the state of the delegate pod is CrashLoopBackOff, check your allocation of compute resources (CPU and memory) to the cluster. A state of CrashLoopBackOff indicates insufficient Kubernetes cluster resources.

  4. If the delegate pod is not healthy, use the kubectl describe command to get more information:

    kubectl describe pod <pod_name> -n <namespace>
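
For the namespace conflict described in step 1, a minimal recovery sequence looks like this (a sketch, assuming the harness-delegate-ng namespace and firstk8sdel delegate used earlier in this tutorial):

# remove the conflicting namespace, then retry the install
kubectl delete namespace harness-delegate-ng
helm upgrade -i firstk8sdel --namespace harness-delegate-ng --create-namespace \
harness-delegate/harness-delegate-ng \
--set delegateName=firstk8sdel \
--set accountId=PUT_YOUR_HARNESS_ACCOUNTID_HERE \
--set delegateToken=PUT_YOUR_DELEGATE_TOKEN_HERE \
--set managerEndpoint=PUT_YOUR_MANAGER_HOST_AND_PORT_HERE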

To learn more, watch the Delegate overview video.

  1. In Set Up Delegates, select the Connect using Delegates with the following Tags option and enter your delegate name.
  2. Select Save and Continue.
  3. Once the test connection succeeds, select Finish. The connector now appears in the Connectors list.

Install Cloud Foundry Command Line Interface (CF CLI) on your Harness delegate

After the delegate pods are created, you must edit your Harness delegate YAML to install CF CLI v7, autoscaler, and Create-Service-Push plugins.

  1. Open the delegate.yaml in a text editor.

  2. Locate the environment variable INIT_SCRIPT in the Deployment object.

    - name: INIT_SCRIPT
      value: ""
  3. Replace value: "" with the following script to install CF CLI, autoscaler, and Create-Service-Push plugins.

    info

    The Harness delegate uses Red Hat-based distributions such as Red Hat Enterprise Linux (RHEL) or Red Hat Universal Base Image (UBI). Hence, we recommend that you use microdnf commands to install the CF CLI on your delegate. If you are using a package manager in a Debian-based distribution like Ubuntu, use apt-get commands to install the CF CLI on your delegate.

    info

    Make sure to use your API token for pivnet login in the following script.

- name: INIT_SCRIPT
  value: |
    # update package manager, install necessary packages, and install CF CLI v7
    microdnf update
    microdnf install yum
    microdnf install --nodocs unzip yum-utils
    microdnf install -y yum-utils
    echo y | yum install wget
    wget -O /etc/yum.repos.d/cloudfoundry-cli.repo https://packages.cloudfoundry.org/fedora/cloudfoundry-cli.repo
    echo y | yum install cf7-cli -y

    # autoscaler plugin
    # download and install pivnet
    wget -O pivnet https://github.com/pivotal-cf/pivnet-cli/releases/download/v0.0.55/pivnet-linux-amd64-0.0.55 && chmod +x pivnet && mv pivnet /usr/local/bin;
    pivnet login --api-token=<replace with api token>

    # download and install autoscaler plugin by pivnet
    pivnet download-product-files --product-slug='pcf-app-autoscaler' --release-version='2.0.295' --product-file-id=912441
    cf install-plugin -f autoscaler-for-pcf-cliplugin-linux64-binary-2.0.295

    # install Create-Service-Push plugin from community
    cf install-plugin -r CF-Community "Create-Service-Push"

    # verify cf version
    cf --version

    # verify plugins
    cf plugins
  4. Apply the updated delegate YAML to your cluster, and then check the delegate logs to confirm that the script ran.

    The output for cf --version is cf version 7.2.0+be4a5ce2b.2020-12-10.

    Here is the output for cf plugins.

    App Autoscaler        2.0.295   autoscaling-apps              Displays apps bound to the autoscaler
    App Autoscaler        2.0.295   autoscaling-events            Displays previous autoscaling events for the app
    App Autoscaler        2.0.295   autoscaling-rules             Displays rules for an autoscaled app
    App Autoscaler        2.0.295   autoscaling-slcs              Displays scheduled limit changes for the app
    App Autoscaler        2.0.295   configure-autoscaling         Configures autoscaling using a manifest file
    App Autoscaler        2.0.295   create-autoscaling-rule       Create rule for an autoscaled app
    App Autoscaler        2.0.295   create-autoscaling-slc        Create scheduled instance limit change for an autoscaled app
    App Autoscaler        2.0.295   delete-autoscaling-rule       Delete rule for an autoscaled app
    App Autoscaler        2.0.295   delete-autoscaling-rules      Delete all rules for an autoscaled app
    App Autoscaler        2.0.295   delete-autoscaling-slc        Delete scheduled limit change for an autoscaled app
    App Autoscaler        2.0.295   disable-autoscaling           Disables autoscaling for the app
    App Autoscaler        2.0.295   enable-autoscaling            Enables autoscaling for the app
    App Autoscaler        2.0.295   update-autoscaling-limits     Updates autoscaling instance limits for the app
    Create-Service-Push   1.3.2     create-service-push, cspush   Works in the same manner as cf push, except that it will create services defined in a services-manifest.yml file first before performing a cf push.
    note

    The CF Command script does not require cf login. Harness logs in using the credentials in the TAS connector set up in the infrastructure definition for the pipeline executing the CF Command.
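
If you build a custom delegate image on a Debian-based distribution (see the note earlier in this section), the equivalent INIT_SCRIPT fragment would use apt-get instead of microdnf. This is a sketch based on the Cloud Foundry Foundation's public apt repository; verify the repository details against the current CF CLI documentation:

# add the Cloud Foundry apt repo and install CF CLI v7
wget -q -O - https://packages.cloudfoundry.org/debian/cli.cloudfoundry.org.key | apt-key add -
echo "deb https://packages.cloudfoundry.org/debian stable main" | tee /etc/apt/sources.list.d/cloudfoundry-cli.list
apt-get update
apt-get install -y cf7-cli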

Create the deploy stage

Pipelines are collections of stages. For this tutorial, we'll create a new pipeline and add a single stage.

  1. In your Harness project, select Pipelines, select Deployments, then select Create a Pipeline.

    Your pipeline appears.

  2. Enter the name TAS Quickstart and click Start.

  3. Click Add Stage and select Deploy.

  4. Enter the stage name Deploy TAS Service, select the Tanzu Application Services deployment type, and select Set Up Stage.

    The new stage settings appear.

Create the Harness TAS service

Harness services represent your microservices or applications. You can add the same service to as many stages as you need. Services contain your artifacts, manifests, config files, and variables. For more information, go to services and environments overview.

Create a new service

  1. Select the Service tab, then select Add Service.

  2. Enter a service name. For example, TAS.

    Services are persistent and can be used throughout the stages of this pipeline or any other pipeline in the project.

  3. In Service Definition, in Deployment Type, verify that Tanzu Application Services is selected.

Add the manifest

  1. In Manifests, select Add Manifest.
    Harness uses TAS Manifest, Vars, and AutoScaler manifest types for defining TAS applications, instances, and routes.
    You can use one TAS manifest and one autoscaler manifest only. You can use unlimited vars file manifests.

  2. Select TAS Manifest and select Continue.

  3. In Specify TAS Manifest Store, select Harness and select Continue.

  4. In Manifest Details, enter a manifest name. For example, nginx.

  5. Select File/Folder Path.

  6. In Create or Select an Existing Config file, select Project. This is where we will create the manifest.

    1. Select New, select New Folder, enter a folder name, and then select Create.

    2. Select the new folder, select New, select New File, and then enter a file name. For example, enter manifest.

    3. Enter the following in the manifest file, and then click Save.

      applications:
        - name: ((NAME))
          health-check-type: process
          timeout: 5
          instances: ((INSTANCE))
          memory: 750M
          routes:
            - route: ((ROUTE))
  7. Select Apply Selected.

    You can add only one manifest.yaml file.

  8. Select Vars.yaml path and repeat steps 6.1 and 6.2 to create a vars file. Then, enter the following information:

    NAME: harness_<+service.name>
    INSTANCE: 1
    ROUTE: harness_<+service.name>_<+infra.name>.apps.tas-harness.com
  9. Select Apply Selected.

    You can add any number of vars.yaml files.

  10. Select AutoScaler.yaml and repeat steps 6.1 and 6.2 to create an autoscaler file. Then, enter the following information:

    instance_limits:
      min: 1
      max: 2
    rules:
      - rule_type: "http_latency"
        rule_sub_type: "avg_99th"
        threshold:
          min: 100
          max: 200
    scheduled_limit_changes:
      - recurrence: 10
        executes_at: "2032-01-01T00:00:00Z"
        instance_limits:
          min: 1
          max: 2
  11. Select Apply Selected.

    You can add only one autoscaler.yaml file.

  12. Select Submit.
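
Now that the manifest, vars, and autoscaler files are in place, it can help to confirm that the manifest and vars substitute cleanly. Outside Harness, the cf CLI performs the same ((VARIABLE)) substitution, so a local sanity check might look like this (a sketch; first replace the Harness expressions such as <+service.name> in the vars file with literal values):

# validate variable substitution without actually starting the app
cf push -f manifest.yml --vars-file vars.yml --no-start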

Add the artifact for deployment

  1. In Artifacts, select Add Artifact Source.

  2. In Specify Artifact Repository Type, select Artifactory, and select Continue.

    info

    For TAS deployments, Harness supports the artifact sources listed in Important notes above. You connect Harness to these registries by using your registry account credentials.

    For this tutorial, we will use Artifactory.

  1. In Artifactory Repository, click New Artifactory Connector.

  2. Enter a name for the connector, such as JFrog, then select Continue.

  3. In Details, in Artifactory Repository URL, enter https://harness.jfrog.io/artifactory/.

  4. In Authentication, select Anonymous, and select Continue.

  5. In Delegates Setup, select Only use Delegate with all of the following tags and enter the name of the delegate created in connect to a TAS provider (step 8).

  6. Select Save and Continue.

  7. After the test connection succeeds, select Continue.

  8. In Artifact Details, enter the following details:

    1. Enter an Artifact Source Name.
    2. Select Generic or Docker repository format.
    3. Select a Repository where the artifact is located.
    4. Enter the name of the folder or repository where the artifact is located.
    5. Select Value to enter a specific artifact name. You can also select Regex and enter a tag regex to filter the artifact.
  9. Select Submit.

Define the TAS target infrastructure

You define the target infrastructure for your deployment in the Environment settings of the pipeline stage. You can define an environment separately and select it in the stage, or create the environment within the stage Environment tab.

There are two methods of specifying the deployment target infrastructure:

  • Pre-existing: the target infrastructure already exists and you simply need to provide the required settings.
  • Dynamically provisioned: the target infrastructure will be dynamically provisioned on-the-fly as part of the deployment process.

For details on Harness provisioning, go to Provisioning overview.

Pre-existing TAS infrastructure

The target space is your TAS space. This is where you will deploy your application.

  1. In Specify Environment, select New Environment.

  2. Enter the name TAS tutorial and select Pre-Production.

  3. Select Save.

  4. In Specify Infrastructure, select New Infrastructure.

  5. Enter a name, and then verify that the selected deployment type is Tanzu Application Services.

  6. Select the TAS connector you created earlier.

  7. In Organization, select the TAS org in which you want to deploy. (If you're unsure which org and space to use, see the cf CLI tip after these steps.)

  8. In Space, select the TAS space in which you want to deploy.

  9. Select Save.
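
If you're unsure which org and space to select, you can list the ones your TAS user can access with the cf CLI (assuming you're logged in to the same endpoint the connector uses):

# list available orgs, target one, then list its spaces
cf orgs
cf target -o <org-name>
cf spaces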

Dynamically provisioned TAS infrastructure

note

Currently, the dynamic provisioning documented in this topic is behind the feature flag CD_NG_DYNAMIC_PROVISIONING_ENV_V2. Contact Harness Support to enable the feature.

Here is a summary of the steps to dynamically provision the target infrastructure for a deployment:

  1. Add dynamic provisioning to the CD stage:

    1. In a Harness Deploy stage, in Environment, enable the option Provision your target infrastructure dynamically during the execution of your Pipeline.

    2. Select the type of provisioner that you want to use.

      Harness automatically adds the provisioner steps for the provisioner type you selected.

    3. Configure the provisioner steps to run your provisioning scripts.

    4. Select or create a Harness infrastructure in Environment.

  2. Map the provisioner outputs to the Infrastructure Definition:

    1. In the Harness infrastructure, enable the option Map Dynamically Provisioned Infrastructure.
    2. Map the provisioning script/template outputs to the required infrastructure settings.

Supported provisioners

The following provisioners are supported for TAS deployments:

  • Terraform
  • Terragrunt
  • Terraform Cloud
  • CloudFormation
  • Azure Resource Manager (ARM)
  • Azure Blueprint
  • Shell Script

Adding dynamic provisioning to the stage

To add dynamic provisioning to a Harness pipeline Deploy stage, do the following:

  1. In a Harness Deploy stage, in Environment, enable the option Provision your target infrastructure dynamically during the execution of your Pipeline.

  2. Select the type of provisioner that you want to use.

    Harness automatically adds the necessary provisioner steps.

  3. Set up the provisioner steps to run your provisioning scripts.

For documentation on each of the required steps for the provisioner you selected, go to the Harness topic for that provisioner (Terraform, Terragrunt, Terraform Cloud, CloudFormation, Azure Resource Manager, Azure Blueprint, or Shell Script).

Mapping provisioner output

Once you set up dynamic provisioning in the stage, you must map outputs from your provisioning script/template to specific settings in the Harness Infrastructure Definition used in the stage.

  1. In the same CD Deploy stage where you enabled dynamic provisioning, select or create (New Infrastructure) a Harness infrastructure.

  2. In the Harness infrastructure, in Select Infrastructure Type, select Tanzu Application Services if it is not already selected.

  3. In Tanzu Application Service Infrastructure Details, enable the option Map Dynamically Provisioned Infrastructure.

    A Provisioner setting is added and configured as a runtime input.

  4. Map the provisioning script/template outputs to the required infrastructure settings.

To provision the target deployment infrastructure, Harness needs specific infrastructure information from your provisioning script. You provide this information by mapping specific Infrastructure Definition settings in Harness to outputs from your template/script.

For TAS, Harness needs the following settings mapped to outputs:

  • Organization
  • Space
note

Ensure the Organization and Space settings are set to the Expression option.

For example, here's a snippet of a Terraform script that provisions the infrastructure for a Tanzu Application Services deployment and includes the required outputs:


provider "aws" {
region = "us-east-1"
}

resource "aws_opsworks_org" "pcf_org" {
name = "my-pcf-org"
}

resource "aws_opsworks_space" "pcf_space" {
name = "my-pcf-space"
organization_id = aws_opsworks_org.pcf_org.id
}

output "organization_name" {
value = aws_opsworks_org.pcf_org.name
}

output "space_name" {
value = aws_opsworks_space.pcf_space.name
}

In the Harness Infrastructure Definition, you map outputs to their corresponding settings using expressions in the format <+provisioner.OUTPUT_NAME>, such as <+provisioner.organization_name>.
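
For example, with the Terraform outputs above, the mapping would be (a sketch; the expression names match the outputs defined in the script):

Organization: <+provisioner.organization_name>
Space: <+provisioner.space_name>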

Figure: Mapped outputs.

TAS execution strategies

Now you can select the deployment strategy for this stage of the pipeline.

A basic deployment takes your Harness TAS service and deploys it on your TAS infrastructure definition.

  1. In Execution Strategies, select Basic, then select Use Strategy.

    The basic execution steps are added.

  2. Select the Basic App Setup step to define Step Parameters.

    The basic app setup configuration uses your manifest in Harness TAS to set up your application.

    1. Name - Edit the deployment step name.
    2. Timeout - Set how long you want the Harness delegate to wait for the TAS cloud to respond to API requests before timeout.
    3. Instance Count - Select whether to Read from Manifest or Match Running Instances.
      The Match Running Instances setting can be used after your first deployment to override the instances in your manifest.
    4. Existing Versions to Keep - Enter the number of existing versions you want to keep. This is to roll back to a stable version if the deployment fails.
    5. Additional Routes - Enter additional routes if you want to add routes other than the ones defined in the manifests.
    6. Select Apply Changes.
  3. Select the App Resize step to define Step Parameters.

    1. Name - Edit the deployment step name.
    2. Timeout - Set how long you want the Harness delegate to wait for the TAS cloud to respond to API requests before timeout.
    3. Ignore instance count in Manifest - Select this option to override the instance count defined in the manifest.yaml file with the values specified in the App Resize step.
    4. Total Instances - Set the number or percentage of running instances you want to keep.
    5. Desired Instances - Old Version - Set the number or percentage of instances for the previous version of the application you want to keep. If this field is left empty, the desired instance count will be the difference between the maximum possible instance count (from the manifest or match running instances count) and the number of new application instances.
    6. Select Apply Changes.
  4. Add a Tanzu Command step to your stage if you want to execute custom Tanzu commands in this step.

    1. Timeout - Set how long you want the Harness delegate to wait for the TAS cloud to respond to API requests before timeout.
    2. Script - Select one of the following options.
      • File Store - Select this option to choose a script from Project, Organization, or Account.
      • Inline - Select this option to enter a script inline.
    3. Select Apply Changes.
  5. Add an App Rollback step to your stage if you want to roll back to an older version of the application in case of deployment failure.

  6. In Advanced, configure the following options.

    • Delegate Selector - Select the delegate(s) you want to use to execute this step. You can select one or more delegates for each pipeline step. You only need to select one of a delegate's tags to select it. All delegates with the tag are selected.

    • Conditional Execution - Use the conditions to determine when this step is executed. For more information, go to conditional execution settings.

    • Failure Strategy - Define the failure strategies to control the behavior of your pipeline when there is an error in execution. For more information, go to failure strategy references and define a failure strategy.

      Expand the following section to view the error types and failure strategies supported for the steps in a Basic TAS deployment.

      Error types and failure strategies

      The error types are Delegate Provisioning Errors, Delegate Restart, Timeout Errors, and Execution-time Inputs Timeout Errors. The available failure strategies are Rollback Stage, Manual Intervention, Ignore Failure, Retry, Mark As Success, Abort, and Mark As Failure. For all four error types:

        • App Setup - All failure strategies are supported. Rollback Stage is supported, but the rollback is skipped because the app is not set up yet.
        • App Resize - All failure strategies are supported.
        • App Rollback - Rollback Stage and Retry are invalid. All other failure strategies are supported.
        • Tanzu Command - Rollback Stage is invalid. All other failure strategies are supported.
      note

      For the Tanzu Command step, Harness does not provide default rollback steps. You can do a rollback by configuring your own Rollback step.

    • Looping Strategy - Select Matrix, Repeat, or Parallelism looping strategy. For more information, go to looping strategies overview.

    • Policy Enforcement - Add or modify a policy set to be evaluated after the step is complete. For more information, go to CD governance.

  7. Select Save.

Now the pipeline stage is complete and you can deploy.

Deploy and review

  1. Click Save > Save Pipeline, then select Run. Now you can select the specific artifact to deploy.

  2. Select a Primary Artifact.

  3. Select a Tag.

  4. Select the following Infrastructure parameters.

    1. Connector
    2. Organization
    3. Space
  5. Click Run Pipeline. Harness verifies the pipeline and then runs it. You can see the status of the deployment, and pause or abort it.

  6. Toggle Console View to watch the deployment with more detailed logging.

When the deployment is successful, you can see it listed in your project's Deployments.
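
You can also confirm the result from the cf CLI (assuming you're targeting the org and space you deployed to; the app name below is hypothetical and follows the NAME pattern from the vars file):

# list apps in the targeted space and inspect the new one
cf apps
cf app harness_TAS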

Next steps

See CD tutorials for other deployment features.