
Upload Artifacts to S3

You can use the Upload Artifacts to S3 step in your CI pipelines to upload artifacts to AWS S3 or other S3-compatible providers, such as MinIO. You can also upload artifacts to GCS, upload artifacts to JFrog, and upload artifacts to Sonatype Nexus.

S3 Upload and Publish plugin

As an alternative to the Upload Artifacts to S3 step, you can use the S3 Upload and Publish Drone plugin to upload an artifact to S3 and publish it to the Artifacts tab.

For instructions, go to View artifacts on the Artifacts tab.

Prepare a pipeline

You need a CI pipeline with a Build stage.

If you haven't created a pipeline before, try one of the CI tutorials.

Prepare artifacts to upload

Add steps to your pipeline that generate artifacts to upload, such as Run steps. The steps you use depend on what artifacts you ultimately want to upload.

Upload artifacts to S3

Add an Upload Artifacts to S3 step. This step's settings are described below.

info

Depending on the stage's build infrastructure, some settings may be unavailable or located under Optional Configuration in the visual pipeline editor. Settings specific to containers, such as Set Container Resources, are not applicable when using the step in a stage with VM or Harness Cloud build infrastructure.
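If you use the YAML editor, the step looks something like the following minimal sketch. The connector ID, region, bucket, and paths are placeholder values; replace them with your own.

    - step:
        type: S3Upload
        name: Upload Artifacts to S3
        identifier: upload_artifacts_to_s3
        spec:
          connectorRef: YOUR_AWS_CONNECTOR_ID ## Placeholder: your Harness AWS connector ID.
          region: us-east-1
          bucket: your-s3-bucket ## Placeholder: the destination bucket.
          sourcePath: path/to/artifact.txt ## Placeholder: the file or folder to upload.
          target: builds/<+pipeline.sequenceId> ## Optional: path within the bucket.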

Name

Enter a name summarizing the step's purpose. Harness generates an Id (Entity Identifier Reference) based on the Name. You can edit the Id.

AWS Connector

Select the Harness AWS connector to use when connecting to AWS S3.

This step might not support all AWS connector authentication methods.

Stage variables are required for non-default ACLs and to assume IAM roles or use ARNs.

The AWS IAM roles and policies associated with the AWS account for your Harness AWS connector must allow pushing to S3. For more information, go to the AWS connector settings reference.

Stage variable required for non-default ACLs

S3 buckets use private ACLs by default. Your pipeline must have a PLUGIN_ACL stage variable if you want to use a different ACL.

  1. In the Pipeline Studio, select the stage with the Upload Artifacts to S3 step, and then select the Overview tab.
  2. In the Advanced section, add a stage variable.
  3. Enter PLUGIN_ACL as the Variable Name, set the Type to String, and then select Save.
  4. For the Value, enter the relevant ACL.
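In the stage YAML, the variable appears under variables. Here's a minimal sketch, assuming you want the public-read canned ACL:

    - stage:
        name: Build
        identifier: Build
        type: CI
        variables:
          - name: PLUGIN_ACL
            type: String
            value: public-read ## Example canned ACL; use the ACL your bucket requires.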

Stage variable required to assume IAM role or use ARNs

Stages with Upload Artifacts to S3 steps must have a PLUGIN_USER_ROLE_ARN stage variable if the step needs to assume an IAM role or use an ARN, such as when your AWS connector uses cross-account access or inherits credentials from the Harness Delegate.

To add the PLUGIN_USER_ROLE_ARN stage variable:

  1. In the Pipeline Studio, select the stage with the Upload Artifacts to S3 step, and then select the Overview tab.

  2. In the Advanced section, add a stage variable.

  3. Enter PLUGIN_USER_ROLE_ARN as the Variable Name, set the Type to String, and then select Save.

  4. For the Value, enter the full ARN value.

    • For cross-account roles, this ARN value must correspond with the AWS connector's ARN.
    • For connectors that use the delegate's IAM role, the ARN value must identify the role you want the build pod/machine to use.
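In the stage YAML, this is a standard string variable. Here's a sketch with a hypothetical role ARN:

    variables:
      - name: PLUGIN_USER_ROLE_ARN
        type: String
        value: arn:aws:iam::123456789012:role/example-upload-role ## Hypothetical ARN; use your role's full ARN.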

Region

Define the AWS region to use when uploading the artifact.

Bucket

The name of the S3 bucket where you want to upload the artifact.

Source Path

Path to the artifact file/folder that you want to upload.

If you want to upload a compressed file, you must use a Run step to compress the artifact before uploading it.
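For example, a Run step like the following sketch creates a tarball before the upload step runs. The folder name is hypothetical, and on a Kubernetes cluster build infrastructure the step would also need a connector and image. Compress whatever you plan to upload, then point Source Path at the archive:

    - step:
        type: Run
        name: compress artifacts
        identifier: compress_artifacts
        spec:
          shell: Sh
          command: tar -czf artifacts.tar.gz target/reports ## Hypothetical folder; archive your build output.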

Endpoint URL

Endpoint URL for S3-compatible providers. This setting is not needed for AWS.
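For example, if you self-host MinIO, the step spec might include an endpoint entry like the following sketch (the URL is hypothetical):

    spec:
      endpoint: http://minio.example.com:9000 ## Hypothetical self-hosted MinIO endpoint.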

Target

The path, relative to the S3 Bucket, where you want to store the artifact. Do not include the bucket name; you specified this in Bucket.

If no path is specified, the artifact is saved to [bucket]/[key].
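For example (hypothetical values), with Bucket set to my-bucket and Target set to builds/<+pipeline.sequenceId>, an artifact named artifact.txt from build 7 would be stored at my-bucket/builds/7/artifact.txt.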

Run as User

Specify the user ID to use to run all processes in the pod if running in containers. For more information, go to Set the security context for a pod.

Set Container Resources

Maximum resource limits for the container at runtime:

  • Limit Memory: Maximum memory that the container can use. You can express memory as a plain integer or as a fixed-point number with the suffixes G or M. You can also use the power-of-two equivalents, Gi or Mi. Do not include spaces when entering a fixed value. The default is 500Mi.
  • Limit CPU: The maximum number of cores that the container can use. CPU limits are measured in CPU units. Fractional requests are allowed. For example, you can specify one hundred millicpu as 0.1 or 100m. The default is 400m. For more information, go to Resource units in Kubernetes.
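In the step YAML, these limits map to the resources block in the step spec. Here's a sketch using the defaults mentioned above:

    spec:
      resources:
        limits:
          memory: 500Mi ## Default memory limit.
          cpu: 400m ## Default CPU limit.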

Timeout

Set the timeout limit for the step. Once the timeout limit is reached, the step fails and pipeline execution continues. To set skip conditions or failure handling for steps, go to Step Skip Condition Settings and Step Failure Strategy Settings.

Confirm the upload

After you add the steps and save the pipeline, select Run to run the pipeline.

On the build details page, you can see the logs for each step as they run.

After the Upload Artifacts to S3 step runs, you can see the uploaded artifacts on S3.
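If you have the AWS CLI configured locally, you can also verify the upload from a terminal, for example with aws s3 ls s3://your-s3-bucket/builds/ --recursive (the bucket and path are placeholders for your own values).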

View artifacts on the Artifacts tab

As an alternative to manually finding artifacts on S3, you can use Drone plugins to view artifacts on the Artifacts tab on the Build details page.

The Artifact Metadata Publisher Drone plugin pulls content from cloud storage and publishes it to the Artifacts tab.

Add a Plugin step that uses the artifact-metadata-publisher plugin after the Upload Artifacts to S3 step.

    - step:
        type: Plugin
        name: publish artifact metadata
        identifier: publish_artifact_metadata
        spec:
          connectorRef: account.harnessImage
          image: plugins/artifact-metadata-publisher
          settings:
            file_urls: ## Provide the URL to the target artifact that was uploaded in the Upload Artifacts to S3 step.
            artifact_file: artifact.txt
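The file_urls value is typically the artifact's S3 URL, for example https://your-s3-bucket.s3.us-east-1.amazonaws.com/builds/artifact.txt for an AWS bucket (the bucket, region, and path shown are hypothetical).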