Upload Artifacts to S3
You can use the Upload Artifacts to S3 step in your CI pipelines to upload artifacts to AWS or other S3 providers, such as MinIO. You can also upload artifacts to GCS, upload artifacts to JFrog, and upload artifacts to Sonatype Nexus.
As an alternative to the Upload Artifacts to S3 step, you can use the S3 Upload and Publish Drone plugin to upload an artifact to S3 and publish it to the Artifacts tab.
For instructions, go to View artifacts on the Artifacts tab.
Prepare a pipeline
You need a CI pipeline with a Build stage.
If you haven't created a pipeline before, try one of the CI tutorials.
Prepare artifacts to upload
Add steps to your pipeline that generate artifacts to upload, such as Run steps. The steps you use depend on what artifacts you ultimately want to upload.
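For example, a Run step like the following minimal sketch could produce a file for a later upload step. The command and file name are placeholders, and depending on your build infrastructure you might also need to specify a container image and connector in the Run step.

```yaml
- step:
    type: Run
    name: generate report
    identifier: generate_report
    spec:
      shell: Sh
      command: |-
        # Placeholder command: write some build output to a file that a later step uploads.
        echo "build $(date)" > report.txt
```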
Upload artifacts to S3
Add an Upload Artifacts to S3 step. This step's settings are described below.
Depending on the stage's build infrastructure, some settings may be unavailable or located under Optional Configuration in the visual pipeline editor. Settings specific to containers, such as Set Container Resources, are not applicable when using the step in a stage with VM or Harness Cloud build infrastructure.
Name
Enter a name summarizing the step's purpose. Harness generates an Id (Entity Identifier Reference) based on the Name. You can edit the Id.
AWS Connector
Select the Harness AWS connector to use when connecting to AWS S3.
This step might not support all AWS connector authentication methods.
Stage variables are required for non-default ACLs and to assume IAM roles or use ARNs.
The AWS IAM roles and policies associated with the AWS account for your Harness AWS connector must allow pushing to S3. For more information, go to the AWS connector settings reference.
Stage variable required for non-default ACLs
S3 buckets use private ACLs by default. Your pipeline must have a `PLUGIN_ACL` stage variable if you want to use a different ACL.
- In the Pipeline Studio, select the stage with the Upload Artifacts to S3 step, and then select the Overview tab.
- In the Advanced section, add a stage variable.
- Enter `PLUGIN_ACL` as the Variable Name, set the Type to String, and then select Save.
- For the Value, enter the relevant ACL.
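In the pipeline YAML, the resulting stage variable might look like this sketch (the ACL value is a placeholder):

```yaml
variables: # Stage variables, defined under the stage that contains the Upload Artifacts to S3 step.
  - name: PLUGIN_ACL
    type: String
    value: public-read # Placeholder; replace with the ACL you want to apply.
```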
Stage variable required to assume IAM role or use ARNs
Stages with Upload Artifacts to S3 steps must have a `PLUGIN_USER_ROLE_ARN` stage variable if:

- Your AWS connector's authentication uses a cross-account role (ARN). You can use `PLUGIN_USER_ROLE_ARN` to specify the full ARN value corresponding with the AWS connector's ARN.
- Your AWS connector uses Assume IAM Role on Delegate authentication. If your connector doesn't use AWS Access Key authentication, then the Upload Artifacts to S3 step uses the IAM role of the build pod or build VM (depending on your build infrastructure). You can use `PLUGIN_USER_ROLE_ARN` to select a different role than the default role assumed by the build pod/machine. This is similar to `sts assume-role`.
To add the `PLUGIN_USER_ROLE_ARN` stage variable:

- In the Pipeline Studio, select the stage with the Upload Artifacts to S3 step, and then select the Overview tab.
- In the Advanced section, add a stage variable.
- Enter `PLUGIN_USER_ROLE_ARN` as the Variable Name, set the Type to String, and then select Save.
- For the Value, enter the full ARN value.
  - For cross-account roles, this ARN value must correspond with the AWS connector's ARN.
  - For connectors that use the delegate's IAM role, the ARN value must identify the role you want the build pod/machine to use.
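In the pipeline YAML, the resulting stage variable might look like this sketch (the ARN is a placeholder):

```yaml
variables: # Stage variables, defined under the stage that contains the Upload Artifacts to S3 step.
  - name: PLUGIN_USER_ROLE_ARN
    type: String
    value: arn:aws:iam::123456789012:role/example-upload-role # Placeholder ARN; use the role you want to assume.
```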
Region
Define the AWS region to use when uploading the artifact.
Bucket
The name of the S3 bucket where you want to upload the artifact.
Source Path
Path to the artifact file/folder that you want to upload.
If you want to upload a compressed file, you must use a Run step to compress the artifact before uploading it.
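For example, a Run step like this sketch could bundle a build output directory into a single archive before the upload step runs (the paths and archive name are placeholders):

```yaml
- step:
    type: Run
    name: compress artifacts
    identifier: compress_artifacts
    spec:
      shell: Sh
      command: |-
        # Placeholder paths: bundle the output directory into one archive for upload.
        tar -czf artifact.tar.gz ./output
```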
Endpoint URL
Endpoint URL for S3-compatible providers. This setting is not needed for AWS.
Target
The path, relative to the S3 Bucket, where you want to store the artifact. Do not include the bucket name; you specified this in Bucket.
If no path is specified, the artifact is saved to `[bucket]/[key]`.
Run as User
Specify the user ID to use to run all processes in the pod if running in containers. For more information, go to Set the security context for a pod.
Set Container Resources
Maximum resource limits for the container at runtime:

- Limit Memory: Maximum memory that the container can use. You can express memory as a plain integer or as a fixed-point number with the suffixes `G` or `M`. You can also use the power-of-two equivalents, `Gi` or `Mi`. Do not include spaces when entering a fixed value. The default is `500Mi`.
- Limit CPU: The maximum number of cores that the container can use. CPU limits are measured in CPU units. Fractional requests are allowed; for example, you can specify one hundred millicpu as `0.1` or `100m`. The default is `400m`. For more information, go to Resource units in Kubernetes.
Timeout
Set the timeout limit for the step. Once the timeout limit is reached, the step fails and pipeline execution continues. To set skip conditions or failure handling for steps, go to the step skip condition and step failure strategy settings.
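Putting these settings together, a minimal sketch of the step in YAML might look like the following. The connector ID, region, bucket, and paths are placeholders; check the field names suggested by your YAML editor.

```yaml
- step:
    type: S3Upload
    name: upload to S3
    identifier: upload_to_S3
    spec:
      connectorRef: YOUR_AWS_CONNECTOR # Placeholder Harness AWS connector ID.
      region: us-east-1 # AWS region for the bucket.
      bucket: your-bucket # Target S3 bucket.
      sourcePath: artifact.tar.gz # File or folder to upload.
      target: <+pipeline.name>/<+pipeline.sequenceId> # Optional path inside the bucket.
```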
Confirm the upload
After you add the steps and save the pipeline, select Run to run the pipeline.
On the build details page, you can see the logs for each step as they run.
After the Upload Artifacts to S3 step runs, you can see the uploaded artifacts on S3.
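For example, with the AWS CLI you can run `aws s3 ls s3://your-bucket/your-target-path/ --recursive` (replace the bucket and path with your own values) to list the uploaded objects.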
View artifacts on the Artifacts tab
As an alternative to manually finding artifacts on S3, you can use Drone plugins to view artifacts on the Artifacts tab on the Build details page.
- Artifact Metadata Publisher plugin
- S3 Upload and Publish plugin
The Artifact Metadata Publisher Drone plugin pulls content from cloud storage and publishes it to the Artifacts tab.
Add the Plugin step after the Upload Artifacts to S3 step.
In the Visual editor, configure the Plugin step settings as follows:
- Name: Enter a name.
- Container Registry: Select a Docker connector.
- Image: Enter `plugins/artifact-metadata-publisher`.
- Settings: Add the following two settings as key-value pairs.
  - `file_urls`: The URL to the target artifact that was uploaded in the Upload Artifacts to S3 step.
  - `artifact_file`: `artifact.txt`
In the YAML editor, add a Plugin step that uses the `artifact-metadata-publisher` plugin:
```yaml
- step:
    type: Plugin
    name: publish artifact metadata
    identifier: publish_artifact_metadata
    spec:
      connectorRef: account.harnessImage
      image: plugins/artifact-metadata-publisher
      settings:
        file_urls: ## Provide the URL to the target artifact that was uploaded in the Upload Artifacts to S3 step.
        artifact_file: artifact.txt
```
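The value you provide for `file_urls` depends on your bucket and provider. For an AWS bucket that allows direct object access, the URL typically follows the virtual-hosted pattern, for example `https://your-bucket.s3.us-east-1.amazonaws.com/your-target-path/artifact.tar.gz` (bucket, region, and path are placeholders); confirm the URL format for your S3 provider.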
The S3 Upload and Publish Drone plugin uploads a specified file or directory to AWS S3 and publishes it to the Artifacts tab.
If you use this plugin, you do not need an Upload Artifacts to S3 step in your pipeline.
Add a Plugin step that uses the `drone-s3-upload-publish` plugin.

In the Visual editor, configure the Plugin step settings as follows:
- Name: Enter a name.
- Container Registry: Select a Docker connector.
- Image: Enter `harnesscommunity/drone-s3-upload-publish`.
- Settings: Add the following settings as key-value pairs.
  - `aws_access_key_id`: An expression referencing a Harness secret or pipeline variable containing your AWS access key ID, such as `<+pipeline.variables.AWS_ACCESS>`.
  - `aws_secret_access_key`: An expression referencing a Harness secret or pipeline variable containing your AWS secret access key, such as `<+pipeline.variables.AWS_SECRET>`.
  - `aws_default_region`: Your default AWS region, such as `ap-southeast-2`.
  - `aws_bucket`: The target S3 bucket.
  - `artifact_file`: `url.txt`
  - `source`: The path to store and retrieve the artifact in the S3 bucket.
- Image Pull Policy: Select If Not Present.
In the YAML editor, add a Plugin step that uses the `drone-s3-upload-publish` plugin, for example:
```yaml
- step:
    type: Plugin
    name: s3-upload-publish
    identifier: custom_plugin
    spec:
      connectorRef: account.harnessImage
      image: harnesscommunity/drone-s3-upload-publish
      settings:
        aws_access_key_id: <+pipeline.variables.AWS_ACCESS> ## Reference to a Harness secret or pipeline variable containing your AWS access key ID.
        aws_secret_access_key: <+pipeline.variables.AWS_SECRET> ## Reference to a Harness secret or pipeline variable containing your AWS secret access key.
        aws_default_region: ap-southeast-2 ## Set to your default AWS region.
        aws_bucket: bucket-name ## The target S3 bucket.
        artifact_file: url.txt
        source: OBJECT_PATH ## Path to store and retrieve the artifact from S3.
      imagePullPolicy: IfNotPresent
```
For `aws_access_key_id` and `aws_secret_access_key`, use expressions to reference Harness secrets or pipeline variables containing your AWS access key ID and secret access key.
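For example, assuming you store the credentials in Harness secrets, the pipeline variables referenced above might be declared like this sketch (the secret identifiers are placeholders):

```yaml
variables: # Pipeline variables, defined at the pipeline level.
  - name: AWS_ACCESS
    type: Secret
    value: your_aws_access_key_id_secret # Placeholder Harness secret identifier.
  - name: AWS_SECRET
    type: Secret
    value: your_aws_secret_access_key_secret # Placeholder Harness secret identifier.
```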