Continuous Delivery & GitOps FAQs
This article addresses some frequently asked questions about Harness Continuous Delivery & GitOps.
For an overview of Harness' support for platforms, methodologies, and related technologies, go to Supported platforms and technologies.
For a list of CD supported platforms and tools, go to CD integrations.
For an overview of Harness concepts, see Learn Harness' key concepts.
General FAQs
How does Harness calculate pricing for CD?
See Service-based licensing and usage for CD.
My definition of a service differs from the above standard definition. How will pricing work in my case?
Harness allows deployment of various custom technologies such as Terraform scripts, background jobs, and other non-specified deployments. These require custom evaluation to assess the correct Licensing model. Please contact the Harness Sales team to discuss your specific technologies and deployment use cases.
See the Pricing FAQ at Harness pricing.
Are there other mechanisms to license Harness CD beyond services?
See the Pricing FAQ at Harness pricing.
Yes. The Harness Sales team is happy to work with you to understand the specifics of what you are trying to achieve and propose a custom licensing/pricing structure.
Do unused/stale services consume a license?
See the Pricing FAQ at Harness pricing.
Harness CD actively tracks and provides visibility into all active services that consume a license.
An active service is defined as a service that has been deployed at least once in the last 30 days. A service deemed inactive (no deployments in the last 30 days) does not consume a license.
How will I know if I am exceeding my licensed service usage?
See the Pricing FAQ at Harness pricing.
Harness CD has built-in license tracking and management dashboards that provide you real-time visibility into your license allocation and usage.
If you notice that you are nearing or exceeding your licensed services, please get in touch with the Harness Sales team to plan ahead and ensure continued usage and compliance of the product.
How many users can I onboard onto Harness CD? Is there a separate pricing for Users?
Harness CD has been designed to empower your entire Engineering and DevOps organization to deploy software with agility and reliability. We do not charge for users who onboard Harness CD and manage various aspects of the deployment process, including looking through deployment summaries, reports, and dashboards. We empower users with control and visibility while pricing only for the actual ‘services’ you deploy as a team.
If I procure a certain number of service licenses on an annual contract, and realize that more licenses need to be added, am I able to procure more licenses mid-year through my current contract?
See the Pricing FAQ at Harness pricing.
Yes, the Harness Sales team is happy to work with you and help fulfill any Harness-related needs, including mid-year plan upgrades and expansions.
If I procure a certain number of service licenses on an annual contract, and realize that I may no longer need as many, am I able to reduce my licenses mid-year through my current contract?
See the Pricing FAQ at Harness pricing.
While an annual contract cannot be lowered mid-year through the contract, please contact us and we will be happy to work with you. If you are uncertain at the start of the contract about how many licenses to procure, you can buy what you use today and expand mid-year as your usage grows. You can also start with a monthly contract and convert to an annual subscription.
What if I am building an open source project?
We love Open Source and are committed to supporting our Community. We recommend the open-source Gitness for hosting your source code repository as well as CI/CD pipelines.
Contact us and we will be happy to provide you with a no-restriction SaaS plan!
What if I add more service instance infrastructure outside of Harness?
See the Pricing FAQ at Harness pricing.
If you increase the Harness-deployed service instance infrastructure outside of Harness, Harness considers this increase part of the service instance infrastructure and licensing is applied.
When is a service instance removed?
If Harness cannot find the service instance infrastructure it deployed, it removes it from the Services dashboard.
If Harness cannot connect to the service instance infrastructure, it will retry until it determines if the service instance infrastructure is still there.
If the instance/pod is in a failed state does it still count towards the service instance count?
Harness performs a steady state check during deployment and requires that each instance/pod reach a healthy state.
A Kubernetes liveness probe failing later would mean the pod is restarted. Harness continues to count it as a service instance.
A Kubernetes readiness probe failing later would mean traffic is no longer routed to the pods. Harness continues to count pods in that state.
Harness does not count an instance/pod if it no longer exists. For example, if the replica count is reduced.
What deployment strategies can I use?
Harness supports all deployment strategies, such as blue/green, rolling, and canary.
See Deployment concepts and strategies.
How do I filter deployments on the Deployments page?
You can filter deployments on the Deployments page according to multiple criteria, and save these filters as a quick way to filter deployments in the future.
How do I know which Harness Delegates were used in a deployment?
Harness displays which Delegates performed each task in the Details of each step.
Can I restrict deployments to specific User Groups?
Yes, you can enable the Role permission Pipeline Execute and then apply that Role to specific User Groups.
See Manage roles.
Can I deploy a service to multiple infrastructures at the same time?
Each stage has a service and target Infrastructure. If your Pipeline has multiple stages, you can deploy the same service to multiple infrastructures.
See Define your Kubernetes target infrastructure.
Can I re-run a failed deployment?
Yes, select Re-run Pipeline.
How do I handle the scenario where a PowerShell script does not correctly return the status code on failure?
This is a known PowerShell behavior: the error code is not always returned correctly, but the step needs it to proceed and to reflect the status correctly. Consider wrapping the code in the script like below:
$ErrorActionPreference = [System.Management.Automation.ActionPreference]::Stop
<execution code>
exit $LASTEXITCODE
Can we persist variables after the pipeline run is completed?
Harness does not persist pipeline variables; they are only accessible within the context of the execution. You can make an API call to write the value as a Harness config file and access that file later, or keep a config file in Git, push the variable to it using a shell script, and access the same config file later.
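As an illustration, here is a minimal sketch of the Git-based approach in a shell script step, assuming a repository the delegate can push to; REPO_URL and the variable names are placeholders:
# Persist a pipeline variable to a config file in Git (hypothetical repo and variable names).
git clone "$REPO_URL" varstore && cd varstore
echo "LAST_RUN_STATUS=<+pipeline.variables.my_variable>" > pipeline-vars.env
git add pipeline-vars.env
git commit -m "Persist variables from execution <+pipeline.executionId>"
git push
A later pipeline can clone the same repository and source pipeline-vars.env to read the value back.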
How do I access one pipeline's variables from another pipeline?
Directly, it may not be possible.
As a workaround, you can create a project, org, or account level variable. A shell script can be added to pipeline P1 after the deployment to update this variable with the deployment stage status (success or failure); pipeline P2 can then read this variable and perform its task based on the value.
The shell script can use this API to update the value of the variable: https://apidocs.harness.io/tag/Variables#operation/updateVariable
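A minimal sketch of such a shell script, assuming the request body shape documented in the Variables API reference above; the account/org/project identifiers, the variable name, and the stage identifier "deploy" are placeholders:
# Update an account/org/project-level variable with the deploy stage status (identifiers are hypothetical).
curl -X PUT 'https://app.harness.io/ng/api/variables?accountIdentifier=<account_id>' \
  -H 'Content-Type: application/json' \
  -H "x-api-key: $HARNESS_API_KEY" \
  -d '{
    "variable": {
      "identifier": "p1_deploy_status",
      "name": "p1_deploy_status",
      "orgIdentifier": "<org_id>",
      "projectIdentifier": "<project_id>",
      "type": "String",
      "spec": { "valueType": "FIXED", "fixedValue": "<+pipeline.stages.deploy.status>" }
    }
  }'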
Why do the APIs return JSON for some resource configurations but not for the get-pipeline-details API?
The get API call for a pipeline returns YAML because the pipeline is stored as YAML in Harness. Since this API fetches the pipeline, it returns the pipeline's YAML definition rather than JSON. If you still need a JSON representation of the output, you can use a parser like yq to convert the response.
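For example, a minimal sketch assuming yq v4 (the mikefarah Go implementation) is installed; the URL and identifiers are placeholders:
# Convert the YAML returned by the get-pipeline API into JSON.
curl -s -H "x-api-key: $HARNESS_API_KEY" \
  'https://app.harness.io/pipeline/api/pipelines/<pipeline_id>?accountIdentifier=<account_id>&orgIdentifier=<org_id>&projectIdentifier=<project_id>' \
  > pipeline.yaml
yq -o=json '.' pipeline.yaml > pipeline.json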
How to exit a workflow without marking it as failed?
You can add a failure strategy in the deploy stage, either ignoring the failure for the shell script step or requiring a manual intervention where you can mark the step as a success.
I have two deployments in a pipeline. Is it possible to roll back the stage 1 deployment if the stage 2 tests return errors?
We have a pipeline rollback feature that is behind a feature flag. This might work better, as you would be able to keep both stages separate, with different steps, as before, but a failure in the test-job stage could roll back both stages.
Also, for the Kubernetes job, if you use the Apply step instead of Rollout, the step will wait for the job to complete before proceeding, and you would not need the Wait step.
Can I send custom Slack notifications from CD pipelines?
Yes. You can create a shell script that sends notifications through Slack. For details, refer to this article:
https://discuss.harness.io/t/custom-slack-notifications-using-shell-script/749
Can I create an environment via API?
Yes, we support APIs for NextGen: https://apidocs.harness.io/tag/Environments#operation/createEnvironmentV2
curl -i -X POST \
'https://app.harness.io/ng/api/environmentsV2?accountIdentifier=string' \
-H 'Content-Type: application/json' \
-H 'x-api-key: YOUR_API_KEY_HERE' \
-d '{
"orgIdentifier": "string",
"projectIdentifier": "string",
"identifier": "string",
"tags": {
"property1": "string",
"property2": "string"
},
"name": "string",
"description": "string",
"color": "string",
"type": "PreProduction",
"yaml": "string"
}'
Download Artifact for WinRM with Nexus does not work when the Windows machine is behind a proxy in FirstGen (CG)
Nexus artifact download is supported in NextGen but not in FirstGen (CG), so you can use a custom PowerShell script, something like the below:
Invoke-WebRequest -Uri "${URI}" -Headers $Headers -OutFile "${OUT_FILE}" -Proxy "$env:HTTP_PROXY"
How can we automatically create a new service whenever a new service YAML is uploaded to my source repo?
You can create a pipeline that makes an API call for service creation, and add a trigger on the source repo where the service YAML is uploaded. Whenever a new service YAML arrives, the pipeline is triggered; a shell script step can fetch the new service YAML using the Git CLI and use that YAML in the service-creation API call, as sketched below.
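A minimal sketch of such a shell script step, assuming the createServiceV2 endpoint from the Harness API reference and jq on the delegate; the repo URL, file path, and identifiers are placeholders:
# Fetch the newly added service YAML and create the service via the NG API (all names hypothetical).
git clone --depth 1 "$REPO_URL" src
SERVICE_YAML=$(cat src/services/new-service.yaml)
curl -X POST 'https://app.harness.io/ng/api/servicesV2?accountIdentifier=<account_id>' \
  -H 'Content-Type: application/json' \
  -H "x-api-key: $HARNESS_API_KEY" \
  -d "$(jq -n --arg yaml "$SERVICE_YAML" \
    '{identifier: "new_service", orgIdentifier: "<org_id>", projectIdentifier: "<project_id>", name: "new-service", yaml: $yaml}')"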
How do I use all environments and only select infrastructures for multiple environment deployments?
Use filtered lists for this purpose. Specify "Filter on Entities" as Environment in the first filter and set "Type" to all. For the infrastructure, add another filter and provide the tag filter.
How do I list GitHub tags for a custom artifact when the curl returns a JSON array without any root element?
You cannot provide an array directly to the custom artifact source; it needs a root element to parse the JSON response.
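As a workaround, you can wrap the array under a root element before handing it to the custom artifact source. A minimal sketch, assuming jq is available and OWNER/REPO are placeholders:
# Wrap the GitHub tags array in a root element so it can be parsed.
curl -s 'https://api.github.com/repos/OWNER/REPO/tags' | jq '{tags: .}'
You can then point the custom artifact source's array path at the tags key.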
How do I use a stage variable inside a shell script?
Stage variables can be accessed in pipelines through variable expressions. Hover over your variable name to see an option to copy the variable expression path, and reference that path in your shell script.
How can we return dynamically generated information to a calling application upon the successful completion of pipelines initiated by API calls from other applications?
You can configure pipeline outputs throughout the stages to include all the data you want to compile. Then, upon execution completion, you can include a shell script that references these outputs and sends the compiled information to the desired API.
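For instance, here is a minimal sketch of such a final shell script step; the endpoint and the referenced stage/step identifiers are placeholders for your own pipeline:
# Send compiled pipeline outputs to an external API (hypothetical endpoint and identifiers).
curl -X POST "$EXTERNAL_API_URL" \
  -H 'Content-Type: application/json' \
  -d '{
    "executionUrl": "<+pipeline.executionUrl>",
    "deployedTag": "<+pipeline.stages.deploy.spec.execution.steps.my_step.output.outputVariables.my_variable>"
  }'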
Can we get details such as which branch triggered the pipeline, who triggered it, and the time the pipeline failed or terminated, in a Microsoft Teams notification?
These details are not available by default; only status, time, pipeline name, URL, and similar fields are sent. If you need these details, you may need to use a custom shell script.
How do I pass a list of multiple domains for domain whitelisting when using the API?
The domain whitelisting API takes domains as an input array. If multiple domains need to be passed, provide them as comma-separated string entries in the array. Below is a sample:
curl -i -X PUT \
'https://app.harness.io/ng/api/authentication-settings/whitelisted-domains?accountIdentifier=xxxx' \
-H 'Content-Type: application/json' \
-H 'x-api-key: REDACTED' \
-d '["gmail.com","harness.io"]'
I have a pipeline containing different stages DEV-QA-UAT-PROD. In UAT I'm using Canary deployment and in PROD it's Blue-Green. In these scenarios how Harness provides proper Roll Back strategies?
Harness provides a declarative rollback feature that can perform rollbacks effectively in different deployment scenarios.
For Canary deployment in UAT, you can define the percentage of traffic to route to the new version and set up conditions to switch traffic between the old and new versions. If an anomaly is detected during the canary deployment, Harness will automatically trigger a rollback to the previous version.
For Blue-Green deployment in PROD, you can define the conditions to switch traffic between the blue and green environments. If an issue is detected in the green environment, you can easily switch back to the blue environment using the declarative rollback feature.
You can define the failure strategy on stages and steps in your pipeline to set up proper rollback strategies. You can add a failure strategy in the deploy stage by either ignoring the failure for the shell script or getting a manual intervention where you can mark that step as a success. Additionally, you can use the declarative rollback feature provided by Harness to perform rollbacks effectively in different deployment scenarios.
How do I pass the dynamic tag of an image from the CI pipeline to the CD pipeline to pull the image?
A project, org, or account level variable can be created, and a Shell Script/Run step can be added in the P1 pipeline to export the required tag into that variable; the P2 pipeline can then access this variable and perform its task based on the value.
The shell script can use the same Variables API mentioned above to update the value of the variable.
Where can one find an example API request and response for executing a pipeline with an input set?
One can use the below curl example to do so :
curl -i -X POST \
'https://app.harness.io/pipeline/api/pipeline/execute/{identifier}/inputSetList?accountIdentifier=string&orgIdentifier=string&projectIdentifier=string&moduleType=string&branch=string&repoIdentifier=string&getDefaultFromOtherRepo=true&useFQNIfError=false&notesForPipelineExecution=' \
-H 'Content-Type: application/json' \
-H 'x-api-key: YOUR_API_KEY_HERE' \
-d '{
"inputSetReferences": [
"string"
],
"withMergedPipelineYaml": true,
"stageIdentifiers": [
"string"
],
"lastYamlToMerge": "string"
}'
Please read more on this in the following documentation on Execute a Pipeline with Input Set References.
How do we pass the output list of the first step to the next step's looping strategy "repeat", when the output is a list or array that needs to be parsed?
The output variable of the shell script is a string that you are trying to pass as a list of strings. To avoid this:
- First, convert your array/list into a string and pass it as an output variable.
- Then, convert this string back into a list of strings before passing it to the repeat strategy.
Please read more on this in the following Documentation.
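A minimal sketch of the two parts, assuming a Bash shell step whose declared output variable is the joined string; the step identifier and variable names are placeholders:
# Step 1: convert the array into a comma-separated string and expose it as an output variable.
fruits=("apple" "banana" "cherry")
fruit_list=$(IFS=,; echo "${fruits[*]}")   # yields "apple,banana,cherry"
# In the next step's repeat strategy, split the string back into a list, e.g.:
# repeat:
#   items: <+<+execution.steps.step1.output.outputVariables.fruit_list>.split(",")>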
How do I run my step on the delegate host?
You can create a shell script step and select Execute on Delegate under Execution Target.
How to fetch files from the harness file store in the run step?
To fetch files from the Harness file store in a Run step, you can use the following example:
- step:
    type: Run
    name: Fetch Files from File Store
    identifier: fetch_files
    spec:
      shell: Sh
      command: |
        harness file-store download-file --file-name <file_name> --destination <destination_path>
Replace <file_name> with the name of the file you want to fetch from the file store, and <destination_path> with the path where you want to save the file on the target host.
Does Harness support multiple IaC provisioners?
Harness supports multiple IaC provisioners; examples include Terraform, Terragrunt, CloudFormation, and Shell Script provisioning.
How do I setup a Pipeline Trigger for Tag and Branch creation in Github?
The out-of-the-box GitHub trigger type does not currently support this. However, you can use a Custom Webhook trigger and follow the steps below to achieve it:
- Create a Custom Webhook trigger.
- Copy the webhook URL of the created trigger.
- Configure a GitHub repository webhook, pasting the URL copied in step 2 into the Payload URL field.
- Set the content type to application/json.
- Select Let me select individual events. for the Which events would you like to trigger this webhook? section.
- Check the Branch or tag creation checkbox.
What are reserved symbols in PowerShell, and how do I handle them in Harness secrets in Powershell scripts?
Symbols such as |, ^, &, <, >, and % are reserved in PowerShell and can have special meanings. It's important to be aware of these symbols, especially when using them as values in Harness secrets.
If a reserved symbol needs to be used as a value in a Harness secret for PowerShell scripts, it should be escaped using the ^ symbol. This ensures that PowerShell interprets the symbol correctly and does not apply any special meaning to it.
The recommended expression to reference a Harness secret is <+secrets.getValue('secretID')>. This ensures that the secret value is obtained securely and without any issues, especially when dealing with reserved symbols.
Which API is utilized for modifying configuration in the update-git-metadata API request for pipelines?
Please find an example API call below:
curl --location --request PUT 'https://app.harness.io/gateway/pipeline/api/pipelines/<PIPELINE_IDENTIFIER>/update-git-metadata?accountIdentifier=<ACCOUNT_ID>&orgIdentifier=<ORG_ID>&projectIdentifier=<PROJECT_IDENTIFIER>&connectorRef=<CONNECTOR_REF_TO_UPDATE>&repoName=<REPO_NAME_TO_UPDATE>&filePath=<FILE_PATH_TO_UPDATE>' \
-H 'x-api-key: <API_KEY>' \
-H 'content-type: application/json'
Please read more on this in the following Documentation
How do I perform iisreset on a Windows machine?
You can create a WinRM connector and use a PowerShell script to perform the iisreset. Make sure the user credentials used for the connection have admin access.
If the assigned delegate executing a task goes down, does the task get re-assigned to other available delegates?
If a delegate fails or disconnects, the assigned task will fail; we do not perform re-assignment. If the step is idempotent, you can use a retry strategy to re-execute the task.
If the "All environments" option is used for a multiple environment deployment, why can we not specify infrastructure?
When the "All environments" option is selected we do not provide infrastructure selection in the pipeline editor. The infrastructure options are available in the run form.
We have an updated manifest file for deployment, but the delegate seems to be fetching the old manifest. How can we update this?
You can clear the locally cached repo. The local repository is stored on the delegate at /opt/harness-delegate/repository/gitFileDownloads/<account-id>/<connector-id>/<repo-name>/<sha1-hash-of-repo-url>.
Can we get the pipeline execution URL from the custom trigger API response?
The custom trigger API response contains a generic URL for pipeline execution, not the exact execution. If you need the exact pipeline execution for a specific trigger, use the Trigger Activity page.
Does Harness offer a replay feature similar to Jenkins?
Yes, Harness provides a feature similar to Jenkins' Replay option, allowing you to rerun a specific build or job with the same parameters and settings as the previous execution. In Harness, this functionality is known as Retry Failed Executions. You can resume pipeline deployments from any stage or from a specific stage within the pipeline.
To learn more about how to utilize this feature in Harness, go to Resume pipeline deployments documentation.
How can I handle uppercase environment identifiers in Harness variables and deploy pipelines?
Harness variables provide flexibility in managing environment identifiers, but dealing with uppercase identifiers like UAT and DR can pose challenges. One common requirement is converting these identifiers to lowercase for consistency. Here's how you can address this:
- Using the ternary operator: While if-else statements aren't directly supported in variable expressions, you can leverage the ternary operator to achieve conditional logic, as shown in the sketch after this list.
- Updating the environment setup: Another approach is to update your environment setup so identifiers like UAT and DR are stored in lowercase. By maintaining consistency in the environment setup, you avoid case-sensitivity issues in your deployment pipelines.
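For example, a minimal sketch inside a shell script step, assuming Harness JEXL string methods (such as toLowerCase()) and the ternary operator are available on your expressions; the identifiers are placeholders:
# Lowercase the environment identifier (UAT -> uat, DR -> dr).
LOWER_ENV=<+<+env.identifier>.toLowerCase()>
# Ternary example; note that Harness ternaries must not have spaces around ? and :.
IS_DR=<+<+env.identifier>=="DR"?"true":"false">
echo "Deploying to $LOWER_ENV (DR: $IS_DR)"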
What does "buffer already closed for writing" mean?
This error occurs in SSH or WinRM connections when some command is still executing and the connection is closed by the host. It needs further debugging by looking into logs and server resource constraints.
Where do I get the metadata for the Harness download/copy command?
This metadata is detected in the service used for the deployment. Ideally, you would have already configured an artifact, and the command would use the same config to get the metadata.
Can I use SSH to copy an artifact to a target Windows host?
If your deployment type is WinRM, then WinRM is the default option used to connect to the Windows host.
Why doesn't the pipeline skip steps in a step group when another step in the group fails?
If you want this to occur, you need to define a conditional execution of <+stage.liveStatus> == "SUCCESS" on each step in the group.
Why am I getting an error that the input set does not exist in the selected Branch?
This happens because pipelines and input sets need to exist in the same branch when storing them in Git. For example, if your pipeline exists in the dev branch but your input set exists in the main branch, then loading the pipeline in the dev branch and attempting to load the input set will cause this error. To fix this, please ensure that both the pipeline and input set exist in the same branch and same repository.
When attempting to import a .yaml file from GitHub to create a new pipeline, the message This file is already in use by another pipeline is displayed. Given that there are no other pipelines in this project, is there a possibility of a duplicate entry that I may not be aware of?
It's possible that there are two pipeline entities in the database, each linked to the same file path from the Harness account and the GitHub URL. Trying to import the file again may trigger the File Already Imported pop-up on the screen. However, users can choose to bypass this check by clicking the Import button again.
How can you seamlessly integrate Docker Compose for integration testing into your CI pipeline without starting from scratch?
Run services for integration in the background using a docker-compose.yaml file, and connect to these services via their listening ports. Alternatively, while running docker-compose up in CI with an existing docker-compose.yaml is possible, it can complicate the workflow and limit pipeline control, including the ability to execute each step, gather feedback, and implement failure strategies.
What lead time do customers have before the CI starts running the newer version of images?
Customers typically have a one-month lead time before the CI starts running the newer versions of images. This allows them to conduct necessary tests and security scans on the images before deployment.
Can I export my entire FirstGen deployment history and audit trail from Harness?
You can use Harness FirstGen APIs to download your FirstGen audit trail and deployment history.
Why does a remote input set need a commit message input?
Harness requires a commit message so that it can store the input set YAML in your Git repo by making a commit to that repo.
What is the difference between "Remote Input Set" and "Import Input Set from Git"?
Remote Input Set is used when you create an input set and want to store it remotely in your SCM.
Import Input Set from Git is used when you already have an input set YAML in your Git repo that you want to import to Harness. This is a one-time import.
Why does a deleted service still show on the overview page?
The dashboard shows historical deployment data for the selected time frame. Once the deleted service has no deployments within the selected time frame, it stops showing up on the dashboard.
On the overview page, why do Environments show 0 when there are environments in the project?
The overview page offers a summary of deployments and overall health metrics for the project. The fields remain empty while there are no deployments within the project; once there is a deployment in the project, these fields are updated automatically.
What is the log limit of CI step logs, and how can one export the logs?
Harness deployment logging has the following limitations:
- A hard limit of 25MB for an entire step's logs. Deployment logs beyond this limit are skipped and not available for download. The log limit for Harness CI steps is 5MB, and you can export full CI logs to an external cache.
- The Harness log service has a limit of 5000 lines. Logs rendered in the Harness UI are truncated from the top if the logs exceed the 5000-line limit.
For more information, go to the documentation on logs and limitations and Truncated execution logs.
Does Harness provide any scripts to migrate GCR triggers to GAR?
No. You can create a script and use the API to re-create the triggers. Please read more on this in our API Docs.
In a Helm deployment with custom certificates, what is essential regarding DNS-compliant keys, and how should delegates be restarted after modifying the secret for changes to take effect?
Please follow the suggestions below:
- Ensure that the secret containing custom certificates strictly adheres to DNS-compliant keys (in particular, avoid underscores). Following any modification to this secret, restart all delegates so the changes take effect.
- Helm Installation Command:
helm upgrade -i nikkelma-240126-del --namespace harness-delegate-ng --create-namespace \
  harness-delegate/harness-delegate-ng \
  --set delegateName=nikkelma-240126-del \
  --set accountId=_specify_account_Id_ \
  --set delegateToken=your_Delegatetoken_= \
  --set managerEndpoint=https://app.harness.io/gratis \
  --set delegateDockerImage=harness/delegate:version_mentioned \
  --set replicas=1 --set upgrader.enabled=true \
  --set-literal destinationCaPath=_mentioned_path_to_destination \
  --set delegateCustomCa.secretName=secret_bundle
- CA bundle secret creation (incorrect, key contains an underscore):
kubectl create secret generic -n harness-delegate-ng ca-bundle --from-file=custom_certs.pem=./local_cert_bundle.pem
- CA bundle secret creation (correct, no underscore in the key):
kubectl create secret generic -n harness-delegate-ng ca-bundle --from-file=custom-certs.pem=./local_cert_bundle.pem
Please read more on Custom Certs in the following Documentation.
Can we use Continuous Verification inside the CD module without any dependency on SRM?
Yes, one can set up a Monitored Service in the Service Reliability Management module or in the Verify step in a CD stage.
Please read more on this in the following Documentation.
How do I create a dashboard in NG that shows all the CD pipelines currently executing, in real time?
You can use the "status" field in dashboards to get the status of the deployments.
How is the infrastructure key formed for deployments?
The infrastructure key (the unique key used to restrict concurrent deployments) is formed from the Harness account ID + org ID + project ID + service ID + environment ID + connector ID + infrastructure ID.
What if the account ID + org ID + project ID + service ID + environment ID are the same for two deployments, and the deployments are getting queued because the infrastructure keys match?
To make the deployments run, you can:
- Add a connector in the Select Host field and specify the host.
- Change the secret identifier (create a new secret with the same key but a different identifier).
I have Terraform code that deploys resources for a Fastly service. Should I create the pipeline in the CI module or the CD module, and what's the reasoning behind it?
The decision on whether to create your pipeline in the Continuous Deployment (CD) module or Continuous Integration (CI) module depends on your specific use case and deployment strategy.
If your goal is to automate the deployment of infrastructure whenever there are changes in your code, and you are using Terraform for provisioning, it is advisable to create a pipeline in the CD module. This ensures that your application's infrastructure stays current with any code modifications, providing seamless and automated deployment.
Alternatively, if your use of Terraform is focused on provisioning infrastructure for your CI/CD pipeline itself, it is recommended to establish a pipeline in the CI module. This allows you to automate the provisioning of your pipeline infrastructure, ensuring its availability and keeping it up-to-date.
In broad terms, the CI module is typically dedicated to building and testing code, while the CD module is designed for deploying code to production. However, the specific use case and deployment strategy will guide your decision on where to create your pipeline.
It's worth noting that you also have the option to incorporate both types of processes within a single pipeline, depending on your requirements and preferences.
Is there a way to get notified whenever a new pipeline is created?
No. As per the current design, it's not possible.
Does Harness support polling on folders?
We currently do not support polling on folders. We have an open enhancement request to support this.
How do I filter out Approvals for Pipeline Execution Time in Dashboards?
You can get the Approval step duration from the Deployments and Services V2 data source.
What is a service instance in Harness?
A service is an independent unit of software you deploy through Harness CD pipelines.
This will typically map to a service in Kubernetes apps, or to an artifact you deploy in traditional VM-based apps.
Service instances represent the dynamic instantiation of a service you deploy with Harness.
For example, for a service representing a Docker image, service instances are the number of pods running the Docker image.
Notes:
- For services with more than 20 service instances (active pods or VMs for that service), additional service licenses will be counted for each 20 service instances. This typically happens when you have large monolith services.
- See the Pricing FAQ at Harness pricing.
What are organizations and projects?
Harness organizations (orgs) allow you to group projects that share the same goal. For example, all projects for a business unit or division.
Within each org you can add several Harness projects.
A Harness project contains Harness pipelines, users, and resources that share the same goal. For example, a project could represent a business unit, division, or simply a development project for an app.
Think of projects as a common space for managing teams working on similar technologies. A space where the team can work independently and not need to bother account admins or even org admins when new entities like connectors, delegates, or secrets are needed.
Much like account-level roles, project members can be assigned project admin, member, and viewer roles.
What is a Harness pipeline?
Typically, a pipeline is an end-to-end process that delivers a new version of your software. But a pipeline can be much more: a pipeline can be a cyclical process that includes integration, delivery, operations, testing, deployment, real-time changes, and monitoring.
For example, a pipeline can use the CI module to build, test, and push code, and then a CD module to deploy the artifact to your production infrastructure.
What's a Harness stage?
A stage is a subset of a pipeline that contains the logic to perform one major segment of the pipeline process. Stages are based on the different milestones of your pipeline, such as building, approving, and delivering.
Some stages, like a deploy stage, use strategies that automatically add the necessary steps.
What are services in Harness?
A service represents your microservices and other workloads logically.
A service is a logical entity to be deployed, monitored, or changed independently.
What are service definitions?
When a service is added to the stage in a pipeline, you define its service definition. Service definitions represent the real artifacts, manifests, and variables of a service. They are the actual files and variable values.
You can also propagate and override a service in subsequent stages by selecting its name in that stage's service settings.
What artifacts does Harness support?
Harness supports all of the common repos.
See Connect to an artifact repo.
What's a Harness environment?
Environments represent your deployment targets logically (QA, production, and so on). You can add the same environment to as many stages as you need.
What are Harness infrastructure definitions?
Infrastructure definitions represent an environment's infrastructure physically. They are the actual clusters, hosts, and so on.
What are Harness connectors?
Connectors contain the information necessary to integrate and work with third-party tools.
Harness uses connectors at pipeline runtime to authenticate and perform operations with a third-party tool.
For example, a GitHub connector authenticates with a GitHub account and repo and fetches files as part of a build or deploy stage in a pipeline.
See Harness Connectors how-tos.
How does Harness manage secrets?
Harness includes built-in secrets management to store your encrypted secrets, such as access keys, and use them in your Harness account. Harness integrates with all popular secrets managers.
See Harness secrets management overview.
Can I reference settings using expressions?
Yes. Everything in Harness can be referenced by a fully qualified name (FQN). The FQN is the path to a setting in the YAML definition of your pipeline.
See Built-in Harness variables reference.
Can I enter values at runtime?
Yes. You can use runtime Inputs to set placeholders for values that will be provided when you start a pipeline execution.
See Fixed values, runtime inputs, and expressions.
Can I evaluate values at run time?
Yes. With expressions, you can use Harness input, output, and execution variables in a setting.
All of these variables represent settings and values in the pipeline before and during execution.
At run time, Harness will replace the variable with the runtime value.
See Fixed Values, runtime inputs, and expressions.
Error evaluating certain expressions in a Harness pipeline
Some customers have raised concerns about errors when trying to evaluate expressions (for example, <+pipeline.sequenceId>) while similar expressions evaluate fine. In this case, the concatenation in the expression /tmp/spe/<+pipeline.sequenceId> does not work because part of the expression, <+pipeline.sequenceId>, resolves to an integer, so the concatenation with /tmp/spe/ throws an error: for concat, both values must be strings.
Invoke toString() on the integer value and the expression will work. The final expression would be /tmp/spe/<+pipeline.sequenceId.toString()>.
How to carry forward the output variable when looping steps?
If you are using looping strategies on steps or step groups in a pipeline and need to carry forward the output variables to subsequent steps or within the loop, you can use <+strategy.iteration> to denote the iteration count.
For example, assume a looping strategy is applied to a step with the identifier my_build_step, which has an output variable my_variable. The expression <+pipeline.stages.my_build_step.output.outputVariables.my_variable> won't work. Instead, you must append the index value to the identifier in the expression, such as: <+pipeline.stages.my_build_step_0.output.outputVariables.my_variable>
Within the loop, you can reference the current iteration as <+pipeline.stages.my_build_step_<+strategy.iteration>.output.outputVariables.my_variable>
See Iteration Counts.
How do I get the output variables from pipeline execution using Harness NG API?
We have an API to get the pipeline summary:
https://apidocs.harness.io/tag/Pipeline-Execution-Details#operation/getExecutionDetailV2
If you pass the flag renderFullBottomGraph as true to this API, it also returns the output variables of the execution. You can parse the response to get the output variables and use them accordingly.
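A minimal sketch, assuming the endpoint path from the API reference above and jq for parsing; all identifiers are placeholders:
# Fetch execution details with the full bottom graph and pull out any outputVariables objects.
curl -s 'https://app.harness.io/pipeline/api/pipelines/execution/v2/<plan_execution_id>?accountIdentifier=<account_id>&orgIdentifier=<org_id>&projectIdentifier=<project_id>&renderFullBottomGraph=true' \
  -H "x-api-key: $HARNESS_API_KEY" | jq '[.. | .outputVariables? // empty]'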
We have multiple accounts, like sandbox and prod, and we want to move the developments from sandbox to prod easily. Is there a solution for this?
Absolutely! We recommend that customers use test orgs or projects for sandbox development. Our hierarchical separation allows you to isolate test cases from production workloads effectively.
For pipeline development concerns, we have a solution too. Customers can utilize our built-in branching support from GitX. You can create a separate branch for building and testing pipeline changes. Once the changes are tested and verified, you can merge the changes into their default branch.
Sandbox accounts are most valuable for testing external automation running against Harness, which helps in building or modifying objects. This way, you can test changes without affecting production environments.
Is there an environment variable to set when starting the container to force the Docker delegate to use client tool libs from harness-qa-public QA repo?
To achieve this, you need to create a test image that points to the harness-qa-public QA repository. This involves updating the Docker file with the appropriate path to the QA buckets.
If I delete an infrastructure definition after deployments are done to it, what are the implications other than potential dashboard data loss for those deployments?
At the moment there is no dependency between instance sync and the infrastructure definition; the infrastructure definition is used only to generate infrastructure details. Instance sync is done for the service and environment, so only if one of those is deleted will the instance sync stop and the instances be deleted.
If you are using the default release name format in Harness FirstGen, release-${infra.kubernetes.infraId}, it's important to note that when migrating to Harness NextGen you will need to replace ${infra.kubernetes.infraId} with the new expression. In Harness NextGen, a similar expression, <+INFRA_KEY>, is available for defining release names. However, it's crucial to understand that these expressions resolve to completely different values compared to the expressions used in Harness FirstGen.
What is the procedure to back up services?
We do not have an out-of-the-box backup capability for services, but you can back up the service YAMLs and use them later to re-create a service if there is any issue with it.
What is the Harness FirstGen GraphQL API to create Harness pipelines in a specific application?
We do not have a way to create a new pipeline using GraphQL in FirstGen. However, we do have APIs to create Harness pipelines in NextGen.