
Continuous integration and continuous delivery explained:

A Continuous Integration/Continuous Delivery (CI/CD) pipeline lays out a set of practices for delivering code faster and ultimately generating value for the end user. Continuous Integration (CI) is the process of automatically detecting, pulling, building, and unit-testing source code as changes are made to a product. Continuous Delivery (CD) generally refers to the overall chain of processes (the pipeline) that automatically picks up source code changes and runs them through build, test, packaging, and related operations to produce a deployable release, largely without human intervention.


A CI/CD pipeline is crucial for improving the delivery predictability, efficiency, security, and maintainability of our products. This pipeline automates the steps in our product delivery process, such as initiating builds, running tests, and deploying to Kubernetes.

The Kubernetes Continuous Integration/Continuous Delivery (CI/CD) pipeline tools for WSO2 Identity Server help automate the delivery process by building Docker images, running automated tests, and deploying to a Development, Staging, or Production environment. Additionally, the setup consists of tools required for seamless update delivery, log aggregation, and monitoring.

Jenkins and Spinnaker are the primary tools used for continuous integration and deployment. The setup is deployed on top of Kubernetes using Helm, which keeps configuration, installation, scaling, and upgrading simple. Additionally, the Jenkins jobs and Spinnaker pipelines come preconfigured, making getting started hassle-free.
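As a rough illustration of the Helm-based installation, the following Python sketch shells out to the Helm CLI. The release names, chart references, namespace, and values files below are placeholders (assumptions), not the actual chart coordinates shipped with the setup.

```python
# Illustrative only: installs Jenkins- and Spinnaker-style tooling with Helm via subprocess.
# The chart references, release names, namespace, and values files are placeholders,
# not the actual WSO2 pipeline chart coordinates.
import subprocess

def helm_install(release: str, chart: str, namespace: str, values_file: str) -> None:
    """Run `helm upgrade --install` so the command stays idempotent across re-runs."""
    subprocess.run(
        [
            "helm", "upgrade", "--install", release, chart,
            "--namespace", namespace, "--create-namespace",
            "-f", values_file,
        ],
        check=True,
    )

if __name__ == "__main__":
    # Hypothetical chart names; replace with the charts used by your pipeline setup.
    helm_install("cicd-jenkins", "jenkins/jenkins", "cicd", "jenkins-values.yaml")
    helm_install("cicd-spinnaker", "spinnaker/spinnaker", "cicd", "spinnaker-values.yaml")
```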

Pipeline Architecture

The following diagram illustrates the architecture of the CI/CD pipeline.

This pipeline uses Jenkins as the integration tool and Spinnaker as the deployment tool.

To create or update a product, Spinnaker expects a new Helm chart or Docker image. A Spinnaker deployment can be triggered by any of the following events:

  1. The Helm chart's overridden values (values-dev.yaml, values-staging.yaml, values-prod.yaml) are stored in the chart source repository, and Jenkins periodically polls that repository for changes. Once a change is detected, a predefined Jenkins job downloads the relevant chart from the WSO2 repository and hands it to Spinnaker via a webhook, along with the overrides for each environment (see the sketch after this list).

  2. A cron job in Jenkins pulls the latest image, containing the latest updates, from the WSO2 Docker registry. A new image is built on top of this updated base image according to the Dockerfile in the artifact source repository. This image is then pushed to the private Docker registry, which Spinnaker consumes and propagates to the environments.

  3. The artifact source repository contains the Dockerfile used to customize the base image from the WSO2 Docker registry. It can also include artifacts that need to be copied into the image. A change to this repository triggers a build of a new image, which is pushed to the private Docker registry. This, in turn, triggers Spinnaker to propagate the new image to the environments.
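The hand-off in the first trigger can be pictured with the following Python sketch: a Jenkins job notifies Spinnaker of the new chart version and the per-environment overrides. Spinnaker's Gate service typically exposes a generic webhook endpoint of the form `/webhooks/webhook/<source>`; the Gate URL, webhook source name, chart location, and payload layout here are assumptions for illustration and will differ in an actual setup.

```python
# Sketch of the hand-off from Jenkins to Spinnaker for the chart-repository trigger.
# The Gate URL, webhook source name, chart location, and payload layout are assumptions.
import requests

SPINNAKER_GATE = "https://spinnaker-gate.example.com"   # hypothetical Gate address
WEBHOOK_SOURCE = "wso2is-chart-updated"                  # hypothetical webhook source name
CHART_REPO = "https://helm.wso2.com"                     # placeholder chart repository

def notify_spinnaker(chart_version: str) -> None:
    """Post the new chart version and per-environment overrides to Spinnaker's
    generic webhook endpoint so the Dev/Staging/Prod pipelines can pick them up."""
    payload = {
        "chart": {"name": "wso2is", "version": chart_version, "repo": CHART_REPO},
        "overrides": {
            "dev": "values-dev.yaml",
            "staging": "values-staging.yaml",
            "prod": "values-prod.yaml",
        },
    }
    resp = requests.post(f"{SPINNAKER_GATE}/webhooks/webhook/{WEBHOOK_SOURCE}", json=payload)
    resp.raise_for_status()

if __name__ == "__main__":
    notify_spinnaker("1.0.0")
```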

Each environment has a corresponding Spinnaker pipeline (Dev, Staging, and Production). Every new change is deployed to the Dev environment immediately. However, promotion to Staging and beyond requires manual approval, which triggers the pipelines for the respective environments.
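To make the promotion flow concrete, the following sketch shows, as a Python dictionary in the style of Spinnaker's pipeline JSON, roughly how a deployment can be gated behind a manual-judgment stage. The application name, stage names, and Kubernetes account are placeholders; the preconfigured pipelines shipped with this setup may be structured differently.

```python
# Rough shape of a promotion pipeline with a manual-judgment gate, expressed as a
# Python dict in the style of Spinnaker's pipeline JSON. Names and accounts are
# placeholders; the preconfigured pipelines in this setup may differ.
import json

staging_pipeline = {
    "application": "wso2is",           # hypothetical Spinnaker application name
    "name": "deploy-to-staging",
    "stages": [
        {
            "refId": "1",
            "type": "manualJudgment",   # promotion beyond Dev waits for human approval
            "name": "Approve promotion to Staging",
            "requisiteStageRefIds": [],
        },
        {
            "refId": "2",
            "type": "deployManifest",   # deploys the rendered manifests to the cluster
            "name": "Deploy to Staging",
            "account": "staging-k8s",   # hypothetical Kubernetes account name
            "requisiteStageRefIds": ["1"],
        },
    ],
}

if __name__ == "__main__":
    print(json.dumps(staging_pipeline, indent=2))
```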

Let's get started with the K8s CI/CD pipeline.




A CI/CD pipeline is crucial for improving the delivery predictability, efficiency, security, and maintainability of our products. The CI/CD pipeline automates the steps in our product delivery process, such as initiating builds, running tests, and deploying to Amazon EC2 instances.

Using the AWS Jenkins Pipeline plugin, this pipeline builds, tests, and deploys the code for every change. It extends continuous integration by deploying all code changes to a development environment, then to a staging environment, and thereafter to the production environment. Additionally, the setup consists of tools required for seamless update delivery, log aggregation, and monitoring.


Pipeline Architecture 

The following diagram illustrates the architecture of the AWS pipeline.

The diagram above depicts the stages of the CI/CD pipeline. These stages are fully customizable to suit your requirements.

For example, the number of environments to deploy to and the test cases to run in each environment can be changed.

The following are the templated AWS pipeline stages:



Setup environment

This stage sets up the Jenkins environment for the pipeline. It clones the following GitHub repositories and creates the directory structure:

  • Artifacts Source Repository (GitRepoArtifacts): Contains the artifacts and any tests that need to be in the deployed environments.

  • Configuration Repository (GitRepoPuppet): Contains the Puppet configuration modules for the deployed product pattern.

  • CFN Deployments Repository (GitRepoCF): Contains the CloudFormation scripts used to create the required deployment environments.


The repositories are taken as user inputs when setting up the Jenkins pipeline. 

Build the updated product pack

This stage downloads the product pack and applies the Puppet configuration to it.

Note: By default, the product downloaded here is a General Availability (GA) product pack. However, if you provide valid WSO2 account credentials, the product pack is updated to the latest timestamp.

The Puppet configurations are taken from the repository (GitRepoPuppet) cloned at the environment setup stage. A Puppet-configured product pack is created as a .zip file.

Build the AMI for product nodes

In this stage, an Amazon Machine Image (AMI) is created from the product pack built in the previous stage, making it immutable. This AMI is used in the deployment stages to deploy to the configured environments without any further configuration or changes.

Deploy to development

In this stage, the development stack is created and the test endpoint is returned as a stack output. Any tests that need to be run can be executed against this endpoint (a sketch of reading such stack outputs follows the stage descriptions).

Run tests on development

This stage runs any pre-written test cases in the development environment. The server URLs are given as outputs of the deployed development environment stack.

Manual approval

The pipeline requires manual approval by the user at this stage, prior to deploying to staging. The default timeout for this stage is 72 hours.

Deploy to staging

In this stage, the staging stack is created and the test endpoint is returned as an output. Any tests that need to be conducted can be done against this endpoint.

Run tests on staging

This stage runs any pre-written test cases in the staging environment. The server URLs are given as outputs of the deployed staging environment stack.

Manual approval

The pipeline requires manual approval at this stage, prior to deploying to production. The default timeout for this stage is 72 hours.

Deploy to production

Subsequent to manual approval, the production stack is created and the test endpoint for the production environment is returned as an output.


Run tests on production

This stage runs the required tests in the production environment. The server URLs are given as outputs of the deployed production environment stack.
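The deploy stages above return their test endpoints as CloudFormation stack outputs. As a hedged example of how such an output could be read back programmatically, the following boto3 sketch queries a stack's Outputs; the stack name and output key are assumptions, so use the values shown for your deployment.

```python
# Reads a test endpoint back from a deployed environment's CloudFormation stack outputs.
# The stack name and output key are assumptions; use the values shown in the Outputs
# tab of your deployment environment stack.
import boto3

def get_stack_output(stack_name: str, output_key: str, region: str = "us-east-1") -> str:
    """Return the value of a named output from a CloudFormation stack."""
    cfn = boto3.client("cloudformation", region_name=region)
    stack = cfn.describe_stacks(StackName=stack_name)["Stacks"][0]
    for output in stack.get("Outputs", []):
        if output["OutputKey"] == output_key:
            return output["OutputValue"]
    raise KeyError(f"{output_key} not found in stack {stack_name}")

if __name__ == "__main__":
    # Hypothetical names: the development stack exposing its test endpoint as an output.
    print(get_stack_output("wso2is-dev-stack", "TestEndpoint"))
```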


Obtaining WSO2 updates

To obtain WSO2 updates, you need a valid WSO2 subscription.

The updated product pack is used to set up the pipeline; to facilitate this, the correct credentials must be provided to the CFN script. The first build uses the WUM client, and subsequent builds use the WSO2 In-Place Updates tool.


Git Hook on the repository

A Git hook can be configured on any repository of your choice to automatically trigger the pipeline. If the correct GitHub credentials are provided together with a valid repository, a push to the master branch automatically triggers the pipeline.
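For illustration, the effect of the hook is roughly the following: on a push to master, the hook (or a GitHub webhook) calls Jenkins' remote build-trigger endpoint for the pipeline job. The Jenkins URL, job name, user, and token below are placeholders, not values from the actual setup.

```python
# Illustration of what the hook does on a push to master: call Jenkins'
# "trigger builds remotely" endpoint for the pipeline job. The Jenkins URL,
# job name, user, and token are placeholders (assumptions).
import requests

JENKINS_URL = "https://jenkins.example.com"
JOB_NAME = "wso2is-aws-pipeline"     # hypothetical pipeline job name
BUILD_TOKEN = "change-me"            # token configured under "Trigger builds remotely"

def trigger_pipeline(branch: str) -> None:
    if branch != "master":
        return  # only pushes to the master branch should start the pipeline
    resp = requests.post(
        f"{JENKINS_URL}/job/{JOB_NAME}/build",
        params={"token": BUILD_TOKEN},
        auth=("jenkins-user", "api-token"),   # hypothetical credentials
    )
    resp.raise_for_status()

if __name__ == "__main__":
    trigger_pipeline("master")
```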


Logging and Monitoring

The pipeline is configured with logging for all environments and monitoring for the production environment.

The WSO2 AWS pipeline is available in the regions listed below:

  • us-east-1
  • us-east-2
  • us-west-1
  • us-west-2
  • ap-south-1
  • eu-west-1
  • eu-west-2
  • sa-east-1
  • ap-northeast-1
  • ap-southeast-1
  • ap-southeast-2
  • eu-central-1
  • ca-central-1

The Carbon logs can be accessed through the Log Dashboard URL in the Outputs tab of each deployment environment stack.

The monitoring dashboard can be accessed through the MonitoringHTTPUrl output, using the default credentials (username: admin, password: admin).
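As a quick availability check, the dashboard can be reached with those default credentials; the URL below is a placeholder for whatever MonitoringHTTPUrl resolves to in your stack.

```python
# Quick availability check against the monitoring dashboard using the default
# credentials. Replace the URL with the MonitoringHTTPUrl output of your stack.
import requests

MONITORING_URL = "https://monitoring.example.com"   # placeholder for MonitoringHTTPUrl

resp = requests.get(MONITORING_URL, auth=("admin", "admin"))
resp.raise_for_status()
print("Monitoring dashboard reachable:", resp.status_code)
```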

The monitoring dashboard consists of the following:

  • Probe monitoring

  • Resource monitoring

  • JVM monitoring

  • JMX monitoring



