- Available environment variables
- Cleanup pipeline
- Pipeline stages
- Creating your own pipeline stage
Wodby lets you run your own scripts after each deployment. To do so, add a wodby.yml file to the docroot of your app. Inside wodby.yml you can describe pipelines like this (an example of a command stage):
```yaml
pipeline:
  - name: Drupal 8 clear cache on dev
    type: command
    command: drush cr
    directory: $HTTP_ROOT
    only_if: test "$WODBY_ENVIRONMENT_TYPE" = "dev"
```
In the example above, $HTTP_ROOT is used as the directory instead of $APP_ROOT because the Drupal root is in a subdirectory (a Composer-based project).
Or like this:
```yaml
pipeline:
  - name: Run my custom script
    type: command
    command: ./my-script.sh
    directory: $APP_ROOT
```
The pipeline is an automated manifestation of your deployment process; in other words, it is simply a set of post-deployment actions to execute.
Available environment variables
Cleanup pipeline

wodby.yml can have one cleanup block; cleanup is another pipeline that is executed after the main pipeline has either failed or passed. In the cleanup block, you can add command or shell script stages. The example below creates a log file in the pipeline and then removes it in the cleanup steps.
```yaml
pipeline:
  - name: start pipeline
    command: echo "pipeline" > log/log.txt
cleanup:
  - name: cleanup
    command: rm log/*
```
Pipeline stages

A stage in a pipeline has three elements: name, type, and configuration. The configuration elements are optional, and which ones are available depends on the type. For example, the command stage type has a command configuration, which specifies the shell command to run in the stage. The tables below describe each stage type and its parameters.
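Schematically, a stage looks like this (the stage name and command here are placeholders, not part of any official example):

```yaml
pipeline:
  - name: example stage    # element 1: name
    type: command          # element 2: type
    command: echo "hello"  # configuration element; valid keys depend on the type
```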
The command stage executes a single command; you select it by setting command as the stage's type. The following table lists the parameters of the command stage.
|Parameter|Optional|Description|
|---|---|---|
|command|false|Shell command run in the stage|
|only_if|true|Run the specified command only when the condition written in only_if is satisfied|
|directory|true|The directory where Wodby runs the specified command|
You can define child stages and run them in parallel like this:
```yaml
pipeline:
  - name: parallel stages
    parallel:
      - name: parallel command 1
        type: command
        command: parallel command 1
      - name: parallel command 2
        type: command
        command: parallel command 2
      - name: parallel command 3
        type: command
        command: parallel command 3
```
In the above configuration, parallel command 1, parallel command 2, and parallel command 3 are executed in parallel.
Reusing the results from stages
Wodby stores the results of preceding stages. A stage can make use of the results of finished stages through four special variables (__OUT, __ERR, __COMBINED, and __RESULT) in wodby.yml configuration files.
- __OUT - output flushed to standard output
- __ERR - output flushed to standard error
- __COMBINED - combined output of stdout and stderr
- __RESULT - execution result (true or false)
These variables are maps whose keys are stage names and whose values are the results of those stages. For example, to get the standard output of the stage named "stage1", write __OUT["stage1"].
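As an illustrative sketch (assuming __RESULT can be interpolated the same way as the other variables, which the text does not show explicitly), a stage could print whether an earlier stage succeeded:

```yaml
pipeline:
  - name: stage_1
    command: ./deploy-check.sh         # hypothetical script for illustration
  - name: stage_2
    command: echo __RESULT["stage_1"]  # prints true or false
```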
The following is a sample configuration using a special variable.
```yaml
pipeline:
  - name: stage_1
    command: echo "hello world"
  - name: stage_2
    command: echo __OUT["stage_1"]
```
With the above configuration, Wodby outputs "hello world" twice, since the second stage (stage_2) prints the standard output of the first stage (stage_1).
Wait until conditions are satisfied before running stages
A stage starts immediately after the previous stage finishes, but some stages need to wait for something to happen first, such as a port becoming ready or a file being created. The wait_for feature lets a stage wait until such conditions are satisfied before it begins.
wait_for is defined as a property of a stage.
```yaml
pipeline:
  - name: launch solr
    command: bin/solr start
  - name: post data to solr index
    command: bin/post -d ~/tmp/foobar.js
    wait_for: host=localhost port=8983 state=ready
```
The wait_for property takes key-value pairs. Several keys are supported, and the expected value depends on the key. The following table lists the supported keys with their values and descriptions.
|Key|Value (value type)|Description|
|---|---|---|
|delay|seconds (float)|Seconds to wait after the previous stage finishes|
|port|port number (int)|Port number to wait for|
|file|file name (string)|File to be created by the previous stages|
|host|host (string)|IP address or host name|
|state|state of the other key (string)|Four states are supported; the possible values depend on the other keys|
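For example, a stage could simply pause before running, or wait for a file produced by an earlier stage (the commands and file name below are illustrative, not from the official docs):

```yaml
pipeline:
  - name: warm up
    command: ./warmup.sh               # hypothetical script
    wait_for: delay=5.0                # wait 5 seconds after the previous stage
  - name: process export
    command: cat /tmp/export.json
    wait_for: file=/tmp/export.json state=present
```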
There are several state values, and the possible values depend on the other key.
|State|Description|
|---|---|
|present / ready|The specified port is ready or the file has been created|
|absent / unready|The port is not active or the file does not exist|
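Combining these, a stage could wait until a lock file from an earlier step disappears before continuing; a sketch with hypothetical script and file names:

```yaml
pipeline:
  - name: run migration
    command: ./migrate.sh    # hypothetical script that holds /tmp/migrate.lock while running
  - name: continue after lock is released
    command: ./next-step.sh  # hypothetical follow-up script
    wait_for: file=/tmp/migrate.lock state=absent
```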
Creating your own pipeline stage
You can create your own pipeline stages and then use them in your pipelines.
Consider the example below, where we define two simple stages:
```yaml
namespace: mypackage
stages:
  - def:
      name: hello
      command: echo "May I help you majesty!"
  - def:
      name: goodbye
      command: echo "Goodbye majesty."
```
To import stages into a pipeline configuration file, use the require block and list the file names inside it.
For example, the following configuration imports the stages defined in conf/mystages.yml:
```yaml
require:
  - conf/mystages.yml
pipeline:
  - call: mypackage::hello
  - call: mypackage::goodbye
```
In the above configuration, the stages mypackage::hello and mypackage::goodbye, which are defined in conf/mystages.yml, are called.
Files required from a pipeline configuration file must have two blocks: namespace and stages. The namespace block sets the package name, which is needed to avoid collisions between stage names defined in multiple required files. The stages block contains the list of stage definitions; stages are defined there the same way as in pipeline configurations.
You can find the output logs of executed post-deployment scripts under Instance > Tasks.