There are several ways to install and run LAD, depending on your environment.

Pip installation

The LAD project is published on PyPI and can be installed with pip as follows:

pip install git+


LAD requires Python 3.6.
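Before installing, you can confirm that your local interpreter meets this requirement. A quick check (nothing here is LAD-specific):

```shell
# Verify that the local Python meets LAD's 3.6 requirement
python3 -c 'import sys; assert sys.version_info >= (3, 6), "LAD requires Python 3.6+"'
echo "Python version OK"
```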

Build LAD

You may also clone the GitHub repository and build LAD yourself. See the Development Guide for further instructions on how to do this.

OpenShift Installation

There are two ways to install LAD on OpenShift: using Ansible, or using the provided Makefile. For both methods you will need to clone the repo:

$ git clone
$ cd log-anomaly-detector/

Ansible OCP Install

You will need Ansible installed and access to an OCP cluster namespace with deployment privileges. Navigate to the playbooks directory:

$ cd playbooks/
$ ls
playbook.yaml  roles  vars

We include one playbook that will provision an entire stack of tools alongside LAD. The stack includes a MySQL database, Prometheus, Grafana (with pre-built dashboards for LAD), the Factstore, and LAD itself. See the roles/ folder for more info.

Using the playbook is relatively straightforward: first define your configuration in the vars/ directory, then run the playbook from the playbooks/ directory.

Feel free to adjust the variables as you see fit. If you are just looking to try out LAD on OpenShift, you may also use the standard variables provided in playbooks/vars/demo/dev-vars.yaml. You will, however, need to update the namespace variable to match your OCP namespace (which must already exist):

    # The namespace you want to install LAD into
    namespace: "lad"
    kubeconfig: $HOME/.kube/config
    state: present
    customer_id: "demo"

Once that is done, simply invoke the following command to deploy the entire stack:

$ ansible-playbook playbook.yaml -e target_env=dev -e customer=demo

Here dev/demo refers to the custom profile settings for a dev environment located in playbooks/vars/demo/dev-vars.yaml. By supplying dev we also pick up the common vars found in the playbooks/vars/common/dev-vars.yaml file.
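In other words, the two -e flags map onto the variable files like this (layout inferred from the paths above):

    playbooks/vars/
        common/dev-vars.yaml    # shared dev settings, selected by target_env=dev
        demo/dev-vars.yaml      # customer "demo" overrides, selected by customer=demo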

By default LAD is scaled down to zero pods. You will have to first configure a proper data source and sink before running a LAD deployment. For example, if we peek inside playbooks/vars/demo/dev-vars.yaml and look at the config map settings, we see:

    es_secrets_name: "log-anomaly-detector-certs"
    app_config: |
        STORAGE_DATASOURCE:           "es"
        STORAGE_DATASINK:             "stdout"
        ES_ENDPOINT:                  <elastic search URL>
        ES_QUERY:                     'ecommerce'
        ES_USE_SSL:                   False
        ES_INPUT_INDEX:               "lad-"
        ES_VERSION:                   7
        FACT_STORE_URL:               "{{ factstore_route }}"
        INFER_TIME_SPAN:              900
        INFER_LOOPS:                  1
        INFER_MAX_ENTRIES:            3000
        TRAIN_TIME_SPAN:              900
        TRAIN_MAX_ENTRIES:            3000
        PARALLELISM:                  6
        SOMPY_TRAIN_ROUGH_LEN:        100
        SOMPY_INIT:                   "random"

Note that ES_ENDPOINT must be provided if Elasticsearch is your source. If your Elasticsearch instance requires cert files, you will have to add them to your namespace manually and reference the secret's name via the es_secrets_name var; otherwise you may simply omit this variable. Once done, run the following command again:
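For example, a cert secret can be created with oc before re-running the playbook. The secret name must match es_secrets_name; the file paths below are placeholders for your actual cert files:

$ oc create secret generic log-anomaly-detector-certs \
      --from-file=./certs/ca.pem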

$ ansible-playbook playbook.yaml -e target_env=dev -e customer=demo

Then scale up LAD to a single pod and watch the logs to see it in action.
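For example (the object name here is an assumption; check oc get dc in your namespace for the actual name):

$ oc scale dc/log-anomaly-detector --replicas=1
$ oc logs -f dc/log-anomaly-detector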


An Elasticsearch Ansible role is included but not enabled by default in the playbook; the general assumption is that you already have an Elasticsearch instance if you wish to ingest data from it with LAD. If you would like the playbook to provision Elasticsearch as well, simply change the es.deploy var to true in playbooks/vars/common/dev-vars.yaml:

# dev-vars.yaml
    es:
      deploy: true

Makefile Installation

To deploy LAD and all accompanying tools (Prometheus, MySQL, Grafana, Elasticsearch, Elastalert, Factstore), run the following commands from the root of the project:

$ git clone
$ cd log-anomaly-detector
$ make NAMESPACE=<your_namespace> oc_deploy_demo_prereqs

In the Makefile, update FACTSTORE_ROUTE (based on your newly deployed Factstore route) and SMTP_SERVER_URL (Elastalert needs a working SMTP server in order to send alerts).
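For example, after editing, the two Makefile variables might look like this (both values are placeholders for your environment):

    FACTSTORE_ROUTE=http://factstore-<your_namespace>.apps.example.com
    SMTP_SERVER_URL=smtp.example.com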

$ cat Makefile
# route for the Factstore deployed

# mailing server used by elastalerts to send anomaly alerts

Now run the following command to deploy LAD, Prometheus, and Grafana:

$ make NAMESPACE=<your_namespace> oc_deploy_lad
$ make NAMESPACE=<your_namespace> oc_deploy_demo_monitoring

LAD will launch alongside a demo ecommerce app. If you place an order on this demo app, you will see LAD try to detect anomalies based on the order logs produced. Update the configmaps for LAD to use your own data sources instead.

For more information on how to configure LAD to better suit your needs, see Configurations.