The recommended way to try source{d} Lookout locally is using docker-compose, as described in the quickstart documentation.
In some situations you might want to run all its dependencies and components separately, or you may even want to run it in a distributed mode. Either way, running source{d} Lookout follows these general steps:
- Run source{d} Lookout dependencies.
- Run the desired analyzers.
- Configure source{d} Lookout.
- Run source{d} Lookout.
In other situations you may just want to test an analyzer locally without accessing GitHub at all. For those cases, you might want to read the lookout-sdk binary documentation.
source{d} Lookout can be also run in a distributed fashion using a RabbitMQ queue to coordinate a watcher and several workers.
- The watcher process monitors GitHub pull requests and enqueues jobs for new events.
- The running workers dequeue jobs as they become available, call the registered analyzers, and post the results as comments.
The general steps to run source{d} Lookout in distributed mode are the same as described above.
For more details about the purpose of these external dependencies, you can take a look at External services in the Architecture documentation.
source{d} Lookout needs a running instance of:
- bblfshd to parse files into UAST.
- PostgreSQL for persistence.
- (optional) RabbitMQ to coordinate a watcher and several workers (when running source{d} Lookout in a distributed way).
You can run them manually or with docker-compose.
```
$ docker-compose up -d --force-recreate bblfsh postgres
```

In case you want to run it in a distributed way, you will also need RabbitMQ, so you can run instead:

```
$ docker-compose -f docker-compose.yaml -f docker-compose-rabbitmq.yml up -d --force-recreate bblfsh postgres rabbitmq
```

To monitor RabbitMQ, go to http://localhost:8081 and log in with guest/guest.
You will need to run the Analyzers to be used by source{d} Lookout.
You can run one of our example analyzers, any of the already available analyzers, or the one that you're developing.
For testing purposes, you may want to use a dummy analyzer. You can download it from the source{d} Lookout releases page and then run it:

```
$ dummy serve
```

Copy the config.yml.tpl into config.yml and modify it according to your needs.
Take a look at configuration and GitHub authentication for more details about source{d} Lookout configuration.
At a minimum, you should:
- Add the gRPC addresses of the analyzers you ran in the previous step.
- Add the URLs of the repositories to be watched or authenticate as a GitHub App.
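Putting these together, a minimal config.yml could look like the sketch below. The repository URL is a placeholder, and the analyzer name and address are assumptions (the address shown presumes the dummy analyzer is listening on port 10302); adjust them to match the analyzers you actually started:

```yaml
# Repositories whose pull requests source{d} Lookout will watch.
repositories:
  - url: github.com/<user>/<repository>  # placeholder: replace with a real repository

# gRPC addresses of the analyzers started in the previous step.
analyzers:
  - name: Dummy                          # assumed name for the example dummy analyzer
    addr: ipv4://localhost:10302         # assumed default port; check your analyzer's output
```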
Download the latest lookoutd binary from source{d} Lookout releases page.
For non-default configuration, please take a look at the lookoutd Command Options, then run the database migrations:

```
$ lookoutd migrate
```
For a single server watching GitHub and processing events, just run:
```
$ lookoutd serve [--dry-run] [--github-token=<token> --github-user=<user>]
```

For non-default configuration, please take a look at the lookoutd Command Options.
In order to run it in a distributed mode, the watcher and the workers must be run separately.
Run the watcher:
```
$ lookoutd watch [--github-token=<token> --github-user=<user>]
```

and as many workers as you need:

```
$ lookoutd work [--dry-run] [--github-token=<token> --github-user=<user>]
```

The lookoutd binary includes the subcommands described above, and they accept many different options; you can use:

- lookoutd -h, to see all the available subcommands.
- lookoutd <subcommand> -h, to see all the options for the given subcommand.
Here are some of the most relevant options for lookoutd:
- dry-run mode
- authentication options
- number of concurrent events to process
- dependencies URIs
- logging options
If you want to avoid posting the analysis results on GitHub and only print them, enable the dry-run mode when running the serve or work subcommands:
| Subcommands | Env var | Option |
|---|---|---|
| serve, work | LOOKOUT_DRY_RUN | --dry-run |
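For example, this invocation (a sketch; you would add authentication options as needed) prints the results instead of posting them:

```
$ lookoutd serve --dry-run
```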
To post the comments returned by the Analyzers into GitHub, you can configure the authentication in the config.yml (see configuration documentation), or do it explicitly when running serve, work and watch subcommands:
| Subcommands | Env var | Option |
|---|---|---|
| serve, work, watch | GITHUB_USER | --github-user= |
| serve, work, watch | GITHUB_TOKEN | --github-token= |
You can adjust the number of events that each worker (or the single server) will process concurrently when running the serve or work subcommands; if you set it to 0, it will process as many events concurrently as the number of processors available:
| Subcommands | Env var | Option | Default |
|---|---|---|---|
| serve, work | LOOKOUT_WORKERS | --workers= | 1 |
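For instance, to run a worker that processes four events concurrently, you could use either the option or the environment variable:

```
$ lookoutd work --workers=4
```

or:

```
$ LOOKOUT_WORKERS=4 lookoutd work
```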
If you started all the source{d} Lookout dependencies using docker-compose, the lookoutd binary will be able to find them with its default values; otherwise, you should pass some extra values when running lookoutd:
| Subcommands | Env var | Option | Description | Default |
|---|---|---|---|---|
| serve, work, migrate | LOOKOUT_DB | --db= | PostgreSQL connection string | postgres://postgres:postgres@localhost:5432/lookout?sslmode=disable |
| serve, work | LOOKOUT_BBLFSHD | --bblfshd= | bblfsh gRPC address | ipv4://localhost:9432 |
| watch, work | LOOKOUT_QUEUE | --queue= | RabbitMQ queue name | lookout |
| watch, work | LOOKOUT_BROKER | --broker= | RabbitMQ broker service URI | amqp://localhost:5672 |
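As a sketch, assuming PostgreSQL and bblfshd run on separate hosts (db-host and bblfsh-host are placeholders for your own hostnames), the defaults could be overridden like this:

```
$ lookoutd serve \
    --db="postgres://postgres:postgres@db-host:5432/lookout?sslmode=disable" \
    --bblfshd=ipv4://bblfsh-host:9432
```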
The logging options are:

| Env var | Option | Description | Default |
|---|---|---|---|
| LOG_LEVEL | --log-level= | Logging level (info, debug, warning or error) | info |
| LOG_FORMAT | --log-format= | Log format (text or json) | text on a terminal, json otherwise |
| LOG_FIELDS | --log-fields= | Default fields for the logger, specified in JSON | |
| LOG_FORCE_FORMAT | --log-force-format | Ignore whether or not it is running on a terminal | |