This is a list of some of the bot's available features. For a more comprehensive
list, you can always check the code for all the commands, or use the help
command.
- A help command listing all of the bot's commands.
- A bunch of embed interaction commands (pats, hugs, etc.).
- A leveling system backed by PostgreSQL, with an XP cooldown (default: 60 seconds).
- A `topranks` command for the leveling system that returns a nice-looking embed.
- A bot uptime command.
- Additional optional features.
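The XP cooldown mentioned in the feature list above can be sketched roughly like this (a minimal illustration; the type and field names are hypothetical, not the bot's actual code):

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Hypothetical per-user XP cooldown tracker (illustrative only).
struct XpCooldown {
    cooldown: Duration,
    last_award: HashMap<u64, Instant>,
}

impl XpCooldown {
    fn new(cooldown: Duration) -> Self {
        Self { cooldown, last_award: HashMap::new() }
    }

    /// Returns true (and records the timestamp) if the user is off cooldown,
    /// i.e. XP should be awarded for this message.
    fn try_award(&mut self, user_id: u64, now: Instant) -> bool {
        match self.last_award.get(&user_id) {
            Some(&last) if now.duration_since(last) < self.cooldown => false,
            _ => {
                self.last_award.insert(user_id, now);
                true
            }
        }
    }
}
```

With a 60-second cooldown, a user's first message awards XP, messages within the next minute do not, and the timer resets on each successful award.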
There's an optional AI feature (enabled with `--features="ai"`) that uses
Ollama. To use it, you simply need to run:

```shell
ollama pull qwen2.5:1.5b # <- You can use any model.
ollama serve
```

> [!NOTE]
> You can use any model you like, just make sure to set it in
> `src/data/ai.rs` at `crate::data::ai::OllamaRequest::DEFAULT_MODEL`.
> [!NOTE]
> If you also wish to deploy Ollama in a Docker container, for example, and want
> to change the POST request URL, feel free to edit
> `crate::data::ai::OllamaRequest::CHAT_ENDPOINT` in `src/data/ai.rs`.
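For orientation, the two constants referenced above might look something like this (a sketch only; the actual definitions in `src/data/ai.rs` may differ, and the endpoint shown is Ollama's default local chat endpoint):

```rust
/// Illustrative sketch of the constants in src/data/ai.rs.
pub struct OllamaRequest;

impl OllamaRequest {
    /// Model passed to Ollama; swap for whatever model you pulled.
    pub const DEFAULT_MODEL: &'static str = "qwen2.5:1.5b";
    /// Ollama's chat endpoint; point this at your Docker host instead
    /// if Ollama runs in a container.
    pub const CHAT_ENDPOINT: &'static str = "http://localhost:11434/api/chat";
}
```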
You can also enable the Tokio Console feature by compiling the bot with
`--features="tokio_console"`.

> [!NOTE]
> Make sure to also compile with `RUSTFLAGS="--cfg tokio_unstable"` if you
> choose to do so.
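Putting the flag and the cfg together, the build command looks like this (the feature name is from above; `tokio-console` is the separate viewer binary):

```shell
# Build and run with the Tokio Console feature; the tokio_unstable cfg is required.
RUSTFLAGS="--cfg tokio_unstable" cargo run --release --features="tokio_console"

# In another terminal, install and attach the console viewer.
cargo install tokio-console
tokio-console
```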
The project's OpenTelemetry integration is back-end agnostic, meaning you can
pick your preferred back-end if you turn the `opentelemetry` feature flag on.
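As one possible setup (Jaeger is just an example back-end, not a requirement; any OTLP-compatible collector works, and the ports shown are Jaeger's defaults):

```shell
# Run Jaeger locally with its OTLP gRPC port (4317) and UI (16686) exposed.
docker run -d --name jaeger -p 4317:4317 -p 16686:16686 jaegertracing/all-in-one

# Then build the bot with the feature flag on.
cargo run --release --features="opentelemetry"
```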
- Set up the `.env` file.
- Run the app (`cargo run --release` or `cargo run --release --features='<your-features>'`).

> [!NOTE]
> Refer to the `.env.example` file for all the required
> variables and how to set them up accordingly.
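A `.env` file generally looks like the fragment below; the variable names here are hypothetical placeholders, and the authoritative list lives in `.env.example`:

```shell
# Hypothetical .env sketch -- check .env.example for the real variable names.
DISCORD_TOKEN=your-bot-token
DATABASE_URL=postgres://user:password@localhost:5432/bot
```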
> [!IMPORTANT]
> Make sure you aren't running PostgreSQL, Jaeger or Ollama locally, to avoid port conflicts!
The project uses Docker with Compose. To run it, just run:

```shell
docker-compose up -d
```

You need to install Docker Compose from docker.com/compose/install first, though.
> [!NOTE]
> The `docker-compose.yml` and the `Dockerfile` are set up for all the features.