Operations cockpit
Settings
WhatsApp connection
WhatsApp Web
Create a session, scan the QR code, and start ingesting WhatsApp group messages.
WAHA
Connection details are configured on the server. Use the button to verify WAGS can reach WAHA.
When enabled, the system ingests messages from every WhatsApp group that this phone is a member of.
Business hours (SLA)
When enabled, SLA breaches are evaluated only during these hours.
Per-group SLA targets
Maximum minutes to first response for each support group.
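How the two settings above interact can be sketched as counting only in-business-hours minutes toward a group's first-response target. The 09:00–17:00 weekday window and the minute-stepping loop are simplifying assumptions for illustration:

```python
# Sketch: evaluate a first-response SLA only during business hours.
# The 9-17 weekday window is an assumed default, not the app's actual config.
from datetime import datetime, timedelta

def in_hours_minutes(start: datetime, end: datetime,
                     open_h: int = 9, close_h: int = 17) -> int:
    """Count whole minutes between start and end that fall on a weekday
    inside [open_h, close_h); minute-stepping is fine for short spans."""
    mins = 0
    t = start
    while t < end:
        if t.weekday() < 5 and open_h <= t.hour < close_h:
            mins += 1
        t += timedelta(minutes=1)
    return mins

def breaches_sla(msg_at: datetime, first_reply_at: datetime, target_min: int) -> bool:
    """True if the in-hours wait exceeded the group's target."""
    return in_hours_minutes(msg_at, first_reply_at) > target_min
```

With business hours disabled, the evaluation would instead use raw wall-clock minutes.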
Application accounts
Users who may sign in to this UI.
Manage UI access and participants discovered from messages.
Existing users
WAGS container restart
Restart only the WAGS container without rebuilding the full stack.
System reset
Permanently delete all system data and return to first-run setup for a new connection.
AI & Ollama
Assistant index status, WAGS configuration, and live data from the Ollama server (models, version, latency).
AI backend
Choose whether the assistant talks to a local Ollama service or Amazon Bedrock.
Leave fields empty to use the defaults from Docker/`.env`.
Bedrock uses your AWS credentials (environment or IAM role). Provide a region and model IDs.
WAHA API (from Swagger / OpenAPI)
Operations exposed by your WAHA instance, loaded from its OpenAPI or Swagger JSON. Use this to plan which data to persist locally next.
| Method | Path | Summary | Tags | WAGS |
|---|---|---|---|---|
Next step: choose which of these sources to mirror into MongoDB for an efficient local store.
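Flattening the OpenAPI document into the rows shown in the table above can be sketched as a walk over its `paths` object; the spec dict would normally be the JSON fetched from the WAHA instance:

```python
# Sketch: turn an OpenAPI/Swagger dict into (method, path, summary, tags) rows.
HTTP_METHODS = {"GET", "POST", "PUT", "PATCH", "DELETE"}

def list_operations(spec: dict) -> list[tuple[str, str, str, str]]:
    """Yield one row per operation, skipping non-method keys like 'parameters'."""
    rows = []
    for path, ops in spec.get("paths", {}).items():
        for method, op in ops.items():
            if method.upper() not in HTTP_METHODS:
                continue
            rows.append((method.upper(), path,
                         op.get("summary", ""),
                         ", ".join(op.get("tags", []))))
    return rows
```

These rows are also a natural starting point for deciding which operations to mirror into MongoDB.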
Server log
DB tables & fields
Review MongoDB collections, estimated document counts, and top-level sample fields.