Settings

The Settings page configures all system integrations, security options, domain behavior, law enforcement reporting, and system health monitoring. You must complete the LLM configuration in Settings before any persona can send messages or any worker can generate analysis reports.

Settings: Integrations

Tabs

Integrations

The Integrations tab covers the core operational configuration for Intel.

LLM configurations

Intel uses LLMs for two distinct roles: persona LLM powers the conversational AI in personas, and Threat Assessment LLM classifies threats and generates forensic reports. Configuring them separately lets you optimize cost and quality independently.

Supported providers:

| Provider | Notes |
| --- | --- |
| OpenAI | GPT-4o, GPT-4o-mini, o1 |
| Anthropic | Claude Sonnet, Claude Haiku |
| Azure OpenAI | Use your Azure deployment endpoint |
| Groq | Fast inference for Llama and Mixtral models |
| Together AI | Open-source model hosting |
| OpenRouter | Multi-provider router |
| Fireworks AI | Optimized open-source inference |
| Anyscale | Managed model deployment |

To add a new LLM configuration:

  1. Click Add Configuration
  2. Select your provider from the dropdown
  3. Enter the Base URL for the provider's API (e.g. https://api.openai.com/v1)
  4. Enter the Model name exactly as the provider identifies it (e.g. gpt-4o-mini)
  5. Enter your API key for that provider
  6. Click Test Connection and wait for the green checkmark confirming the credentials work
  7. Click Set as Default for the appropriate role: persona LLM or Threat Assessment LLM

You need at least one active configuration assigned as the persona LLM default before any persona can respond to messages.

Recommendations by role:

| Role | Recommended model | Reason |
| --- | --- | --- |
| persona LLM | Fast, low-cost model (e.g. gpt-4o-mini, claude-haiku) | High message volume; response latency directly affects how convincing the persona feels to a scammer |
| Threat Assessment LLM | More capable model (e.g. gpt-4o, claude-sonnet) | Better accuracy on threat classification, scam type detection, and forensic report generation |

Global collector for malicious URLs

Sets the default worker that receives newly discovered malicious URLs for automatic investigation. When a persona receives a suspicious URL from a scammer in any conversation, that URL is immediately submitted to this worker for forensic analysis: no manual action is required.

Choose the worker that should handle new URL submissions when no persona-specific collector is configured. In most deployments, this will be the Forensic Investigator worker.

API keys

| Key | Purpose |
| --- | --- |
| URLQuery.net API key | Used for URL sandboxing and external analysis enrichment. Obtain a key by creating a free account at urlquery.net. |
| OpenAI API key | Used specifically for law enforcement report generation (can also be set via the OPENAI_API_KEY environment variable) |

Security

Settings: Security

The Security tab configures how Intel makes outbound HTTP requests during investigations.

HTTPS proxy configuration routes all worker HTTP requests through a proxy server, hiding the system's IP address from scam sites during investigation. This is important for:

  • Preventing scam operators from detecting and blocking the investigation IP
  • Routing investigations through a jurisdiction-appropriate egress point
  • Anonymizing the system's origin for sensitive investigations

Configure your proxy server address, port, and credentials in this section. All worker browser sessions will route through the configured proxy when enabled.

Domains

Settings: Domains

Domain-specific settings control how the domain monitoring engine behaves:

  • Default check frequency: how often domains are polled for availability changes
  • Backoff configuration: how aggressively to reduce polling frequency for persistently offline domains
  • Alerting thresholds: conditions that trigger notifications (domain comes online, risk score changes, HTTP status changes)
  • Auto-scan on discovery: whether newly discovered domains are automatically queued for worker investigation
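The backoff behavior described above is typically exponential: each consecutive failed check multiplies the polling interval up to a cap, and a successful check resets it. A sketch under that assumption (the base interval, factor, and cap are illustrative defaults, not Intel's actual values):

```python
def next_check_interval(base_minutes: int, consecutive_failures: int,
                        factor: float = 2.0, cap_minutes: int = 1440) -> float:
    """Minutes until the next availability check for a domain.

    Reachable domains (no failures) are polled at the default frequency;
    persistently offline domains back off exponentially up to the cap.
    """
    interval = base_minutes * factor ** consecutive_failures
    return min(interval, cap_minutes)
```

With a 15-minute base, three consecutive failures would stretch the interval to two hours, and a long-dead domain settles at the daily cap instead of wasting checks.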

LE reporting

Configure the template and format for law enforcement reports generated from threat data. Settings here affect the output of the LE Report export option available on individual Reports.

Options include:

  • Report header/agency information
  • Jurisdiction and case reference fields
  • Evidence formatting preferences
  • Digital signature settings

System health

Settings: System health

The System Health tab provides a real-time status panel for all Intel infrastructure components:

| Component | What is checked |
| --- | --- |
| API server | Backend API responsiveness and version |
| Worker queue | Queue depth and whether workers are picking up tasks |
| Database | Connection status and query latency |
| LLM API | Connectivity to each configured LLM provider |
| Browser worker pool | Number of available headless browser instances |

Use this panel to diagnose issues before raising a support request. A worker queue that is growing but not shrinking indicates workers have stopped processing tasks: check the individual worker detail pages for error states.
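The "growing but not shrinking" heuristic can be made precise by sampling queue depth over time: if the depth never dips between consecutive samples, workers are likely stalled. A sketch with sample data standing in for real measurements (the function name and threshold are illustrative):

```python
def queue_is_stalled(depth_samples: list[int], min_samples: int = 3) -> bool:
    """Flag a queue whose depth never decreases across consecutive samples.

    A healthy queue may still grow under heavy load, but workers that are
    picking up tasks produce at least an occasional dip between samples.
    """
    if len(depth_samples) < min_samples:
        return False  # not enough data to judge
    return all(b >= a for a, b in zip(depth_samples, depth_samples[1:]))
```

A stalled result is a signal to open the individual worker detail pages and look for error states, as described above.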