This documentation page is currently under development and may be updated frequently.

LLM Reaction

An LLM Reaction allows Towerflow to process incoming data using an AI model.

LLM Reactions are event-driven and are triggered automatically when data is received through a Data Webhook.

They can be used for:

  • Data interpretation and classification

  • AI-based decision making

  • Event-driven trading logic

  • Signal generation without chart analysis


🛠️ How LLM Reactions Work

  1. A Data Webhook receives an incoming request

  2. The payload is forwarded to all connected Reactions

  3. The LLM Reaction injects the data into its prompt

  4. The AI processes the request

  5. The result is stored and optionally forwarded as a signal

LLM Reactions run only when new data arrives — there is no scheduling.
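The execution flow above can be sketched as a single function; `render_prompt`, `call_model`, and `store_result` are hypothetical stand-ins for the platform's internals:

```python
def run_llm_reaction(render_prompt, call_model, store_result, request):
    """One execution per incoming webhook request; there is no scheduler."""
    prompt = render_prompt(request)   # step 3: inject request data into the prompt
    result = call_model(prompt)       # step 4: the AI processes the request
    store_result(request, result)    # step 5: persist, linked to the triggering event
    return result                     # step 5 (cont.): optionally forwarded as a signal
```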


⚙️ Configuration

Prompt

The prompt defines how the AI should interpret incoming data.

LLM Reactions support the following template variables:

  • {{data}} → Raw payload from the webhook request

  • {{ip}} → IP address of the sender

  • {{headers}} → Request headers in JSON format

These variables are replaced at runtime with the actual values from the incoming request.
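A minimal sketch of the substitution, assuming the payload and headers are serialized as JSON (the platform's exact escaping is not documented here):

```python
import json

def render(template: str, data: dict, headers: dict, ip: str) -> str:
    # Each template variable is replaced with the actual request value.
    return (template
            .replace("{{data}}", json.dumps(data))
            .replace("{{headers}}", json.dumps(headers))
            .replace("{{ip}}", ip))

prompt = render("Classify: {{data}} (sender: {{ip}})",
                data={"signal": "rsi_oversold"},
                headers={"content-type": "application/json"},
                ip="203.0.113.7")
print(prompt)  # → Classify: {"signal": "rsi_oversold"} (sender: 203.0.113.7)
```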


🧾 Example Prompt
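An illustrative prompt using the template variables (the wording below is our own, not the platform's):

```
You receive market data from an external webhook.

Data: {{data}}
Sender: {{ip}}

Classify the payload as BULLISH, BEARISH, or NEUTRAL and give a
one-sentence justification.
```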


🧩 Incoming Data & Context

The AI receives structured context from the webhook request:

{{data}}

  • Contains the full request body

  • Passed to the AI as-is

  • No schema enforcement

  • Typically JSON (text support coming soon)


{{headers}}

  • Contains all HTTP request headers

  • Provided as a JSON object

  • Useful for authentication, source identification, or metadata


{{ip}}

  • The IP address of the request sender (if available)

  • Can be used for filtering or trust-based logic
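For example, a simple trust check on the sender IP might look like this (the allowlist addresses are hypothetical):

```python
TRUSTED_SOURCES = {"203.0.113.7", "198.51.100.12"}  # hypothetical allowlist

def is_trusted(ip: str) -> bool:
    # Only act on payloads from known senders.
    return ip in TRUSTED_SOURCES

print(is_trusted("203.0.113.7"))  # → True
print(is_trusted("192.0.2.99"))   # → False
```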


🧪 Example Incoming Payload
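Since no schema is enforced, the shape below is only an illustration of a typical JSON payload:

```json
{
  "symbol": "BTCUSDT",
  "event": "rsi_cross",
  "rsi": 27.4,
  "timeframe": "15m"
}
```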


🧾 Output & Storage

Each execution produces an output that:

  • Is stored in the platform

  • Is linked to the triggering webhook event

  • Can be reviewed for debugging and auditing

  • Can be forwarded to downstream systems


🧠 Trading Decision Mode (Optional)

LLM Reactions can be configured to produce trading decisions.

In this mode, the AI's output is interpreted as a signal action.

Supported Actions

  • LONG – opens a long position

  • SHORT – opens a short position

  • EXIT – closes the active position

  • HOLD – takes no action

Only one action may be returned per execution.


⚠️ Mandatory Response Format

To trigger a signal, the AI must respond with exactly one of the supported actions.

✅ Valid Responses

  • LONG

  • SHORT

  • EXIT

  • HOLD

❌ Invalid Responses (ignored)

  • Go LONG (extra words)

  • BUY (unsupported action)

  • LONG and SHORT (more than one action)

Any deviation from the exact format results in no action.
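The exact-match rule can be sketched as a validator; whether surrounding whitespace is tolerated is an assumption:

```python
SUPPORTED_ACTIONS = {"LONG", "SHORT", "EXIT", "HOLD"}

def parse_action(ai_output: str):
    """Return the action if the output is exactly one supported action,
    otherwise None (no signal is dispatched)."""
    action = ai_output.strip()  # assumption: leading/trailing whitespace is ignored
    return action if action in SUPPORTED_ACTIONS else None

print(parse_action("LONG"))                    # → LONG
print(parse_action("LONG, RSI is oversold"))   # → None
```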


🧠 System Context (Automatically Injected)

When Trading Decision Mode is enabled, Towerflow automatically injects system-level context.

Prepended Context

Appended Instruction

You do not need to include this logic in your custom prompt.


📊 Signal Behavior

  • LONG → Opens a long position

  • SHORT → Opens a short position

  • EXIT → Closes the active position

  • HOLD → No action is taken

  • Invalid output → No signal is dispatched


🧠 Prompt Writing Best Practices

  • Clearly define when to act

  • Explicitly describe exit conditions

  • Handle conflicting or low-confidence data

  • Prefer deterministic language

  • Suppress explanations when using decision mode

Example Trading Prompt
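An illustrative trading prompt following the practices above (wording is our own, not the platform's):

```
You receive trade alerts as JSON: {{data}}

Rules:
- Strong upward momentum → respond LONG
- Strong downward momentum → respond SHORT
- The alert says the setup is invalidated → respond EXIT
- Ambiguous or low-confidence data → respond HOLD

Respond with exactly one word: LONG, SHORT, EXIT, or HOLD.
Do not explain your answer.
```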


🚫 Limitations

  • No chart or image input

  • No price selection

  • One action per execution

  • No partial positions

  • Output must be exact


🔗 Related

  • Data Webhook – Receive external data

  • Code Reaction – Deterministic logic processing

  • LLM Session – Scheduled AI analysis

  • Chart Analysis with Signal Generation – Chart-based signals

  • Signal Bots – Execute trades


🧠 Summary

LLM Reactions provide a powerful way to:

  • React to live external data

  • Apply AI reasoning in real time

  • Make structured trading decisions

  • Build event-driven trading systems without charts

They are best suited for clean, rule-guided AI decisions triggered by external events.
