Function initiate_llm_request

pub(crate) async fn initiate_llm_request(
    app: &mut App,
    input_to_send: String,
    provider: Arc<GenAIProvider>,
    model_name: &str,
    tx_llm: &UnboundedSender<String>,
)

Initiates an asynchronous LLM request with proper state management.

This function orchestrates sending a user message to the LLM provider while keeping the UI state consistent and giving the user feedback.

§Process Flow

  1. Sets the application to busy state (disables input)
  2. Updates the status message with request information
  3. Spawns an asynchronous task for the LLM request
  4. Handles streaming responses through the provided channel
  5. Manages error states and recovery (see the sketch below)
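
A minimal sketch of this flow, assuming a hypothetical stream_chat method on GenAIProvider that yields a stream of text chunks; status_message is an assumed field name, while the two busy flags are documented under §State Changes below:

use std::sync::Arc;
use futures::StreamExt; // for stream.next()
use tokio::sync::mpsc::UnboundedSender;

pub(crate) async fn initiate_llm_request(
    app: &mut App,
    input_to_send: String,
    provider: Arc<GenAIProvider>,
    model_name: &str,
    tx_llm: &UnboundedSender<String>,
) {
    // Steps 1-2: enter the busy state and surface progress to the user.
    app.is_llm_busy = true;
    app.is_input_disabled = true;
    app.status_message = format!("Waiting for {model_name}..."); // assumed field name

    // Step 3: run the request in its own task so the UI thread never blocks.
    let tx = tx_llm.clone();
    let model = model_name.to_string();
    tokio::spawn(async move {
        // stream_chat is a hypothetical provider method, not documented here.
        match provider.stream_chat(&model, &input_to_send).await {
            // Step 4: forward each chunk; the UI loop drains the channel.
            Ok(mut stream) => {
                while let Some(chunk) = stream.next().await {
                    let _ = tx.send(chunk);
                }
            }
            // Step 5: report failures through the same channel (see §Error Handling).
            Err(e) => {
                let _ = tx.send(format!("Error: {e}"));
            }
        }
    });
}

Cloning tx_llm before the spawn lets the task own its own sender, which is why the parameter can be taken by reference.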

§Arguments

  • app - Mutable reference to the application state
  • input_to_send - The user’s message to send to the LLM
  • provider - Arc reference to the LLM provider implementation
  • model_name - Name/identifier of the model to use
  • tx_llm - Channel sender for streaming LLM responses

§State Changes

This function modifies the application state (see the field sketch below):

  • Sets is_llm_busy to true
  • Sets is_input_disabled to true
  • Updates the status message
  • May add error messages to the chat history
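
For orientation, a sketch of the App fields these changes imply; is_llm_busy and is_input_disabled are documented above, while status_message and messages are assumed names:

pub struct App {
    /// True while a request is in flight (documented above).
    pub is_llm_busy: bool,
    /// Blocks further input until the request settles (documented above).
    pub is_input_disabled: bool,
    /// One-line status shown in the UI; the field name is an assumption.
    pub status_message: String,
    /// Chat transcript, including any error messages; the name is an assumption.
    pub messages: Vec<String>,
}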

§Concurrency

The actual LLM request is executed in a separate tokio task to prevent blocking the UI thread. This ensures the interface remains responsive during potentially long-running LLM requests.
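
A hedged example of the caller side, showing how a UI event loop can stay responsive by draining the channel without blocking; the loop shape, helper name, and model string are illustrative, not part of this crate's documented API:

use std::sync::Arc;
use tokio::sync::mpsc;

async fn run_chat_turn(app: &mut App, provider: Arc<GenAIProvider>, input: String) {
    let (tx_llm, mut rx_llm) = mpsc::unbounded_channel::<String>();
    initiate_llm_request(app, input, provider, "example-model", &tx_llm).await;

    // The request runs on its own task, so this loop can keep rendering:
    // try_recv never blocks.
    loop {
        while let Ok(chunk) = rx_llm.try_recv() {
            app.messages.push(chunk); // assumed field, see §State Changes
        }
        // ... render a frame, poll terminal events, detect completion ...
        break; // placeholder so the sketch terminates
    }
}

Because the channel is unbounded, the producer task never blocks on a full buffer; the trade-off is that a slow UI loop lets pending chunks accumulate in memory.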

§Error Handling

Errors are handled gracefully and communicated to the user through the following mechanisms (sketched after the list):

  • Status message updates
  • Error state management
  • Chat history error messages
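
A sketch of how the receiving side might apply those three mechanisms when an error arrives on the channel; the "Error: " prefix convention and the handler itself are assumptions, not documented behavior:

fn handle_llm_chunk(app: &mut App, chunk: String) {
    if let Some(err) = chunk.strip_prefix("Error: ") {
        // Status message update
        app.status_message = format!("Request failed: {err}");
        // Error state management: leave the busy state so input re-enables
        app.is_llm_busy = false;
        app.is_input_disabled = false;
        // Chat history error message
        app.messages.push(format!("[error] {err}"));
    } else {
        app.messages.push(chunk);
    }
}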