pub(crate) async fn initiate_llm_request(
app: &mut App,
input_to_send: String,
provider: Arc<GenAIProvider>,
model_name: &str,
tx_llm: &UnboundedSender<String>,
)
Initiates an asynchronous LLM request with proper state management.
This function orchestrates sending a user message to the LLM provider while managing UI state and providing user feedback; a sketch of the flow follows the list below.
§Process Flow
- Sets the application to busy state (disables input)
- Updates the status message with request information
- Spawns an asynchronous task for the LLM request
- Handles streaming responses through the provided channel
- Manages error states and recovery
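A minimal sketch of this flow, using stand-in definitions for `App` and `GenAIProvider` (the field names and the `stream_chat` method are assumptions made for illustration, not the crate's actual API):

```rust
use std::sync::Arc;
use tokio::sync::mpsc::UnboundedSender;

// Stand-in application state; field names match the §State Changes list.
struct App {
    is_llm_busy: bool,
    is_input_disabled: bool,
    status_message: String,
}

// Stand-in provider with a hypothetical streaming method.
struct GenAIProvider;

impl GenAIProvider {
    async fn stream_chat(
        &self,
        _model: &str,
        _prompt: &str,
        tx: UnboundedSender<String>,
    ) -> Result<(), String> {
        // Pretend the provider produced two streamed chunks.
        tx.send("Hello".into()).map_err(|e| e.to_string())?;
        tx.send(", world!".into()).map_err(|e| e.to_string())?;
        Ok(())
    }
}

pub(crate) async fn initiate_llm_request(
    app: &mut App,
    input_to_send: String,
    provider: Arc<GenAIProvider>,
    model_name: &str,
    tx_llm: &UnboundedSender<String>,
) {
    // Steps 1-2: enter the busy state and tell the user what is happening.
    app.is_llm_busy = true;
    app.is_input_disabled = true;
    app.status_message = format!("Sending request to {model_name}...");

    // Step 3: spawn the request so the UI thread is never blocked.
    let tx = tx_llm.clone();
    let model = model_name.to_owned();
    tokio::spawn(async move {
        // Steps 4-5: stream chunks through the channel; on failure,
        // forward an error marker for the UI side to handle.
        if let Err(e) = provider.stream_chat(&model, &input_to_send, tx.clone()).await {
            let _ = tx.send(format!("[error] {e}"));
        }
    });
}
```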
§Arguments
- `app` - Mutable reference to the application state
- `input_to_send` - The user's message to send to the LLM
- `provider` - `Arc` reference to the LLM provider implementation
- `model_name` - Name/identifier of the model to use
- `tx_llm` - Channel sender for streaming LLM responses
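Reusing the stand-in types from the sketch above, a call site might look like this (the model name is a placeholder):

```rust
use std::sync::Arc;
use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    let (tx_llm, mut rx_llm) = mpsc::unbounded_channel::<String>();
    let mut app = App {
        is_llm_busy: false,
        is_input_disabled: false,
        status_message: String::new(),
    };
    let provider = Arc::new(GenAIProvider);

    initiate_llm_request(&mut app, "Hi!".into(), provider, "example-model", &tx_llm).await;

    // Drop our sender so the channel closes once the spawned task finishes.
    drop(tx_llm);

    // A real UI loop would interleave this with rendering; see §Concurrency.
    while let Some(chunk) = rx_llm.recv().await {
        print!("{chunk}");
    }
}
```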
§State Changes
This function modifies the application state:
- Sets `is_llm_busy` to true
- Sets `is_input_disabled` to true
- Updates the status message
- May add error messages to the chat history
§Concurrency
The LLM request itself runs in a separate Tokio task so the UI thread is never blocked, keeping the interface responsive during potentially long-running requests.
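On the receiving side, one non-blocking way to keep a render loop responsive is to drain the channel between frames with `try_recv`; this is a hypothetical pattern, not the crate's actual event loop:

```rust
use tokio::sync::mpsc::{error::TryRecvError, UnboundedReceiver};

// Drain any chunks that arrived since the last frame, then hand control
// straight back to the render loop.
fn drain_llm_chunks(rx: &mut UnboundedReceiver<String>, transcript: &mut String) {
    loop {
        match rx.try_recv() {
            Ok(chunk) => transcript.push_str(&chunk),
            // Nothing ready yet: render this frame, poll again next tick.
            Err(TryRecvError::Empty) => break,
            // All senders dropped: the request task has finished.
            Err(TryRecvError::Disconnected) => break,
        }
    }
}
```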
§Error Handling
Errors are handled gracefully and communicated to the user through:
- Status message updates
- Error state management
- Chat history error messages
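A sketch of the recovery path, keeping the stand-in `App` fields and the `[error]`-prefixed chunk convention from the earlier sketch (the `chat_history` parameter is likewise hypothetical):

```rust
// Apply one streamed chunk to the UI state, recovering from errors.
fn apply_chunk(app: &mut App, chat_history: &mut Vec<String>, chunk: String) {
    if let Some(msg) = chunk.strip_prefix("[error] ") {
        // Surface the failure, record it, and re-enable input.
        app.status_message = format!("Request failed: {msg}");
        chat_history.push(format!("Error: {msg}"));
        app.is_llm_busy = false;
        app.is_input_disabled = false;
    } else {
        chat_history.push(chunk);
    }
}
```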