Embedding AI Chat in Your Mobile App
AI-powered chat is fast becoming a core component of modern mobile apps—from interactive help bots to self-service repair assistants. This post explores how you might incorporate a ChatGPT-like system directly in your Flutter-based mobile client, handle input/output flows, and maintain a seamless user experience.
Planning the Architecture
Frontend
- Flutter UI: You typically want a scrollable conversation window with user messages on one side, AI responses on the other.
- Local Chat State: Use a local list of messages to instantly display user input. When an AI response arrives from the server (or the external AI API), append it to the list.
Backend
- API Endpoints: These forward user messages to the AI service (e.g., OpenAI’s API).
- Authentication & Rate Limits: Protect your endpoint from unauthorized usage, especially important if you maintain an enterprise-level or paid usage environment.
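To make the rate-limiting idea concrete, here is a minimal sliding-window limiter you might run in a Dart backend. The RateLimiter name, the in-memory map, and the 20-requests-per-minute default are illustrative assumptions; a production service would typically keep these counters in Redis or similar shared storage.

/// A minimal sliding-window rate limiter: at most [maxRequests] calls
/// per [window] for each user. In-memory only, so it resets on restart.
class RateLimiter {
  RateLimiter({this.maxRequests = 20, this.window = const Duration(minutes: 1)});

  final int maxRequests;
  final Duration window;
  final Map<String, List<DateTime>> _timestamps = {};

  /// Returns true if [userId] may make another request right now.
  bool allow(String userId) {
    final now = DateTime.now();
    // Keep only the timestamps that still fall inside the window.
    final recent = (_timestamps[userId] ?? <DateTime>[])
        .where((t) => now.difference(t) < window)
        .toList();
    if (recent.length >= maxRequests) return false;
    recent.add(now);
    _timestamps[userId] = recent;
    return true;
  }
}

A backend handler would call allow(userId) before forwarding a message to the AI provider, returning HTTP 429 when it comes back false.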
AI Integration
- API Calls: Typically an HTTPS POST with a JSON body specifying the user’s prompt, along with system prompts.
- Prompt Engineering: A hidden “system prompt” shapes how the AI responds, ensuring it uses the correct style, language, or domain-specific logic.
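To make this concrete, here is a minimal sketch of such a call in Dart. It assumes OpenAI's chat completions endpoint, the http package, and an illustrative model name and system prompt; adapt the URL and JSON fields to your provider.

import 'dart:convert';
import 'package:http/http.dart' as http;

/// Sends a single-turn prompt to a chat-completions style endpoint.
/// Endpoint, model, and message format follow OpenAI's API; other
/// providers use similar but not identical shapes.
Future<String> fetchAiReply(String apiKey, String userPrompt) async {
  final response = await http.post(
    Uri.parse('https://api.openai.com/v1/chat/completions'),
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Bearer $apiKey',
    },
    body: jsonEncode({
      'model': 'gpt-4o-mini',
      'messages': [
        // Hidden system prompt that shapes style and domain behavior.
        {'role': 'system', 'content': 'You are a friendly repair bot.'},
        {'role': 'user', 'content': userPrompt},
      ],
    }),
  );
  if (response.statusCode != 200) {
    throw Exception('AI call failed: ${response.statusCode}');
  }
  final data = jsonDecode(response.body) as Map<String, dynamic>;
  return data['choices'][0]['message']['content'] as String;
}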
Key UI Workflow
Below is a simplified Flutter widget that manages the AI chat interface. (Note that I wrote a more extensive version for my real application, but this snippet should illustrate the main logic.)
import 'package:flutter/material.dart';

class AiChatPage extends StatefulWidget {
  const AiChatPage({Key? key}) : super(key: key);

  @override
  State<AiChatPage> createState() => _AiChatPageState();
}

class _AiChatPageState extends State<AiChatPage> {
  final _messageController = TextEditingController();
  final ScrollController _scrollController = ScrollController();

  /// A local list of conversation messages; each entry is from either
  /// "user" or "assistant".
  final List<Map<String, String>> _chatHistory = [];

  @override
  void initState() {
    super.initState();
    // Optionally preload a greeting.
    _chatHistory.add({
      "role": "assistant",
      "content": "Hi there! I'm your AI assistant. How can I help?",
    });
  }

  @override
  void dispose() {
    _messageController.dispose();
    _scrollController.dispose();
    super.dispose();
  }

  Future<void> _sendUserMessage(String text) async {
    final trimmed = text.trim();
    if (trimmed.isEmpty) return; // Ignore empty submissions.

    // 1) Immediately display the user's input.
    setState(() {
      _chatHistory.add({"role": "user", "content": trimmed});
    });
    _messageController.clear();

    // 2) Call your backend or an AI API directly.
    try {
      final aiResponse = await _fakeSendToOpenAI(trimmed);

      // 3) Append the AI reply.
      setState(() {
        _chatHistory.add({"role": "assistant", "content": aiResponse});
      });

      // 4) Scroll to the bottom once the new bubble has rendered.
      Future.delayed(const Duration(milliseconds: 500), () {
        if (!_scrollController.hasClients) return;
        _scrollController.animateTo(
          _scrollController.position.maxScrollExtent,
          duration: const Duration(milliseconds: 400),
          curve: Curves.easeOut,
        );
      });
    } catch (e) {
      // Surface the failure in the conversation itself.
      setState(() {
        _chatHistory.add({
          "role": "assistant",
          "content": "Something went wrong. Please try again later.",
        });
      });
    }
  }

  // Stub method simulating an AI call.
  Future<String> _fakeSendToOpenAI(String prompt) async {
    await Future.delayed(const Duration(seconds: 1));
    return "Here's my AI-based analysis for: '$prompt' 🤖";
  }

  Widget _buildChatBubble(String content, bool isUser) {
    return Align(
      alignment: isUser ? Alignment.centerRight : Alignment.centerLeft,
      child: Container(
        margin: const EdgeInsets.symmetric(vertical: 6, horizontal: 12),
        padding: const EdgeInsets.all(12),
        decoration: BoxDecoration(
          color: isUser ? Colors.blue.shade100 : Colors.grey.shade200,
          borderRadius: BorderRadius.circular(8),
        ),
        child: Text(content),
      ),
    );
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: const Text("AI Chat"),
      ),
      body: Column(
        children: [
          Expanded(
            child: ListView.builder(
              controller: _scrollController,
              itemCount: _chatHistory.length,
              itemBuilder: (context, index) {
                final message = _chatHistory[index];
                final isUser = message["role"] == "user";
                return _buildChatBubble(message["content"] ?? "", isUser);
              },
            ),
          ),
          SafeArea(
            child: Padding(
              padding: const EdgeInsets.symmetric(horizontal: 8, vertical: 4),
              child: Row(
                children: [
                  Expanded(
                    child: TextField(
                      controller: _messageController,
                      textInputAction: TextInputAction.send,
                      onSubmitted: _sendUserMessage,
                      decoration:
                          const InputDecoration(hintText: "Type message..."),
                    ),
                  ),
                  IconButton(
                    icon: const Icon(Icons.send),
                    onPressed: () =>
                        _sendUserMessage(_messageController.text),
                  ),
                ],
              ),
            ),
          ),
        ],
      ),
    );
  }
}
Notable Considerations:
- We keep _chatHistory in memory. Large or persistent conversation logs might require local database storage or server-side session tokens (see the persistence sketch after this list).
- A ScrollController is used to auto-scroll to the bottom when a new AI response arrives.
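As a sketch of the first point, here is one way to persist the history locally, assuming the shared_preferences package; the key name and JSON shape are illustrative, and an app with long transcripts would likely move to sqflite or a server-side store instead.

import 'dart:convert';
import 'package:shared_preferences/shared_preferences.dart';

/// Saves the in-memory chat history as a JSON string.
Future<void> saveChatHistory(List<Map<String, String>> history) async {
  final prefs = await SharedPreferences.getInstance();
  await prefs.setString('chat_history', jsonEncode(history));
}

/// Restores the chat history, or returns an empty list on first launch.
Future<List<Map<String, String>>> loadChatHistory() async {
  final prefs = await SharedPreferences.getInstance();
  final raw = prefs.getString('chat_history');
  if (raw == null) return [];
  return (jsonDecode(raw) as List)
      .map((e) => Map<String, String>.from(e as Map))
      .toList();
}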
Prompt Engineering
One of the biggest differences between standard REST calls and AI calls is the prompt. You may include:
- A system or assistant role message that sets style or domain constraints (e.g., “You are a friendly repair bot, please help the user diagnose issues…”).
- The user message with the actual question or instructions.
- Additional context or metadata that your AI uses to tailor the reply.
In my production code, I embed a hidden “system prompt” that ensures the chatbot always maintains a consistent style—like a helpful home-appliance troubleshooting bot. This is stored server-side or built into the request body.
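Here is a minimal sketch of how such a message list might be assembled, client- or server-side; the system prompt text and the helper name are illustrative, not my production values.

/// Builds the message list sent to the AI provider. The hidden system
/// prompt below is a placeholder; in production it can live server-side
/// so it never ships inside the app binary.
List<Map<String, String>> buildPromptMessages(
    List<Map<String, String>> chatHistory) {
  const systemPrompt =
      'You are a friendly home-appliance troubleshooting bot. '
      'Keep answers short and ask clarifying questions when unsure.';
  return [
    {'role': 'system', 'content': systemPrompt},
    // Replay the visible conversation so the model has full context.
    ...chatHistory,
  ];
}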
AI Integration Strategy
Option 1: Direct Calls in the Client
You can embed an API key directly in your Flutter code and send requests from the user’s device. However, this approach risks exposing your API key: anyone who inspects the app bundle or its network traffic can extract it. It’s simpler for development but risky for production.
Option 2: Custom Backend Proxy
A more secure option is to have your mobile client send messages to your own backend. The backend then:
- Assembles the prompt (including hidden system text).
- Calls the AI provider API using your private key.
- Sends back the AI’s response to the client.
This approach offers more control (e.g., rate limiting, monitoring usage, adjusting system prompts) while keeping your keys private.
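From the Flutter side, the proxy call then looks like any other authenticated request. A minimal sketch, assuming a hypothetical /api/chat endpoint and response shape:

import 'dart:convert';
import 'package:http/http.dart' as http;

/// Client-side call to a hypothetical /api/chat proxy that holds the AI
/// provider key server-side. URL, auth scheme, and JSON fields are
/// assumptions for illustration.
Future<String> sendViaBackend(String userToken, String message) async {
  final response = await http.post(
    Uri.parse('https://your-backend.example.com/api/chat'),
    headers: {
      'Content-Type': 'application/json',
      // Your app's own auth token, not the AI provider key.
      'Authorization': 'Bearer $userToken',
    },
    body: jsonEncode({'message': message}),
  );
  if (response.statusCode != 200) {
    throw Exception('Chat proxy failed: ${response.statusCode}');
  }
  final data = jsonDecode(response.body) as Map<String, dynamic>;
  return data['reply'] as String;
}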
Post-Processing AI Replies
AI models sometimes return partial or poorly formatted content. In my code, for instance, I apply a few extra post-processing steps (sketched after this list):
- Markdown Stripping: If the model returns code blocks or bold text, you can parse or remove them before display.
- Filtering: Redact user-sensitive content if needed.
- Truncation: If the model is too verbose, you can cut off extra tokens or lines.
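A minimal sketch of the stripping and truncation steps; the regexes are simplistic placeholders, and a real app might use a proper Markdown parser instead.

/// Lightweight cleanup applied to raw model output before display.
String postProcessReply(String raw, {int maxChars = 1200}) {
  var text = raw
      // Strip fenced code blocks entirely.
      .replaceAll(RegExp(r'```[\s\S]*?```'), '')
      // Unwrap bold/italic markers but keep the inner text.
      .replaceAllMapped(RegExp(r'\*\*(.+?)\*\*'), (m) => m.group(1)!)
      .replaceAllMapped(RegExp(r'\*(.+?)\*'), (m) => m.group(1)!)
      .trim();
  // Truncate overly verbose replies.
  if (text.length > maxChars) {
    text = '${text.substring(0, maxChars)}…';
  }
  return text;
}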
I also add logic for “role: system” vs. “role: user” vs. “role: assistant” to reflect the conversation state (e.g., color-coded messages).
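For the role-based styling, a small mapping helper is enough; the colors here are illustrative, and system messages are usually hidden rather than rendered.

import 'package:flutter/material.dart';

/// Maps a message role to a bubble color. Shown dimmed for "system"
/// only as a debugging aid.
Color bubbleColorFor(String role) {
  switch (role) {
    case 'user':
      return Colors.blue.shade100;
    case 'assistant':
      return Colors.grey.shade200;
    case 'system':
      return Colors.amber.shade100;
    default:
      return Colors.white;
  }
}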
Conclusion
Embedding an AI chat within a mobile app demands thoughtful planning: you must handle local conversation states, secure your API keys, define robust system prompts, and gracefully manage user interactions. The payoff is an app with dynamic, context-aware assistance—whether for tech support, triaging repairs, or just providing an engaging user experience. Once you start hooking up an AI model, you’ll see immediate benefits in user satisfaction and discover new ways to streamline features like advanced Q&A, troubleshooting flows, or product recommendation dialogs.