A Streamlit-based chatbot application that uses Large Language Models (LLMs) with Retrieval Augmented Generation (RAG) capabilities, powered by LangChain and LangGraph.
Overall, my goal is to create a RAG/chatbot "playground" where developers can experiment with different retrieval strategies, vector stores, Q/A workflows, and so on. I am building observability directly into the UI, so that developers can see, for example, exactly what query was sent to the search engine and which document chunks were retrieved. This is especially aimed at users who are building with LangGraph and use LangSmith for observability.
Another goal is to see how much I can do with LangGraph and Streamlit without relying on the StreamlitCallbackHandler.
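To make the observability goal concrete, here is a dependency-free sketch of the kind of per-step trace record the UI could surface. The names `RetrievalTrace`, `run_rag_step`, and `toy_retriever` are hypothetical, not this repo's actual API; the point is that each retrieval step captures exactly what was sent and what came back, so the UI can render it next to the answer.

```python
from dataclasses import dataclass, field

# Hypothetical trace record: one entry per retrieval step, holding
# exactly what the UI needs to display for observability.
@dataclass
class RetrievalTrace:
    query: str                                   # the query actually sent to the retriever
    chunks: list = field(default_factory=list)   # the document chunks that came back

def run_rag_step(question: str, retriever) -> RetrievalTrace:
    """Run one retrieval step and capture its inputs/outputs for display."""
    trace = RetrievalTrace(query=question)
    trace.chunks = retriever(question)
    return trace

# Toy retriever standing in for a real vector store.
def toy_retriever(query: str) -> list:
    corpus = ["LangGraph builds stateful graphs.", "Streamlit renders UIs."]
    return [doc for doc in corpus if "graph" in query.lower() or "graph" in doc.lower()]

trace = run_rag_step("What is a graph?", toy_retriever)
print(trace.query)        # the exact query sent
print(len(trace.chunks))  # how many chunks were retrieved
```

In the real app, a LangGraph node would populate such a record (or the equivalent LangSmith run data) and the Streamlit layer would render it in an expander beside the chat message.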
NOTE: WORK-IN-PROGRESS!
- Interactive chat interface built with Streamlit
- Support for multiple LLM providers
- Retrieval Augmented Generation (RAG) for knowledge-enhanced responses
- Document upload capability for custom knowledge bases
- Configurable model parameters (temperature, max tokens)
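The configurable-parameters feature above could be modeled with a small provider-agnostic settings object that sidebar widgets bind to. This is a minimal sketch under assumed names (`ModelConfig` and its fields are illustrative, not this repo's actual API):

```python
from dataclasses import dataclass

# Hypothetical provider-agnostic settings; in the app, Streamlit sidebar
# sliders/inputs would populate these fields before each LLM call.
@dataclass
class ModelConfig:
    provider: str = "openai"    # which LLM provider to route the call to
    model: str = "gpt-4o-mini"  # provider-specific model name
    temperature: float = 0.7    # sampling temperature (0.0 = most deterministic)
    max_tokens: int = 1024      # cap on generated tokens

    def validate(self) -> None:
        """Reject out-of-range values before they reach the provider API."""
        if not 0.0 <= self.temperature <= 2.0:
            raise ValueError("temperature must be in [0.0, 2.0]")
        if self.max_tokens <= 0:
            raise ValueError("max_tokens must be positive")

cfg = ModelConfig(temperature=0.2, max_tokens=512)
cfg.validate()
print(cfg.provider, cfg.temperature)
```

Keeping validation on the config object, rather than in each provider adapter, is one way to support multiple providers without duplicating the bounds checks.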
# Clone the repository
git clone https://github.com/yourusername/streamlit-chatbot.git
cd streamlit-chatbot
# Create a virtual environment
python -m venv .venv
source .venv/bin/activate # On Windows, use: .venv\Scripts\activate
# Install dependencies
pip install -e .

# Set up your environment variables in a .env file
cp .env.example .env
# Edit .env with your API keys
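What goes into .env depends on which providers you enable; a typical file might look like the following (these variable names are illustrative examples, not confirmed by this repo):

```env
# LLM provider credentials (add only the providers you use)
OPENAI_API_KEY=...
ANTHROPIC_API_KEY=...

# LangSmith observability (optional)
LANGSMITH_TRACING=true
LANGSMITH_API_KEY=...
```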
# Run the application
streamlit run src/streamlit_chatbot/app.py

License: MIT
The repo is not yet in a state that's ready for contributions.