A Streamlit chatbot with memory for running open-source text models on Groq.

Groq Chatbot with Memory

Groq is a platform for running large language models (LLMs) with token-based pricing and no infrastructure management. Groq is built on the LPU (Language Processing Unit) Inference Engine, an end-to-end processing unit designed for fast inference on computationally intensive workloads. This project showcases a Streamlit app that combines Groq-hosted models with Mem0 for persistent conversational memory.

Sign up for an account at GroqCloud and get an API key, which you'll need for this project. You'll also need an OpenAI API key for the embeddings model used by Mem0.
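The app can pick up both keys from the environment. A minimal pre-flight check is sketched below; the variable names follow the common Groq and OpenAI SDK conventions (`GROQ_API_KEY`, `OPENAI_API_KEY`) and the helper name is illustrative, not taken from the repo:

```python
import os

def groq_ready(env=os.environ) -> bool:
    # True once a Groq API key is available in the environment.
    # GROQ_API_KEY is the conventional variable name read by the Groq SDK.
    return bool(env.get("GROQ_API_KEY"))

if __name__ == "__main__":
    if groq_ready() and os.environ.get("OPENAI_API_KEY"):
        # Optional smoke test (requires `pip install groq` and network access);
        # the model name is one from the list below and may change over time.
        from groq import Groq

        client = Groq()  # reads GROQ_API_KEY from the environment
        reply = client.chat.completions.create(
            model="llama-3.3-70b-versatile",
            messages=[{"role": "user", "content": "Say hello in one word."}],
        )
        print(reply.choices[0].message.content)
    else:
        print("Set GROQ_API_KEY and OPENAI_API_KEY, or enter them in the Streamlit sidebar.")
```

Keys entered in the sidebar take effect only for the current session; environment variables are the better fit for deployed instances.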

(Screenshot: groq-playground)

Supported Models

  • Groq (for chat response)
    • llama-3.3-70b-versatile
    • meta-llama/llama-4-scout-17b-16e-instruct
    • gemma2-9b-it
    • mistral-saba-24b
    • qwen-qwq-32b
    • deepseek-r1-distill-llama-70b
  • Mem0 (for memory backend)
    • mixtral-8x7b-32768: for semantic memory retrieval
    • text-embedding-3-small: for embeddings
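The two Mem0 entries above map onto Mem0's provider configuration. A sketch of that config is shown below; the provider and field names follow Mem0's config schema at the time of writing, so treat them as an assumption and check the Mem0 documentation for your installed version:

```python
import os

# Mem0 configuration matching the models listed above: a Groq-hosted LLM
# for semantic memory retrieval and an OpenAI model for embeddings.
MEM0_CONFIG = {
    "llm": {
        "provider": "groq",
        "config": {"model": "mixtral-8x7b-32768", "temperature": 0.1},
    },
    "embedder": {
        "provider": "openai",
        "config": {"model": "text-embedding-3-small"},
    },
}

if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    # Requires `pip install mem0ai` plus valid API keys.
    from mem0 import Memory

    memory = Memory.from_config(MEM0_CONFIG)
```

The embedder is why an OpenAI key is needed even though chat responses come from Groq.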

Usage

  1. Clone the repository. Alternatively, deploy to Railway, Render, or Google Cloud Run.

     git clone https://github.com/alphasecio/groq.git
     cd groq

  2. Set your API keys, either as environment variables or via the Streamlit sidebar inputs.

  3. Run the app.

     streamlit run streamlit_app.py
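Once running, each chat turn folds retrieved memories into the prompt before calling the Groq model. The helper below sketches that step; the function and variable names are illustrative, not the repo's actual identifiers:

```python
# Build the message list for one chat turn: any memories retrieved for
# this user are prepended to the system prompt so the model can use them.
def build_messages(user_input: str, memories: list[str]) -> list[dict]:
    system = "You are a helpful assistant."
    if memories:
        system += "\nRelevant facts about the user:\n" + "\n".join(
            f"- {m}" for m in memories
        )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_input},
    ]
```

In the app, the memories would come from a Mem0 search (e.g. `memory.search(user_input, user_id=...)`), the resulting messages are sent to Groq's chat-completions endpoint, and the exchange is written back with `memory.add(...)` so later sessions can recall it.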
