A MagicMirror² module that displays lunch menu information scraped from web sources such as county school dining/food schedule pages. This module works with an external Docker-based scraper service that fetches menu data from websites and generates formatted HTML for display.
Developer's Note: This project arose from a desire to display the next day's lunch menu for our school district on our MagicMirror². Unfortunately, our county's school menu site did not support iframes and could not be linked to directly for normal parsing. This setup uses Python to fetch the page and BeautifulSoup to parse it for the relevant info. Once extracted, the container uses that saved information to build a simple HTML page for the module to display. The setup reloads and scrapes at 3 AM EST, since the link I used in my setup changes daily. Customize it for your needs based on the type of info and page you'll be using.
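For reference, here is a minimal sketch of that flow (fetch the page, pull out the lunch section, write an HTML fragment for the module). The URL, regex, and output path below are placeholders, not the project's actual defaults.

```python
# Minimal sketch of the fetch -> parse -> write flow described above.
# The URL, the regex, and the output path are placeholders, not the
# module's actual defaults.
import re
import requests
from bs4 import BeautifulSoup

MENU_URL = "https://example.com/dining"   # hypothetical menu page
OUTPUT_PATH = "lunch_menu.html"           # where the module reads from

resp = requests.get(MENU_URL, headers={"User-Agent": "Mozilla/5.0"}, timeout=30)
resp.raise_for_status()

# Flatten the page to text and pull out the lunch section with a regex.
text = BeautifulSoup(resp.text, "html.parser").get_text(" ", strip=True)
match = re.search(r"Lunch(.*?)(?=salad bar|$)", text, re.IGNORECASE | re.DOTALL)
menu_text = match.group(1).strip() if match else "Menu not found"

# Write a small HTML fragment for MMM-LunchMenu to display.
with open(OUTPUT_PATH, "w", encoding="utf-8") as f:
    f.write(f"<div class='lunch-menu'><h3>Lunch</h3><p>{menu_text}</p></div>")
```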
- Automatic Menu Scraping: Docker container automatically scrapes menu data on a schedule
- Weekend Handling: Automatically shows Monday's menu on weekends (configurable)
- Customizable: Easy to configure for different websites and menu formats
- Lightweight: Uses simple HTTP requests (no browser automation needed)
- Beautiful Display: Clean HTML output styled for MagicMirror²
The scraper captures the marked lunch section from the website (left) and displays it in the MagicMirror² module (right).
In your MagicMirror² modules directory:

```bash
cd ~/MagicMirror/modules
git clone https://github.com/mjryan253/mmm-lunchmenu.git MMM-LunchMenu
cd MMM-LunchMenu
```

The module requires a separate scraper service to fetch menu data. You can run it in a Docker container or as a standalone Python script.
To run the scraper with Docker:

- Copy the example Docker Compose file:

  ```bash
  cd scraper
  cp docker-compose.example.yml docker-compose.yml
  ```

- Edit `docker-compose.yml` and configure the environment variables (see the Configuration section below)
- Start the scraper:

  ```bash
  docker-compose up -d
  ```

To run the scraper as a standalone Python script:

- Install Python dependencies:

  ```bash
  cd scraper
  pip install -r requirements.txt
  ```

- Set environment variables and run:

  ```bash
  export MENU_URL="https://your-menu-website.com"
  export OUTPUT_PATH="./lunch_menu.html"
  python scrape.py
  ```

Add the module to your `config/config.js`:
```js
{
    module: 'MMM-LunchMenu',
    position: 'bottom_left',
    header: 'School Lunch',
    config: {
        menuUrl: '/modules/MMM-LunchMenu/public/lunch_menu.html',
        updateInterval: 3600000, // 1 hour in milliseconds
        width: '600px',
        height: '400px'
    }
}
```

Important: The `menuUrl` path must match where the scraper outputs the HTML file. If using Docker, ensure the volume mount points to the correct location.
| Option | Type | Default | Description |
|---|---|---|---|
| `menuUrl` | string | `/modules/MMM-LunchMenu/public/lunch_menu.html` | Path to the generated menu HTML file (must be accessible by MagicMirror) |
| `updateInterval` | number | `3600000` | How often to refresh the menu (in milliseconds) |
| `width` | string | `"600px"` | Width of the module display |
| `height` | string | `"400px"` | Height of the module display |
The scraper can be configured using environment variables. These can be set in your `docker-compose.yml` file or as system environment variables.

- `MENU_URL`: The URL of the website to scrape for menu information
  - Example: `https://www.aacps.org/dining?filter=61292`
- `OUTPUT_PATH`: Path where the HTML file will be saved (default: `/output/lunch_menu.html`)
- `TIMEZONE`: Timezone for date calculations (default: `America/New_York`)
  - Examples: `America/New_York`, `Europe/London`, `America/Los_Angeles`
- `SCHEDULE_TIME`: Time to run the daily scrape in 24-hour format (default: `03:00`)
  - Example: `03:00` for 3:00 AM, `14:30` for 2:30 PM
- `WEEKEND_FALLBACK`: Show Monday's menu on weekends (default: `true`)
  - Set to `false` to disable weekend fallback
- `TARGET_DAY_PATTERN`: Regex pattern to match day names in the menu (default: `(Monday|Tuesday|Wednesday|Thursday|Friday|Saturday|Sunday)`)
  - Customize if your website uses different day names or formats
- `MENU_SECTION_PATTERN`: Regex pattern to extract the menu section (default: `Lunch(.*?)(?=salad bar|$)`)
  - This pattern captures content after "Lunch" until "salad bar" or the end of the text
  - Customize based on your website's menu structure
- `MENU_SECTION_NAME`: Display name for the menu section (default: `Lunch`)
  - This is the header text shown in the module
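For orientation, the sketch below shows how `scrape.py` might read these variables. The names and defaults follow the list above, but treat the parsing details as an approximation rather than the repository's exact code.

```python
# Sketch only: variable names and defaults match the documentation above,
# but the actual code in scrape.py may read them differently.
import os

MENU_URL = os.environ.get("MENU_URL", "")  # required
OUTPUT_PATH = os.environ.get("OUTPUT_PATH", "/output/lunch_menu.html")
TIMEZONE = os.environ.get("TIMEZONE", "America/New_York")
SCHEDULE_TIME = os.environ.get("SCHEDULE_TIME", "03:00")
WEEKEND_FALLBACK = os.environ.get("WEEKEND_FALLBACK", "true").lower() == "true"
TARGET_DAY_PATTERN = os.environ.get(
    "TARGET_DAY_PATTERN",
    r"(Monday|Tuesday|Wednesday|Thursday|Friday|Saturday|Sunday)",
)
MENU_SECTION_PATTERN = os.environ.get("MENU_SECTION_PATTERN", r"Lunch(.*?)(?=salad bar|$)")
MENU_SECTION_NAME = os.environ.get("MENU_SECTION_NAME", "Lunch")
```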
To adapt this module for a different website, you'll need to customize the scraper's parsing logic. Here's a step-by-step guide:
- Open your menu website in a browser
- Right-click and select "View Page Source" or use browser DevTools
- Identify how the menu data is structured:
  - How are days of the week labeled?
  - Where is the menu content located in the HTML?
  - What text patterns indicate the start/end of menu sections?
First, verify the scraper can fetch your website:

```bash
# Using curl to test
curl -H "User-Agent: Mozilla/5.0" https://your-menu-website.com
```

If the website requires JavaScript to load content, you may need to use a different approach (like Playwright or Selenium) instead of the simple requests library.
The scraper uses regex patterns to extract menu information. You'll likely need to adjust:
- `TARGET_DAY_PATTERN`: Match how days are labeled on your website

  ```python
  # Example: If your site uses "Mon", "Tue", etc.
  TARGET_DAY_PATTERN = r'(Mon|Tue|Wed|Thu|Fri|Sat|Sun)'

  # Example: If your site uses dates like "12/03/2025"
  TARGET_DAY_PATTERN = r'(\d{1,2}/\d{1,2}/\d{4})'
  ```

- `MENU_SECTION_PATTERN`: Extract the menu content

  ```python
  # Example: Extract everything after "Lunch Menu:" until the next section
  MENU_SECTION_PATTERN = r'Lunch Menu:\s*(.*?)(?=Breakfast|Dinner|$)'

  # Example: Extract content between specific HTML tags
  # (You might need to modify the parse_menu_content() function for this)
  ```
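Before wiring a new pattern into `docker-compose.yml`, it can save time to try it against a saved copy of the page text. A small sketch, where the sample text and pattern are placeholders:

```python
# Test a candidate MENU_SECTION_PATTERN against sample page text.
# The sample text and pattern below are placeholders.
import re

sample_text = "Monday Breakfast: bagels Lunch Menu: pizza, salad, fruit Dinner: n/a"
pattern = r'Lunch Menu:\s*(.*?)(?=Breakfast|Dinner|$)'

match = re.search(pattern, sample_text, re.DOTALL | re.IGNORECASE)
print(match.group(1).strip() if match else "No match -- adjust the pattern")
```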
If your website has a complex structure, you may need to modify the `parse_menu_content()` function in `scraper/scrape.py`. For example:

```python
def parse_menu_content(html_content):
    """Custom parsing for your specific website."""
    soup = BeautifulSoup(html_content, 'html.parser')

    # Example: Find the menu by CSS selector
    menu_div = soup.find('div', class_='menu-content')
    if not menu_div:
        return None

    # Extract text or specific elements
    menu_items = menu_div.find_all('li')
    menu_text = '\n'.join([item.get_text() for item in menu_items])

    return [('Lunch', menu_text)]
```

- Set environment variables:
  ```bash
  export MENU_URL="https://your-menu-website.com"
  export MENU_SECTION_PATTERN="your-custom-pattern"
  ```

- Run the scraper manually:

  ```bash
  cd scraper
  python scrape.py
  ```

- Check the output file to verify it contains the correct menu data
- If the output looks correct, update your `docker-compose.yml` with the new environment variables
The `generate_html_output()` function in `scrape.py` creates the HTML display. You can customize the CSS styles to match your preferences:

```python
# In the generate_html_output() function, modify the html_template:
# change colors, fonts, spacing, etc. to match your MagicMirror theme.
```
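As a rough illustration, a template along these lines displays cleanly against MagicMirror's dark background. The markup, styles, and fonts are illustrative assumptions, not necessarily what `generate_html_output()` actually produces:

```python
# Illustrative template only; the real html_template in scrape.py may differ.
html_template = """
<div style="font-family: 'Roboto Condensed', sans-serif; color: #fff;">
  <h3 style="margin: 0 0 6px; font-weight: 300; color: #999;">{section_name}</h3>
  <p style="margin: 0; font-size: 0.9em; line-height: 1.4;">{menu_text}</p>
</div>
"""
html = html_template.format(section_name="Lunch", menu_text="Pizza, salad, fruit")
```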
Let's say you want to scrape a menu from a different school district website:

- Find the menu URL: Navigate to the school's menu page and copy the URL
- Inspect the HTML structure: Look at how the menu is formatted
- Update `docker-compose.yml`:

  ```yaml
  environment:
    - MENU_URL=https://newschool.example.com/menus
    - MENU_SECTION_PATTERN=Lunch Menu\s*(.*?)(?=Nutrition|$)
    - MENU_SECTION_NAME=Today's Lunch
  ```

- Test and iterate: Run the scraper and adjust patterns until it extracts the correct data
To extract multiple sections (e.g., Breakfast and Lunch), modify `parse_menu_content()`:

```python
menu_sections = []

# Extract Breakfast
breakfast_match = re.search(r'Breakfast(.*?)(?=Lunch|$)', today_menu, re.DOTALL | re.IGNORECASE)
if breakfast_match:
    menu_sections.append(('Breakfast', breakfast_match.group(1).strip()))

# Extract Lunch
lunch_match = re.search(r'Lunch(.*?)(?=Dinner|$)', today_menu, re.DOTALL | re.IGNORECASE)
if lunch_match:
    menu_sections.append(('Lunch', lunch_match.group(1).strip()))

return menu_sections
```

If your menu website uses a different timezone:
```yaml
environment:
  - TIMEZONE=America/Chicago   # Central Time
  - SCHEDULE_TIME=02:00        # Run at 2 AM Central
```

To show a different day's menu on weekends:
Modify the `parse_menu_content()` function:

```python
if WEEKEND_FALLBACK and day_of_week >= 5:
    target_date_str = "Friday"  # Show Friday's menu on weekends
```
To run the scraper as its own Docker Compose service:

- Create `scraper/docker-compose.yml`:

  ```yaml
  version: '3.8'

  services:
    menu-scraper:
      build: .
      container_name: mmm-lunchmenu-scraper
      restart: unless-stopped
      environment:
        - MENU_URL=https://your-menu-website.com
        - OUTPUT_PATH=/output/lunch_menu.html
        - TIMEZONE=America/New_York
        - SCHEDULE_TIME=03:00
      volumes:
        - ./output:/output
  ```

- Build and start:
  ```bash
  cd scraper
  docker-compose up -d
  ```

If you're running MagicMirror in Docker, you can integrate the scraper:
```yaml
services:
  magicmirror:
    # ... your MagicMirror config ...
    volumes:
      - ./config:/opt/magic_mirror/config
      - mm2_shared:/opt/magic_mirror/modules/MMM-LunchMenu/public

  menu-scraper:
    build: ./scraper
    volumes:
      - mm2_shared:/output  # Shared volume with MagicMirror
    environment:
      - MENU_URL=https://your-menu-website.com
      - OUTPUT_PATH=/output/lunch_menu.html

volumes:
  mm2_shared:
```

Then update your MagicMirror config:
```js
{
    module: 'MMM-LunchMenu',
    config: {
        menuUrl: '/modules/MMM-LunchMenu/public/lunch_menu.html'
    }
}
```

If the module is not displaying:

- Check scraper logs:
  ```bash
  docker-compose logs menu-scraper
  ```

- Verify the HTML file exists:

  ```bash
  # If using Docker
  docker-compose exec menu-scraper ls -la /output/

  # If using standalone
  ls -la scraper/output/
  ```

- Check file permissions: Ensure MagicMirror can read the HTML file
- Verify the menuUrl path: The path in `config.js` must match where the file is actually located
If the menu data is incorrect or missing:

- Test the URL manually:

  ```bash
  curl -H "User-Agent: Mozilla/5.0" https://your-menu-website.com
  ```

- Check if the website requires JavaScript: Some sites need browser automation (Playwright/Selenium)
- Verify regex patterns: The patterns might not match your website's structure
- Check timezone settings: Ensure the timezone matches your location
- Inspect the raw HTML: Save the fetched HTML and examine its structure (see the sketch below)
- Adjust regex patterns: Modify `TARGET_DAY_PATTERN` and `MENU_SECTION_PATTERN`
- Modify the parsing function: For complex structures, customize `parse_menu_content()`
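A quick way to capture the raw HTML the scraper actually sees, so you can open it in an editor and check what the regex patterns have to match (the URL and output filename are placeholders):

```python
# Save the raw fetched HTML for manual inspection.
# URL and output filename are placeholders.
import requests

url = "https://your-menu-website.com"
resp = requests.get(url, headers={"User-Agent": "Mozilla/5.0"}, timeout=30)

with open("raw_page.html", "w", encoding="utf-8") as f:
    f.write(resp.text)

print(f"Saved {len(resp.text)} characters to raw_page.html")
```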
If you see permission errors:
```bash
# Fix output directory permissions
chmod 777 scraper/output/

# Or recreate the Docker volume
docker-compose down
docker volume rm [volume-name]
docker-compose up -d
```

To run the scraper locally for development and testing:

- Install dependencies:
  ```bash
  cd scraper
  pip install -r requirements.txt
  ```

- Set environment variables:

  ```bash
  export MENU_URL="https://your-menu-website.com"
  export OUTPUT_PATH="./lunch_menu.html"
  ```

- Run:

  ```bash
  python scrape.py
  ```

When making changes to the scraper:
- Modify `scrape.py` with your changes
- Test locally before updating the Docker image
- Check the generated HTML file for correctness
- Rebuild the Docker image if needed:

  ```bash
  docker-compose build --no-cache
  ```

MIT License - see LICENSE.md for details
Contributions are welcome! Please feel free to submit a Pull Request.
- Based on the MMM-Template by Dennis Rosenbaum
- Uses BeautifulSoup4 for HTML parsing
- Uses requests for HTTP requests
For issues and questions:
- Check the troubleshooting section above
- Review the customization guide
- Open an issue on GitHub with:
  - Your website URL (if not sensitive)
  - Error messages from logs
  - Your configuration (sanitized)

