
Ollama integration as a new AI provider#1208

Merged
Azgaar merged 10 commits into Azgaar:ollama from kroryan:master on Jun 14, 2025
Conversation

Contributor

@kroryan kroryan commented May 18, 2025

Description

This pull request introduces Ollama integration as a new AI provider for the text generation feature within the Fantasy Map Generator. Users can now leverage locally running Ollama models (e.g., Llama 3, Mistral) to generate descriptive text for their map notes.

Motivation and Context:

The primary motivation was to offer users more flexibility and control over the AI models used for text generation, particularly by enabling the use of local models which can be beneficial for privacy, cost, and offline access. This also serves as an alternative to cloud-based AI providers.

NOTE: this will only work for users who run the generator locally and have Ollama running on the same machine.

Summary of Changes:

  1. Ollama Provider Implementation (modules/ui/ai-generator.js):

    • Added "Ollama" to the list of available AI providers (PROVIDERS and MODELS constants).
    • When "Ollama" is selected:
      • The "API Key" input field is repurposed to accept the Ollama model name (e.g., "llama3"). The placeholder text dynamically updates to guide the user.
      • The system makes requests to the local Ollama API endpoint: http://localhost:11434/api/generate.
    • A new function generateWithOllama was created to construct and send the request to the Ollama API, including the model name, prompt, system message, and temperature.
    • The handleStream function was updated to correctly parse the newline-delimited JSON objects streamed by the Ollama API.
    • Logic for saving and loading the Ollama model name (from the "API Key" field) to/from local storage was implemented, similar to how API keys for other providers are handled.
  2. AI Generator Dialog Enhancements (modules/ui/ai-generator.js):

    • Resolved an issue where the AI generator dialog would sometimes fail to initialize correctly, leading to errors when trying to access its elements (e.g., "Cannot set properties of null").
    • The initialization of the dialog's UI elements (updateDialogElements) and the setup of its internal event listeners (for the help button and model selection dropdown) are now performed within the jQuery dialog's open event. This ensures that the DOM elements are available and ready before any manipulation attempts.
  3. Notes Editor Integration (modules/ui/notes-editor.js):

    • The openAiGenerator function, triggered by the "generate text for notes" button, was verified to correctly call the main generateWithAi function, ensuring the dialog opens as expected.
    • The prompt sent to the AI was refined to provide clearer instructions on the expected HTML output format (use of <p> tags, no headings, no markdown). (Note: This change was present in the development process; the user may have reverted this specific prompt modification in their local version. The core functionality for Ollama integration remains.)
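The request construction described in point 1 can be sketched roughly as follows. This is a minimal illustration, not the PR's actual code: the function names (`buildOllamaRequest`, the `generateWithOllama` signature) are assumptions, though the endpoint URL and the `/api/generate` request shape are standard Ollama API.

```javascript
// Sketch only: identifiers here are illustrative, not the PR's exact code.
const OLLAMA_ENDPOINT = "http://localhost:11434/api/generate";

function buildOllamaRequest(model, prompt, systemMessage, temperature) {
  return {
    model, // e.g. "llama3"; must already be pulled in the local Ollama instance
    prompt, // the user prompt built from the notes-editor context
    system: systemMessage, // system message sent alongside the prompt
    options: {temperature}, // Ollama nests sampling parameters under "options"
    stream: true // ask for newline-delimited JSON chunks
  };
}

async function generateWithOllama(model, prompt, systemMessage, temperature) {
  const response = await fetch(OLLAMA_ENDPOINT, {
    method: "POST",
    headers: {"Content-Type": "application/json"},
    body: JSON.stringify(buildOllamaRequest(model, prompt, systemMessage, temperature))
  });
  if (!response.ok) throw new Error(`Ollama request failed: ${response.status}`);
  return response; // the streamed body is then parsed chunk by chunk
}
```

Note that with `stream: true` Ollama answers with newline-delimited JSON objects rather than a single response body, which is why handleStream needed updating.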

How it Works:

When a user selects "Ollama (enter model in key field)" from the AI generator dropdown:

  1. The "API Key" field's placeholder changes to "Enter Ollama model name (e.g., llama3)".
  2. The user enters their desired Ollama model name (which must be pulled/available in their local Ollama instance).
  3. Upon clicking "Generate", the generateWithOllama function sends a POST request to http://localhost:11434/api/generate with the specified model, prompt, and other parameters.
  4. The Ollama API streams back the response as a series of JSON objects, each on a new line.
  5. The handleStream function processes this stream, extracting the content from each JSON object and appending it to the result text area in real-time.
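The stream handling in steps 4 and 5 could look roughly like this minimal sketch. The function names (`parseOllamaChunk`, `handleOllamaStream`) are illustrative; the `response` and `done` fields, however, are Ollama's actual `/api/generate` stream format.

```javascript
// Sketch only: parse one newline-delimited JSON chunk from the Ollama stream.
function parseOllamaChunk(line) {
  const trimmed = line.trim();
  if (!trimmed) return null; // skip blank lines between chunks
  const obj = JSON.parse(trimmed);
  return {content: obj.response || "", done: Boolean(obj.done)};
}

// Consume a fetch() Response whose body is an Ollama NDJSON stream,
// invoking onContent(text) for each piece of generated text.
async function handleOllamaStream(response, onContent) {
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  while (true) {
    const {value, done} = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, {stream: true});
    const lines = buffer.split("\n");
    buffer = lines.pop(); // keep any partial trailing line for the next read
    for (const line of lines) {
      const chunk = parseOllamaChunk(line);
      if (chunk && chunk.content) onContent(chunk.content); // append in real time
      if (chunk && chunk.done) return; // final object carries "done": true
    }
  }
}
```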

This integration allows for a seamless experience using local LLMs for content generation directly within the Fantasy Map Generator.
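The model-name persistence mentioned in the change summary (saving the value of the repurposed "API Key" field) might be sketched like this. The storage key and function names are hypothetical; the storage object is passed in as a parameter (`window.localStorage` in the browser) purely to keep the sketch testable outside a browser.

```javascript
// Sketch only: key and function names are hypothetical, not the PR's code.
const OLLAMA_MODEL_KEY = "ollama_model_name";

// Persist the model name the user typed into the "API Key" field.
function saveOllamaModel(storage, modelName) {
  storage.setItem(OLLAMA_MODEL_KEY, modelName);
}

// Restore the saved model name, falling back to a default if none is stored.
function loadOllamaModel(storage, defaultModel = "llama3") {
  return storage.getItem(OLLAMA_MODEL_KEY) || defaultModel;
}
```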

Type of change

  • New feature (Ollama integration for AI text generation)
  • Bug fix (Resolved issues with AI generator dialog initialization)
  • Refactoring / style
  • Documentation update / chore
  • Other (please describe)

Versioning

  • Version is updated
  • Changed files hash is updated

@netlify

netlify bot commented May 18, 2025

Deploy Preview for afmg ready!

🔨 Latest commit: fe2f0d6
🔍 Latest deploy log: https://app.netlify.com/projects/afmg/deploys/683f42d2f446c70008f3d46f
😎 Deploy Preview: https://deploy-preview-1208--afmg.netlify.app

Owner

Azgaar commented May 18, 2025

Hello. I'm not against Ollama, but this PR looks AI-generated. It has both good and bad parts; at the least, I would expect a human to work on it before presenting it to the public. Also, I'm not sure why we should stick with Ollama and not e.g. LM Studio. It would make more sense to allow ANY local model by just letting the user populate the endpoint and other required fields. In that case I'd also expect a tutorial to be linked on what it is and how to use it.

@Azgaar Azgaar self-requested a review May 18, 2025 14:26
Contributor Author

kroryan commented May 18, 2025

> and bad parts, at least I would expect a human to work on it before presenting to public. Also I'm not sure why should we stick with Ollama and not e.g. LM Studio. It would make

Well, I checked the code and it's all right, but I'm new to coding so I may have missed something. Ollama runs about 30% faster than LM Studio (you can check it), that's why I used Ollama. Also the Ollama API is not like the LM Studio API, so it would need a different implementation. I think it's explained in the pull request: the user just needs to write the model name instead of the API key and select Ollama, that's all.

Owner

Azgaar commented May 18, 2025

> well i check the code and it is all right

I won't say so. It adds some mess that we don't need. The changelog is in a separate file, not in the Readme. Some comments are redundant, some vars are renamed for no clear reason, and I don't get why it changes the way modules work with window.generateWithAi = generateWithAi.

Contributor Author

kroryan commented May 19, 2025

>> well i check the code and it is all right
>
> I won't say so. It adds some mess that we don't need. Changelog is in a separate file, not in Readme. Some comments are redundant, some vars are renamed for unclear reason, and I don't get why does it change the way modules work with window.generateWithAi = generateWithAi.

Okay, okay, I will review it all and get back to you. Sorry for making you lose time!

Contributor Author

@kroryan kroryan left a comment

I've removed the Recent Changes section from the README.md to keep the changelog separate.

I've also gone through the modules/ui/ai-generator.js file to delete all the redundant comments and unnecessary stuff.

I've removed the window.generateWithAi = generateWithAi; line, as it was indeed an oversight on my part and unnecessary. The function is now only exposed via modules.generateWithAi, respecting the existing module structure.

Take a look and let me know what you think!

@maxim-haniyeu

Hello, sorry for the delay, I was pretty busy recently. Once I find time, I will try to merge it. It still requires some clean up to follow the patterns, but it would be easier to just do it then discuss.

Contributor Author

kroryan commented Jun 4, 2025

> Hello, sorry for the delay, I was pretty busy recently. Once I find time, I will try to merge it. It still requires some clean up to follow the patterns, but it would be easier to just do it then discuss.

hey, no problem about the delay, thanks mate!

@Azgaar Azgaar changed the base branch from master to ollama June 14, 2025 12:02
Owner

Azgaar commented Jun 14, 2025

I will merge it to branch and then do some changes before merging to master.

@Azgaar Azgaar merged commit fe2fa6d into Azgaar:ollama Jun 14, 2025
4 checks passed
@Azgaar Azgaar mentioned this pull request Jun 14, 2025
Owner

Azgaar commented Jun 14, 2025

@kroryan, I've made some changes and started a new PR to master: #1211
Thanks for the contribution!

Contributor Author

kroryan commented Jun 14, 2025

> @kroryan, I've made some change and started a new PR to master: #1211 Thanks for the contribution!

thanks so much to you!!!



3 participants