
LLM / Artificial Intelligence trained on my own BookStack library in order to find past notes? #5611


Open
Wookbert opened this issue May 15, 2025 · 4 comments


@Wookbert

Describe the feature you'd like

Are there any plans to integrate an LLM into BookStack? It might sound insane, but it could be useful for locating information in one’s BookStack library or for refreshing one’s memory on what’s there. Think of using it, for instance, to create recaps of books or pages.

Describe the benefits this would bring to existing BookStack users

I'm collecting so much information and have so many projects going on that I find it increasingly difficult to keep track of my own notes.

I assume many others have similar problems.

Can the goal of this request already be achieved via other means?

Only by manually browsing or searching for the right keywords.

Have you searched for an existing open/closed issue?

  • I have searched for existing issues and none cover my fundamental request

How long have you been using BookStack?

Over 5 years

Additional context

No response

@ssddanbrown
Member

Hi @Wookbert,

I started on a proof of concept of a somewhat native integration of LLM-based search back in March.
My draft PR branch with research can be found in #5552.
I just added a video preview of my proof of concept in a comment there to provide some visuals: #5552 (comment)

There are quite a few open questions and considerations around this though.
It's on pause right now while I crack on with the next feature release, but my plan is to come back to it and develop it a little further once this current release cycle is done.

@Kristoffeh

Hello!

From what I've heard, the bigger LLMs rely on GPU computation rather than the CPU. I currently don't even have a GPU in my homelab server.

It would be great if the LLM integration were configurable, so that it's not in use at all unless it has been explicitly configured.

@ssddanbrown
Member

Hi @Kristoffeh,
We probably wouldn't ship models as part of BookStack at all; this would be something optional on top of the default system due to the external requirements.

My current implementation uses OpenAI-like APIs, which other providers seem to support as a somewhat unofficial standard.
The idea is you'd be able to integrate with an external system of your choice which supports this API, including a self-hosted instance of something like Ollama using self-hosted models (potentially on another system), or just existing LLM services.
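To illustrate what such an OpenAI-compatible call looks like in practice (this is just a generic sketch, not code from the draft PR; the Ollama base URL, model name, and prompt are placeholder assumptions):

```python
# Minimal sketch of querying an OpenAI-compatible chat completions endpoint,
# here assumed to be a self-hosted Ollama instance on localhost.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint (assumed default port)
    api_key="ollama",                      # Ollama ignores the key, but the client requires a value
)

response = client.chat.completions.create(
    model="llama3.1",  # placeholder: whatever model your provider exposes
    messages=[
        {"role": "system", "content": "Answer using only the provided notes."},
        {"role": "user", "content": "Summarise my notes on project X:\n\n<page content here>"},
    ],
)

print(response.choices[0].message.content)
```

Because the endpoint shape is the same across providers, pointing the `base_url` at a hosted LLM service instead of a local Ollama instance would be the only change needed in a sketch like this.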

@Kristoffeh

Hello again @ssddanbrown and thanks for taking the time.
Okay, that sounds great!
