This docker-compose repository sets up a collection of working Moodle and Solr containers that run the Virtual Chat Agent demonstrator.
- Clone this repository.
- Run `docker-compose build`. This should build the moodle, mysql and solr images.
- Run `docker-compose up` to bring up the 3 containers.
- Once the solr server is running, access its command line via `docker exec -it server-solr-1 bash`.
- On the command line, run `solr create -c moodle` to create the Moodle collection.
- Copy the configuration files from `/var/solr/orig` over the Moodle collection configuration. This adds the configuration settings for the DenseVectorField types and the configuration for the Tika extraction library:

  `cp -r /var/solr/orig/* /var/solr/data/moodle/conf/`
- Restart solr. This is most easily done with `docker-compose down` and then `docker-compose up`.
- Access http://localhost:8080 and run the Moodle installation, or access the docker container and run `php /var/www/html/admin/cli/install_database.php`.
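The steps above can be condensed into the following sketch. It assumes Docker is available and that the compose project yields container names like `server-solr-1` (as used earlier); check `docker ps` if your project name differs.

```shell
# Sketch of the bring-up sequence described above (requires Docker).
docker-compose build                                  # build the moodle, mysql and solr images
docker-compose up -d                                  # start the 3 containers (detached)
docker exec -it server-solr-1 solr create -c moodle   # create the Moodle collection
# Copy the DenseVectorField / Tika configuration into the new collection:
docker exec -it server-solr-1 bash -c 'cp -r /var/solr/orig/* /var/solr/data/moodle/conf/'
docker-compose down && docker-compose up -d           # restart so solr picks up the config
```

The `-d` flag runs the containers in the background; drop it if you want to watch the container logs in the foreground as in the steps above.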
An AI Provider will need to be configured.
- Access the `local_ai` AI Providers settings at http://localhost:8080/admin/category.php?category=local_ai_settings, or via the Local Plugins section.
- Select the `Providers` option.
- Click on the `OpenAI API Based Provider`.
- Provide a `Name`. This will be displayed to users selecting an AI provider.
- Set the `Base URL`. For OpenAI this will be `https://api.openai.com`.
- Set your `API Key`. This is configured on the OpenAI platform:
  - Access the OpenAI platform.
  - Select a Project from the dropdown.
  - Select the `Dashboard` option from the top right menu.
  - Select `API Keys` from the left hand menu.
  - Use the "Create new secret key" button in the top right to create a new key.
- Expand `Features`.
- Check the `Allow Chat` option.
- Set the `Completions path` option. This is the URL of the text completion / chat API, relative to the Base URL. For OpenAI this is `/v1/chat/completions`.
- Set the `Completion Model` to an appropriate value for the model you wish to use, e.g. `gpt-4o`.
- Enable the `Allow Embeddings` option.
- Set the `Embeddings path` option. This is the URL used to generate an embedding from text, relative to the Base URL. For OpenAI this is `/v1/embeddings`.
- Set the `Embedding model` to an appropriate text embedding model. For OpenAI this is `text-embedding-3-large` or `text-embedding-3-small`.

The `Content Constraints` section allows you to select a Moodle Category or Course that this provider will be available in. If you do not select anything, this provider instance will be accessible throughout your site.
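To sanity-check the values before entering them, note that the plugin combines the Base URL with each relative path. A small sketch (the model name is just one of the options above, and the `curl` smoke test is commented out because it needs network access and a real key in `OPENAI_API_KEY`):

```shell
# How the provider settings combine into request URLs.
BASE_URL="https://api.openai.com"
COMPLETIONS_PATH="/v1/chat/completions"
EMBEDDINGS_PATH="/v1/embeddings"

echo "Chat endpoint:      ${BASE_URL}${COMPLETIONS_PATH}"
echo "Embedding endpoint: ${BASE_URL}${EMBEDDINGS_PATH}"

# Smoke-test the API key against the embeddings endpoint (requires network):
# curl -s "${BASE_URL}${EMBEDDINGS_PATH}" \
#   -H "Authorization: Bearer $OPENAI_API_KEY" \
#   -H "Content-Type: application/json" \
#   -d '{"model": "text-embedding-3-small", "input": "hello"}'
```

A common mistake is to include `/v1/...` in the Base URL as well, which produces a doubled path; keep the version prefix in the path settings only.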
Once basic Moodle is installed, you need to configure the Global Search.
- Access the global search configuration page at http://localhost:8080/admin/settings.php?section=manageglobalsearch.
- Select the `Solr for Rag` search engine. This should have been pulled from https://github.com/mhughes2k/moodle-search_solrrag as part of the docker build of the moodle image.
- Click on the `Save changes` button.
- The `3. Set up search engine` option should display "no solr configuration found".
- Click on the `3. Set up search engine` option.
- Under `AI Settings`, set the `Choose Provider` option to an AI Provider that has been defined. A global provider is probably best.
- For `Choose File Content extractor`, choose `Solr with internal tika`. (A standalone Tika server is not fully implemented.)
- Under `Connection Settings`, set the `Host name` to `solr`.
- Set the `Index name` to `moodle`, unless you changed this in Step 2 of the solr configuration.
- Click `Save changes`.
- Return to the `Manage global search` configuration page.
- You should have all "green" statuses for Steps 1–3.
- Access Step 5, `Enable global search`.
- Set `Enable global search` to "Yes".
- If your cron is running on a regular basis (as it should), the global search indexer should now start indexing the site's content. You can also run `php search/cli/indexer.php` within the moodle container to trigger the indexing process.
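Assuming the moodle container follows the same naming pattern as the solr one (e.g. `server-moodle-1` — check `docker ps` for the actual name), indexing can also be triggered from the host:

```shell
# Run the Moodle cron, which includes the scheduled search indexing task:
docker exec -it server-moodle-1 php /var/www/html/admin/cli/cron.php
# Or invoke the search indexer directly:
docker exec -it server-moodle-1 php /var/www/html/search/cli/indexer.php --force
```

The first run over an existing site can take a while, since every course's content has to be extracted and sent to solr for embedding.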
- Restore or create a course.
- Create an instance of the "AI Chat (Reference)" activity.
- Give it a name.
- In `AI Provider`, select an AI provider. This doesn't have to be the same AI Provider that the search uses; this provider is used to synthesise the response to the user.
- You should get a prompt and be able to interrogate your course.