From e0f7c102bc17ed814a77a7c779d35c37361f77cc Mon Sep 17 00:00:00 2001 From: FUN MOOC Bot Date: Thu, 11 Jul 2024 10:20:25 +0000 Subject: [PATCH] Deployed b56f6db to dev with MkDocs 1.6.0 and mike 2.1.1 --- dev/features/api/index.html | 4 ++-- dev/search/search_index.json | 2 +- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/dev/features/api/index.html b/dev/features/api/index.html index 7dcbe8be3..df0130914 100644 --- a/dev/features/api/index.html +++ b/dev/features/api/index.html @@ -1533,7 +1533,7 @@

PUT /xAPI/statements/<

{
     "actor": null,
-    "id": "90971f98-4a68-41d2-bf30-44c840e050be",
+    "id": "af583046-98a3-42e7-877f-00ad8bfcd6df",
     "object": {
         "id": "string"
     },
@@ -2209,7 +2209,7 @@ 

PUT /xAPI/statements

{
     "actor": null,
-    "id": "676abdad-ebe3-469e-8cb3-2b55db3e7845",
+    "id": "43871fb4-8c97-4d2e-bb4d-c0589b2d5f68",
     "object": {
         "id": "string"
     },
diff --git a/dev/search/search_index.json b/dev/search/search_index.json
index 478a2defb..96465dec5 100644
--- a/dev/search/search_index.json
+++ b/dev/search/search_index.json
@@ -1 +1 @@
-{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Ralph","text":"

\u2699\ufe0f The ultimate toolbox for your learning analytics (expect some xAPI \u2764\ufe0f)

Ralph is a toolbox for your learning analytics; it can be used as a:

  • LRS, an HTTP API server to collect xAPI statements (learning events), following the ADL LRS standard
  • command-line interface (CLI), to build data pipelines the UNIX-way\u2122\ufe0f,
  • library, to fetch learning events from various backends, (de)serialize or convert them from and to various standard formats such as xAPI or openedx
"},{"location":"#what_is_an_lrs","title":"What is an LRS?","text":"

A Learning Record Store, or LRS, is a key component in the context of learning analytics and the Experience API (xAPI).

The Experience API (or Tin Can API) is a standard for tracking and reporting learning experiences. In particular, it defines:

  • the xAPI format of the learning events. xAPI statements include an actor, a verb, an object as well as contextual information. Here\u2019s an example statement:
    {\n    \"id\": \"12345678-1234-5678-1234-567812345678\",\n    \"actor\":{\n        \"mbox\":\"mailto:xapi@adlnet.gov\"\n    },\n    \"verb\":{\n        \"id\":\"http://adlnet.gov/expapi/verbs/created\",\n        \"display\":{\n            \"en-US\":\"created\"\n        }\n    },\n    \"object\":{\n        \"id\":\"http://example.adlnet.gov/xapi/example/activity\"\n    }\n}\n
  • the Learning Record Store (LRS), a RESTful API that collects, stores and retrieves these events. Think of it as a learning database that unifies data from various learning platforms and applications. These events can come from various platforms: an LMS (Moodle, edX) or any other learning component that supports sending xAPI statements to an LRS (e.g. an embedded video player).

xAPI specification version

In Ralph, we\u2019re following the xAPI specification 1.0.3 that you can find here.

For your information, xAPI specification 2.0 is out! It\u2019s not currently supported in Ralph, but you can check it here.

"},{"location":"#installation","title":"Installation","text":""},{"location":"#install_from_pypi","title":"Install from PyPI","text":"

Ralph is distributed as a standard Python package; it can be installed via pip or any other Python package manager (e.g. Poetry, Pipenv, etc.):

Use a virtual environment for installation

To maintain a clean and controlled environment when installing ralph-malph, consider using a virtual environment.

  • Create a virtual environment:

    python3.12 -m venv <path-to-virtual-environment>\n

  • Activate the virtual environment:

    source <path-to-virtual-environment>/bin/activate\n

If you want to generate xAPI statements from your application and only need to integrate learning statement models in your project, you don\u2019t need to install the backends, cli or lrs extra dependencies; the core library is all you need:

pip install ralph-malph\n

If you want to use the Ralph LRS server, add the lrs flavour to your installation. You also have to choose the type of backend you will use for LRS data storage (backend-clickhouse, backend-es, backend-mongo).

  • Install the core package with the LRS and the Elasticsearch backend. For example:
pip install ralph-malph[backend-es,lrs]\n
  • Add the cli flavour if you want to use the LRS on the command line:
pip install ralph-malph[backend-es,lrs,cli]\n
  • If you want to play around with backends with Ralph as a library, you can install:
pip install ralph-malph[backends]\n
  • If you have various uses for Ralph\u2019s features or would like to discover all the existing functionalities, it is recommended to install the full package:
pip install ralph-malph[full]\n
"},{"location":"#install_from_dockerhub","title":"Install from DockerHub","text":"

Ralph is distributed as a Docker image. If Docker is installed on your machine, it can be pulled from DockerHub:

docker run --rm -i fundocker/ralph:latest ralph --help\n
Use a ralph alias in your local environment

Simplify your workflow by creating an alias for easy access to Ralph commands:

alias ralph=\"docker run --rm -i fundocker/ralph:latest ralph\"\n
"},{"location":"#lrs_specification_compliance","title":"LRS specification compliance","text":"

WIP.

"},{"location":"#contributing_to_ralph","title":"Contributing to Ralph","text":"

If you\u2019re interested in contributing to Ralph, whether it\u2019s by reporting issues, suggesting improvements, or submitting code changes, please head over to our dedicated Contributing to Ralph page. There, you\u2019ll find detailed guidelines and instructions on how to take part in the project.

We look forward to your contributions and appreciate your commitment to making Ralph a more valuable tool for everyone.

"},{"location":"#contributors","title":"Contributors","text":""},{"location":"#license","title":"License","text":"

This work is released under the MIT License (see LICENSE).

"},{"location":"CHANGELOG/","title":"Changelog","text":"

All notable changes to this project will be documented in this file.

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

"},{"location":"CHANGELOG/#unreleased","title":"Unreleased","text":""},{"location":"CHANGELOG/#501_-_2024-07-11","title":"5.0.1 - 2024-07-11","text":""},{"location":"CHANGELOG/#changed","title":"Changed","text":"
  • Force Elasticsearch REFRESH_AFTER_WRITE setting to be a string
"},{"location":"CHANGELOG/#fixed","title":"Fixed","text":"
  • Fix LaxStatement validation to prevent statements IDs modification
"},{"location":"CHANGELOG/#500_-_2024-05-02","title":"5.0.0 - 2024-05-02","text":""},{"location":"CHANGELOG/#added","title":"Added","text":"
  • Models: Add Webinar xAPI activity type
"},{"location":"CHANGELOG/#changed_1","title":"Changed","text":"
  • Upgrade pydantic to 2.7.0
  • Migrate model tests from hypothesis strategies to polyfactory
  • Replace soon-to-be deprecated parse_obj_as with TypeAdapter
"},{"location":"CHANGELOG/#420_-_2024-04-08","title":"4.2.0 - 2024-04-08","text":""},{"location":"CHANGELOG/#added_1","title":"Added","text":"
  • Models: Add Edx teams-related events support
  • Models: Add Edx notes events support
  • Models: Add Edx certificate events support
  • Models: Add Edx bookmark (renamed Course Resource) events support
  • Models: Add Edx poll and survey events support
  • Models: Add Edx Course Content Completion events support
  • Models: Add Edx drag and drop events support
  • Models: Add Edx cohort events support
  • Models: Add Edx content library interaction events support
  • Backends: Add ralph.backends.data and ralph.backends.lrs entry points to discover backends from plugins.
"},{"location":"CHANGELOG/#changed_2","title":"Changed","text":"
  • Backends: the first argument of the get_backends method now requires a list of EntryPoints, each pointing to a backend class, instead of a tuple of packages containing backends.
  • API: The RUNSERVER_BACKEND configuration value is no longer validated to point to an existing backend.
"},{"location":"CHANGELOG/#fixed_1","title":"Fixed","text":"
  • LRS: Fix querying on activity when LRS contains statements with an object lacking an objectType attribute
"},{"location":"CHANGELOG/#410_-_2024-02-12","title":"4.1.0 - 2024-02-12","text":""},{"location":"CHANGELOG/#added_2","title":"Added","text":"
  • Add LRS multitenancy support for user-specific target storage
"},{"location":"CHANGELOG/#changed_3","title":"Changed","text":"
  • query_statements and query_statements_by_ids methods can now take an optional user-specific target
"},{"location":"CHANGELOG/#fixed_2","title":"Fixed","text":"
  • Backends: switch LRSStatementsQuery since/until field types to ISO 8601 string
"},{"location":"CHANGELOG/#removed","title":"Removed","text":"
  • Removed event_table_name attribute of the ClickHouse data backend
"},{"location":"CHANGELOG/#400_-_2024-01-23","title":"4.0.0 - 2024-01-23","text":""},{"location":"CHANGELOG/#added_3","title":"Added","text":"
  • Backends: Add Writable and Listable interfaces to distinguish supported functionalities among data backends
  • Backends: Add max_statements option to data backends read method
  • Backends: Add prefetch option to async data backends read method
  • Backends: Add concurrency option to async data backends write method
  • Backends: Add get_backends function to automatically discover backends for CLI and LRS usage
  • Backends: Add client options for WSDataBackend
  • Backends: Add READ_CHUNK_SIZE and WRITE_CHUNK_SIZE data backend settings
  • Models: Implement Pydantic model for LRS Statements resource query parameters
  • Models: Implement xAPI LMS Profile statements validation
  • Models: Add EdX to xAPI converters for enrollment events
  • Project: Add aliases for ralph-malph extra dependencies: backends and full
"},{"location":"CHANGELOG/#changed_4","title":"Changed","text":"
  • Arnold: Add variable to override PVC name in arnold deployment
  • API: GET /statements now has \u201cmine\u201d option which matches statements that have an authority field matching that of the user
  • API: Invalid parameters now return 400 status code
  • API: Forwarding PUT now uses PUT (instead of POST)
  • API: Incoming statements are enriched with id, timestamp, stored and authority
  • API: Add RALPH_LRS_RESTRICT_BY_AUTHORITY option making ?mine=True implicit
  • API: Add RALPH_LRS_RESTRICT_BY_SCOPE option enabling endpoint access control by user scopes
  • API: Enhance \u2018limit\u2019 query parameter\u2019s validation
  • API: Variable RUNSERVER_AUTH_BACKEND becomes RUNSERVER_AUTH_BACKENDS, and multiple authentication methods are supported simultaneously
  • Backends: Refactor LRS Statements resource query parameters defined for ralph API
  • Backends: Refactor database, storage, http and stream backends under the unified data backend interface [BC]
  • Backends: Refactor LRS query_statements and query_statements_by_ids backends methods under the unified lrs backend interface [BC]
  • Backends: Update statementId and voidedStatementId to snake_case, with camelCase alias, in LRSStatementsQuery
  • Backends: Replace reference to a JSON column in ClickHouse with function calls on the String column [BC]
  • CLI: User credentials must now include an \u201cagent\u201d field which can be created using the cli
  • CLI: Change push to write and fetch to read [BC]
  • CLI: Change -c --chunk-size option to -s --chunk-size [BC]
  • CLI: Change websocket backend name -b ws to -b async_ws along with its uri option --ws-uri to --async-ws-uri [BC]
  • CLI: List cli usage strings in alphabetical order
  • CLI: Change backend configuration environment variable prefixes from RALPH_BACKENDS__{{DATABASE|HTTP|STORAGE|STREAM}}__{{BACKEND}}__{{OPTION}} to RALPH_BACKENDS__DATA__{{BACKEND}}__{{OPTION}}
  • Models: The xAPI context.contextActivities.category field is now mandatory in the video and virtual classroom profiles. [BC]
  • Upgrade base python version to 3.12 for the development stack and Docker image
  • Upgrade bcrypt to 4.1.2
  • Upgrade cachetools to 5.3.2
  • Upgrade fastapi to 0.108.0
  • Upgrade sentry_sdk to 1.39.1
  • Upgrade uvicorn to 0.25.0
"},{"location":"CHANGELOG/#fixed_3","title":"Fixed","text":"
  • API: Fix a typo (\u2018attachements\u2019 -> \u2018attachments\u2019) to ensure compliance with the LRS specification and prevent potential silent bugs
"},{"location":"CHANGELOG/#removed_1","title":"Removed","text":"
  • Project: Drop support for Python 3.7
  • Models: Remove school, course, module context extensions in Edx to xAPI base converter
  • Models: Remove name field in VideoActivity xAPI model mistakenly used in video profile
  • CLI: Remove DEFAULT_BACKEND_CHUNK_SIZE environment variable configuration
"},{"location":"CHANGELOG/#390_-_2023-07-21","title":"3.9.0 - 2023-07-21","text":""},{"location":"CHANGELOG/#changed_5","title":"Changed","text":"
  • Upgrade fastapi to 0.100.0
  • Upgrade sentry_sdk to 1.28.1
  • Upgrade uvicorn to 0.23.0
  • Enforce valid IRI for activity parameter in GET /statements
  • Change how duplicate xAPI statements are handled for clickhouse backend
"},{"location":"CHANGELOG/#380_-_2023-06-21","title":"3.8.0 - 2023-06-21","text":""},{"location":"CHANGELOG/#added_4","title":"Added","text":"
  • Implement edX open response assessment events pydantic models
  • Implement edx peer instruction events pydantic models
  • Implement xAPI VideoDownloaded pydantic model (using xAPI TinCan downloaded verb)
"},{"location":"CHANGELOG/#changed_6","title":"Changed","text":"
  • Allow to use a query for HTTP backends in the CLI
"},{"location":"CHANGELOG/#370_-_2023-06-13","title":"3.7.0 - 2023-06-13","text":""},{"location":"CHANGELOG/#added_5","title":"Added","text":"
  • Implement asynchronous async_lrs backend
  • Implement synchronous lrs backend
  • Implement xAPI virtual classroom pydantic models
  • Allow to insert custom endpoint url for S3 service
  • Cache the HTTP Basic auth credentials to improve API response time
  • Support OpenID Connect authentication method
"},{"location":"CHANGELOG/#changed_7","title":"Changed","text":"
  • Clean xAPI pydantic models naming convention
  • Upgrade fastapi to 0.97.0
  • Upgrade sentry_sdk to 1.25.1
  • Set Clickhouse client_options to a dedicated pydantic model
  • Upgrade httpx to 0.24.1
  • Force a valid (JSON-formatted) IFI to be passed for the /statements GET query agent filtering
  • Upgrade cachetools to 5.3.1
"},{"location":"CHANGELOG/#removed_2","title":"Removed","text":"
  • verb.display field no longer mandatory in xAPI models and for converter
"},{"location":"CHANGELOG/#360_-_2023-05-17","title":"3.6.0 - 2023-05-17","text":""},{"location":"CHANGELOG/#added_6","title":"Added","text":"
  • Allow to ignore health check routes for Sentry transactions
"},{"location":"CHANGELOG/#changed_8","title":"Changed","text":"
  • Upgrade sentry_sdk to 1.22.2
  • Upgrade uvicorn to 0.22.0
  • LRS /statements GET method returns a code 400 with certain parameters as per the xAPI specification
  • Use batch/v1 api in cronjob_pipeline manifest
  • Use autoscaling/v2 in HorizontalPodAutoscaler manifest
"},{"location":"CHANGELOG/#fixed_4","title":"Fixed","text":"
  • Fix the more IRL building in LRS /statements GET requests
"},{"location":"CHANGELOG/#351_-_2023-04-18","title":"3.5.1 - 2023-04-18","text":""},{"location":"CHANGELOG/#changed_9","title":"Changed","text":"
  • Upgrade httpx to 0.24.0
  • Upgrade fastapi to 0.95.1
  • Upgrade sentry_sdk to 1.19.1
  • Upgrade uvicorn to 0.21.1
"},{"location":"CHANGELOG/#fixed_5","title":"Fixed","text":"
  • An issue with starting Ralph in pre-built Docker containers
  • Fix double quoting in ClickHouse backend server parameters
  • An issue with Ralph starting when ClickHouse is down
"},{"location":"CHANGELOG/#350_-_2023-03-08","title":"3.5.0 - 2023-03-08","text":""},{"location":"CHANGELOG/#added_7","title":"Added","text":"
  • Implement PUT verb on statements endpoint
  • Add ClickHouse database backend support
"},{"location":"CHANGELOG/#changed_10","title":"Changed","text":"
  • Make trailing slashes optional on statements endpoint
  • Upgrade sentry_sdk to 1.16.0
"},{"location":"CHANGELOG/#340_-_2023-03-01","title":"3.4.0 - 2023-03-01","text":""},{"location":"CHANGELOG/#changed_11","title":"Changed","text":"
  • Upgrade fastapi to 0.92.0
  • Upgrade sentry_sdk to 1.15.0
"},{"location":"CHANGELOG/#fixed_6","title":"Fixed","text":"
  • Restore sentry integration in the LRS server
"},{"location":"CHANGELOG/#330_-_2023-02-03","title":"3.3.0 - 2023-02-03","text":""},{"location":"CHANGELOG/#added_8","title":"Added","text":"
  • Restore python 3.7+ support for library usage (models)
"},{"location":"CHANGELOG/#changed_12","title":"Changed","text":"
  • Allow xAPI extra fields in extensions fields
"},{"location":"CHANGELOG/#321_-_2023-02-01","title":"3.2.1 - 2023-02-01","text":""},{"location":"CHANGELOG/#changed_13","title":"Changed","text":"
  • Relax required Python version to 3.7+
"},{"location":"CHANGELOG/#320_-_2023-01-25","title":"3.2.0 - 2023-01-25","text":""},{"location":"CHANGELOG/#added_9","title":"Added","text":"
  • Add a new auth subcommand to generate required credentials file for the LRS
  • Implement support for AWS S3 storage backend
  • Add CLI --version option
"},{"location":"CHANGELOG/#changed_14","title":"Changed","text":"
  • Upgrade fastapi to 0.89.1
  • Upgrade httpx to 0.23.3
  • Upgrade sentry_sdk to 1.14.0
  • Upgrade uvicorn to 0.20.0
  • Tray: add the ca_certs path for the ES backend client option (LRS)
  • Improve Sentry integration for the LRS
  • Update handbook link to https://handbook.openfun.fr
  • Upgrade base python version to 3.11 for the development stack and Docker image
"},{"location":"CHANGELOG/#fixed_7","title":"Fixed","text":"
  • Restore ES and Mongo backends ability to use client options
"},{"location":"CHANGELOG/#310_-_2022-11-17","title":"3.1.0 - 2022-11-17","text":""},{"location":"CHANGELOG/#added_10","title":"Added","text":"
  • EdX to xAPI converters for video events
"},{"location":"CHANGELOG/#changed_15","title":"Changed","text":"
  • Improve Ralph\u2019s library integration by unpinning dependencies (and preferring ranges)
  • Upgrade fastapi to 0.87.0
"},{"location":"CHANGELOG/#removed_3","title":"Removed","text":"
  • ModelRules constraint
"},{"location":"CHANGELOG/#300_-_2022-10-19","title":"3.0.0 - 2022-10-19","text":""},{"location":"CHANGELOG/#added_11","title":"Added","text":"
  • Implement edX video browser events pydantic models
  • Create a post endpoint for statements implementing the LRS spec
  • Implement support for the MongoDB database backend
  • Implement support for custom queries when using database backends get method (used in the fetch command)
  • Add dotenv configuration file support and python-dotenv dependency
  • Add host and port options for the runserver cli command
  • Add support for database selection when running the Ralph LRS server
  • Implement support for xAPI statement forwarding
  • Add database backends status checking
  • Add health LRS router
  • Tray: add LRS server support
"},{"location":"CHANGELOG/#changed_16","title":"Changed","text":"
  • Migrate to python-legacy handler for mkdocstrings package
  • Upgrade click to 8.1.3
  • Upgrade elasticsearch to 8.3.3
  • Upgrade fastapi to 0.79.1
  • Upgrade ovh to 1.0.0
  • Upgrade pydantic to 1.9.2
  • Upgrade pymongo to 4.2.0
  • Upgrade python-keystoneclient to 5.0.0
  • Upgrade python-swiftclient to 4.0.1
  • Upgrade requests to 2.28.1
  • Upgrade sentry_sdk to 1.9.5
  • Upgrade uvicorn to 0.18.2
  • Upgrade websockets to 10.3
  • Make backends yield results instead of writing to standard streams (BC)
  • Use pydantic settings management instead of global variables in defaults.py
  • Rename backend and parser parameter environment variables (BC)
  • Make project dependencies management more modular for library usage
"},{"location":"CHANGELOG/#removed_4","title":"Removed","text":"
  • Remove YAML configuration file support and pyyaml dependency (BC)
"},{"location":"CHANGELOG/#fixed_8","title":"Fixed","text":"
  • Tray: do not create a cronjobs list when no cronjob has been defined
  • Restore history mixin logger
"},{"location":"CHANGELOG/#210_-_2022-04-13","title":"2.1.0 - 2022-04-13","text":""},{"location":"CHANGELOG/#added_12","title":"Added","text":"
  • Implement edX problem interaction events pydantic models
  • Implement edX textbook interaction events pydantic models
  • ws websocket stream backend (compatible with the fetch command)
  • bundle jq, curl and wget in the fundocker/ralph Docker image
  • Tray: enable ralph app deployment command configuration
  • Add a runserver command with basic auth and a Whoami route
  • Create a get endpoint for statements implementing the LRS spec
  • Add optional fields to BaseXapiModel
"},{"location":"CHANGELOG/#changed_17","title":"Changed","text":"
  • Upgrade uvicorn to 0.17.4
  • Upgrade elasticsearch to 7.17.0
  • Upgrade sentry_sdk to 1.5.5
  • Upgrade fastapi to 0.73.0
  • Upgrade pyparsing to 3.0.7
  • Upgrade pydantic to 1.9.0
  • Upgrade python-keystoneclient to 4.4.0
  • Upgrade python-swiftclient to 3.13.0
  • Upgrade pyyaml to 6.0
  • Upgrade requests to 2.27.1
  • Upgrade websockets to 10.1
"},{"location":"CHANGELOG/#201_-_2021-07-15","title":"2.0.1 - 2021-07-15","text":""},{"location":"CHANGELOG/#changed_18","title":"Changed","text":"
  • Upgrade elasticsearch to 7.13.3
"},{"location":"CHANGELOG/#fixed_9","title":"Fixed","text":"
  • Restore elasticsearch backend datastream compatibility for bulk operations
"},{"location":"CHANGELOG/#200_-_2021-07-09","title":"2.0.0 - 2021-07-09","text":""},{"location":"CHANGELOG/#added_13","title":"Added","text":"
  • xAPI video interacted pydantic models
  • xAPI video terminated pydantic models
  • xAPI video completed pydantic models
  • xAPI video seeked pydantic models
  • xAPI video initialized pydantic models
  • xAPI video paused pydantic models
  • convert command to transform edX events to xAPI format
  • EdX to xAPI converters for page viewed and page_close events
  • Implement core event format converter
  • xAPI video played pydantic models
  • xAPI page viewed and page terminated pydantic models
  • Implement edX navigational events pydantic models
  • Implement edX enrollment events pydantic models
  • Install security updates in project Docker images
  • Model selector to retrieve associated pydantic model of a given event
  • validate command to lint edX events using pydantic models
  • Support all available bulk operation types for the elasticsearch backend (create, index, update, delete) using the --es-op-type option
"},{"location":"CHANGELOG/#changed_19","title":"Changed","text":"
  • Upgrade elasticsearch to 7.13.2
  • Upgrade python-swiftclient to 3.12.0
  • Upgrade click to 8.0.1
  • Upgrade click-option-group to 0.5.3
  • Upgrade pydantic to 1.8.2
  • Upgrade sentry_sdk to 1.1.0
  • Rename edX models
  • Migrate model tests from factories to hypothesis strategies
  • Tray: switch from openshift to k8s (BC)
  • Tray: remove useless deployment probes
"},{"location":"CHANGELOG/#fixed_10","title":"Fixed","text":"
  • Tray: remove version immutable field in DC selector
"},{"location":"CHANGELOG/#120_-_2021-02-26","title":"1.2.0 - 2021-02-26","text":""},{"location":"CHANGELOG/#added_14","title":"Added","text":"
  • edX server event pydantic model and factory
  • edX page_close browser event pydantic model and factory
  • Tray: allow to specify a self-generated elasticsearch cluster CA certificate
"},{"location":"CHANGELOG/#fixed_11","title":"Fixed","text":"
  • Tray: add missing Swift variables in the secret
  • Tray: fix pods anti-affinity selector
"},{"location":"CHANGELOG/#removed_5","title":"Removed","text":"
  • pandas is no longer required
"},{"location":"CHANGELOG/#110_-_2021-02-04","title":"1.1.0 - 2021-02-04","text":""},{"location":"CHANGELOG/#added_15","title":"Added","text":"
  • Support for Swift storage backend
  • Use the push command --ignore-errors option to ignore ES bulk import errors
  • The elasticsearch backend now accepts passing all supported client options
"},{"location":"CHANGELOG/#changed_20","title":"Changed","text":"
  • Upgrade pyyaml to 5.4.1
  • Upgrade pandas to 1.2.1
"},{"location":"CHANGELOG/#removed_6","title":"Removed","text":"
  • click_log is no longer required as we are able to configure logging
"},{"location":"CHANGELOG/#100_-_2021-01-13","title":"1.0.0 - 2021-01-13","text":""},{"location":"CHANGELOG/#added_16","title":"Added","text":"
  • Implement base CLI commands (list, extract, fetch & push) for supported backends
  • Support for ElasticSearch database backend
  • Support for LDP storage backend
  • Support for FS storage backend
  • Parse (gzipped) tracking logs in GELF format
  • Support for application\u2019s configuration file
  • Add optional sentry integration
  • Distribute Arnold\u2019s tray to deploy Ralph in a k8s cluster as cronjobs
"},{"location":"LICENSE/","title":"License","text":"

MIT License

Copyright (c) 2020-present France Universit\u00e9 Num\u00e9rique

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \u201cSoftware\u201d), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED \u201cAS IS\u201d, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

"},{"location":"UPGRADE/","title":"Upgrade","text":"

All instructions to upgrade this project from one release to the next will be documented in this file. Upgrades must be run sequentially, meaning you should not skip minor/major releases while upgrading (fix releases can be skipped).

This project adheres to Semantic Versioning.

"},{"location":"UPGRADE/#4x_to_5y","title":"4.x to 5.y","text":""},{"location":"UPGRADE/#upgrade_learning_events_models","title":"Upgrade learning events models","text":"

The xAPI learning statements validator and converter are built with Pydantic. Ralph 5.x is compatible with Pydantic 2.x. Please refer to the Pydantic migration guide if you are using the Ralph models feature.

Most optional fields in Pydantic models default to None in Ralph 5.y. If you serialize Pydantic models from Ralph and want to keep the same content in your serialization, set exclude_none to True in the model_dump serialization method.
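For illustration, here is a minimal sketch of this behaviour with Pydantic 2.x, using a hypothetical stand-in model rather than an actual Ralph model:

from typing import Optional

from pydantic import BaseModel


class ExampleStatement(BaseModel):
    # Stand-in for a Ralph statement model: optional fields default to None.
    id: Optional[str] = None
    timestamp: Optional[str] = None
    verb: str


statement = ExampleStatement(verb="http://adlnet.gov/expapi/verbs/created")

# Default dump keeps the None-valued optional fields.
print(statement.model_dump())
# {'id': None, 'timestamp': None, 'verb': 'http://adlnet.gov/expapi/verbs/created'}

# exclude_none drops them, preserving the pre-5.y serialization content.
print(statement.model_dump(exclude_none=True))
# {'verb': 'http://adlnet.gov/expapi/verbs/created'}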

"},{"location":"UPGRADE/#3x_to_4y","title":"3.x to 4.y","text":""},{"location":"UPGRADE/#upgrade_user_credentials","title":"Upgrade user credentials","text":"

To conform to xAPI specifications, we need to represent users as xAPI Agents. You must therefore delete and re-create the credentials file using the updated cli, or you can modify it directly to add the agent field. The credentials file is located in { RALPH_APP_DIR }/{ RALPH_AUTH_FILE } (defaults to .ralph/auth.json). Each user profile must follow the pattern below (see this post for examples of valid agent objects):

{\n  \"username\": \"USERNAME_UNCHANGED\",\n  \"hash\": \"PASSWORD_HASH_UNCHANGED\",\n  \"scopes\": [ LIST_OF_SCOPES_UNCHANGED ],\n  \"agent\": { A_VALID_AGENT_OBJECT }\n}\n
Agent can take one of the following forms, as specified by the xAPI specification:

  • mbox:
\"agent\": {\n      \"mbox\": \"mailto:john.doe@example.com\"\n}\n
  • mbox_sha1sum:
\"agent\": {\n      \"mbox_sha1sum\": \"ebd31e95054c018b10727ccffd2ef2ec3a016ee9\"\n}\n
  • openid:
\"agent\": {\n      \"openid\": \"http://foo.openid.example.org/\"\n}\n
  • account:
\"agent\": {\n      \"account\": {\n        \"name\": \"simonsAccountName\",\n        \"homePage\": \"http://www.exampleHomePage.com\"\n      }\n}\n

For example, here is a valid auth.json file:

[\n  {\n    \"username\": \"john.doe@example.com\",\n    \"hash\": \"$2b$12$yBXrzIuRIk6yaft5KUgVFOIPv0PskCCh9PXmF2t7pno.qUZ5LK0D2\",\n    \"scopes\": [\"example_scope\"],\n    \"agent\": {\n      \"mbox\": \"mailto:john.doe@example.com\"\n    }\n  },\n  {\n    \"username\": \"simon.says@example.com\",\n    \"hash\": \"$2b$12$yBXrzIuRIk6yaft5KUgVFOIPv0PskCCh9PXmF2t7pno.qUZ5LK0D2\",\n    \"scopes\": [\"second_scope\", \"third_scope\"],\n    \"agent\": {\n      \"account\": {\n        \"name\": \"simonsAccountName\",\n        \"homePage\": \"http://www.exampleHomePage.com\"\n      }\n    }\n  }\n]\n
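The hash values in the example above are bcrypt hashes. If you prefer to edit auth.json by hand rather than using the ralph auth CLI, a hash in the same format can be generated with the bcrypt library; this is a minimal sketch under that assumption, and the password value is a placeholder:

import bcrypt

# Placeholder password; replace with the user's real secret.
password = b"PASSWORD"

# Produces a "$2b$12$..." style hash like the ones shown in the auth.json example above.
print(bcrypt.hashpw(password, bcrypt.gensalt(rounds=12)).decode())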
"},{"location":"UPGRADE/#upgrade_ralph_cli_usage","title":"Upgrade Ralph CLI usage","text":"

If you are using Ralph\u2019s CLI, the following changes may affect you:

  • The ralph fetch command changed to ralph read
  • The -b ws backend option changed to -b async_ws
    • The corresponding --ws-uri option changed to --async-ws-uri
  • The -c --chunk-size option changed to -s --chunk-size
  • The DEFAULT_BACKEND_CHUNK_SIZE environment variable configuration is removed in favor of allowing each backend to define its own defaults:

    Environment variable for default (read) chunk size, per backend:

      • async_es/es: RALPH_BACKENDS__DATA__ES__READ_CHUNK_SIZE=500
      • async_lrs/lrs: RALPH_BACKENDS__DATA__LRS__READ_CHUNK_SIZE=500
      • async_mongo/mongo: RALPH_BACKENDS__DATA__MONGO__READ_CHUNK_SIZE=500
      • clickhouse: RALPH_BACKENDS__DATA__CLICKHOUSE__READ_CHUNK_SIZE=500
      • fs: RALPH_BACKENDS__DATA__FS__READ_CHUNK_SIZE=4096
      • ldp: RALPH_BACKENDS__DATA__LDP__READ_CHUNK_SIZE=4096
      • s3: RALPH_BACKENDS__DATA__S3__READ_CHUNK_SIZE=4096
      • swift: RALPH_BACKENDS__DATA__SWIFT__READ_CHUNK_SIZE=4096
  • The ralph push command changed to ralph write

  • The -c --chunk-size option changed to -s --chunk-size
  • The DEFAULT_BACKEND_CHUNK_SIZE environment variable configuration is removed in favor of allowing each backend to define its own defaults:

    Environment variable for default (write) chunk size, per backend:

      • async_es/es: RALPH_BACKENDS__DATA__ES__WRITE_CHUNK_SIZE=500
      • async_lrs/lrs: RALPH_BACKENDS__DATA__LRS__WRITE_CHUNK_SIZE=500
      • async_mongo/mongo: RALPH_BACKENDS__DATA__MONGO__WRITE_CHUNK_SIZE=500
      • clickhouse: RALPH_BACKENDS__DATA__CLICKHOUSE__WRITE_CHUNK_SIZE=500
      • fs: RALPH_BACKENDS__DATA__FS__WRITE_CHUNK_SIZE=4096
      • ldp: RALPH_BACKENDS__DATA__LDP__WRITE_CHUNK_SIZE=4096
      • s3: RALPH_BACKENDS__DATA__S3__WRITE_CHUNK_SIZE=4096
      • swift: RALPH_BACKENDS__DATA__SWIFT__WRITE_CHUNK_SIZE=4096
  • Environment variables used to configure backend options for CLI usage (read/write/list commands) changed their prefix: RALPH_BACKENDS__{{DATABASE or HTTP or STORAGE or STREAM}}__{{BACKEND}}__{{OPTION}} changed to RALPH_BACKENDS__DATA__{{BACKEND}}__{{OPTION}}

  • Environment variables used to configure backend options for LRS usage (runserver command) changed their prefix: RALPH_BACKENDS__{{DATABASE}}__{{BACKEND}}__{{OPTION}} changed to RALPH_BACKENDS__LRS__{{BACKEND}}__{{OPTION}}
"},{"location":"UPGRADE/#upgrade_history_syntax","title":"Upgrade history syntax","text":"

CLI syntax has been changed from fetch & push to read & write, affecting the command history. You must replace the command history after updating:

  • locate your history file path, which is in { RALPH_APP_DIR }/history.json (defaults to .ralph/history.json)
  • run the commands below to update the history

sed -i 's/\"fetch\"/\"read\"/g' { my_history_file_path }\nsed -i 's/\"push\"/\"write\"/g' { my_history_file_path }\n
"},{"location":"UPGRADE/#upgrade_ralph_library_usage_backends","title":"Upgrade Ralph library usage (backends)","text":"

If you use Ralph\u2019s backends in your application, the following changes might affect you:

Backends from ralph.backends.database, ralph.backends.http, ralph.backends.stream, and ralph.backends.storage packages have moved to a single ralph.backends.data package.

Ralph v3 (database/http/storage/stream) backends map to Ralph v4 data backends as follows:

  • ralph.backends.database.clickhouse.ClickHouseDatabase -> ralph.backends.data.clickhouse.ClickHouseDataBackend
  • ralph.backends.database.es.ESDatabase -> ralph.backends.data.es.ESDataBackend
  • ralph.backends.database.mongo.MongoDatabase -> ralph.backends.data.mongo.MongoDataBackend
  • ralph.backends.http.async_lrs.AsyncLRSHTTP -> ralph.backends.data.async_lrs.AsyncLRSDataBackend
  • ralph.backends.http.lrs.LRSHTTP -> ralph.backends.data.lrs.LRSDataBackend
  • ralph.backends.storage.fs.FSStorage -> ralph.backends.data.fs.FSDataBackend
  • ralph.backends.storage.ldp.LDPStorage -> ralph.backends.data.ldp.LDPDataBackend
  • ralph.backends.storage.s3.S3Storage -> ralph.backends.data.s3.S3DataBackend
  • ralph.backends.storage.swift.SwiftStorage -> ralph.backends.data.swift.SwiftDataBackend
  • ralph.backends.stream.ws.WSStream -> ralph.backends.data.async_ws.AsyncWSDataBackend

LRS-specific query_statements and query_statements_by_ids database backend methods have moved to a dedicated ralph.backends.lrs.BaseLRSBackend interface that extends the data backend interface with these two methods.

The query_statements_by_ids method return type changed to Iterator[dict].
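As a rough, hypothetical sketch of the new interface (it assumes the backend can be instantiated with default settings read from the environment, which is an assumption, and uses the ESLRSBackend class from the mapping below):

from ralph.backends.lrs.es import ESLRSBackend

# Hypothetical: instantiate without explicit settings, assuming defaults are picked up from the environment.
backend = ESLRSBackend()

# query_statements_by_ids now yields plain dictionaries (Iterator[dict]).
for statement in backend.query_statements_by_ids(["12345678-1234-5678-1234-567812345678"]):
    print(statement)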

Ralph v3 database backends for lrs usage map to Ralph v4 LRS data backends as follows:

  • ralph.backends.database.clickhouse.ClickHouseDatabase -> ralph.backends.lrs.clickhouse.ClickHouseLRSBackend
  • ralph.backends.database.es.ESDatabase -> ralph.backends.lrs.es.ESLRSBackend
  • ralph.backends.database.mongo.MongoDatabase -> ralph.backends.lrs.mongo.MongoLRSBackend

Backend interface differences

  • Data backends are read-only by default
  • Data backends that support write operations inherit from the ralph.backends.data.base.Writable interface
  • Data backends that support list operations inherit from the ralph.backends.data.base.Listable interface
  • Data backends that support LRS operations (query_statements/query_statements_by_ids) inherit from the ralph.backends.lrs.BaseLRSBackend interface
  • __init__(self, **kwargs) changed to __init__(self, settings: DataBackendSettings) where each DataBackend defines its own Settings object. For example, the FSDataBackend uses FSDataBackendSettings
  • stream and get methods changed to read
  • put methods changed to write

Backend usage migration example

Ralph v3 using ESDatabase:

from ralph.conf import ESClientOptions\nfrom ralph.backends.database.es import ESDatabase, ESQuery\n\n# Instantiate the backend.\nbackend = ESDatabase(\n  hosts=\"localhost\",\n  index=\"statements\",\n  client_options=ESClientOptions(verify_certs=False)\n)\n# Read records from backend.\nquery = ESQuery(query={\"query\": {\"term\": {\"modulo\": 0}}})\nes_statements = list(backend.get(query))\n\n# Write records to backend.\nbackend.put([{\"id\": 1}])\n

Ralph v4 using ESDataBackend:

from ralph.backends.data.es import (\n  ESClientOptions,\n  ESDataBackend,\n  ESDataBackendSettings,\n  ESQuery,\n)\n\n# Instantiate the backend.\nsettings = ESDataBackendSettings(\n  HOSTS=\"localhost\",\n  INDEX=\"statements\",\n  CLIENT_OPTIONS=ESClientOptions(verify_certs=False)\n)\nbackend = ESDataBackend(settings)\n\n# Read records from backend.\nquery = ESQuery(query={\"term\": {\"modulo\": 0}})\nes_statements = list(backend.read(query))\n\n# Write records to backend.\nbackend.write([{\"id\": 1}])\n
"},{"location":"UPGRADE/#upgrade_clickhouse_schema","title":"Upgrade ClickHouse schema","text":"

If you are using the ClickHouse backend, schema changes have been made to drop the existing JSON column in favor of the String version of the same data. See this issue for details.

Ralph does not manage the ClickHouse schema, so if you have existing data you will need to manually alter it as an admin user. Note: this will rewrite the statements table, which may take a long time if you have many rows. The command to run is:

-- If RALPH_BACKENDS__DATA__CLICKHOUSE__DATABASE is 'xapi'\n-- and RALPH_BACKENDS__DATA__CLICKHOUSE__EVENT_TABLE_NAME is 'test'\n\nALTER TABLE xapi.test DROP COLUMN event, RENAME COLUMN event_str to event;\n
"},{"location":"commands/","title":"Commands","text":""},{"location":"commands/#ralph","title":"ralph","text":"

The cli is a stream-based tool to play with your logs.

It offers functionalities to:

  • Validate or convert learning data in different standards
  • Read and write learning data to various databases or servers
  • Manage an instance of a Ralph LRS server

Usage:

ralph [OPTIONS] COMMAND [ARGS]...\n

Options:

  -v, --verbosity LVL  Either CRITICAL, ERROR, WARNING, INFO (default) or\n                       DEBUG\n  --version            Show the version and exit.\n  --help               Show this message and exit.\n
"},{"location":"commands/#ralph-auth","title":"ralph auth","text":"

Generate credentials for LRS HTTP basic authentication.

Usage:

ralph auth [OPTIONS]\n

Options:

  -u, --username TEXT             The user for which we generate credentials.\n                                  [required]\n  -p, --password TEXT             The password to encrypt for this user. Will\n                                  be prompted if missing.  [required]\n  -s, --scope TEXT                The user scope(s). This option can be\n                                  provided multiple times.  [required]\n  -t, --target TEXT               The target location where statements are\n                                  stored for the user.\n  -M, --agent-ifi-mbox TEXT       The mbox Inverse Functional Identifier of\n                                  the associated agent.\n  -S, --agent-ifi-mbox-sha1sum TEXT\n                                  The mbox-sha1sum Inverse Functional\n                                  Identifier of the associated agent.\n  -O, --agent-ifi-openid TEXT     The openid Inverse Functional Identifier of\n                                  the associated agent.\n  -A, --agent-ifi-account TEXT...\n                                  Input \"{name} {homePage}\". The account\n                                  Inverse Functional Identifier of the\n                                  associated agent.\n  -N, --agent-name TEXT           The name of the associated agent.\n  -w, --write-to-disk             Write new credentials to the LRS\n                                  authentication file.\n  --help                          Show this message and exit.\n
"},{"location":"commands/#ralph-convert","title":"ralph convert","text":"

Convert input events to a given format.

Usage:

ralph convert [OPTIONS]\n

Options:

  From edX to xAPI converter options: \n    -u, --uuid-namespace TEXT     The UUID namespace to use for the `ID` field\n                                  generation\n    -p, --platform-url TEXT       The `actor.account.homePage` to use in the\n                                  xAPI statements  [required]\n  -f, --from [edx]                Input events format to convert  [required]\n  -t, --to [xapi]                 Output events format  [required]\n  -I, --ignore-errors             Continue writing regardless of raised errors\n  -F, --fail-on-unknown           Stop converting at first unknown event\n  --help                          Show this message and exit.\n
"},{"location":"commands/#ralph-extract","title":"ralph extract","text":"

Extract input events from a container format using a dedicated parser.

Usage:

ralph extract [OPTIONS]\n

Options:

  -p, --parser [gelf|es]  Container format parser used to extract events\n                          [required]\n  --help                  Show this message and exit.\n
"},{"location":"commands/#ralph-validate","title":"ralph validate","text":"

Validate input events of given format.

Usage:

ralph validate [OPTIONS]\n

Options:

  -f, --format [edx|xapi]  Input events format to validate  [required]\n  -I, --ignore-errors      Continue validating regardless of raised errors\n  -F, --fail-on-unknown    Stop validating at first unknown event\n  --help                   Show this message and exit.\n
"},{"location":"contribute/","title":"Contributing to Ralph","text":"

Thank you for considering contributing to Ralph! We appreciate your interest and support. This documentation provides guidelines on how to contribute effectively to our project.

"},{"location":"contribute/#issues","title":"Issues","text":"

Issues are a valuable way to contribute to Ralph. They can include bug reports, feature requests, and general questions or discussions. When creating or interacting with issues, please keep the following in mind:

"},{"location":"contribute/#1_search_for_existing_issues","title":"1. Search for existing issues","text":"

Before creating a new issue, search the existing issues to see if your concern has already been raised. If you find a related issue, you can add your input or follow the discussion. Feel free to engage in discussions, offer help, or provide feedback on existing issues. Your input is valuable in shaping the project\u2019s future.

"},{"location":"contribute/#2_creating_a_new_issue","title":"2. Creating a new issue","text":"

Use the provided issue template that best fits your concern. Provide as much information as possible when writing your issue. Your issue will be reviewed by a project maintainer, and you may be offered to open a PR if you want to contribute to the code. If not, and if your issue is relevant, a contributor will apply the changes to the project. The issue will then be automatically closed when the PR is merged.

Issues will be closed by project maintainers if they are deemed invalid. You can always reopen an issue if you believe it hasn\u2019t been adequately addressed.

"},{"location":"contribute/#3_code_of_conduct_in_discussion","title":"3. Code of conduct in discussion","text":"
  • Be respectful and considerate when participating in discussions.
  • Avoid using offensive language, and maintain a positive and collaborative tone.
  • Stay on topic and avoid derailing discussions.
"},{"location":"contribute/#discussions","title":"Discussions","text":"

Discussions in the Ralph repository are a place for open-ended conversations, questions, and general community interactions. Here\u2019s how to effectively use discussions:

"},{"location":"contribute/#1_creating_a_discussion","title":"1. Creating a discussion","text":"
  • Use a clear and concise title that summarizes the topic.
  • In the description, provide context and details regarding the discussion.
  • Use labels to categorize the discussion (e.g., \u201cquestion,\u201d \u201cgeneral discussion,\u201d \u201cannouncements,\u201d etc.).
"},{"location":"contribute/#2_participating_in_discussions","title":"2. Participating in discussions","text":"
  • Engage in conversations respectfully, respecting others\u2019 opinions.
  • Avoid spamming or making off-topic comments.
  • Help answer questions when you can.
"},{"location":"contribute/#pull_requests_pr","title":"Pull Requests (PR)","text":"

Contributing to Ralph through pull requests is a powerful way to advance the project. If you want to make changes or add new features, please follow these steps to submit a PR:

"},{"location":"contribute/#1_fork_the_repository","title":"1. Fork the repository","text":"

Begin by forking Ralph project\u2019s repository.

"},{"location":"contribute/#2_clone_the_fork","title":"2. Clone the fork","text":"

Clone the forked repository to your local machine and change the directory to the project folder using the following commands (replace <your_fork> with your GitHub username):

git clone https://github.com/<your_fork>/ralph.git\ncd ralph\n
"},{"location":"contribute/#3_create_a_new_branch","title":"3. Create a new branch","text":"

Create a new branch for your changes, ideally with a descriptive name:

git checkout -b your-new-feature\n
"},{"location":"contribute/#4_make_changes","title":"4. Make changes","text":"

Implement the changes or additions to the code, ensuring it follows OpenFUN coding and documentation standards.

For comprehensive guidance on starting your development journey with Ralph and preparing your pull request, please refer to our dedicated Start developing with Ralph tutorial.

When committing your changes, please adhere to OpenFUN commit practices. Follow the low granularity commit splitting approach and use commit messages based on the Angular commit message guidelines.

"},{"location":"contribute/#5_push_changes","title":"5. Push changes","text":"

Push your branch to your GitHub repository:

git push origin your-new-feature\n
"},{"location":"contribute/#6_create_a_pull_request","title":"6. Create a pull request","text":"

To initiate a Pull Request (PR), head to Ralph project\u2019s GitHub repository and click on New Pull Request.

Set your branch as the source and Ralph project\u2019s main branch as the target.

Provide a clear title for your PR and make use of the provided PR body template to document the changes made by your PR. This helps streamline the review process and maintain a well-documented project history.

"},{"location":"contribute/#7_review_and_discussion","title":"7. Review and discussion","text":"

Ralph project maintainers will review your PR. Be prepared to make necessary changes or address any feedback. Patience during this process is appreciated.

"},{"location":"contribute/#8_merge","title":"8. Merge","text":"

Once your PR is approved, Ralph maintainers will merge your changes into the main project. Congratulations, you\u2019ve successfully contributed to Ralph! \ud83c\udf89

"},{"location":"features/api/","title":"LRS HTTP server","text":"

Ralph implements the Learning Record Store (LRS) specification defined by ADL.

Ralph LRS, based on FastAPI, has the following key features:

  • Supports multiple databases through different backends
  • Secured with multiple authentication methods
  • Supports multitenancy
  • Enables the Total Learning Architecture with statements forwarding
  • Monitored thanks to the Sentry integration
"},{"location":"features/api/#api_documentation","title":"API documentation","text":""},{"location":"features/api/#fastapi_010","title":"FastAPI 0.1.0","text":""},{"location":"features/api/#endpoints","title":"Endpoints","text":""},{"location":"features/api/#get_xapistatements","title":"GET /xAPI/statements/","text":"

Get

Description

Read a single xAPI Statement or multiple xAPI Statements.

LRS Specification: https://github.com/adlnet/xAPI-Spec/blob/1.0.3/xAPI-Communication.md#213-get-statements

Input parameters

Each parameter is listed as Parameter (in, type, default): description. None of the parameters below is nullable.

  • HTTPBasic (header, string, default: N/A): Basic authentication
  • HTTPBasic (header, string, default: N/A): Basic authentication
  • activity (query, string): Filter, only return Statements for which the Object of the Statement is an Activity with the specified id
  • agent (query, string): Filter, only return Statements for which the specified Agent or Group is the Actor or Object of the Statement
  • ascending (query, boolean, default: False): If \"true\", return results in ascending order of stored time
  • attachments (query, boolean, default: False): **Not implemented** If \"true\", the LRS uses the multipart response format and includes all attachments as described previously. If \"false\", the LRS sends the prescribed response with Content-Type application/json and does not send attachment data.
  • format (query, string, default: exact): **Not implemented** If \"ids\", only include minimum information necessary in Agent, Activity, Verb and Group Objects to identify them. For Anonymous Groups this means including the minimum information needed to identify each member. If \"exact\", return Agent, Activity, Verb and Group Objects populated exactly as they were when the Statement was received. An LRS requesting Statements for the purpose of importing them would use a format of \"exact\" in order to maintain Statement Immutability. If \"canonical\", return Activity Objects and Verbs populated with the canonical definition of the Activity Objects and Display of the Verbs as determined by the LRS, after applying the language filtering process defined below, and return the original Agent and Group Objects as in \"exact\" mode.
  • limit (query, integer, default: 100): Maximum number of Statements to return. 0 indicates return the maximum the server will allow
  • mine (query, boolean, default: False): If \"true\", return only the results for which the authority matches the \"agent\" associated to the user that is making the query.
  • pit_id (query, string): Point-in-time ID to ensure consistency of search requests through multiple pages. NB: for internal use, not part of the LRS specification.
  • registration (query, string): **Not implemented** Filter, only return Statements matching the specified registration id
  • related_activities (query, boolean, default: False): **Not implemented** Apply the Activity filter broadly. Include Statements for which the Object, any of the context Activities, or any of those properties in a contained SubStatement match the Activity parameter, instead of that parameter's normal behaviour
  • related_agents (query, boolean, default: False): **Not implemented** Apply the Agent filter broadly. Include Statements for which the Actor, Object, Authority, Instructor, Team, or any of these properties in a contained SubStatement match the Agent parameter, instead of that parameter's normal behaviour.
  • search_after (query, string): Sorting data to allow pagination through large number of search results. NB: for internal use, not part of the LRS specification.
  • since (query, string): Only Statements stored since the specified Timestamp (exclusive) are returned
  • statementId (query, string): Id of Statement to fetch
  • until (query, string): Only Statements stored at or before the specified Timestamp are returned
  • verb (query, string): Filter, only return Statements matching the specified Verb id
  • voidedStatementId (query, string): **Not implemented** Id of voided Statement to fetch
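For illustration only, a query against this endpoint with HTTP Basic authentication could look like the following sketch using the requests library; the URL, credentials and filter values are placeholders, not defaults:

import requests

# Placeholder LRS location and credentials.
LRS_URL = "http://localhost:8100/xAPI/statements/"
AUTH = ("ralph", "secret")

response = requests.get(
    LRS_URL,
    auth=AUTH,
    params={
        "verb": "http://adlnet.gov/expapi/verbs/created",  # filter on a Verb id
        "limit": 10,  # maximum number of Statements to return
        "ascending": "true",  # oldest stored Statements first
    },
    timeout=10,
)
response.raise_for_status()
print(response.json())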

Response 200 OK

application/json Schema of the response body
{\n    \"type\": \"object\",\n    \"title\": \"Response Get Xapi Statements  Get\"\n}\n

Response 422 Unprocessable Entity

application/json

{\n    \"detail\": [\n        {\n            \"loc\": [\n                null\n            ],\n            \"msg\": \"string\",\n            \"type\": \"string\"\n        }\n    ]\n}\n
\u26a0\ufe0f This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.

Schema of the response body
{\n    \"properties\": {\n        \"detail\": {\n            \"items\": {\n                \"$ref\": \"#/components/schemas/ValidationError\"\n            },\n            \"type\": \"array\",\n            \"title\": \"Detail\"\n        }\n    },\n    \"type\": \"object\",\n    \"title\": \"HTTPValidationError\"\n}\n
"},{"location":"features/api/#put_xapistatements","title":"PUT /xAPI/statements/","text":"

Put

Description

Store a single statement as a single member of a set.

LRS Specification: https://github.com/adlnet/xAPI-Spec/blob/1.0.3/xAPI-Communication.md#211-put-statements

Input parameters

Each parameter is listed as Parameter (in, type, default): description. None of the parameters below is nullable.

  • HTTPBasic (header, string, default: N/A): Basic authentication
  • HTTPBasic (header, string, default: N/A): Basic authentication
  • statementId (query, string)

Request body

application/json

{\n    \"actor\": null,\n    \"id\": \"90971f98-4a68-41d2-bf30-44c840e050be\",\n    \"object\": {\n        \"id\": \"string\"\n    },\n    \"verb\": {\n        \"id\": \"string\"\n    }\n}\n
\u26a0\ufe0f This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.

Schema of the request body
{\n    \"properties\": {\n        \"actor\": {\n            \"anyOf\": [\n                {\n                    \"$ref\": \"#/components/schemas/BaseXapiAgentWithMbox\"\n                },\n                {\n                    \"$ref\": \"#/components/schemas/BaseXapiAgentWithMboxSha1Sum\"\n                },\n                {\n                    \"$ref\": \"#/components/schemas/BaseXapiAgentWithOpenId\"\n                },\n                {\n                    \"$ref\": \"#/components/schemas/BaseXapiAgentWithAccount\"\n                },\n                {\n                    \"$ref\": \"#/components/schemas/BaseXapiAnonymousGroup\"\n                },\n                {\n                    \"$ref\": \"#/components/schemas/BaseXapiIdentifiedGroupWithMbox\"\n                },\n                {\n                    \"$ref\": \"#/components/schemas/BaseXapiIdentifiedGroupWithMboxSha1Sum\"\n                },\n                {\n                    \"$ref\": \"#/components/schemas/BaseXapiIdentifiedGroupWithOpenId\"\n                },\n                {\n                    \"$ref\": \"#/components/schemas/BaseXapiIdentifiedGroupWithAccount\"\n                }\n            ],\n            \"title\": \"Actor\"\n        },\n        \"id\": {\n            \"type\": \"string\",\n            \"format\": \"uuid\",\n            \"title\": \"Id\"\n        },\n        \"object\": {\n            \"$ref\": \"#/components/schemas/LaxObjectField\"\n        },\n        \"verb\": {\n            \"$ref\": \"#/components/schemas/LaxVerbField\"\n        }\n    },\n    \"type\": \"object\",\n    \"required\": [\n        \"actor\",\n        \"object\",\n        \"verb\"\n    ],\n    \"title\": \"LaxStatement\",\n    \"description\": \"Pydantic model for lax statement.\\n\\nIt accepts without validating all fields beyond the bare minimum required to\\nqualify an object as an XAPI statement.\"\n}\n

Response 204 No Content

Response 400 Bad Request

application/json

{\n    \"detail\": \"string\"\n}\n
\u26a0\ufe0f This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.

Schema of the response body
{\n    \"properties\": {\n        \"detail\": {\n            \"type\": \"string\",\n            \"title\": \"Detail\"\n        }\n    },\n    \"type\": \"object\",\n    \"required\": [\n        \"detail\"\n    ],\n    \"title\": \"ErrorDetail\",\n    \"description\": \"Pydantic model for errors raised detail.\\n\\nType for return value for errors raised in API endpoints.\\nUseful for OpenAPI documentation generation.\"\n}\n

Response 409 Conflict

application/json

{\n    \"detail\": \"string\"\n}\n
\u26a0\ufe0f This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.

Schema of the response body
{\n    \"properties\": {\n        \"detail\": {\n            \"type\": \"string\",\n            \"title\": \"Detail\"\n        }\n    },\n    \"type\": \"object\",\n    \"required\": [\n        \"detail\"\n    ],\n    \"title\": \"ErrorDetail\",\n    \"description\": \"Pydantic model for errors raised detail.\\n\\nType for return value for errors raised in API endpoints.\\nUseful for OpenAPI documentation generation.\"\n}\n

Response 422 Unprocessable Entity

application/json

{\n    \"detail\": [\n        {\n            \"loc\": [\n                null\n            ],\n            \"msg\": \"string\",\n            \"type\": \"string\"\n        }\n    ]\n}\n
\u26a0\ufe0f This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.

Schema of the response body
{\n    \"properties\": {\n        \"detail\": {\n            \"items\": {\n                \"$ref\": \"#/components/schemas/ValidationError\"\n            },\n            \"type\": \"array\",\n            \"title\": \"Detail\"\n        }\n    },\n    \"type\": \"object\",\n    \"title\": \"HTTPValidationError\"\n}\n
"},{"location":"features/api/#post_xapistatements","title":"POST /xAPI/statements/","text":"

Post

Description

Store a set of statements (or a single statement as a single member of a set).

NB: at this time, using POST to make a GET request is not supported. LRS Specification: https://github.com/adlnet/xAPI-Spec/blob/1.0.3/xAPI-Communication.md#212-post-statements

Input parameters

Each parameter is listed as Parameter (in, type, default): description. None of the parameters below is nullable.

  • HTTPBasic (header, string, default: N/A): Basic authentication
  • HTTPBasic (header, string, default: N/A): Basic authentication

Request body

application/json Schema of the request body
{\n    \"anyOf\": [\n        {\n            \"$ref\": \"#/components/schemas/LaxStatement\"\n        },\n        {\n            \"items\": {\n                \"$ref\": \"#/components/schemas/LaxStatement\"\n            },\n            \"type\": \"array\"\n        }\n    ],\n    \"title\": \"Statements\"\n}\n

Response 200 OK

application/json

[\n    null\n]\n
\u26a0\ufe0f This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.

Schema of the response body
{\n    \"items\": {},\n    \"type\": \"array\",\n    \"title\": \"Response Post Xapi Statements  Post\"\n}\n

Response 400 Bad Request

application/json

{\n    \"detail\": \"string\"\n}\n
\u26a0\ufe0f This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.

Schema of the response body
{\n    \"properties\": {\n        \"detail\": {\n            \"type\": \"string\",\n            \"title\": \"Detail\"\n        }\n    },\n    \"type\": \"object\",\n    \"required\": [\n        \"detail\"\n    ],\n    \"title\": \"ErrorDetail\",\n    \"description\": \"Pydantic model for errors raised detail.\\n\\nType for return value for errors raised in API endpoints.\\nUseful for OpenAPI documentation generation.\"\n}\n

Response 409 Conflict

application/json

{\n    \"detail\": \"string\"\n}\n
\u26a0\ufe0f This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.

Schema of the response body
{\n    \"properties\": {\n        \"detail\": {\n            \"type\": \"string\",\n            \"title\": \"Detail\"\n        }\n    },\n    \"type\": \"object\",\n    \"required\": [\n        \"detail\"\n    ],\n    \"title\": \"ErrorDetail\",\n    \"description\": \"Pydantic model for errors raised detail.\\n\\nType for return value for errors raised in API endpoints.\\nUseful for OpenAPI documentation generation.\"\n}\n

Response 422 Unprocessable Entity

application/json

{\n    \"detail\": [\n        {\n            \"loc\": [\n                null\n            ],\n            \"msg\": \"string\",\n            \"type\": \"string\"\n        }\n    ]\n}\n
\u26a0\ufe0f This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.

Schema of the response body
{\n    \"properties\": {\n        \"detail\": {\n            \"items\": {\n                \"$ref\": \"#/components/schemas/ValidationError\"\n            },\n            \"type\": \"array\",\n            \"title\": \"Detail\"\n        }\n    },\n    \"type\": \"object\",\n    \"title\": \"HTTPValidationError\"\n}\n
"},{"location":"features/api/#get_xapistatements_1","title":"GET /xAPI/statements","text":"

Get

Description

Read a single xAPI Statement or multiple xAPI Statements.

LRS Specification: https://github.com/adlnet/xAPI-Spec/blob/1.0.3/xAPI-Communication.md#213-get-statements

Input parameters

All of the following parameters are non-nullable:

  • HTTPBasic (header, string, default: N/A): Basic authentication
  • activity (query, string): Filter, only return Statements for which the Object of the Statement is an Activity with the specified id
  • agent (query, string): Filter, only return Statements for which the specified Agent or Group is the Actor or Object of the Statement
  • ascending (query, boolean, default: False): If \"true\", return results in ascending order of stored time
  • attachments (query, boolean, default: False): **Not implemented** If \"true\", the LRS uses the multipart response format and includes all attachments as described previously. If \"false\", the LRS sends the prescribed response with Content-Type application/json and does not send attachment data.
  • format (query, string, default: exact): **Not implemented** If \"ids\", only include minimum information necessary in Agent, Activity, Verb and Group Objects to identify them. For Anonymous Groups this means including the minimum information needed to identify each member. If \"exact\", return Agent, Activity, Verb and Group Objects populated exactly as they were when the Statement was received. An LRS requesting Statements for the purpose of importing them would use a format of \"exact\" in order to maintain Statement Immutability. If \"canonical\", return Activity Objects and Verbs populated with the canonical definition of the Activity Objects and Display of the Verbs as determined by the LRS, after applying the language filtering process defined below, and return the original Agent and Group Objects as in \"exact\" mode.
  • limit (query, integer, default: 100): Maximum number of Statements to return. 0 indicates return the maximum the server will allow
  • mine (query, boolean, default: False): If \"true\", return only the results for which the authority matches the \"agent\" associated to the user that is making the query.
  • pit_id (query, string): Point-in-time ID to ensure consistency of search requests through multiple pages. NB: for internal use, not part of the LRS specification.
  • registration (query, string): **Not implemented** Filter, only return Statements matching the specified registration id
  • related_activities (query, boolean, default: False): **Not implemented** Apply the Activity filter broadly. Include Statements for which the Object, any of the context Activities, or any of those properties in a contained SubStatement match the Activity parameter, instead of that parameter's normal behaviour
  • related_agents (query, boolean, default: False): **Not implemented** Apply the Agent filter broadly. Include Statements for which the Actor, Object, Authority, Instructor, Team, or any of these properties in a contained SubStatement match the Agent parameter, instead of that parameter's normal behaviour.
  • search_after (query, string): Sorting data to allow pagination through large number of search results. NB: for internal use, not part of the LRS specification.
  • since (query, string): Only Statements stored since the specified Timestamp (exclusive) are returned
  • statementId (query, string): Id of Statement to fetch
  • until (query, string): Only Statements stored at or before the specified Timestamp are returned
  • verb (query, string): Filter, only return Statements matching the specified Verb id
  • voidedStatementId (query, string): **Not implemented** Id of voided Statement to fetch

Response 200 OK

application/json Schema of the response body
{\n    \"type\": \"object\",\n    \"title\": \"Response Get Xapi Statements Get\"\n}\n

Response 422 Unprocessable Entity

application/json

{\n    \"detail\": [\n        {\n            \"loc\": [\n                null\n            ],\n            \"msg\": \"string\",\n            \"type\": \"string\"\n        }\n    ]\n}\n
\u26a0\ufe0f This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.

Schema of the response body
{\n    \"properties\": {\n        \"detail\": {\n            \"items\": {\n                \"$ref\": \"#/components/schemas/ValidationError\"\n            },\n            \"type\": \"array\",\n            \"title\": \"Detail\"\n        }\n    },\n    \"type\": \"object\",\n    \"title\": \"HTTPValidationError\"\n}\n
"},{"location":"features/api/#put_xapistatements_1","title":"PUT /xAPI/statements","text":"

Put

Description

Store a single statement as a single member of a set.

LRS Specification: https://github.com/adlnet/xAPI-Spec/blob/1.0.3/xAPI-Communication.md#211-put-statements

Input parameters

  • HTTPBasic (header, string, default: N/A, non-nullable): Basic authentication
  • statementId (query, string, non-nullable)

Request body

application/json

{\n    \"actor\": null,\n    \"id\": \"676abdad-ebe3-469e-8cb3-2b55db3e7845\",\n    \"object\": {\n        \"id\": \"string\"\n    },\n    \"verb\": {\n        \"id\": \"string\"\n    }\n}\n
\u26a0\ufe0f This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.

Schema of the request body
{\n    \"properties\": {\n        \"actor\": {\n            \"anyOf\": [\n                {\n                    \"$ref\": \"#/components/schemas/BaseXapiAgentWithMbox\"\n                },\n                {\n                    \"$ref\": \"#/components/schemas/BaseXapiAgentWithMboxSha1Sum\"\n                },\n                {\n                    \"$ref\": \"#/components/schemas/BaseXapiAgentWithOpenId\"\n                },\n                {\n                    \"$ref\": \"#/components/schemas/BaseXapiAgentWithAccount\"\n                },\n                {\n                    \"$ref\": \"#/components/schemas/BaseXapiAnonymousGroup\"\n                },\n                {\n                    \"$ref\": \"#/components/schemas/BaseXapiIdentifiedGroupWithMbox\"\n                },\n                {\n                    \"$ref\": \"#/components/schemas/BaseXapiIdentifiedGroupWithMboxSha1Sum\"\n                },\n                {\n                    \"$ref\": \"#/components/schemas/BaseXapiIdentifiedGroupWithOpenId\"\n                },\n                {\n                    \"$ref\": \"#/components/schemas/BaseXapiIdentifiedGroupWithAccount\"\n                }\n            ],\n            \"title\": \"Actor\"\n        },\n        \"id\": {\n            \"type\": \"string\",\n            \"format\": \"uuid\",\n            \"title\": \"Id\"\n        },\n        \"object\": {\n            \"$ref\": \"#/components/schemas/LaxObjectField\"\n        },\n        \"verb\": {\n            \"$ref\": \"#/components/schemas/LaxVerbField\"\n        }\n    },\n    \"type\": \"object\",\n    \"required\": [\n        \"actor\",\n        \"object\",\n        \"verb\"\n    ],\n    \"title\": \"LaxStatement\",\n    \"description\": \"Pydantic model for lax statement.\\n\\nIt accepts without validating all fields beyond the bare minimum required to\\nqualify an object as an XAPI statement.\"\n}\n

Response 204 No Content

Response 400 Bad Request

application/json

{\n    \"detail\": \"string\"\n}\n
\u26a0\ufe0f This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.

Schema of the response body
{\n    \"properties\": {\n        \"detail\": {\n            \"type\": \"string\",\n            \"title\": \"Detail\"\n        }\n    },\n    \"type\": \"object\",\n    \"required\": [\n        \"detail\"\n    ],\n    \"title\": \"ErrorDetail\",\n    \"description\": \"Pydantic model for errors raised detail.\\n\\nType for return value for errors raised in API endpoints.\\nUseful for OpenAPI documentation generation.\"\n}\n

Response 409 Conflict

application/json

{\n    \"detail\": \"string\"\n}\n
\u26a0\ufe0f This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.

Schema of the response body
{\n    \"properties\": {\n        \"detail\": {\n            \"type\": \"string\",\n            \"title\": \"Detail\"\n        }\n    },\n    \"type\": \"object\",\n    \"required\": [\n        \"detail\"\n    ],\n    \"title\": \"ErrorDetail\",\n    \"description\": \"Pydantic model for errors raised detail.\\n\\nType for return value for errors raised in API endpoints.\\nUseful for OpenAPI documentation generation.\"\n}\n

Response 422 Unprocessable Entity

application/json

{\n    \"detail\": [\n        {\n            \"loc\": [\n                null\n            ],\n            \"msg\": \"string\",\n            \"type\": \"string\"\n        }\n    ]\n}\n
\u26a0\ufe0f This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.

Schema of the response body
{\n    \"properties\": {\n        \"detail\": {\n            \"items\": {\n                \"$ref\": \"#/components/schemas/ValidationError\"\n            },\n            \"type\": \"array\",\n            \"title\": \"Detail\"\n        }\n    },\n    \"type\": \"object\",\n    \"title\": \"HTTPValidationError\"\n}\n
"},{"location":"features/api/#post_xapistatements_1","title":"POST /xAPI/statements","text":"

Post

Description

Store a set of statements (or a single statement as a single member of a set).

NB: at this time, using POST to make a GET request is not supported. LRS Specification: https://github.com/adlnet/xAPI-Spec/blob/1.0.3/xAPI-Communication.md#212-post-statements

Input parameters

  • HTTPBasic (header, string, default: N/A, non-nullable): Basic authentication

Request body

application/json Schema of the request body
{\n    \"anyOf\": [\n        {\n            \"$ref\": \"#/components/schemas/LaxStatement\"\n        },\n        {\n            \"items\": {\n                \"$ref\": \"#/components/schemas/LaxStatement\"\n            },\n            \"type\": \"array\"\n        }\n    ],\n    \"title\": \"Statements\"\n}\n

Response 200 OK

application/json

[\n    null\n]\n
\u26a0\ufe0f This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.

Schema of the response body
{\n    \"items\": {},\n    \"type\": \"array\",\n    \"title\": \"Response Post Xapi Statements Post\"\n}\n

Response 400 Bad Request

application/json

{\n    \"detail\": \"string\"\n}\n
\u26a0\ufe0f This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.

Schema of the response body
{\n    \"properties\": {\n        \"detail\": {\n            \"type\": \"string\",\n            \"title\": \"Detail\"\n        }\n    },\n    \"type\": \"object\",\n    \"required\": [\n        \"detail\"\n    ],\n    \"title\": \"ErrorDetail\",\n    \"description\": \"Pydantic model for errors raised detail.\\n\\nType for return value for errors raised in API endpoints.\\nUseful for OpenAPI documentation generation.\"\n}\n

Response 409 Conflict

application/json

{\n    \"detail\": \"string\"\n}\n
\u26a0\ufe0f This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.

Schema of the response body
{\n    \"properties\": {\n        \"detail\": {\n            \"type\": \"string\",\n            \"title\": \"Detail\"\n        }\n    },\n    \"type\": \"object\",\n    \"required\": [\n        \"detail\"\n    ],\n    \"title\": \"ErrorDetail\",\n    \"description\": \"Pydantic model for errors raised detail.\\n\\nType for return value for errors raised in API endpoints.\\nUseful for OpenAPI documentation generation.\"\n}\n

Response 422 Unprocessable Entity

application/json

{\n    \"detail\": [\n        {\n            \"loc\": [\n                null\n            ],\n            \"msg\": \"string\",\n            \"type\": \"string\"\n        }\n    ]\n}\n
\u26a0\ufe0f This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.

Schema of the response body
{\n    \"properties\": {\n        \"detail\": {\n            \"items\": {\n                \"$ref\": \"#/components/schemas/ValidationError\"\n            },\n            \"type\": \"array\",\n            \"title\": \"Detail\"\n        }\n    },\n    \"type\": \"object\",\n    \"title\": \"HTTPValidationError\"\n}\n
"},{"location":"features/api/#get_lbheartbeat","title":"GET /lbheartbeat","text":"

Lbheartbeat

Description

Load balancer heartbeat.

Return a 200 when the server is running.

Response 200 OK

application/json Schema of the response body"},{"location":"features/api/#get_heartbeat","title":"GET /heartbeat","text":"

Heartbeat

Description

Application heartbeat.

Return a 200 if all checks are successful.

Response 200 OK

application/json Schema of the response body"},{"location":"features/api/#get_whoami","title":"GET /whoami","text":"

Whoami

Description

Return the current user\u2019s username along with their scopes.

Input parameters

  • HTTPBasic (header, string, default: N/A, non-nullable): Basic authentication

Response 200 OK

application/json Schema of the response body
{\n    \"type\": \"object\",\n    \"title\": \"Response Whoami Whoami Get\"\n}\n
"},{"location":"features/api/#schemas","title":"Schemas","text":""},{"location":"features/api/#basexapiaccount","title":"BaseXapiAccount","text":"Name Type homePage string name string"},{"location":"features/api/#basexapiagentwithaccount","title":"BaseXapiAgentWithAccount","text":"Name Type account BaseXapiAccount name string objectType string"},{"location":"features/api/#basexapiagentwithmbox","title":"BaseXapiAgentWithMbox","text":"Name Type mbox string name string objectType string"},{"location":"features/api/#basexapiagentwithmboxsha1sum","title":"BaseXapiAgentWithMboxSha1Sum","text":"Name Type mbox_sha1sum string name string objectType string"},{"location":"features/api/#basexapiagentwithopenid","title":"BaseXapiAgentWithOpenId","text":"Name Type name string objectType string openid string(uri)"},{"location":"features/api/#basexapianonymousgroup","title":"BaseXapiAnonymousGroup","text":"Name Type member Array<> name string objectType string"},{"location":"features/api/#basexapiidentifiedgroupwithaccount","title":"BaseXapiIdentifiedGroupWithAccount","text":"Name Type account BaseXapiAccount member Array<> name string objectType string"},{"location":"features/api/#basexapiidentifiedgroupwithmbox","title":"BaseXapiIdentifiedGroupWithMbox","text":"Name Type mbox string member Array<> name string objectType string"},{"location":"features/api/#basexapiidentifiedgroupwithmboxsha1sum","title":"BaseXapiIdentifiedGroupWithMboxSha1Sum","text":"Name Type mbox_sha1sum string member Array<> name string objectType string"},{"location":"features/api/#basexapiidentifiedgroupwithopenid","title":"BaseXapiIdentifiedGroupWithOpenId","text":"Name Type member Array<> name string objectType string openid string(uri)"},{"location":"features/api/#errordetail","title":"ErrorDetail","text":"Name Type detail string"},{"location":"features/api/#httpvalidationerror","title":"HTTPValidationError","text":"Name Type detail Array<ValidationError>"},{"location":"features/api/#laxobjectfield","title":"LaxObjectField","text":"Name Type id string(uri)"},{"location":"features/api/#laxstatement","title":"LaxStatement","text":"Name Type actor id string(uuid) object LaxObjectField verb LaxVerbField"},{"location":"features/api/#laxverbfield","title":"LaxVerbField","text":"Name Type id string(uri)"},{"location":"features/api/#validationerror","title":"ValidationError","text":"Name Type loc Array<> msg string type string"},{"location":"features/api/#security_schemes","title":"Security schemes","text":"Name Type Scheme Description HTTPBasic http basic"},{"location":"features/backends/","title":"Backends for data storage","text":"

Ralph supports various backends that can be used to read or write data (learning events or arbitrary records). Implemented backends are listed below along with their configuration parameters. If your favourite data storage method is missing, feel free to submit your implementation or get in touch!

"},{"location":"features/backends/#key_concepts","title":"Key concepts","text":"

Each backend has its own parameter requirements. These parameters can be set as command line options or environment variables; the latter is the recommended solution for sensitive data such as service credentials. For example, the os_username (OpenStack user name) parameter of the OpenStack Swift backend can be set as a command line option using swift as the option prefix (and replacing underscores in its name with dashes):

ralph list --backend swift --swift-os-username johndoe # [...] more options\n

Alternatively, this parameter can be set as an environment variable (in upper case, prefixed by the program name, e.g. RALPH_):

export RALPH_BACKENDS__DATA__SWIFT__OS_USERNAME=\"johndoe\"\nralph list --backend swift # [...] more options\n

The general patterns for backend parameters are:

  • --{{ backend_name }}-{{ parameter | underscore_to_dash }} for command line options, and
  • RALPH_BACKENDS__DATA__{{ backend_name | uppercase }}__{{ parameter | uppercase }} for environment variables.
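
For instance, mechanically applying these patterns to the default_index parameter of the Elasticsearch (es) data backend would give the following (the exact option and variable names are assumptions derived from the patterns above; check ralph --help and the backend sections below for the authoritative parameter list):

ralph read --backend es --es-default-index statements\n# or, equivalently, as an environment variable\nexport RALPH_BACKENDS__DATA__ES__DEFAULT_INDEX=\"statements\"\n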
"},{"location":"features/backends/#elasticsearch","title":"Elasticsearch","text":"

The Elasticsearch backend is mostly used for indexing purposes (as a data lake), but it can also be used to fetch indexed data from it.

Elasticsearch data backend default configuration.

Attributes:

  • ALLOW_YELLOW_STATUS (bool): Whether to consider Elasticsearch yellow health status to be ok.
  • CLIENT_OPTIONS (dict): A dictionary of valid options for the Elasticsearch class initialization.
  • DEFAULT_INDEX (str): The default index to use for querying Elasticsearch.
  • HOSTS (str or tuple): The comma-separated list of Elasticsearch nodes to connect to.
  • LOCALE_ENCODING (str): The encoding used for reading/writing documents.
  • POINT_IN_TIME_KEEP_ALIVE (str): The duration for which Elasticsearch should keep a point in time alive.
  • READ_CHUNK_SIZE (int): The default chunk size for reading batches of documents.
  • REFRESH_AFTER_WRITE (str or bool): Whether the Elasticsearch index should be refreshed after the write operation.
  • WRITE_CHUNK_SIZE (int): The default chunk size for writing batches of documents.

"},{"location":"features/backends/#mongodb","title":"MongoDB","text":"

The MongoDB backend is mostly used for indexing purposes (as a data lake), but it can also be used to fetch collections of documents from it.

MongoDB data backend default configuration.

Attributes:

  • CONNECTION_URI (str): The MongoDB connection URI.
  • DEFAULT_DATABASE (str): The MongoDB database to connect to.
  • DEFAULT_COLLECTION (str): The MongoDB database collection to get objects from.
  • CLIENT_OPTIONS (MongoClientOptions): A dictionary of MongoDB client options.
  • LOCALE_ENCODING (str): The locale encoding to use when none is provided.
  • READ_CHUNK_SIZE (int): The default chunk size for reading batches of documents.
  • WRITE_CHUNK_SIZE (int): The default chunk size for writing batches of documents.

"},{"location":"features/backends/#clickhouse","title":"ClickHouse","text":"

The ClickHouse backend can be used as a data lake and to fetch collections of documents from it.

ClickHouse data backend default configuration.

Attributes:

  • HOST (str): ClickHouse server host to connect to.
  • PORT (int): ClickHouse server port to connect to.
  • DATABASE (str): ClickHouse database to connect to.
  • EVENT_TABLE_NAME (str): Table where events live.
  • USERNAME (str): ClickHouse username to connect as (optional).
  • PASSWORD (str): Password for the given ClickHouse username (optional).
  • CLIENT_OPTIONS (ClickHouseClientOptions): A dictionary of valid options for the ClickHouse client connection.
  • LOCALE_ENCODING (str): The locale encoding to use when none is provided.
  • READ_CHUNK_SIZE (int): The default chunk size for reading.
  • WRITE_CHUNK_SIZE (int): The default chunk size for writing.

The ClickHouse client options supported in Ralph can be found in these locations:

  • Python driver specific
  • General ClickHouse client settings
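
As a hedged illustration, dictionary-valued parameters such as CLIENT_OPTIONS can usually be provided as a JSON string when set through an environment variable (the variable name follows the pattern from the Key concepts section, and date_time_input_format is only one example of a ClickHouse client setting):

export RALPH_BACKENDS__DATA__CLICKHOUSE__CLIENT_OPTIONS='{\"date_time_input_format\": \"best_effort\"}'\n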
"},{"location":"features/backends/#ovh_-_log_data_platform_ldp","title":"OVH - Log Data Platform (LDP)","text":"

LDP is a nice service built by OVH on top of Graylog to follow, analyse and store your logs. Learning events (aka tracking logs) can be stored in GELF format using this backend.

Read-only backend

For now the LDP backend is read-only as we consider that it is mostly used to collect primary logs and not as a Ralph target. Feel free to get in touch to prove us wrong, or better: submit your proposal for the write method implementation.

To access OVH\u2019s LDP API, you need to register Ralph as an authorized application and generate an application key, an application secret and a consumer key.

While filling the registration form available at: eu.api.ovh.com/createToken/, be sure to give an appropriate validity time span to your token and allow only GET requests on the /dbaas/logs/* path.

OVH LDP (Log Data Platform) data backend default configuration.

Attributes:

  • APPLICATION_KEY (str): The OVH API application key (AK).
  • APPLICATION_SECRET (str): The OVH API application secret (AS).
  • CONSUMER_KEY (str): The OVH API consumer key (CK).
  • DEFAULT_STREAM_ID (str): The default stream identifier to query.
  • ENDPOINT (str): The OVH API endpoint.
  • READ_CHUNK_SIZE (str): The default chunk size for reading archives.
  • REQUEST_TIMEOUT (int): HTTP request timeout in seconds.
  • SERVICE_NAME (str): The default LDP account name.

For more information about OVH\u2019s API client parameters, please refer to the project\u2019s documentation: github.com/ovh/python-ovh.

"},{"location":"features/backends/#openstack_swift","title":"OpenStack Swift","text":"

Swift is the OpenStack object storage service. This storage backend is fully supported (read and write operations) to stream and store log archives.

Parameters correspond to a standard authentication using OpenStack Keystone service and configuration to work with the target container.

Swift data backend default configuration.

Attributes:

  • AUTH_URL (str): The authentication URL.
  • USERNAME (str): The name of the OpenStack Swift user.
  • PASSWORD (str): The password of the OpenStack Swift user.
  • IDENTITY_API_VERSION (str): The Keystone API version to authenticate to.
  • TENANT_ID (str): The identifier of the tenant of the container.
  • TENANT_NAME (str): The name of the tenant of the container.
  • PROJECT_DOMAIN_NAME (str): The project domain name.
  • REGION_NAME (str): The region where the container is.
  • OBJECT_STORAGE_URL (str): The default storage URL.
  • USER_DOMAIN_NAME (str): The user domain name.
  • DEFAULT_CONTAINER (str): The default target container.
  • LOCALE_ENCODING (str): The encoding used for reading/writing documents.
  • READ_CHUNK_SIZE (str): The default chunk size for reading objects.
  • WRITE_CHUNK_SIZE (str): The default chunk size for writing objects.

"},{"location":"features/backends/#amazon_s3","title":"Amazon S3","text":"

S3 is the Amazon Simple Storage Service. This storage backend is fully supported (read and write operations) to stream and store log archives.

Parameters correspond to a standard authentication with AWS CLI and configuration to work with the target bucket.

S3 data backend default configuration.

Attributes:

  • ACCESS_KEY_ID (str): The access key id for the S3 account.
  • SECRET_ACCESS_KEY (str): The secret key for the S3 account.
  • SESSION_TOKEN (str): The session token for the S3 account.
  • ENDPOINT_URL (str): The endpoint URL of the S3.
  • DEFAULT_REGION (str): The default region used in instantiating the client.
  • DEFAULT_BUCKET_NAME (str): The default bucket name targeted.
  • LOCALE_ENCODING (str): The encoding used for writing dictionaries to objects.
  • READ_CHUNK_SIZE (str): The default chunk size for reading objects.
  • WRITE_CHUNK_SIZE (str): The default chunk size for writing objects.

"},{"location":"features/backends/#file_system","title":"File system","text":"

The file system backend is a dummy template that can be used to develop your own backend. It is a \u201cdummy\u201d backend as it is not intended for practical use (UNIX ls and cat would be more practical).

The only required parameter is the path we want to list or stream content from.
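
As a minimal sketch, assuming the backend is registered under the name fs and using the DEFAULT_DIRECTORY_PATH attribute listed below (the backend name and variable name are assumptions; check ralph list --help for the exact values), listing files could look like:

export RALPH_BACKENDS__DATA__FS__DEFAULT_DIRECTORY_PATH=\"./archives\"\nralph list -b fs\n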

FileSystem data backend default configuration.

Attributes:

  • DEFAULT_DIRECTORY_PATH (str or Path): The default target directory path where to perform list, read and write operations.
  • DEFAULT_QUERY_STRING (str): The default query string to match files for the read operation.
  • LOCALE_ENCODING (str): The encoding used for writing dictionaries to files.
  • READ_CHUNK_SIZE (int): The default chunk size for reading files.
  • WRITE_CHUNK_SIZE (int): The default chunk size for writing files.

"},{"location":"features/backends/#learning_record_store_lrs","title":"Learning Record Store (LRS)","text":"

The LRS backend is used to store and retrieve xAPI statements from various systems that follow the xAPI specification (such as our own Ralph LRS, which can be run from this package). LRS systems are mostly used in e-learning infrastructures.

LRS data backend default configuration.

Attributes:

  • BASE_URL (AnyHttpUrl): LRS server URL.
  • USERNAME (str): Basic auth username for LRS authentication.
  • PASSWORD (str): Basic auth password for LRS authentication.
  • HEADERS (dict): Headers defined for the LRS server connection.
  • LOCALE_ENCODING (str): The encoding used for reading statements.
  • READ_CHUNK_SIZE (int): The default chunk size for reading statements.
  • STATUS_ENDPOINT (str): Endpoint used to check server status.
  • STATEMENTS_ENDPOINT (str): Default endpoint for LRS statements resource.
  • WRITE_CHUNK_SIZE (int): The default chunk size for writing statements.

"},{"location":"features/backends/#websocket","title":"WebSocket","text":"

The WebSocket backend is read-only and can be used to get real-time events.

If you use OVH\u2019s Logs Data Platform (LDP), you can retrieve a WebSocket URI to test your data stream by following instructions from the official documentation.

Websocket data backend default configuration.

Attributes:

  • CLIENT_OPTIONS (dict): A dictionary of valid options for the websocket client connection. See WSClientOptions.
  • URI (str): The URI to connect to.

Client options for websockets.connection.

For more details, see the websockets.connection documentation

Attributes:

  • close_timeout (float): Timeout for completing the closing handshake, in seconds.
  • compression (str): Per-message compression (deflate) is activated by default. Setting it to None disables compression.
  • max_size (int): Maximum size of incoming messages in bytes. Setting it to None disables the limit.
  • max_queue (int): Maximum number of incoming messages in receive buffer. Setting it to None disables the limit.
  • open_timeout (float): Timeout for opening the connection in seconds. Setting it to None disables the timeout.
  • origin (str): Value of the Origin header, for servers that require it.
  • ping_interval (float): Delay between keepalive pings in seconds. Setting it to None disables keepalive pings.
  • ping_timeout (float): Timeout for keepalive pings in seconds. Setting it to None disables timeouts.
  • read_limit (int): High-water mark of read buffer in bytes.
  • user_agent_header (str): Value of the User-Agent request header. It defaults to \u201cPython/x.y.z websockets/X.Y\u201d. Setting it to None removes the header.
  • write_limit (int): High-water mark of write buffer in bytes.
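
As a minimal sketch, reading real-time events could look like the following, assuming the backend is registered under the name ws and that the URI variable follows the naming pattern from the Key concepts section (both are assumptions; check ralph read --help for the exact backend name and options):

export RALPH_BACKENDS__DATA__WS__URI=\"wss://example.com/my-stream\"\nralph read -b ws\n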

"},{"location":"features/models/","title":"Learning statement models","text":"

The learning statement models validation and conversion tools in Ralph empower you to work with an LRS and ensure the quality of xAPI statements. These features not only enhance the integrity of your learning data but also facilitate integration and compliance with industry standards.

This section provides insights into the supported models, their conversion, and validation.

"},{"location":"features/models/#supported_statements","title":"Supported statements","text":"

Learning statement models encompass a wide array of xAPI and OpenEdx statement types, ensuring comprehensive support for your e-learning data.

  1. xAPI statements models:

    • LMS
    • Video
    • Virtual classroom
  2. OpenEdx statements models:

    • Enrollment
    • Navigational
    • Open Response Assessment
    • Peer instruction
    • Problem interaction
    • Textbook interaction
    • Video interaction
"},{"location":"features/models/#statements_validation","title":"Statements validation","text":"

In learning analytics, the validation of statements takes on significant importance. These statements, originating from diverse sources, systems or applications, must align with specific standards, xAPI being the best known. The validation process is essential to ensure that these statements meet the required standards, improving data quality and reliability.

Ralph allows you to automate the validation process in your production stack. OpenEdx related events and xAPI statements are supported.

Warning

For now, validation is only effective for the learning statement models supported by Ralph. Regarding xAPI statements, an issue is open to extend validation to any xAPI statement.

Check out tutorials to test the validation feature:

  • validate with Ralph as a CLI
  • validate with Ralph as a library
"},{"location":"features/models/#statements_conversion","title":"Statements conversion","text":"

Ralph currently supports conversion from OpenEdx learning events to xAPI statements. Here are the currently available conversion sets:

  • edx.course.enrollment.activated → registered to a course
  • edx.course.enrollment.deactivated → unregistered to a course
  • load_video/edx.video.loaded → initialized a video
  • play_video/edx.video.played → played a video
  • pause_video/edx.video.paused → paused a video
  • stop_video/edx.video.stopped → terminated a video
  • seek_video/edx.video.position.changed → seeked in a video

Check out tutorials to test the conversion feature:

  • convert with Ralph as a CLI
  • convert with Ralph as a library
"},{"location":"tutorials/cli/","title":"How to use Ralph as a CLI ?","text":"

WIP.

"},{"location":"tutorials/cli/#prerequisites","title":"Prerequisites","text":"
  • Ralph should be properly installed to be used as a CLI. Follow Installation section for more information
  • [Recommended] To easily manipulate JSON streams, please install jq on your machine
"},{"location":"tutorials/cli/#validate_command","title":"validate command","text":"

In this tutorial, we\u2019ll walk you through the process of using validate command to check the validity of xAPI statements.

"},{"location":"tutorials/cli/#with_an_invalid_xapi_statement","title":"With an invalid xAPI statement","text":"

First, let\u2019s test the validate command with a dummy JSON string.

  • Create in the terminal a dummy statement as follows:
invalid_statement='{\"foo\": \"invalid xapi\"}'\n
  • Run validation on this statement with this command:
echo \"$invalid_statement\" | ralph validate -f xapi \n
  • You should observe the following output from the terminal:
INFO     ralph.cli Validating xapi events (ignore_errors=False | fail-on-unknown=False)\nERROR    ralph.models.validator No matching pydantic model found for input event\nINFO     ralph.models.validator Total events: 1, Invalid events: 1\n
"},{"location":"tutorials/cli/#with_a_valid_xapi_statement","title":"With a valid xAPI statement","text":"

Now, let\u2019s test the validate command with a valid xAPI statement.

This tutorial uses a completed video xAPI statement.

Info

According to the specification, to be valid, an xAPI statement should contain at least the three following fields:

  • an actor (with a correct IFI),
  • a verb (with an id property),
  • an object (with an id property).
  • Create in the terminal a valid xAPI statement as follows:
valid_statement='{\"actor\": {\"mbox\": \"mailto:johndoe@example.com\", \"name\": \"John Doe\"}, \"verb\": {\"id\": \"http://adlnet.gov/expapi/verbs/completed\"}, \"object\": {\"id\": \"http://example.com/video/001-introduction\"}, \"timestamp\": \"2023-10-31T15:30:00Z\"}'\n
  • Run validation on this statement with this command:
echo \"$valid_statement\" | bin/ralph validate -f xapi \n
  • You should observe the following output from the terminal:
INFO     ralph.cli Validating xapi events (ignore_errors=False | fail-on-unknown=False)\nINFO     ralph.models.validator Total events: 1, Invalid events: 1\n
"},{"location":"tutorials/cli/#convert_command","title":"convert command","text":"

In this tutorial, you\u2019ll learn how to convert OpenEdx events into xAPI statements with Ralph.

Note

Please note that this feature is currently only supported for a set of OpenEdx events. When converting OpenEdx events to xAPI statements, always refer to the list of supported event types to ensure accurate and successful conversion.

For this example, let\u2019s choose the page_close OpenEdx event that is converted into a terminated a page xAPI statement.

  • Create in the terminal a page_close OpenEdx event as follows:
edx_statements={\"username\": \"\", \"ip\": \"0.0.0.0\", \"agent\": \"0\", \"host\": \"0\", \"referer\": \"\", \"accept_language\": \"0\", \"context\": {\"course_id\": \"\", \"course_user_tags\": null, \"module\": null, \"org_id\": \"0\", \"path\": \".\", \"user_id\": null}, \"time\": \"2000-01-01T00:00:00\", \"page\": \"http://A.ac/\", \"event_source\": \"browser\", \"session\": \"\", \"event\": \"{}\", \"event_type\": \"page_close\", \"name\": \"page_close\"}\n
  • Convert this statement into a terminated a page statement with this command:
echo \"$edx_statements\" | \\ \nralph convert \\\n    --platform-url \"http://lms-example.com\" \\\n    --uuid-namespace \"ee241f8b-174f-5bdb-bae9-c09de5fe017f\" \\\n    --from edx \\\n    --to xapi | \\\n    jq\n
  • You should observe the following output from the terminal:
INFO     ralph.cli Converting edx events to xapi format (ignore_errors=False | fail-on-unknown=False)\nINFO     ralph.models.converter Total events: 1, Invalid events: 0\n{\n  \"id\": \"8670c7d4-5485-52bd-b10a-a8ae27a51501\",\n  \"actor\": {\n    \"account\": {\n      \"homePage\": \"http://lms-example.com\",\n      \"name\": \"anonymous\"\n    }\n  },\n  \"verb\": {\n    \"id\": \"http://adlnet.gov/expapi/verbs/terminated\"\n  },\n  \"object\": {\n    \"id\": \"http://A.ac/\",\n    \"definition\": {\n      \"type\": \"http://activitystrea.ms/schema/1.0/page\"\n    }\n  },\n  \"timestamp\": \"2000-01-01T00:00:00\",\n  \"version\": \"1.0.0\"\n}\n

\ud83c\udf89 Congratulations! You just have converted an event generated from OpenEdx LMS to a standardised xAPI statement!

Store locally converted statements

To store the converted statements locally on your machine, send the output of the convert command to a JSON file as follows:

echo \"$edx_statements\" | \\ \nralph convert \\\n    --platform-url \"http://lms-example.com\" \\\n    --uuid-namespace \"ee241f8b-174f-5bdb-bae9-c09de5fe017f\" \\\n    --from edx \\\n    --to xapi \\\n    > converted_event.json\n

"},{"location":"tutorials/development_guide/","title":"Development guide","text":"

Welcome to our developer contribution guidelines!

You should know that we would be glad to help you contribute to Ralph! Here\u2019s our Discord to contact us easily.

"},{"location":"tutorials/development_guide/#preparation","title":"Preparation","text":"

Prerequisites

Ralph development environment is containerized with Docker for consistency. Before diving in, ensure you have the following installed:

  • Docker Engine
  • Docker Compose
  • make

Info

In this tutorial, and more generally in other tutorials, we tend to use the Elasticsearch backend. Note that you can do the same with any other LRS backend implemented in Ralph.

To start playing with Ralph, you should first bootstrap the project using:

make bootstrap\n

When bootstrapping the project for the first time, the env.dist template file is copied to the .env file. You may want to edit the generated .env file to set up available backend parameters that will be injected into the running container as environment variables to configure Ralph (see backends documentation):

# Elasticsearch backend\nRALPH_BACKENDS__LRS__ES__HOSTS=http://elasticsearch:9200\nRALPH_BACKENDS__LRS__ES__INDEX=statements\nRALPH_BACKENDS__LRS__ES__TEST_HOSTS=http://elasticsearch:9200\nRALPH_BACKENDS__LRS__ES__TEST_INDEX=test-index\n\n# [...]\n

Default configuration in .env file

Defaults are provided for some environment variables that you can use by uncommenting them.

"},{"location":"tutorials/development_guide/#backends","title":"Backends","text":"

Virtual memory for Elasticsearch

In order to run the Elasticsearch backend locally on GNU/Linux operating systems, ensure that your virtual memory limits are not too low and increase them if needed by typing this command from your terminal (as root or using sudo):

sysctl -w vm.max_map_count=262144

Reference: https://www.elastic.co/guide/en/elasticsearch/reference/master/vm-max-map-count.html

Disk space for Elasticsearch

Ensure that you have at least 10% of available disk space on your machine to run Elasticsearch.

Once configured, start the database container using the following command, substituting [BACKEND] by the backend name (e.g. es for Elasticsearch):

make run-[BACKEND]\n

You can also start other services with the following commands:

make run-es\nmake run-swift\nmake run-mongo\nmake run-clickhouse\n# Start all backends\nmake run-all\n

Now that you have started the Elasticsearch and Swift backends, it\u2019s time to play with them using the Ralph CLI:

We can store a JSON file in the Swift backend:

echo '{\"id\": 1, \"foo\": \"bar\"}' | \\\n    ./bin/ralph write -b swift -t foo.json\n

We can check that we have created a new JSON file in the Swift backend:

bin/ralph list -b swift\n>>> foo.json\n

Let\u2019s read the content of the JSON file and index it in Elasticsearch

bin/ralph read -b swift -t foo.json | \\\n    bin/ralph write -b es\n

We can now check that we have properly indexed the JSON file in Elasticsearch

bin/ralph read -b es\n>>> {\"id\": 1, \"foo\": \"bar\"}\n

"},{"location":"tutorials/development_guide/#wip_lrs","title":"[WIP] LRS","text":""},{"location":"tutorials/development_guide/#tray","title":"Tray","text":"

Ralph is distributed along with its tray (a deployable package for Kubernetes clusters using Arnold). If you intend to work on this tray, please refer to Arnold\u2019s documentation first.

Prerequisites

  • Kubectl (>v.1.23.5): This CLI is used to communicate with the running Kubernetes instance you will use.
  • k3d (>v.5.0.0): This tool is used to set up and run a lightweight Kubernetes cluster, in order to have a local environment (it is required to complete quickstart instructions below to avoid depending on an existing Kubernetes cluster).
  • curl is required by Arnold\u2019s CLI.
  • gnupg to encrypt Ansible vaults passwords and collaborate with your team.
"},{"location":"tutorials/development_guide/#create_a_local_k3d_cluster","title":"Create a local k3d cluster","text":"

To create (or run) a local Kubernetes cluster, we use k3d. The cluster\u2019s bootstrapping should be run via:

make k3d-cluster\n

Running a k3d cluster locally requires that ports 80 and 443 of your machine are available, so that the ingresses created for your project respond properly. If one or both ports are already used by another service running on your machine, the make k3d-cluster command may fail.

You can check that your cluster is running using the k3d cluster command:

k3d cluster list\n

You should expect the following output:

NAME     SERVERS   AGENTS   LOADBALANCER\nralph    1/1       0/0      true\n

As you can see, we are running a single node cluster called ralph.

"},{"location":"tutorials/development_guide/#bootstrap_an_arnold_project","title":"Bootstrap an Arnold project","text":"

Once your Kubernetes cluster is running, you need to create a standard Arnold project describing applications and environments you need to deploy:

make arnold-bootstrap\n

Once bootstrapped, Arnold should have created a group_vars directory.

Run the following command to discover the directory tree.

tree group_vars\n

The output should be as follows:

group_vars\n\u251c\u2500\u2500 common\n\u2514\u2500\u2500 customer\n    \u2514\u2500\u2500 ralph\n        \u251c\u2500\u2500 development\n        \u2502\u00a0\u00a0 \u251c\u2500\u2500 main.yml\n        \u2502\u00a0\u00a0 \u2514\u2500\u2500 secrets\n        \u2502\u00a0\u00a0     \u251c\u2500\u2500 databases.vault.yml\n        \u2502\u00a0\u00a0     \u251c\u2500\u2500 elasticsearch.vault.yml\n        \u2502\u00a0\u00a0     \u2514\u2500\u2500 ralph.vault.yml\n        \u2514\u2500\u2500 main.yml\n\n5 directories, 5 files\n

To create the LRS credentials file, you need to provide a list of accounts allowed to request the LRS in Ralph\u2019s vault:

# Setup your kubernetes environment\nsource .k3d-cluster.env.sh\n\n# Decrypt the vault\nbin/arnold -d -c ralph -e development -- vault -a ralph decrypt\n

Edit the vault file to add a new account for the foo user with the bar password and a relevant scope:

# group_vars/customer/ralph/development/secrets/ralph.vault.yml\n#\n# [...]\n#\n# LRS\nLRS_AUTH:\n  - username: \"foo\"\n    hash: \"$2b$12$lCggI749U6TrzK7Qyr7xGe1KVSAXdPjtkMew.BD6lzIk//T5YSb72\"\n    scopes:\n      - \"all\"\n

The password hash has been generated using bcrypt as explained in the LRS user guide.
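
For reference, such a hash can be generated with a one-liner like the following, a minimal sketch assuming the bcrypt Python package is installed (refer to the LRS user guide for the recommended procedure):

python -c \"import bcrypt; print(bcrypt.hashpw(b'bar', bcrypt.gensalt()).decode())\"\n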

And finally (re-)encrypt Ralph\u2019s vault:

bin/arnold -d -c ralph -e development -- vault -a ralph encrypt\n

You are now ready to create the related Kubernetes Secret while initializing Arnold project in the next step.

"},{"location":"tutorials/development_guide/#prepare_working_namespace","title":"Prepare working namespace","text":"

You are now ready to create required Kubernetes objects to start working on Ralph\u2019s deployment:

make arnold-init\n

At this point an Elasticsearch cluster should be running on your Kubernetes cluster:

kubectl -n development-ralph get -l app=elasticsearch pod\nNAME                                         READY   STATUS      RESTARTS   AGE\nelasticsearch-node-0                         1/1     Running     0          69s\nelasticsearch-node-1                         1/1     Running     0          69s\nelasticsearch-node-2                         1/1     Running     0          69s\nes-index-template-j-221010-09h25m24s-nx5qz   0/1     Completed   0          49s\n

We are now ready to deploy Ralph to Kubernetes!

"},{"location":"tutorials/development_guide/#deploy_code_repeat","title":"Deploy, code, repeat","text":"

To test your local docker image, you need to build it and publish it to the local kubernetes cluster docker registry using the k3d-push Makefile rule:

make k3d-push\n

Note

Each time you modify Ralph\u2019s application or its Docker image, you will need to repeat this step.

Now that your Docker image is published, it\u2019s time to deploy it!

make arnold-deploy\n

To test this deployment, let\u2019s try to make an authenticated request to the LRS:

curl -sLk \\\n  --user foo:bar \\\n  \"https://$(\\\n      kubectl -n development-ralph \\\n      get \\\n      ingress/ralph-app-current \\\n      -o jsonpath='{.spec.rules[0].host}')/whoami\"\n

Let\u2019s also send some test statements:

gunzip -c data/statements.json.gz | \\\nhead -n 100 | \\\njq -s . | \\\ncurl -sLk \\\n  --user foo:bar \\\n  -X POST \\\n  -H \"Content-Type: application/json\" \\\n  -d @- \\\n  \"https://$(\\\n      kubectl -n development-ralph \\\n      get \\\n      ingress/ralph-app-current \\\n      -o jsonpath='{.spec.rules[0].host}')/xAPI/statements/\"\n

Install jq

This example requires the jq command to serialize the request payload (xAPI statements). When dealing with JSON data, we strongly recommend installing it to manipulate JSON from the command line.

"},{"location":"tutorials/development_guide/#perform_arnolds_operations","title":"Perform Arnold\u2019s operations","text":"

If you want to run the bin/arnold script to run specific Arnold commands, you must ensure that your environment is properly set and that Arnold runs in development mode (i.e. using the -d flag):

source .k3d-cluster.env.sh\nbin/arnold -d -c ralph -e development -- vault -a ralph view\n
"},{"location":"tutorials/development_guide/#stop_k3d_cluster","title":"Stop k3d cluster","text":"

When you are done working on the Tray, you can stop the k3d cluster using the k3d-stop helper:

make k3d-stop\n
"},{"location":"tutorials/development_guide/#after_your_development","title":"After your development","text":""},{"location":"tutorials/development_guide/#testing","title":"Testing","text":"

To run tests on your code, either use the test Make target or the bin/pytest script to pass specific arguments to the test runner:

# Run all tests\nmake test\n\n# Run pytest with options\nbin/pytest -x -k mixins\n\n# Run pytest with options and more debugging logs\nbin/pytest tests/api -x -vvv -s --log-level=DEBUG -k mixins\n
"},{"location":"tutorials/development_guide/#linting","title":"Linting","text":"

To lint your code, either use the lint meta target or one of the linting tools we use:

# Run all linters\nmake lint\n\n# Run ruff linter\nmake lint-ruff\n\n# Run ruff linter and resolve fixable errors\nmake lint-ruff-fix\n\n# List available linters\nmake help | grep lint-\n
"},{"location":"tutorials/development_guide/#documentation","title":"Documentation","text":"

In case you need to document your code, use the following targets:

# Build documentation site\nmake docs-build\n\n# Run mkdocs live server for dev docs\nmake docs-serve\n
"},{"location":"tutorials/helm/","title":"Ralph Helm chart","text":"

Ralph LRS is distributed as a Helm chart in the openfuncharts OCI repository on DockerHub.

"},{"location":"tutorials/helm/#setting_environment_values","title":"Setting environment values","text":"

All default values are in the values.yaml file. With Helm, you can extend the values file: there is no need to copy/paste all the default values. You can create an environment values file, e.g. custom-values.yaml and only set needed customizations.

All sensitive environment values, needed for Ralph to work, are expected to be in an external Secret Kubernetes object. An example manifest is provided in the ralph-env-secret.yaml file here that you can adapt to fit your needs.

All other non-sensitive environment values, also needed for Ralph to work, are expected to be in an external ConfigMap Kubernetes object. An example manifest is provided in the ralph-env-cm.yaml file here that you can adapt to fit your needs.

"},{"location":"tutorials/helm/#creating_authentication_secret","title":"Creating authentication secret","text":"

Ralph stores user credentials in an external Secret Kubernetes object. An example authentication file, auth-demo.json, is provided here, which you can take inspiration from. Refer to the LRS guide for creating user credentials.

"},{"location":"tutorials/helm/#reviewing_manifest","title":"Reviewing manifest","text":"

To generate and review your Helm-generated manifest, run the following command under ./src/helm:

helm template oci://registry-1.docker.io/openfuncharts/ralph\n
"},{"location":"tutorials/helm/#installing_the_chart","title":"Installing the chart","text":"

Ralph Helm chart is distributed on DockerHub, and you can install it with:

helm install RELEASE_NAME oci://registry-1.docker.io/openfuncharts/ralph\n

Tips:

  • use --values to pass an env values file to extend and/or replace the default values
  • --set var=value to replace one var/value
  • --dry-run to verify your manifest before deploying
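
For example, a dry-run combining these options could look like the following (custom-values.yaml and the image.tag key are illustrative placeholders, not values guaranteed by the chart):

helm install RELEASE_NAME oci://registry-1.docker.io/openfuncharts/ralph \\\n  --values custom-values.yaml \\\n  --set image.tag=latest \\\n  --dry-run\n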
"},{"location":"tutorials/helm/#tutorial_deploying_ralph_lrs_on_a_local_cluster","title":"Tutorial: deploying Ralph LRS on a local cluster","text":"

This tutorial aims at deploying Ralph LRS on a local Kubernetes cluster using Helm. In this tutorial, you will learn to:

  • run and configure a small Kubernetes cluster on your machine,
  • deploy a data lake that stores learning records: we choose Elasticsearch,
  • deploy Ralph LRS (Learning Records Store) that receives and sends learning records in xAPI,
"},{"location":"tutorials/helm/#requirements","title":"Requirements","text":"
  • curl, the CLI to make HTTP requests.
  • jq, the JSON data Swiss-Knife.
  • kubectl, the Kubernetes CLI.
  • helm, the package manager for Kubernetes.
  • minikube, a lightweight kubernetes distribution to work locally on the project.
"},{"location":"tutorials/helm/#bootstrapping_a_local_cluster","title":"Bootstrapping a local cluster","text":"

Let\u2019s begin by running a local cluster with Minikube, on which we will deploy Ralph.

# Start a local kubernetes cluster\nminikube start\n

We will now create our own Kubernetes namespace to work on:

# This is our namespace\nexport K8S_NAMESPACE=\"learning-analytics\"\n\n# Check your namespace value\necho ${K8S_NAMESPACE}\n\n# Create the namespace\nkubectl create namespace ${K8S_NAMESPACE}\n\n# Activate the namespace\nkubectl config set-context --current --namespace=${K8S_NAMESPACE}\n
"},{"location":"tutorials/helm/#deploying_the_data_lake_elasticsearch","title":"Deploying the data lake: Elasticsearch","text":"

In its recent releases, Elastic recommends deploying its services using Custom Resource Definitions (CRDs) installed via its official Helm chart. We will first install the Elasticsearch (ECK) operator cluster-wide:

# Add elastic official helm charts repository\nhelm repo add elastic https://helm.elastic.co\n\n# Update available charts list\nhelm repo update\n\n# Install the ECK operator\nhelm install elastic-operator elastic/eck-operator -n elastic-system --create-namespace\n

Now that CRDs are already deployed cluster-wide, we can deploy an Elasticsearch cluster. To help you in this task, we provide an example manifest, data-lake.yml, that deploys a two-node Elasticsearch \u201ccluster\u201d. Adapt it to match your needs, then apply it with:

kubectl apply -f data-lake.yml\n

Once applied, your Elasticsearch pods should be running. You can check this using the following command:

kubectl get pods -w\n

We expect to see two pods called data-lake-es-default-0 and data-lake-es-default-1.

When our Elasticsearch cluster is up (this can take a few minutes), you can create the Elasticsearch index that will be used to store learning traces (xAPI statements):

# Store elastic user password\nexport ELASTIC_PASSWORD=\"$(kubectl get secret data-lake-es-elastic-user -o jsonpath=\"{.data.elastic}\" | base64 -d)\"\n\n# Execute an index creation request in the elasticsearch container\nkubectl exec data-lake-es-default-0 --container elasticsearch -- \\\n    curl -ks -X PUT \"https://elastic:${ELASTIC_PASSWORD}@localhost:9200/statements?pretty\"\n

Our Elasticsearch cluster is all set. In the next section, we will now deploy Ralph, our LRS.

"},{"location":"tutorials/helm/#deploy_the_lrs_ralph","title":"Deploy the LRS: Ralph","text":"

First and foremost, we should create a Secret object containing the user credentials file. We provide an example authentication file auth-demo.json that you can take inspiration from. We can create a secret object directly from the file with the command:

kubectl create secret generic ralph-auth-secret \\\n    --from-file=auth.json=auth-demo.json\n

Secondly, we should create two objects containing environment values necessary for Ralph:

  • a Secret containing sensitive environment variables such as passwords, tokens etc;
  • a ConfigMap containing all other non-sensitive environment variables.

We provide two example manifests (ralph-env-secret.yaml and ralph-env-cm.yaml) that you can adapt to fit your needs.

For this tutorial, we only need to replace the <PASSWORD> tag in the Secret manifest with the actual password of the elastic user, using the command:

sed -i -e \"s|<PASSWORD>|$ELASTIC_PASSWORD|g\" ralph-env-secret.yaml\n

We can now apply both manifests, to create a ConfigMap and a Secret object in our local cluster:

# Create Secret object\nkubectl apply -f ralph-env-secret.yaml\n\n# Create ConfigMap object\nkubectl apply -f ralph-env-cm.yaml\n

We can now deploy Ralph:

helm install lrs oci://registry-1.docker.io/openfuncharts/ralph \\\n  --values development.yaml\n

One can check if the server is running by opening a network tunnel to the service using the port-forward sub-command:

kubectl port-forward svc/lrs-ralph 8080:8080\n

And then send a request to the server using this tunnel:

curl --user admin:password localhost:8080/whoami\n

We expect a valid JSON response describing the user you are using for this request.

If everything went well, we can send 22k xAPI statements to the LRS using:

gunzip -c ../../data/statements.jsonl.gz | \\\n  sed \"s/@timestamp/timestamp/g\" | \\\n  jq -s . | \\\n  curl -Lk \\\n    --user admin:password \\\n    -X POST \\\n    -H \"Content-Type: application/json\" \\\n    http://localhost:8080/xAPI/statements/ -d @-\n

Congrats \ud83c\udf89

"},{"location":"tutorials/helm/#go_further","title":"Go further","text":"

Now that the LRS is running, we can go further and deploy the dashboard suite Warren. Refer to the tutorial of the Warren Helm chart.

"},{"location":"tutorials/library/","title":"How to use Ralph as a library ?","text":"

WIP.

"},{"location":"tutorials/library/#validate_method","title":"validate method","text":"

WIP.

"},{"location":"tutorials/library/#convert_method","title":"convert method","text":"

WIP.

"},{"location":"tutorials/lrs/","title":"How to use Ralph LRS?","text":"

This tutorial shows you how to run Ralph LRS, step by step.

Warning

Ralph LRS will be executed locally for demonstration purposes. If you want to deploy Ralph LRS on a production server, please refer to the deployment guide.

Ralph LRS is based on FastAPI. In this tutorial, we will run the server manually with Uvicorn, but other alternatives exist (Hypercorn, Daphne).

Prerequisites

Some tools are required to run the commands of this tutorial. Make sure they are installed first:

  • Ralph package with CLI optional dependencies, e.g. pip install ralph-malph[cli] (check the CLI tutorial)
  • Docker Compose
  • curl or httpie
"},{"location":"tutorials/lrs/backends/","title":"Backends","text":"

Ralph LRS is built to be used with a database instead of writing learning records in a local file.

Ralph LRS supports the following databases:

  • Elasticsearch
  • Mongo
  • ClickHouse

Let\u2019s add the service of your choice to the docker-compose.yml file:

Elasticsearch (docker-compose.yml)
version: \"3.9\"\n\nservices:\n  db:\n    image: elasticsearch:8.1.0\n    environment:\n      discovery.type: single-node\n      xpack.security.enabled: \"false\"\n    ports:\n      - \"9200:9200\"\n    mem_limit: 2g\n    ulimits:\n      memlock:\n        soft: -1\n        hard: -1\n    healthcheck:\n      test: curl --fail http://localhost:9200/_cluster/health?wait_for_status=green || exit 1\n      interval: 1s\n      retries: 60\n\n  lrs:\n    image: fundocker/ralph:latest\n    environment:\n      RALPH_APP_DIR: /app/.ralph\n      RALPH_RUNSERVER_BACKEND: es\n      RALPH_BACKENDS__LRS__ES__HOSTS: http://db:9200\n    ports:\n      - \"8100:8100\"\n    command:\n      - \"uvicorn\"\n      - \"ralph.api:app\"\n      - \"--proxy-headers\"\n      - \"--host\"\n      - \"0.0.0.0\"\n      - \"--port\"\n      - \"8100\"\n    volumes:\n      - .ralph:/app/.ralph\n

We can now start the database service and wait for it to be up and healthy:

docker compose up -d --wait db\n

Before using Elasticsearch, we need to create an index, which we call statements for this example:

curl / HTTPie
curl -X PUT http://localhost:9200/statements\n
http PUT :9200/statements\n

Mongo (docker-compose.yml)

version: \"3.9\"\n\nservices:\n  db:\n    image: mongo:5.0.9\n    ports:\n      - \"27017:27017\"\n    healthcheck:\n      test: mongosh --eval 'db.runCommand(\"ping\").ok' localhost:27017/test --quiet\n      interval: 1s\n      retries: 60\n\n  lrs:\n    image: fundocker/ralph:latest\n    environment:\n      RALPH_APP_DIR: /app/.ralph\n      RALPH_RUNSERVER_BACKEND: mongo\n      RALPH_BACKENDS__LRS__MONGO__CONNECTION_URI: mongodb://db:27017\n    ports:\n      - \"8100:8100\"\n    command:\n      - \"uvicorn\"\n      - \"ralph.api:app\"\n      - \"--proxy-headers\"\n      - \"--host\"\n      - \"0.0.0.0\"\n      - \"--port\"\n      - \"8100\"\n    volumes:\n      - .ralph:/app/.ralph\n
We can now start the database service and wait for it to be up and healthy:
docker compose up -d --wait db\n

ClickHouse (docker-compose.yml)

version: \"3.9\"\n\nservices:\n  db:\n    image: clickhouse/clickhouse-server:23.1.1.3077-alpine\n    environment:\n      CLICKHOUSE_DB: xapi\n      CLICKHOUSE_DEFAULT_ACCESS_MANAGEMENT: 1\n    ports:\n      - 8123:8123\n      - 9000:9000\n    # ClickHouse needs to maintain a lot of open files, so they\n    # suggest running the container with increased limits:\n    # https://hub.docker.com/r/clickhouse/clickhouse-server/#!\n    ulimits:\n      nofile:\n        soft: 262144\n        hard: 262144\n    healthcheck:\n      test:  wget --no-verbose --tries=1 --spider http://localhost:8123/ping || exit 1\n      interval: 1s\n      retries: 60\n\n  lrs:\n    image: fundocker/ralph:latest\n    environment:\n      RALPH_APP_DIR: /app/.ralph\n      RALPH_RUNSERVER_BACKEND: clickhouse\n      RALPH_BACKENDS__LRS__CLICKHOUSE__HOST: db\n      RALPH_BACKENDS__LRS__CLICKHOUSE__PORT: 8123\n    ports:\n      - \"8100:8100\"\n    command:\n      - \"uvicorn\"\n      - \"ralph.api:app\"\n      - \"--proxy-headers\"\n      - \"--host\"\n      - \"0.0.0.0\"\n      - \"--port\"\n      - \"8100\"\n    volumes:\n      - .ralph:/app/.ralph\n
We can now start the database service and wait for it to be up and healthy:
docker compose up -d --wait db\n

Before using ClickHouse, we need to create a table in the xapi database, which we call xapi_events_all:

curl / HTTPie
  echo \"CREATE TABLE xapi.xapi_events_all (\n    event_id UUID NOT NULL,\n    emission_time DateTime64(6) NOT NULL,\n    event String NOT NULL\n    )\n    ENGINE MergeTree ORDER BY (emission_time, event_id)\n    PRIMARY KEY (emission_time, event_id)\" | \\\n  curl --data-binary @- \"http://localhost:8123/\"\n
  echo \"CREATE TABLE xapi.xapi_events_all (\n    event_id UUID NOT NULL,\n    emission_time DateTime64(6) NOT NULL,\n    event String NOT NULL\n    )\n    ENGINE MergeTree ORDER BY (emission_time, event_id)\n    PRIMARY KEY (emission_time, event_id)\" | \\\n  http :8123\n

Then we can start Ralph LRS:

docker compose up -d lrs\n

We can finally send some xAPI statements to Ralph LRS:

curl / HTTPie
curl -sL https://github.com/openfun/ralph/raw/master/data/statements.json.gz | \\\ngunzip | \\\nhead -n 100 | \\\njq -s . | \\\ncurl \\\n  --user janedoe:supersecret \\\n  -H \"Content-Type: application/json\" \\\n  -X POST \\\n  -d @- \\\n  \"http://localhost:8100/xAPI/statements\"\n
curl -sL https://github.com/openfun/ralph/raw/master/data/statements.json.gz | \\\ngunzip | \\\nhead -n 100 | \\\njq -s . | \\\nhttp -a janedoe:supersecret POST :8100/xAPI/statements\n

And fetch them back:

curl / HTTPie
curl \\\n  --user janedoe:supersecret \\\n  -X GET \\\n  \"http://localhost:8100/xAPI/statements\"\n
http -a janedoe:supersecret :8100/xAPI/statements\n
"},{"location":"tutorials/lrs/first-steps/","title":"First steps","text":"

Ralph LRS is distributed as a Docker image on DockerHub, following the format: fundocker/ralph:<release version | latest>.

Let\u2019s dive straight in and create a docker-compose.yml file:

docker-compose.yml
version: \"3.9\"\n\nservices:\n\n  lrs:\n    image: fundocker/ralph:latest\n    environment:\n      RALPH_APP_DIR: /app/.ralph\n      RALPH_RUNSERVER_BACKEND: fs\n    ports:\n      - \"8100:8100\"\n    command:\n      - \"uvicorn\"\n      - \"ralph.api:app\"\n      - \"--proxy-headers\"\n      - \"--workers\"\n      - \"1\"\n      - \"--host\"\n      - \"0.0.0.0\"\n      - \"--port\"\n      - \"8100\"\n    volumes:\n      - .ralph:/app/.ralph\n

For now, we are using the fs (File System) backend, meaning that Ralph LRS will store learning records in local files.

First, we need to manually create the .ralph directory alongside the docker-compose.yml file with the command:

mkdir .ralph\n

We can then run Ralph LRS from a terminal with the command:

docker compose up -d lrs\n

Ralph LRS server should be up and running!

We can request the whoami endpoint to check if the user is authenticated. On success, the endpoint returns the username and permission scopes.

curl / HTTPie

curl http://localhost:8100/whoami\n
{\"detail\":\"Invalid authentication credentials\"}% \n

http :8100/whoami\n
HTTP/1.1 401 Unauthorized\ncontent-length: 47\ncontent-type: application/json\ndate: Mon, 06 Nov 2023 15:37:32 GMT\nserver: uvicorn\nwww-authenticate: Basic\n\n{\n    \"detail\": \"Invalid authentication credentials\"\n}\n

If you\u2019ve made it this far, congrats! \ud83c\udf89

You\u2019ve successfully deployed the Ralph LRS and got a response to your request!

Let\u2019s shutdown the Ralph LRS server with the command docker compose down and set up authentication.

"},{"location":"tutorials/lrs/forwarding/","title":"Forwarding to another LRS","text":"

The Ralph LRS server can be configured to forward xAPI statements it receives to other LRSs. Statement forwarding enables the Total Learning Architecture and allows systems containing multiple LRSs to share data.

To configure statement forwarding, create a .env file in the current directory and define the RALPH_XAPI_FORWARDINGS variable there, or define it directly as an environment variable.

The value of the RALPH_XAPI_FORWARDINGS variable should be a JSON encoded list of dictionaries where each dictionary defines a forwarding configuration and consists of the following key/value pairs:

  • is_active (boolean): specifies whether or not this forwarding configuration should take effect.
  • url (URL): specifies the endpoint URL where forwarded statements should be sent.
  • basic_username (string): specifies the basic auth username.
  • basic_password (string): specifies the basic auth password.
  • max_retries (number): specifies the number of times a failed forwarding request should be retried.
  • timeout (number): specifies the duration in seconds of network inactivity leading to a timeout.

Warning

For a forwarding configuration to be valid, all key/value pairs must be defined.

Example of a valid forwarding configuration:

.env
RALPH_XAPI_FORWARDINGS='\n[\n  {\n    \"is_active\": true,\n    \"url\": \"http://lrs1.example.com/xAPI/statements/\",\n    \"basic_username\": \"admin1@example.com\",\n    \"basic_password\": \"PASSWORD1\",\n    \"max_retries\": 1,\n    \"timeout\": 5\n  },\n  {\n    \"is_active\": true,\n    \"url\": \"http://lrs2.example.com/xAPI/statements/\",\n    \"basic_username\": \"admin2@example.com\",\n    \"basic_password\": \"PASSWORD2\",\n    \"max_retries\": 5,\n    \"timeout\": 0.2\n  }\n]\n'\n
"},{"location":"tutorials/lrs/multitenancy/","title":"Multitenancy","text":"

By default, all authenticated users have full read and write access to the server. Ralph LRS implements the Authority mechanism from the xAPI specification to restrict this behavior.

"},{"location":"tutorials/lrs/multitenancy/#filtering_results_by_authority_multitenancy","title":"Filtering results by authority (multitenancy)","text":"

In Ralph LRS, all incoming statements are assigned an authority (or ownership) derived from the user that makes the request. You may restrict read access to each user\u2019s \u201cown\u201d statements (thus enabling multitenancy) by setting the following environment variable:

.env
RALPH_LRS_RESTRICT_BY_AUTHORITY=True # Default: False\n

Warning

Two accounts with different credentials may share the same authority, meaning they can access the same statements. It is the administrator\u2019s responsibility to ensure that authority is properly assigned.

Info

If not using \u201cscopes\u201d, or for users with limited \u201cscopes\u201d, enabling this option makes the ?mine=True option implicit when fetching statements.
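
As an illustration (a hedged sketch, not an official snippet), with this option enabled the following request only returns statements whose authority matches the requesting user (here janedoe, the user from the HTTP Basic Authentication tutorial), whether or not the mine parameter is passed explicitly:

# Hedged illustration: fetching statements restricted to the requesting user's authority.\nimport requests\n\nresponse = requests.get(\n    \"http://localhost:8100/xAPI/statements\",\n    params={\"mine\": \"true\"},  # implicit anyway when RALPH_LRS_RESTRICT_BY_AUTHORITY=True\n    auth=(\"janedoe\", \"supersecret\"),\n    timeout=10,\n)\nprint(response.json())  # only statements whose authority matches this user's agent\n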

"},{"location":"tutorials/lrs/multitenancy/#scopes","title":"Scopes","text":"

In Ralph, users are assigned scopes which may be used to restrict endpoint access or functionalities. You may enable this option by setting the following environment variable:

.env
RALPH_LRS_RESTRICT_BY_SCOPES=True # Default: False\n

Valid scopes are a slight variation on those proposed by the xAPI specification:

  • statements/write
  • statements/read/mine
  • statements/read
  • state/write
  • state/read
  • define
  • profile/write
  • profile/read
  • all/read
  • all
"},{"location":"tutorials/lrs/sentry/","title":"Sentry","text":"

Ralph provides Sentry integration to monitor its LRS server and its CLI. To activate Sentry integration, one should define the following environment variables:

.env
RALPH_SENTRY_DSN={PROTOCOL}://{PUBLIC_KEY}:{SECRET_KEY}@{HOST}{PATH}/{PROJECT_ID}\nRALPH_EXECUTION_ENVIRONMENT=development\n

The Sentry DSN (Data Source Name) can be found in your project settings from the Sentry application. The execution environment should reflect the environment Ralph has been deployed in (e.g. production).

You may also want to monitor the performance of Ralph by configuring the CLI and LRS traces sample rates:

.env
RALPH_SENTRY_CLI_TRACES_SAMPLE_RATE=0.1\nRALPH_SENTRY_LRS_TRACES_SAMPLE_RATE=0.3\n

Sample rate

A sample rate of 1.0 means 100% of transactions are sent to Sentry, while 0.1 means only 10% are.
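
For intuition, here is a hedged illustration of what such a rate means when passed to the Sentry SDK (Ralph wires this up internally from the RALPH_SENTRY_* variables above; this is not its actual initialization code):

# Hedged illustration: a traces sample rate handed to the Sentry SDK.\nimport sentry_sdk\n\nsentry_sdk.init(\n    dsn=\"https://PUBLIC_KEY@HOST/PROJECT_ID\",  # placeholder DSN\n    environment=\"development\",\n    traces_sample_rate=0.3,  # roughly 30% of transactions are sent to Sentry\n)\n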

If you want to reduce noisy transactions (e.g. in a Kubernetes cluster), you can ignore health-check-related ones:

.env
RALPH_SENTRY_IGNORE_HEALTH_CHECKS=True\n
"},{"location":"tutorials/lrs/authentication/","title":"Authentication","text":"

The API server supports the following authentication methods:

  • HTTP basic authentication
  • OpenID Connect authentication on top of OAuth2.0

Either one or both can be enabled for Ralph LRS using the environment variable RALPH_RUNSERVER_AUTH_BACKENDS:

RALPH_RUNSERVER_AUTH_BACKENDS=basic,oidc\n
"},{"location":"tutorials/lrs/authentication/basic/","title":"HTTP Basic Authentication","text":"

The default method for securing the Ralph API server is HTTP Basic Authentication. For this, we need to create a user in Ralph LRS.

"},{"location":"tutorials/lrs/authentication/basic/#creating_user_credentials","title":"Creating user credentials","text":"

To create new user credentials, the Ralph CLI provides a dedicated command:

Ralph CLI / Docker Compose
ralph auth \\\n    --write-to-disk \\\n    --username janedoe \\\n    --password supersecret \\\n    --scope statements/write \\\n    --scope statements/read \\\n    --agent-ifi-mbox mailto:janedoe@example.com\n
docker compose run --rm lrs \\\n  ralph auth \\\n    --write-to-disk \\\n    --username janedoe \\\n    --password supersecret \\\n    --scope statements/write \\\n    --scope statements/read \\\n    --agent-ifi-mbox mailto:janedoe@example.com\n

Tip

You can either display the help with ralph auth --help or check the CLI tutorial here.

This command updates your credentials file with the new janedoe user. Here is the file that has been created by the ralph auth command:

auth.json
[                                                                               \n  {                                                                             \n    \"agent\": {                                                                  \n      \"mbox\": \"mailto:janedoe@example.com\",                                     \n      \"objectType\": \"Agent\",                                                    \n      \"name\": null                                                              \n    },                                                                          \n    \"scopes\": [                                                                 \n      \"statements/write\",                                                           \n      \"statements/read\"\n    ],                                                                          \n    \"hash\": \"$2b$12$eQmMF/7ALdNuksL4lkI.NuTibNjKLd0fw2Xe.FZqD0mNkgnnjLLPa\",     \n    \"username\": \"janedoe\"                                                       \n  }                                                                             \n] \n

Alternatively, the credentials file can also be created manually. It is expected to be a valid JSON file. Its location is specified by the RALPH_AUTH_FILE configuration value.

Tip

By default, Ralph LRS looks for the auth.json file in the application directory (see click documentation for details).

The expected format is a list of entries (JSON objects) each containing:

  • the username
  • the user\u2019s hashed+salted password
  • the scopes they can access
  • an agent object used to represent the user in the LRS.

Info

The agent is constrained by LRS specifications, and must use one of four valid Inverse Functional Identifiers.

"},{"location":"tutorials/lrs/authentication/basic/#making_a_get_request","title":"Making a GET request","text":"

After changing the docker-compose.yml file as follows: docker-compose.yml

version: \"3.9\"\n\nservices:\n\n  lrs:\n    image: fundocker/ralph:latest\n    environment:\n      RALPH_APP_DIR: /app/.ralph\n      RALPH_RUNSERVER_BACKEND: fs\n      RALPH_RUNSERVER_AUTH_BACKENDS: basic\n    ports:\n      - \"8100:8100\"\n    command:\n      - \"uvicorn\"\n      - \"ralph.api:app\"\n      - \"--proxy-headers\"\n      - \"--workers\"\n      - \"1\"\n      - \"--host\"\n      - \"0.0.0.0\"\n      - \"--port\"\n      - \"8100\"\n    volumes:\n      - .ralph:/app/.ralph\n
and running the Ralph LRS with:

docker compose up -d lrs\n

we can request the whoami endpoint again, but this time sending our username and password through Basic Auth:

curl / HTTPie

curl --user janedoe:supersecret http://localhost:8100/whoami\n
{\"agent\":{\"mbox\":\"mailto:janedoe@example.com\",\"objectType\":\"Agent\",\"name\":null},\"scopes\":[\"statements/read\",\"statements/write\"]}\n

http -a janedoe:supersecret :8100/whoami \n
HTTP/1.1 200 OK\ncontent-length: 107\ncontent-type: application/json\ndate: Tue, 07 Nov 2023 17:32:31 GMT\nserver: uvicorn\n\n{\n    \"agent\": {\n        \"mbox\": \"mailto:janedoe@example.com\",\n        \"name\": null,\n        \"objectType\": \"Agent\"\n    },\n    \"scopes\": [\n        \"statements/read\",\n        \"statements/write\"\n    ]\n}\n

Congrats! \ud83c\udf89 You have been successfully authenticated!

HTTP Basic auth caching

HTTP Basic auth implementation uses the secure and standard bcrypt algorithm to hash/salt passwords before storing them. This implementation comes with a performance cost.

To speed up requests, credentials are stored in an LRU cache with a \u201cTime To Live\u201d.

To configure this cache, you can define the following environment variables:

  • the maximum number of entries in the cache. Select a value greater than the maximum number of individual user credentials, for better performance. Defaults to 100.

RALPH_AUTH_CACHE_MAX_SIZE=100\n
  • the \u201cTime To Live\u201d of the cache entries, in seconds. Defaults to 3600s.

RALPH_AUTH_CACHE_TTL=3600\n
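
To give an idea of the mechanism, here is a minimal sketch of a size- and TTL-bounded cache built with the cachetools library (a Ralph dependency); the names below are illustrative and do not reflect Ralph's internals:

# Hedged sketch mirroring RALPH_AUTH_CACHE_MAX_SIZE and RALPH_AUTH_CACHE_TTL.\nfrom cachetools import TTLCache\n\ncredentials_cache = TTLCache(maxsize=100, ttl=3600)\ncredentials_cache[\"janedoe\"] = {\"scopes\": [\"statements/read\", \"statements/write\"]}\nprint(credentials_cache.get(\"janedoe\"))  # cache hit until the TTL expires, then None\n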
"},{"location":"tutorials/lrs/authentication/oidc/","title":"OpenID Connect authentication","text":"

Ralph LRS also supports OpenID Connect on top of OAuth 2.0 for authentication and authorization.

To enable OpenID Connect authentication mode, we should change the RALPH_RUNSERVER_AUTH_BACKENDS environment variable to oidc and define the RALPH_RUNSERVER_AUTH_OIDC_ISSUER_URI environment variable with the identity provider\u2019s Issuer Identifier URI, as follows:

RALPH_RUNSERVER_AUTH_BACKENDS=oidc\nRALPH_RUNSERVER_AUTH_OIDC_ISSUER_URI=http://{provider_host}:{provider_port}/auth/realms/{realm_name}\n

This address must be accessible to the LRS on startup as it will perform OpenID Connect Discovery to retrieve public keys and other information about the OpenID Connect environment.
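
Concretely, the discovery step boils down to fetching the issuer's well-known configuration document. Here is a hedged illustration of that request (standard OpenID Connect, not Ralph-specific code), using the Keycloak realm from the example below:

# Hedged illustration: OpenID Connect Discovery against the configured issuer URI.\nimport requests\n\nissuer = \"http://localhost:8080/auth/realms/fun-mooc\"  # example Keycloak realm\nconfig = requests.get(f\"{issuer}/.well-known/openid-configuration\", timeout=10).json()\nprint(config[\"jwks_uri\"])  # where the public keys used to verify tokens are published\n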

It is also strongly recommended to set the optional RALPH_RUNSERVER_AUTH_OIDC_AUDIENCE environment variable to the origin address of Ralph LRS itself (e.g. \u201chttp://localhost:8100\u201d) to enable verification that a given token was issued specifically for that Ralph LRS.

"},{"location":"tutorials/lrs/authentication/oidc/#identity_providers","title":"Identity Providers","text":"

OpenID Connect support is currently developed and tested against Keycloak but may work with other identity providers that implement the specification.

"},{"location":"tutorials/lrs/authentication/oidc/#an_example_with_keycloak","title":"An example with Keycloak","text":"

The Learning analytics playground repository contains a Docker Compose file and configuration for a demonstration instance of Keycloak with a ralph client.

First, we should stop the Ralph LRS server (if it\u2019s still running):

docker compose down\n

We can clone the learning-analytics-playground repository:

git clone git@github.com:openfun/learning-analytics-playground\n

And then bootstrap the project:

cd learning-analytics-playground/\nmake bootstrap\n

After a couple of minutes, the playground containers should be up and running.

Create another docker compose file, let\u2019s call it docker-compose.oidc.yml, with the following content: docker-compose.oidc.yml

version: \"3.9\"\n\nservices:\n\n  lrs:\n    image: fundocker/ralph:latest\n    environment:\n      RALPH_APP_DIR: /app/.ralph\n      RALPH_RUNSERVER_AUTH_BACKENDS: oidc\n      RALPH_RUNSERVER_AUTH_OIDC_ISSUER_URI: http://learning-analytics-playground-keycloak-1:8080/auth/realms/fun-mooc\n      RALPH_RUNSERVER_BACKEND: fs\n    ports:\n      - \"8100:8100\"\n    command:\n      - \"uvicorn\"\n      - \"ralph.api:app\"\n      - \"--proxy-headers\"\n      - \"--workers\"\n      - \"1\"\n      - \"--host\"\n      - \"0.0.0.0\"\n      - \"--port\"\n      - \"8100\"\n    volumes:\n      - .ralph:/app/.ralph\n    networks:\n      - ralph\n\nnetworks:\n  ralph:\n    external: true\n

Again, we need to create the .ralph directory:

mkdir .ralph\n

Then we can start the lrs service:

docker compose -f docker-compose.oidc.yml up -d lrs\n

Now that both Keycloak and Ralph LRS server are up and running, we should be able to get the access token from Keycloak with the command:

curl / HTTPie
curl -X POST \\\n  -d \"grant_type=password\" \\\n  -d \"client_id=ralph\" \\\n  -d \"client_secret=bcef3562-730d-4575-9e39-63e185f99bca\" \\\n  -d \"username=ralph_admin\" \\\n  -d \"password=funfunfun\" \\\n  http://localhost:8080/auth/realms/fun-mooc/protocol/openid-connect/token\n
{\"access_token\":\"<access token content>\",\"expires_in\":300,\"refresh_expires_in\":1800,\"refresh_token\":\"<refresh token content>\",\"token_type\":\"Bearer\",\"not-before-policy\":0,\"session_state\":\"0889b3a5-d742-45fb-98b3-20e967960e74\",\"scope\":\"email profile\"} \n
http -f POST \\\n  :8080/auth/realms/fun-mooc/protocol/openid-connect/token \\\n  grant_type=password \\\n  client_id=ralph \\\n  client_secret=bcef3562-730d-4575-9e39-63e185f99bca \\\n  username=ralph_admin \\\n  password=funfunfun\n
HTTP/1.1 200 OK\n...\n{\n    \"access_token\": \"<access token content>\",\n    \"expires_in\": 300,\n    \"not-before-policy\": 0,\n    \"refresh_expires_in\": 1800,\n    \"refresh_token\": \"<refresh token content>\",\n    \"scope\": \"email profile\",\n    \"session_state\": \"1e826fa2-b4b3-42bf-837f-158fe9d5e1e5\",\n    \"token_type\": \"Bearer\"\n}\n

With this access token, we can now make a request to the Ralph LRS server:

curl / HTTPie
curl -H 'Authorization: Bearer <access token content>' \\\nhttp://localhost:8100/whoami\n
{\"agent\":{\"openid\":\"http://localhost:8080/auth/realms/fun-mooc/b6e85bd0-ce6e-4b24-9f0e-6e18d8744e54\"},\"scopes\":[\"email\",\"profile\"]}\n
http -A bearer -a <access token content> :8100/whoami\n
HTTP/1.1 200 OK\n...\n{\n    \"agent\": {\n        \"openid\": \"http://localhost:8080/auth/realms/fun-mooc/b6e85bd0-ce6e-4b24-9f0e-6e18d8744e54\"\n    },\n    \"scopes\": [\n        \"email\",\n        \"profile\"\n    ]\n}\n

Congrats, you\u2019ve managed to authenticate using OpenID Connect! \ud83c\udf89

"}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Ralph","text":"

\u2699\ufe0f The ultimate toolbox for your learning analytics (expect some xAPI \u2764\ufe0f)

Ralph is a toolbox for your learning analytics, it can be used as a:

  • LRS, an HTTP API server to collect xAPI statements (learning events), following the ADL LRS standard
  • command-line interface (CLI), to build data pipelines the UNIX-way\u2122\ufe0f,
  • library, to fetch learning events from various backends, (de)serialize or convert them from and to various standard formats such as xAPI, or openedx
"},{"location":"#what_is_an_lrs","title":"What is an LRS?","text":"

A Learning Record Store, or LRS, is a key component in the context of learning analytics and the Experience API (xAPI).

The Experience API (or Tin Can API) is a standard for tracking and reporting learning experiences. In particular, it defines:

  • the xAPI format of the learning events. xAPI statements include an actor, a verb, an object as well as contextual information. Here\u2019s an example statement:
    {\n    \"id\": \"12345678-1234-5678-1234-567812345678\",\n    \"actor\":{\n        \"mbox\":\"mailto:xapi@adlnet.gov\"\n    },\n    \"verb\":{\n        \"id\":\"http://adlnet.gov/expapi/verbs/created\",\n        \"display\":{\n            \"en-US\":\"created\"\n        }\n    },\n    \"object\":{\n        \"id\":\"http://example.adlnet.gov/xapi/example/activity\"\n    }\n}\n
  • the Learning Record Store (LRS), is a RESTful API that collects, stores and retrieves these events. Think of it as a learning database that unifies data from various learning platforms and applications. These events can come from an LMS (Moodle, edX), or any other learning component that supports sending xAPI statements to an LRS (e.g. an embedded video player), from various platforms.

xAPI specification version

In Ralph, we\u2019re following the xAPI specification 1.0.3 that you can find here.

For your information, xAPI specification 2.0 is out! It\u2019s not currently supported in Ralph, but you can check it here.

"},{"location":"#installation","title":"Installation","text":""},{"location":"#install_from_pypi","title":"Install from PyPI","text":"

Ralph is distributed as a standard python package; it can be installed via pip or any other python package manager (e.g. Poetry, Pipenv, etc.):

Use a virtual environment for installation

To maintain a clean and controlled environment when installing ralph-malph, consider using a virtual environment.

  • Create a virtual environment:

    python3.12 -m venv <path-to-virtual-environment>\n

  • Activate the virtual environment:

    source <path-to-virtual-environment>/bin/activate\n

If you want to generate xAPI statements from your application and only need to integrate learning statement models in your project, you don\u2019t need to install the backends, cli or lrs extra dependencies; the core library is all you need:

pip install ralph-malph\n

If you want to use the Ralph LRS server, add the lrs flavour to your installation. You also have to choose the type of backend you will use for LRS data storage (backend-clickhouse, backend-es, backend-mongo).

  • Install the core package with the LRS and the Elasticsearch backend. For example:
pip install ralph-malph[backend-es,lrs]\n
  • Add the cli flavour if you want to use the LRS on the command line:
pip install ralph-malph[backend-es,lrs,cli]\n
  • If you want to play around with backends with Ralph as a library, you can install:
pip install ralph-malph[backends]\n
  • If you have various uses for Ralph\u2019s features or would like to discover all the existing functionalities, it is recommended to install the full package:
pip install ralph-malph[full]\n
"},{"location":"#install_from_dockerhub","title":"Install from DockerHub","text":"

Ralph is distributed as a Docker image. If Docker is installed on your machine, it can be pulled from DockerHub:

docker run --rm -i fundocker/ralph:latest ralph --help\n
Use a ralph alias in your local environment

Simplify your workflow by creating an alias for easy access to Ralph commands:

alias ralph=\"docker run --rm -i fundocker/ralph:latest ralph\"\n
"},{"location":"#lrs_specification_compliance","title":"LRS specification compliance","text":"

WIP.

"},{"location":"#contributing_to_ralph","title":"Contributing to Ralph","text":"

If you\u2019re interested in contributing to Ralph, whether it\u2019s by reporting issues, suggesting improvements, or submitting code changes, please head over to our dedicated Contributing to Ralph page. There, you\u2019ll find detailed guidelines and instructions on how to take part in the project.

We look forward to your contributions and appreciate your commitment to making Ralph a more valuable tool for everyone.

"},{"location":"#contributors","title":"Contributors","text":""},{"location":"#license","title":"License","text":"

This work is released under the MIT License (see LICENSE).

"},{"location":"CHANGELOG/","title":"Changelog","text":"

All notable changes to this project will be documented in this file.

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

"},{"location":"CHANGELOG/#unreleased","title":"Unreleased","text":""},{"location":"CHANGELOG/#501_-_2024-07-11","title":"5.0.1 - 2024-07-11","text":""},{"location":"CHANGELOG/#changed","title":"Changed","text":"
  • Force Elasticsearch REFRESH_AFTER_WRITE setting to be a string
"},{"location":"CHANGELOG/#fixed","title":"Fixed","text":"
  • Fix LaxStatement validation to prevent statements IDs modification
"},{"location":"CHANGELOG/#500_-_2024-05-02","title":"5.0.0 - 2024-05-02","text":""},{"location":"CHANGELOG/#added","title":"Added","text":"
  • Models: Add Webinar xAPI activity type
"},{"location":"CHANGELOG/#changed_1","title":"Changed","text":"
  • Upgrade pydantic to 2.7.0
  • Migrate model tests from hypothesis strategies to polyfactory
  • Replace soon-to-be deprecated parse_obj_as with TypeAdapter
"},{"location":"CHANGELOG/#420_-_2024-04-08","title":"4.2.0 - 2024-04-08","text":""},{"location":"CHANGELOG/#added_1","title":"Added","text":"
  • Models: Add Edx teams-related events support
  • Models: Add Edx notes events support
  • Models: Add Edx certificate events support
  • Models: Add Edx bookmark (renamed Course Resource) events support
  • Models: Add Edx poll and survey events support
  • Models: Add Edx Course Content Completion events support
  • Models: Add Edx drag and drop events support
  • Models: Add Edx cohort events support
  • Models: Add Edx content library interaction events support
  • Backends: Add ralph.backends.data and ralph.backends.lrs entry points to discover backends from plugins.
"},{"location":"CHANGELOG/#changed_2","title":"Changed","text":"
  • Backends: the first argument of the get_backends method now requires a list of EntryPoints, each pointing to a backend class, instead of a tuple of packages containing backends.
  • API: The RUNSERVER_BACKEND configuration value is no longer validated to point to an existing backend.
"},{"location":"CHANGELOG/#fixed_1","title":"Fixed","text":"
  • LRS: Fix querying on activity when LRS contains statements with an object lacking an objectType attribute
"},{"location":"CHANGELOG/#410_-_2024-02-12","title":"4.1.0 - 2024-02-12","text":""},{"location":"CHANGELOG/#added_2","title":"Added","text":"
  • Add LRS multitenancy support for user-specific target storage
"},{"location":"CHANGELOG/#changed_3","title":"Changed","text":"
  • query_statements and query_statements_by_ids methods can now take an optional user-specific target
"},{"location":"CHANGELOG/#fixed_2","title":"Fixed","text":"
  • Backends: switch LRSStatementsQuery since/until field types to iso 8601 string
"},{"location":"CHANGELOG/#removed","title":"Removed","text":"
  • Removed event_table_name attribute of the ClickHouse data backend
"},{"location":"CHANGELOG/#400_-_2024-01-23","title":"4.0.0 - 2024-01-23","text":""},{"location":"CHANGELOG/#added_3","title":"Added","text":"
  • Backends: Add Writable and Listable interfaces to distinguish supported functionalities among data backends
  • Backends: Add max_statements option to data backends read method
  • Backends: Add prefetch option to async data backends read method
  • Backends: Add concurrency option to async data backends write method
  • Backends: Add get_backends function to automatically discover backends for CLI and LRS usage
  • Backends: Add client options for WSDataBackend
  • Backends: Add READ_CHUNK_SIZE and WRITE_CHUNK_SIZE data backend settings
  • Models: Implement Pydantic model for LRS Statements resource query parameters
  • Models: Implement xAPI LMS Profile statements validation
  • Models: Add EdX to xAPI converters for enrollment events
  • Project: Add aliases for ralph-malph extra dependencies: backends and full
"},{"location":"CHANGELOG/#changed_4","title":"Changed","text":"
  • Arnold: Add variable to override PVC name in arnold deployment
  • API: GET /statements now has \u201cmine\u201d option which matches statements that have an authority field matching that of the user
  • API: Invalid parameters now return 400 status code
  • API: Forwarding PUT now uses PUT (instead of POST)
  • API: Incoming statements are enriched with id, timestamp, stored and authority
  • API: Add RALPH_LRS_RESTRICT_BY_AUTHORITY option making ?mine=True implicit
  • API: Add RALPH_LRS_RESTRICT_BY_SCOPE option enabling endpoint access control by user scopes
  • API: Enhance \u2018limit\u2019 query parameter\u2019s validation
  • API: Variable RUNSERVER_AUTH_BACKEND becomes RUNSERVER_AUTH_BACKENDS, and multiple authentication methods are supported simultaneously
  • Backends: Refactor LRS Statements resource query parameters defined for ralph API
  • Backends: Refactor database, storage, http and stream backends under the unified data backend interface [BC]
  • Backends: Refactor LRS query_statements and query_statements_by_ids backends methods under the unified lrs backend interface [BC]
  • Backends: Update statementId and voidedStatementId to snake_case, with camelCase alias, in LRSStatementsQuery
  • Backends: Replace reference to a JSON column in ClickHouse with function calls on the String column [BC]
  • CLI: User credentials must now include an \u201cagent\u201d field which can be created using the cli
  • CLI: Change push to write and fetch to read [BC]
  • CLI: Change -c --chunk-size option to -s --chunk-size [BC]
  • CLI: Change websocket backend name -b ws to -b async_ws along with it\u2019s uri option --ws-uri to --async-ws-uri [BC]
  • CLI: List cli usage strings in alphabetical order
  • CLI: Change backend configuration environment variable prefixes from RALPH_BACKENDS__{{DATABASE|HTTP|STORAGE|STREAM}}__{{BACKEND}}__{{OPTION}} to RALPH_BACKENDS__DATA__{{BACKEND}}__{{OPTION}}
  • Models: The xAPI context.contextActivities.category field is now mandatory in the video and virtual classroom profiles. [BC]
  • Upgrade base python version to 3.12 for the development stack and Docker image
  • Upgrade bcrypt to 4.1.2
  • Upgrade cachetools to 5.3.2
  • Upgrade fastapi to 0.108.0
  • Upgrade sentry_sdk to 1.39.1
  • Upgrade uvicorn to 0.25.0
"},{"location":"CHANGELOG/#fixed_3","title":"Fixed","text":"
  • API: Fix a typo (\u2018attachements\u2019 -> \u2018attachments\u2019) to ensure compliance with the LRS specification and prevent potential silent bugs
"},{"location":"CHANGELOG/#removed_1","title":"Removed","text":"
  • Project: Drop support for Python 3.7
  • Models: Remove school, course, module context extensions in Edx to xAPI base converter
  • Models: Remove name field in VideoActivity xAPI model mistakenly used in video profile
  • CLI: Remove DEFAULT_BACKEND_CHUNK_SIZE environment variable configuration
"},{"location":"CHANGELOG/#390_-_2023-07-21","title":"3.9.0 - 2023-07-21","text":""},{"location":"CHANGELOG/#changed_5","title":"Changed","text":"
  • Upgrade fastapi to 0.100.0
  • Upgrade sentry_sdk to 1.28.1
  • Upgrade uvicorn to 0.23.0
  • Enforce valid IRI for activity parameter in GET /statements
  • Change how duplicate xAPI statements are handled for clickhouse backend
"},{"location":"CHANGELOG/#380_-_2023-06-21","title":"3.8.0 - 2023-06-21","text":""},{"location":"CHANGELOG/#added_4","title":"Added","text":"
  • Implement edX open response assessment events pydantic models
  • Implement edx peer instruction events pydantic models
  • Implement xAPI VideoDownloaded pydantic model (using xAPI TinCan downloaded verb)
"},{"location":"CHANGELOG/#changed_6","title":"Changed","text":"
  • Allow to use a query for HTTP backends in the CLI
"},{"location":"CHANGELOG/#370_-_2023-06-13","title":"3.7.0 - 2023-06-13","text":""},{"location":"CHANGELOG/#added_5","title":"Added","text":"
  • Implement asynchronous async_lrs backend
  • Implement synchronous lrs backend
  • Implement xAPI virtual classroom pydantic models
  • Allow to insert custom endpoint url for S3 service
  • Cache the HTTP Basic auth credentials to improve API response time
  • Support OpenID Connect authentication method
"},{"location":"CHANGELOG/#changed_7","title":"Changed","text":"
  • Clean xAPI pydantic models naming convention
  • Upgrade fastapi to 0.97.0
  • Upgrade sentry_sdk to 1.25.1
  • Set Clickhouse client_options to a dedicated pydantic model
  • Upgrade httpx to 0.24.1
  • Force a valid (JSON-formatted) IFI to be passed for the /statements GET query agent filtering
  • Upgrade cachetools to 5.3.1
"},{"location":"CHANGELOG/#removed_2","title":"Removed","text":"
  • verb.display field no longer mandatory in xAPI models and for converter
"},{"location":"CHANGELOG/#360_-_2023-05-17","title":"3.6.0 - 2023-05-17","text":""},{"location":"CHANGELOG/#added_6","title":"Added","text":"
  • Allow to ignore health check routes for Sentry transactions
"},{"location":"CHANGELOG/#changed_8","title":"Changed","text":"
  • Upgrade sentry_sdk to 1.22.2
  • Upgrade uvicorn to 0.22.0
  • LRS /statements GET method returns a code 400 with certain parameters as per the xAPI specification
  • Use batch/v1 api in cronjob_pipeline manifest
  • Use autoscaling/v2 in HorizontalPodAutoscaler manifest
"},{"location":"CHANGELOG/#fixed_4","title":"Fixed","text":"
  • Fix the more IRL building in LRS /statements GET requests
"},{"location":"CHANGELOG/#351_-_2023-04-18","title":"3.5.1 - 2023-04-18","text":""},{"location":"CHANGELOG/#changed_9","title":"Changed","text":"
  • Upgrade httpx to 0.24.0
  • Upgrade fastapi to 0.95.1
  • Upgrade sentry_sdk to 1.19.1
  • Upgrade uvicorn to 0.21.1
"},{"location":"CHANGELOG/#fixed_5","title":"Fixed","text":"
  • An issue with starting Ralph in pre-built Docker containers
  • Fix double quoting in ClickHouse backend server parameters
  • An issue with Ralph starting when ClickHouse is down
"},{"location":"CHANGELOG/#350_-_2023-03-08","title":"3.5.0 - 2023-03-08","text":""},{"location":"CHANGELOG/#added_7","title":"Added","text":"
  • Implement PUT verb on statements endpoint
  • Add ClickHouse database backend support
"},{"location":"CHANGELOG/#changed_10","title":"Changed","text":"
  • Make trailing slashes optional on statements endpoint
  • Upgrade sentry_sdk to 1.16.0
"},{"location":"CHANGELOG/#340_-_2023-03-01","title":"3.4.0 - 2023-03-01","text":""},{"location":"CHANGELOG/#changed_11","title":"Changed","text":"
  • Upgrade fastapi to 0.92.0
  • Upgrade sentry_sdk to 1.15.0
"},{"location":"CHANGELOG/#fixed_6","title":"Fixed","text":"
  • Restore sentry integration in the LRS server
"},{"location":"CHANGELOG/#330_-_2023-02-03","title":"3.3.0 - 2023-02-03","text":""},{"location":"CHANGELOG/#added_8","title":"Added","text":"
  • Restore python 3.7+ support for library usage (models)
"},{"location":"CHANGELOG/#changed_12","title":"Changed","text":"
  • Allow xAPI extra fields in extensions fields
"},{"location":"CHANGELOG/#321_-_2023-02-01","title":"3.2.1 - 2023-02-01","text":""},{"location":"CHANGELOG/#changed_13","title":"Changed","text":"
  • Relax required Python version to 3.7+
"},{"location":"CHANGELOG/#320_-_2023-01-25","title":"3.2.0 - 2023-01-25","text":""},{"location":"CHANGELOG/#added_9","title":"Added","text":"
  • Add a new auth subcommand to generate required credentials file for the LRS
  • Implement support for AWS S3 storage backend
  • Add CLI --version option
"},{"location":"CHANGELOG/#changed_14","title":"Changed","text":"
  • Upgrade fastapi to 0.89.1
  • Upgrade httpx to 0.23.3
  • Upgrade sentry_sdk to 1.14.0
  • Upgrade uvicorn to 0.20.0
  • Tray: add the ca_certs path for the ES backend client option (LRS)
  • Improve Sentry integration for the LRS
  • Update handbook link to https://handbook.openfun.fr
  • Upgrade base python version to 3.11 for the development stack and Docker image
"},{"location":"CHANGELOG/#fixed_7","title":"Fixed","text":"
  • Restore ES and Mongo backends ability to use client options
"},{"location":"CHANGELOG/#310_-_2022-11-17","title":"3.1.0 - 2022-11-17","text":""},{"location":"CHANGELOG/#added_10","title":"Added","text":"
  • EdX to xAPI converters for video events
"},{"location":"CHANGELOG/#changed_15","title":"Changed","text":"
  • Improve Ralph\u2019s library integration by unpinning dependencies (and prefer ranges)
  • Upgrade fastapi to 0.87.0
"},{"location":"CHANGELOG/#removed_3","title":"Removed","text":"
  • ModelRules constraint
"},{"location":"CHANGELOG/#300_-_2022-10-19","title":"3.0.0 - 2022-10-19","text":""},{"location":"CHANGELOG/#added_11","title":"Added","text":"
  • Implement edX video browser events pydantic models
  • Create a post endpoint for statements implementing the LRS spec
  • Implement support for the MongoDB database backend
  • Implement support for custom queries when using database backends get method (used in the fetch command)
  • Add dotenv configuration file support and python-dotenv dependency
  • Add host and port options for the runserver cli command
  • Add support for database selection when running the Ralph LRS server
  • Implement support for xAPI statement forwarding
  • Add database backends status checking
  • Add health LRS router
  • Tray: add LRS server support
"},{"location":"CHANGELOG/#changed_16","title":"Changed","text":"
  • Migrate to python-legacy handler for mkdocstrings package
  • Upgrade click to 8.1.3
  • Upgrade elasticsearch to 8.3.3
  • Upgrade fastapi to 0.79.1
  • Upgrade ovh to 1.0.0
  • Upgrade pydantic to 1.9.2
  • Upgrade pymongo to 4.2.0
  • Upgrade python-keystoneclient to 5.0.0
  • Upgrade python-swiftclient to 4.0.1
  • Upgrade requests to 2.28.1
  • Upgrade sentry_sdk to 1.9.5
  • Upgrade uvicorn to 0.18.2
  • Upgrade websockets to 10.3
  • Make backends yield results instead of writing to standard streams (BC)
  • Use pydantic settings management instead of global variables in defaults.py
  • Rename backend and parser parameter environment variables (BC)
  • Make project dependencies management more modular for library usage
"},{"location":"CHANGELOG/#removed_4","title":"Removed","text":"
  • Remove YAML configuration file support and pyyaml dependency (BC)
"},{"location":"CHANGELOG/#fixed_8","title":"Fixed","text":"
  • Tray: do not create a cronjobs list when no cronjob has been defined
  • Restore history mixin logger
"},{"location":"CHANGELOG/#210_-_2022-04-13","title":"2.1.0 - 2022-04-13","text":""},{"location":"CHANGELOG/#added_12","title":"Added","text":"
  • Implement edX problem interaction events pydantic models
  • Implement edX textbook interaction events pydantic models
  • ws websocket stream backend (compatible with the fetch command)
  • bundle jq, curl and wget in the fundocker/ralph Docker image
  • Tray: enable ralph app deployment command configuration
  • Add a runserver command with basic auth and a Whoami route
  • Create a get endpoint for statements implementing the LRS spec
  • Add optional fields to BaseXapiModel
"},{"location":"CHANGELOG/#changed_17","title":"Changed","text":"
  • Upgrade uvicorn to 0.17.4
  • Upgrade elasticsearch to 7.17.0
  • Upgrade sentry_sdk to 1.5.5
  • Upgrade fastapi to 0.73.0
  • Upgrade pyparsing to 3.0.7
  • Upgrade pydantic to 1.9.0
  • Upgrade python-keystoneclient to 4.4.0
  • Upgrade python-swiftclient to 3.13.0
  • Upgrade pyyaml to 6.0
  • Upgrade requests to 2.27.1
  • Upgrade websockets to 10.1
"},{"location":"CHANGELOG/#201_-_2021-07-15","title":"2.0.1 - 2021-07-15","text":""},{"location":"CHANGELOG/#changed_18","title":"Changed","text":"
  • Upgrade elasticsearch to 7.13.3
"},{"location":"CHANGELOG/#fixed_9","title":"Fixed","text":"
  • Restore elasticsearch backend datastream compatibility for bulk operations
"},{"location":"CHANGELOG/#200_-_2021-07-09","title":"2.0.0 - 2021-07-09","text":""},{"location":"CHANGELOG/#added_13","title":"Added","text":"
  • xAPI video interacted pydantic models
  • xAPI video terminated pydantic models
  • xAPI video completed pydantic models
  • xAPI video seeked pydantic models
  • xAPI video initialized pydantic models
  • xAPI video paused pydantic models
  • convert command to transform edX events to xAPI format
  • EdX to xAPI converters for page viewed and page_close events
  • Implement core event format converter
  • xAPI video played pydantic models
  • xAPI page viewed and page terminated pydantic models
  • Implement edX navigational events pydantic models
  • Implement edX enrollment events pydantic models
  • Install security updates in project Docker images
  • Model selector to retrieve associated pydantic model of a given event
  • validate command to lint edX events using pydantic models
  • Support all available bulk operation types for the elasticsearch backend (create, index, update, delete) using the --es-op-type option
"},{"location":"CHANGELOG/#changed_19","title":"Changed","text":"
  • Upgrade elasticsearch to 7.13.2
  • Upgrade python-swiftclient to 3.12.0
  • Upgrade click to 8.0.1
  • Upgrade click-option-group to 0.5.3
  • Upgrade pydantic to 1.8.2
  • Upgrade sentry_sdk to 1.1.0
  • Rename edX models
  • Migrate model tests from factories to hypothesis strategies
  • Tray: switch from openshift to k8s (BC)
  • Tray: remove useless deployment probes
"},{"location":"CHANGELOG/#fixed_10","title":"Fixed","text":"
  • Tray: remove version immutable field in DC selector
"},{"location":"CHANGELOG/#120_-_2021-02-26","title":"1.2.0 - 2021-02-26","text":""},{"location":"CHANGELOG/#added_14","title":"Added","text":"
  • edX server event pydantic model and factory
  • edX page_close browser event pydantic model and factory
  • Tray: allow to specify a self-generated elasticsearch cluster CA certificate
"},{"location":"CHANGELOG/#fixed_11","title":"Fixed","text":"
  • Tray: add missing Swift variables in the secret
  • Tray: fix pods anti-affinity selector
"},{"location":"CHANGELOG/#removed_5","title":"Removed","text":"
  • pandas is no longer required
"},{"location":"CHANGELOG/#110_-_2021-02-04","title":"1.1.0 - 2021-02-04","text":""},{"location":"CHANGELOG/#added_15","title":"Added","text":"
  • Support for Swift storage backend
  • Use the push command --ignore-errors option to ignore ES bulk import errors
  • The elasticsearch backend now accepts passing all supported client options
"},{"location":"CHANGELOG/#changed_20","title":"Changed","text":"
  • Upgrade pyyaml to 5.4.1
  • Upgrade pandas to 1.2.1
"},{"location":"CHANGELOG/#removed_6","title":"Removed","text":"
  • click_log is no longer required as we are able to configure logging
"},{"location":"CHANGELOG/#100_-_2021-01-13","title":"1.0.0 - 2021-01-13","text":""},{"location":"CHANGELOG/#added_16","title":"Added","text":"
  • Implement base CLI commands (list, extract, fetch & push) for supported backends
  • Support for ElasticSearch database backend
  • Support for LDP storage backend
  • Support for FS storage backend
  • Parse (gzipped) tracking logs in GELF format
  • Support for application\u2019s configuration file
  • Add optional sentry integration
  • Distribute Arnold\u2019s tray to deploy Ralph in a k8s cluster as cronjobs
"},{"location":"LICENSE/","title":"License","text":"

MIT License

Copyright (c) 2020-present France Universit\u00e9 Num\u00e9rique

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \u201cSoftware\u201d), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED \u201cAS IS\u201d, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

"},{"location":"UPGRADE/","title":"Upgrade","text":"

All instructions to upgrade this project from one release to the next will be documented in this file. Upgrades must be run sequentially, meaning you should not skip minor/major releases while upgrading (fix releases can be skipped).

This project adheres to Semantic Versioning.

"},{"location":"UPGRADE/#4x_to_5y","title":"4.x to 5.y","text":""},{"location":"UPGRADE/#upgrade_learning_events_models","title":"Upgrade learning events models","text":"

The xAPI learning statement validator and converter are built with Pydantic. Ralph 5.x is compatible with Pydantic 2.x. Please refer to the Pydantic migration guide if you are using Ralph\u2019s models feature.

Most optional fields in Pydantic models default to None in Ralph 5.y. If you serialize Pydantic models from Ralph and want to keep the same content in your serialization, set exclude_none to True in the model_dump serialization method.
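
For example, here is a minimal sketch with a generic Pydantic v2 model (not an actual Ralph model) showing the effect of exclude_none:

# Hedged illustration: exclude_none keeps unset optional fields out of the output.\nfrom typing import Optional\n\nfrom pydantic import BaseModel\n\n\nclass ExampleStatement(BaseModel):\n    id: str\n    authority: Optional[dict] = None\n\n\nstatement = ExampleStatement(id=\"12345678-1234-5678-1234-567812345678\")\nprint(statement.model_dump())                   # {'id': '...', 'authority': None}\nprint(statement.model_dump(exclude_none=True))  # {'id': '...'}\n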

"},{"location":"UPGRADE/#3x_to_4y","title":"3.x to 4.y","text":""},{"location":"UPGRADE/#upgrade_user_credentials","title":"Upgrade user credentials","text":"

To conform to xAPI specifications, we need to represent users as xAPI Agents. You must therefore delete and re-create the credentials file using the updated CLI, OR you can modify it directly to add the agent field. The credentials file is located in { RALPH_APP_DIR }/{ RALPH_AUTH_FILE } (defaults to .ralph/auth.json). Each user profile must follow the following pattern (see this post for examples of valid agent objects):

{\n  \"username\": \"USERNAME_UNCHANGED\",\n  \"hash\": \"PASSWORD_HASH_UNCHANGED\",\n  \"scopes\": [ LIST_OF_SCOPES_UNCHANGED ],\n  \"agent\": { A_VALID_AGENT_OBJECT }\n}\n
The agent can take one of the following forms, as specified by the xAPI specification: - mbox:
\"agent\": {\n      \"mbox\": \"mailto:john.doe@example.com\"\n}\n
- mbox_sha1sum:
\"agent\": {\n        \"mbox_sha1sum\": \"ebd31e95054c018b10727ccffd2ef2ec3a016ee9\",\n}\n
- openid:
\"agent\": {\n      \"openid\": \"http://foo.openid.example.org/\"\n}\n
- account:
\"agent\": {\n      \"account\": {\n        \"name\": \"simonsAccountName\",\n        \"homePage\": \"http://www.exampleHomePage.com\"\n}\n

For example here is a valid auth.json file:

[\n  {\n    \"username\": \"john.doe@example.com\",\n    \"hash\": \"$2b$12$yBXrzIuRIk6yaft5KUgVFOIPv0PskCCh9PXmF2t7pno.qUZ5LK0D2\",\n    \"scopes\": [\"example_scope\"],\n    \"agent\": {\n      \"mbox\": \"mailto:john.doe@example.com\"\n    }\n  },\n  {\n    \"username\": \"simon.says@example.com\",\n    \"hash\": \"$2b$12$yBXrzIuRIk6yaft5KUgVFOIPv0PskCCh9PXmF2t7pno.qUZ5LK0D2\",\n    \"scopes\": [\"second_scope\", \"third_scope\"],\n    \"agent\": {\n      \"account\": {\n        \"name\": \"simonsAccountName\",\n        \"homePage\": \"http://www.exampleHomePage.com\"\n      }\n    }\n  }\n]\n
"},{"location":"UPGRADE/#upgrade_ralph_cli_usage","title":"Upgrade Ralph CLI usage","text":"

If you are using Ralph\u2019s CLI, the following changes may affect you:

  • The ralph fetch command changed to ralph read
  • The -b ws backend option changed to -b async_ws
    • The corresponding --ws-uri option changed to --async-ws-uri
  • The -c --chunk-size option changed to -s --chunk-size
  • The DEFAULT_BACKEND_CHUNK_SIZE environment variable configuration is removed in favor of allowing each backend to define their own defaults:

    Default (read) chunk size per backend:
    • async_es/es: RALPH_BACKENDS__DATA__ES__READ_CHUNK_SIZE=500
    • async_lrs/lrs: RALPH_BACKENDS__DATA__LRS__READ_CHUNK_SIZE=500
    • async_mongo/mongo: RALPH_BACKENDS__DATA__MONGO__READ_CHUNK_SIZE=500
    • clickhouse: RALPH_BACKENDS__DATA__CLICKHOUSE__READ_CHUNK_SIZE=500
    • fs: RALPH_BACKENDS__DATA__FS__READ_CHUNK_SIZE=4096
    • ldp: RALPH_BACKENDS__DATA__LDP__READ_CHUNK_SIZE=4096
    • s3: RALPH_BACKENDS__DATA__S3__READ_CHUNK_SIZE=4096
    • swift: RALPH_BACKENDS__DATA__SWIFT__READ_CHUNK_SIZE=4096
  • The ralph push command changed to ralph write

  • The -c --chunk-size option changed to -s --chunk-size
  • The DEFAULT_BACKEND_CHUNK_SIZE environment variable configuration is removed in favor of allowing each backend to define their own defaults:

    Default (write) chunk size per backend:
    • async_es/es: RALPH_BACKENDS__DATA__ES__WRITE_CHUNK_SIZE=500
    • async_lrs/lrs: RALPH_BACKENDS__DATA__LRS__WRITE_CHUNK_SIZE=500
    • async_mongo/mongo: RALPH_BACKENDS__DATA__MONGO__WRITE_CHUNK_SIZE=500
    • clickhouse: RALPH_BACKENDS__DATA__CLICKHOUSE__WRITE_CHUNK_SIZE=500
    • fs: RALPH_BACKENDS__DATA__FS__WRITE_CHUNK_SIZE=4096
    • ldp: RALPH_BACKENDS__DATA__LDP__WRITE_CHUNK_SIZE=4096
    • s3: RALPH_BACKENDS__DATA__S3__WRITE_CHUNK_SIZE=4096
    • swift: RALPH_BACKENDS__DATA__SWIFT__WRITE_CHUNK_SIZE=4096
  • Environment variables used to configure backend options for CLI usage (read/write/list commands) changed their prefix: RALPH_BACKENDS__{{DATABASE or HTTP or STORAGE or STREAM}}__{{BACKEND}}__{{OPTION}} changed to RALPH_BACKENDS__DATA__{{BACKEND}}__{{OPTION}}

  • Environment variables used to configure backend options for LRS usage (runserver command) changed their prefix: RALPH_BACKENDS__{{DATABASE}}__{{BACKEND}}__{{OPTION}} changed to RALPH_BACKENDS__LRS__{{BACKEND}}__{{OPTION}}
"},{"location":"UPGRADE/#upgrade_history_syntax","title":"Upgrade history syntax","text":"

CLI syntax has been changed from fetch & push to read & write, affecting the command history. You must replace the command history after updating:
  • locate your history file path, which is in { RALPH_APP_DIR }/history.json (defaults to .ralph/history.json)
  • run the commands below to update the history

sed -i 's/\"fetch\"/\"read\"/g' { my_history_file_path }\nsed -i 's/\"push\"/\"write\"/g' { my_history_file_path }\n
"},{"location":"UPGRADE/#upgrade_ralph_library_usage_backends","title":"Upgrade Ralph library usage (backends)","text":"

If you use Ralph\u2019s backends in your application, the following changes might affect you:

Backends from ralph.backends.database, ralph.backends.http, ralph.backends.stream, and ralph.backends.storage packages have moved to a single ralph.backends.data package.

Ralph v3 (database/http/storage/stream) backends and their Ralph v4 data backend equivalents:
  • ralph.backends.database.clickhouse.ClickHouseDatabase → ralph.backends.data.clickhouse.ClickHouseDataBackend
  • ralph.backends.database.es.ESDatabase → ralph.backends.data.es.ESDataBackend
  • ralph.backends.database.mongo.MongoDatabase → ralph.backends.data.mongo.MongoDataBackend
  • ralph.backends.http.async_lrs.AsyncLRSHTTP → ralph.backends.data.async_lrs.AsyncLRSDataBackend
  • ralph.backends.http.lrs.LRSHTTP → ralph.backends.data.lrs.LRSDataBackend
  • ralph.backends.storage.fs.FSStorage → ralph.backends.data.fs.FSDataBackend
  • ralph.backends.storage.ldp.LDPStorage → ralph.backends.data.ldp.LDPDataBackend
  • ralph.backends.storage.s3.S3Storage → ralph.backends.data.s3.S3DataBackend
  • ralph.backends.storage.swift.SwiftStorage → ralph.backends.data.swift.SwiftDataBackend
  • ralph.backends.stream.ws.WSStream → ralph.backends.data.async_ws.AsyncWSDataBackend

LRS-specific query_statements and query_statements_by_ids database backend methods have moved to a dedicated ralph.backends.lrs.BaseLRSBackend interface that extends the data backend interface with these two methods.

The query_statements_by_ids method return type changed to Iterator[dict].

Ralph v3 database backends for LRS usage and their Ralph v4 LRS data backend equivalents:
  • ralph.backends.database.clickhouse.ClickHouseDatabase → ralph.backends.lrs.clickhouse.ClickHouseLRSBackend
  • ralph.backends.database.es.ESDatabase → ralph.backends.lrs.es.ESLRSBackend
  • ralph.backends.database.mongo.MongoDatabase → ralph.backends.lrs.mongo.MongoLRSBackend
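
As a minimal illustration based on the mapping above (assuming the relevant backend extras are installed), migrating an import for LRS-specific statement queries looks like:

# Before (Ralph v3):\n# from ralph.backends.database.es import ESDatabase\n# After (Ralph v4), for LRS-specific statement queries:\nfrom ralph.backends.lrs.es import ESLRSBackend\n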

Backend interface differences

  • Data backends are read-only by default
  • Data backends that support write operations inherit from the ralph.backends.data.base.Writable interface
  • Data backends that support list operations inherit from the ralph.backends.data.base.Listable interface
  • Data backends that support LRS operations (query_statements/query_statements_by_ids) inherit from the ralph.backends.lrs.BaseLRSBackend interface
  • __init__(self, **kwargs) changed to __init__(self, settings: DataBackendSettings), where each DataBackend defines its own Settings object. For example, the FSDataBackend uses FSDataBackendSettings
  • stream and get methods changed to read
  • put methods changed to write

Backend usage migration example

Ralph v3 using ESDatabase:

from ralph.conf import ESClientOptions\nfrom ralph.backends.database.es import ESDatabase, ESQuery\n\n# Instantiate the backend.\nbackend = ESDatabase(\n  hosts=\"localhost\",\n  index=\"statements\",\n  client_options=ESClientOptions(verify_certs=False)\n)\n# Read records from backend.\nquery = ESQuery(query={\"query\": {\"term\": {\"modulo\": 0}}})\nes_statements = list(backend.get(query))\n\n# Write records to backend.\nbackend.put([{\"id\": 1}])\n

Ralph v4 using ESDataBackend:

from ralph.backends.data.es import (\n  ESClientOptions,\n  ESDataBackend,\n  ESDataBackendSettings,\n  ESQuery,\n)\n\n# Instantiate the backend.\nsettings = ESDataBackendSettings(\n  HOSTS=\"localhost\",\n  INDEX=\"statements\",\n  CLIENT_OPTIONS=ESClientOptions(verify_certs=False)\n)\nbackend = ESDataBackend(settings)\n\n# Read records from backend.\nquery = ESQuery(query={\"term\": {\"modulo\": 0}})\nes_statements = list(backend.read(query))\n\n# Write records to backend.\nbackend.write([{\"id\": 1}])\n
"},{"location":"UPGRADE/#upgrade_clickhouse_schema","title":"Upgrade ClickHouse schema","text":"

If you are using the ClickHouse backend, schema changes have been made to drop the existing JSON column in favor of the String version of the same data. See this issue for details.

Ralph does not manage the ClickHouse schema, so if you have existing data you will need to manually alter it as an admin user. Note: this will rewrite the statements table, which may take a long time if you have many rows. The command to run is:

-- If RALPH_BACKENDS__DATA__CLICKHOUSE__DATABASE is 'xapi'\n-- and RALPH_BACKENDS__DATA__CLICKHOUSE__EVENT_TABLE_NAME is 'test'\n\nALTER TABLE xapi.test DROP COLUMN event, RENAME COLUMN event_str to event;\n
"},{"location":"commands/","title":"Commands","text":""},{"location":"commands/#ralph","title":"ralph","text":"

The CLI is a stream-based tool to play with your logs.

It offers functionalities to:

  • Validate or convert learning data in different standards
  • Read and write learning data to various databases or servers
  • Manage an instance of a Ralph LRS server

Usage:

ralph [OPTIONS] COMMAND [ARGS]...\n

Options:

  -v, --verbosity LVL  Either CRITICAL, ERROR, WARNING, INFO (default) or\n                       DEBUG\n  --version            Show the version and exit.\n  --help               Show this message and exit.\n
"},{"location":"commands/#ralph-auth","title":"ralph auth","text":"

Generate credentials for LRS HTTP basic authentication.

Usage:

ralph auth [OPTIONS]\n

Options:

  -u, --username TEXT             The user for which we generate credentials.\n                                  [required]\n  -p, --password TEXT             The password to encrypt for this user. Will\n                                  be prompted if missing.  [required]\n  -s, --scope TEXT                The user scope(s). This option can be\n                                  provided multiple times.  [required]\n  -t, --target TEXT               The target location where statements are\n                                  stored for the user.\n  -M, --agent-ifi-mbox TEXT       The mbox Inverse Functional Identifier of\n                                  the associated agent.\n  -S, --agent-ifi-mbox-sha1sum TEXT\n                                  The mbox-sha1sum Inverse Functional\n                                  Identifier of the associated agent.\n  -O, --agent-ifi-openid TEXT     The openid Inverse Functional Identifier of\n                                  the associated agent.\n  -A, --agent-ifi-account TEXT...\n                                  Input \"{name} {homePage}\". The account\n                                  Inverse Functional Identifier of the\n                                  associated agent.\n  -N, --agent-name TEXT           The name of the associated agent.\n  -w, --write-to-disk             Write new credentials to the LRS\n                                  authentication file.\n  --help                          Show this message and exit.\n
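
For example, to generate credentials for a user and write them to the LRS authentication file, an invocation could look like the following (the username, password and scope values are purely illustrative; use the scopes configured for your LRS):

ralph auth -u johndoe -p secret -s statements/read -w\n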
"},{"location":"commands/#ralph-convert","title":"ralph convert","text":"

Convert input events to a given format.

Usage:

ralph convert [OPTIONS]\n

Options:

  From edX to xAPI converter options: \n    -u, --uuid-namespace TEXT     The UUID namespace to use for the `ID` field\n                                  generation\n    -p, --platform-url TEXT       The `actor.account.homePage` to use in the\n                                  xAPI statements  [required]\n  -f, --from [edx]                Input events format to convert  [required]\n  -t, --to [xapi]                 Output events format  [required]\n  -I, --ignore-errors             Continue writing regardless of raised errors\n  -F, --fail-on-unknown           Stop converting at first unknown event\n  --help                          Show this message and exit.\n
"},{"location":"commands/#ralph-extract","title":"ralph extract","text":"

Extract input events from a container format using a dedicated parser.

Usage:

ralph extract [OPTIONS]\n

Options:

  -p, --parser [gelf|es]  Container format parser used to extract events\n                          [required]\n  --help                  Show this message and exit.\n
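
For example, assuming a gzipped GELF export of tracking logs (the archive name below is illustrative), events can be extracted and piped to other Ralph commands:

gunzip -c ./archives/2024-01-01.gz | ralph extract -p gelf\n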
"},{"location":"commands/#ralph-validate","title":"ralph validate","text":"

Validate input events of given format.

Usage:

ralph validate [OPTIONS]\n

Options:

  -f, --format [edx|xapi]  Input events format to validate  [required]\n  -I, --ignore-errors      Continue validating regardless of raised errors\n  -F, --fail-on-unknown    Stop validating at first unknown event\n  --help                   Show this message and exit.\n
"},{"location":"contribute/","title":"Contributing to Ralph","text":"

Thank you for considering contributing to Ralph! We appreciate your interest and support. This documentation provides guidelines on how to contribute effectively to our project.

"},{"location":"contribute/#issues","title":"Issues","text":"

Issues are a valuable way to contribute to Ralph. They can include bug reports, feature requests, and general questions or discussions. When creating or interacting with issues, please keep the following in mind:

"},{"location":"contribute/#1_search_for_existing_issues","title":"1. Search for existing issues","text":"

Before creating a new issue, search the existing issues to see if your concern has already been raised. If you find a related issue, you can add your input or follow the discussion. Feel free to engage in discussions, offer help, or provide feedback on existing issues. Your input is valuable in shaping the project\u2019s future.

"},{"location":"contribute/#2_creating_a_new_issue","title":"2. Creating a new issue","text":"

Use the provided issue template that best fits your concern. Provide as much information as possible when writing your issue. Your issue will be reviewed by a project maintainer, and you may be invited to open a PR if you want to contribute to the code. If not, and if your issue is relevant, a contributor will apply the changes to the project. The issue will then be automatically closed when the PR is merged.

Issues will be closed by project maintainers if they are deemed invalid. You can always reopen an issue if you believe it hasn\u2019t been adequately addressed.

"},{"location":"contribute/#3_code_of_conduct_in_discussion","title":"3. Code of conduct in discussion","text":"
  • Be respectful and considerate when participating in discussions.
  • Avoid using offensive language, and maintain a positive and collaborative tone.
  • Stay on topic and avoid derailing discussions.
"},{"location":"contribute/#discussions","title":"Discussions","text":"

Discussions in the Ralph repository are a place for open-ended conversations, questions, and general community interactions. Here\u2019s how to effectively use discussions:

"},{"location":"contribute/#1_creating_a_discussion","title":"1. Creating a discussion","text":"
  • Use a clear and concise title that summarizes the topic.
  • In the description, provide context and details regarding the discussion.
  • Use labels to categorize the discussion (e.g., \u201cquestion,\u201d \u201cgeneral discussion,\u201d \u201cannouncements,\u201d etc.).
"},{"location":"contribute/#2_participating_in_discussions","title":"2. Participating in discussions","text":"
  • Engage in conversations respectfully, respecting others\u2019 opinions.
  • Avoid spamming or making off-topic comments.
  • Help answer questions when you can.
"},{"location":"contribute/#pull_requests_pr","title":"Pull Requests (PR)","text":"

Contributing to Ralph through pull requests is a powerful way to advance the project. If you want to make changes or add new features, please follow these steps to submit a PR:

"},{"location":"contribute/#1_fork_the_repository","title":"1. Fork the repository","text":"

Begin by forking Ralph project\u2019s repository.

"},{"location":"contribute/#2_clone_the_fork","title":"2. Clone the fork","text":"

Clone the forked repository to your local machine and change the directory to the project folder using the following commands (replace <your_fork> with your GitHub username):

git clone https://github.com/<your_fork>/ralph.git\ncd ralph\n
"},{"location":"contribute/#3_create_a_new_branch","title":"3. Create a new branch","text":"

Create a new branch for your changes, ideally with a descriptive name:

git checkout -b your-new-feature\n
"},{"location":"contribute/#4_make_changes","title":"4. Make changes","text":"

Implement the changes or additions to the code, ensuring it follows OpenFUN coding and documentation standards.

For comprehensive guidance on starting your development journey with Ralph and preparing your pull request, please refer to our dedicated Start developing with Ralph tutorial.

When committing your changes, please adhere to OpenFUN commit practices. Follow the low granularity commit splitting approach and use commit messages based on the Angular commit message guidelines.

"},{"location":"contribute/#5_push_changes","title":"5. Push changes","text":"

Push your branch to your GitHub repository:

git push origin your-new-feature\n
"},{"location":"contribute/#6_create_a_pull_request","title":"6. Create a pull request","text":"

To initiate a Pull Request (PR), head to Ralph project\u2019s GitHub repository and click on New Pull Request.

Set your branch as the source and Ralph project\u2019s main branch as the target.

Provide a clear title for your PR and make use of the provided PR body template to document the changes made by your PR. This helps streamline the review process and maintain a well-documented project history.

"},{"location":"contribute/#7_review_and_discussion","title":"7. Review and discussion","text":"

Ralph project maintainers will review your PR. Be prepared to make necessary changes or address any feedback. Patience during this process is appreciated.

"},{"location":"contribute/#8_merge","title":"8. Merge","text":"

Once your PR is approved, Ralph maintainers will merge your changes into the main project. Congratulations, you\u2019ve successfully contributed to Ralph! \ud83c\udf89

"},{"location":"features/api/","title":"LRS HTTP server","text":"

Ralph implements the Learning Record Store (LRS) specification defined by ADL.

Ralph LRS, based on FastAPI, has the following key features:

  • Supports multiple databases through different backends
  • Secured with multiple authentication methods
  • Supports multitenancy
  • Enables the Total Learning Architecture with statements forwarding
  • Monitored thanks to the Sentry integration
"},{"location":"features/api/#api_documentation","title":"API documentation","text":""},{"location":"features/api/#fastapi_010","title":"FastAPI 0.1.0","text":""},{"location":"features/api/#endpoints","title":"Endpoints","text":""},{"location":"features/api/#get_xapistatements","title":"GET /xAPI/statements/","text":"

Get

Description

Read a single xAPI Statement or multiple xAPI Statements.

LRS Specification: https://github.com/adlnet/xAPI-Spec/blob/1.0.3/xAPI-Communication.md#213-get-statements

Input parameters

Parameter In Type Default Nullable Description HTTPBasic header string N/A No Basic authentication HTTPBasic header string N/A No Basic authentication activity query string No Filter, only return Statements for which the Object of the Statement is an Activity with the specified id agent query string No Filter, only return Statements for which the specified Agent or Group is the Actor or Object of the Statement ascending query boolean False No If \"true\", return results in ascending order of stored time attachments query boolean False No **Not implemented** If \"true\", the LRS uses the multipart response format and includes all attachments as described previously. If \"false\", the LRS sends the prescribed response with Content-Type application/json and does not send attachment data. format query string exact No **Not implemented** If \"ids\", only include minimum information necessary in Agent, Activity, Verb and Group Objects to identify them. For Anonymous Groups this means including the minimum information needed to identify each member. If \"exact\", return Agent, Activity, Verb and Group Objects populated exactly as they were when the Statement was received. An LRS requesting Statements for the purpose of importing them would use a format of \"exact\" in order to maintain Statement Immutability. If \"canonical\", return Activity Objects and Verbs populated with the canonical definition of the Activity Objects and Display of the Verbs as determined by the LRS, after applying the language filtering process defined below, and return the original Agent and Group Objects as in \"exact\" mode. limit query integer 100 No Maximum number of Statements to return. 0 indicates return the maximum the server will allow mine query boolean False No If \"true\", return only the results for which the authority matches the \"agent\" associated to the user that is making the query. pit_id query string No Point-in-time ID to ensure consistency of search requests through multiple pages.NB: for internal use, not part of the LRS specification. registration query string No **Not implemented** Filter, only return Statements matching the specified registration id related_activities query boolean False No **Not implemented** Apply the Activity filter broadly. Include Statements for which the Object, any of the context Activities, or any of those properties in a contained SubStatement match the Activity parameter, instead of that parameter's normal behaviour related_agents query boolean False No **Not implemented** Apply the Agent filter broadly. Include Statements for which the Actor, Object, Authority, Instructor, Team, or any of these properties in a contained SubStatement match the Agent parameter, instead of that parameter's normal behaviour. search_after query string No Sorting data to allow pagination through large number of search results. NB: for internal use, not part of the LRS specification. since query string No Only Statements stored since the specified Timestamp (exclusive) are returned statementId query string No Id of Statement to fetch until query string No Only Statements stored at or before the specified Timestamp are returned verb query string No Filter, only return Statements matching the specified Verb id voidedStatementId query string No **Not implemented** Id of voided Statement to fetch

Response 200 OK

application/json Schema of the response body
{\n    \"type\": \"object\",\n    \"title\": \"Response Get Xapi Statements  Get\"\n}\n

Response 422 Unprocessable Entity

application/json

{\n    \"detail\": [\n        {\n            \"loc\": [\n                null\n            ],\n            \"msg\": \"string\",\n            \"type\": \"string\"\n        }\n    ]\n}\n
\u26a0\ufe0f This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.

Schema of the response body
{\n    \"properties\": {\n        \"detail\": {\n            \"items\": {\n                \"$ref\": \"#/components/schemas/ValidationError\"\n            },\n            \"type\": \"array\",\n            \"title\": \"Detail\"\n        }\n    },\n    \"type\": \"object\",\n    \"title\": \"HTTPValidationError\"\n}\n
"},{"location":"features/api/#put_xapistatements","title":"PUT /xAPI/statements/","text":"

Put

Description

Store a single statement as a single member of a set.

LRS Specification: https://github.com/adlnet/xAPI-Spec/blob/1.0.3/xAPI-Communication.md#211-put-statements

Input parameters

Parameter In Type Default Nullable Description HTTPBasic header string N/A No Basic authentication HTTPBasic header string N/A No Basic authentication statementId query string No

Request body

application/json

{\n    \"actor\": null,\n    \"id\": \"af583046-98a3-42e7-877f-00ad8bfcd6df\",\n    \"object\": {\n        \"id\": \"string\"\n    },\n    \"verb\": {\n        \"id\": \"string\"\n    }\n}\n
\u26a0\ufe0f This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.

Schema of the request body
{\n    \"properties\": {\n        \"actor\": {\n            \"anyOf\": [\n                {\n                    \"$ref\": \"#/components/schemas/BaseXapiAgentWithMbox\"\n                },\n                {\n                    \"$ref\": \"#/components/schemas/BaseXapiAgentWithMboxSha1Sum\"\n                },\n                {\n                    \"$ref\": \"#/components/schemas/BaseXapiAgentWithOpenId\"\n                },\n                {\n                    \"$ref\": \"#/components/schemas/BaseXapiAgentWithAccount\"\n                },\n                {\n                    \"$ref\": \"#/components/schemas/BaseXapiAnonymousGroup\"\n                },\n                {\n                    \"$ref\": \"#/components/schemas/BaseXapiIdentifiedGroupWithMbox\"\n                },\n                {\n                    \"$ref\": \"#/components/schemas/BaseXapiIdentifiedGroupWithMboxSha1Sum\"\n                },\n                {\n                    \"$ref\": \"#/components/schemas/BaseXapiIdentifiedGroupWithOpenId\"\n                },\n                {\n                    \"$ref\": \"#/components/schemas/BaseXapiIdentifiedGroupWithAccount\"\n                }\n            ],\n            \"title\": \"Actor\"\n        },\n        \"id\": {\n            \"type\": \"string\",\n            \"format\": \"uuid\",\n            \"title\": \"Id\"\n        },\n        \"object\": {\n            \"$ref\": \"#/components/schemas/LaxObjectField\"\n        },\n        \"verb\": {\n            \"$ref\": \"#/components/schemas/LaxVerbField\"\n        }\n    },\n    \"type\": \"object\",\n    \"required\": [\n        \"actor\",\n        \"object\",\n        \"verb\"\n    ],\n    \"title\": \"LaxStatement\",\n    \"description\": \"Pydantic model for lax statement.\\n\\nIt accepts without validating all fields beyond the bare minimum required to\\nqualify an object as an XAPI statement.\"\n}\n

Response 204 No Content

Response 400 Bad Request

application/json

{\n    \"detail\": \"string\"\n}\n
\u26a0\ufe0f This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.

Schema of the response body
{\n    \"properties\": {\n        \"detail\": {\n            \"type\": \"string\",\n            \"title\": \"Detail\"\n        }\n    },\n    \"type\": \"object\",\n    \"required\": [\n        \"detail\"\n    ],\n    \"title\": \"ErrorDetail\",\n    \"description\": \"Pydantic model for errors raised detail.\\n\\nType for return value for errors raised in API endpoints.\\nUseful for OpenAPI documentation generation.\"\n}\n

Response 409 Conflict

application/json

{\n    \"detail\": \"string\"\n}\n
\u26a0\ufe0f This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.

Schema of the response body
{\n    \"properties\": {\n        \"detail\": {\n            \"type\": \"string\",\n            \"title\": \"Detail\"\n        }\n    },\n    \"type\": \"object\",\n    \"required\": [\n        \"detail\"\n    ],\n    \"title\": \"ErrorDetail\",\n    \"description\": \"Pydantic model for errors raised detail.\\n\\nType for return value for errors raised in API endpoints.\\nUseful for OpenAPI documentation generation.\"\n}\n

Response 422 Unprocessable Entity

application/json

{\n    \"detail\": [\n        {\n            \"loc\": [\n                null\n            ],\n            \"msg\": \"string\",\n            \"type\": \"string\"\n        }\n    ]\n}\n
\u26a0\ufe0f This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.

Schema of the response body
{\n    \"properties\": {\n        \"detail\": {\n            \"items\": {\n                \"$ref\": \"#/components/schemas/ValidationError\"\n            },\n            \"type\": \"array\",\n            \"title\": \"Detail\"\n        }\n    },\n    \"type\": \"object\",\n    \"title\": \"HTTPValidationError\"\n}\n
"},{"location":"features/api/#post_xapistatements","title":"POST /xAPI/statements/","text":"

Post

Description

Store a set of statements (or a single statement as a single member of a set).

NB: at this time, using POST to make a GET request is not supported. LRS Specification: https://github.com/adlnet/xAPI-Spec/blob/1.0.3/xAPI-Communication.md#212-post-statements

Input parameters

Parameter In Type Default Nullable Description HTTPBasic header string N/A No Basic authentication HTTPBasic header string N/A No Basic authentication

Request body

application/json Schema of the request body
{\n    \"anyOf\": [\n        {\n            \"$ref\": \"#/components/schemas/LaxStatement\"\n        },\n        {\n            \"items\": {\n                \"$ref\": \"#/components/schemas/LaxStatement\"\n            },\n            \"type\": \"array\"\n        }\n    ],\n    \"title\": \"Statements\"\n}\n

Response 200 OK

application/json

[\n    null\n]\n
\u26a0\ufe0f This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.

Schema of the response body
{\n    \"items\": {},\n    \"type\": \"array\",\n    \"title\": \"Response Post Xapi Statements  Post\"\n}\n

Response 400 Bad Request

application/json

{\n    \"detail\": \"string\"\n}\n
\u26a0\ufe0f This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.

Schema of the response body
{\n    \"properties\": {\n        \"detail\": {\n            \"type\": \"string\",\n            \"title\": \"Detail\"\n        }\n    },\n    \"type\": \"object\",\n    \"required\": [\n        \"detail\"\n    ],\n    \"title\": \"ErrorDetail\",\n    \"description\": \"Pydantic model for errors raised detail.\\n\\nType for return value for errors raised in API endpoints.\\nUseful for OpenAPI documentation generation.\"\n}\n

Response 409 Conflict

application/json

{\n    \"detail\": \"string\"\n}\n
\u26a0\ufe0f This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.

Schema of the response body
{\n    \"properties\": {\n        \"detail\": {\n            \"type\": \"string\",\n            \"title\": \"Detail\"\n        }\n    },\n    \"type\": \"object\",\n    \"required\": [\n        \"detail\"\n    ],\n    \"title\": \"ErrorDetail\",\n    \"description\": \"Pydantic model for errors raised detail.\\n\\nType for return value for errors raised in API endpoints.\\nUseful for OpenAPI documentation generation.\"\n}\n

Response 422 Unprocessable Entity

application/json

{\n    \"detail\": [\n        {\n            \"loc\": [\n                null\n            ],\n            \"msg\": \"string\",\n            \"type\": \"string\"\n        }\n    ]\n}\n
\u26a0\ufe0f This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.

Schema of the response body
{\n    \"properties\": {\n        \"detail\": {\n            \"items\": {\n                \"$ref\": \"#/components/schemas/ValidationError\"\n            },\n            \"type\": \"array\",\n            \"title\": \"Detail\"\n        }\n    },\n    \"type\": \"object\",\n    \"title\": \"HTTPValidationError\"\n}\n
"},{"location":"features/api/#get_xapistatements_1","title":"GET /xAPI/statements","text":"

Get

Description

Read a single xAPI Statement or multiple xAPI Statements.

LRS Specification: https://github.com/adlnet/xAPI-Spec/blob/1.0.3/xAPI-Communication.md#213-get-statements

Input parameters

Parameter In Type Default Nullable Description HTTPBasic header string N/A No Basic authentication HTTPBasic header string N/A No Basic authentication activity query string No Filter, only return Statements for which the Object of the Statement is an Activity with the specified id agent query string No Filter, only return Statements for which the specified Agent or Group is the Actor or Object of the Statement ascending query boolean False No If \"true\", return results in ascending order of stored time attachments query boolean False No **Not implemented** If \"true\", the LRS uses the multipart response format and includes all attachments as described previously. If \"false\", the LRS sends the prescribed response with Content-Type application/json and does not send attachment data. format query string exact No **Not implemented** If \"ids\", only include minimum information necessary in Agent, Activity, Verb and Group Objects to identify them. For Anonymous Groups this means including the minimum information needed to identify each member. If \"exact\", return Agent, Activity, Verb and Group Objects populated exactly as they were when the Statement was received. An LRS requesting Statements for the purpose of importing them would use a format of \"exact\" in order to maintain Statement Immutability. If \"canonical\", return Activity Objects and Verbs populated with the canonical definition of the Activity Objects and Display of the Verbs as determined by the LRS, after applying the language filtering process defined below, and return the original Agent and Group Objects as in \"exact\" mode. limit query integer 100 No Maximum number of Statements to return. 0 indicates return the maximum the server will allow mine query boolean False No If \"true\", return only the results for which the authority matches the \"agent\" associated to the user that is making the query. pit_id query string No Point-in-time ID to ensure consistency of search requests through multiple pages.NB: for internal use, not part of the LRS specification. registration query string No **Not implemented** Filter, only return Statements matching the specified registration id related_activities query boolean False No **Not implemented** Apply the Activity filter broadly. Include Statements for which the Object, any of the context Activities, or any of those properties in a contained SubStatement match the Activity parameter, instead of that parameter's normal behaviour related_agents query boolean False No **Not implemented** Apply the Agent filter broadly. Include Statements for which the Actor, Object, Authority, Instructor, Team, or any of these properties in a contained SubStatement match the Agent parameter, instead of that parameter's normal behaviour. search_after query string No Sorting data to allow pagination through large number of search results. NB: for internal use, not part of the LRS specification. since query string No Only Statements stored since the specified Timestamp (exclusive) are returned statementId query string No Id of Statement to fetch until query string No Only Statements stored at or before the specified Timestamp are returned verb query string No Filter, only return Statements matching the specified Verb id voidedStatementId query string No **Not implemented** Id of voided Statement to fetch

Response 200 OK

application/json Schema of the response body
{\n    \"type\": \"object\",\n    \"title\": \"Response Get Xapi Statements Get\"\n}\n

Response 422 Unprocessable Entity

application/json

{\n    \"detail\": [\n        {\n            \"loc\": [\n                null\n            ],\n            \"msg\": \"string\",\n            \"type\": \"string\"\n        }\n    ]\n}\n
\u26a0\ufe0f This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.

Schema of the response body
{\n    \"properties\": {\n        \"detail\": {\n            \"items\": {\n                \"$ref\": \"#/components/schemas/ValidationError\"\n            },\n            \"type\": \"array\",\n            \"title\": \"Detail\"\n        }\n    },\n    \"type\": \"object\",\n    \"title\": \"HTTPValidationError\"\n}\n
"},{"location":"features/api/#put_xapistatements_1","title":"PUT /xAPI/statements","text":"

Put

Description

Store a single statement as a single member of a set.

LRS Specification: https://github.com/adlnet/xAPI-Spec/blob/1.0.3/xAPI-Communication.md#211-put-statements

Input parameters

Parameter In Type Default Nullable Description HTTPBasic header string N/A No Basic authentication HTTPBasic header string N/A No Basic authentication statementId query string No

Request body

application/json

{\n    \"actor\": null,\n    \"id\": \"43871fb4-8c97-4d2e-bb4d-c0589b2d5f68\",\n    \"object\": {\n        \"id\": \"string\"\n    },\n    \"verb\": {\n        \"id\": \"string\"\n    }\n}\n
\u26a0\ufe0f This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.

Schema of the request body
{\n    \"properties\": {\n        \"actor\": {\n            \"anyOf\": [\n                {\n                    \"$ref\": \"#/components/schemas/BaseXapiAgentWithMbox\"\n                },\n                {\n                    \"$ref\": \"#/components/schemas/BaseXapiAgentWithMboxSha1Sum\"\n                },\n                {\n                    \"$ref\": \"#/components/schemas/BaseXapiAgentWithOpenId\"\n                },\n                {\n                    \"$ref\": \"#/components/schemas/BaseXapiAgentWithAccount\"\n                },\n                {\n                    \"$ref\": \"#/components/schemas/BaseXapiAnonymousGroup\"\n                },\n                {\n                    \"$ref\": \"#/components/schemas/BaseXapiIdentifiedGroupWithMbox\"\n                },\n                {\n                    \"$ref\": \"#/components/schemas/BaseXapiIdentifiedGroupWithMboxSha1Sum\"\n                },\n                {\n                    \"$ref\": \"#/components/schemas/BaseXapiIdentifiedGroupWithOpenId\"\n                },\n                {\n                    \"$ref\": \"#/components/schemas/BaseXapiIdentifiedGroupWithAccount\"\n                }\n            ],\n            \"title\": \"Actor\"\n        },\n        \"id\": {\n            \"type\": \"string\",\n            \"format\": \"uuid\",\n            \"title\": \"Id\"\n        },\n        \"object\": {\n            \"$ref\": \"#/components/schemas/LaxObjectField\"\n        },\n        \"verb\": {\n            \"$ref\": \"#/components/schemas/LaxVerbField\"\n        }\n    },\n    \"type\": \"object\",\n    \"required\": [\n        \"actor\",\n        \"object\",\n        \"verb\"\n    ],\n    \"title\": \"LaxStatement\",\n    \"description\": \"Pydantic model for lax statement.\\n\\nIt accepts without validating all fields beyond the bare minimum required to\\nqualify an object as an XAPI statement.\"\n}\n

Response 204 No Content

Response 400 Bad Request

application/json

{\n    \"detail\": \"string\"\n}\n
\u26a0\ufe0f This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.

Schema of the response body
{\n    \"properties\": {\n        \"detail\": {\n            \"type\": \"string\",\n            \"title\": \"Detail\"\n        }\n    },\n    \"type\": \"object\",\n    \"required\": [\n        \"detail\"\n    ],\n    \"title\": \"ErrorDetail\",\n    \"description\": \"Pydantic model for errors raised detail.\\n\\nType for return value for errors raised in API endpoints.\\nUseful for OpenAPI documentation generation.\"\n}\n

Response 409 Conflict

application/json

{\n    \"detail\": \"string\"\n}\n
\u26a0\ufe0f This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.

Schema of the response body
{\n    \"properties\": {\n        \"detail\": {\n            \"type\": \"string\",\n            \"title\": \"Detail\"\n        }\n    },\n    \"type\": \"object\",\n    \"required\": [\n        \"detail\"\n    ],\n    \"title\": \"ErrorDetail\",\n    \"description\": \"Pydantic model for errors raised detail.\\n\\nType for return value for errors raised in API endpoints.\\nUseful for OpenAPI documentation generation.\"\n}\n

Response 422 Unprocessable Entity

application/json

{\n    \"detail\": [\n        {\n            \"loc\": [\n                null\n            ],\n            \"msg\": \"string\",\n            \"type\": \"string\"\n        }\n    ]\n}\n
\u26a0\ufe0f This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.

Schema of the response body
{\n    \"properties\": {\n        \"detail\": {\n            \"items\": {\n                \"$ref\": \"#/components/schemas/ValidationError\"\n            },\n            \"type\": \"array\",\n            \"title\": \"Detail\"\n        }\n    },\n    \"type\": \"object\",\n    \"title\": \"HTTPValidationError\"\n}\n
"},{"location":"features/api/#post_xapistatements_1","title":"POST /xAPI/statements","text":"

Post

Description

Store a set of statements (or a single statement as a single member of a set).

NB: at this time, using POST to make a GET request is not supported. LRS Specification: https://github.com/adlnet/xAPI-Spec/blob/1.0.3/xAPI-Communication.md#212-post-statements

Input parameters

Parameter In Type Default Nullable Description HTTPBasic header string N/A No Basic authentication HTTPBasic header string N/A No Basic authentication

Request body

application/json Schema of the request body
{\n    \"anyOf\": [\n        {\n            \"$ref\": \"#/components/schemas/LaxStatement\"\n        },\n        {\n            \"items\": {\n                \"$ref\": \"#/components/schemas/LaxStatement\"\n            },\n            \"type\": \"array\"\n        }\n    ],\n    \"title\": \"Statements\"\n}\n

Response 200 OK

application/json

[\n    null\n]\n
\u26a0\ufe0f This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.

Schema of the response body
{\n    \"items\": {},\n    \"type\": \"array\",\n    \"title\": \"Response Post Xapi Statements Post\"\n}\n

Response 400 Bad Request

application/json

{\n    \"detail\": \"string\"\n}\n
\u26a0\ufe0f This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.

Schema of the response body
{\n    \"properties\": {\n        \"detail\": {\n            \"type\": \"string\",\n            \"title\": \"Detail\"\n        }\n    },\n    \"type\": \"object\",\n    \"required\": [\n        \"detail\"\n    ],\n    \"title\": \"ErrorDetail\",\n    \"description\": \"Pydantic model for errors raised detail.\\n\\nType for return value for errors raised in API endpoints.\\nUseful for OpenAPI documentation generation.\"\n}\n

Response 409 Conflict

application/json

{\n    \"detail\": \"string\"\n}\n
\u26a0\ufe0f This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.

Schema of the response body
{\n    \"properties\": {\n        \"detail\": {\n            \"type\": \"string\",\n            \"title\": \"Detail\"\n        }\n    },\n    \"type\": \"object\",\n    \"required\": [\n        \"detail\"\n    ],\n    \"title\": \"ErrorDetail\",\n    \"description\": \"Pydantic model for errors raised detail.\\n\\nType for return value for errors raised in API endpoints.\\nUseful for OpenAPI documentation generation.\"\n}\n

Response 422 Unprocessable Entity

application/json

{\n    \"detail\": [\n        {\n            \"loc\": [\n                null\n            ],\n            \"msg\": \"string\",\n            \"type\": \"string\"\n        }\n    ]\n}\n
\u26a0\ufe0f This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.

Schema of the response body
{\n    \"properties\": {\n        \"detail\": {\n            \"items\": {\n                \"$ref\": \"#/components/schemas/ValidationError\"\n            },\n            \"type\": \"array\",\n            \"title\": \"Detail\"\n        }\n    },\n    \"type\": \"object\",\n    \"title\": \"HTTPValidationError\"\n}\n
"},{"location":"features/api/#get_lbheartbeat","title":"GET /lbheartbeat","text":"

Lbheartbeat

Description

Load balancer heartbeat.

Return a 200 when the server is running.

Response 200 OK

application/json Schema of the response body"},{"location":"features/api/#get_heartbeat","title":"GET /heartbeat","text":"

Heartbeat

Description

Application heartbeat.

Return a 200 if all checks are successful.

Response 200 OK

application/json Schema of the response body"},{"location":"features/api/#get_whoami","title":"GET /whoami","text":"

Whoami

Description

Return the current user\u2019s username along with their scopes.

Input parameters

Parameter In Type Default Nullable Description HTTPBasic header string N/A No Basic authentication

Response 200 OK

application/json Schema of the response body
{\n    \"type\": \"object\",\n    \"title\": \"Response Whoami Whoami Get\"\n}\n
"},{"location":"features/api/#schemas","title":"Schemas","text":""},{"location":"features/api/#basexapiaccount","title":"BaseXapiAccount","text":"Name Type homePage string name string"},{"location":"features/api/#basexapiagentwithaccount","title":"BaseXapiAgentWithAccount","text":"Name Type account BaseXapiAccount name string objectType string"},{"location":"features/api/#basexapiagentwithmbox","title":"BaseXapiAgentWithMbox","text":"Name Type mbox string name string objectType string"},{"location":"features/api/#basexapiagentwithmboxsha1sum","title":"BaseXapiAgentWithMboxSha1Sum","text":"Name Type mbox_sha1sum string name string objectType string"},{"location":"features/api/#basexapiagentwithopenid","title":"BaseXapiAgentWithOpenId","text":"Name Type name string objectType string openid string(uri)"},{"location":"features/api/#basexapianonymousgroup","title":"BaseXapiAnonymousGroup","text":"Name Type member Array<> name string objectType string"},{"location":"features/api/#basexapiidentifiedgroupwithaccount","title":"BaseXapiIdentifiedGroupWithAccount","text":"Name Type account BaseXapiAccount member Array<> name string objectType string"},{"location":"features/api/#basexapiidentifiedgroupwithmbox","title":"BaseXapiIdentifiedGroupWithMbox","text":"Name Type mbox string member Array<> name string objectType string"},{"location":"features/api/#basexapiidentifiedgroupwithmboxsha1sum","title":"BaseXapiIdentifiedGroupWithMboxSha1Sum","text":"Name Type mbox_sha1sum string member Array<> name string objectType string"},{"location":"features/api/#basexapiidentifiedgroupwithopenid","title":"BaseXapiIdentifiedGroupWithOpenId","text":"Name Type member Array<> name string objectType string openid string(uri)"},{"location":"features/api/#errordetail","title":"ErrorDetail","text":"Name Type detail string"},{"location":"features/api/#httpvalidationerror","title":"HTTPValidationError","text":"Name Type detail Array<ValidationError>"},{"location":"features/api/#laxobjectfield","title":"LaxObjectField","text":"Name Type id string(uri)"},{"location":"features/api/#laxstatement","title":"LaxStatement","text":"Name Type actor id string(uuid) object LaxObjectField verb LaxVerbField"},{"location":"features/api/#laxverbfield","title":"LaxVerbField","text":"Name Type id string(uri)"},{"location":"features/api/#validationerror","title":"ValidationError","text":"Name Type loc Array<> msg string type string"},{"location":"features/api/#security_schemes","title":"Security schemes","text":"Name Type Scheme Description HTTPBasic http basic"},{"location":"features/backends/","title":"Backends for data storage","text":"

Ralph supports various backends that can be read from or written to (learning events or arbitrary data). Implemented backends are listed below along with their configuration parameters. If your favourite data storage method is missing, feel free to submit your implementation or get in touch!

"},{"location":"features/backends/#key_concepts","title":"Key concepts","text":"

Each backend has its own parameter requirements. These parameters can be set as command line options or environment variables; the latter is the recommended solution for sensitive data such as service credentials. For example, the os_username (OpenStack user name) parameter of the OpenStack Swift backend can be set as a command line option using swift as the option prefix (and replacing underscores in its name with dashes):

ralph list --backend swift --swift-os-username johndoe # [...] more options\n

Alternatively, this parameter can be set as an environment variable (in upper case, prefixed by the program name, e.g. RALPH_):

export RALPH_BACKENDS__DATA__SWIFT__OS_USERNAME=\"johndoe\"\nralph list --backend swift # [...] more options\n

The general patterns for backend parameters are:

  • --{{ backend_name }}-{{ parameter | underscore_to_dash }} for command options, and,
  • RALPH_BACKENDS__DATA__{{ backend_name | uppercase }}__{{ parameter | uppercase }} for environment variables.
"},{"location":"features/backends/#elasticsearch","title":"Elasticsearch","text":"

The Elasticsearch backend is mostly used for indexing purposes (as a data lake), but it can also be used to fetch indexed data from it.

Elasticsearch data backend default configuration.

Attributes:

Name Type Description ALLOW_YELLOW_STATUS bool

Whether to consider Elasticsearch yellow health status to be ok.

CLIENT_OPTIONS dict

A dictionary of valid options for the Elasticsearch class initialization.

DEFAULT_INDEX str

The default index to use for querying Elasticsearch.

HOSTS str or tuple

The comma-separated list of Elasticsearch nodes to connect to.

LOCALE_ENCODING str

The encoding used for reading/writing documents.

POINT_IN_TIME_KEEP_ALIVE str

The duration for which Elasticsearch should keep a point in time alive.

READ_CHUNK_SIZE int

The default chunk size for reading batches of documents.

REFRESH_AFTER_WRITE str or bool

Whether the Elasticsearch index should be refreshed after the write operation.

WRITE_CHUNK_SIZE int

The default chunk size for writing batches of documents.

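Following the environment variable naming pattern described in the key concepts section above, a minimal Elasticsearch configuration could look like the following (host and index values are illustrative):

export RALPH_BACKENDS__DATA__ES__HOSTS=\"http://localhost:9200\"\nexport RALPH_BACKENDS__DATA__ES__DEFAULT_INDEX=\"statements\"\n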
"},{"location":"features/backends/#mongodb","title":"MongoDB","text":"

The MongoDB backend is mostly used for indexing purposes (as a data lake), but it can also be used to fetch collections of documents from it.

MongoDB data backend default configuration.

Attributes:

Name Type Description CONNECTION_URI str

The MongoDB connection URI.

DEFAULT_DATABASE str

The MongoDB database to connect to.

DEFAULT_COLLECTION str

The MongoDB database collection to get objects from.

CLIENT_OPTIONS MongoClientOptions

A dictionary of MongoDB client options.

LOCALE_ENCODING str

The locale encoding to use when none is provided.

READ_CHUNK_SIZE int

The default chunk size for reading batches of documents.

WRITE_CHUNK_SIZE int

The default chunk size for writing batches of documents.

"},{"location":"features/backends/#clickhouse","title":"ClickHouse","text":"

The ClickHouse backend can be used as a data lake and to fetch collections of documents from it.

ClickHouse data backend default configuration.

Attributes:

Name Type Description HOST str

ClickHouse server host to connect to.

PORT int

ClickHouse server port to connect to.

DATABASE str

ClickHouse database to connect to.

EVENT_TABLE_NAME str

Table where events live.

USERNAME str

ClickHouse username to connect as (optional).

PASSWORD str

Password for the given ClickHouse username (optional).

CLIENT_OPTIONS ClickHouseClientOptions

A dictionary of valid options for the ClickHouse client connection.

LOCALE_ENCODING str

The locale encoding to use when none is provided.

READ_CHUNK_SIZE int

The default chunk size for reading.

WRITE_CHUNK_SIZE int

The default chunk size for writing.

The ClickHouse client options supported in Ralph can be found in these locations:

  • Python driver specific
  • General ClickHouse client settings
"},{"location":"features/backends/#ovh_-_log_data_platform_ldp","title":"OVH - Log Data Platform (LDP)","text":"

LDP is a nice service built by OVH on top of Graylog to follow, analyse and store your logs. Learning events (aka tracking logs) can be stored in GELF format using this backend.

Read-only backend

For now the LDP backend is read-only as we consider that it is mostly used to collect primary logs and not as a Ralph target. Feel free to get in touch to prove us wrong, or better: submit your proposal for the write method implementation.

To access OVH\u2019s LDP API, you need to register Ralph as an authorized application and generate an application key, an application secret and a consumer key.

While filling the registration form available at: eu.api.ovh.com/createToken/, be sure to give an appropriate validity time span to your token and allow only GET requests on the /dbaas/logs/* path.

OVH LDP (Log Data Platform) data backend default configuration.

Attributes:

Name Type Description APPLICATION_KEY str

The OVH API application key (AK).

APPLICATION_SECRET str

The OVH API application secret (AS).

CONSUMER_KEY str

The OVH API consumer key (CK).

DEFAULT_STREAM_ID str

The default stream identifier to query.

ENDPOINT str

The OVH API endpoint.

READ_CHUNK_SIZE str

The default chunk size for reading archives.

REQUEST_TIMEOUT int

HTTP request timeout in seconds.

SERVICE_NAME str

The default LDP account name.

For more information about OVH\u2019s API client parameters, please refer to the project\u2019s documentation: github.com/ovh/python-ovh.

"},{"location":"features/backends/#openstack_swift","title":"OpenStack Swift","text":"

Swift is the OpenStack object storage service. This storage backend is fully supported (read and write operations) to stream and store log archives.

Parameters correspond to a standard authentication using OpenStack Keystone service and configuration to work with the target container.

Swift data backend default configuration.

Attributes:

Name Type Description AUTH_URL str

The authentication URL.

USERNAME str

The name of the openstack swift user.

PASSWORD str

The password of the openstack swift user.

IDENTITY_API_VERSION str

The keystone API version to authenticate to.

TENANT_ID str

The identifier of the tenant of the container.

TENANT_NAME str

The name of the tenant of the container.

PROJECT_DOMAIN_NAME str

The project domain name.

REGION_NAME str

The region where the container is.

OBJECT_STORAGE_URL str

The default storage URL.

USER_DOMAIN_NAME str

The user domain name.

DEFAULT_CONTAINER str

The default target container.

LOCALE_ENCODING str

The encoding used for reading/writing documents.

READ_CHUNK_SIZE str

The default chunk size for reading objects.

WRITE_CHUNK_SIZE str

The default chunk size for writing objects.

"},{"location":"features/backends/#amazon_s3","title":"Amazon S3","text":"

S3 is the Amazon Simple Storage Service. This storage backend is fully supported (read and write operations) to stream and store log archives.

Parameters correspond to a standard authentication with AWS CLI and configuration to work with the target bucket.

S3 data backend default configuration.

Attributes:

Name Type Description ACCESS_KEY_ID str

The access key id for the S3 account.

SECRET_ACCESS_KEY str

The secret key for the S3 account.

SESSION_TOKEN str

The session token for the S3 account.

ENDPOINT_URL str

The endpoint URL of the S3.

DEFAULT_REGION str

The default region used in instantiating the client.

DEFAULT_BUCKET_NAME str

The default bucket name targeted.

LOCALE_ENCODING str

The encoding used for writing dictionaries to objects.

READ_CHUNK_SIZE str

The default chunk size for reading objects.

WRITE_CHUNK_SIZE str

The default chunk size for writing objects.

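As with the Swift example in the key concepts section, S3 parameters can be provided as environment variables following the documented naming pattern (the values below are placeholders):

export RALPH_BACKENDS__DATA__S3__ACCESS_KEY_ID=\"xxx\"\nexport RALPH_BACKENDS__DATA__S3__SECRET_ACCESS_KEY=\"xxx\"\nexport RALPH_BACKENDS__DATA__S3__DEFAULT_BUCKET_NAME=\"statements\"\n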
"},{"location":"features/backends/#file_system","title":"File system","text":"

The file system backend is a dummy template that can be used to develop your own backend. It is a \u201cdummy\u201d backend as it is not intended for practical use (UNIX ls and cat would be more practical).

The only required parameter is the path we want to list or stream content from.

FileSystem data backend default configuration.

Attributes:

Name Type Description DEFAULT_DIRECTORY_PATH str or Path

The default target directory path where to perform list, read and write operations.

DEFAULT_QUERY_STRING str

The default query string to match files for the read operation.

LOCALE_ENCODING str

The encoding used for writing dictionaries to files.

READ_CHUNK_SIZE int

The default chunk size for reading files.

WRITE_CHUNK_SIZE int

The default chunk size for writing files.

"},{"location":"features/backends/#learning_record_store_lrs","title":"Learning Record Store (LRS)","text":"

The LRS backend is used to store and retrieve xAPI statements from various systems that follow the xAPI specification (such as our own Ralph LRS, which can be run from this package). LRS systems are mostly used in e-learning infrastructures.

LRS data backend default configuration.

Attributes:

Name Type Description BASE_URL AnyHttpUrl

LRS server URL.

USERNAME str

Basic auth username for LRS authentication.

PASSWORD str

Basic auth password for LRS authentication.

HEADERS dict

Headers defined for the LRS server connection.

LOCALE_ENCODING str

The encoding used for reading statements.

READ_CHUNK_SIZE int

The default chunk size for reading statements.

STATUS_ENDPOINT str

Endpoint used to check server status.

STATEMENTS_ENDPOINT str

Default endpoint for LRS statements resource.

WRITE_CHUNK_SIZE int

The default chunk size for writing statements.

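As a rough library-usage sketch, and by analogy with the Elasticsearch example from the upgrade guide, writing statements to a remote LRS could look like the snippet below. The LRSDataBackendSettings class name and its import path are assumptions, the BASE_URL, USERNAME and PASSWORD fields mirror the attributes documented above, and all values are illustrative.

from ralph.backends.data.lrs import LRSDataBackend, LRSDataBackendSettings\n\n# Configure the backend (assumed settings class; values are illustrative).\nsettings = LRSDataBackendSettings(\n    BASE_URL=\"http://localhost:8100\",\n    USERNAME=\"ralph\",\n    PASSWORD=\"secret\",\n)\nbackend = LRSDataBackend(settings)\n\n# Write a minimal statement to the LRS statements endpoint.\nbackend.write(\n    [\n        {\n            \"actor\": {\"mbox\": \"mailto:johndoe@example.com\"},\n            \"verb\": {\"id\": \"http://adlnet.gov/expapi/verbs/completed\"},\n            \"object\": {\"id\": \"http://example.com/video/001-introduction\"},\n        }\n    ]\n)\n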
"},{"location":"features/backends/#websocket","title":"WebSocket","text":"

The WebSocket backend is read-only and can be used to get real-time events.

If you use OVH\u2019s Logs Data Platform (LDP), you can retrieve a WebSocket URI to test your data stream by following instructions from the official documentation.

Websocket data backend default configuration.

Attributes:

Name Type Description CLIENT_OPTIONS dict

A dictionary of valid options for the websocket client connection. See WSClientOptions.

URI str

The URI to connect to.

Client options for websockets.connection.

For more details, see the websockets.connection documentation

Attributes:

Name Type Description close_timeout float

Timeout for closing the connection in seconds.

compression str

Per-message compression (deflate) is activated by default. Setting it to None disables compression.

max_size int

Maximum size of incoming messages in bytes. Setting it to None disables the limit.

max_queue int

Maximum number of incoming messages in receive buffer. Setting it to None disables the limit.

open_timeout float

Timeout for opening the connection in seconds. Setting it to None disables the timeout.

origin str

Value of the Origin header, for servers that require it.

ping_interval float

Delay between keepalive pings in seconds. Setting it to None disables keepalive pings.

ping_timeout float

Timeout for keepalive pings in seconds. Setting it to None disables timeouts.

read_limit int

High-water mark of read buffer in bytes.

user_agent_header str

Value of the User-Agent request header. It defaults to \u201cPython/x.y.z websockets/X.Y\u201d. Setting it to None removes the header.

write_limit int

High-water mark of write buffer in bytes.

"},{"location":"features/models/","title":"Learning statement models","text":"

The learning statement models validation and conversion tools in Ralph empower you to work with an LRS and ensure the quality of xAPI statements. These features not only enhance the integrity of your learning data but also facilitate integration and compliance with industry standards.

This section provides insights into the supported models, their conversion, and validation.

"},{"location":"features/models/#supported_statements","title":"Supported statements","text":"

Learning statement models encompass a wide array of xAPI and OpenEdx statement types, ensuring comprehensive support for your e-learning data.

  1. xAPI statements models:

    • LMS
    • Video
    • Virtual classroom
  2. OpenEdx statements models:

    • Enrollment
    • Navigational
    • Open Response Assessment
    • Peer instruction
    • Problem interaction
    • Textbook interaction
    • Video interaction
"},{"location":"features/models/#statements_validation","title":"Statements validation","text":"

In learning analytics, the validation of statements takes on significant importance. These statements, originating from diverse sources, systems or applications, must align with specific standards, of which xAPI is the best known. The validation process is essential to ensure that these statements meet the required standards, facilitating data quality and reliability.

Ralph allows you to automate the validation process in your production stack. OpenEdx related events and xAPI statements are supported.

Warning

For now, validation is only effective for the learning statement models supported by Ralph. Regarding xAPI statements, an issue is open to extend validation to any xAPI statement.

Check out tutorials to test the validation feature:

  • validate with Ralph as a CLI
  • validate with Ralph as a library
"},{"location":"features/models/#statements_conversion","title":"Statements conversion","text":"

Ralph currently supports conversion from OpenEdx learning events to xAPI statements. Here are the currently available conversion sets:

  • edx.course.enrollment.activated -> registered to a course
  • edx.course.enrollment.deactivated -> unregistered to a course
  • load_video/edx.video.loaded -> initialized a video
  • play_video/edx.video.played -> played a video
  • pause_video/edx.video.paused -> paused a video
  • stop_video/edx.video.stopped -> terminated a video
  • seek_video/edx.video.position.changed -> seeked in a video

Check out tutorials to test the conversion feature:

  • convert with Ralph as a CLI
  • convert with Ralph as a library
"},{"location":"tutorials/cli/","title":"How to use Ralph as a CLI ?","text":"

WIP.

"},{"location":"tutorials/cli/#prerequisites","title":"Prerequisites","text":"
  • Ralph should be properly installed to be used as a CLI. Follow the Installation section for more information
  • [Recommended] To easily manipulate JSON streams, please install jq on your machine
"},{"location":"tutorials/cli/#validate_command","title":"validate command","text":"

In this tutorial, we\u2019ll walk you through the process of using the validate command to check the validity of xAPI statements.

"},{"location":"tutorials/cli/#with_an_invalid_xapi_statement","title":"With an invalid xAPI statement","text":"

First, let\u2019s test the validate command with a dummy JSON string.

  • Create in the terminal a dummy statement as follows:
invalid_statement='{\"foo\": \"invalid xapi\"}'\n
  • Run validation on this statement with this command:
echo \"$invalid_statement\" | ralph validate -f xapi \n
  • You should observe the following output from the terminal:
INFO     ralph.cli Validating xapi events (ignore_errors=False | fail-on-unknown=False)\nERROR    ralph.models.validator No matching pydantic model found for input event\nINFO     ralph.models.validator Total events: 1, Invalid events: 1\n
"},{"location":"tutorials/cli/#with_a_valid_xapi_statement","title":"With a valid xAPI statement","text":"

Now, let\u2019s test the validate command with a valid xAPI statement.

This tutorial uses a completed video xAPI statement.

Info

According to the specification, to be valid, an xAPI statement should contain at least the three following fields:

  • an actor (with a correct IFI),
  • a verb (with an id property),
  • an object (with an id property).
  • Create in the terminal a valid xAPI statement as follows:
valid_statement='{\"actor\": {\"mbox\": \"mailto:johndoe@example.com\", \"name\": \"John Doe\"}, \"verb\": {\"id\": \"http://adlnet.gov/expapi/verbs/completed\"}, \"object\": {\"id\": \"http://example.com/video/001-introduction\"}, \"timestamp\": \"2023-10-31T15:30:00Z\"}'\n
  • Run validation on this statement with this command:
echo \"$valid_statement\" | ralph validate -f xapi\n
  • You should observe the following output from the terminal:
INFO     ralph.cli Validating xapi events (ignore_errors=False | fail-on-unknown=False)\nINFO     ralph.models.validator Total events: 1, Invalid events: 1\n
"},{"location":"tutorials/cli/#convert_command","title":"convert command","text":"

In this tutorial, you\u2019ll learn how to convert OpenEdx events into xAPI statements with Ralph.

Note

Please note that this feature is currently only supported for a set of OpenEdx events. When converting Edx events to xAPI statements, always refer to the list of supported event types to ensure accurate and successful conversion.

For this example, let\u2019s choose the page_close OpenEdx event that is converted into a terminated a page xAPI statement.

  • Create a page_close OpenEdx event in your terminal as follows:
edx_statements={\"username\": \"\", \"ip\": \"0.0.0.0\", \"agent\": \"0\", \"host\": \"0\", \"referer\": \"\", \"accept_language\": \"0\", \"context\": {\"course_id\": \"\", \"course_user_tags\": null, \"module\": null, \"org_id\": \"0\", \"path\": \".\", \"user_id\": null}, \"time\": \"2000-01-01T00:00:00\", \"page\": \"http://A.ac/\", \"event_source\": \"browser\", \"session\": \"\", \"event\": \"{}\", \"event_type\": \"page_close\", \"name\": \"page_close\"}\n
  • Convert this statement into a terminated a page statement with this command:
echo \"$edx_statements\" | \\ \nralph convert \\\n    --platform-url \"http://lms-example.com\" \\\n    --uuid-namespace \"ee241f8b-174f-5bdb-bae9-c09de5fe017f\" \\\n    --from edx \\\n    --to xapi | \\\n    jq\n
  • You should observe the following output from the terminal:
INFO     ralph.cli Converting edx events to xapi format (ignore_errors=False | fail-on-unknown=False)\nINFO     ralph.models.converter Total events: 1, Invalid events: 0\n{\n  \"id\": \"8670c7d4-5485-52bd-b10a-a8ae27a51501\",\n  \"actor\": {\n    \"account\": {\n      \"homePage\": \"http://lms-example.com\",\n      \"name\": \"anonymous\"\n    }\n  },\n  \"verb\": {\n    \"id\": \"http://adlnet.gov/expapi/verbs/terminated\"\n  },\n  \"object\": {\n    \"id\": \"http://A.ac/\",\n    \"definition\": {\n      \"type\": \"http://activitystrea.ms/schema/1.0/page\"\n    }\n  },\n  \"timestamp\": \"2000-01-01T00:00:00\",\n  \"version\": \"1.0.0\"\n}\n

\ud83c\udf89 Congratulations! You have just converted an event generated by the OpenEdx LMS into a standardised xAPI statement!

Store locally converted statements

To store the converted statements locally on your machine, send the output of the convert command to a JSON file as follows:

echo \"$edx_statements\" | \\ \nralph convert \\\n    --platform-url \"http://lms-example.com\" \\\n    --uuid-namespace \"ee241f8b-174f-5bdb-bae9-c09de5fe017f\" \\\n    --from edx \\\n    --to xapi \\\n    > converted_event.json\n

"},{"location":"tutorials/development_guide/","title":"Development guide","text":"

Welcome to our developer contribution guidelines!

You should know that we would be glad to help you contribute to Ralph! Here\u2019s our Discord to contact us easily.

"},{"location":"tutorials/development_guide/#preparation","title":"Preparation","text":"

Prerequisites

Ralph\u2019s development environment is containerized with Docker for consistency. Before diving in, ensure you have the following installed:

  • Docker Engine
  • Docker Compose
  • make

Info

In this tutorial, and more generally in other tutorials, we tend to use the Elasticsearch backend. Note that you can do the same with any other LRS backend implemented in Ralph.

To start playing with ralph, you should first bootstrap using:

make bootstrap\n

When bootstrapping the project for the first time, the env.dist template file is copied to the .env file. You may want to edit the generated .env file to set up available backend parameters that will be injected into the running container as environment variables to configure Ralph (see backends documentation):

# Elasticsearch backend\nRALPH_BACKENDS__LRS__ES__HOSTS=http://elasticsearch:9200\nRALPH_BACKENDS__LRS__ES__INDEX=statements\nRALPH_BACKENDS__LRS__ES__TEST_HOSTS=http://elasticsearch:9200\nRALPH_BACKENDS__LRS__ES__TEST_INDEX=test-index\n\n# [...]\n

Default configuration in .env file

Defaults are provided for some environment variables that you can use by uncommenting them.

"},{"location":"tutorials/development_guide/#backends","title":"Backends","text":"

Virtual memory for Elasticsearch

In order to run the Elasticsearch backend locally on GNU/Linux operating systems, ensure that your virtual memory limits are not too low and increase them if needed by typing this command from your terminal (as root or using sudo):

sysctl -w vm.max_map_count=262144

Reference: https://www.elastic.co/guide/en/elasticsearch/reference/master/vm-max-map-count.html
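
Note that this setting does not survive a reboot. If you want it to persist, one common approach is to drop it in a sysctl configuration file (a sketch for distributions that read /etc/sysctl.d/; the file name is arbitrary):

echo \"vm.max_map_count=262144\" | sudo tee /etc/sysctl.d/99-elasticsearch.conf\nsudo sysctl --system\n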

Disk space for Elasticsearch

Ensure that you have at least 10% of available disk space on your machine to run Elasticsearch.
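
You can check the remaining space on your root partition (or on whichever partition holds your Docker data) with:

df -h /\n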

Once configured, start the database container using the following command, substituting [BACKEND] by the backend name (e.g. es for Elasticsearch):

make run-[BACKEND]\n

You can also start other services with the following commands:

make run-es\nmake run-swift\nmake run-mongo\nmake run-clickhouse\n# Start all backends\nmake run-all\n

Now that you have started the elasticsearch and swift backends, it\u2019s time to play with them using the Ralph CLI:

We can store a JSON file in the Swift backend:

echo '{\"id\": 1, \"foo\": \"bar\"}' | \\\n    ./bin/ralph write -b swift -t foo.json\n

We can check that we have created a new JSON file in the Swift backend:

bin/ralph list -b swift\n>>> foo.json\n

Let\u2019s read the content of the JSON file and index it in Elasticsearch

bin/ralph read -b swift -t foo.json | \\\n    bin/ralph write -b es\n

We can now check that we have properly indexed the JSON file in Elasticsearch

bin/ralph read -b es\n>>> {\"id\": 1, \"foo\": \"bar\"}\n

"},{"location":"tutorials/development_guide/#wip_lrs","title":"[WIP] LRS","text":""},{"location":"tutorials/development_guide/#tray","title":"Tray","text":"

Ralph is distributed along with its tray (a deployable package for Kubernetes clusters using Arnold). If you intend to work on this tray, please refer to Arnold\u2019s documentation first.

Prerequisites

  • Kubectl (>v.1.23.5): This CLI is used to communicate with the running Kubernetes instance you will use.
  • k3d (>v.5.0.0): This tool is used to set up and run a lightweight Kubernetes cluster, in order to have a local environment (it is required to complete quickstart instructions below to avoid depending on an existing Kubernetes cluster).
  • curl is required by Arnold\u2019s CLI.
  • gnupg to encrypt Ansible vaults passwords and collaborate with your team.
"},{"location":"tutorials/development_guide/#create_a_local_k3d_cluster","title":"Create a local k3d cluster","text":"

To create (or run) a local kubernetes cluster, we use k3d. The cluster\u2019s bootstrapping should be run via:

make k3d-cluster\n

Running a k3d cluster locally requires ports 80 and 443 of your machine to be available, so that the ingresses created for your project respond properly. If one or both ports are already used by another service running on your machine, the make k3d-cluster command may fail.
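
You can check whether another service is already listening on these ports before creating the cluster (a quick check using ss, available on most GNU/Linux distributions; no output means both ports are free):

sudo ss -ltnp | grep -E ':(80|443)\\s'\n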

You can check that your cluster is running using the k3d cluster command:

k3d cluster list\n

You should expect the following output:

NAME     SERVERS   AGENTS   LOADBALANCER\nralph    1/1       0/0      true\n

As you can see, we are running a single node cluster called ralph.

"},{"location":"tutorials/development_guide/#bootstrap_an_arnold_project","title":"Bootstrap an Arnold project","text":"

Once your Kubernetes cluster is running, you need to create a standard Arnold project describing applications and environments you need to deploy:

make arnold-bootstrap\n

Once bootstrapped, Arnold should have created a group_vars directory.

Run the following command to discover the directory tree.

tree group_vars\n

The output should be as follows:

group_vars\n\u251c\u2500\u2500 common\n\u2514\u2500\u2500 customer\n    \u2514\u2500\u2500 ralph\n        \u251c\u2500\u2500 development\n        \u2502\u00a0\u00a0 \u251c\u2500\u2500 main.yml\n        \u2502\u00a0\u00a0 \u2514\u2500\u2500 secrets\n        \u2502\u00a0\u00a0     \u251c\u2500\u2500 databases.vault.yml\n        \u2502\u00a0\u00a0     \u251c\u2500\u2500 elasticsearch.vault.yml\n        \u2502\u00a0\u00a0     \u2514\u2500\u2500 ralph.vault.yml\n        \u2514\u2500\u2500 main.yml\n\n5 directories, 5 files\n

To create the LRS credentials file, you need to provide a list of accounts allowed to request the LRS in Ralph\u2019s vault:

# Setup your kubernetes environment\nsource .k3d-cluster.env.sh\n\n# Decrypt the vault\nbin/arnold -d -c ralph -e development -- vault -a ralph decrypt\n

Edit the vault file to add a new account for the foo user with the bar password and a relevant scope:

# group_vars/customer/ralph/development/secrets/ralph.vault.yml\n#\n# [...]\n#\n# LRS\nLRS_AUTH:\n  - username: \"foo\"\n    hash: \"$2b$12$lCggI749U6TrzK7Qyr7xGe1KVSAXdPjtkMew.BD6lzIk//T5YSb72\"\n    scopes:\n      - \"all\"\n

The password hash has been generated using bcrypt as explained in the LRS user guide.
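
If you need to generate a hash for a new vault entry, one option is to reuse the ralph auth command shipped with the Ralph CLI (a sketch, assuming Ralph is installed locally and that, without --write-to-disk, the command outputs the credentials entry so you can copy its hash value):

ralph auth \\\n    --username foo \\\n    --password bar \\\n    --scope all \\\n    --agent-ifi-mbox mailto:foo@example.com\n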

And finally (re-)encrypt Ralph\u2019s vault:

bin/arnold -d -c ralph -e development -- vault -a ralph encrypt\n

You are now ready to create the related Kubernetes Secret while initializing the Arnold project in the next step.

"},{"location":"tutorials/development_guide/#prepare_working_namespace","title":"Prepare working namespace","text":"

You are now ready to create required Kubernetes objects to start working on Ralph\u2019s deployment:

make arnold-init\n

At this point an Elasticsearch cluster should be running on your Kubernetes cluster:

kubectl -n development-ralph get -l app=elasticsearch pod\nNAME                                         READY   STATUS      RESTARTS   AGE\nelasticsearch-node-0                         1/1     Running     0          69s\nelasticsearch-node-1                         1/1     Running     0          69s\nelasticsearch-node-2                         1/1     Running     0          69s\nes-index-template-j-221010-09h25m24s-nx5qz   0/1     Completed   0          49s\n

We are now ready to deploy Ralph to Kubernetes!

"},{"location":"tutorials/development_guide/#deploy_code_repeat","title":"Deploy, code, repeat","text":"

To test your local docker image, you need to build it and publish it to the local kubernetes cluster docker registry using the k3d-push Makefile rule:

make k3d-push\n

Note

Each time you modify Ralph\u2019s application or its Docker image, you will need to make this update.

Now that your Docker image is published, it\u2019s time to deploy it!

make arnold-deploy\n

To test this deployment, let\u2019s try to make an authenticated request to the LRS:

curl -sLk \\\n  --user foo:bar \\\n  \"https://$(\\\n      kubectl -n development-ralph \\\n      get \\\n      ingress/ralph-app-current \\\n      -o jsonpath='{.spec.rules[0].host}')/whoami\"\n

Let\u2019s also send some test statements:

gunzip -c data/statements.json.gz | \\\nhead -n 100 | \\\njq -s . | \\\ncurl -sLk \\\n  --user foo:bar \\\n  -X POST \\\n  -H \"Content-Type: application/json\" \\\n  -d @- \\\n  \"https://$(\\\n      kubectl -n development-ralph \\\n      get \\\n      ingress/ralph-app-current \\\n      -o jsonpath='{.spec.rules[0].host}')/xAPI/statements/\"\n

Install jq

This example requires the jq command to serialize the request payload (xAPI statements). When dealing with JSON data, we strongly recommend installing it to manipulate them from the command line.

"},{"location":"tutorials/development_guide/#perform_arnolds_operations","title":"Perform Arnold\u2019s operations","text":"

If you want to run the bin/arnold script to run specific Arnold commands, you must ensure that your environment is properly set and that Arnold runs in development mode (i.e. using the -d flag):

source .k3d-cluster.env.sh\nbin/arnold -d -c ralph -e development -- vault -a ralph view\n
"},{"location":"tutorials/development_guide/#stop_k3d_cluster","title":"Stop k3d cluster","text":"

When you are finished working on the Tray, you can stop the k3d cluster using the k3d-stop helper:

make k3d-stop\n
"},{"location":"tutorials/development_guide/#after_your_development","title":"After your development","text":""},{"location":"tutorials/development_guide/#testing","title":"Testing","text":"

To run tests on your code, either use the test Make target or the bin/pytest script to pass specific arguments to the test runner:

# Run all tests\nmake test\n\n# Run pytest with options\nbin/pytest -x -k mixins\n\n# Run pytest with options and more debugging logs\nbin/pytest tests/api -x -vvv -s --log-level=DEBUG -k mixins\n
"},{"location":"tutorials/development_guide/#linting","title":"Linting","text":"

To lint your code, either use the lint meta target or one of the linting tools we use:

# Run all linters\nmake lint\n\n# Run ruff linter\nmake lint-ruff\n\n# Run ruff linter and resolve fixable errors\nmake lint-ruff-fix\n\n# List available linters\nmake help | grep lint-\n
"},{"location":"tutorials/development_guide/#documentation","title":"Documentation","text":"

In case you need to document your code, use the following targets:

# Build documentation site\nmake docs-build\n\n# Run mkdocs live server for dev docs\nmake docs-serve\n
"},{"location":"tutorials/helm/","title":"Ralph Helm chart","text":"

Ralph LRS is distributed as a Helm chart in the openfuncharts OCI repository on DockerHub.

"},{"location":"tutorials/helm/#setting_environment_values","title":"Setting environment values","text":"

All default values are in the values.yaml file. With Helm, you can extend the values file: there is no need to copy/paste all the default values. You can create an environment values file, e.g. custom-values.yaml, and only set the customizations you need.

All sensitive environment values, needed for Ralph to work, are expected to be in an external Secret Kubernetes object. An example manifest is provided in the ralph-env-secret.yaml file here that you can adapt to fit your needs.

All other non-sensitive environment values, also needed for Ralph to work, are expected to be in an external ConfigMap Kubernetes object. An example manifest is provided in the ralph-env-cm.yaml file here that you can adapt to fit your needs.

"},{"location":"tutorials/helm/#creating_authentication_secret","title":"Creating authentication secret","text":"

Ralph stores users credentials in an external Secret Kubernetes object. An example authentication file auth-demo.json is provided here, that you can take inspiration from. Refer to the LRS guide for creating user credentials.

"},{"location":"tutorials/helm/#reviewing_manifest","title":"Reviewing manifest","text":"

To generate and review the manifest produced by Helm, run the following command from ./src/helm:

helm template oci://registry-1.docker.io/openfuncharts/ralph\n
"},{"location":"tutorials/helm/#installing_the_chart","title":"Installing the chart","text":"

Ralph Helm chart is distributed on DockerHub, and you can install it with:

helm install RELEASE_NAME oci://registry-1.docker.io/openfuncharts/ralph\n

Tips:

  • use --values to pass an env values file to extend and/or replace the default values
  • --set var=value to replace one var/value
  • --dry-run to verify your manifest before deploying
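
These flags can be combined; for instance, to render the chart with your own values file without deploying anything (a sketch reusing the custom-values.yaml file mentioned above):

helm install lrs oci://registry-1.docker.io/openfuncharts/ralph \\\n  --values custom-values.yaml \\\n  --dry-run\n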
"},{"location":"tutorials/helm/#tutorial_deploying_ralph_lrs_on_a_local_cluster","title":"Tutorial: deploying Ralph LRS on a local cluster","text":"

This tutorial aims at deploying Ralph LRS on a local Kubernetes cluster using Helm. In this tutorial, you will learn to:

  • run and configure a small Kubernetes cluster on your machine,
  • deploy a data lake that stores learning records: we choose Elasticsearch,
  • deploy Ralph LRS (Learning Records Store) that receives and sends learning records in xAPI.
"},{"location":"tutorials/helm/#requirements","title":"Requirements","text":"
  • curl, the CLI to make HTTP requests.
  • jq, the JSON data Swiss-Knife.
  • kubectl, the Kubernetes CLI.
  • helm, the package manager for Kubernetes.
  • minikube, a lightweight kubernetes distribution to work locally on the project.
"},{"location":"tutorials/helm/#bootstrapping_a_local_cluster","title":"Bootstrapping a local cluster","text":"

Let\u2019s begin by running a local cluster with Minikube, where we will deploy Ralph.

# Start a local kubernetes cluster\nminikube start\n

We will now create our own Kubernetes namespace to work on:

# This is our namespace\nexport K8S_NAMESPACE=\"learning-analytics\"\n\n# Check your namespace value\necho ${K8S_NAMESPACE}\n\n# Create the namespace\nkubectl create namespace ${K8S_NAMESPACE}\n\n# Activate the namespace\nkubectl config set-context --current --namespace=${K8S_NAMESPACE}\n
"},{"location":"tutorials/helm/#deploying_the_data_lake_elasticsearch","title":"Deploying the data lake: Elasticsearch","text":"

In its recent releases, Elastic recommends deploying its services using Custom Resource Definitions (CRDs) installed via its official Helm chart. We will first install the Elasticsearch (ECK) operator cluster-wide:

# Add elastic official helm charts repository\nhelm repo add elastic https://helm.elastic.co\n\n# Update available charts list\nhelm repo update\n\n# Install the ECK operator\nhelm install elastic-operator elastic/eck-operator -n elastic-system --create-namespace\n

Now that the CRDs are deployed cluster-wide, we can deploy an Elasticsearch cluster. To help you in this task, we provide an example manifest data-lake.yml, which deploys a two-node Elasticsearch \u201ccluster\u201d. Adapt it to match your needs, then apply it with:

kubectl apply -f data-lake.yml\n

Once applied, your Elasticsearch pods should be running. You can check this using the following command:

kubectl get pods -w\n

We expect to see two pods called data-lake-es-default-0 and data-lake-es-default-1.

When our Elasticsearch cluster is up (this can take a few minutes), you may create the Elasticsearch index that will be used to store learning traces (xAPI statements):

# Store elastic user password\nexport ELASTIC_PASSWORD=\"$(kubectl get secret data-lake-es-elastic-user -o jsonpath=\"{.data.elastic}\" | base64 -d)\"\n\n# Execute an index creation request in the elasticsearch container\nkubectl exec data-lake-es-default-0 --container elasticsearch -- \\\n    curl -ks -X PUT \"https://elastic:${ELASTIC_PASSWORD}@localhost:9200/statements?pretty\"\n

Our Elasticsearch cluster is all set. In the next section, we will now deploy Ralph, our LRS.

"},{"location":"tutorials/helm/#deploy_the_lrs_ralph","title":"Deploy the LRS: Ralph","text":"

First and foremost, we should create a Secret object containing the user credentials file. We provide an example authentication file auth-demo.json that you can take inspiration from. We can create a secret object directly from the file with the command:

kubectl create secret generic ralph-auth-secret \\\n    --from-file=auth.json=auth-demo.json\n

Secondly, we should create two objects containing environment values necessary for Ralph:

  • a Secret containing sensitive environment variables such as passwords, tokens etc;
  • a ConfigMap containing all other non-sensitive environment variables.

We provide two example manifests (ralph-env-secret.yaml and ralph-env-cm.yaml) that you can adapt to fit your needs.

For this tutorial, we only need to replace the <PASSWORD> tag in the Secret manifest with the actual password of the elastic user, using the command:

sed -i -e \"s|<PASSWORD>|$ELASTIC_PASSWORD|g\" ralph-env-secret.yaml\n

We can now apply both manifests, to create a ConfigMap and a Secret object in our local cluster:

# Create Secret object\nkubectl apply -f ralph-env-secret.yaml\n\n# Create ConfigMap object\nkubectl apply -f ralph-env-cm.yaml\n

We can now deploy Ralph:

helm install lrs oci://registry-1.docker.io/openfuncharts/ralph \\\n  --values development.yaml\n

One can check if the server is running by opening a network tunnel to the service using the port-forward sub-command:

kubectl port-forward svc/lrs-ralph 8080:8080\n

And then send a request to the server using this tunnel:

curl --user admin:password localhost:8080/whoami\n

We expect a valid JSON response describing the user making the request.

If everything went well, we can send 22k xAPI statements to the LRS using:

gunzip -c ../../data/statements.jsonl.gz | \\\n  sed \"s/@timestamp/timestamp/g\" | \\\n  jq -s . | \\\n  curl -Lk \\\n    --user admin:password \\\n    -X POST \\\n    -H \"Content-Type: application/json\" \\\n    http://localhost:8080/xAPI/statements/ -d @-\n

Congrats \ud83c\udf89

"},{"location":"tutorials/helm/#go_further","title":"Go further","text":"

Now that the LRS is running, we can go further and deploy the dashboard suite Warren. Refer to the tutorial of the Warren Helm chart.

"},{"location":"tutorials/library/","title":"How to use Ralph as a library ?","text":"

WIP.

"},{"location":"tutorials/library/#validate_method","title":"validate method","text":"

WIP.

"},{"location":"tutorials/library/#convert_method","title":"convert method","text":"

WIP.

"},{"location":"tutorials/lrs/","title":"How to use Ralph LRS?","text":"

This tutorial shows you how to run Ralph LRS, step by step.

Warning

Ralph LRS will be executed locally for demonstration purposes. If you want to deploy Ralph LRS on a production server, please refer to the deployment guide.

Ralph LRS is based on FastAPI. In this tutorial, we will run the server manually with Uvicorn, but other alternatives exist (Hypercorn, Daphne).
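
If you prefer running the server without Docker, a minimal sketch (assuming Ralph LRS and its API dependencies are installed in your current Python environment) is to launch Uvicorn directly, mirroring the command used in the Docker Compose files below:

uvicorn ralph.api:app --host 0.0.0.0 --port 8100\n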

Prerequisites

Some tools are required to run the commands of this tutorial. Make sure they are installed first:

  • Ralph package with CLI optional dependencies, e.g. pip install ralph-malph[cli] (check the CLI tutorial)
  • Docker Compose
  • curl or httpie
"},{"location":"tutorials/lrs/backends/","title":"Backends","text":"

Ralph LRS is built to be used with a database instead of writing learning records in a local file.

Ralph LRS supports the following databases:

  • Elasticsearch
  • Mongo
  • ClickHouse

Let\u2019s add the service of your choice to the docker-compose.yml file:

Elasticsearch / Mongo / ClickHouse

docker-compose.yml (Elasticsearch)
version: \"3.9\"\n\nservices:\n  db:\n    image: elasticsearch:8.1.0\n    environment:\n      discovery.type: single-node\n      xpack.security.enabled: \"false\"\n    ports:\n      - \"9200:9200\"\n    mem_limit: 2g\n    ulimits:\n      memlock:\n        soft: -1\n        hard: -1\n    healthcheck:\n      test: curl --fail http://localhost:9200/_cluster/health?wait_for_status=green || exit 1\n      interval: 1s\n      retries: 60\n\n  lrs:\n    image: fundocker/ralph:latest\n    environment:\n      RALPH_APP_DIR: /app/.ralph\n      RALPH_RUNSERVER_BACKEND: es\n      RALPH_BACKENDS__LRS__ES__HOSTS: http://db:9200\n    ports:\n      - \"8100:8100\"\n    command:\n      - \"uvicorn\"\n      - \"ralph.api:app\"\n      - \"--proxy-headers\"\n      - \"--host\"\n      - \"0.0.0.0\"\n      - \"--port\"\n      - \"8100\"\n    volumes:\n      - .ralph:/app/.ralph\n

We can now start the database service and wait for it to be up and healthy:

docker compose up -d --wait db\n

Before using Elasticsearch, we need to create an index, which we call statements for this example:

curl / HTTPie
curl -X PUT http://localhost:9200/statements\n
http PUT :9200/statements\n

docker-compose.yml (Mongo)

version: \"3.9\"\n\nservices:\n  db:\n    image: mongo:5.0.9\n    ports:\n      - \"27017:27017\"\n    healthcheck:\n      test: mongosh --eval 'db.runCommand(\"ping\").ok' localhost:27017/test --quiet\n      interval: 1s\n      retries: 60\n\n  lrs:\n    image: fundocker/ralph:latest\n    environment:\n      RALPH_APP_DIR: /app/.ralph\n      RALPH_RUNSERVER_BACKEND: mongo\n      RALPH_BACKENDS__LRS__MONGO__CONNECTION_URI: mongodb://db:27017\n    ports:\n      - \"8100:8100\"\n    command:\n      - \"uvicorn\"\n      - \"ralph.api:app\"\n      - \"--proxy-headers\"\n      - \"--host\"\n      - \"0.0.0.0\"\n      - \"--port\"\n      - \"8100\"\n    volumes:\n      - .ralph:/app/.ralph\n
We can now start the database service and wait for it to be up and healthy:
docker compose up -d --wait db\n

docker-compose.yml (ClickHouse)

version: \"3.9\"\n\nservices:\n  db:\n    image: clickhouse/clickhouse-server:23.1.1.3077-alpine\n    environment:\n      CLICKHOUSE_DB: xapi\n      CLICKHOUSE_DEFAULT_ACCESS_MANAGEMENT: 1\n    ports:\n      - 8123:8123\n      - 9000:9000\n    # ClickHouse needs to maintain a lot of open files, so they\n    # suggest running the container with increased limits:\n    # https://hub.docker.com/r/clickhouse/clickhouse-server/#!\n    ulimits:\n      nofile:\n        soft: 262144\n        hard: 262144\n    healthcheck:\n      test:  wget --no-verbose --tries=1 --spider http://localhost:8123/ping || exit 1\n      interval: 1s\n      retries: 60\n\n  lrs:\n    image: fundocker/ralph:latest\n    environment:\n      RALPH_APP_DIR: /app/.ralph\n      RALPH_RUNSERVER_BACKEND: clickhouse\n      RALPH_BACKENDS__LRS__CLICKHOUSE__HOST: db\n      RALPH_BACKENDS__LRS__CLICKHOUSE__PORT: 8123\n    ports:\n      - \"8100:8100\"\n    command:\n      - \"uvicorn\"\n      - \"ralph.api:app\"\n      - \"--proxy-headers\"\n      - \"--host\"\n      - \"0.0.0.0\"\n      - \"--port\"\n      - \"8100\"\n    volumes:\n      - .ralph:/app/.ralph\n
We can now start the database service and wait for it to be up and healthy:
docker compose up -d --wait db\n

Before using ClickHouse, we need to create a table in the xapi database, which we call xapi_events_all:

curl / HTTPie
  echo \"CREATE TABLE xapi.xapi_events_all (\n    event_id UUID NOT NULL,\n    emission_time DateTime64(6) NOT NULL,\n    event String NOT NULL\n    )\n    ENGINE MergeTree ORDER BY (emission_time, event_id)\n    PRIMARY KEY (emission_time, event_id)\" | \\\n  curl --data-binary @- \"http://localhost:8123/\"\n
  echo \"CREATE TABLE xapi.xapi_events_all (\n    event_id UUID NOT NULL,\n    emission_time DateTime64(6) NOT NULL,\n    event String NOT NULL\n    )\n    ENGINE MergeTree ORDER BY (emission_time, event_id)\n    PRIMARY KEY (emission_time, event_id)\" | \\\n  http :8123\n

Then we can start Ralph LRS:

docker compose up -d lrs\n

We can finally send some xAPI statements to Ralph LRS:

curl / HTTPie
curl -sL https://github.com/openfun/ralph/raw/master/data/statements.json.gz | \\\ngunzip | \\\nhead -n 100 | \\\njq -s . | \\\ncurl \\\n  --user janedoe:supersecret \\\n  -H \"Content-Type: application/json\" \\\n  -X POST \\\n  -d @- \\\n  \"http://localhost:8100/xAPI/statements\"\n
curl -sL https://github.com/openfun/ralph/raw/master/data/statements.json.gz | \\\ngunzip | \\\nhead -n 100 | \\\njq -s . | \\\nhttp -a janedoe:supersecret POST :8100/xAPI/statements\n

And fetch them back:

curl / HTTPie
curl \\\n  --user janedoe:supersecret \\\n  -X GET \\\n  \"http://localhost:8100/xAPI/statements\"\n
http -a janedoe:supersecret :8100/xAPI/statements\n
"},{"location":"tutorials/lrs/first-steps/","title":"First steps","text":"

Ralph LRS is distributed as a Docker image on DockerHub, following the format: fundocker/ralph:<release version | latest>.

Let\u2019s dive straight in and create a docker-compose.yml file:

docker-compose.yml
version: \"3.9\"\n\nservices:\n\n  lrs:\n    image: fundocker/ralph:latest\n    environment:\n      RALPH_APP_DIR: /app/.ralph\n      RALPH_RUNSERVER_BACKEND: fs\n    ports:\n      - \"8100:8100\"\n    command:\n      - \"uvicorn\"\n      - \"ralph.api:app\"\n      - \"--proxy-headers\"\n      - \"--workers\"\n      - \"1\"\n      - \"--host\"\n      - \"0.0.0.0\"\n      - \"--port\"\n      - \"8100\"\n    volumes:\n      - .ralph:/app/.ralph\n

For now, we are using the fs (File System) backend, meaning that Ralph LRS will store learning records in local files.

First, we need to manually create the .ralph directory alongside the docker-compose.yml file with the command:

mkdir .ralph\n

We can then run Ralph LRS from a terminal with the command:

docker compose up -d lrs\n

Ralph LRS server should be up and running!

We can request the whoami endpoint to check if the user is authenticated. On success, the endpoint returns the username and permission scopes.

curl / HTTPie

curl http://localhost:8100/whoami\n
{\"detail\":\"Invalid authentication credentials\"}% \n

http :8100/whoami\n
HTTP/1.1 401 Unauthorized\ncontent-length: 47\ncontent-type: application/json\ndate: Mon, 06 Nov 2023 15:37:32 GMT\nserver: uvicorn\nwww-authenticate: Basic\n\n{\n    \"detail\": \"Invalid authentication credentials\"\n}\n

If you\u2019ve made it this far, congrats! \ud83c\udf89

You\u2019ve successfully deployed the Ralph LRS and got a response to your request!

Let\u2019s shutdown the Ralph LRS server with the command docker compose down and set up authentication.

"},{"location":"tutorials/lrs/forwarding/","title":"Forwarding to another LRS","text":"

Ralph LRS server can be configured to forward xAPI statements it receives to other LRSs. Statement forwarding enables the Total Learning Architecture and allows systems containing multiple LRS to share data.

To configure statement forwarding, you need to define the RALPH_XAPI_FORWARDINGS variable, either in a .env file in the current directory or directly as an environment variable.

The value of the RALPH_XAPI_FORWARDINGS variable should be a JSON encoded list of dictionaries where each dictionary defines a forwarding configuration and consists of the following key/value pairs:

  • is_active (boolean): specifies whether or not this forwarding configuration should take effect.
  • url (URL): specifies the endpoint URL where forwarded statements should be sent.
  • basic_username (string): specifies the basic auth username.
  • basic_password (string): specifies the basic auth password.
  • max_retries (number): specifies the number of times a failed forwarding request should be retried.
  • timeout (number): specifies the duration in seconds of network inactivity leading to a timeout.

Warning

For a forwarding configuration to be valid it is required that all key/value pairs are defined.

Example of a valid forwarding configuration:

.env
RALPH_XAPI_FORWARDINGS='\n[\n  {\n    \"is_active\": true,\n    \"url\": \"http://lrs1.example.com/xAPI/statements/\",\n    \"basic_username\": \"admin1@example.com\",\n    \"basic_password\": \"PASSWORD1\",\n    \"max_retries\": 1,\n    \"timeout\": 5\n  },\n  {\n    \"is_active\": true,\n    \"url\": \"http://lrs2.example.com/xAPI/statements/\",\n    \"basic_username\": \"admin2@example.com\",\n    \"basic_password\": \"PASSWORD2\",\n    \"max_retries\": 5,\n    \"timeout\": 0.2\n  }\n]\n'\n
"},{"location":"tutorials/lrs/multitenancy/","title":"Multitenancy","text":"

By default, all authenticated users have full read and write access to the server. Ralph LRS implements the Authority mechanism specified by xAPI to restrict this behavior.

"},{"location":"tutorials/lrs/multitenancy/#filtering_results_by_authority_multitenancy","title":"Filtering results by authority (multitenancy)","text":"

In Ralph LRS, all incoming statements are assigned an authority (or ownership) derived from the user that makes the request. You may restrict read access to users\u2019 \u201cown\u201d statements (thus enabling multitenancy) by setting the following environment variable:

.env
RALPH_LRS_RESTRICT_BY_AUTHORITY=True # Default: False\n

Warning

Two accounts with different credentials may share the same authority, meaning they can access the same statements. It is the administrator\u2019s responsibility to ensure that authority is properly assigned.

Info

If not using \u201cscopes\u201d, or for users with limited \u201cscopes\u201d, using this option will make the use of the ?mine=True option implicit when fetching statements.
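
As an illustration, the mine option can also be passed explicitly when fetching statements (a sketch reusing the janedoe credentials from the HTTP Basic Authentication tutorial):

curl --user janedoe:supersecret \"http://localhost:8100/xAPI/statements?mine=True\"\n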

"},{"location":"tutorials/lrs/multitenancy/#scopes","title":"Scopes","text":"

In Ralph, users are assigned scopes which may be used to restrict endpoint access or functionalities. You may enable this option by setting the following environment variable:

.env
RALPH_LRS_RESTRICT_BY_SCOPES=True # Default: False\n

Valid scopes are a slight variation on those proposed by the xAPI specification:

  • statements/write
  • statements/read/mine
  • statements/read
  • state/write
  • state/read
  • define
  • profile/write
  • profile/read
  • all/read
  • all
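
For example, you could create a user that is only allowed to read its own statements by restricting its scopes at creation time (a sketch reusing the ralph auth command from the HTTP Basic Authentication tutorial):

ralph auth \\\n    --write-to-disk \\\n    --username readonly \\\n    --password supersecret \\\n    --scope statements/read/mine \\\n    --agent-ifi-mbox mailto:readonly@example.com\n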
"},{"location":"tutorials/lrs/sentry/","title":"Sentry","text":"

Ralph provides Sentry integration to monitor its LRS server and its CLI. To activate Sentry integration, one should define the following environment variables:

.env
RALPH_SENTRY_DSN={PROTOCOL}://{PUBLIC_KEY}:{SECRET_KEY}@{HOST}{PATH}/{PROJECT_ID}\nRALPH_EXECUTION_ENVIRONMENT=development\n

The Sentry DSN (Data Source Name) can be found in your project settings from the Sentry application. The execution environment should reflect the environment Ralph has been deployed in (e.g. production).

You may also want to monitor the performance of Ralph by configuring the CLI and LRS traces sample rates:

.env
RALPH_SENTRY_CLI_TRACES_SAMPLE_RATE=0.1\nRALPH_SENTRY_LRS_TRACES_SAMPLE_RATE=0.3\n

Sample rate

A sample rate of 1.0 means 100% of transactions are sent to Sentry, while a sample rate of 0.1 means only 10% are sent.

If you want to reduce noisy transactions (e.g. in a Kubernetes cluster), you can disable the ones related to health checks:

.env
RALPH_SENTRY_IGNORE_HEALTH_CHECKS=True\n
"},{"location":"tutorials/lrs/authentication/","title":"Authentication","text":"

The API server supports the following authentication methods:

  • HTTP basic authentication
  • OpenID Connect authentication on top of OAuth2.0

Either one or both can be enabled for Ralph LRS using the environment variable RALPH_RUNSERVER_AUTH_BACKENDS:

RALPH_RUNSERVER_AUTH_BACKENDS=basic,oidc\n
"},{"location":"tutorials/lrs/authentication/basic/","title":"HTTP Basic Authentication","text":"

The default method for securing the Ralph API server is HTTP Basic Authentication. For this, we need to create a user in Ralph LRS.

"},{"location":"tutorials/lrs/authentication/basic/#creating_user_credentials","title":"Creating user credentials","text":"

To create new user credentials, the Ralph CLI provides a dedicated command:

Ralph CLI / Docker Compose
ralph auth \\\n    --write-to-disk \\\n    --username janedoe \\\n    --password supersecret \\\n    --scope statements/write \\\n    --scope statements/read \\\n    --agent-ifi-mbox mailto:janedoe@example.com\n
docker compose run --rm lrs \\\n  ralph auth \\\n    --write-to-disk \\\n    --username janedoe \\\n    --password supersecret \\\n    --scope statements/write \\\n    --scope statements/read \\\n    --agent-ifi-mbox mailto:janedoe@example.com\n

Tip

You can either display the help with ralph auth --help or check the CLI tutorial here

This command updates your credentials file with the new janedoe user. Here is the file that has been created by the ralph auth command:

auth.json
[                                                                               \n  {                                                                             \n    \"agent\": {                                                                  \n      \"mbox\": \"mailto:janedoe@example.com\",                                     \n      \"objectType\": \"Agent\",                                                    \n      \"name\": null                                                              \n    },                                                                          \n    \"scopes\": [                                                                 \n      \"statements/write\",                                                           \n      \"statements/read\"\n    ],                                                                          \n    \"hash\": \"$2b$12$eQmMF/7ALdNuksL4lkI.NuTibNjKLd0fw2Xe.FZqD0mNkgnnjLLPa\",     \n    \"username\": \"janedoe\"                                                       \n  }                                                                             \n] \n

Alternatively, the credentials file can also be created manually. It is expected to be a valid JSON file. Its location is specified by the RALPH_AUTH_FILE configuration value.

Tip

By default, Ralph LRS looks for the auth.json file in the application directory (see click documentation for details).

The expected format is a list of entries (JSON objects) each containing:

  • the username
  • the user\u2019s hashed+salted password
  • the scopes they can access
  • an agent object used to represent the user in the LRS.

Info

The agent is constrained by LRS specifications, and must use one of four valid Inverse Functional Identifiers.

"},{"location":"tutorials/lrs/authentication/basic/#making_a_get_request","title":"Making a GET request","text":"

After changing the docker-compose.yml file as follows: docker-compose.yml

version: \"3.9\"\n\nservices:\n\n  lrs:\n    image: fundocker/ralph:latest\n    environment:\n      RALPH_APP_DIR: /app/.ralph\n      RALPH_RUNSERVER_BACKEND: fs\n      RALPH_RUNSERVER_AUTH_BACKENDS: basic\n    ports:\n      - \"8100:8100\"\n    command:\n      - \"uvicorn\"\n      - \"ralph.api:app\"\n      - \"--proxy-headers\"\n      - \"--workers\"\n      - \"1\"\n      - \"--host\"\n      - \"0.0.0.0\"\n      - \"--port\"\n      - \"8100\"\n    volumes:\n      - .ralph:/app/.ralph\n
and running the Ralph LRS with:

docker compose up -d lrs\n

we can request the whoami endpoint again, but this time sending our username and password through Basic Auth:

curl / HTTPie

curl --user janedoe:supersecret http://localhost:8100/whoami\n
{\"agent\":{\"mbox\":\"mailto:janedoe@example.com\",\"objectType\":\"Agent\",\"name\":null},\"scopes\":[\"statements/read\",\"statements/write\"]}\n

http -a janedoe:supersecret :8100/whoami \n
HTTP/1.1 200 OK\ncontent-length: 107\ncontent-type: application/json\ndate: Tue, 07 Nov 2023 17:32:31 GMT\nserver: uvicorn\n\n{\n    \"agent\": {\n        \"mbox\": \"mailto:janedoe@example.com\",\n        \"name\": null,\n        \"objectType\": \"Agent\"\n    },\n    \"scopes\": [\n        \"statements/read\",\n        \"statements/write\"\n    ]\n}\n

Congrats! \ud83c\udf89 You have been successfully authenticated!

HTTP Basic auth caching

HTTP Basic auth implementation uses the secure and standard bcrypt algorithm to hash/salt passwords before storing them. This implementation comes with a performance cost.

To speed up requests, credentials are stored in an LRU cache with a \u201cTime To Live\u201d.

To configure this cache, you can define the following environment variables:

  • the maximum number of entries in the cache. Select a value greater than the maximum number of individual user credentials, for better performance. Defaults to 100.

RALPH_AUTH_CACHE_MAX_SIZE=100\n
  • the \u201cTime To Live\u201d of the cache entries in seconds. Defaults to 3600s.

RALPH_AUTH_CACHE_TTL=3600\n
"},{"location":"tutorials/lrs/authentication/oidc/","title":"OpenID Connect authentication","text":"

Ralph LRS also supports OpenID Connect on top of OAuth 2.0 for authentication and authorization.

To enable OpenID Connect authentication mode, we should change the RALPH_RUNSERVER_AUTH_BACKENDS environment variable to oidc and we should define the RALPH_RUNSERVER_AUTH_OIDC_ISSUER_URI environment variable with the identity provider\u2019s Issuer Identifier URI as follows:

RALPH_RUNSERVER_AUTH_BACKENDS=oidc\nRALPH_RUNSERVER_AUTH_OIDC_ISSUER_URI=http://{provider_host}:{provider_port}/auth/realms/{realm_name}\n

This address must be accessible to the LRS on startup as it will perform OpenID Connect Discovery to retrieve public keys and other information about the OpenID Connect environment.

It is also strongly recommended to set the optional RALPH_RUNSERVER_AUTH_OIDC_AUDIENCE environment variable to the origin address of Ralph LRS itself (e.g. \u201chttp://localhost:8100\u201d) to enable verification that a given token was issued specifically for that Ralph LRS.
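
For the local setup used in this tutorial, this would translate to the following entry (an illustrative value; adapt it to the public origin of your own deployment):

RALPH_RUNSERVER_AUTH_OIDC_AUDIENCE=http://localhost:8100\n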

"},{"location":"tutorials/lrs/authentication/oidc/#identity_providers","title":"Identity Providers","text":"

OpenID Connect support is currently developed and tested against Keycloak but may work with other identity providers that implement the specification.

"},{"location":"tutorials/lrs/authentication/oidc/#an_example_with_keycloak","title":"An example with Keycloak","text":"

The Learning analytics playground repository contains a Docker Compose file and configuration for a demonstration instance of Keycloak with a ralph client.

First, we should stop the Ralph LRS server (if it\u2019s still running):

docker compose down\n

We can clone the learning-analytics-playground repository:

git clone git@github.com:openfun/learning-analytics-playground\n

And then bootstrap the project:

cd learning-analytics-playground/\nmake bootstrap\n

After a couple of minutes, the playground containers should be up and running.

Create another docker compose file, let\u2019s call it docker-compose.oidc.yml, with the following content: docker-compose.oidc.yml

version: \"3.9\"\n\nservices:\n\n  lrs:\n    image: fundocker/ralph:latest\n    environment:\n      RALPH_APP_DIR: /app/.ralph\n      RALPH_RUNSERVER_AUTH_BACKENDS: oidc\n      RALPH_RUNSERVER_AUTH_OIDC_ISSUER_URI: http://learning-analytics-playground-keycloak-1:8080/auth/realms/fun-mooc\n      RALPH_RUNSERVER_BACKEND: fs\n    ports:\n      - \"8100:8100\"\n    command:\n      - \"uvicorn\"\n      - \"ralph.api:app\"\n      - \"--proxy-headers\"\n      - \"--workers\"\n      - \"1\"\n      - \"--host\"\n      - \"0.0.0.0\"\n      - \"--port\"\n      - \"8100\"\n    volumes:\n      - .ralph:/app/.ralph\n    networks:\n      - ralph\n\nnetworks:\n  ralph:\n    external: true\n

Again, we need to create the .ralph directory:

mkdir .ralph\n

Then we can start the lrs service:

docker compose -f docker-compose.oidc.yml up -d lrs\n

Now that both Keycloak and Ralph LRS server are up and running, we should be able to get the access token from Keycloak with the command:

curl / HTTPie
curl -X POST \\\n  -d \"grant_type=password\" \\\n  -d \"client_id=ralph\" \\\n  -d \"client_secret=bcef3562-730d-4575-9e39-63e185f99bca\" \\\n  -d \"username=ralph_admin\" \\\n  -d \"password=funfunfun\" \\\n  http://localhost:8080/auth/realms/fun-mooc/protocol/openid-connect/token\n
{\"access_token\":\"<access token content>\",\"expires_in\":300,\"refresh_expires_in\":1800,\"refresh_token\":\"<refresh token content>\",\"token_type\":\"Bearer\",\"not-before-policy\":0,\"session_state\":\"0889b3a5-d742-45fb-98b3-20e967960e74\",\"scope\":\"email profile\"} \n
http -f POST \\\n  :8080/auth/realms/fun-mooc/protocol/openid-connect/token \\\n  grant_type=password \\\n  client_id=ralph \\\n  client_secret=bcef3562-730d-4575-9e39-63e185f99bca \\\n  username=ralph_admin \\\n  password=funfunfun\n
HTTP/1.1 200 OK\n...\n{\n    \"access_token\": \"<access token content>\",\n    \"expires_in\": 300,\n    \"not-before-policy\": 0,\n    \"refresh_expires_in\": 1800,\n    \"refresh_token\": \"<refresh token content>\",\n    \"scope\": \"email profile\",\n    \"session_state\": \"1e826fa2-b4b3-42bf-837f-158fe9d5e1e5\",\n    \"token_type\": \"Bearer\"\n}\n

With this access token, we can now make a request to the Ralph LRS server:

curl / HTTPie
curl -H 'Authorization: Bearer <access token content>' \\\nhttp://localhost:8100/whoami\n
{\"agent\":{\"openid\":\"http://localhost:8080/auth/realms/fun-mooc/b6e85bd0-ce6e-4b24-9f0e-6e18d8744e54\"},\"scopes\":[\"email\",\"profile\"]}\n
http -A bearer -a <access token content> :8100/whoami\n
HTTP/1.1 200 OK\n...\n{\n    \"agent\": {\n        \"openid\": \"http://localhost:8080/auth/realms/fun-mooc/b6e85bd0-ce6e-4b24-9f0e-6e18d8744e54\"\n    },\n    \"scopes\": [\n        \"email\",\n        \"profile\"\n    ]\n}\n

Congrats, you\u2019ve managed to authenticate using OpenID Connect! \ud83c\udf89

"}]} \ No newline at end of file