Chapter 1

Hermes

This section contains a brief presentation of Hermes and its features, then defines its key concepts and details how its main processes work.

Boris Lechner 2025-05-05 e022507882f1c7d53ec4dc72b08922261dfdd25f

Subsections of Hermes

Presentation

What is Hermes

Hermes is a Change Data Capture (CDC) tool that propagates data from any source(s) to any target(s).

Simplified process flow

Hermes-server will regularly poll data from data source(s) and generate a diff between the fresh dataset and the previous one stored in cache. Each difference will be converted into an Event message, and sent to a message bus (e.g. Kafka, RabbitMQ…).

The clients will receive and process each Event message to propagate data on their respective targets.

flowchart LR
  subgraph Datasources
    direction LR
    RefOracle
    RefPostgreSQL
    RefLDAP
    RefEtc
  end
  subgraph Hermes-server
    direction LR
    hermes-server
  end
  subgraph External_dependencies["External dependencies"]
    direction LR
    MessageBus
  end
  subgraph Hermes-clients
    direction LR
    hermes-client-ldap
    hermes-client-aspypsrp-ad
    hermes-client-webservice
    hermes-client-etc["..."]
  end
  subgraph Targets
    direction LR
    LDAP
    ActiveDirectory
    webservice
    etc
  end
  RefOracle[(Oracle)]-->|Data|hermes-server
  RefPostgreSQL[(PostgreSQL)]-->|Data|hermes-server
  RefLDAP[(LDAP)]-->|Data|hermes-server
  RefEtc[(...)]-->|Data|hermes-server
  hermes-server-->|Events|MessageBus((MessageBus))
  MessageBus-->|Events|hermes-client-ldap
  MessageBus-->|Events|hermes-client-aspypsrp-ad
  MessageBus-->|Events|hermes-client-webservice
  MessageBus-->|Events|hermes-client-etc
  hermes-client-ldap-->|Update|LDAP[(LDAP)]
  hermes-client-aspypsrp-ad-->|Update|ActiveDirectory[(Active Directory)]
  hermes-client-webservice-->|Update|webservice[(Web service <i>name</i>)]
  hermes-client-etc-->|Update|etc[("...")]

  classDef external fill:#fafafa,stroke-dasharray: 5 5
  class Datasources,External_dependencies,Targets external
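The poll-and-diff cycle described above can be sketched in a few lines of Python. This is a simplified, hypothetical illustration (entries keyed by primary key, events as plain dicts); the real server also handles ordering, caching and error handling:

```python
def diff_datasets(cache, fresh):
    """Compare the previously cached dataset with a freshly polled one and
    return the resulting CDC events. Entries are keyed by primary key."""
    events = []
    for pkey, attrs in fresh.items():
        if pkey not in cache:
            events.append({"eventtype": "added", "objpkey": pkey, "objattrs": attrs})
        elif attrs != cache[pkey]:
            # only the attributes that actually changed are sent
            changed = {k: v for k, v in attrs.items() if cache[pkey].get(k) != v}
            events.append({"eventtype": "modified", "objpkey": pkey, "objattrs": changed})
    for pkey in cache:
        if pkey not in fresh:
            events.append({"eventtype": "removed", "objpkey": pkey})
    return events

cache = {1: {"login": "alice"}, 2: {"login": "bob"}}
fresh = {1: {"login": "alice2"}, 3: {"login": "carol"}}
events = diff_datasets(cache, fresh)
```

Each event produced here would then be serialized and emitted on the message bus.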

Features

  • Does not require any change to the source data model(s) (e.g. no need to add a last_updated column)
  • Multi-source, with ability to merge or aggregate data, and optionally set merge/aggregation constraints
  • Able to handle several data types, with link (foreign keys) between them, and to enforce integrity constraints
  • Able to transform data with Jinja filters in configuration files: no need to edit some Python code
  • Clean error handling, to avoid synchronization problems, with an optional auto-remediation mechanism for errors
  • Offers a trashbin on clients for removed data
  • Resilient to unavailability and errors on each link (source, message bus, target)
  • Easy to extend by design. All of the following items are implemented as plugins (list of existing plugins):
    • Datasources
    • Attribute filters (data filters)
    • Clients (targets)
    • Messagebus
  • Changes to the datamodel are easy and safe to integrate and propagate, whether on the server or on the clients

Key concepts

Datasource

A source from which the server will collect data. Can be anything that contains data: database, LDAP directory, web service, flat file…

Datasource plugin

A Hermes plugin in charge of collecting data from a specific datasource type and providing it to the server.

Server

The hermes-server application: polls datasources at regular intervals and converts all differences between the fresh data and the previous one into events, which are sent on the message bus by the message bus producer plugin.

Message bus

An external service, like Apache Kafka or RabbitMQ, that collects events from the server and provides them to the clients in the same order in which they were emitted.

Message bus producer plugin

A Hermes plugin run by the server, in charge of emitting events on a specific message bus type.

Message bus consumer plugin

A Hermes plugin run by clients, in charge of consuming events from a specific message bus type.

Client

The hermes-client application: consumes events from the message bus through the message bus consumer plugin, and calls the appropriate methods implemented by the client plugin to propagate data changes to the target.

Trashbin

If configured to do so, the client will not immediately remove data, but will store it in the trashbin for a configured number of days. If the data is added again before this delay expires, the client restores it from the trashbin. Otherwise, once the trashbin retention limit is reached, the data is removed.

Depending on the client plugin implementation, this enables many scenarios, e.g. disabling an account, or keeping it active for a grace period.
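A minimal sketch of the trashbin retention logic, in Python. The constant name and the data layout here are hypothetical illustrations, not the actual configuration keys:

```python
from datetime import datetime, timedelta

TRASHBIN_RETENTION_DAYS = 30  # hypothetical name for the configured delay

def purge_expired(trashbin, now):
    """Drop trashbin entries older than the retention delay and return the
    pkeys that must now be definitively removed from the target."""
    expired = [pk for pk, meta in trashbin.items()
               if now - meta["removed_at"] > timedelta(days=TRASHBIN_RETENTION_DAYS)]
    for pk in expired:
        del trashbin[pk]
    return expired

now = datetime(2025, 5, 5)
trashbin = {
    1: {"removed_at": now - timedelta(days=40)},  # past retention: purged
    2: {"removed_at": now - timedelta(days=5)},   # still restorable
}
expired = purge_expired(trashbin, now)
```

Entries still present in the trashbin after the purge are the ones that can be restored if the data reappears on the datasource.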

Error queue

When an exception is raised while the client plugin is processing an event, the event is stored in an error queue. All subsequent events concerning the same data objects will not be processed, but stored in the error queue until the first one is successfully processed. Events in the error queue are periodically retried.
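The queuing rule can be sketched as follows (simplified, hypothetical Python; the real error queue is also persisted to cache and tracks error messages):

```python
def process_event(event, error_queue, handler):
    """Process an event, or queue it if its object already has a queued
    event, so that per-object ordering is preserved."""
    pkey = event["objpkey"]
    if any(e["objpkey"] == pkey for e in error_queue):
        error_queue.append(event)   # queued without even being tried
        return "queued"
    try:
        handler(event)
        return "processed"
    except Exception:
        error_queue.append(event)   # first failure for this object
        return "failed"

queue = []
def handler(ev):
    if ev["objpkey"] == 42:
        raise ValueError("invalid name")

process_event({"objpkey": 42, "eventtype": "added"}, queue, handler)
r1 = process_event({"objpkey": 42, "eventtype": "modified"}, queue, handler)
r2 = process_event({"objpkey": 7, "eventtype": "added"}, queue, handler)
```

Events for object 42 pile up in the queue, while events for unrelated objects keep flowing normally.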

Auto remediation

Sometimes an event is stored in the error queue due to a data problem (e.g. a group name with a trailing dot will raise an error on Active Directory). If the trailing dot is then removed from the group name on the datasource, the resulting modified event will also be stored in the error queue, and won’t be processed until the previous one succeeds, which cannot happen without a risky and undesirable operation: manually editing the client cache file.

Auto remediation solves this type of problem by merging events concerning the same object in the error queue. It is not enabled by default, as it may break the regular processing order of events.

Client plugin

A Hermes plugin run by a client, in charge of implementing simple event-processing methods to propagate data changes to a specific target type.

Attribute plugin

A Hermes plugin run by the server or a client, exposed as a new Jinja filter, allowing data transformation.

Initsync

A client cannot safely begin processing new events without first having the entire dataset. So the server is able to send a specific event sequence called initsync, containing the server datamodel and the whole dataset. Already-initialized clients will silently ignore it, but uninitialized clients will process it to initialize their target by adding all entries provided by the initsync, and will then process subsequent events normally.
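A sketch of how a client might consume an initsync sequence (hypothetical, simplified; real event payloads carry more fields, such as the dataschema in init-start):

```python
def handle_initsync(events, initialized):
    """Replay an init-start .. init-stop sequence to build the target.
    An already-initialized client silently ignores the whole sequence."""
    if initialized:
        return []
    # during an initsync, entries can only be added
    return [e["objattrs"] for e in events if e["eventtype"] == "added"]

seq = [
    {"eventtype": "init-start"},
    {"eventtype": "added", "objattrs": {"login": "alice"}},
    {"eventtype": "added", "objattrs": {"login": "bob"}},
    {"eventtype": "init-stop"},
]
entries = handle_initsync(seq, initialized=False)
```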

Datamodel

As there are some differences between them, please see server datamodel and client datamodel.

Data type

Also named “object type”. A type of data, with its attribute mapping, to be handled by Hermes.

Primary key

The data type attribute used to distinguish an entry from the others. Its value must obviously be unique.

Server datamodel

Configuration of the data types that the server must handle, with their respective attribute mappings. The remote attribute name is the attribute name used on the datasource.

The server datamodel is built by specifying the following items:

Merge conflict policy

Defines the behavior when the same attribute is set with different values on different datasources.

Merge constraints

Allows declaring constraints to ensure data consistency during data merge, when the server is polling data from multiple datasources.

Foreign keys

Allows declaring foreign keys in a data type, which clients will use to enforce their foreign keys policy. See Foreign keys for details.

Integrity constraints

Allows declaring constraints between several data types to ensure data consistency.

Cache only attributes

Datamodel attributes that will only be stored in cache, but will neither be sent in events nor used to diff against the cache.

Secrets attributes

Datamodel attributes that contain sensitive data, like passwords, and must never be stored in cache nor printed in logs. They will still be sent to clients, unless they are also defined as local attributes.

Note

As those attributes are not cached, they will be seen as added at EACH server polling.

Local attributes

Datamodel attributes that will not be sent in events, cached, or used to diff against the cache, but that may be used in Jinja templates.

Client datamodel

Configuration of the data types that a client must handle, with their attribute mappings. The remote attribute name is the attribute name used in the server datamodel.

Info

If you’re wondering why this mapping is necessary, here is why:

  1. it allows local data transformation via Jinja filters and attribute plugins on the client.
  2. it allows re-using (and sharing) client plugins without requiring any change to your server datamodel or plugin code, simply by changing the client configuration file.

The client datamodel is built by specifying the following items:

Attributes mapping

Also named “attrsmapping”. A mapping (key/value) that links the internal attribute name (as key) with the remote one (as value). The remote value may be a Jinja template to transform data via Jinja filters and attribute plugins.
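A minimal sketch of how such a mapping could be applied, with plain attribute renaming only (Jinja template rendering, which real Hermes performs on the remote values, is deliberately omitted; the attribute names are made up for the example):

```python
# Hypothetical client attrsmapping: internal attribute name -> remote name.
attrsmapping = {
    "cn": "login",
    "mail": "mail",
}

def to_local(remote_entry, attrsmapping):
    """Build the local entry by renaming remote attributes according to the
    mapping; remote attributes absent from the mapping are dropped."""
    return {local: remote_entry[remote]
            for local, remote in attrsmapping.items()
            if remote in remote_entry}

local = to_local({"login": "jdoe", "mail": "jdoe@example.org", "extra": 1},
                 attrsmapping)
```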

How it works

Explanations on how some key components work or are structured.

Subsections of How it works

hermes-server

Explanations on how some key components of hermes-server work or are structured.

Subsections of hermes-server

Workflow

hermes-server

  • 1. loads its local cache
  • 2. checks if its dataschema has changed since the last run, and emits the resulting removed events (if any) and the new dataschema
  • 3. fetches all data required by its datamodel from the datasource(s)
    • 3.1. enforces merge constraints
    • 3.2. merges data
    • 3.3. replaces inconsistencies and merge conflicts with cached values
    • 3.4. enforces integrity constraints
  • 4. generates a diff between its cache and the fetched remote data
  • 5. loops over each diff type: added, modified, removed
    • 5.1. for each diff type, loops over each data type in their declaration order in the datamodel, except for the removed diff type, which uses the reverse declaration order
      • 5.1.1. loops over each diff item of the current data type
        • 5.1.1.1. generates the corresponding event
        • 5.1.1.2. emits the event on the message bus
        • 5.1.1.3. if the event was successfully emitted:
          • 5.1.1.3.1. runs the datamodel commit_one action, if any
          • 5.1.1.3.2. updates the cache to reflect the new value of the item affected by the event
  • 6. once all events have been emitted:
    • 6.1. runs the datamodel commit_all action, if any
    • 6.2. saves the cache on disk
  • 7. waits for updateInterval and restarts from step 3 if the app has not been requested to stop

If any exception is raised in step 2, this step is restarted until it succeeds.

If any exception is raised in steps 3 to 7, the cache is saved on disk, and the server restarts from step 3.
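The ordering rules of step 5 can be sketched as follows (simplified, hypothetical Python). Added and modified events follow the datamodel declaration order; removed events use the reverse order, so that dependent entries are removed before the entries they reference:

```python
def event_emission_order(datamodel_types, diffs):
    """Return (difftype, datatype) pairs in emission order.
    diffs maps difftype -> datatype -> list of diff items."""
    order = []
    for difftype in ("added", "modified", "removed"):
        types = (list(reversed(datamodel_types)) if difftype == "removed"
                 else datamodel_types)
        for dtype in types:
            if diffs.get(difftype, {}).get(dtype):
                order.append((difftype, dtype))
    return order

types = ["Users", "Groups", "GroupsMembers"]
diffs = {"added": {"Users": [1], "GroupsMembers": [2]},
         "removed": {"Users": [3], "GroupsMembers": [4]}}
order = event_emission_order(types, diffs)
```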

Integrity constraints

Hermes-server can handle several data types, with links (foreign keys) between them, and can enforce integrity constraints.

Let’s use a typical Users / Groups / GroupsMember use case to illustrate this.

classDiagram
    direction BT
    GroupsMembers <-- Users
    GroupsMembers <-- Groups
    class Users{
      user_id
      ...
    }
    class Groups{
      group_id
      ...
    }
    class GroupsMembers{
      user_id
      group_id
      integrity() _SELF.user_id in Users_pkeys and _SELF.group_id in Groups_pkeys
    }

In this scenario, entries in GroupsMembers that have a user_id that doesn’t exist in Users, or a group_id that doesn’t exist in Groups, will be silently ignored.
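The constraint above amounts to a filter like this (simplified Python sketch, not the actual Hermes implementation):

```python
def enforce_integrity(groupsmembers, users_pkeys, groups_pkeys):
    """Silently drop GroupsMembers entries whose user_id or group_id does
    not exist in Users or Groups, mirroring the integrity constraint
    _SELF.user_id in Users_pkeys and _SELF.group_id in Groups_pkeys."""
    return [m for m in groupsmembers
            if m["user_id"] in users_pkeys and m["group_id"] in groups_pkeys]

members = [
    {"user_id": 1, "group_id": 10},
    {"user_id": 2, "group_id": 10},   # user 2 unknown: dropped
    {"user_id": 1, "group_id": 99},   # group 99 unknown: dropped
]
kept = enforce_integrity(members, users_pkeys={1}, groups_pkeys={10})
```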

For more details, please see integrity_constraints in hermes-server configuration.

Multi source data aggregation

In a multi-source scenario, Hermes can aggregate entries coming from multiple sources as if they came from one, and optionally enforce aggregation constraints to ensure data consistency.

Let’s take a use case with a university dataset where Hermes should manage user accounts. Employee and student data are stored on two separate data sources. Hermes will be able to aggregate the two datasources into one virtual Users type, but must ensure that primary keys don’t collide.

Here we have two distinct data sources for a same data type.

classDiagram
    direction BT
    Users <|-- Users_employee
    Users <|-- Users_students
    class Users{
      user_id
      login
      mail
      merge_constraints() s.user_id mustNotExist in e.user_id
    }
    class Users_students{
      s.user_id
      s.login
      s.mail
    }
    class Users_employee{
      e.user_id
      e.login
      e.mail
    }

In this scenario, entries in Users_students that have a user_id that exists in Users_employee will be silently ignored.
But entries in Users_employee that have a user_id that exists in Users_students will still be processed.
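The mustNotExist pkey merge constraint amounts to something like this (simplified Python sketch):

```python
def aggregate(employees, students):
    """Aggregate two sources of the same data type into one virtual Users
    set. A student entry whose user_id already exists among employees is
    silently ignored (s.user_id mustNotExist in e.user_id)."""
    merged = {e["user_id"]: e for e in employees}
    for s in students:
        if s["user_id"] not in merged:
            merged[s["user_id"]] = s
    return merged

employees = [{"user_id": 1, "login": "emp1"}]
students = [{"user_id": 1, "login": "stu1"},  # collides: ignored
            {"user_id": 2, "login": "stu2"}]
users = aggregate(employees, students)
```

Note the asymmetry described above: employee entries always win, so the order in which sources are aggregated matters.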

For more details, please see pkey_merge_constraint and merge_constraints in hermes-server configuration.

Multi source data merging

In a multi-source scenario, Hermes can recompose entries coming from multiple sources by merging their data, and optionally set merge constraints to ensure data consistency.

Let’s take a use case where Hermes should manage user accounts. The main data and the wifi profile name are stored on two separate data sources. Hermes will be able to merge the two datasources into one virtual Users type, but must ensure that the primary keys of the second exist in the first.

Here we have two distinct data sources for a same entry.

classDiagram
    direction BT
    Users <|-- Users_main
    Users <|-- Users_auxiliary
    class Users{
      user_id
      login
      mail
      wifi_profile
      merge_constraints() a.user_id mustAlreadyExist in m.user_id
    }
    class Users_auxiliary{
      a.user_id
      a.wifi_profile
    }
    class Users_main{
      m.user_id
      m.login
      m.mail
    }

In this scenario, entries in Users_auxiliary that have a user_id that doesn’t exist in Users_main will be silently ignored.
But entries in Users_main that have a user_id that doesn’t exist in Users_auxiliary will be processed, and therefore the resulting Users entry won’t have a wifi_profile value.
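The mustAlreadyExist merge constraint amounts to something like this (simplified Python sketch):

```python
def merge(main, auxiliary):
    """Merge auxiliary attributes into main entries. An auxiliary entry
    whose user_id is absent from main is silently ignored
    (a.user_id mustAlreadyExist in m.user_id)."""
    users = {m["user_id"]: dict(m) for m in main}
    for a in auxiliary:
        if a["user_id"] in users:
            users[a["user_id"]].update(a)
    return users

main = [{"user_id": 1, "login": "alice", "mail": "a@example.org"}]
aux = [{"user_id": 1, "wifi_profile": "staff"},
       {"user_id": 9, "wifi_profile": "ghost"}]  # unknown pkey: ignored
users = merge(main, aux)
```

A main entry with no auxiliary counterpart simply ends up without a wifi_profile value, as described above.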

For more details, please see pkey_merge_constraint and merge_constraints in hermes-server configuration.

Events emitted

Event categories

An event always belongs to one of those categories:

  • base: standard event, can be of type:

    • dataschema: propagate the new dataschema to clients, after a server datamodel update
    • added: a new entry has been added to specified data type, with specified attributes and values
    • removed: entry of specified pkey has been removed from specified data type
    • modified: entry of specified pkey has been modified. Contains only added, modified, and removed attributes with their new values
  • initsync: indicate that the event is a part of an initsync sequence, can be of type:

    • init-start: beginning of an initsync sequence, also contains the current dataschema
    • added: a new entry has been added to specified data type, with specified attributes and values. As the server sends the contents of its cache to initialize clients, entries can only be added
    • init-stop: end of an initsync sequence

Cache files

_hermes-server.json

Contains state of the server:

  • lastUpdate: datetime.datetime | None

    Datetime of latest update.

  • errors: dict[str, dict[str, dict[str, Any]]]

    Dictionary containing current errors, to be able to notify of any changes.

  • exception: str | None

    String containing latest exception trace.

_dataschema.json

Contains the Dataschema, built from the Datamodel. This cache file allows the server to perform step 2 of the Workflow.

DataType.json

There is one file per data type declared in the Datamodel, containing the data cache of this data type as a list of dicts. Each dict in the list is an entry.

hermes-client

Explanations on how some key components of hermes-client work or are structured.

Subsections of hermes-client

Workflow

hermes-client

  • 1. loads its datamodel from the config file
  • 2. if it exists, loads the previous datamodel from cache
  • 3. notifies about datamodel warnings, if any: remote types and remote attributes present in the datamodel but not in the dataschema
  • 4. if a remote schema exists in cache, loads the error queue from cache
  • 5. if the client has not been initialized yet (no complete initSync sequence has been processed):
    • 5.1. processes the initSync sequence, if a complete one is available on the message bus
    • 5.2. restarts from step 5
  • 6. if the client has already been initialized (a complete initSync sequence has already been processed):
    • 6.1. if it is the first iteration of the loop (step 7 has never been reached):
      • 6.1.1. if the datamodel in config differs from the cached one, processes the datamodel update:
        • 6.1.1.1. generates removed events for all entries of removed data types, processes them, and purges those data types’ cache files
        • 6.1.1.2. generates a diff between the cached data built upon the previous datamodel and the same data converted to the new datamodel, then generates and processes the corresponding events
    • 6.2. if errorQueue_retryInterval has passed since the last attempt, retries processing the events in the error queue
    • 6.3. if trashbin_purgeInterval has passed since the last attempt, retries purging expired objects from the trashbin
    • 6.4. loops over all events available on the message bus, and processes each one by calling its corresponding handler when it exists in the client plugin
  • 7. when at least one event was processed, or if the app was requested to stop:
    • 7.1. saves the cache files of the error queue, app, and data
    • 7.2. calls the special handler onSave when it exists in the client plugin
    • 7.3. notifies of any change in the error queue
  • 8. restarts from step 5 if the app hasn’t been requested to stop

If any exception is raised in step 6.1.1, it is considered a fatal error, is notified, and the client stops.

If any exception is raised in steps 5 to 6, it is notified, its event is added to the error queue, and the client restarts from step 7.

Event processing

As the datamodel on the server differs from that on the client, clients must convert remote events received on the message bus into local events. If the resulting local event is empty (the data type or the attributes changed in the remote event are not set in the client datamodel), the event is ignored.

On a client datamodel update, the client may generate local events that have no corresponding remote event, e.g. to update an attribute value computed with a Jinja template that has just been updated.

flowchart TB
  subgraph Hermes-client
    direction TB
    datamodelUpdate[["a datamodel update"]]
    remoteevent["Remote event"]
    localevent["Local event"]
    eventHandler(["Client plugin event handler"])
  end
  datamodelUpdate-->|generate|localevent
  MessageBus-->|produce|remoteevent
  remoteevent-->|convert to|localevent
  localevent-->|pass to appropriate|eventHandler
  eventHandler-->|process|Target

  classDef external fill:#fafafa,stroke-dasharray: 5 5
  class MessageBus,Target external
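A sketch of the remote-to-local conversion (simplified, hypothetical Python; real Hermes also applies Jinja templates and attribute plugins, and handles the different event types):

```python
def to_local_event(remote_event, attrsmapping):
    """Convert a remote event into a local one via the client attrsmapping.
    Return None when nothing in the event maps to the client datamodel,
    meaning the event is ignored."""
    local_attrs = {local: remote_event["objattrs"][remote]
                   for local, remote in attrsmapping.items()
                   if remote in remote_event["objattrs"]}
    if not local_attrs:
        return None
    return {**remote_event, "objattrs": local_attrs}

mapping = {"cn": "name"}  # hypothetical client mapping
ev = to_local_event({"eventtype": "modified", "objpkey": 42,
                     "objattrs": {"name": "NewName", "desc": "x"}}, mapping)
ignored = to_local_event({"eventtype": "modified", "objpkey": 42,
                          "objattrs": {"desc": "x"}}, mapping)
```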

Foreign keys

Sometimes, objects are linked together by foreign keys. When an error occurs on an object whose primary key refers to that of one or more other “parent” objects, it may be desirable to interrupt the processing of all or part of the events of these parent objects until this first event has been correctly processed. This can be done by adding the events of the parent objects to the error queue instead of trying to process them.

The first thing to do is to declare the foreign keys through hermes-server.datamodel.data-type-name.foreignkeys in hermes-server configuration. The server will do nothing with these foreign keys except propagate them to the clients.

Then, it is necessary to establish which policy to apply to the clients through hermes-client.foreignkeys_policy in each hermes-client configuration. There are three:

  • disabled: no events are queued; the policy is disabled. Probably not relevant in most cases, but could perhaps be useful to someone one day.
  • on_remove_event: only on removed events. Should be enough in most cases.
  • on_every_event: on every event type (added, modified, removed). To ensure perfect consistency no matter what.
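The three policies can be summarized by a decision function like this (simplified, hypothetical sketch of when a parent-object event is redirected to the error queue):

```python
def must_enqueue_parent(event_type, policy, child_has_queued_event):
    """Decide whether a parent-object event goes to the error queue because
    a child object referencing it already has a queued event."""
    if not child_has_queued_event or policy == "disabled":
        return False
    if policy == "on_remove_event":
        return event_type == "removed"
    if policy == "on_every_event":
        return event_type in ("added", "modified", "removed")
    raise ValueError(f"unknown policy {policy!r}")
```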

Auto remediation

Sometimes an event is stored in the error queue due to a data problem (e.g. a group name with a trailing dot will raise an error on Active Directory). If the trailing dot is then removed from the group name on the datasource, the resulting modified event will also be stored in the error queue, and won’t be processed until the previous one succeeds, which cannot happen without a risky and undesirable operation: manually editing the client cache file.

Auto remediation solves this type of problem by merging events concerning the same object in the error queue. It is not enabled by default, as it may break the regular processing order of events.

Example

Let’s take an example with a group created with an invalid name. As its name is invalid, its processing will fail, and the event will be stored in the error queue like this:

flowchart TB
  subgraph errorqueue [Error queue]
    direction TB
    ev1
  end

  ev1["`**event 1**
    &nbsp;
    *eventtype*: added
    *objType*: ADGroup
    *objpkey*: 42
    *objattrs*: {
    &nbsp;&nbsp;grp_pkey: 42
    &nbsp;&nbsp;name: 'InvalidName.'
    &nbsp;&nbsp;desc: 'Demo group'
    }`"]

  classDef leftalign text-align:left
  class ev1 leftalign

As the error has been notified, someone corrects the group name in the datasource. This change results in a corresponding modified event. This modified event will not be processed, but added to the error queue, as its object already has an event there.

  • without auto remediation, until the first event has been successfully processed, the second one will not even be tried. The fix is stuck.
  • with auto remediation, the error queue merges the two events, and on the next processing of the error queue, the updated event is successfully processed.

flowchart TB
  subgraph errorqueuebis [With autoremediation]
    direction TB
    ev1bis
  end

  subgraph errorqueue [Without autoremediation]
    direction TB
    ev1
    ev2
  end

  ev1["`**event 1**
    &nbsp;
    *eventtype*: added
    *objType*: ADGroup
    *objpkey*: 42
    *objattrs*: {
    &nbsp;&nbsp;grp_pkey: 42
    &nbsp;&nbsp;name: 'InvalidName.'
    &nbsp;&nbsp;desc: 'Demo group'
    }`"]

  ev2["`**event 2**
    &nbsp;
    *eventtype*: modified
    *objType*: ADGroup
    *objpkey*: 42
    *objattrs*: {
    &nbsp;&nbsp;modified: {
    &nbsp;&nbsp;&nbsp;&nbsp;name: 'ValidName'
    &nbsp;&nbsp;}
    }`"]

  ev1bis["`**event 1**
    &nbsp;
    *eventtype*: added
    *objType*: ADGroup
    *objpkey*: 42
    *objattrs*: {
    &nbsp;&nbsp;grp_pkey: 42
    &nbsp;&nbsp;name: 'ValidName'
    &nbsp;&nbsp;desc: 'Demo group'
    }`"]

  classDef leftalign text-align:left
  class ev1,ev2,ev1bis leftalign
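The merge shown above amounts to folding the modified event into the queued added event (simplified Python sketch; the real merge logic covers more event-type combinations):

```python
def autoremediate(queue):
    """Merge queued events concerning the same object: a later modified
    event folds its attribute changes into the earlier added event."""
    merged = {}
    for ev in queue:
        pkey = ev["objpkey"]
        if pkey in merged and ev["eventtype"] == "modified":
            merged[pkey]["objattrs"].update(ev["objattrs"]["modified"])
        else:
            merged[pkey] = ev
    return list(merged.values())

queue = [
    {"eventtype": "added", "objpkey": 42,
     "objattrs": {"grp_pkey": 42, "name": "InvalidName.", "desc": "Demo group"}},
    {"eventtype": "modified", "objpkey": 42,
     "objattrs": {"modified": {"name": "ValidName"}}},
]
result = autoremediate(queue)
```

Only the corrected added event remains, matching the "With autoremediation" queue in the diagram.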

Cache files

_hermes-client-name.json

Contains state of the client:

  • queueErrors: dict[str, str]

    Dictionary containing all error messages of objects in error queue, to be able to notify of any changes.

  • datamodelWarnings: dict[str, dict[str, dict[str, Any]]]

    Dictionary containing current datamodel warnings, for notifications.

  • exception: str | None

    String containing latest exception trace.

  • initstartoffset: Any | None

    Contains the offset of the first message of initSync sequence on message bus.

  • initstopoffset: Any | None

    Contains the offset of the last message of initSync sequence on message bus.

  • nextoffset: Any | None

    Contains the offset of the next message to process on message bus.

_hermesconfig.json

Cache of the previous config, used to be able to rebuild the previous datamodel and to render the Jinja templates with attribute plugins.

_dataschema.json

Cache of latest Dataschema, received from hermes-server.

_errorqueue.json

Cache of error queue.

RemoteDataType.json

One file per remote data type, containing all remote entries, as they had been successfully processed.

When the error queue is empty, it must have the same content as RemoteDataType_complete__.json

RemoteDataType_complete__.json

One file per remote data type, containing all remote entries, as they should be without error.

When the error queue is empty, it must have the same content as RemoteDataType.json

trashbin_RemoteDataType.json

Only if trashbin is enabled. One file per remote data type, containing all remote entries that are in trashbin, as they had been successfully processed.

When the error queue is empty, it must have the same content as trashbin_RemoteDataType_complete__.json

trashbin_RemoteDataType_complete__.json

Only if trashbin is enabled. One file per remote data type, containing all remote entries that are in trashbin, as they should be without error.

When the error queue is empty, it must have the same content as trashbin_RemoteDataType.json

__LocalDataType.json

One file per local data type, containing all local entries, as they had been successfully processed.

When the error queue is empty, it must have the same content as __LocalDataType_complete__.json

__LocalDataType_complete__.json

One file per local data type, containing all local entries, as they should be without error.

When the error queue is empty, it must have the same content as __LocalDataType.json

__trashbin_LocalDataType.json

Only if trashbin is enabled. One file per local data type, containing all local entries that are in trashbin, as they had been successfully processed.

When the error queue is empty, it must have the same content as __trashbin_LocalDataType_complete__.json

__trashbin_LocalDataType_complete__.json

Only if trashbin is enabled. One file per local data type, containing all local entries that are in trashbin, as they should be without error.

When the error queue is empty, it must have the same content as __trashbin_LocalDataType.json

Chapter 2

Setup

This section contains everything you need to install, configure, and run Hermes.

Subsections of Setup

Getting started

  1. Identify your prerequisites:

    • the datasource(s) to use and their type, and the data you want to capture on each. Once done, check whether the corresponding datasource plugin(s) exist
    • choose (and maybe install) the message bus you’ll use
    • identify which hermes-client plugin(s) you’ll need
  2. Install Hermes by following the Installation section

  3. Configure hermes-server by following the relevant configuration sections

  4. Run hermes-server by following the Run section, and once it has successfully done its first data polling, generate an initsync sequence using the hermes-server CLI, as explained in the Run section

  5. Configure a first hermes-client by following the relevant configuration sections

  6. Run the appropriate hermes-client by following the Run section

Installation

Requirements

  • Python 3.10, 3.11, 3.12 or 3.13 with pip
  • Run on Linux (required for CLI that uses Unix socket)
  • A message bus server, e.g. Apache Kafka - recommended, but an SQLite implementation is provided
  • direnv - only if you wish to use the reset_venv helper script

Install guide

  1. Download and extract the hermes latest release

  2. (Optional) If you want to minimize the install footprint, you may remove the tests directory, the tox.ini file, and any unnecessary plugins by deleting their directories in:

    • plugins/attributes/
    • plugins/clients/
    • plugins/datasources/
    • plugins/messagebus_consumers/
    • plugins/messagebus_producers/

    If your installation is for running hermes-server only (without clients), you may remove the following directories:

    • clients
    • plugins/clients/
    • plugins/messagebus_consumers

    If your installation is for running one or more hermes-client only (without server), you may remove the following directories:

    • server
    • plugins/datasources
    • plugins/messagebus_producers
  3. Set up a venv and install all requirements

    • Automatically with the provided script ./reset_venv

    • Manually. You can generate and install python requirements with the following commands:

      cat "requirements.txt" "plugins/"*/*"/requirements.txt" > all_requirements.txt 2>/dev/null
      
      pip3 install -r all_requirements.txt

Configuration

A Hermes application will look for its YAML configuration file in the current working directory.

The configuration file must be named APPNAME-config.yml, e.g.:

  • hermes-server-config.yml for server and server-cli
  • hermes-client-usersgroups_null-config.yml for client-usersgroups_null and client-usersgroups_null-cli

Settings are separated in several YAML sections:

For security reasons, it may be desirable to allow certain users to use the CLI without granting them read access to the configuration file. To do this, simply create an optional CLI configuration file named APPNAME-cli-config.yml, e.g.:

  • hermes-server-cli-config.yml for server-cli
  • hermes-client-usersgroups_null-cli-config.yml for client-usersgroups_null-cli

This file should only contain the following directives:

hermes:
  cli_socket:
    path: /path/to/cli/sockfile.sock

Subsections of Configuration

hermes

Settings shared by server and all clients.

Main subsections:


hermes.umask

  • Description: Set up the default umask for each file or directory created by the application: cache dirs, cache files, and log files. Warning: as it is an octal value, it must be prefixed with a 0.
  • Mandatory: No
  • Type: integer
  • Valid values: 0000 - 0777
  • Default value: 0027
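Since the umask is subtractive, the default 0027 turns a requested 0666 file mode into 0640 and a requested 0777 directory mode into 0750 (no access at all for other users). A quick check in Python:

```python
# Illustration of how a umask restricts the mode of newly created paths.
def effective_mode(requested: int, umask: int) -> int:
    """Return the mode actually applied once the umask bits are cleared."""
    return requested & ~umask

file_mode = effective_mode(0o666, 0o027)  # typical request for files
dir_mode = effective_mode(0o777, 0o027)   # typical request for directories
```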

hermes.cache

Mandatory section to define cache settings.

hermes.cache.dirpath

  • Description: Path of an existing directory where cache files will be stored.
  • Mandatory: Yes
  • Type: string

hermes.cache.enable_compression

  • Description: If true, all cache files will be gzipped.
  • Mandatory: No
  • Type: boolean
  • Default value: true

hermes.cache.backup_count

  • Description: At each save, if the file content has changed, Hermes will keep up to backup_count previous versions of the cache content.
  • Mandatory: No
  • Type: integer
  • Valid values: 0 - 999999
  • Default value: 1

hermes.cli_socket

Enable CLI socket that will allow communication between app and its CLI.

hermes.cli_socket.path

  • Description: Path of the CLI socket file to create/use. When left unspecified, the CLI will be disabled.
  • Mandatory: No
  • Type: string

hermes.cli_socket.owner

  • Description: Name of the user that should own the socket file when created, as would be fed to chown.
    When left unspecified, it uses the current hermes-server running user.
  • Mandatory: No
  • Type: string
  • Ignored when: dont_manage_sockfile is true

hermes.cli_socket.group

  • Description: Name of the group that should own the socket file when created, as would be fed to chown.
    When left unspecified, it uses the current group of hermes-server running user.
  • Mandatory: No
  • Type: string
  • Ignored when: dont_manage_sockfile is true

hermes.cli_socket.mode

  • Description: The permissions to apply to the socket file when created, as would be fed to chmod.
    For those used to /usr/bin/chmod remember that modes are octal numbers and should be prefixed by a 0.
    If mode is not specified and the socket file does not exist, the default umask on the system will be used when setting the mode for the newly created socket file.
    If mode is not specified and the socket file does exist, the mode of the existing socket file will be used.
  • Mandatory: No
  • Type: integer
  • Default value: 00600
  • Valid values: 0 - 07777
  • Ignored when: dont_manage_sockfile is true

hermes.cli_socket.dont_manage_sockfile

  • Description: Indicates that Hermes should not create the socket file itself, but instead use the socket file descriptor provided by its parent process (typically systemd).
    The provided socket must be a listening AF_UNIX stream socket. One and only one socket must be provided: Hermes will ensure this by checking that the systemd env var LISTEN_FDS is set to 1, and will fail otherwise.
  • Mandatory: No
  • Type: boolean
  • Default value: false
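
As a sketch combining the settings above (the path, owner and group names are hypothetical):

hermes:
  cli_socket:
    path: /var/run/hermes/hermes-server.sock
    owner: hermes
    group: hermes
    mode: 0660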

hermes.logs

Mandatory section to define log settings.

hermes.logs.logfile

  • Description: Path of an existing directory where log files will be stored. When left unspecified, no log file will be stored on disk.
  • Mandatory: No
  • Type: string

hermes.logs.backup_count

  • Description: Hermes will rotate its log file every day at midnight and keep up to backup_count previous log files.
  • Mandatory: No
  • Type: integer
  • Default value: 7
  • Valid values: 0 - 999999

hermes.logs.verbosity

  • Description: Log verbosity.
  • Mandatory: No
  • Type: string
  • Default value: warning
  • Valid values:
    • critical
    • error
    • warning
    • info
    • debug

hermes.logs.long_string_limit

  • Description: Define the limit (max size) of string attributes content to show in logs.
    If a string attribute content is greater than this limit, it will be truncated to this limit and marked as a LONG_STRING in logs.
    Can be set to null to disable this feature and always show full string content in logs.
  • Mandatory: No
  • Type: integer
  • Default value: 512
  • Valid values: [1 - 999999] or null
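
A minimal sketch of the log settings above (the directory path is illustrative):

hermes:
  logs:
    logfile: /var/log/hermes/
    backup_count: 7
    verbosity: info
    long_string_limit: 512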

hermes.mail

Mandatory section to define mail settings, allowing Hermes to notify admins of errors.

The email will contain 3 attachments when possible: previous.txt, current.txt, and diff.txt, containing the previous state, the current state, and the diff between previous and current states.

hermes.mail.server

  • Description: DNS name or IP address of SMTP relay.
  • Mandatory: Yes
  • Type: string

hermes.mail.from

  • Description: E-mail address that will be set as the From address, in the form User name <name@example.com>
  • Mandatory: Yes
  • Type: string

hermes.mail.to

  • Description: Recipient address or list of addresses.
  • Mandatory: Yes
  • Type: string | string[]

hermes.mail.compress_attachments

  • Description: Indicate if attachments must be gzipped or sent raw.
  • Mandatory: No
  • Type: boolean
  • Default value: true

hermes.mail.mailtext_maxsize

  • Description: Max size in bytes for mail content. If content size is greater than mailtext_maxsize, then a default fallback message will be set instead.
  • Mandatory: No
  • Type: integer
  • Default value: 1048576 (1 MB)
  • Valid values: >= 0

hermes.mail.attachment_maxsize

  • Description: Max size in bytes for a single mail attachment. If the attachment size is greater than attachment_maxsize, it will not be attached to the email and a message indicating this will be added to the mail content.
  • Mandatory: No
  • Type: integer
  • Default value: 5242880 (5 MB)
  • Valid values: >= 0
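
A sketch gathering the mail settings above (server name and addresses are placeholders):

hermes:
  mail:
    server: smtp.example.com
    from: Hermes Server <no-reply@example.com>
    to:
      - admin1@example.com
      - admin2@example.com
    compress_attachments: true
    mailtext_maxsize: 1048576
    attachment_maxsize: 5242880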

hermes.plugins

Mandatory section to declare which plugins must be loaded, with their settings.

It is divided into subsections by plugin type.

hermes.plugins.attributes

Facultative section to declare the attributes plugins to load, and their settings.

It must contain a subsection named after the plugin, itself containing a facultative settings subsection with the plugin settings, filled according to the plugin documentation.

Example with the ldapPasswordHash plugin:

hermes:
  # (...)
  plugins:
    attributes:
      ldapPasswordHash:
        settings:
          default_hash_types:
            - SMD5
            - SSHA
            - SSHA256
            - SSHA512
  # (...)

hermes.plugins.datasources

Mandatory section on hermes-server to declare the datasource(s), and their settings. If set on hermes-clients, it will be silently ignored.

The same datasource plugin can be used for several datasources. For each datasource needed, you must declare a subsection with your desired datasource name (which will be used in the datamodel), containing two mandatory entries:

  • type (string): the datasource plugin to use for this datasource.
  • settings (subsection): the datasource plugin settings for this datasource according to the plugin documentation.

Example:

hermes:
  # (...)
  plugins:
    datasources:
      my_oracle1_datasource:
        type: oracle
        settings:
          login: HERMES_DUMMY
          password: "DuMmY_p4s5w0rD"
          port: 1234
          server: dummy.example.com
          sid: DUMMY
      
      my_oracle2_datasource:
        type: oracle
        settings:
          login: HERMES_DUMMY2
          password: "DuMmY2_p4s5w0rD"
          port: 1234
          server: dummy.example.com
          sid: DUMMY2

      my_ldap_datasource:
        type: ldap
        settings:
          uri: ldaps://dummy.example.com:636
          binddn: cn=binddn,dc=example,dc=com
          bindpassword: DuMmY_p4s5w0rD
          basedn: dc=example,dc=com
  # (...)

hermes.plugins.messagebus

Mandatory section to declare the messagebus plugin to load, and its settings. Obviously, you must set up exactly one message bus plugin.

  • On hermes-server, it will look for a message bus producer plugin in the plugins/messagebus_producers/ directory.
  • On hermes-client, it will look for a message bus consumer plugin in the plugins/messagebus_consumers/ directory.

It must contain a subsection named after the plugin, itself containing a facultative settings subsection with the plugin settings, filled according to the messagebus producers or messagebus consumers plugin documentation.

Example with the sqlite producer plugin:

hermes:
  # (...)
  plugins:
    messagebus:
      sqlite:
        settings:
          uri: /path/to/hermes/sqlite/message/bus.sqlite
          retention_in_days: 30
  # (...)


hermes-server

Server settings.



hermes-server

hermes-server.updateInterval

  • Description: Interval between two data updates, in seconds.
  • Mandatory: Yes
  • Type: integer
  • Valid values: >= 0

hermes-server.datamodel

Mandatory subsection used to configure server datamodel.

For each data type needed, a subsection with the desired data type name must be created and configured. The data type name MUST start with an alphanumeric character.

Obviously, at least one data type must be set up.

Note

The declaration order of data types is important to enforce data integrity:

  • add/modify events will be processed in the declaration order
  • remove events will be processed in the reversed declaration order

So you really should first declare data types that do not depend on any other types, and then types that have dependencies (foreign keys) to those declared above.

hermes-server.datamodel.data-type-name.primarykeyattr

  • Description: The name of the datamodel attribute used as primary key. If the primary key is a tuple, you may declare a list of names.
  • Mandatory: Yes
  • Type: string | string[]

hermes-server.datamodel.data-type-name.toString

  • Description: Jinja template to compose the way a data item will be represented in log files.
  • Mandatory: No
  • Type: string
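
For instance, assuming a hypothetical data type with user_id and login attributes, a toString template could be:

hermes-server:
  datamodel:
    SRVUsers:
      primarykeyattr: user_id
      toString: "<User[{{ user_id }}, login={{ login }}]>"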

hermes-server.datamodel.data-type-name.on_merge_conflict

  • Description: Behavior when the same attribute has different values across multiple sources.
  • Mandatory: No
  • Type: string
  • Default value: use_cached_entry
  • Valid values:
    • keep_first_value: use the first value encountered, in source declaration order.
    • use_cached_entry: ignore the fetched data and keep using the cached entry until the conflict is solved.

hermes-server.datamodel.data-type-name.foreignkeys

  • Description: Allows declaring foreign keys in a data type, which clients will use to enforce their foreign keys policy. See Foreign keys for details.
    The setting is a dict with the current data type's primary key as key and, as value, a dict with two entries referring to the parent data type (from_objtype) and its primary key attribute (from_attr).
    Although it might seem intuitive, declaring foreign keys will not create any integrity constraint rules automatically.

    Warning

    Whether for the current data type or for the parent, attributes must be primary keys of their respective types.
    In addition, the primary key of the parent cannot be multivalued (a tuple).

    These constraints may be relaxed one day, but for now no relevant use case has justified the need.

    Example:

    foreignkeys:
      group_id:
        from_objtype: SRVGroups
        from_attr: gid
      user_id:
        from_objtype: SRVUsers
        from_attr: uid
  • Mandatory: No

  • Type: dict[string, dict[string, string]]

  • Default value: {}

hermes-server.datamodel.data-type-name.integrity_constraints

  • Description: Integrity constraints between datamodel types, expressed in Jinja.
    WARNING: they can be terribly slow, so you should keep them as simple as possible and focus on primary keys.

    Jinja vars available are:

    • _SELF: the current object
    • data-type-name_pkeys: a set with every primary key of specified data type.
    • data-type-name: a list of dict containing each entry of specified data type.

    Example:

    integrity_constraints:
      - "{{ _SELF.pkey_attr in OTHERDataType_pkeys }}"
  • Mandatory: No

  • Type: string[]

  • Default value: []

hermes-server.datamodel.data-type-name.sources

Mandatory subsection listing the datasource(s) used to fetch current data type data.

For each datasource used, a subsection with its name must be defined and configured.

Obviously, at least one datasource must be set up.

Note

The declaration order of datasources is important for data merging if hermes-server.datamodel.data-type-name.on_merge_conflict is set to keep_first_value, or if hermes-server.datamodel.data-type-name.sources.datasource-name.pkey_merge_constraint is used.

hermes-server.datamodel.data-type-name.sources.datasource-name.fetch

Mandatory subsection to set up the query used to fetch data.

According to datasource plugin used, query and vars may be facultative: configure them according to your datasource plugin documentation.

hermes-server.datamodel.data-type-name.sources.datasource-name.fetch.type
  • Description: Indicates to the datasource plugin which flavor of query to process. Should probably be fetch here.
  • Mandatory: Yes
  • Type: string
  • Valid values:
    • fetch: Indicate that plugin must fetch data, without altering dataset.
    • add: Indicate that plugin will add data to dataset.
    • delete: Indicate that plugin will delete data from dataset.
    • modify: Indicate that plugin will modify data in dataset.
hermes-server.datamodel.data-type-name.sources.datasource-name.fetch.query
  • Description: The query to send to datasource. May be a Jinja template.

    Jinja vars available are:

    • REMOTE_ATTRIBUTES: the list of remote attribute names used in attrsmapping. May be useful to generate SQL queries with required data without using wildcards or manually typing the attribute list.
    • CACHED_VALUES: the cache of previous query. A list of dictionaries, each dictionary is an entry with attrname as key, and corresponding value as value. May be useful to filter the query using a cached value.
    • data-type-name_pkeys: a set with every primary key of specified data type. The var’s datatype must be declared before the current one in the datamodel, otherwise the content of the var will always be empty as its content will be fetched after that of the current datatype.
    • data-type-name: a list of dict containing each entry of specified data type. The var’s datatype must be declared before the current one in the datamodel, otherwise the content of the var will always be empty as its content will be fetched after that of the current datatype.
  • Mandatory: No

  • Type: string

hermes-server.datamodel.data-type-name.sources.datasource-name.fetch.vars

Facultative subsection containing some vars to pass to datasource plugin.

The var name as key, and its value as value. Each value may be a Jinja template.

Jinja vars available are:

  • REMOTE_ATTRIBUTES: the list of remote attribute names used in attrsmapping. May be useful to generate SQL queries with required data without using wildcards or manually typing the attribute list.
  • CACHED_VALUES: the cache of previous query. A list of dictionaries, each dictionary is an entry with attrname as key, and corresponding value as value.
  • data-type-name_pkeys: a set with every primary key of specified data type. The var’s datatype must be declared before the current one in the datamodel, otherwise the content of the var will always be empty as its content will be fetched after that of the current datatype.
  • data-type-name: a list of dict containing each entry of specified data type. The var’s datatype must be declared before the current one in the datamodel, otherwise the content of the var will always be empty as its content will be fetched after that of the current datatype.
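
As an illustration, here is a hypothetical fetch subsection for an SQL datasource, using REMOTE_ATTRIBUTES in the query and a var rendered from CACHED_VALUES (the table and attribute names are invented):

hermes-server:
  datamodel:
    SRVUsers:
      sources:
        my_oracle1_datasource:
          fetch:
            type: fetch
            query: >-
              SELECT {{ REMOTE_ATTRIBUTES | join(', ') }}
              FROM USERS_TABLE
              WHERE LAST_CHANGE > :lastchange
            vars:
              lastchange: "{{ CACHED_VALUES | map(attribute='LAST_CHANGE') | max | default(0) }}"
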
hermes-server.datamodel.data-type-name.sources.datasource-name.commit_one

Facultative subsection to set up a query to run each time an item of current data has been processed without errors.

According to datasource plugin used, query and vars may be facultative: configure them according to your datasource plugin documentation.

Warning

commit_one and commit_all are mutually exclusive: you can set none or one of them, but not both at the same time.

hermes-server.datamodel.data-type-name.sources.datasource-name.commit_one.type
  • Description: Indicates to the datasource plugin which flavor of query to process.
  • Mandatory: Yes
  • Type: string
  • Valid values:
    • fetch: Indicate that plugin must fetch data, without altering dataset.
    • add: Indicate that plugin will add data to dataset.
    • delete: Indicate that plugin will delete data from dataset.
    • modify: Indicate that plugin will modify data in dataset.
hermes-server.datamodel.data-type-name.sources.datasource-name.commit_one.query
  • Description: The query to send to datasource. May be a Jinja template.

    Jinja vars available are:

    • REMOTE_ATTRIBUTES: the list of remote attribute names used in attrsmapping. May be useful to generate SQL queries with required data without using wildcards or manually typing the attribute list.
    • ITEM_CACHED_VALUES: the cache values of current item. A dictionary with attrname as key, and corresponding value as value.
    • ITEM_FETCHED_VALUES: the fetched values of current item. A dictionary with attrname as key, and corresponding value as value.
  • Mandatory: No

  • Type: string

hermes-server.datamodel.data-type-name.sources.datasource-name.commit_one.vars

Facultative subsection containing some vars to pass to datasource plugin.

The var name as key, and its value as value. Each value may be a Jinja template.

Jinja vars available are:

  • REMOTE_ATTRIBUTES: the list of remote attribute names used in attrsmapping. May be useful to generate SQL queries with required data without using wildcards or manually typing the attribute list.
  • ITEM_CACHED_VALUES: the cache values of current item. A dictionary with attrname as key, and corresponding value as value.
  • ITEM_FETCHED_VALUES: the fetched values of current item. A dictionary with attrname as key, and corresponding value as value.
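
Continuing the hypothetical SQL example above, a commit_one subsection could flag each successfully processed item back in the datasource (the table and attribute names are invented):

hermes-server:
  datamodel:
    SRVUsers:
      sources:
        my_oracle1_datasource:
          commit_one:
            type: modify
            query: >-
              UPDATE USERS_TABLE
              SET PROCESSED = 1
              WHERE USER_ID = :user_id
            vars:
              user_id: "{{ ITEM_FETCHED_VALUES.user_id }}"
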
hermes-server.datamodel.data-type-name.sources.datasource-name.commit_all

Facultative subsection to set up a query to run once all data have been processed with no errors.

According to datasource plugin used, query and vars may be facultative: configure them according to your datasource plugin documentation.

Warning

commit_all and commit_one are mutually exclusive: you can set none or one of them, but not both at the same time.

hermes-server.datamodel.data-type-name.sources.datasource-name.commit_all.type
  • Description: Indicates to the datasource plugin which flavor of query to process.
  • Mandatory: Yes
  • Type: string
  • Valid values:
    • fetch: Indicate that plugin must fetch data, without altering dataset.
    • add: Indicate that plugin will add data to dataset.
    • delete: Indicate that plugin will delete data from dataset.
    • modify: Indicate that plugin will modify data in dataset.
hermes-server.datamodel.data-type-name.sources.datasource-name.commit_all.query
  • Description: The query to send to datasource. May be a Jinja template.

    Jinja vars available are:

    • REMOTE_ATTRIBUTES: the list of remote attribute names used in attrsmapping. May be useful to generate SQL queries with required data without using wildcards or manually typing the attribute list.
    • CACHED_VALUES: the cache of previous polling. A list of dictionaries, each dictionary is an entry with attrname as key, and corresponding value as value.
    • FETCHED_VALUES: the fetched entries of current polling. A list of dictionaries, each dictionary is an entry with attrname as key, and corresponding value as value.
  • Mandatory: No

  • Type: string

hermes-server.datamodel.data-type-name.sources.datasource-name.commit_all.vars

Facultative subsection containing some vars to pass to datasource plugin.

The var name as key, and its value as value. Each value may be a Jinja template.

Jinja vars available are:

  • REMOTE_ATTRIBUTES: the list of remote attribute names used in attrsmapping. May be useful to generate SQL queries with required data without using wildcards or manually typing the attribute list.
  • CACHED_VALUES: the cache of previous polling. A list of dictionaries, each dictionary is an entry with attrname as key, and corresponding value as value.
  • FETCHED_VALUES: the fetched entries of current polling. A list of dictionaries, each dictionary is an entry with attrname as key, and corresponding value as value.
hermes-server.datamodel.data-type-name.sources.datasource-name.attrsmapping

Mandatory subsection to set up attribute mapping. HERMES attributes as keys, REMOTE attributes (on datasource) as values. As a convenience, a list of several remote attributes can be defined; their non-NULL values will be combined into a list. NULL values and empty lists won’t be loaded.

A Jinja template can be set as a value. If you do so, the whole value must be a template: you can’t set "{{ ATTRIBUTE.split('separator') }} SOME_NON_JINJA_ATTR". This is required to allow the software to collect the REMOTE_ATTRIBUTES.

Jinja vars available are:

  • each remote attribute for current data type and datasource with its fetched value, only if its value is not NULL and not an empty list.
  • ITEM_CACHED_VALUES: the cache values of current item. A dictionary with attrname as key, and corresponding value as value.
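
A hypothetical attrsmapping illustrating the three forms (plain remote attribute, list of remote attributes combined into a list, and whole-value Jinja template):

hermes-server:
  datamodel:
    SRVUsers:
      sources:
        my_oracle1_datasource:
          attrsmapping:
            user_id: USER_ID
            login: LOGIN
            displaynames:
              - FIRSTNAME
              - LASTNAME
            mail: "{{ MAIL_ADDRESS | lower }}"
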
hermes-server.datamodel.data-type-name.sources.datasource-name.secrets_attrs
  • Description: Define attributes that will contain sensitive data, like passwords.
    This indicates that Hermes must not cache them. The attribute names set here must exist as keys in attrsmapping. They’ll be sent to clients unless they’re also defined in local_attrs. As they’re not cached, they’ll be seen as added EACH TIME the server is restarted, and the corresponding events will be sent.
  • Mandatory: No
  • Type: string[]
hermes-server.datamodel.data-type-name.sources.datasource-name.cacheonly_attrs
  • Description: Define attributes that will only be stored in cache.
    They won’t be sent in events, nor used to diff with cache. The attribute names set here must exist as keys in attrsmapping.
  • Mandatory: No
  • Type: string[]
hermes-server.datamodel.data-type-name.sources.datasource-name.local_attrs
  • Description: Define attributes that won’t be sent to clients, cached, or used to diff with cache.
    The attribute names set here must exist as keys in attrsmapping.
  • Mandatory: No
  • Type: string[]
hermes-server.datamodel.data-type-name.sources.datasource-name.pkey_merge_constraint
  • Description: Constraint on primary keys applied while merging datasources.
    As merging is processed in the datamodel source declaration order from the config file, the first source’s constraint will be ignored (because its data is created, not merged). Then the first source’s data will be merged with the second source’s according to the second’s pkey_merge_constraint; then the resulting data will be merged with the third source’s data according to the third’s pkey_merge_constraint, and so on.
  • Mandatory: No
  • Type: string
  • Default value: noConstraint
  • Valid values:
    • noConstraint: don’t apply any merge constraint
    • mustNotExist: the primary key in current source must not exist in previous (in datasources declaration order), otherwise the data of current will be discarded
    • mustAlreadyExist: the primary key in current source must already exist in previous (in datasources declaration order), otherwise the data of current will be discarded
    • mustExistInBoth: the primary key in current source must already exist in previous (in datasources declaration order), otherwise the data of both sources will be discarded
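
For example, with two hypothetical sources, entries from the second source could be merged only when their primary key already exists in the first one:

hermes-server:
  datamodel:
    SRVUsers:
      sources:
        main_directory:
          # First declared source: its own constraint would never apply
          pkey_merge_constraint: noConstraint
          # (fetch and attrsmapping omitted for brevity)
        secondary_directory:
          # Discard entries whose primary key is unknown to main_directory
          pkey_merge_constraint: mustAlreadyExist
          # (fetch and attrsmapping omitted for brevity)
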
hermes-server.datamodel.data-type-name.sources.datasource-name.merge_constraints
  • Description: Advanced merge constraints with Jinja rules.
    Warning

    Terribly slow, avoid using them as much as possible.

    Jinja vars available are:
    • _SELF: the data type item in current datasource being currently merged.
    • For each datasource declared in current data type:
      • datasource-name_pkeys: a set with every primary key of data type item in current datasource.
      • datasource-name: the fetched entries of current polling. A list of dictionaries, each dictionary is an entry with attrname as key, and corresponding value as value.
        Note

        If pkey_merge_constraint is defined, it will be enforced before merge_constraints, and the Jinja vars will contain the resulting values.

  • Mandatory: No
  • Type: string[]


hermes-client

Settings shared by all clients.



hermes-client.autoremediation

  • Description: Autoremediation policy to use in the error queue for events concerning the same object.
    Warning

    Enabling this feature may break the regular processing order of events: if your data types are only linked by primary keys, it shouldn’t be problematic, but if the links between them are more complex, you really should consider what could go wrong before enabling it.

    e.g. with the maximum policy and trashbin enabled, autoremediation will delete both events when an added event is followed by a removed event. Had the events been processed without error, the object would have been created and then stored in the trashbin, but in this case it won’t even be created.

    See how autoremediation works for more details.

  • Mandatory: No
  • Type: string
  • Default value: disabled
  • Valid values:
    • disabled: no autoremediation, events are stacked as is (default).
    • conservative: only merge added and modified events between them.
      • merge an added event with a following modified event.
      • merge two successive modified events.
    • maximum: merge every event that can be merged.
      • merge an added event with a following modified event.
      • merge two successive modified events.
      • delete both events when an added event is followed by a removed event.
      • merge a removed event followed by an added event into a single modified event.
      • delete a modified event when it is followed by a removed event.

hermes-client.foreignkeys_policy

  • Description: Set up which event types will be placed in the error queue if the object they concern is the parent (by foreign key) of an object already present in the error queue.
    See Foreign keys for more details.
  • Mandatory: No
  • Type: string
  • Default value: on_remove_event
  • Valid values:
    • disabled: No events; the policy is disabled.
    • on_remove_event: Only on removed events.
    • on_every_event: On every event type (added, modified, removed).

hermes-client.errorQueue_retryInterval

  • Description: Number of minutes between two attempts of re-processing events in error.
  • Mandatory: No
  • Type: integer
  • Default value: 60 (1 hour)
  • Valid values: 1 - 65535

hermes-client.trashbin_purgeInterval

  • Description: Number of minutes between two trashbin purge attempts.
  • Mandatory: No
  • Type: integer
  • Default value: 60 (1 hour)
  • Valid values: 1 - 65535
  • Ignored when: trashbin_retention is 0/unset

hermes-client.trashbin_retention

  • Description: Number of days to keep removed data in trashbin before permanently deleting it.
    0/unset disables the trashbin: data will be deleted immediately.
  • Mandatory: No
  • Type: integer
  • Default value: 0 (no trashbin)
  • Valid values: >= 0

hermes-client.updateInterval

  • Description: Number of seconds to sleep once no more events are available on message bus.
  • Mandatory: No
  • Type: integer
  • Default value: 5
  • Valid values: >= 0

hermes-client.useFirstInitsyncSequence

  • Description: If true, indicates to use the first/oldest initsync sequence available on the message bus. If false, the latest/newest will be used.
  • Mandatory: No
  • Type: boolean
  • Default value: false

hermes-client.datamodel

Mandatory subsection used to configure client datamodel.

For each data type needed, a subsection with the desired data type name must be created and configured. The data type name MUST start with an alphanumeric character.

Obviously, at least one data type must be set up.

hermes-client.datamodel.data-type-name.hermesType

  • Description: Name of corresponding data type on hermes-server.
  • Mandatory: Yes
  • Type: string

hermes-client.datamodel.data-type-name.toString

  • Description: Jinja template to compose the way a data item will be represented in log files.
  • Mandatory: No
  • Type: string

hermes-client.datamodel.data-type-name.attrsmapping

Subsection to set up attribute mapping. CLIENT attributes as keys, REMOTE attributes (identified as HERMES attributes on hermes-server) as values.

A Jinja template can be set as a value. If you do so, any value outside the templates will be used as a raw string, not as a remote attribute name.

Jinja vars available are:

  • each remote attribute for current data type, only if its value is not NULL and not an empty list.
Note

If you don’t use their values, it is not necessary to declare a mapping for the primary key(s). For some data types, you may omit attrsmapping entirely, which is equivalent to defining an empty one: the data type will then only contain its primary key(s).
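
As a sketch, a client datamodel for a hypothetical Users type could look like this (uid is a client attribute; user_id, login and mail_address are assumed hermes-server attributes):

hermes-client:
  datamodel:
    Users:
      hermesType: SRVUsers
      toString: "<User[{{ user_id }}]>"
      attrsmapping:
        uid: user_id
        login: login
        mail: "{{ mail_address | lower }}"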


Plugins

Server and clients plugins

  • attributes: custom Jinja filters to transform data

Server plugins

Clients plugins


Subsections of Plugins

attributes plugins

  • crypto_RSA_OAEP: encrypt/decrypt strings with asymmetric RSA keys, using PKCS#1 OAEP, an asymmetric cipher based on RSA and the OAEP padding

  • ldapPasswordHash: generate LDAP hashes of specified formats from a clear text password string


Subsections of attributes plugins

crypto_RSA_OAEP

Description

This plugin allows encrypting and decrypting strings with asymmetric RSA keys, using PKCS#1 OAEP, an asymmetric cipher based on RSA and OAEP padding.

Configuration

You can set up as many keys as you want in plugin settings. A key can be used to either encrypt or decrypt, but not both. The plugin will determine if it’s an encryption or a decryption operation upon the key type: decryption for private keys, and encryption for public keys.

hermes:
  plugins:
    attributes:
      crypto_RSA_OAEP:
        settings:
          keys:
            # Key name, you can set whatever you want
            encrypt_to_messagebus:
              # Hash type, when decrypting, you must obviously use the same value
              # that was used for encrypting
              hash: SHA3_512
              # Public RSA key used to encrypt
              # WARNING - THIS KEY IS WEAK AND PUBLIC, NEVER USE IT
              rsa_key: |-
                  -----BEGIN PUBLIC KEY-----
                  MCgCIQCy2W1bAPOa1JIeLuV8qq1Qg7h0jxpf8QCik11H9xZcfwIDAQAB
                  -----END PUBLIC KEY-----                  

            # Another key
            decrypt_from_messagebus:
              hash: SHA3_512
              # Private RSA key used to decrypt
              # WARNING - THIS KEY IS WEAK AND PUBLIC, NEVER USE IT
              rsa_key: |-
                  -----BEGIN RSA PRIVATE KEY-----
                  MIGrAgEAAiEAstltWwDzmtSSHi7lfKqtUIO4dI8aX/EAopNdR/cWXH8CAwEAAQIh
                  AKfflFjGNOJQwvJX3Io+/juxO+HFd7SRC++zBD9paZqZAhEA5OtjZQUapRrV/aC5
                  NXFsswIRAMgBtgpz+t0FxyEXdzlcTwUCEHU6WZ8M2xU7xePpH49Ps2MCEQC+78s+
                  /WvfNtXcRI+gJfyVAhAjcIWzHC5q4wzgL7psbPGy
                  -----END RSA PRIVATE KEY-----                  

Valid values for hash are:

  • SHA224
  • SHA256
  • SHA384
  • SHA512
  • SHA3_224
  • SHA3_256
  • SHA3_384
  • SHA3_512

Usage

crypto_RSA_OAEP(value: bytes | str, keyname: str) → str

Once everything is set up, you can encrypt data with encrypt_to_messagebus key like this in a Jinja filter:

password_encrypted: "{{ PASSWORD_CLEAR | crypto_RSA_OAEP('encrypt_to_messagebus') }}"
password_decrypted: "{{ PASSWORD_ENCRYPTED | crypto_RSA_OAEP('decrypt_from_messagebus') }}"

You can even decrypt and immediately re-encrypt data with another key like this:

password_reencrypted: "{{ PASSWORD_ENCRYPTED | crypto_RSA_OAEP('decrypt_from_datasource') | crypto_RSA_OAEP('encrypt_to_messagebus') }}"


ldapPasswordHash

Description

This plugin allows generating LDAP hashes of the specified formats from a clear text password string.

Configuration

You can set up a facultative list of default hash types in the plugin settings. This list will be used if hashtypes is not specified in the filter arguments; otherwise the specified hashtypes will be used.

hermes:
  plugins:
    attributes:
      ldapPasswordHash:
        settings:
          default_hash_types:
            - SMD5
            - SSHA
            - SSHA256
            - SSHA512

Valid values for default_hash_types are:

  • MD5
  • SHA
  • SMD5
  • SSHA
  • SSHA256
  • SSHA512

Usage

ldapPasswordHash(password: str, hashtypes: None | str | list[str] = None) → list[str]

Once everything is set up, you can generate your hash list like this in a Jinja filter:

# Will contain a list of hashes of PASSWORD_CLEAR according to
# default_hash_types settings: SMD5, SSHA, SSHA256, SSHA512
ldap_password_hashes: "{{ PASSWORD_CLEAR | ldapPasswordHash }}"

# Will contain a list with only the SSHA512 hashes of PASSWORD_CLEAR
ldap_password_hashes: "{{ PASSWORD_CLEAR | ldapPasswordHash('SSHA512') }}"

# Will contain a list with only the SSHA256 and SSHA512 hashes of PASSWORD_CLEAR
ldap_password_hashes: "{{ PASSWORD_CLEAR | ldapPasswordHash(['SSHA256', 'SSHA512']) }}"

Boris Lechner 2025-05-05 e022507882f1c7d53ec4dc72b08922261dfdd25f

datasources plugins

  • ldap: use a LDAP server as datasource

  • oracle: use an Oracle database as datasource

  • postgresql: use a PostgreSQL database as datasource

  • sqlite: use a SQLite database as datasource (testing only)

Boris Lechner 2025-05-05 e022507882f1c7d53ec4dc72b08922261dfdd25f

Subsections of datasources plugins

ldap

Description

This plugin allows the use of an LDAP server as datasource.

Configuration

Connection settings are required in plugin configuration.

hermes:
  plugins:
    datasources:
      # Source name. Use whatever you want. Will be used in datamodel
      your_source_name:
        type: ldap
        settings:
          # MANDATORY: LDAP server URI
          uri: ldaps://ldap.example.com:636
          # MANDATORY: LDAP server credentials to use
          binddn: cn=account,dc=example,dc=com
          bindpassword: s3cReT_p4s5w0rD
          # MANDATORY: LDAP base DN
          basedn: dc=example,dc=com

          ssl: # Facultative
            # Path to PEM file with CA certs
            cafile: /path/to/INTERNAL-CA-chain.crt # Facultative
            # Path to file with PEM encoded cert for client cert authentication,
            # requires keyfile
            certfile: /path/to/client.crt # Facultative
            # Path to file with PEM encoded key for client cert authentication,
            # requires certfile
            keyfile: /path/to/client.pem # Facultative

          # Facultative. Default: false.
          # Since the client is not aware of the LDAP schema, it cannot know whether
          # an attribute is single-valued or multi-valued. By default, it will
          # return a single value in its base type, as if it were a single-valued
          # attribute, and multiple values in a list.
          # If this setting is enabled, all values will always be returned in a list.
          always_return_values_in_list: true
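The always_return_values_in_list behavior described in the comment above amounts to a small normalization step over each fetched entry. A hypothetical helper (not the plugin's actual code) makes the two modes explicit:

```python
def normalize_ldap_values(entry: dict, always_list: bool = False) -> dict:
    """Mimic the documented value handling: without schema knowledge,
    a single value is returned bare (as if the attribute were
    single-valued) and multiple values as a list, unless
    always_return_values_in_list is enabled."""
    result = {}
    for attr, values in entry.items():
        if always_list:
            result[attr] = values        # always keep the list form
        elif len(values) == 1:
            result[attr] = values[0]     # single value: unwrap it
        else:
            result[attr] = values        # multiple values: keep the list
    return result
```

Enabling the setting trades convenience for predictability: consumers always iterate over lists instead of special-casing bare values.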

Usage

Usage differs according to the specified operation type.

fetch

Fetch entries from LDAP server.

hermes-server:
  datamodel:
    oneDataType:
      sources:
        your_source_name: # 'your_source_name' was set in plugin settings
          fetch:
            type: fetch
            vars:
              # Facultative: the basedn to use for 'fetch' operation.
              # If unset, setting basedn will be used
              base: "ou=exampleOU,dc=example,dc=com"
              # Facultative: the operation scope for 'fetch' operation
              # Valid values are:
              # - base: to search the "base" object itself
              # - one, onelevel: to search the "base" object’s immediate children
              # - sub, subtree: to search the "base" object and all its descendants
              # If unset, "subtree" will be used
              scope: subtree
              # Facultative: the LDAP filter to use for 'fetch' operation
              # If unset, "(objectClass=*)" will be used
              filter: "(objectClass=*)"
              # Facultative: the attributes to fetch, as a list of strings
              # If unset, all the attributes of each entry are returned
              attrlist: "{{ REMOTE_ATTRIBUTES }}"

add

Add entries to LDAP server.

hermes-server:
  datamodel:
    oneDataType:
      sources:
        your_source_name: # 'your_source_name' was set in plugin settings
          fetch:
            type: add
            vars:
              # Facultative: a list of entries to add.
              # If unset, an empty list will be used (and nothing will be added)
              addlist:
                  # MANDATORY: the DN of the entry. If not specified, the entry will
                  # be silently ignored
                - dn: uid=newentry1,ou=exampleOU,dc=example,dc=com
                  # Facultative: the attributes to add to the entry
                  add:
                    # Create attribute if it doesn't exist, and add "value" to it
                    attrnameToAdd: value
                    # Create attribute if it doesn't exist, and add "value1" and
                    # "value2" to it
                    attrnameToAddList: [value1, value2]
                - dn: uid=newentry2,ou=exampleOU,dc=example,dc=com
                  # ...

delete

Delete entries from LDAP server.

hermes-server:
  datamodel:
    oneDataType:
      sources:
        your_source_name: # 'your_source_name' was set in plugin settings
          fetch:
            type: delete
            vars:
              # Facultative: a list of entries to delete.
              # If unset, an empty list will be used (and nothing will be deleted)
              dellist:
                  # MANDATORY: the DN of the entry. If not specified, the entry will
                  # be silently ignored
                - dn: uid=entryToDelete1,ou=exampleOU,dc=example,dc=com
                - dn: uid=entryToDelete2,ou=exampleOU,dc=example,dc=com
                  # ...

modify

Modify entries on LDAP server.

hermes-server:
  datamodel:
    oneDataType:
      sources:
        your_source_name: # 'your_source_name' was set in plugin settings
          fetch:
            type: modify
            vars:
              # Facultative: a list of entries to modify.
              # If unset, an empty list will be used (and nothing will be modified)
              modlist:
                  # MANDATORY: the DN of the entry. If not specified, the entry will
                  # be silently ignored
                - dn: uid=entryToModify1,ou=exampleOU,dc=example,dc=com

                  # Facultative: the attributes to add to the entry
                  add:
                    # Create attribute if it doesn't exist, and add "value" to it
                    attrnameToAdd: value
                    # Create attribute if it doesn't exist, and add "value1" and
                    # "value2" to it
                    attrnameToAddList: [value1, value2]

                  # Facultative: the attributes to modify in the entry
                  modify:
                    # Create attribute if it doesn't exist, and replace all its
                    # value by "value"
                    attrnameToModify: newvalue
                    # Create attribute if it doesn't exist, and replace all its
                    # value by "newvalue1" and "newvalue2"
                    attrnameToModifyList: [newvalue1, newvalue2]

                  # Facultative: the attributes to delete from the entry
                  delete:
                    # Delete specified attribute and all of its values
                    attrnameToDelete: null
                    # Delete "value" from specified attribute. Raise an error if
                    # value is missing
                    attrnameToDeleteValue: value
                    # Delete "value1" and "value2" from specified attribute. Raise
                    # an error if a value is missing
                    attrnameToDeleteValueList: [value1, value2]

                - dn: uid=entryToModify2,ou=exampleOU,dc=example,dc=com
                  # ...
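Each modlist entry above combines three optional sections that map naturally onto LDAP modify operations (add, replace, delete). A simplified, illustrative translation of one entry into (operation, attribute, values) tuples (not the plugin's actual code) could look like this:

```python
def build_modify_ops(entry: dict) -> list:
    """Translate one 'modlist' entry into (operation, attribute, values)
    tuples, roughly mirroring LDAP MOD_ADD / MOD_REPLACE / MOD_DELETE."""
    def aslist(v):
        return v if isinstance(v, list) else [v]

    ops = []
    for attr, value in entry.get("add", {}).items():
        ops.append(("add", attr, aslist(value)))
    for attr, value in entry.get("modify", {}).items():
        ops.append(("replace", attr, aslist(value)))
    for attr, value in entry.get("delete", {}).items():
        # None means "delete the attribute and all of its values"
        ops.append(("delete", attr, None if value is None else aslist(value)))
    return ops
```

This is why deleting a specific value can raise an error while deleting the whole attribute cannot: the former names values the server must find, the latter does not.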

Boris Lechner 2025-05-05 e022507882f1c7d53ec4dc72b08922261dfdd25f

oracle

Description

This plugin allows using an Oracle database as datasource.

Configuration

Connection settings are required in plugin configuration.

hermes:
  plugins:
    datasources:
      # Source name. Use whatever you want. Will be used in datamodel
      your_source_name:
        type: oracle
        settings:
          # MANDATORY: the database server DNS name or IP address
          server: dummy.example.com
          # MANDATORY: the database connection port
          port: 1234
          # MANDATORY: the database service name. Cannot be set if 'sid' is set
          service_name: DUMMY.example.com
          # MANDATORY: the database SID. Cannot be set if 'service_name' is set
          sid: DUMMY
          # MANDATORY: the database credentials to use
          login: HERMES_DUMMY
          password: "DuMmY_p4s5w0rD"

Usage

Specify a query. If you’d like to provide values from cache, you should provide them in a vars dict, and refer to them in the query by their colon-prefixed key name (:varname): the values will then be bound as query parameters, which automatically sanitizes the query.

The example vars names are prefixed with sanitized_ only for clarity; it’s not a requirement.

hermes-server:
  datamodel:
    oneDataType:
      sources:
        your_source_name: # 'your_source_name' was set in plugin settings
          fetch:
            type: fetch
            query: >-
              SELECT {{ REMOTE_ATTRIBUTES | join(', ') }}
              FROM AN_ORACLE_TABLE              

          commit_one:
            type: modify
            query: >-
              UPDATE AN_ORACLE_TABLE
              SET
                valueToSet = :sanitized_valueToSet
              WHERE pkey = :sanitized_pkey              

            vars:
              sanitized_pkey: "{{ ITEM_FETCHED_VALUES.pkey }}"
              sanitized_valueToSet: "{{ ITEM_FETCHED_VALUES.valueToSet }}"

Boris Lechner 2025-05-05 e022507882f1c7d53ec4dc72b08922261dfdd25f

postgresql

Description

This plugin allows using a PostgreSQL database as datasource.

Configuration

Connection settings are required in plugin configuration.

hermes:
  plugins:
    datasources:
      # Source name. Use whatever you want. Will be used in datamodel
      your_source_name:
        type: postgresql
        settings:
          # MANDATORY: the database server DNS name or IP address
          server: dummy.example.com
          # MANDATORY: the database connection port
          port: 1234
          # MANDATORY: the database name
          dbname: DUMMY
          # MANDATORY: the database credentials to use
          login: HERMES_DUMMY
          password: "DuMmY_p4s5w0rD"

Usage

Specify a query. If you’d like to provide values from cache, you should provide them in a vars dict, and refer to them in the query by wrapping their key name in %(...)s (i.e. %(varname)s): the values will then be bound as query parameters, which automatically sanitizes the query. See the example below.

The example vars names are prefixed with sanitized_ only for clarity; it’s not a requirement.

hermes-server:
  datamodel:
    oneDataType:
      sources:
        your_source_name: # 'your_source_name' was set in plugin settings
          fetch:
            type: fetch
            query: >-
              SELECT {{ REMOTE_ATTRIBUTES | join(', ') }}
              FROM A_POSTGRESQL_TABLE              

          commit_one:
            type: modify
            query: >-
              UPDATE A_POSTGRESQL_TABLE
              SET
                valueToSet = %(sanitized_valueToSet)s
              WHERE pkey = %(sanitized_pkey)s              

            vars:
              sanitized_pkey: "{{ ITEM_FETCHED_VALUES.pkey }}"
              sanitized_valueToSet: "{{ ITEM_FETCHED_VALUES.valueToSet }}"

Boris Lechner 2025-05-05 e022507882f1c7d53ec4dc72b08922261dfdd25f

sqlite

Description

This plugin allows using an SQLite database as datasource.

Configuration

Connection settings are required in plugin configuration.

hermes:
  plugins:
    datasources:
      # Source name. Use whatever you want. Will be used in datamodel
      your_source_name:
        type: sqlite
        settings:
          # MANDATORY: the database file path
          uri: /path/to/sqlite.db

Usage

Specify a query. If you’d like to provide values from cache, you should provide them in a vars dict, and refer to them in the query by their colon-prefixed key name (:varname): the values will then be bound as query parameters, which automatically sanitizes the query.

The example vars names are prefixed with sanitized_ only for clarity; it’s not a requirement.

hermes-server:
  datamodel:
    oneDataType:
      sources:
        your_source_name: # 'your_source_name' was set in plugin settings
          fetch:
            type: fetch
            query: >-
              SELECT {{ REMOTE_ATTRIBUTES | join(', ') }}
              FROM AN_SQLITE_TABLE              

          commit_one:
            type: modify
            query: >-
              UPDATE AN_SQLITE_TABLE
              SET
                valueToSet = :sanitized_valueToSet
              WHERE pkey = :sanitized_pkey              

            vars:
              sanitized_pkey: "{{ ITEM_FETCHED_VALUES.pkey }}"
              sanitized_valueToSet: "{{ ITEM_FETCHED_VALUES.valueToSet }}"
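The colon-prefixed placeholders are standard SQLite named parameters: the driver binds the values separately from the SQL text instead of interpolating them, which is what makes the query safe. A self-contained illustration with Python's built-in sqlite3 module (the table and values are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE AN_SQLITE_TABLE (pkey INTEGER PRIMARY KEY, valueToSet TEXT)"
)
conn.execute("INSERT INTO AN_SQLITE_TABLE VALUES (1, 'old')")

# Equivalent of the 'commit_one' query above: the values travel in a
# separate dict and are bound to the :named placeholders by the driver,
# never concatenated into the SQL string.
conn.execute(
    "UPDATE AN_SQLITE_TABLE SET valueToSet = :sanitized_valueToSet"
    " WHERE pkey = :sanitized_pkey",
    {"sanitized_pkey": 1, "sanitized_valueToSet": "new"},
)
```

The oracle plugin uses the same colon-prefixed style; only the postgresql plugin differs, with its %(varname)s placeholders.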

Boris Lechner 2025-05-05 e022507882f1c7d53ec4dc72b08922261dfdd25f

messagebus producers plugins

  • kafka: Send produced events over an Apache Kafka server

  • sqlite: Send produced events over an SQLite database

Boris Lechner 2025-05-05 e022507882f1c7d53ec4dc72b08922261dfdd25f

Subsections of messagebus producers plugins

kafka

Description

This plugin allows hermes-server to send produced events over an Apache Kafka server.

Configuration

It is possible to connect to the Kafka server without authentication, or with SSL (TLS) authentication.

hermes:
  plugins:
    messagebus:
      kafka:
        settings:
          # MANDATORY: the Kafka server or servers list that can be used
          servers:
            - dummy.example.com:9093

          # Facultative: which Kafka API version to use. If unset, the
          # api version will be detected at startup and reported in the logs.
          # Don't set this directive unless you encounter some
          # "kafka.errors.NoBrokersAvailable: NoBrokersAvailable" errors raised
          # by a "self.check_version()" call.
          api_version: [2, 6, 0]

          # Facultative: Hard limit on the size of a message sent to Kafka.
          # You should set a higher value if your Kafka messages are likely to
          # exceed the default of 1MB or if you encountered the error
          #   "MessageSizeTooLargeError: The message is xxx bytes when
          #    serialized which is larger than the maximum request size you
          #    have configured with the max_request_size configuration".
          # Default: 1048576.
          max_request_size: 1048576

          # Facultative: enables SSL authentication. If set, the 3 options below
          # must be defined
          ssl:
            # MANDATORY: hermes-server cert file that will be used for
            # authentication
            certfile: /path/to/.hermes/dummy.crt
            # MANDATORY: hermes-server cert file private key
            keyfile: /path/to/.hermes/dummy.pem
            # MANDATORY: The PKI CA cert
            cafile: /path/to/.hermes/INTERNAL-CA-chain.crt

          # MANDATORY: the topic to send events to
          topic: hermes

Boris Lechner 2025-05-05 e022507882f1c7d53ec4dc72b08922261dfdd25f

sqlite

Description

This plugin allows hermes-server to send produced events over an SQLite database.

Configuration

To emulate the behavior of other message buses that delete messages once some conditions are met, retention_in_days can be set. It will delete messages older than the specified number of days.

hermes:
  plugins:
    messagebus:
      sqlite:
        settings:
          # MANDATORY:
          uri: /path/to/.hermes/bus.sqlite
          retention_in_days: 1
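The retention mechanism boils down to periodically deleting rows older than a cutoff timestamp. A stand-alone sketch of that idea with Python's sqlite3 module (the schema and column names are invented for illustration, not the plugin's actual ones):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE messages (id INTEGER PRIMARY KEY, timestamp REAL, payload TEXT)"
)

now = time.time()
DAY = 86400  # seconds in a day
conn.execute("INSERT INTO messages (timestamp, payload) VALUES (?, ?)",
             (now - 3 * DAY, "old event"))
conn.execute("INSERT INTO messages (timestamp, payload) VALUES (?, ?)",
             (now, "fresh event"))

retention_in_days = 1
# Emulate message-bus retention: drop every message older than the cutoff
conn.execute("DELETE FROM messages WHERE timestamp < ?",
             (now - retention_in_days * DAY,))
```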

Boris Lechner 2025-05-05 e022507882f1c7d53ec4dc72b08922261dfdd25f

hermes-client plugins

The client plugins are grouped into categories, each serving the same goal across several target types. There is currently only one plugin category:

  • usersgroups: manage users, groups, userpasswords and groups membership

Boris Lechner 2025-05-05 e022507882f1c7d53ec4dc72b08922261dfdd25f

Subsections of hermes-client plugins

usersgroups

Manage users, groups, userpasswords and groups membership.

Available clients are:

Boris Lechner 2025-05-05 e022507882f1c7d53ec4dc72b08922261dfdd25f

Subsections of usersgroups

adpypsrp

Description

This client will handle Users, Groups and UserPasswords events, and store data into an Active Directory through PowerShell commands run via pypsrp.

The settings list standardAttributes contains the available cmdlet parameters used for Users (New-ADUser / Set-ADUser) and Groups (New-ADGroup / Set-ADGroup). The settings list otherAttributes may contain available LDAP display name (ldapDisplayName) attributes to manage those that are not represented by cmdlet parameters for Users and Groups.

The local Datamodel keys MUST exist in standardAttributes or otherAttributes, and will be used as cmdlet parameters with their associated values, allowing every AD attribute to be handled.

The GroupsMembers will only associate a User with a Group. The SubGroupsMembers will only associate a Group with a Group, allowing nested groups to be handled.

To avoid security issues and corner cases with the trashbin, a complex random password is set when a user is created. This unknown password will be overwritten by the next UserPassword event of the User. This avoids having an enabled account with no password.

The trashbin will only disable the account.

Configuration

hermes-client-usersgroups_adpypsrp:
  WinRM:  # For options details, you may look at https://pypi.org/project/pypsrp/ - "Connection"
    # MANDATORY: AD server URI and port
    host: radon1.in.insa-strasbourg.fr
    port: 5986
    # MANDATORY: AD server credentials
    login: administrator
    password: "s3cReT_p4s5w0rD"
    # Default: true
    ssl: true
    # Default: true
    ssl_cert_validation: false
    # Default: true
    credssp_disable_tlsv1_2: true
    # Default: "auto". Valid values are [auto, always, never]
    encryption: always
    # Default: "wsman"
    path: "wsman"
    # Default: "negotiate". Valid values are [basic, certificate, negotiate, ntlm, kerberos, credssp]
    auth: kerberos
    # Default: "WSMAN". Override the service part of the calculated SPN used when authenticating the server.
    # This is only valid if negotiate auth negotiated Kerberos or kerberos was explicitly set.
    # If you obtain an error "Server not found in Kerberos database", you may try to set HTTP here.
    negotiate_service: WSMAN

  AD_domain:
    # MANDATORY: AD domain name and DN
    name: in.insa-strasbourg.fr
    dn: DC=in,DC=insa-strasbourg,DC=fr
    # MANDATORY: OUs where Users and Groups will be stored
    users_ou: OU=INSA,OU=People,DC=in,DC=insa-strasbourg,DC=fr
    groups_ou: OU=INSA,OU=Groups,DC=in,DC=insa-strasbourg,DC=fr

  # Optional: forces each user to be added to the specified group list.
  # Group membership is only added when the user is created: any change to this parameter's value
  # will only impact users created subsequently
  Users_mandatory_groups:
    - MandatoryGroup1
    - MandatoryGroup2

  # Defines cmdlet parameters that can be set, and the valid type of the associated value
  # You really should set it as is.
  standardAttributes:
    Users:
      AccountExpirationDate: "<DateTime>"
      AccountNotDelegated: "<Boolean>"
      AllowReversiblePasswordEncryption: "<Boolean>"
      AuthenticationPolicy: "<ADAuthenticationPolicy>"
      AuthenticationPolicySilo: "<ADAuthenticationPolicySilo>"
      AuthType: "<ADAuthType>"
      CannotChangePassword: "<Boolean>"
      ChangePasswordAtLogon: "<Boolean>"
      City: "<String>"
      Company: "<String>"
      CompoundIdentitySupported: "<Boolean>"
      Country: "<String>"
      # Credential: "<PSCredential>" # Useless: Specifies the user account credentials to use to perform this task
      Department: "<String>"
      Description: "<String>"
      DisplayName: "<String>"
      Division: "<String>"
      EmailAddress: "<String>"
      EmployeeID: "<String>"
      EmployeeNumber: "<String>"
      Enabled: "<Boolean>"
      Fax: "<String>"
      GivenName: "<String>"
      HomeDirectory: "<String>"
      HomeDrive: "<String>"
      HomePage: "<String>"
      HomePhone: "<String>"
      KerberosEncryptionType: "<ADKerberosEncryptionType>"
      LogonWorkstations: "<String>"
      Manager: "<ADUser>"
      MobilePhone: "<String>"
      Office: "<String>"
      OfficePhone: "<String>"
      Organization: "<String>"
      OtherName: "<String>"
      PasswordNeverExpires: "<Boolean>"
      PasswordNotRequired: "<Boolean>"
      POBox: "<String>"
      PostalCode: "<String>"
      # PrincipalsAllowedToDelegateToAccount: "<ADPrincipal[]>" # Won't be set
      ProfilePath: "<String>"
      SamAccountName: "<String>"
      ScriptPath: "<String>"
      # Server: "<String>" # Useless: Specifies the Active Directory Domain Services instance to connect to
      SmartcardLogonRequired: "<Boolean>"
      State: "<String>"
      StreetAddress: "<String>"
      Surname: "<String>"
      Title: "<String>"
      # TrustedForDelegation: "<Boolean>" # Won't be set
      UserPrincipalName: "<String>"

    Groups:
      AuthType: "<ADAuthType>"
      # Credential: "<PSCredential>" # Useless: Specifies the user account credentials to use to perform this task
      Description: "<String>"
      DisplayName: "<String>"
      GroupCategory: "<ADGroupCategory>"
      GroupScope: "<ADGroupScope>"
      HomePage: "<String>"
      ManagedBy: "<ADPrincipal>"
      SamAccountName: "<String>"
      # Server: "<String>" # Useless: Specifies the Active Directory Domain Services instance to connect to

  # Defines LDAP display name (ldapDisplayName) to handle, that are not handled with standardAttributes.
  # You can set your desired values. The values below are just here for example.
  otherAttributes:
    Users:
      otherMobile: "<String[]>"
      otherTelephone: "<String[]>"
      url: "<String[]>"

  # Optional random password generation settings. Default: values specified below
  # Random password is generated to initialize a user whose password is not yet available,
  # or when the user password is removed but the user still exists
  random_passwords:
    # Password length
    length: 32
    # If true, the generated password may contain some upper case letters
    with_upper_letters: true
    # The generated password will contain at least this number of upper case letters
    minimum_number_of_upper_letters: 1
    # If true, the generated password may contain some lower case letters
    with_lower_letters: true
    # The generated password will contain at least this number of lower case letters
    minimum_number_of_lower_letters: 1
    # If true, the generated password may contain some numbers
    with_numbers: true
    # The generated password will contain at least this number of numbers
    minimum_number_of_numbers: 1
    # If true, the generated password may contain some special chars
    with_special_chars: true
    # The generated password will contain at least this number of special chars
    minimum_number_of_special_chars: 1
    # If true, the generated password won't contain the chars specified in 'ambigous_chars_dictionary'
    avoid_ambigous_chars: false
    # The dictionary of ambiguous chars (case sensitive) that may be forbidden in the password, even if some are present in other dictionaries
    ambigous_chars_dictionary: "lIO01"
    # The dictionary of letters (case-insensitive) allowed in the password
    letters_dictionary: "abcdefghijklmnopqrstuvwxyz"
    # The dictionary of special chars allowed in password
    special_chars_dictionary: "!@#$%^&*"
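The random_passwords settings above describe constraints the generated password must satisfy: total length, a minimum count per enabled character class, and optionally a set of forbidden ambiguous characters. A simplified generator honoring those constraints, using Python's secrets module (an illustrative sketch, not the client's actual implementation):

```python
import secrets
import string

def generate_password(length=32, min_upper=1, min_lower=1,
                      min_digits=1, min_special=1,
                      special_chars="!@#$%^&*", avoid=""):
    """Draw the required minimum from each character class, fill the rest
    from the whole allowed alphabet, then shuffle the result."""
    classes = [
        (min_upper, [c for c in string.ascii_uppercase if c not in avoid]),
        (min_lower, [c for c in string.ascii_lowercase if c not in avoid]),
        (min_digits, [c for c in string.digits if c not in avoid]),
        (min_special, [c for c in special_chars if c not in avoid]),
    ]
    # Guarantee the per-class minimums first
    chars = [secrets.choice(pool) for n, pool in classes for _ in range(n)]
    # Then pad to the requested length from the full allowed alphabet
    alphabet = [c for _, pool in classes for c in pool]
    chars += [secrets.choice(alphabet) for _ in range(length - len(chars))]
    # Shuffle so the mandatory characters are not always at the front
    secrets.SystemRandom().shuffle(chars)
    return "".join(chars)
```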

Datamodel

The following data types may be set up:

  • Users: requires the attribute SamAccountName to be set
  • UserPasswords: obviously requires Users, and requires the attribute user_pkey corresponding to the primary keys of Users, and the attribute password. All other attributes will be ignored
  • Groups: requires the attribute SamAccountName to be set
  • GroupsMembers: obviously requires Users and Groups, and requires the attributes user_pkey and group_pkey corresponding to the primary keys of Users and Groups. All other attributes will be ignored
  • SubGroupsMembers: obviously requires Groups, and requires that the subgroup_pkey and group_pkey attributes match the primary key of the subgroup to be assigned, and that of the assignment group, respectively. All other attributes will be ignored

  datamodel:
    Users:
      hermesType: your_server_Users_type_name
      attrsmapping:
        user_pkey: user_primary_key_on_server
        SamAccountName: login_on_server
        UserPrincipalName: "{{ login_on_server ~ '@YOU.AD.DOMAIN.TLD' }}"
        # Not mandatory, only for example:
        MobilePhone: "{{ (mobile | default([None]))[0] }}" # <String>
        otherMobile: "{{ (mobile | default([]))[1:]  }}" # <String[]>
        # ...

    UserPasswords:
      hermesType: your_server_UserPasswords_type_name
      attrsmapping:
        user_pkey: user_primary_key_on_server
        password: cleartext_password_on_server
        # ...

    Groups:
      hermesType: your_server_Groups_type_name
      attrsmapping:
        group_pkey: group_primary_key_on_server
        SamAccountName: group_name_on_server
        # ...

    GroupsMembers:
      hermesType: your_server_GroupsMembers_type_name
      attrsmapping:
        user_pkey: user_primary_key_on_server
        group_pkey: group_primary_key_on_server
        # ...

    SubGroupsMembers:
      hermesType: your_server_SubGroupsMembers_type_name
      attrsmapping:
        subgroup_pkey: subgroup_primary_key_on_server
        group_pkey: group_primary_key_on_server
        # ...

Boris Lechner 2025-05-05 e022507882f1c7d53ec4dc72b08922261dfdd25f

bsspartage

Description

This client will handle Users, UserPasswords, Groups, GroupsMembers, GroupsSenders and Resources events, and store data into the PARTAGE dashboard through its API, using libPythonBssApi.

To avoid security issues, if no hash is available at user creation, a complex random password will be set. This unknown password will be changed once a userPassword attribute is set on the User or the UserPassword. This avoids having an enabled account with no password.

The trashbin will only disable the account.

Configuration

You have to configure an authentication mapping containing all domains managed by this client as keys, and their API key as values.

hermes-client-usersgroups_bsspartage:
  authentication:
    example.com: "Secret_API_key_of_example.com"
    subdomain.example.com: "Secret_API_key_of_subdomain.example.com"
  
  # When an attribute has no more values, the default behavior is to keep its latest value in place.
  # This setting allows overriding this behavior for the specified attributes, with the replacement values.
  # Please note that it is forbidden to set Users.userPassword, as the default behavior is to generate a new random password.
  # It is also forbidden to set null values, as this reverts to the default behavior. In that case, simply remove the affected attribute from this list.
  #
  # The values set below are the default values used if default_removed_values is not set
  default_removed_values:
    Users:
      co: ""
      company: ""
      description: ""
      displayName: ""
      facsimileTelephoneNumber: ""
      givenName: ""
      homePhone: ""
      initials: ""
      l: ""
      mobile: ""
      name: ""
      pager: ""
      postalCode: ""
      st: ""
      street: ""
      telephoneNumber: ""
      title: ""
      zimbraNotes: ""
      zimbraPrefMailForwardingAddress: ""
      zimbraMailCanonicalAddress: ""
      zimbraPrefFromDisplay: ""
      zimbraMailQuota: 0
    Groups:
      # Values should be set to empty strings, but a bug in the API ignores them.
      # This bug has been reported to PARTAGE's team.
      description: "-" 
      displayName: "-"
      zimbraNotes: "-"
    Resources:
      co: ""
      description: ""
      l: ""
      postalCode: ""
      st: ""
      street: ""
      zimbraCalResBuilding: ""
      zimbraCalResContactEmail: ""
      zimbraCalResContactName: ""
      zimbraCalResContactPhone: ""
      zimbraCalResFloor: ""
      zimbraCalResLocationDisplayName: ""
      zimbraCalResRoom: ""
      zimbraCalResSite: ""
      zimbraNotes: ""
      zimbraCalResCapacity: "-1"

  # Optional random password generation settings. Default: values specified below
  # Random password is generated to initialize a user whose password is not yet available
  random_passwords:
    # Password length
    length: 32
    # If true, the generated password may contain some upper case letters
    with_upper_letters: true
    # The generated password will contain at least this number of upper case letters
    minimum_number_of_upper_letters: 1
    # If true, the generated password may contain some lower case letters
    with_lower_letters: true
    # The generated password will contain at least this number of lower case letters
    minimum_number_of_lower_letters: 1
    # If true, the generated password may contain some numbers
    with_numbers: true
    # The generated password will contain at least this number of numbers
    minimum_number_of_numbers: 1
    # If true, the generated password may contain some special chars
    with_special_chars: true
    # The generated password will contain at least this number of special chars
    minimum_number_of_special_chars: 1
    # If true, the generated password won't contain the chars specified in 'ambigous_chars_dictionary'
    avoid_ambigous_chars: false
    # The dictionary of ambiguous chars (case sensitive) that may be forbidden in the password, even if some are present in other dictionaries
    ambigous_chars_dictionary: "lIO01"
    # The dictionary of letters (case-insensitive) allowed in the password
    letters_dictionary: "abcdefghijklmnopqrstuvwxyz"
    # The dictionary of special chars allowed in password
    special_chars_dictionary: "!@#$%^&*"

Datamodel

The following data types may be set up:

  • Users: for user accounts. Requires the attributes name and sn to be set; a facultative aliases attribute may be set, and the others are attributes as defined and used by libPythonBssApi, all facultative. Note that the zimbraAllowFromAddress, zimbraFeatureContactsEnabled and zimbraMailForwardingAddress attributes are not supported by libPythonBssApi.
  • UserPasswords: obviously requires Users, requires that its primary keys correspond to the primary keys of Users, and requires the attribute userPassword, which must contain a valid LDAP hash. All other attributes will be ignored. As the userPassword attribute can also be managed by Users, you have to choose: either you manage it through Users, or through UserPasswords, but in no case should you use both at the same time, for obvious reasons.
  • Groups: for groups and distribution lists. Requires the attributes name and zimbraMailStatus to be set; a facultative aliases attribute may be set, and the others are attributes as defined and used by libPythonBssApi, all facultative.
  • GroupsMembers: to add users as group members. Obviously requires Users and Groups, and requires the attributes user_pkey and group_pkey corresponding to the primary keys of Users and Groups. All other attributes will be ignored.
  • GroupsSenders: to add users as group senders. Obviously requires Users and Groups, and requires the attributes user_pkey and group_pkey corresponding to the primary keys of Users and Groups. All other attributes will be ignored.
  • Resources: for resources. Requires the attributes name, zimbraCalResType and displayName to be set, and the others are attributes as defined and used by libPythonBssApi, all facultative.
Warning

If you’re setting the Users.zimbraCOSId, you should avoid setting COS-managed attributes in your datamodel, as overriding the COS default value may lead to unexpected behaviours.

Warning

Since the API does not allow renaming Groups and Resources, this operation is done by deleting the old instance and recreating the new one in the process. However, this can cause loss of links and information (e.g. resource calendars), and it is probably best to avoid these renames.

Tip

To handle Users.zimbraCOSId, it is likely that your data source provides a name rather than the COSId. It is possible to declare a mapping table in Jinja directly in your configuration:

  datamodel:
    Users:
      hermesType: your_server_Users_type_name
      attrsmapping:
        # ...
        zimbraCOSId: >-
          {{
              {
                'name_of_cos1': '11111111-1111-1111-1111-111111111111',
                'name_of_cos2': '22222222-2222-2222-2222-222222222222',
                'name_of_cos3': '33333333-3333-3333-3333-333333333333',
              }[zimbraCOSName_value_from_server | default('name_of_cos1') | lower]
              | default('11111111-1111-1111-1111-111111111111')
          }}          
        # ...
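
The Jinja expression above is a dictionary lookup with two fallbacks: a missing COS name first falls back to a default name, and an unmapped name finally falls back to a default id. In plain Python, with the same placeholder ids (illustrative only), the logic reads approximately:

```python
from typing import Optional

# Hypothetical COS name -> COSId mapping, with the placeholder ids from the example above
COS_IDS = {
    "name_of_cos1": "11111111-1111-1111-1111-111111111111",
    "name_of_cos2": "22222222-2222-2222-2222-222222222222",
}
DEFAULT_COS_ID = COS_IDS["name_of_cos1"]

def resolve_cos_id(cos_name: Optional[str]) -> str:
    """First fallback: missing name -> 'name_of_cos1'; second fallback: unmapped name -> default id."""
    key = (cos_name or "name_of_cos1").lower()
    return COS_IDS.get(key, DEFAULT_COS_ID)
```

The final fallback matters: without it, a COS name absent from the mapping table would leave zimbraCOSId unset.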
  datamodel:
    Users:
      hermesType: your_server_Users_type_name
      attrsmapping:
        # User primary email address <Valid email address>
        name: name_value_from_server
        # User last name <String>
        sn: sn_value_from_server

        # List of aliases for this user <String[]>
        aliases: aliases_value_from_server
        # User EPPN number <String>
        carLicense: carLicense_value_from_server
        # Country name <String>
        co: co_value_from_server
        # Company or institution name <String>
        company: company_value_from_server
        # Account description <String>
        description: description_value_from_server
        # Name displayed in emails <String>
        displayName: displayName_value_from_server
        # User fax <String>
        facsimileTelephoneNumber: facsimileTelephoneNumber_value_from_server
        # User first name <String>
        givenName: givenName_value_from_server
        # User home phone <String>
        homePhone: homePhone_value_from_server
        # Initial (Mr. or Mrs.) <String>
        initials: initials_value_from_server
        # User city <String>
        l: l_value_from_server
        # User mobile number <String>
        mobile: mobile_value_from_server
        # User shortcut number <String>
        pager: pager_value_from_server
        # Postal code <String>
        postalCode: postalCode_value_from_server
        # User state <String>
        st: st_value_from_server
        # User street <String>
        street: street_value_from_server
        # User phone <String>
        telephoneNumber: telephoneNumber_value_from_server
        # User title <String>
        title: title_value_from_server
        # Password hash <String>
        userPassword: userPassword_value_from_server
        # Account status (default active) <String(active, closed, locked)>
        zimbraAccountStatus: zimbraAccountStatus_value_from_server
        # Class of service Id <String>
        zimbraCOSId: zimbraCOSId_value_from_server
        # Briefcase tab <String (TRUE, FALSE)>
        zimbraFeatureBriefcasesEnabled: zimbraFeatureBriefcasesEnabled_value_from_server
        # Calendar tab <String (TRUE, FALSE)>
        zimbraFeatureCalendarEnabled: zimbraFeatureCalendarEnabled_value_from_server
        # Mail tab <String (TRUE, FALSE)>
        zimbraFeatureMailEnabled: zimbraFeatureMailEnabled_value_from_server
        # Allow user to specify forward address <String (TRUE, FALSE)>
        zimbraFeatureMailForwardingEnabled: zimbraFeatureMailForwardingEnabled_value_from_server
        # Options tab <String (TRUE, FALSE)>
        zimbraFeatureOptionsEnabled: zimbraFeatureOptionsEnabled_value_from_server
        # Tasks tab <String (TRUE, FALSE)>
        zimbraFeatureTasksEnabled: zimbraFeatureTasksEnabled_value_from_server
        # Hide in GAL <String (TRUE, FALSE)>
        zimbraHideInGal: zimbraHideInGal_value_from_server
        # 0=unlimited <Integer (bytes)>
        zimbraMailQuota: zimbraMailQuota_value_from_server
        # Free notes <String>
        zimbraNotes: zimbraNotes_value_from_server
        # Must change password at next login <String (TRUE, FALSE)>
        zimbraPasswordMustChange: zimbraPasswordMustChange_value_from_server
        # Forward address entered by user <Valid email address>
        zimbraPrefMailForwardingAddress: zimbraPrefMailForwardingAddress_value_from_server
        # Do not keep a copy of mails on the local client <String (TRUE, FALSE)>
        zimbraPrefMailLocalDeliveryDisabled: zimbraPrefMailLocalDeliveryDisabled_value_from_server
        # Email address visible for outgoing messages <String>
        zimbraMailCanonicalAddress: zimbraMailCanonicalAddress_value_from_server
        # Display name visible for outgoing messages <String>
        zimbraPrefFromDisplay: zimbraPrefFromDisplay_value_from_server

    UserPasswords:
      hermesType: your_server_UserPasswords_type_name
      attrsmapping:
        # Password hash <String>
        userPassword: userPassword_value_from_server

    Groups:
      hermesType: your_server_Groups_type_name
      attrsmapping:
        # Group primary email address <Valid email address>
        name: name_value_from_server
        # Discriminant distribution list/group <String (enabled, disabled)>
        zimbraMailStatus: zimbraMailStatus_value_from_server
        
        # List of aliases for this group <String[]>
        aliases: aliases_value_from_server
        # Group description <String>
        description: description_value_from_server
        # Display name <String>
        displayName: displayName_value_from_server
        # Report available shares to new members <String (TRUE, FALSE)>
        zimbraDistributionListSendShareMessageToNewMembers: zimbraDistributionListSendShareMessageToNewMembers_value_from_server
        # Hide group in GAL <String (TRUE, FALSE)>
        zimbraHideInGal: zimbraHideInGal_value_from_server
        # Free notes <String>
        zimbraNotes: zimbraNotes_value_from_server

    GroupsMembers:
      hermesType: your_server_GroupsMembers_type_name
      attrsmapping:
        user_pkey: user_pkey_value_from_server
        group_pkey: group_pkey_value_from_server

    GroupsSenders:
      hermesType: your_server_GroupsSenders_type_name
      attrsmapping:
        user_pkey: user_pkey_value_from_server
        group_pkey: group_pkey_value_from_server
    
    Resources:
      hermesType: your_server_Resources_type_name
      attrsmapping:
        # Resource primary email address <Valid email address>
        name: name_value_from_server
        # Display name <String>
        displayName: displayName_value_from_server
        # Resource type <String (Location, Equipment)>
        zimbraCalResType: zimbraCalResType_value_from_server
        
        # Country name <String>
        co: co_value_from_server
        # Description <String>
        description: description_value_from_server
        # Resource city <String>
        l: l_value_from_server
        # Postal code <String>
        postalCode: postalCode_value_from_server
        # Resource state <String>
        st: st_value_from_server
        # Resource street <String>
        street: street_value_from_server
        # Password hash <String>
        userPassword: userPassword_value_from_server
        # Resource status (default active) <String (active, closed)>
        zimbraAccountStatus: zimbraAccountStatus_value_from_server
        # Automatically accept or decline invitations <String (TRUE, FALSE)>
        zimbraCalResAutoAcceptDecline: zimbraCalResAutoAcceptDecline_value_from_server
        # Automatically decline invitations if there is a risk of conflict <String (TRUE, FALSE)>
        zimbraCalResAutoDeclineIfBusy: zimbraCalResAutoDeclineIfBusy_value_from_server
        # Automatically decline recurring invitations <String (TRUE, FALSE)>
        zimbraCalResAutoDeclineRecurring: zimbraCalResAutoDeclineRecurring_value_from_server
        # Building <String>
        zimbraCalResBuilding: zimbraCalResBuilding_value_from_server
        # Capacity <Integer>
        zimbraCalResCapacity: zimbraCalResCapacity_value_from_server
        # Contact email address <String>
        zimbraCalResContactEmail: zimbraCalResContactEmail_value_from_server
        # Contact name <String>
        zimbraCalResContactName: zimbraCalResContactName_value_from_server
        # Contact phone <String>
        zimbraCalResContactPhone: zimbraCalResContactPhone_value_from_server
        # Floor <String>
        zimbraCalResFloor: zimbraCalResFloor_value_from_server
        # Name of the displayed location <String>
        zimbraCalResLocationDisplayName: zimbraCalResLocationDisplayName_value_from_server
        # Room <String>
        zimbraCalResRoom: zimbraCalResRoom_value_from_server
        # Site <String>
        zimbraCalResSite: zimbraCalResSite_value_from_server
        # Free notes <String>
        zimbraNotes: zimbraNotes_value_from_server
        # Forward calendar invitations to this address <Array>
        zimbraPrefCalendarForwardInvitesTo: zimbraPrefCalendarForwardInvitesTo_value_from_server

Boris Lechner 2025-05-05 e022507882f1c7d53ec4dc72b08922261dfdd25f

flatfiles_emails_of_groups

Description

This client will generate one flat TXT file per Groups entry, containing the e-mail addresses of its members (one per line).

Configuration

hermes-client-usersgroups_flatfiles_emails_of_groups:
  # MANDATORY
  destDir: "/path/where/files/are/stored"

  # Facultative: if set, a file will be generated only for the group names specified in this list
  onlyTheseGroups:
    - group1
    - group2

Datamodel

The following data types must be set up:

  • Users, requires the following attribute names:
    • user_pkey: the user primary key
    • mail: the user email address
  • Groups, requires the following attribute names:
    • group_pkey: the group primary key
    • name: the group name, that will be compared to those in onlyTheseGroups, and used to name the destination file “groupName.txt”
  • GroupsMembers, requires the following attribute names:
    • user_pkey: the user primary key
    • group_pkey: the group primary key
  datamodel:
    Users:
      hermesType: your_server_Users_type_name
      attrsmapping:
        user_pkey: user_pkey_on_server
        mail: mail_on_server

    Groups:
      hermesType: your_server_Groups_type_name
      attrsmapping:
        group_pkey: group_pkey_on_server
        name: group_name_on_server

    GroupsMembers:
      hermesType: your_server_GroupsMembers_type_name
      attrsmapping:
        user_pkey: user_pkey_on_server
        group_pkey: group_pkey_on_server
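
Conceptually, the client joins the three types above on their keys and writes one file per group. A simplified sketch of that join (hypothetical helper, not the actual client code):

```python
def build_group_files(users, groups, members, only_these_groups=None):
    """Return a {filename: content} mapping, one entry per (selected) group."""
    mails = {u["user_pkey"]: u["mail"] for u in users}
    names = {g["group_pkey"]: g["name"] for g in groups}
    files = {}
    for gpk, name in names.items():
        if only_these_groups is not None and name not in only_these_groups:
            continue  # honors the onlyTheseGroups setting
        lines = [mails[m["user_pkey"]] for m in members
                 if m["group_pkey"] == gpk and m["user_pkey"] in mails]
        files[f"{name}.txt"] = "\n".join(lines) + "\n"
    return files
```

Each resulting entry would then be written under destDir as “groupName.txt”.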


kadmin_heimdal

Description

This client will handle Users and UserPasswords, and store data in a Heimdal Kerberos server.

Configuration

hermes-client-usersgroups_kadmin_heimdal:
  # MANDATORY: Principal with required rights to manage users and passwords in kadmin
  kadmin_login: root/admin
  # MANDATORY: Password of principal above
  kadmin_password: "s3cReT_p4s5w0rD"
  # MANDATORY: Name of Kerberos realm
  kadmin_realm: KERBEROS_REALM

  # Service principal name to get ticket for. Default: kadmin/admin
  kinit_spn: kadmin/admin
  # kinit command to use. Default: kinit.heimdal
  kinit_cmd: kinit.heimdal
  # kadmin command to use. Default: kadmin.heimdal
  kadmin_cmd: kadmin.heimdal
  # kdestroy command to use. Default: kdestroy.heimdal
  kdestroy_cmd: kdestroy.heimdal

  # kadmin additional args to use when adding a user. Must be a list of strings. Default:
  #   - "--max-ticket-life=1 day"
  #   - "--max-renewable-life=1 week"
  #   - "--attributes="
  #   - "--expiration-time=never"
  #   - "--policy=default"
  #   - "--pw-expiration-time=never"
  kadmin_user_add_additional_options:
    - "--max-ticket-life=1 day"
    - "--max-renewable-life=1 week"
    - "--attributes="
    - "--expiration-time=never"
    - "--policy=default"
    - "--pw-expiration-time=never"
  
  # Set to true to start with an already filled Kerberos database. Default: false
  dont_fail_on_existing_user: false

  # Optional random password generation settings. Default: values specified below
  # A random password is generated to initialize a user whose password is not yet
  # available, or when the user password is removed but the user still exists
  random_passwords:
    # Password length
    length: 32
    # If true, the generated password may contain some upper case letters
    with_upper_letters: true
    # The generated password will contain at least this number of upper case letters
    minimum_number_of_upper_letters: 1
    # If true, the generated password may contain some lower case letters
    with_lower_letters: true
    # The generated password will contain at least this number of lower case letters
    minimum_number_of_lower_letters: 1
    # If true, the generated password may contain some numbers
    with_numbers: true
    # The generated password will contain at least this number of numbers
    minimum_number_of_numbers: 1
    # If true, the generated password may contain some special chars
    with_special_chars: true
    # The generated password will contain at least this number of special chars
    minimum_number_of_special_chars: 1
    # If true, the generated password won't contain the chars specified in 'ambigous_chars_dictionary'
    avoid_ambigous_chars: false
    # The dictionary of ambiguous chars (case sensitive) that may be forbidden in password, even if some are present in other dictionaries
    ambigous_chars_dictionary: "lIO01"
    # The dictionary of letters (case-insensitive) allowed in password
    letters_dictionary: "abcdefghijklmnopqrstuvwxyz"
    # The dictionary of special chars allowed in password
    special_chars_dictionary: "!@#$%^&*"
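
A generator honoring these settings essentially draws the guaranteed minimum from each enabled character class, fills the remaining length from all classes, then shuffles. A minimal sketch of that approach (not the client's actual implementation):

```python
import random
import string

def generate_password(length=32, min_upper=1, min_lower=1,
                      min_digits=1, min_special=1,
                      special_chars="!@#$%^&*"):
    """Guarantee each per-class minimum, fill the rest from all classes, shuffle."""
    classes = [
        (string.ascii_uppercase, min_upper),
        (string.ascii_lowercase, min_lower),
        (string.digits, min_digits),
        (special_chars, min_special),
    ]
    # One pick per guaranteed character, class by class
    chars = [random.choice(alphabet) for alphabet, n in classes for _ in range(n)]
    # Fill up to the requested length from the union of all enabled classes
    allchars = "".join(alphabet for alphabet, _ in classes)
    chars += [random.choice(allchars) for _ in range(length - len(chars))]
    random.shuffle(chars)  # avoid a predictable class ordering
    return "".join(chars)
```

Disabling a class or avoiding the ambiguous-chars dictionary would simply remove the corresponding alphabet (or characters) before drawing.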

Datamodel

The following data types must be set up:

  • Users, requires the following attribute names:
    • login: the user login, that will be used as principal
  • UserPasswords, requires the following attribute names:
    • password: the password of the user

Obviously, the primary keys of Users and UserPasswords must match to be able to link login with password.

  datamodel:
    Users:
      hermesType: your_server_Users_type_name
      attrsmapping:
        login: login_on_server

    UserPasswords:
      hermesType: your_server_UserPasswords_type_name
      attrsmapping:
        password: password_on_server
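
Since Users and UserPasswords share primary keys, linking a login to its password is a plain join on that key. An illustrative sketch (hypothetical helper, not Hermes code):

```python
def pair_logins_with_passwords(users, passwords):
    """users / passwords: {pkey: attrs} dicts sharing primary keys.

    Returns a {login: password} mapping for every matched key; a password
    entry without a matching user is simply left out.
    """
    return {
        users[pkey]["login"]: pw["password"]
        for pkey, pw in passwords.items()
        if pkey in users
    }
```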


ldap

Description

This client will handle Users, Groups and UserPasswords events, and store data in an LDAP directory.

The local Datamodel keys will be used as LDAP attributes names, without any constraints, and it is possible to specify some Datamodel keys to ignore (typically the primary keys) that won’t be stored in LDAP directory with the attributesToIgnore setting.

The GroupsMembers events will only store data (typically the LDAP member attribute) in LDAP group entries, as it is possible to use LDAP overlays (dynlist, or the deprecated memberOf) to dynamically generate the corresponding data in user entries. You should consider reading the propagateUserDNChangeOnGroupMember setting documentation.

LDAP password hashes generation

If you need to generate LDAP password hashes, you may consider looking at the ldapPasswordHash attribute plugin.

Configuration

hermes-client-usersgroups_ldap:
    # MANDATORY: LDAP server URI
    uri: ldaps://ldap.example.com:636
    # MANDATORY: LDAP server credentials to use
    binddn: cn=account,dc=example,dc=com
    bindpassword: s3cReT_p4s5w0rD
    # MANDATORY: LDAP base DN
    basedn: dc=example,dc=com
    users_ou: ou=users,dc=example,dc=com
    groups_ou: ou=groups,dc=example,dc=com

    ssl: # Facultative
      # Path to PEM file with CA certs
      cafile: /path/to/INTERNAL-CA-chain.crt # Facultative
      # Path to file with PEM encoded cert for client cert authentication, requires keyfile
      certfile: /path/to/client.crt # Facultative
      # Path to file with PEM encoded key for client cert authentication, requires certfile
      keyfile: /path/to/client.pem # Facultative

    # MANDATORY: Name of DN attribute for Users, UserPasswords and Groups
    # You have to set up values for the three, even if you don't use some of the types
    dnAttributes:
      Users: uid
      UserPasswords: uid
      Groups: cn

    # Depending on group and group membership settings in LDAP, you may use another
    # attribute than the default 'member' attribute to store the DN of group member
    # Facultative. Default value: "member"
    groupMemberAttribute: member

    # Depending on group and group membership settings in LDAP, you will usually
    # want to propagate a user DN change on group member attributes. But sometimes
    # it may be handled by an overlay, e.g. with the memberOf overlay and the
    # memberof-refint/olcMemberOfRefint setting set to TRUE
    # If set to true, it requires 'groupsObjectclass' to be defined
    # Facultative. Default value: true
    propagateUserDNChangeOnGroupMember: true

    # If you've set 'propagateUserDNChangeOnGroupMember' to true,
    # you MUST indicate your group objectClass that will be used to search
    # your groups entries
    # Mandatory only if 'propagateUserDNChangeOnGroupMember' is true
    groupsObjectclass: groupOfNames

    # It is possible to set a default value for some attributes for Users, UserPasswords and Groups
    # The default value will be set on added and modified events if the local attribute has no value
    defaultValues:
      Groups:
        member: "" # Hack to allow creation of an empty group, because of the "MUST member" in schema

    # The local attributes listed here won't be stored in LDAP for Users, UserPasswords and Groups
    attributesToIgnore:
      Users:
        - user_pkey
      UserPasswords:
        - user_pkey
      Groups:
        - group_pkey
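
Given the dnAttributes and the users_ou/groups_ou settings above, an entry DN is the configured attribute prefixed to the relevant OU, and a user DN change must be rewritten in every group member attribute when propagateUserDNChangeOnGroupMember is true. An illustrative sketch of both operations (not the actual client code):

```python
def build_dn(dn_attr: str, value: str, ou: str) -> str:
    """Compose an entry DN from the configured DN attribute and its OU."""
    return f"{dn_attr}={value},{ou}"

def propagate_user_dn_change(groups: dict, member_attr: str,
                             old_dn: str, new_dn: str) -> dict:
    """Rewrite every occurrence of old_dn in the groups' member attribute."""
    return {
        gdn: {**attrs,
              member_attr: [new_dn if dn == old_dn else dn
                            for dn in attrs.get(member_attr, [])]}
        for gdn, attrs in groups.items()
    }
```

This is exactly the work a memberOf overlay with referential integrity enabled would do for you, which is why the setting can then be turned off.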

Datamodel

The following data types may be set up:

  • Users
  • UserPasswords: obviously requires Users, and requires the attribute user_pkey, corresponding to the primary key of Users
  • Groups
  • GroupsMembers: obviously requires Users and Groups, and requires the attributes user_pkey and group_pkey, corresponding to the primary keys of Users and Groups
  datamodel:
    Users:
      hermesType: your_server_Users_type_name
      attrsmapping:
        user_pkey:  user_primary_key_on_server
        uid: login_on_server
        # ...

    UserPasswords:
      hermesType: your_server_UserPasswords_type_name
      attrsmapping:
        user_pkey:  user_primary_key_on_server
        userPassword:  ldap_pwd_hash_list_on_server
        # ...

    Groups:
      hermesType: your_server_Groups_type_name
      attrsmapping:
        group_pkey:  group_primary_key_on_server
        cn:  group_name_on_server
        # ...

    GroupsMembers:
      hermesType: your_server_GroupsMembers_type_name
      attrsmapping:
        user_pkey:  user_primary_key_on_server
        group_pkey:  group_primary_key_on_server
        # ...


null

Description

This client will handle Users, Groups and UserPasswords events, but will do nothing except log them.

Configuration

Nothing to configure for the plugin.

hermes-client-usersgroups_null:

Datamodel

The following data types may be set up, without any specific constraint as nothing will be processed.

  • Users
  • UserPasswords
  • Groups
  • GroupsMembers
  datamodel:
    Users:
      hermesType: your_server_Users_type_name
      attrsmapping:
        attr1_client:  attr1_server
        # ...

    UserPasswords:
      hermesType: your_server_UserPasswords_type_name
      attrsmapping:
        attr1_client:  attr1_server
        # ...

    Groups:
      hermesType: your_server_Groups_type_name
      attrsmapping:
        attr1_client:  attr1_server
        # ...

    GroupsMembers:
      hermesType: your_server_GroupsMembers_type_name
      attrsmapping:
        attr1_client:  attr1_server
        # ...


messagebus consumers plugins

  • kafka: Receive events from an Apache Kafka server

  • sqlite: Receive events from an SQLite database


Subsections of messagebus consumers plugins

kafka

Description

This plugin allows hermes-client to receive events from an Apache Kafka server.

Configuration

It is possible to connect to Kafka server without authentication, or with SSL (TLS) authentication.

hermes:
  plugins:
    messagebus:
      kafka:
        settings:
          # MANDATORY: the Kafka server or servers list that can be used
          servers:
            - dummy.example.com:9093

          # Facultative: which Kafka API version to use. If unset, the
          # api version will be detected at startup and reported in the logs.
          # Don't set this directive unless you encounter some
          # "kafka.errors.NoBrokersAvailable: NoBrokersAvailable" errors raised
          # by a "self.check_version()" call.
          api_version: [2, 6, 0]

          # Facultative: enables SSL authentication. If set, the 3 options below
          # must be defined
          ssl:
            # MANDATORY: hermes-client cert file that will be used for
            # authentication
            certfile: /path/to/.hermes/dummy.crt
            # MANDATORY: hermes-client cert file private key
            keyfile: /path/to/.hermes/dummy.pem
            # MANDATORY: The PKI CA cert
            cafile: /path/to/.hermes/INTERNAL-CA-chain.crt

          # MANDATORY: the topic to receive events from
          topic: hermes
          # MANDATORY: the group_id to assign client to. Set what you want here.
          group_id: hermes-grp


sqlite

Description

This plugin allows hermes-client to receive events from an SQLite database.

Configuration

hermes:
  plugins:
    messagebus:
      sqlite:
        settings:
          # MANDATORY:
          uri: /path/to/.hermes/bus.sqlite


Run

You may start any hermes app (server, server-cli, client, client-cli) directly with the hermes.py launcher, specifying the app name as its first argument, or through a symlink.

In either case, the configuration will be searched for in the current working directory.

Running from launcher

# Server
/path/to/hermes.py server
# Server CLI
/path/to/hermes.py server-cli

# Client usersgroups_null
/path/to/hermes.py client-usersgroups_null
# Client usersgroups_null CLI
/path/to/hermes.py client-usersgroups_null-cli

If you prefer to avoid passing the hermes app name as first argument, you may symlink hermes.py to hermes-appname, e.g.:

ln -s hermes.py hermes-server
ln -s hermes.py hermes-server-cli
ln -s hermes.py hermes-client-usersgroups_null
ln -s hermes.py hermes-client-usersgroups_null-cli
# ...

and run them with:

# Server
/path/to/hermes-server
# Server CLI
/path/to/hermes-server-cli

# Client usersgroups_null
/path/to/hermes-client-usersgroups_null
# Client usersgroups_null CLI
/path/to/hermes-client-usersgroups_null-cli
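
A launcher supporting both invocation styles typically inspects its own program name before falling back to the first argument. An illustrative sketch of that resolution (not the actual hermes.py code):

```python
import os

def resolve_app_name(argv):
    """Return the hermes app name from a 'hermes-<app>' symlink name, else from argv[1]."""
    prog = os.path.basename(argv[0])
    if prog.startswith("hermes-"):
        return prog[len("hermes-"):]  # invoked through a symlink
    if len(argv) > 1:
        return argv[1]  # invoked as 'hermes.py <appname>'
    raise SystemExit("usage: hermes.py <appname>")
```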

Commands arguments

The server and clients don’t take any arguments, as they’re designed to be controlled through their CLI.

Once the server or client is started, you may ask for available CLI commands with -h or --help option.

For server:

$ ./hermes.py server-cli -h
usage: hermes-server-cli [-h] {initsync,update,quit,pause,resume,status} ...

Hermes Server CLI

positional arguments:
  {initsync,update,quit,pause,resume,status}
                        Sub-commands
    initsync            Send specific init message containing all data but passwords. Useful to fill new client
    update              Force update now, ignoring updateInterval
    quit                Stop server
    pause               Pause processing until 'resume' command is sent
    resume              Resume processing that has been paused with 'pause'
    status              Show server status

options:
  -h, --help            show this help message and exit

For a client:

$ ./hermes.py client-usersgroups_null-cli -h
usage: hermes-client-usersgroups_null-cli [-h] {quit,pause,resume,status} ...

Hermes client hermes-client-usersgroups_null CLI

positional arguments:
  {quit,pause,resume,status}
                        Sub-commands
    quit                Stop hermes-client-usersgroups_null
    pause               Pause processing until 'resume' command is sent
    resume              Resume processing that has been paused with 'pause'
    status              Show hermes-client-usersgroups_null status

options:
  -h, --help            show this help message and exit


Chapter 3

Maintenance

This section details common operating procedures.


Subsections of Maintenance

Server datamodel update

A data model is not fixed in time: it can evolve and therefore be updated, whether on the server or on one or more clients.

Each time the datamodel is modified on the server, its new version is propagated to the clients with its “public” data: each data type is included, with its primary key, the list of its attributes, and the list of its secret attributes. Then some consecutive events are emitted.
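
The “added”, “modified” and “removed” events referenced throughout this section all derive from the same principle described in the presentation: diffing the fresh dataset against the cached one, keyed by primary key. A toy illustration of such a diff (not the server's actual code):

```python
def diff_datasets(cached: dict, fresh: dict) -> list:
    """Compare two {pkey: attrs} snapshots and emit (event, pkey, payload) tuples."""
    events = []
    for pkey, attrs in fresh.items():
        if pkey not in cached:
            events.append(("added", pkey, attrs))
        elif attrs != cached[pkey]:
            # Only ship the attributes whose values actually changed
            changed = {k: v for k, v in attrs.items() if cached[pkey].get(k) != v}
            events.append(("modified", pkey, changed))
    for pkey in cached:
        if pkey not in fresh:
            events.append(("removed", pkey, None))
    return events
```

Seen through this lens, a datamodel change simply alters what the “fresh dataset” contains, which is why it triggers the event sequences described below.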

Add an attribute to an existing data type

  1. 👱 Add attribute to server datamodel, reload server
    • 💻 Emission of a dataschema event by the server
    • 💻 Emission of “modified” events for the concerned entries, with the added attribute and its value
    • 💻 Processing of dataschema event by clients: updating their schema. Processing incoming “modified” events: as the attribute is not declared yet in their datamodel, its value is ignored but stored in the complete cache
  2. 👱 Add attribute to clients datamodel, reload clients
    • 💻 Local datamodel update processing by clients: generation and processing of “modified” local events from the complete cache

or

  1. 👱 Add attribute to clients datamodel so that they can process it when it is added to the server datamodel, reload clients: ⚠️ datamodel warning “remote attributes don’t exist in current Dataschema”
  2. 👱 Add attribute to server datamodel, reload server
    • 💻 Emission of a dataschema event by the server
    • 💻 Emission of “modified” events for the concerned entries, with the added attribute and its value
  3. 💻 Processing of dataschema event by clients: updating their schema. ✅ No more datamodel warning. Processing incoming “modified” events

Remove an attribute from a data type

  1. 👱 Remove attribute from clients datamodel, reload clients
    • 💻 Local datamodel update processing by clients: generation and processing of consecutive “modified” local events
  2. 👱 Remove attribute from server datamodel, reload server
    • 💻 Emission of a dataschema event by the server
    • 💻 Emission of “modified” events for the concerned entries, with the removed attribute. They’ll be ignored by clients
  3. 💻 Processing of dataschema event by clients: updating their schema

or

  1. 👱 Remove attribute from server datamodel, reload server
    • 💻 Emission of a dataschema event by the server
    • 💻 Emission of “modified” events for the concerned entries, with the removed attribute
  2. 💻 Processing of dataschema event by clients: updating their schema. ⚠️ datamodel warning “remote attributes don’t exist in current Dataschema”. Processing incoming “modified” events
  3. 👱 Remove attribute from clients datamodel, reload clients: ✅ No more datamodel warning

Modify the value of an attribute (by changing its Jinja filter, or its remote attribute from the data source)

  1. 👱 Modify attribute in server datamodel, reload server
    • 💻 Emission of “modified” events for the concerned entries, with the modified attribute new values
  2. 💻 Processing incoming “modified” events

Add an existing attribute of a data type to secrets_attrs

  1. 👱 Modify secrets_attrs in server datamodel, reload server
    • 💻 Purging attribute from server cache
    • 💻 Emission of a dataschema event by the server
    • 💻 Emission of “modified” events for the concerned entries, with the “added” attribute and its values
  2. 💻 Processing of dataschema event by clients: updating their schema, purging attribute from their cache
    • 💻 Processing incoming “modified” events

Remove an existing attribute of a data type from secrets_attrs

  1. 👱 Modify secrets_attrs in server datamodel, reload server
    • 💻 Emission of a dataschema event by the server
    • 💻 Emission of “modified” events for the concerned entries, with the “added” attribute and its values
  2. 💻 Processing of dataschema event by clients: updating their schema
    • 💻 Processing incoming “modified” events

Add a new data type

  1. 👱 Add data type to server datamodel, reload server
    • 💻 Emission of a dataschema event by the server
    • 💻 Emission of “added” events for each entry of added data type
    • 💻 Processing of dataschema event by clients: updating their schema. Processing incoming “added” events: as the data type is not declared yet in their datamodel, its entries are ignored but stored in the complete cache
  2. 👱 Add data type to clients datamodel, reload clients
    • 💻 Local datamodel update processing by clients: generation and processing of “added” local events from the complete cache

or

  1. 👱 Add data type to clients datamodel so that they can process it when it is added to the server datamodel, reload clients: ⚠️ datamodel warning “remote types don’t exist in current Dataschema”
  2. 👱 Add data type to server datamodel, reload server
    • 💻 Emission of a dataschema event by the server
    • 💻 Emission of “added” events for each entry of added data type
  3. 💻 Processing of dataschema event by clients: updating their schema. ✅ No more datamodel warning. Processing incoming “added” events

Remove an existing data type

  1. 👱 Remove data type from clients datamodel, reload clients
    • 💻 Local datamodel update processing by clients: generation and processing of consecutive “removed” local events
    • 💻 Purging local cache files of removed data type
  2. 👱 Remove data type from server datamodel, reload server
    • 💻 Emission of “removed” events for each entry of removed data type
    • 💻 Purging cache files of removed data type
    • 💻 Emission of a dataschema event by the server
  3. 💻 Processing incoming “removed” events by clients: all are ignored
    • 💻 Processing of dataschema event by clients: updating their schema
    • 💻 Purging remote cache files of removed data type

or

  1. 👱 Remove data type from server datamodel, reload server
    • 💻 Emission of “removed” events for each entry of removed data type
    • 💻 Purging cache files of removed data type
    • 💻 Emission of a dataschema event by the server
  2. 💻 Processing incoming “removed” events by clients
    • 💻 Processing of dataschema event by clients: updating their schema. ⚠️ datamodel warning “remote types don’t exist in current Dataschema”
    • 💻 Purging remote cache files of removed data type
  3. 👱 Remove data type from clients datamodel, reload clients: ✅ No more datamodel warning
    • 💻 Purging local cache files of removed data type

Change the primary key attribute of a data type

DANGER - Here be dragons

This is the riskiest datamodel update, as there may be links between data types, using the primary key as a foreign key.
This means that you’ll need to update every data type at once, without missing anything.

You should really consider doing this update on a test environment before doing it in production, because if something fails, your clients could be permanently broken.

Prerequisites

The attribute(s) to use as the new primary key must already exist in your server datamodel, and their values must already have been propagated to and exist in the clients’ cache.

Trashbin retention may delay the Datamodel update

The new primary key MUST exist in every entry of its data type before updating the datamodel. If trashbin is enabled on some of your clients, the new primary key attribute might be missing from trashed entries.

The safest way to handle this is to add the attribute to your server datamodel, and delay the primary key change by at least one day plus as many days as the highest trashbin_retention value across all your clients.

If you don’t proceed this way, the clients will purge all trashed entries that don’t contain a value for the new primary key attribute(s), as if their trashbin_retention delay had expired.
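
The safe waiting period described above can be computed from your clients’ trashbin_retention settings. A minimal sketch (the retention values below are hypothetical):

```python
# Minimum number of days to wait after adding the new primary key attribute
# to the server datamodel, before actually changing the primary key
def safe_delay_days(trashbin_retentions: list[int]) -> int:
    # One day of margin, plus the highest trashbin_retention among all clients
    return 1 + max(trashbin_retentions, default=0)

# Three clients with hypothetical trashbin_retention values of 7, 30 and 14 days
print(safe_delay_days([7, 30, 14]))  # → 31
```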

Updating

  1. 👱 Update all data types in server datamodel, reload server
    • 💻 Updating changed primary keys in cache files on the server
    • 💻 Emission of a dataschema event by the server
    • 💻 Processing of dataschema event by clients: purging trashed entries that are missing the new primary key, updating their schema, updating changed primary keys in cache files and error queue

Boris Lechner 2025-05-05 e022507882f1c7d53ec4dc72b08922261dfdd25f

Client datamodel update

A data model is not fixed in time, it can evolve and therefore be updated, whether from the server or on one or more clients.

Each time the datamodel is modified on a client, the client will generate appropriate local events to reflect the data changes on targets.
It may emit datamodel warnings if some remote data types or attributes are set in its datamodel but don’t exist in the current dataschema received from hermes-server.

Add an attribute to an existing data type

  1. 👱 Add attribute to clients datamodel, reload clients
    • 💻 Local datamodel update processing by clients: generation and processing of “modified” local events from the complete cache

Remove an attribute from a data type

  1. 👱 Remove attribute from clients datamodel, reload clients
    • 💻 Local datamodel update processing by clients: generation and processing of consecutive “modified” local events

Modify the value of an attribute (by changing its Jinja filter, or its remote attribute from the data source)

  1. 👱 Modify attribute in clients datamodel, reload clients
    • 💻 Local datamodel update processing by clients: generation and processing of consecutive “modified” local events

Add a new data type

If its hermesType already exists in the dataschema

  1. 👱 Add data type to clients datamodel, reload clients
    • 💻 Local datamodel update processing by clients: generation and processing of “added” local events from the complete cache

If its hermesType doesn’t exist in the dataschema yet

  1. 👱 Add data type to clients datamodel so that they can process it once it is added to the server datamodel, reload clients: ⚠️ datamodel warning “remote types don’t exist in current Dataschema”
  2. 👱 Add data type to server datamodel, reload server
    • 💻 Emission of a dataschema event by the server
    • 💻 Emission of “added” events for each entry of added data type
  3. 💻 Processing of dataschema event by clients: updating their schema. ✅ No more datamodel warning. Processing incoming “added” events

Remove an existing data type

  1. 👱 Remove data type from clients datamodel, reload clients
    • 💻 Local datamodel update processing by clients: generation and processing of consecutive “removed” local events
    • 💻 Purging local cache files of removed data type


Chapter 4

Examples

This section contains some examples of different use cases, and their config files.


Subsections of Examples

01. Single datasource

Context

In this example, we have a unique Datasource (an Oracle database) that we’ll use to fill an LDAP server with typical user, password, group and group-membership data.

Oracle schema

classDiagram
    direction BT
    ORA_USERPASSWORDS <-- ORA_USERS
    ORA_GROUPSMEMBERS <-- ORA_USERS
    ORA_GROUPSMEMBERS <-- ORA_GROUPS
    class ORA_USERS{
      USER_ID - NUMBER, NOT NULL
      LOGIN - VARCHAR2
      FIRSTNAME - VARCHAR2
      LASTNAME - VARCHAR2
      EMAIL - VARCHAR2
    }
    class ORA_USERPASSWORDS{
      USER_ID - NUMBER, NOT NULL
      PASSWORD_ENCRYPTED - RAW
      LDAP_HASHES - VARCHAR2
    }
    class ORA_GROUPS{
      GROUP_ID - NUMBER, NOT NULL
      GROUP_NAME - VARCHAR2
      GROUP_DESC - VARCHAR2
    }
    class ORA_GROUPSMEMBERS{
      USER_ID - NUMBER, NOT NULL
      GROUP_ID - NUMBER, NOT NULL
    }

hermes-server-config

hermes:
  cache:
    dirpath: /path/to/.hermes/hermes-server/cache
    enable_compression: true
    backup_count: 1
  cli_socket:
    path: /path/to/.hermes/hermes-server.sock # Facultative, required to use cli
    owner: user_login # Facultative
    group: group_name # Facultative
    # Facultative, '0600' will be used by default.
    # The value MUST be prefixed by a 0 to indicate that it's an octal integer
    mode: 0660
  logs:
    logfile: /path/to/.hermes/hermes-server/logs/hermes-server.log
    backup_count: 31 # 1 month
    verbosity: info
  mail:
    server: dummy.example.com
    from: Hermes Server <no-reply@example.com>
    to:
      - user@example.com
  plugins:
    # Attribute transform plugins (jinja filters)
    attributes:
      ldapPasswordHash:
        settings:
          default_hash_types:
            - SMD5
            - SSHA
            - SSHA256
            - SSHA512

      crypto_RSA_OAEP:
        settings:
          keys:
            decrypt_from_datasource:
              hash: SHA256
              # WARNING - THIS KEY IS WEAK AND PUBLIC, NEVER USE IT
              rsa_key: |-
                -----BEGIN RSA PRIVATE KEY-----
                MIGrAgEAAiEAstltWwDzmtSSHi7lfKqtUIO4dI8aX/EAopNdR/cWXH8CAwEAAQIh
                AKfflFjGNOJQwvJX3Io+/juxO+HFd7SRC++zBD9paZqZAhEA5OtjZQUapRrV/aC5
                NXFsswIRAMgBtgpz+t0FxyEXdzlcTwUCEHU6WZ8M2xU7xePpH49Ps2MCEQC+78s+
                /WvfNtXcRI+gJfyVAhAjcIWzHC5q4wzgL7psbPGy
                -----END RSA PRIVATE KEY-----                

    # SERVER ONLY - Sources used to fetch data. At least one must be defined
    datasources:
      datasource_of_example1: # Source name. Use whatever you want. Will be used in datamodel
        type: oracle # Source type. A datasource plugin with this name must exist
        settings: # Settings of current source
          login: HERMES_DUMMY
          password: "DuMmY_p4s5w0rD"
          port: 1234
          server: dummy.example.com
          sid: DUMMY

    messagebus:
      kafka:
        settings:
          servers:
            - dummy.example.com:9093
          ssl:
            certfile: /path/to/.hermes/dummy.crt
            keyfile: /path/to/.hermes/dummy.pem
            cafile: /path/to/.hermes/INTERNAL-CA-chain.crt
          topic: hermes

hermes-server:
  updateInterval: 60 # Interval between two data updates, in seconds

  # The declaration order of data types is important:
  # - add/modify events will be processed in the declaration order
  # - remove events will be processed in the reversed declaration order
  datamodel:
    SRVGroups: # Settings for SRVGroups data type
      primarykeyattr: srv_group_id # Attribute name that will be used as primary key
      # Facultative template of object string representation that will be used in logs
      toString: "<SRVGroups[{{ srv_group_id }}, {{ srv_group_name | default('#UNDEF#') }}]>"
      sources: # datasource(s) to use to fetch data. Usually one, but several could be used
        datasource_of_example1: # The source name set in hermes.plugins.datasources
          # The query to fetch data.
          # 'type' is mandatory and indicates to the plugin which flavor of query to run
          #   Possible 'type' values are 'add', 'delete', 'fetch' and 'modify'
          # 'query' is the query to send
          # 'vars' is a dict with vars to use (and sanitize!) in the query
          #
          # According to source type, 'query' and 'vars' may be facultative.
          # A Jinja template can be inserted in 'query' and 'vars' values to avoid wildcards
          # and manually typing the attribute list, or to filter the query using a cached value.
          #
          # Jinja vars available are [REMOTE_ATTRIBUTES, CACHED_VALUES].
          # See documentation for details:
          # https://hermes.insa-strasbourg.fr/en/setup/configuration/hermes-server/#hermes-server.datamodel.data-type-name.sources.datasource-name.fetch
          fetch:
            type: fetch
            query: >-
              SELECT {{ REMOTE_ATTRIBUTES | join(', ') }}
              FROM ORA_GROUPS              
          attrsmapping:
            srv_group_id: GROUP_ID
            srv_group_name: GROUP_NAME
            srv_group_desc: GROUP_DESC

    SRVUsers: # Settings for SRVUsers data type
      primarykeyattr: srv_user_id # Attribute name that will be used as primary key
      # Facultative template of object string representation that will be used in logs
      toString: "<SRVUsers[{{ srv_user_id }}, {{ srv_login | default('#UNDEF#') }}]>"
      sources: # datasource(s) to use to fetch data. Usually one, but several could be used
        datasource_of_example1: # The source name set in hermes.plugins.datasources
          # The query to fetch data.
          # 'type' is mandatory and indicates to the plugin which flavor of query to run
          #   Possible 'type' values are 'add', 'delete', 'fetch' and 'modify'
          # 'query' is the query to send
          # 'vars' is a dict with vars to use (and sanitize!) in the query
          #
          # According to source type, 'query' and 'vars' may be facultative.
          # A Jinja template can be inserted in 'query' and 'vars' values to avoid wildcards
          # and manually typing the attribute list, or to filter the query using a cached value.
          #
          # Jinja vars available are [REMOTE_ATTRIBUTES, CACHED_VALUES].
          # See documentation for details:
          # https://hermes.insa-strasbourg.fr/en/setup/configuration/hermes-server/#hermes-server.datamodel.data-type-name.sources.datasource-name.fetch
          fetch:
            type: fetch
            query: >-
              SELECT {{ REMOTE_ATTRIBUTES | join(', ') }}
              FROM ORA_USERS              

          attrsmapping:
            srv_user_id: USER_ID
            srv_login: LOGIN
            # Ensure the first letter of each name is uppercase, and the rest lowercase
            srv_firstname: "{{ FIRSTNAME | title}}"
            srv_lastname: "{{ LASTNAME | title}}"
            srv_mail: EMAIL

    SRVUserPasswords: # Settings for SRVUserPasswords data type
      primarykeyattr: srv_user_id # Attribute name that will be used as primary key

      # Integrity constraints between datamodel types, in Jinja.
      # WARNING: can be very slow, so keep them as simple as possible, and focused
      # upon primary keys
      # Jinja vars available are '_SELF' (the current object) and every declared type.
      # For each declared "typename", two vars are available:
      # - typename_pkeys: a set containing every primary key
      # - typename: a list of dicts, one per entry
      # https://hermes.insa-strasbourg.fr/en/setup/configuration/hermes-server/#hermes-server.datamodel.data-type-name.integrity_constraints
      integrity_constraints:
        - "{{ _SELF.srv_user_id in SRVUsers_pkeys }}"
      
      sources: # datasource(s) to use to fetch data. Usually one, but several could be used
        datasource_of_example1: # The source name set in hermes.plugins.datasources
          # The query to fetch data.
          # 'type' is mandatory and indicates to the plugin which flavor of query to run
          #   Possible 'type' values are 'add', 'delete', 'fetch' and 'modify'
          # 'query' is the query to send
          # 'vars' is a dict with vars to use (and sanitize!) in the query
          #
          # According to source type, 'query' and 'vars' may be facultative.
          # A Jinja template can be inserted in 'query' and 'vars' values to avoid wildcards
          # and manually typing the attribute list, or to filter the query using a cached value.
          #
          # Jinja vars available are [REMOTE_ATTRIBUTES, CACHED_VALUES].
          # See documentation for details:
          # https://hermes.insa-strasbourg.fr/en/setup/configuration/hermes-server/#hermes-server.datamodel.data-type-name.sources.datasource-name.fetch
          fetch:
            type: fetch
            query: >-
              SELECT p.{{ REMOTE_ATTRIBUTES | join(', p.') }}
              FROM ORA_USERPASSWORDS p              

          # For each entry successfully processed, we'll remove PASSWORD_ENCRYPTED
          # and store the freshly computed LDAP_HASHES.
          #
          # Facultative. The query to run each time an item of the current data type
          # has been processed without errors.
          # 'type' is mandatory and indicates to the plugin which flavor of query to run
          #   Possible 'type' values are 'add', 'delete', 'fetch' and 'modify'
          # 'query' is the query to send
          # 'vars' is a dict with vars to use (and sanitize!) in the query
          #
          # According to source type, 'query' and 'vars' may be facultative.
          # A Jinja template can be inserted in 'query' and 'vars' values to avoid wildcards
          # and manually typing the attribute list, or to filter the query using a cached value.
          #
          # Jinja vars available are [REMOTE_ATTRIBUTES, ITEM_CACHED_VALUES, ITEM_FETCHED_VALUES].
          # See documentation for details:
          # https://hermes.insa-strasbourg.fr/en/setup/configuration/hermes-server/#hermes-server.datamodel.data-type-name.sources.datasource-name.commit_one
          commit_one:
            type: modify
            query: >-
              UPDATE ORA_USERPASSWORDS
              SET
                PASSWORD_ENCRYPTED = NULL,
                LDAP_HASHES = :ldap_hashes
              WHERE USER_ID = :user_id              

            vars:
              user_id: "{{ ITEM_FETCHED_VALUES.srv_user_id }}"
              ldap_hashes: "{{ ';'.join(ITEM_FETCHED_VALUES.srv_password_ldap) }}"

          attrsmapping:
            srv_user_id: USER_ID
            # Decipher PASSWORD_ENCRYPTED value to generate the LDAP hashes.
            srv_password_ldap: >-
              {{
                (
                  PASSWORD_ENCRYPTED
                  | crypto_RSA_OAEP('decrypt_from_datasource')
                  | ldapPasswordHash
                )
                | default(None if LDAP_HASHES is None else LDAP_HASHES.split(';'))
              }}              

    SRVGroupsMembers:
      # Attribute names that will be used as primary key: here it is a tuple
      primarykeyattr: [srv_group_id, srv_user_id]
      # Foreign keys declaration between data types
      # https://hermes.insa-strasbourg.fr/en/setup/configuration/hermes-server/#hermes-server.datamodel.data-type-name.foreignkeys
      foreignkeys:
        srv_group_id:
          from_objtype: SRVGroups
          from_attr: srv_group_id
        srv_user_id:
          from_objtype: SRVUsers
          from_attr: srv_user_id
      # Integrity constraints between datamodel types, in Jinja.
      # WARNING: can be very slow, so keep them as simple as possible, and focused
      # upon primary keys
      # Jinja vars available are '_SELF' (the current object) and every declared type.
      # For each declared "typename", two vars are available:
      # - typename_pkeys: a set containing every primary key
      # - typename: a list of dicts, one per entry
      # https://hermes.insa-strasbourg.fr/en/setup/configuration/hermes-server/#hermes-server.datamodel.data-type-name.integrity_constraints
      integrity_constraints:
        - "{{ _SELF.srv_user_id in SRVUsers_pkeys and _SELF.srv_group_id in SRVGroups_pkeys }}"
      sources: # datasource(s) to use to fetch data. Usually one, but several could be used
        datasource_of_example1: # The source name set in hermes.plugins.datasources
          # The query to fetch data.
          # 'type' is mandatory and indicates to the plugin which flavor of query to run
          #   Possible 'type' values are 'add', 'delete', 'fetch' and 'modify'
          # 'query' is the query to send
          # 'vars' is a dict with vars to use (and sanitize!) in the query
          #
          # According to source type, 'query' and 'vars' may be facultative.
          # A Jinja template can be inserted in 'query' and 'vars' values to avoid wildcards
          # and manually typing the attribute list, or to filter the query using a cached value.
          #
          # Jinja vars available are [REMOTE_ATTRIBUTES, CACHED_VALUES].
          # See documentation for details:
          # https://hermes.insa-strasbourg.fr/en/setup/configuration/hermes-server/#hermes-server.datamodel.data-type-name.sources.datasource-name.fetch
          fetch:
            type: fetch
            query: >-
              SELECT {{ REMOTE_ATTRIBUTES | join(', ') }}
              FROM ORA_GROUPSMEMBERS              
          attrsmapping:
            srv_user_id: USER_ID
            srv_group_id: GROUP_ID
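
To illustrate the fetch queries above: the REMOTE_ATTRIBUTES Jinja variable contains the datasource-side attribute names declared in attrsmapping, and the join filter expands them into the column list. A plain-Python equivalent of what the server renders for the SRVGroups fetch query (attribute names taken from the config above):

```python
# REMOTE_ATTRIBUTES is built from the right-hand side of attrsmapping
remote_attributes = ["GROUP_ID", "GROUP_NAME", "GROUP_DESC"]

# Equivalent of: SELECT {{ REMOTE_ATTRIBUTES | join(', ') }} FROM ORA_GROUPS
query = f"SELECT {', '.join(remote_attributes)} FROM ORA_GROUPS"
print(query)  # → SELECT GROUP_ID, GROUP_NAME, GROUP_DESC FROM ORA_GROUPS
```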

hermes-client-usersgroups_ldap-config

hermes:
  cache:
    dirpath: /path/to/.hermes/hermes-client-usersgroups_ldap/cache
  cli_socket:
    path: /path/to/.hermes/hermes-client-usersgroups_ldap.sock
  logs:
    logfile: /path/to/.hermes/hermes-client-usersgroups_ldap/logs/hermes-client-usersgroups_ldap.log
    verbosity: info
  mail:
    server: dummy.example.com
    from: hermes-client-usersgroups_ldap <no-reply@example.com>
    to:
      - user@example.com
  plugins:
    messagebus:
      kafka:
        settings:
          servers:
            - dummy.example.com:9093
          ssl:
            certfile: /path/to/.hermes/dummy.crt
            keyfile: /path/to/.hermes/dummy.pem
            cafile: /path/to/.hermes/INTERNAL-CA-chain.crt
          topic: hermes
          group_id: hermes-grp

hermes-client-usersgroups_ldap:
    uri: ldaps://ldap.example.com:636
    binddn: cn=account,dc=example,dc=com
    bindpassword: s3cReT_p4s5w0rD
    basedn: dc=example,dc=com
    users_ou: ou=users,dc=example,dc=com
    groups_ou: ou=groups,dc=example,dc=com
    
    # MANDATORY: Name of the DN attribute for Users, UserPasswords and Groups
    # You have to set values for all three, even if you don't use some of the types
    dnAttributes:
      Users: uid
      UserPasswords: uid
      Groups: cn
    
    propagateUserDNChangeOnGroupMember: true
    groupsObjectclass: groupOfNames

    # It is possible to set a default value for some attributes for Users,
    # UserPasswords and Groups. The default value will be set on added and modified
    # events if the local attribute has no value
    defaultValues:
      # Hack to allow creation of an empty group, because of the "MUST member" in schema
      Groups:
        member: ""

    # The local attributes listed here won't be stored in LDAP for Users,
    # UserPasswords and Groups
    attributesToIgnore:
      Users:
        - user_pkey
      UserPasswords:
        - user_pkey
      Groups:
        - group_pkey

hermes-client:
  # Autoremediation policy to use in the error queue for events concerning the same object
  # - "disabled" : no autoremediation, events are stacked as is (default)
  # - "conservative" :
  #   - merge an added event with a following modified event
  #   - merge two successive modified events
  # - "maximum" :
  #   - merge an added event with a following modified event
  #   - merge two successive modified events
  #   - delete both events when an added event is followed by a removed event
  #   - merge a removed event followed by an added event in a modified event
  #   - delete a modified event when it is followed by a removed event
  autoremediation: conservative

  datamodel:
    Users:
      hermesType: SRVUsers
      # Facultative template of object string representation that will be used in logs
      toString: "<Users[{{ user_pkey }}, {{ uid | default('#UNDEF#') }}]>"
      attrsmapping:
        user_pkey: srv_user_id
        uid: srv_login
        givenname: srv_firstname
        sn: srv_lastname
        mail: srv_mail
        # Compose the displayname with two other attributes
        displayname: "{{ srv_firstname ~ ' ' ~  srv_lastname }}"
        #
        # Static values
        # Defining them here instead of in defaultValues allows propagating
        # changes to each entry
        #
        objectclass: "{{ ['person', 'inetOrgPerson', 'eduPerson'] }}"

    UserPasswords:
      hermesType: SRVUserPasswords
      attrsmapping:
        user_pkey: srv_user_id
        userPassword: srv_password_ldap

    Groups:
      hermesType: SRVGroups
      toString: "<Groups[{{ group_pkey }}, {{ cn | default('#UNDEF#') }}]>"
      attrsmapping:
        group_pkey: srv_group_id
        cn: srv_group_name
        description: srv_group_desc
        #
        # Static values
        # Defining them here instead of in defaultValues allows propagating
        # changes to each entry
        #
        objectclass: "{{ ['groupOfNames'] }}"

    GroupsMembers:
      hermesType: SRVGroupsMembers
      attrsmapping:
        # 'user_pkey' and 'group_pkey' keys can't be renamed
        user_pkey: srv_user_id
        group_pkey: srv_group_id
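
The dnAttributes setting in the config above determines how the client builds each entry's DN: the configured attribute becomes the RDN under the matching OU. A hypothetical helper sketching the idea (the real client builds DNs internally, and real-world values also need LDAP DN escaping):

```python
def build_dn(dn_attr: str, value: str, ou: str) -> str:
    # RDN built from the configured dnAttributes value, under the configured OU.
    # Note: special characters in 'value' would need LDAP DN escaping.
    return f"{dn_attr}={value},{ou}"

print(build_dn("uid", "jdoe", "ou=users,dc=example,dc=com"))
# → uid=jdoe,ou=users,dc=example,dc=com
```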

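The autoremediation merge rules described in the config comments above can be sketched as a pairwise reduction over the error queue. This is an illustrative simplification of the “maximum” policy, not the actual Hermes implementation (events are simplified to dicts, and attribute merging is naive):

```python
def remediate(events):
    # Reduce successive events concerning the same object, pair by pair
    out = []
    for ev in events:
        if not out:
            out.append(ev)
            continue
        pair = (out[-1]["type"], ev["type"])
        if pair in (("added", "modified"), ("modified", "modified")):
            # merge the modified attributes into the previous event
            out[-1]["attrs"].update(ev["attrs"])
        elif pair == ("added", "removed"):
            out.pop()  # both events cancel out
        elif pair == ("removed", "added"):
            # merge removed followed by added into a single modified event
            out[-1] = {"type": "modified", "attrs": ev["attrs"]}
        elif pair == ("modified", "removed"):
            out[-1] = ev  # the removed event supersedes the modification
        else:
            out.append(ev)
    return out

queue = [
    {"type": "added", "attrs": {"cn": "grp1"}},
    {"type": "modified", "attrs": {"description": "Group 1"}},
]
print(remediate(queue))
# → [{'type': 'added', 'attrs': {'cn': 'grp1', 'description': 'Group 1'}}]
```

With the "conservative" policy, only the first two rules (the merges of added+modified and modified+modified) would apply.
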
Attributes flow

flowchart LR
  subgraph Oracle
    direction LR
    ORA_GROUPS
    ORA_USERS
    ORA_USERPASSWORDS
    ORA_GROUPSMEMBERS
  end

  subgraph ORA_GROUPS
    direction LR
    ORA_GROUPS_GROUP_ID["GROUP_ID"]
    ORA_GROUPS_GROUP_NAME["GROUP_NAME"]
    ORA_GROUPS_GROUP_DESC["GROUP_DESC"]
  end

  subgraph ORA_USERS
    direction LR
    ORA_USERS_USER_ID["USER_ID"]
    ORA_USERS_LOGIN["LOGIN"]
    ORA_USERS_FIRSTNAME["FIRSTNAME"]
    ORA_USERS_LASTNAME["LASTNAME"]
    ORA_USERS_EMAIL["EMAIL"]
  end

  subgraph ORA_USERPASSWORDS
    direction LR
    ORA_USERPASSWORDS_USER_ID["USER_ID"]
    ORA_USERPASSWORDS_PASSWORD_ENCRYPTED["PASSWORD_ENCRYPTED"]
    ORA_USERPASSWORDS_LDAP_HASHES["LDAP_HASHES"]
  end

  subgraph ORA_GROUPSMEMBERS
    direction LR
    ORA_GROUPSMEMBERS_USER_ID["USER_ID"]
    ORA_GROUPSMEMBERS_GROUP_ID["GROUP_ID"]
  end



  subgraph hermes-server
    direction LR
    SRVGroups
    SRVUsers
    SRVUserPasswords
    SRVGroupsMembers
  end

  subgraph SRVGroups
    direction LR
    SRVGroups_srv_group_id["srv_group_id"]
    SRVGroups_srv_group_name["srv_group_name"]
    SRVGroups_srv_group_desc["srv_group_desc"]
  end
  ORA_GROUPS_GROUP_ID --> SRVGroups_srv_group_id
  ORA_GROUPS_GROUP_NAME --> SRVGroups_srv_group_name
  ORA_GROUPS_GROUP_DESC --> SRVGroups_srv_group_desc

  subgraph SRVUsers
    direction LR
    SRVUsers_srv_user_id["srv_user_id"]
    SRVUsers_srv_login["srv_login"]
    SRVUsers_srv_firstname["srv_firstname"]
    SRVUsers_srv_lastname["srv_lastname"]
    SRVUsers_srv_mail["srv_mail"]
  end
  ORA_USERS_USER_ID --> SRVUsers_srv_user_id
  ORA_USERS_LOGIN --> SRVUsers_srv_login
  ORA_USERS_FIRSTNAME -->|'title' Jinja filter| SRVUsers_srv_firstname
  ORA_USERS_LASTNAME -->|'title' Jinja filter| SRVUsers_srv_lastname
  ORA_USERS_EMAIL --> SRVUsers_srv_mail

  subgraph SRVUserPasswords
    direction LR
    SRVUserPasswords_srv_user_id["srv_user_id"]
    SRVUserPasswords_srv_password_ldap["srv_password_ldap"]
  end
  ORA_USERPASSWORDS_USER_ID --> SRVUserPasswords_srv_user_id
  ORA_USERPASSWORDS_PASSWORD_ENCRYPTED -->|"'crypto_RSA_OAEP | ldapPasswordHash' Jinja filter"| SRVUserPasswords_srv_password_ldap
  ORA_USERPASSWORDS_LDAP_HASHES <-->|LDAP_HASHES is filled by, or provides its value| SRVUserPasswords_srv_password_ldap

  subgraph SRVGroupsMembers
    direction LR
    SRVGroupsMembers_srv_user_id["srv_user_id"]
    SRVGroupsMembers_srv_group_id["srv_group_id"]
  end
  ORA_GROUPSMEMBERS_USER_ID --> SRVGroupsMembers_srv_user_id
  ORA_GROUPSMEMBERS_GROUP_ID --> SRVGroupsMembers_srv_group_id



  subgraph hermes-client-usersgroups_ldap
    direction LR
    ClientGroups
    ClientUsers
    ClientUserPasswords
    ClientGroupsMembers
  end

  subgraph ClientGroups
    direction LR
    ClientGroups_group_pkey["group_pkey"]
    ClientGroups_cn["cn"]
    ClientGroups_description["description"]
    ClientGroups_objectclass["objectclass"]
  end
  SRVGroups_srv_group_id --> ClientGroups_group_pkey
  SRVGroups_srv_group_name --> ClientGroups_cn
  SRVGroups_srv_group_desc --> ClientGroups_description
  
  subgraph ClientUsers
    direction LR
    ClientUsers_user_pkey["user_pkey"]
    ClientUsers_uid["uid"]
    ClientUsers_givenname["givenname"]
    ClientUsers_sn["sn"]
    ClientUsers_mail["mail"]
    ClientUsers_displayname["displayname"]
    ClientUsers_objectclass["objectclass"]
  end
  SRVUsers_srv_user_id --> ClientUsers_user_pkey
  SRVUsers_srv_login --> ClientUsers_uid
  SRVUsers_srv_firstname --> ClientUsers_givenname
  SRVUsers_srv_firstname --> ClientUsers_displayname
  SRVUsers_srv_lastname --> ClientUsers_displayname
  SRVUsers_srv_lastname --> ClientUsers_sn
  SRVUsers_srv_mail --> ClientUsers_mail
  
  subgraph ClientUserPasswords
    direction LR
    ClientUserPasswords_user_pkey["user_pkey"]
    ClientUserPasswords_userPassword["userPassword"]
  end
  SRVUserPasswords_srv_user_id --> ClientUserPasswords_user_pkey
  SRVUserPasswords_srv_password_ldap --> ClientUserPasswords_userPassword


  subgraph ClientGroupsMembers
    direction LR
    ClientGroupsMembers_user_pkey["user_pkey"]
    ClientGroupsMembers_group_pkey["group_pkey"]
  end
  SRVGroupsMembers_srv_user_id --> ClientGroupsMembers_user_pkey
  SRVGroupsMembers_srv_group_id --> ClientGroupsMembers_group_pkey




  subgraph LDAP
    direction LR
    LDAPGroups
    LDAPUsers
  end

  subgraph LDAPGroups
    direction LR
    LDAPGroups_cn["cn"]
    LDAPGroups_description["description"]
    LDAPGroups_objectclass["objectclass"]
    LDAPGroups_member["member"]
  end
  ClientGroups_cn --> LDAPGroups_cn
  ClientGroups_description --> LDAPGroups_description
  ClientGroups_objectclass --> LDAPGroups_objectclass
  ClientGroupsMembers_user_pkey -->|converted to user DN| LDAPGroups_member
  ClientGroupsMembers_group_pkey -->|converted to group DN| LDAPGroups_member

  subgraph LDAPUsers
    direction LR
    LDAPUsers_uid["uid"]
    LDAPUsers_givenname["givenname"]
    LDAPUsers_displayname["displayname"]
    LDAPUsers_sn["sn"]
    LDAPUsers_mail["mail"]
    LDAPUsers_objectclass["objectclass"]
    LDAPUsers_userPassword["userPassword"]
  end
  ClientUsers_uid --> LDAPUsers_uid
  ClientUsers_givenname --> LDAPUsers_givenname
  ClientUsers_displayname --> LDAPUsers_displayname
  ClientUsers_sn --> LDAPUsers_sn
  ClientUsers_mail --> LDAPUsers_mail
  ClientUsers_objectclass --> LDAPUsers_objectclass
  ClientUserPasswords_userPassword --> LDAPUsers_userPassword

  classDef global fill:#fafafa,stroke-dasharray: 5 5
  class Oracle,hermes-server,hermes-client-usersgroups_ldap,LDAP global

Boris Lechner 2025-05-05 e022507882f1c7d53ec4dc72b08922261dfdd25f

Chapter 5

Development

This section contains the documentation to get started with plugin development and Hermes “core” contribution.

Logging

A Logger instance is available through the variable “__hermes__.logger”. As this variable is declared as a builtin, it is always available and doesn’t require any import or call to logging.getLogger().
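
For illustration, here is a rough sketch of the mechanism (the _HermesBuiltin class and the startup registration are assumptions for this example, not Hermes internals):

```python
import builtins
import logging

# Assumed mechanism: an object carrying a Logger is registered once,
# at startup, as a builtin named "__hermes__".
class _HermesBuiltin:
    logger = logging.getLogger("hermes")

builtins.__hermes__ = _HermesBuiltin()

# From anywhere else (plugin code included), no import is needed:
__hermes__.logger.info("hello from a plugin")
```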

Contributing

Before submitting a pull request to merge some code in Hermes, you should ensure that:

  1. it provides docstrings and type hints
  2. it has been formatted with black
  3. it is compliant with Flake8
  4. your code doesn’t break the test suite

tox may be used to validate the last three conditions, by running one of the commands below:

# Testing sequentially (slow but more verbose) only on default python version available on your system
tox run -e linters,tests
# Testing in parallel (faster, but without details) only on default python version available on your system
tox run-parallel -e linters,tests

# Testing sequentially (slow but more verbose) on all compatible python versions - they must be available on your system
tox run
# Testing in parallel (faster, but without details) on all compatible python versions - they must be available on your system
tox run-parallel
Tip

tox >= 4 must be installed, but is probably available in your distribution’s repositories.


Subsections of Development

Plugins

Whatever its type, a plugin is always a folder named ‘plugin_name’ containing at least the 4 files described below:

Plugin source code

Hermes will try to import the plugin_name.py file. It is possible to split the plugin code into several files and folders, but the plugin will always be imported from this file.

For details about plugin API, please consult the following sections:

Tip

Some helper modules are available in helpers:

  • helpers.command: to run local commands on client’s host
  • helpers.ldaphashes: to compute LDAP hashes from plaintext passwords
  • helpers.randompassword: to generate random passwords with specific constraints

Plugin configuration schema

Depending on the plugin type, the configuration schema file slightly differs.

Plugin configuration schema for clients plugins

Hermes will try to validate the plugin settings with a Cerberus validation schema specified in a YAML file: config-schema-client-plugin_name.yml.

The clients plugins validation file must either be empty or contain only one top-level key, which must be the plugin name prefixed with hermes-client-.

Example for plugin name usersgroups_flatfiles_emails_of_groups:

# https://docs.python-cerberus.org/validation-rules.html

hermes-client-usersgroups_flatfiles_emails_of_groups:
  type: dict
  required: true
  empty: false
  schema:
    destDir:
      type: string
      required: true
      empty: false
    onlyTheseGroups:
      type: list
      required: true
      nullable: false
      default: []
      schema:
        type: string

Plugin configuration schema for other plugin types

Hermes will try to validate the plugin settings with a Cerberus validation schema specified in a YAML file: config-schema-plugin-plugin_name.yml.

Even if the plugin doesn’t require any configuration, it still requires an empty validation file.

Example for plugin name ldapPasswordHash:

# https://docs.python-cerberus.org/validation-rules.html

default_hash_types:
  type: list
  required: false
  nullable: false
  empty: true
  default: []
  schema:
    type: string
    allowed:
      - MD5
      - SHA
      - SMD5
      - SSHA
      - SSHA256
      - SSHA512

Plugin README.md

The documentation should be written in README.md and should contain the following sections:

# `plugin_name` attribute plugin

## Description

## Configuration

## Usage
Only for `attributes` and `datasources` plugins.

## Datamodel
Only for `clients` plugins.

Plugin requirements.txt

Even if the plugin has no Python requirements, please create a pip requirements.txt file starting with a comment containing the plugin path and ending with an empty line.

Example:

# plugins/attributes/crypto_RSA_OAEP
pycryptodomex==3.21.0
 


Subsections of Plugins

Attributes

Description

An attribute plugin is simply an AbstractAttributePlugin subclass designed to implement a Jinja filter.

Requirements

Here is a commented minimal plugin implementation that won’t do anything.

#!/usr/bin/env python3
# -*- coding: utf-8 -*-

# Required to subclass AbstractAttributePlugin
from lib.plugins import AbstractAttributePlugin

# Required to use the Jinja Undefined state
from jinja2 import Undefined

# Required for type hints
from typing import Any

# Required to indicate to hermes which class it has to instantiate
HERMES_PLUGIN_CLASSNAME = "MyPluginClassName"

class MyPluginClassName(AbstractAttributePlugin):
    def __init__(self, settings: dict[str, Any]):
        # Instantiate new plugin and store a copy of its settings dict in self._settings
        super().__init__(settings)
        # ... plugin init code

    def filter(self, value: Any | None | Undefined) -> Any:
        # Filter that does nothing
        return value

filter method

You should consider reading the official Jinja documentation about custom filters.

The filter() method always takes at least one value parameter, and may take additional ones.

Its generic prototype is:

def filter(self, value: Any | None | Undefined, *args: Any, **kwds: Any) -> Any:

In Jinja, it is called with:

"{{ value | filter }}"
"{{ value | filter(otherarg1, otherarg2) }}"
"{{ value | filter(otherarg1=otherarg1_value, otherarg2=otherarg2_value) }}"

The above expressions are replaced by the filter return value.

Example: the datetime_format attribute plugin

#!/usr/bin/env python3
# -*- coding: utf-8 -*-

# Required to subclass AbstractAttributePlugin
from lib.plugins import AbstractAttributePlugin

# Required to use the Jinja Undefined state
from jinja2 import Undefined

# Required for type hints
from typing import Any

from datetime import datetime

# Required to indicate to hermes which class it has to instantiate
HERMES_PLUGIN_CLASSNAME = "DatetimeFormatPlugin"

class DatetimeFormatPlugin(AbstractAttributePlugin):
    def filter(self, value: Any, format: str = "%H:%M %d-%m-%y") -> str | Undefined:
        if isinstance(value, Undefined):
            return value

        if not isinstance(value, datetime):
            raise TypeError(f"""Invalid type '{type(value)}' for datetime_format value: must be a datetime""")

        return value.strftime(format)

This filter can now be called with:

"{{ a_datetime_attribute | datetime_format }}"
"{{ a_datetime_attribute | datetime_format('%m/%d/%Y, %H:%M:%S') }}"
"{{ a_datetime_attribute | datetime_format(format='%m/%d/%Y') }}"
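
The same strftime logic can be checked outside Hermes as a plain function (a standalone sketch, without the plugin base class):

```python
from datetime import datetime

# Standalone reproduction of DatetimeFormatPlugin.filter()'s core logic,
# without the Hermes plugin machinery or the Jinja Undefined handling.
def datetime_format(value, format="%H:%M %d-%m-%y"):
    if not isinstance(value, datetime):
        raise TypeError(f"Invalid type '{type(value)}' for datetime_format value")
    return value.strftime(format)

print(datetime_format(datetime(2025, 5, 5, 12, 30)))      # 12:30 05-05-25
print(datetime_format(datetime(2025, 5, 5), "%m/%d/%Y"))  # 05/05/2025
```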


Clients

Description

A client plugin is simply a GenericClient subclass designed to implement simple event handlers, and to split their tasks into atomic subtasks to ensure consistent error reprocessing.

Requirements

Here is a commented minimal plugin implementation that won’t do anything, as it doesn’t implement any event handlers yet.

#!/usr/bin/env python3
# -*- coding: utf-8 -*-

# Required to subclass GenericClient
from clients import GenericClient

# Required for event handlers method type hints
from lib.config import HermesConfig # only if the plugin implements an __init__() method
from lib.datamodel.dataobject import DataObject
from typing import Any

# Required to indicate to hermes which class it has to instantiate
HERMES_PLUGIN_CLASSNAME = "MyPluginClassName"

class MyPluginClassName(GenericClient):
    def __init__(self, config: HermesConfig):
        # The 'config' var must not be used nor modified by the plugin
        super().__init__(config)
        # ... plugin init code

Handlers methods

Event handlers

For each data type set up in the client datamodel, the plugin may implement a handler for each of the 5 possible event types:

  • added: when an object is added
  • recycled: when an object is restored from trashbin (will never be called if trashbin is disabled)
  • modified: when an object is modified
  • trashed: when an object is put in trashbin (will never be called if trashbin is disabled)
  • removed: when an object is deleted

If an event is received by a client, but its handler isn’t implemented, it will silently be ignored.

Each handler must be named on_datatypename_eventtypename.

Example for a Mydatatype data type:

    def on_Mydatatype_added(
        self,
        objkey: Any,
        eventattrs: "dict[str, Any]",
        newobj: DataObject,
    ):
        pass

    def on_Mydatatype_recycled(
        self,
        objkey: Any,
        eventattrs: "dict[str, Any]",
        newobj: DataObject,
    ):
        pass

    def on_Mydatatype_modified(
        self,
        objkey: Any,
        eventattrs: "dict[str, Any]",
        newobj: DataObject,
        cachedobj: DataObject,
    ):
        pass

    def on_Mydatatype_trashed(
        self,
        objkey: Any,
        eventattrs: "dict[str, Any]",
        cachedobj: DataObject,
    ):
        pass

    def on_Mydatatype_removed(
        self,
        objkey: Any,
        eventattrs: "dict[str, Any]",
        cachedobj: DataObject,
    ):
        pass

Event handlers arguments

  • objkey: the primary key of the object affected by the event

  • eventattrs: a dictionary containing the new object attributes. Its content depends upon the event type:

    • added / recycled events: contain all object attribute names as keys, and their respective values as values
    • modified event: always contains three keys:
      • added: attributes that were previously unset, but now have a value. Attribute names as keys, and their respective values as values
      • modified: attributes that were previously set, but whose value has changed. Attribute names as keys, and their respective new values as values
      • removed: attributes that were previously set, but no longer have a value. Attribute names as keys, and None as values
    • trashed / removed events: always an empty dict {}
  • newobj: a DataObject instance containing all the updated values of the object affected by the event (see DataObject instances below)

  • cachedobj: a DataObject instance containing all the previous (cached) values of the object affected by the event (see DataObject instances below)
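
To make the modified structure concrete, here is a hypothetical eventattrs payload for a modified event on a Users object (attribute names and values are invented for this example):

```python
# Hypothetical eventattrs dict for a "modified" event:
eventattrs = {
    "added": {"mail": "jdoe@example.org"},  # was unset, now has a value
    "modified": {"sn": "Doe-Smith"},        # value changed
    "removed": {"givenname": None},         # was set, now unset (always None)
}
```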

DataObject instances

Each data type object can be used intuitively through a DataObject instance. Let’s use a simple example with the following User object values (without a mail), according to the datamodel below:

{
    "user_pkey": 42,
    "uid": "jdoe",
    "givenname": "John",
    "sn": "Doe"
}
hermes-client:
  datamodel:
    Users:
      hermesType: SRVUsers
      attrsmapping:
        user_pkey: srv_user_id
        uid: srv_login
        givenname: srv_firstname
        sn: srv_lastname
        mail: srv_mail

Now, if this object is stored in a newobj DataObject instance:

>>> newobj.getPKey()
42

>>> newobj.user_pkey
42

>>> newobj.uid
'jdoe'

>>> newobj.givenname
'John'

>>> newobj.sn
'Doe'

>>> newobj.mail
AttributeError: 'Users' object has no attribute 'mail'

>>> hasattr(newobj, 'sn')
True

>>> hasattr(newobj, 'mail')
False
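
Since optional attributes may be absent, handlers typically guard access with hasattr() or getattr(). A self-contained sketch, using a plain object as a stand-in for the DataObject above:

```python
from types import SimpleNamespace

# Stand-in for the DataObject above (no "mail" attribute set)
newobj = SimpleNamespace(user_pkey=42, uid="jdoe", givenname="John", sn="Doe")

# getattr() with a default avoids the AttributeError shown above:
mail = getattr(newobj, "mail", None)
print(mail)  # None
```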

Error handling

Any unhandled exception raised in an event handler will be caught by GenericClient, which will append the event to its error queue. GenericClient will then regularly retry the event (by calling its handler again) until it succeeds.

But sometimes, a handler must process several operations on the target. Imagine a handler like this:

    def on_Mydatatype_added(
        self,
        objkey: Any,
        eventattrs: "dict[str, Any]",
        newobj: DataObject,
    ):
        if condition:
            operation1()  # condition is False, operation1() is not called
        operation2()  # no error occurs
        operation3()  # this one raises an exception

At each retry the operation2() function will be called again, but this is not necessarily desirable.

It is possible to divide a handler into steps by using the currentStep attribute inherited from GenericClient, to resume the retries at the failed step.

currentStep always starts at 0 on normal event processing. Its new values are then up to plugin implementations.

When an error occurs, the currentStep value is stored in the error queue with the event.
The error queue retries will always restore the currentStep value before calling the event handler.

So by implementing it like below, operation2() will only be called once.

    def on_Mydatatype_added(
        self,
        objkey: Any,
        eventattrs: "dict[str, Any]",
        newobj: DataObject,
    ):
        if self.currentStep == 0:
            if condition:
                operation1()  # condition is False, operation1() is not called
                # Declare that changes have been propagated on target
                self.isPartiallyProcessed = True
            self.currentStep += 1
        
        if self.currentStep == 1:
            operation2()  # no error occurs
            # Declare that changes have been propagated on target
            self.isPartiallyProcessed = True
            self.currentStep += 1

        if self.currentStep == 2:
            operation3()  # this one raises an exception
            # Declare that changes have been propagated on target
            self.isPartiallyProcessed = True
            self.currentStep += 1
Understanding isPartiallyProcessed attribute

The isPartiallyProcessed attribute inherited from GenericClient indicates if the current event processing has already propagated some changes on target. Therefore, it must be set to True as soon as the slightest modification has been propagated to the target.
It allows autoremediation to merge events whose currentStep is different from 0 but whose previous steps have not modified anything on the target.

isPartiallyProcessed is always False on normal event processing. Its value change is up to plugin implementations.

With the implementation example above, and an exception raised by operation3(), the autoremediation would not try to merge this partially processed event with possible subsequent events, as isPartiallyProcessed is True.

With the implementation example above, but an exception raised by operation2(), the autoremediation would try to merge this unprocessed event with possible subsequent events, as isPartiallyProcessed is still False.

on_save handler

A special handler may be implemented that is called just after Hermes has saved its cache files: once some events have been processed and no event is waiting on the message bus, or before shutting down.

Warning

As this handler isn’t a standard event handler, GenericClient can’t catch exceptions for it and schedule a retry later.

Any unhandled exception raised in this handler will immediately terminate the client.

It’s up to the implementation to avoid errors.

    def on_save(self):
        pass

GenericClient properties and methods

Properties

  • currentStep: int

    Step number of the event currently being processed. Allows clients to resume an event at the step where it failed.

  • isPartiallyProcessed: bool

    Indicates if the current event processing has already propagated some changes on target.
    Must be set to True as soon as the slightest modification has been propagated to the target.
    It allows autoremediation to merge events whose currentStep is different from 0 but whose previous steps have not modified anything on the target.

  • isAnErrorRetry: bool

    Read-only attribute letting client plugin handlers know whether the current event is being processed as part of an error retry. This can be useful, for example, to perform additional checks when a library throws exceptions even though it has correctly processed the requested changes, as python-ldap sometimes does.

  • config: dict[str, Any]

    Dict containing the client plugin configuration.
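
As an illustration of isAnErrorRetry, here is a hedged sketch of a handler that tolerates a duplicate only during a retry (AlreadyExistsError and create_on_target() are hypothetical stand-ins, not part of Hermes):

```python
from typing import Any

class AlreadyExistsError(Exception):
    """Hypothetical error raised when the target object already exists."""

def create_on_target(obj: Any) -> None:
    # Hypothetical target call: here it always reports a duplicate
    raise AlreadyExistsError(obj)

class SketchClient:
    isAnErrorRetry = True  # set by GenericClient in a real client

    def on_Users_added(self, objkey, eventattrs, newobj):
        try:
            create_on_target(newobj)
        except AlreadyExistsError:
            if not self.isAnErrorRetry:
                raise  # on a first attempt, a duplicate is a real error
            # on a retry, the object may have been created before the failure

SketchClient().on_Users_added(42, {}, "jdoe")  # swallowed: retry in progress
```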

Methods

  • def getDataobjectlistFromCache(objtype: str) -> DataObjectList

    Returns the cache of the specified objtype, by reference. Raises IndexError if objtype is invalid

    Warning

    Any modification of the cache content will mess up your client!!!

  • def getObjectFromCache(objtype: str, objpkey: Any) -> DataObject

    Returns a deepcopy of an object from cache. Raises IndexError if objtype is invalid, or if objpkey is not found

  • def mainLoop() -> None

    Client main loop

    Warning

    Called by Hermes to start the client. Must never be called nor overridden by the plugin


Datasources

Description

A datasource plugin is simply an AbstractDataSourcePlugin subclass designed to link hermes-server with any datasource.

It requires methods to connect to and disconnect from the datasource, and to fetch, add, modify, and delete data.

Requirements

Here is a commented minimal plugin implementation that won’t do anything.

#!/usr/bin/env python3
# -*- coding: utf-8 -*-

# Required to subclass AbstractDataSourcePlugin
from lib.plugins import AbstractDataSourcePlugin

# Required for type hints
from typing import Any

# Required to indicate to hermes which class it has to instantiate
HERMES_PLUGIN_CLASSNAME = "MyDatasourcePluginClassName"

class MyDatasourcePluginClassName(AbstractDataSourcePlugin):
    def __init__(self, settings: dict[str, Any]):
        # Instantiate new plugin and store a copy of its settings dict in self._settings
        super().__init__(settings)
        # ... plugin init code

    def open(self):
        """Establish connection with datasource"""

    def close(self):
        """Close connection with datasource"""

    def fetch(
        self,
        query: str | None,
        vars: dict[str, Any],
    ) -> list[dict[str, Any]]:
        """Fetch data from datasource with specified query and optional queryvars.
        Returns a list of dict containing each entry fetched, with REMOTE_ATTRIBUTES
        as keys, and corresponding fetched values as values"""

    def add(self, query: str | None, vars: dict[str, Any]):
        """Add data to datasource with specified query and optional queryvars"""

    def delete(self, query: str | None, vars: dict[str, Any]):
        """Delete data from datasource with specified query and optional queryvars"""

    def modify(self, query: str | None, vars: dict[str, Any]):
        """Modify data on datasource with specified query and optional queryvars"""

Methods

Connection methods

As they don’t take any arguments, the open and close methods should rely on plugin settings. For stateless datasources, they may do nothing.

fetch method

This method is called to fetch some data and provide it to hermes-server.

Depending on the plugin implementation, it may rely on the query argument or the vars argument, or both.

The result must be returned as a list of dict. Each list item is a fetched entry stored in a dict, with attribute names as keys and their corresponding values as values. The values must be of one of the following Python types:

  • None
  • int
  • float
  • str
  • datetime.datetime
  • bytes

Allowed iterable types are:

  • list
  • dict

Values must be of one of the types mentioned above. All other types are invalid.
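
For illustration, a valid fetch() return value could look like this (attribute names and values are invented):

```python
from datetime import datetime

# Hypothetical fetch() result: one dict per fetched entry, keyed by
# REMOTE_ATTRIBUTES names, with values of the allowed types only.
rows = [
    {"srv_user_id": 42, "srv_login": "jdoe",
     "srv_created": datetime(2025, 5, 5), "srv_groups": ["staff", "it"]},
    {"srv_user_id": 43, "srv_login": None, "srv_photo": b"\x89PNG..."},
]
```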

add, delete, and modify methods

These methods are used to modify the datasource, when possible.

Depending on the technical constraints of the data source, they can all be implemented in the same way or not.

Depending on the plugin implementation, they may rely on the query argument or the vars argument, or both.

Error handling

No exception should be caught, to allow Hermes error handling to function properly.


Messagebus consumers

Description

A messagebus consumer plugin is simply an AbstractMessageBusConsumerPlugin subclass designed to link hermes-client with any message bus.

It requires methods to connect to and disconnect from the message bus, and to consume available events.

Features required from message bus

  • Allow specifying a message key/category (producers) and filtering messages of a specified key/category (consumers)
  • Allow consuming the same message more than once
  • Implement a message offset, allowing consumers to seek to the next required message. As it will be stored in the clients cache, this offset must be of one of the Python types below:
    • int
    • float
    • str
    • bytes

Requirements

Here is a commented minimal plugin implementation that won’t do anything.

#!/usr/bin/env python3
# -*- coding: utf-8 -*-

# Required to subclass AbstractMessageBusConsumerPlugin
from lib.plugins import AbstractMessageBusConsumerPlugin
# Required to return Event
from lib.datamodel.event import Event

# Required for type hints
from typing import Any, Iterable

# Required to indicate to hermes which class it has to instantiate
HERMES_PLUGIN_CLASSNAME = "MyMessagebusConsumerPluginClassName"

class MyMessagebusConsumerPluginClassName(AbstractMessageBusConsumerPlugin):
    def __init__(self, settings: dict[str, Any]):
        # Instantiate new plugin and store a copy of its settings dict in self._settings
        super().__init__(settings)
        # ... plugin init code

    def open(self) -> Any:
        """Establish connection with messagebus"""

    def close(self):
        """Close connection with messagebus"""

    def seekToBeginning(self):
        """Seek to first (older) event in message bus queue"""

    def seek(self, offset: Any):
        """Seek to specified offset event in message bus queue"""

    def setTimeout(self, timeout_ms: int | None):
        """Set timeout (in milliseconds) before aborting when waiting for next event.
        If None, wait forever"""

    def findNextEventOfCategory(self, category: str) -> Event | None:
        """Lookup for first message with specified category and returns it,
        or returns None if none was found"""

    def __iter__(self) -> Iterable:
        """Iterate over message bus returning each Event, starting at current offset.
        When every event has been consumed, wait for next message until timeout set with
        setTimeout() has been reached"""

Methods to implement

Connection methods

As they don’t take any arguments, the open and close methods should rely on plugin settings.

seekToBeginning method

Seek to first (older) event in message bus queue.

seek method

Seek to specified offset event in message bus queue.

setTimeout method

Set timeout (in milliseconds) before aborting when waiting for next event. If None, wait forever.

findNextEventOfCategory method

Looks up the first message with the specified category and returns it, or returns None if none was found.

As this method will browse the message bus, the current offset will be modified.

__iter__ method

Returns an Iterable that will yield all events available on message bus, starting from current offset.

The following unserializable attributes of the Event instance must be set before yielding it:

  • offset (int | float | str | bytes): offset of the event in message bus
  • timestamp (datetime.datetime): timestamp of the event

Event properties and methods

Methods

  • @staticmethod
    def from_json(jsondata: str | dict[Any, Any]) -> Event

    Deserializes a JSON string or dict into a new Event instance and returns it


Messagebus producers

Description

A messagebus producer plugin is simply an AbstractMessageBusProducerPlugin subclass designed to link hermes-server with any message bus.

It requires methods to connect to and disconnect from the message bus, and to produce (send) events over it.

Features required from message bus

  • Allow specifying a message key/category (producers) and filtering messages of a specified key/category (consumers)
  • Allow consuming the same message more than once
  • Implement a message offset, allowing consumers to seek to the next required message. As it will be stored in the clients cache, this offset must be of one of the Python types below:
    • int
    • float
    • str
    • bytes

Requirements

Here is a commented minimal plugin implementation that won’t do anything.

#!/usr/bin/env python3
# -*- coding: utf-8 -*-

# Required to subclass AbstractMessageBusProducerPlugin
from lib.plugins import AbstractMessageBusProducerPlugin

# Required for type hints
from lib.datamodel.event import Event
from typing import Any

# Required to indicate to hermes which class it has to instantiate
HERMES_PLUGIN_CLASSNAME = "MyMessagebusProducerPluginClassName"

class MyMessagebusProducerPluginClassName(AbstractMessageBusProducerPlugin):
    def __init__(self, settings: dict[str, Any]):
        # Instantiate new plugin and store a copy of its settings dict in self._settings
        super().__init__(settings)
        # ... plugin init code

    def open(self) -> Any:
        """Establish connection with messagebus"""

    def close(self):
        """Close connection with messagebus"""

    def _send(self, event: Event):
        """Send specified event to message bus"""

Methods to implement

Connection methods

As they don’t take any arguments, the open and close methods should rely on plugin settings.

_send method

Note

Be careful to overload the _send() method and not the send() one.

The send() method is a wrapper that handles exceptions while calling _send().

Send a message containing the specified event.

The consumer will require the following Event properties to be set:

  • evcategory (str): Key/category of the event (stored in the Event)
  • timestamp (datetime.datetime): timestamp of the event
  • offset (int | float | str | bytes): offset of the event in message bus

See Event properties and methods below.

Event properties and methods

Properties

  • evcategory: str

    Key/category to apply to the message

Methods

  • def to_json() -> str

    Serializes the event to a JSON string that can later be deserialized into a new Event instance