11.24. Solving Common Replication Conflicts
Multi-master replication uses a loose consistency replication model, which means that the same entries can be changed on different servers. When replication occurs between two servers, the conflicting changes need to be resolved. In most cases, resolution occurs automatically, based on the timestamp associated with the change on each server: the most recent change takes precedence.
However, there are some cases where change conflicts require manual intervention in order to reach a resolution. Entries that have a change conflict that cannot be resolved automatically by the replication process contain a conflict marker attribute
nsds5ReplConflict. The nsds5ReplConflict attribute is an operational attribute which is indexed for presence and equality, so it is simple to search for entries that contain this attribute. For example:
ldapsearch -D adminDN -W -b "dc=example,dc=com" "nsds5ReplConflict=*" \* nsds5ReplConflict
The nsds5ReplConflict attribute is already indexed for presence and equality. However, if many conflicting entries occur regularly, consider maintaining additional indexes for the nsds5ReplConflict attribute for performance reasons. For information on indexing, see Chapter 9, Managing Indexes.
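For example, a substring index could be added for the attribute with an entry like the following. This is a minimal sketch that assumes the backend database instance is named userRoot and that no index entry for nsds5ReplConflict exists yet under that backend; adjust the DN for the actual backend name. After adding or changing an index definition, reindex the attribute as described in Chapter 9, Managing Indexes.
ldapmodify -a -D "cn=directory manager" -W -p 389 -h server.example.com -x
# Hypothetical index entry; replace userRoot with the backend that holds the replicated suffix.
dn: cn=nsds5ReplConflict,cn=index,cn=userRoot,cn=ldbm database,cn=plugins,cn=config
objectClass: top
objectClass: nsIndex
cn: nsds5ReplConflict
nsSystemIndex: false
nsIndexType: pres
nsIndexType: eq
nsIndexType: sub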
11.24.1. Solving Naming Conflicts
When two entries are created with the same DN on different servers, the automatic conflict resolution procedure during replication renames the last entry created, including the entry's unique identifier in the DN. Every directory entry includes a unique identifier given by the operational attribute
nsuniqueid. When a naming conflict occurs, this unique ID is appended to the non-unique DN.
For example, the entry
uid=adamss,ou=people,dc=example,dc=com is created on Server A at time t1 and on Server B at time t2, where t2 is greater (or later) than t1. After replication, Server A and Server B both hold the following entries:
uid=adamss,ou=people,dc=example,dc=com (created at time t1)
nsuniqueid=66446001-1dd211b2+uid=adamss,dc=example,dc=com (created at time t2)
The second entry needs to be renamed in such a way that it has a unique DN. The renaming procedure depends on whether the naming attribute is single-valued or multi-valued.
11.24.1.1. Renaming an Entry with a Multi-Valued Naming Attribute
To rename an entry that has a multi-valued naming attribute:
- Rename the entry using a new value for the naming attribute, and keep the old RDN. For example:
ldapmodify -D "cn=directory manager" -W -p 389 -h server.example.com -x
dn: nsuniqueid=66446001-1dd211b2+uid=adamss,dc=example,dc=com
changetype: modrdn
newrdn: uid=NewValue
deleteoldrdn: 0
- Remove the old RDN value of the naming attribute and the conflict marker attribute. For example:
ldapmodify -D "cn=directory manager" -W -p 389 -h server.example.com -x
dn: uid=NewValue,dc=example,dc=com
changetype: modify
delete: uid
uid: adamss
-
delete: nsds5ReplConflict
-
Note
The unique identifier attribute
nsuniqueid cannot be deleted.
The Console does not support editing multi-valued RDNs. For example, if there are two servers in multi-master mode, an entry can be created on each server with the same user ID, and the conflicting entry's RDN is then changed to the nsuniqueid+uid value. Attempting to modify this entry from the Console returns the error Changes cannot be saved for entries with multi-valued RDNs.
Opening the entry in the advanced mode shows that the naming attribute has been set to nsuniqueid+uid. However, the entry cannot be corrected from the Console by changing the user ID and RDN values to something different. For example, if jdoe was the user ID and it should be changed to jdoe1, this cannot be done from the Console. Instead, use the ldapmodify command:
ldapmodify -D "cn=directory manager" -W -p 389 -h server.example.com -x
dn: cn=John Doe
changetype: modify
replace: uid
uid: jdoe

dn: cn=John Doe
changetype: modrdn
newrdn: uid=jdoe1
deleteoldrdn: 1
11.24.1.2. Renaming an Entry with a Single-Valued Naming Attribute
To rename an entry that has a single-valued naming attribute:
- Rename the entry using a different naming attribute, and keep the old RDN. For example:
ldapmodify -D "cn=directory manager" -W -p 389 -h server.example.com -x
dn: nsuniqueid=66446001-1dd211b2+dc=pubs,dc=example,dc=com
changetype: modrdn
newrdn: cn=TempValue
deleteoldrdn: 0
- Remove the old RDN value of the naming attribute and the conflict marker attribute. For example:
ldapmodify -D "cn=directory manager" -W -p 389 -h server.example.com -x
dn: cn=TempValue,dc=example,dc=com
changetype: modify
delete: dc
dc: pubs
-
delete: nsds5ReplConflict
-
Note
The unique identifier attribute nsuniqueid cannot be deleted.
- Rename the entry with the intended attribute-value pair. For example:
ldapmodify -D "cn=directory manager" -W -p 389 -h server.example.com -x
dn: cn=TempValue,dc=example,dc=com
changetype: modrdn
newrdn: dc=NewValue
deleteoldrdn: 1
Setting the value of the deleteoldrdn attribute to 1 deletes the temporary attribute-value pair cn=TempValue. To keep this attribute, set the value of the deleteoldrdn attribute to 0.
11.24.2. Solving Orphan Entry Conflicts
When a delete operation is replicated and the consumer server finds that the entry to be deleted has child entries, the conflict resolution procedure creates a
glue entry to avoid having orphaned entries in the directory.
In the same way, when an add operation is replicated and the consumer server cannot find the parent entry, the conflict resolution procedure creates a glue entry representing the parent so that the new entry is not an orphan entry.
Glue entries are temporary entries that include the object classes
glue and extensibleObject. Glue entries can be created in several ways:
- If the conflict resolution procedure finds a deleted entry with a matching unique identifier, the glue entry is a resurrection of that entry, with the addition of the glue object class and the nsds5ReplConflict attribute. In such cases, either modify the glue entry to remove the glue object class and the nsds5ReplConflict attribute to keep the entry as a normal entry, or delete the glue entry and its child entries.
- The server creates a minimalistic entry with the glue and extensibleObject object classes. In such cases, modify the entry to turn it into a meaningful entry or delete it and all of its child entries.
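To locate any glue entries that the conflict resolution procedure has left in the directory, search for entries with the glue object class. This is a minimal sketch that assumes the dc=example,dc=com suffix used in the other examples:
ldapsearch -D "cn=directory manager" -W -p 389 -h server.example.com -x -b "dc=example,dc=com" "(objectClass=glue)" \* nsds5ReplConflict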
11.24.3. Solving Potential Interoperability Problems
For reasons of interoperability with applications that rely on attribute uniqueness, such as a mail server, it may be necessary to restrict access to the entries which contain the
nsds5ReplConflict attribute. If access is not restricted to these entries, then applications that require a unique attribute value pick up both the original entry and the conflict resolution entry containing the nsds5ReplConflict attribute, and their operations fail.
To restrict access, modify the default ACI that grants anonymous read access:
ldapmodify -D "cn=directory manager" -W -p 389 -h server.example.com -x
dn: dc=example,dc=com
changetype: modify
delete: aci
aci: (target ="ldap:///dc=example,dc=com")(targetattr !="userPassword")(version 3.0;acl "Anonymous read-search access";allow (read, search, compare)(userdn = "ldap:///anyone");)
-
add: aci
aci: (target="ldap:///dc=example,dc=com")(targetattr!="userPassword")
(targetfilter="(!(nsds5ReplConflict=*))")(version 3.0;acl
"Anonymous read-search access";allow (read, search, compare)
(userdn="ldap:///anyone");)
-
The new ACI filters out all entries that contain the
nsds5ReplConflict attribute from search results.
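To verify the change, repeat the conflict search as an anonymous client; with the new ACI in place, and assuming no other ACI grants anonymous access to these entries, the search should return no entries:
ldapsearch -x -p 389 -h server.example.com -b "dc=example,dc=com" "(nsds5ReplConflict=*)"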
11.24.4. Resolving Errors for Obsolete/Missing Suppliers
Information about the replication topology (all of the suppliers which are supplying updates to each other and other replicas within the same replication group) is contained in a set of metadata called the replica update vector (RUV). The RUV contains information about the supplier, such as its ID and URL, its latest change state number (CSN) for changes made on the local server, and the CSN of the first change. Both suppliers and consumers store RUV information, and they use it to control replication updates.
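One way to inspect the RUV that a server stores for a replicated suffix is to search the suffix's RUV tombstone entry; the nsds50ruv attribute lists one element per known supplier. The following is a minimal sketch, assuming the dc=example,dc=com suffix:
ldapsearch -xLLL -D "cn=directory manager" -W -p 389 -h server.example.com -b "dc=example,dc=com" "(&(nsuniqueid=ffffffff-ffffffff-ffffffff-ffffffff)(objectclass=nstombstone))" nsds50ruv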
When one supplier is removed from the replication topology, it may remain in another replica's RUV. When the other replica is restarted, it can record errors in its log indicating that the replication plug-in does not recognize the (removed) supplier.
[09/Sep/2017:09:03:43 -0600] NSMMReplicationPlugin - ruv_compare_ruv: RUV [changelog max RUV] does not
contain element [{replica 55 ldap://server.example.com:389} 4e6a27ca000000370000 4e6a27e8000000370000]
which is present in RUV [database RUV]
......
[09/Sep/2017:09:03:43 -0600] NSMMReplicationPlugin - replica_check_for_data_reload: Warning: for replica
dc=example,dc=com there were some differences between the changelog max RUV and the database RUV. If
there are obsolete elements in the database RUV, you should remove them using the CLEANRUV task. If they
are not obsolete, you should check their status to see why there are no changes from those servers in the changelog.
When the supplier is permanently removed from the topology, then any lingering metadata about that supplier should be purged from every other supplier's RUV entry.
There are three ways to do this:
- Remove it from all suppliers in the topology using the CLEANALLRUV replication task.
- Remove it from a single supplier in the topology (because of local errors) using the CLEANRUV replication task.
- Remove it from all suppliers in the topology using the cn=cleanallruv,cn=tasks,cn=config directory task.
11.24.4.1. Removing an Obsolete Replica from a Single Supplier
If a server is offline or unavailable when a supplier is removed from the topology, it may not receive the updates informing it that the other supplier was removed. In that case, its RUV still contains entries about the missing supplier, and it returns missing element errors.
This can be cleaned up on a single supplier using the
CLEANRUV replication task. The nsds5Task attribute identifies a replication-related task in a replica configuration entry. This attribute is generally added and removed automatically by the server as it performs regular replication tasks. However, the attribute can be added to a replica configuration entry to manually initiate a task, and it is used to clean obsolete supplier data from the local server's RUV.
To purge an obsolete supplier from the RUV:
- Running the CLEANRUV replication task requires the old replica configuration DN and the old replica ID.
- Get the replica configuration entry DN by checking for replica entries in the cn=mapping tree,cn=config entry:
ldapsearch -xLLL -D "cn=directory manager" -W -s sub -b cn=config objectclass=nsds5replica
dn: cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config
cn: replica
...
- Get the replica ID. This is in the nsds5replicaid attribute in the configuration entry. The ID is also in the error message about the server being unable to find the replica, identified in the element [{replica ID URL} uniqueId] line. For example:
[09/Sep/2011:09:03:43 -0600] NSMMReplicationPlugin - ruv_compare_ruv: RUV [changelog max RUV] does not contain element [{replica 55 ldap://server.example.com:389} 4e6a27ca000000370000 4e6a27e8000000370000] ...
- Use ldapmodify to replace the nsds5Task attribute in the configuration entry with CLEANRUV and the replica ID, in the form CLEANRUV#. For example, for a replica with the ID of 55, the nsds5Task value is CLEANRUV55:
ldapmodify -x -D "cn=directory manager" -W
dn: cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config
changetype: modify
replace: nsds5task
nsds5task: CLEANRUV55
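After the task completes, the element for replica 55 should no longer appear in the local server's RUV. As a quick check (a sketch assuming the dc=example,dc=com suffix), repeat the RUV tombstone search and confirm that no {replica 55 ...} element remains:
ldapsearch -xLLL -D "cn=directory manager" -W -p 389 -h server.example.com -b "dc=example,dc=com" "(&(nsuniqueid=ffffffff-ffffffff-ffffffff-ffffffff)(objectclass=nstombstone))" nsds50ruv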
11.24.4.2. Removing an Obsolete/Missing Supplier from All Servers in the Topology
If a supplier is taken offline without cleaning up its RUV entries, then all suppliers and hubs in the topology can register missing element errors in their replication logs.
This can be cleaned up by running the CLEANALLRUV replication task on a single supplier; the task is then propagated to the other servers in the topology. The nsds5Task attribute identifies a replication-related task in a replica configuration entry. This attribute is generally added and removed automatically by the server as it performs regular replication tasks. However, the attribute can be added to a replica configuration entry to manually initiate a task, and here it is used to clean obsolete supplier data from all RUV stores in the topology.
Note
The
CLEANALLRUV task is replicated to all suppliers and hubs in the replication topology.
- Running the CLEANALLRUV replication task requires the old replica configuration DN and the old replica ID.
- Get the replica configuration entry DN by checking for replica entries in the cn=mapping tree,cn=config entry:
ldapsearch -xLLL -D "cn=directory manager" -W -s sub -b cn=config objectclass=nsds5replica
dn: cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config
cn: replica
...
- Get the replica ID. This is in the nsds5replicaid attribute in the configuration entry:
ldapsearch -xLLL -D "cn=directory manager" -W -s sub -b "cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config" objectclass=nsds5replica nsds5replicaid
dn: cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config
nsds5replicaid: 55
...
The ID is also in the error message about the server being unable to find the replica, identified in the element [{replica ID URL} uniqueId] line. For example:
[09/Sep/2011:09:03:43 -0600] NSMMReplicationPlugin - ruv_compare_ruv: RUV [changelog max RUV] does not contain element [{replica 55 ldap://server.example.com:389} 4e6a27ca000000370000 4e6a27e8000000370000] ...
- Use ldapmodify to add the nsds5Task attribute to the configuration entry with a value of CLEANALLRUV and the replica ID, in the form CLEANALLRUV#. For example, for a replica with the ID of 55, the nsds5Task value is CLEANALLRUV55:
ldapmodify -x -D "cn=directory manager" -W
dn: cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config
changetype: modify
replace: nsds5task
nsds5task: CLEANALLRUV55
11.24.4.3. Removing an Obsolete/Missing Supplier Using a Task Operation
If a supplier is taken offline without cleaning up its RUV entries, then all suppliers and hubs in the topology can register missing element errors in their replication logs.
There may be times when it is preferable to launch a directory task operation rather than a replication task. This can be done by creating an instance of the
cn=cleanallruv,cn=tasks,cn=config task.
Note
As with the
CLEANALLRUV replication task, this cn=cleanallruv,cn=tasks operation is replicated to all suppliers and hubs in the replication topology.
- Obtain the old replica configuration DN and the old replica ID.
- Get the replica configuration entry DN by checking for replica entries in the cn=mapping tree,cn=config entry:
ldapsearch -xLLL -D "cn=directory manager" -W -s sub -b cn=config objectclass=nsds5replica
dn: cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config
cn: replica
...
- Get the replica ID. This is in the nsds5replicaid attribute in the configuration entry:
ldapsearch -xLLL -D "cn=directory manager" -W -s sub -b "cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config" objectclass=nsds5replica nsds5replicaid
dn: cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config
nsds5replicaid: 55
...
- Use ldapmodify to create the cn=cleanallruv,cn=tasks,cn=config entry. This task requires information on the replication configuration:
  - The base DN of the replicated database (replica-base-dn).
  - The replica ID (replica-id).
  - Whether to catch up to the max change state number (CSN) from the missing supplier or just remove all RUV entries and miss any updates (replica-force-cleaning). Setting this to no means that the task catches up with all changes first and then removes the RUV.
ldapmodify -a -D "cn=directory manager" -W -p 389 -h server.example.com -x
dn: cn=clean 55,cn=cleanallruv,cn=tasks,cn=config
objectclass: extensibleObject
replica-base-dn: dc=example,dc=com
replica-id: 55
replica-force-cleaning: no
cn: clean 55
This task is replicated to all servers in the topology. Because it can take several minutes to run, the task can also be aborted; the abort task is likewise propagated to all suppliers.
ldapmodify -a -D "cn=directory manager" -W -p 389 -h server.example.com -x
dn: cn=abort 55,cn=abort cleanallruv,cn=tasks,cn=config
objectclass: extensibleObject
cn: abort 55
replica-base-dn: dc=example,dc=com
replica-id: 55
replica-certify-all: yes
The
replica-certify-all attribute sets whether to wait for the task to be sent to all servers before completing on the local server.
