Thursday, 20 October 2011

Eventual Consistency - detecting conflicts




In my previous posts I introduced two new conflict detection functions, NDB$EPOCH and NDB$EPOCH_TRANS, without explaining how these functions actually detect conflicts. To simplify the explanation I'll initially consider two circularly replicating MySQL Servers, A and B, rather than two replicating Clusters, but the principles are the same.

Commit ordering

Avoiding conflicts requires that data is only modified on one Server at a time. This can be done by defining Master/Slave roles or Active/Passive partitions etc. Where this is not done, and data can be modified anywhere, there can be conflicts. A conflict occurs when the same data is modified at both Servers concurrently, but what does concurrently mean? On a single server, modifications to the same data are serialised by locking or MVCC mechanisms, so that there is a defined order between them. e.g. two modifications MX and MY are committed either in order {MX, MY} or {MY, MX}.

For the purposes of replication, two modifications MX and MY on the same data are concurrent if the order of commit is different at different servers in the system. Each server will choose one order, but if they don't all choose the same order then there is a conflict. Having a different order means that the last modification on each server is different, and therefore the final state of the data can be different on different servers.

One way to avoid conflicts is to get all servers to agree on a commit order before processing an operation - this ensures that all replicas process operations in the same order, waiting if necessary for missing operations to arrive to ensure no commit-order variance.

Note that commit-ordering is only important between modifications affecting the same data - modifications which do not overlap in their data footprint are unrelated and can be committed in any order. A system which totally orders commits may be less efficient than one which only orders conflicting commits.

Happened before

For the NDB$EPOCH asynchronous conflict detection functions, commit orders are monitored to detect when two modifications to the same data have been committed in different orders.

Given two modifications MX and MY to the same data, each server will decide a happened before (denoted ->) relationship between them :

  1. MX -> MY (MX happened before MY)

    or

  2. MY -> MX (MY happened before MX)

If all servers agree on order 1, or all servers agree on order 2 then there is no conflict. If there is any disagreement then there is a conflict.

In practice, disagreement arises because the same data is modified at both Server A and Server B before the Server A modification is replicated to B and/or vice-versa.

Sometimes when reading about commit ordering, the reason why commit orders should not diverge gets lost - the only reason to care about commit ordering is that it relates to conflicting modifications and the potential for data divergence.

Determining happened before from the Binlog


We assume a steady start state, where both Server A and Server B agree about the state of their data, and no modifications are in-flight. If a client of Server A then commits modification MA1 to row X, then from Server A's point of view, MA1 happened before any future modification to row X.

MA1 -> M*

If a client of Server B commits modification MB1 to row X around the same time (before, or after, or thereabouts), from Server B's point of view, MB1 happened before any future modification to row X.

MB1 -> M*

Both Servers are correct, and content with their world view. Note that in general, when committing a modification Mj, a server naturally asserts that from its point of view the modification happened before any as-yet-unseen modification Mk.

Some time will pass and the replication mechanisms will pull Binlogged changes across and apply them. When Server B pulls and applies Server A's Binlogged changes, modification MA1 will be applied to row X. Server B will then naturally be of the opinion that :

MB1 -> MA1

Independently, Server A will pull Server B's binlogged changes and apply modification MB1 to row X, and will come to the certain opinion that :

MA1 -> MB1

These happened before relationships are contradictory so there is a conflict. If nothing is done then A and B will have diverged, with Server A storing the outcome of MB1, and Server B storing the outcome of MA1.

Note that if the --log-slave-updates server option were on, then Server A's Binlog would have recorded {...MA1...MB1...}, whereas Server B's Binlog would have recorded {...MB1...MA1...}. By recording when the Slave applies replicated updates in the Binlog, we record the commit order of the replicated updates relative to other local updates, and encode the happened before relationship in the relative positions of events in the Binlog.

The Binlog is of course transferred between servers, so in a circular replication setup, Server A can become aware of the happened before information from Server B and vice-versa by examining the received Binlogs. The Slave SQL thread examines Binlogs as it applies them, so can be extended to extract happened before information, and use it to detect conflicts.

Recall that Server A asserts that its committed modification to row X (MA1) happened before any as-yet-unseen replicated modification :

MA1 -> M*

Therefore, to detect a conflict, Server A only needs to detect the case where the incoming Binlog from Server B implies that some modification MB* to row X happened before Server A's already committed modification MA1.

If Server B's Binlog implies MB* -> MA1 then there has been a conflict

This is in essence how the NDB$EPOCH functions work - the Binlog is used to capture happened before relationships which are checked to determine whether conflicting concurrent modifications have occurred.
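
To make this concrete, here's a tiny Python sketch of the order-disagreement check, written as if we could see both Binlogs side by side. It is purely illustrative - the names are invented, and NDB$EPOCH never has this omniscient view; it infers the same information incrementally, as described below.

# Purely illustrative sketch : a conflict exists when two Binlogs record
# modifications to the same row in different relative orders. All names
# are invented; this is not how NDB$EPOCH is implemented.

def order_for_row(binlog, row):
    # The order in which this server committed modifications to 'row'.
    return [mod for (mod, r) in binlog if r == row]

def in_conflict(binlog_a, binlog_b, row):
    a = order_for_row(binlog_a, row)
    b = order_for_row(binlog_b, row)
    common = [m for m in a if m in b]
    # Disagreement on the relative order of shared modifications = conflict.
    return common != [m for m in b if m in common]

# The scenario from the text : each server commits its own change to row X
# before applying the other's replicated change (--log-slave-updates on).
binlog_a = [("MA1", "X"), ("MB1", "X")]   # Server A decided MA1 -> MB1
binlog_b = [("MB1", "X"), ("MA1", "X")]   # Server B decided MB1 -> MA1
print(in_conflict(binlog_a, binlog_b, "X"))   # True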


Conflict Windows

In the previous example, Server A commits MA1 modifying row X, and Server B commits MB1 also modifying row X. From Server A's point of view, as soon as it commits MA1, there is potential for a replicated modification from B such as MB1 to be found in-conflict with MA1. We say that from Server A's point of view a window of potential conflict on row X has opened when MA1 was committed. Server A monitors Server B's Binlog as it is applied and when it reaches the point where the commit of MA1 at Server B is recorded, Server A knows that any further MB* recorded in Server B's Binlog after this cannot have happened before MA1, therefore the window of potential conflict on row X has closed.

We define the window of potential conflict on a row X as the time between the commit of a modification M1, and the Slave processing of an event in a replicated Binlog indicating that modification M1 has been applied on the other server(s) in the replication loop.

Any incoming replicated modification M2 also affecting row X while it has an open conflict window is in conflict with M1, as it must appear to have happened-before M1 to the server which committed it.

Observations about the window of potential conflict :
  • It is defined per committed modification per disjoint data set
  • It can be extended by further modifications to the same data from the same server
    The window does not close until all further modifications have been fully replicated
  • Window duration is dependent on the replication round-trip delay
    Which can vary greatly
  • Once it closes, further modifications to the same data from anywhere are safe, but will each open their own window of potential conflict.
  • From the point of view of one Server, conflicts can occur at any time until the conflict window is closed
  • From the point of view of one Server, the duration of the window of potential conflict is similar to

    Replication Propagation Delay A to B + Replication Propagation Delay B to A

    These delays may not be symmetric.
  • From the point of view of an external observer/actor, the system will detect two modifications MA1 and MB1, committed at times tMA1 and tMB1, as in-conflict (see the worked sketch after this list) if

    tMB1 - tMA1 < Replication Propagation Delay A to B

    ( A before B, but not by enough to avoid conflict )

    or

    tMA1 - tMB1 < Replication Propagation Delay B to A

    ( B before A, but not by enough to avoid conflict )
  • The window of potential conflict can only be as short as the replication propagation delay between systems, which can tend towards, but never reach zero.
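
Here is the small worked version of that timing condition, in Python. The variable names are invented; each branch corresponds to one of the two inequalities above, with the leading '0 <=' term encoding which modification committed first.

# Illustrative only : decide whether two commits fall inside each other's
# window of potential conflict, given the one-way propagation delays (seconds).

def externally_in_conflict(t_ma1, t_mb1, delay_a_to_b, delay_b_to_a):
    a_first_but_too_close = 0 <= (t_mb1 - t_ma1) < delay_a_to_b
    b_first_but_too_close = 0 <= (t_ma1 - t_mb1) < delay_b_to_a
    return a_first_but_too_close or b_first_but_too_close

# A commits 50ms before B, but propagation A->B takes 200ms : conflict.
print(externally_in_conflict(0.00, 0.05, delay_a_to_b=0.2, delay_b_to_a=0.25))  # True
# B commits 300ms after A, comfortably after MA1 has propagated : no conflict.
print(externally_in_conflict(0.00, 0.30, delay_a_to_b=0.2, delay_b_to_a=0.25))  # False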

Tracking conflict windows with a logical clock

A row's conflict window opens when a modification is committed to it, and closes when the Slave processes an event indicating that the modification was committed on the other server(s). How can we track all of these independent conflict windows? If only we had a database :)

This is solved by maintaining a per-server logical clock, which increments periodically. Each modification to a row sets a hidden metacolumn of the row to the current value of the server's logical clock. This gives each row a kind of coarse logical timestamp. When the logical clock increments, an event is included in the Binlog to record the transition. Further, all row events for modifications with logical clock value X are stored in the Binlog before any row events for modifications with logical clock value X+1.

 Server A Binlog events    ClockVal stored in DB by Modification

...
MA1                        39
MA2                        39
MA3                        39
ClockVal_A = 40
MA4                        40
MA5                        40
ClockVal_A = 41
MA6                        41


When a Slave applies the Binlog, the ClockVal events are passed through into its Binlog, and are then made available to the original server in a circular configuration.

 Server B Binlog events

...
MB1
MB2
ClockVal_A = 40
MB3
MB4
ClockVal_B = 234
MB5
MB6
ClockVal_A = 41
MB7
...



Using the Binlog ordering, we can see that ClockVal_A = 40 happened before MB3 and MB4 at Server B. This implies that MA1, MA2 and MA3 happened before MB3 and MB4 at Server B.

When applying Server B's Binlog to Server A, the Slave at Server A maintains a maximum replicated clock value, which increases as it observes its ClockVal_A events returned. When applying a row event originating from Server B, the affected row's stored clock value is first compared to the maximum replicated clock value to determine whether the row event from B conflicts with the latest committed change to the row at Server A.

The two modifications are in conflict if the stored row's clock value is greater than or equal to the maximum replicated clock value.

in_conflict = row_clockval >= maximum_replicated_clockval
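
Here's a minimal Python sketch of that bookkeeping. The names are invented; in MySQL Cluster the clock is the epoch, the per-row value lives in a hidden metacolumn, and the check runs in the Slave-side storage engine code.

# Illustrative sketch of conflict-window tracking with a per-server logical clock.

class Server:
    def __init__(self):
        self.clock = 1                   # this server's logical clock value
        self.row_clock = {}              # row key -> clock value at last local commit
        self.max_replicated_clock = 0    # highest own ClockVal seen returning in
                                         # the other server's Binlog

    def tick(self):
        # Periodic increment; a ClockVal event is written to the Binlog here.
        self.clock += 1

    def commit_local(self, row):
        # Opens (or extends) the row's window of potential conflict.
        self.row_clock[row] = self.clock

    def observe_returned_clockval(self, value):
        # The Slave sees one of our own ClockVal events in the replicated Binlog :
        # everything committed with a smaller clock value has completed the loop.
        self.max_replicated_clock = max(self.max_replicated_clock, value)

    def replicated_write_in_conflict(self, row):
        # The check from the text : in_conflict = row_clockval >= maximum_replicated_clockval
        # Rows never modified locally (default -1) cannot be in conflict.
        return self.row_clock.get(row, -1) >= self.max_replicated_clock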

Using a logical clock to track conflict windows has the following benefits :
  • Automatic update on commit of row modification, opening conflict window
  • Automatic extension of conflict window on further modification on row with open conflict window.
  • Automatic closure of conflict window on maximum replicated clock value exceeding row's stored value
  • Efficient storage cost per row - one clock value.
  • Efficient runtime processing cost - inequality comparison between maximum replicated clock value and row's stored clock value.

As you might have guessed, NDB$EPOCH uses the MySQL Cluster epoch values as a logical clock to detect conflicts. The details of this will have to wait for yet another post. In my first two posts on this subject I thought, 'one more post and I can finish describing this', but here I am at three posts and still not finished. Hopefully the next will get more concrete and finally describe the mysterious workings of NDB$EPOCH. We're getting closer, honest.

Edit 23/12/11 : Added index

Wednesday, 12 October 2011

Some MySQL projects I think are cool - Shard-Query

I've already described Justin Swanhart's Flexviews project as something I think is cool. Since then Justin appears to have been working more on Shard-Query which I also think is cool, perhaps even more so than Flexviews.

On the page linked above, Shard-Query is described using the following statements :

"Shard-Query is a distributed parallel query engine for MySQL"
"ShardQuery is a PHP class which is intended to make working with a partitioned dataset easier"
"ParallelPipelining - MPP distributed query engines runs fragments of queries in parallel, combining the results at the end. Like map/reduce except it speaks SQL directly."

The things I like from the above description :
  • Distributed
  • Parallel
  • MySQL
  • Partitioned
  • Fragments
  • Map/Reduce
  • SQL
The things that scare me :
  • PHP
My fear of PHP is most likely groundless, based on experiences circa 1998. I suspect it runs much of the real money-earning web, and perhaps brings my scribblings to you. However, the applicability of Shard-Query seems so general that to actually (or apparently) limit it to web-oriented use cases seems a shame. In any case I am not hipster enough to know which language would be better - OCaml? Dart? Only joking. I suppose that if the MySQL Proxy could do something along these lines then the language debate would be moot.

I am likely to fall foul of the lack-of-original-content test if I quote too much from the Shard-Query website, but the How-it-works section seems relevant here.

How it works

  • The query is parsed using http://code.google.com/p/php-sql-parser
  • A modified version of the query is executed on each shard.
  • The queries are executed in parallel using http://gearman.org
  • The results from each shard are combined together
  • A version of the original query is then executed over the combined results
  • All aggregation is done on the slaves (pushed down)
  • Queries with IN lists can be made into parallel queries.
  • A callback can be used for QueryRouting. You provide a partition column, and a callback which returns information pointing to the correct shard. The most convenient way to do this is with Shard-Key-Mapper

Query rewriting rules

The core of Shard-Query is its query rewriting rules, which Justin introduces in a blog post entitled 8 substitution rules for running SQL in parallel. These transforms and substitutions allow Shard-Query to execute a user supplied query across multiple database shards. A single query (SELECT) can be mapped into a query to be applied to some or all shards, and further queries used to merge the results of the per-shard queries into a final result.
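
As a rough illustration of the kind of rewrite involved (this is not Shard-Query's actual rule set or code, and the table and column names are invented), an aggregate query might be decomposed like this :

# Illustrative decomposition of one aggregate query; not Shard-Query code.

original  = "SELECT region, SUM(amount), COUNT(*) FROM sales GROUP BY region"

# Sent to every shard : each shard computes partial aggregates over its rows.
per_shard = "SELECT region, SUM(amount) AS s, COUNT(*) AS c FROM sales GROUP BY region"

# Run over the combined per-shard results : SUM becomes the SUM of partial
# SUMs, COUNT(*) the SUM of partial counts (AVG would become SUM(s) / SUM(c)).
combine   = "SELECT region, SUM(s), SUM(c) FROM per_shard_results GROUP BY region"

The rewritten per-shard queries can then be executed in parallel (Shard-Query uses Gearman workers for this, per the list above) and the combining query merges their results.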

Compared to a single system query, the consistency of the view that the sharded query executes against is less well defined, but this may well be acceptable for some applications.

On a single server

A single MySQL instance offers inter-query parallelism, but currently has very limited intra-query parallelism. Shard-Query can circumvent this by splitting a single query into multiple sharded sub-queries which can run in different MySQLD threads (as they are each submitted by different clients) to give intra-query parallelism. To me this seems more of a cool side effect and proof of reasonable implementation efficiency than a real reason to use Shard-Query. Perhaps someone out there has the perfect use case for this.

Across multiple servers

The big-name MySQL users scale-out with MySQL, storing subsets of data on separate MySQL instances. Shard-Query allows SQL queries spanning all shards to be executed. This is what scaled-out MySQL has been waiting for.

I don't think it would be a good idea to run heavy traffic through Shard-Query to access a set of sharded MySQL instances yet, but Shard-Query gives a great way to perform occasional queries across all shards. This could be great for reporting and perhaps some light mining for patterns, trends etc. The ability to query across live real time data may be a real gain.

Loose coupling and availability


Scaling out via sharding standalone database servers has many difficulties, but the independence of each shard can benefit availability relative to a more tightly coupled distributed system. The loose coupling of the MySQL instances means that it's far less likely that the failure of one shard will drag others down with it, increasing system availability. Shard-Query can give the loosely coupled shards a smoother facade. The limited set of capabilities that Shard-Query gives over the set of shards may well be more than good enough. Note that 'good enough' is a recurring theme in this 'MySQL projects I think are cool' series. Often 'best' results in expensive or unnecessary compromises as a side-effect of trying to please everybody all the time.

Yet another MySQL sharded scaleout design

Looking at Shard-Query, and the recent MySQL-NoSQL Api developments, it seems like a modern MySQL sharded scaleout design might make use of :
  • MySQL SQL APIs (PHP, JDBC, ODBC, Ruby, Python, ...)
  • NoSQL access mechanisms
    (HandlerSocket, Memcached(1,2))
  • Shard-Query for SQL reporting / analysis
Per-instance efficiency can be maximised by using the NoSQL access APIs, single-instance SQL is still available if required for the application, and a global SQL view is also available.

This combination of scalability, efficiency and SQL query-ability could be a sweet spot in the increasingly confusing multi-dimensional space of high throughput distributed databases.

Monday, 10 October 2011

Eventual consistency with transactions




In my last post I described the motivation for the new NDB$EPOCH conflict detection function in MySQL Cluster. This function detects when a row has been concurrently updated on two asynchronously replicating MySQL Cluster databases, and takes steps to keep the databases in alignment.

With NDB$EPOCH, conflicts are detected and handled on a row granularity, as opposed to column granularity, as this is the granularity of the epoch metadata used to detect conflicts. Dealing with conflicts on a row-by-row basis has implications for schema and application design. The NDB$EPOCH_TRANS function extends NDB$EPOCH, giving stronger consistency guarantees and reducing the impact on applications and schemas.

Concurrency control in a single synchronous system

MySQL Cluster is a relational system. Data is stored in tables with defined schemas of typed columns. As with any relational system, real-world concepts can be modelled in a number of ways with different trade-offs. One such consideration is the level of normalisation applied to a data model. Transactions and concurrency control ensure that some data modelled using multiple tables, rows and columns, appears to any external observer to move instantaneously between stable, self-consistent states. This is a powerful simplification, and eases the complexity burden on application writers. Each transaction provides the illusion of serialised access to the database. Multiple transactions can execute in parallel, so long as they do not interfere by accessing the same data. Where transactions do interfere, some real serialisation can occur. In practice, applications depend on the serialisation and atomicity guarantees given by transactions, often in ways not fully made explicit or understood by the application designers.

Concurrency control in independent, asynchronously replicated systems

Asynchronously replicating writes between two independent systems erodes the guarantees given by single system concurrency control. Each system maintains its transactional guarantees in parallel, and incorporates modifications from the other system asynchronously, at some time after they were originally committed. Where the same row is modified on both systems concurrently, two versions of the same row are produced, and there is no longer a single history of values for the given row. This can cause replicas to diverge. Note that the window of 'concurrency', or 'potential conflict' is related to the time taken for a committed update to be applied on all replicas. This is similar, or equivalent to the commit delay experienced by a synchronous 2-phase commit system.

These concurrent write conflicts can be detected using some form of conflict detection. On detecting a conflict, steps can then be taken to avoid divergence, and to resolve any unwanted effects of the concurrent writes.

Replica divergence, external effects and cascading impacts

Divergence can be avoided if conflicting writes can be merged in some way. Some write conflicts may be equivalent, associative or otherwise mergeable, especially if the operations are replicated rather than their resulting states. However merging requires specific schema and application knowledge to determine how to merge conflicting writes.

More generally, divergence can be avoided by rejecting one or both conflicting writes. This is the approach we have taken, with handling of rejected writes delegated to the application, where the knowledge exists to handle them via the exceptions table mechanism.

However write conflicts are handled, it is important to consider :
  1. Cascading impacts on dependent operations
    Operations based on the results of conflicting operations may themselves require handling to avoid divergence.
  2. Real world / other system effects based on conflicting writes
    Maintaining database consistency does not guarantee that real world effects have been correctly compensated.

A database system does not exist in a vacuum. Operations are performed to reflect external world or external system events. When the effects of an operation are later reverted, the real world effects may also require some compensating actions. These external world compensating actions are beyond the scope of any DBMS and are application-specific. In a real application of this technology, this is probably the most important part of the design.

Any particular conflict originates between two concurrent operations, but once a conflicting operation is committed, other operations can read its results, and commit their own, expanding the impact of the original conflict. Conflicts are discovered asynchronously, some time after the original operations are committed, so there can be a large number of subsequent operations in the replication pipeline which depend on the conflicting operations at the point they are discovered. All of the invalidated subsequent operations must be handled.

Row based conflict detection and data shearing

Using row-based conflict detection and re-alignment can counteract data divergence so that rows become consistent eventually, but this comes at the cost of eroding the atomicity of committed transactions. For example, a committed transaction which writes to three rows may, after conflict handling, have none, one, two or all three row changes reverted.

Within a single system, the two potentially visible states were :
  1. Before transaction (All rows at version 1 : Av1, Bv1, Cv1)
  2. After transaction (All rows at version 2 : Av2, Bv2, Cv2)

With row based conflict detection, ignoring the row variants we're actually conflicting with, there could be :

Before transaction : (All rows at version 1 : Av1, Bv1, Cv1)

After transaction
  1. Av2, Bv2, Cv2 (All rows at version 2)
  2. Av2, Bv2, Cv1 (Cv2 reverted)
  3. Av2, Bv1, Cv2 (Bv2 reverted)
  4. Av2, Bv1, Cv1 (Bv2, Cv2 reverted)
  5. Av1, Bv2, Cv2 (Av2 reverted)
  6. Av1, Bv2, Cv1 (Av2, Cv2 reverted)
  7. Av1, Bv1, Cv2 (Av2, Bv2 reverted)
  8. Av1, Bv1, Cv1 (Av2, Bv2, Cv2 reverted)

Depending on the concept that the distinct rows A, B and C represent, this can vastly increase the complexity of understanding the data. If A, B and C model entirely separate entities which just happened to be transactionally updated together, then there may be no problem if they fare differently in conflict detection. If they model portions of the state of a larger entity, then reasoning about the state of that entity becomes complex.

This potential chopping up of changes committed in a transaction can be described as shearing of the data model represented by the schema. In practice, the potential for shearing between rows implies that for tables with conflicts handled on a row basis, cross row consistency is not available. This in turn implies that the schema must be modified to ensure that data items which cannot tolerate relative shear are placed in the same row so that they share the same fate and remain self-consistent. This single-row limit to consistency is native and natural to some NoSQL / key-value / wide column store products, but is a weakening of the normal guarantees in a transactional system.

Requiring that schemas and applications using conflict detection can tolerate shear between any two rows is quite a heavy burden to place on applications, especially those not written with eventual consistency in mind. Is there some way to support optimistic conflict detection without breaking up committed transactions, and shearing rows?

Transaction based conflict detection


One way to avoid inter-row shearing is to perform conflict detection on a row-by-row basis, but on discovering a conflict, take action on a transaction basis. More concretely, when a row conflict is discovered, any other rows written as part of the same transaction should also be considered in-conflict by implication. This reduces the set of stable states back to the original case - all rows at version 1 or all rows at version 2.

Where a row is found to be in-conflict with some replicated row operation, a further replicated row operation on the same row should also be found to be in-conflict, until the conflict condition has been cleared. This property is implicitly implemented in the existing row based conflict detection functions.

When the scope of a conflict is extended to include all row modifications in a transaction, this implies that all following replicated row operations which affect the same rows must also be in conflict by implication. To avoid row shearing, these implied-in-conflict rows must implicate the other rows in their transactions, and those rows may in turn implicate other rows. The overall effect is that a single row conflict must cause its transaction, and all dependent transactions, to be considered to be in conflict.

From our database-centric point of view, transactions can only become dependent on each other through the data they access in the database. If transaction X updates rows A and B, and transaction Y then reads row B and updates row C, then we can say that transaction Y has a read-write dependency on transaction X via row B. We cannot tell whether there is some other out-of-band communication between transactions.

By tracking this transaction 'footprint' information, and looking for row overlaps, we can determine transaction dependencies. This is how the new NDB$EPOCH_TRANS function provides transactional conflict detection.

NDB$EPOCH_TRANS conflict detection function

The NDB$EPOCH_TRANS conflict function uses the same mechanism as the NDB$EPOCH function to detect concurrent updates to the same row across two clusters. However, once a row conflict has been detected in an operation which is part of a replicated transaction, all other operations in that replicated transaction are considered to be in conflict. Furthermore, any transactions found to be dependent on that transaction are also considered in conflict. Once the full set of in conflict transactions has been determined, the set of affected rows are handled in the same way as in NDB$EPOCH.

Specifically :
  • The replicated operations are not applied
  • The exceptions table(s) are populated with the affected primary keys
  • The affected row epochs are updated
  • Realignment Binlog events are generated to (eventually) realign the Secondary cluster

As with NDB$EPOCH, NDB$EPOCH_TRANS is asymmetric, so the Primary Cluster always wins when a conflict is detected. As with NDB$EPOCH, this allows applications needing pessimistic properties to obtain them by accessing the Primary Cluster. Applications which can handle the relaxed consistency of optimism can access either Cluster. With NDB$EPOCH_TRANS, transactions committed on the Secondary Cluster are guaranteed to be atomic, whether or not they are later found to be in conflict. Each committed transaction will either be unaffected by conflict detection, or be completely reverted. There will be no row shear.

This slightly stronger optimistic consistency guarantee may ease the implementation of relaxed consistency / eventually consistent applications. For example, where some concept is modelled by a number of different rows in different tables, any transactional modification will either be atomically applied, or not applied at all, so the relationships between the rows affected by a transaction will be preserved. The need to flatten a schema into single-row entities is reduced, although careful design is still required to get a good understanding of transaction boundaries, and the behaviour of the overall system when transactions are reverted.

Transaction dependency tracking


NDB$EPOCH_TRANS is built into the MySQL Cluster Storage Engine. It is active in the normal MySQL Slave SQL thread, as part of the normal table handler calls made when applying a replicated Binlog. The NDB$EPOCH_TRANS code in the Ndb storage engine tracks transaction dependencies based on the primary keys accessed by row events in the Binlog, and their transaction ids. If two row events have the same table and primary key values, then they affect the same row. If two events affect the same row, and are in different transactions, then the second transaction depends on the first. In this way, a transaction dependency graph is built by the MySQL Cluster Storage Engine as row events are applied by the Slave from a replicated Binlog. This graph is then used to find dependencies when a conflict is detected.
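
To illustrate, here's a rough Python sketch of that dependency tracking and the resulting conflict implication. The data structures and names are invented; the real code is C++ inside the Ndb storage engine, driven by the row events the Slave applies.

# Illustrative only : build transaction dependencies from Binlog row events,
# then find every transaction implicated by one in-conflict transaction.
from collections import defaultdict

def build_dependencies(row_events):
    # row_events : Binlog row events in order, as (tx_id, table, primary_key).
    last_writer = {}                 # (table, pk) -> tx_id of the last write
    depends_on = defaultdict(set)    # tx_id -> earlier tx_ids it depends on
    for tx_id, table, pk in row_events:
        prev = last_writer.get((table, pk))
        if prev is not None and prev != tx_id:
            depends_on[tx_id].add(prev)
        last_writer[(table, pk)] = tx_id
    return depends_on

def implicated(conflict_tx, depends_on):
    # All transactions to treat as in-conflict : the transaction itself plus
    # everything that transitively depends on it.
    dependents = defaultdict(set)
    for tx, deps in depends_on.items():
        for d in deps:
            dependents[d].add(tx)
    result, stack = set(), [conflict_tx]
    while stack:
        tx = stack.pop()
        if tx not in result:
            result.add(tx)
            stack.extend(dependents[tx])
    return result

# Tx 2 re-writes a row written by Tx 1, so a conflict on Tx 1 implicates Tx 2.
events = [(1, "t1", "X"), (1, "t1", "Y"), (2, "t1", "Y"), (3, "t1", "Z")]
print(implicated(1, build_dependencies(events)))   # {1, 2}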

A Binlog only contains WRITE_ROW, UPDATE_ROW and DELETE_ROW events. This means that we only detect dependencies between transactions which write the same rows. We do not currently track dependencies between writers and readers. For example :

Transaction A : {Write row X, Write row Y}
Transaction B : {Read row Y, Write row Z}

Binlog : {{Tx A : Wr X, Wr Y}, {Tx B : Wr Z}}

In this example, the dependency of Transaction B on Transaction A is not recorded in the Binlog, and so the Slave is not aware of it. This would result in the write to row Z not being considered in conflict, when it should be.

A future improvement is to add selective tracking of reads to the Binlog, so that Write -> Read dependencies will implicate reading transactions when a conflict is discovered.

There's more to come


Another long dry post, best consumed with your favourite drink in hand. As I mentioned last time, these functions are pushed, and available in the latest releases of MySQL Cluster. I'd be happy to hear from anyone who wants to try them out and give feedback. I've been deliberately light with implementation details thus far, as I'm saving those for yet another posting. I think that some of the implementation details are interesting from a replication point of view, even if you're not interested in these particular conflict detection algorithms. You may disagree :)

Edit 23/12/11 : Added index

Monday, 3 October 2011

Eventual consistency with MySQL




tl;dr : New 'automatic' optimistic conflict detection functions available giving the best of both optimistic and pessimistic replication on the same data

MySQL replication supports a number of topologies, and one of the most interesting is an active-active, or master-master topology, where two or more Servers accept read and write traffic, with asynchronous replication between them.

This topology has a number of attractions, including :
  • Potentially higher availability
  • Potentially low impact on read/write latency
  • Service availability insensitive to replication failures
  • Conceptually simple

However, data consistency is hard to maintain in this environment. Data, and access to it, must usually be partitioned or otherwise controlled, so that the consistency of reads is acceptable, and to avoid lost writes or badly merged concurrent writes. Implementing a distributed data access partitioning scheme which can safely handle communication failures is not simple.

Relaxed read consistency

Relaxed read consistency is a fairly well understood concept, with many Master-Slave topologies deployed where some read traffic is routed to the Slave to offload the Master and get 'read scaling'.
Generally this is acceptable as :
  1. A Read-only Slave's state is self-consistent. It is a state which, at least logically, existed on the Master at some time in the past.
  2. The reading application can tolerate some level of read-staleness w.r.t. the most recently committed writes to the Master

A surprisingly large number of applications can manage with a stale view as long as it is self-consistent.

Read-your-writes


Applications requiring 'read your writes' consistency (or session consistency) must either read from the Master, or wait until the Slave has replicated up to at least the point in time where the application's last write committed on the Master before reading from it. It is simpler and less delay-prone to just read from the Master, but this increases the load on the Master, reducing the ability of a system to read-scale. When the Master is unavailable, some sort of failover is required, and therefore, some sort of recovery process is also required.
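
For the 'wait for the Slave to catch up' option, stock MySQL provides the necessary pieces : SHOW MASTER STATUS reports the Binlog position reached by the write, and MASTER_POS_WAIT() on the Slave blocks until that position has been applied. Below is a minimal sketch, assuming MySQL Connector/Python, with invented schema and credentials and no error handling.

# Minimal read-your-writes sketch; hosts, credentials and schema are placeholders.
import mysql.connector

master = mysql.connector.connect(host="master-host", user="app", password="secret", database="appdb")
slave  = mysql.connector.connect(host="slave-host",  user="app", password="secret", database="appdb")

# 1. Commit the write on the Master and note the Binlog position it reached.
mcur = master.cursor()
mcur.execute("UPDATE accounts SET balance = balance - 10 WHERE id = 42")
master.commit()
mcur.execute("SHOW MASTER STATUS")
binlog_file, binlog_pos = mcur.fetchone()[0:2]

# 2. On the Slave, wait until replication has applied at least that position;
#    a read after this point is guaranteed to see our own write.
scur = slave.cursor()
scur.execute("SELECT MASTER_POS_WAIT(%s, %s, 30)", (binlog_file, binlog_pos))
result = scur.fetchone()[0]
if result is None or result < 0:
    raise RuntimeError("Slave did not catch up within the timeout")
scur.execute("SELECT balance FROM accounts WHERE id = 42")
print(scur.fetchone()[0])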

Partitioned Active-Active / Balanced Master-Slave

Rather than treating a whole replica as either Master or Slave, we can have each replica be both a Master and a Slave. The partitioning could be on database level, table level, or some function of the rows contained in tables, perhaps some key prefix which maps to application entities. Balancing the Master/Slave role in this way allows the request load to be balanced, reducing issues with a single system providing the Master 'role' having to do more work.

In this configuration, rather than talking about Master and Slave, it makes more sense to talk about some partition of data being 'Active' on one replica, and 'Backup' on the others. Read requests routed to the Active replica will be guaranteed to get the latest state, whereas the Backup replicas can potentially return stale states. Write requests should always be routed to the Active replica to avoid potential races between concurrent writes.
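
As a trivial illustration, the routing might look something like the Python sketch below. The hash-based partitioning function and replica names are invented; real deployments usually hide this behind a data access layer or proxy.

# Illustrative request routing for a partitioned Active/Backup scheme.
import zlib

REPLICAS = ["replica_a", "replica_b"]

def active_replica_for(partition_key):
    # Each partition key is Active on exactly one replica, Backup elsewhere.
    return REPLICAS[zlib.crc32(partition_key.encode()) % len(REPLICAS)]

def route(request_type, partition_key, stale_read_ok=False):
    active = active_replica_for(partition_key)
    if request_type == "write" or not stale_read_ok:
        return active                                  # latest state, no write races
    return next(r for r in REPLICAS if r != active)    # stale-tolerant read offload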

Implementing a partitioned replicated system like this generally requires application knowledge to choose a partitioning scheme where expected transaction footprints align with partition boundaries, and cross-partition transactions are rare/non-existent. Additionally, it requires application modification, or a front-end routing mechanism to ensure that requests are correctly routed. The routing system must also be designed to re-route in cases of communication or system failure, to ensure availability, and avoid data divergence. After a failure, recovery must take care to ensure replicas are resynchronised before restoring Active status to partitions in a recovered replica.

Implementing a partitioned replicated system with request routing, failover and recovery is a complex undertaking. Additionally, it can be considered a pessimistic system. For embarrassingly parallel applications, with constrained behaviours, most transactions are non-overlapping in their data footprint in space and (reasonable lengths of) time. Enforced routing of requests to a primary replica adds cost and complexity that is most often unnecessary. Is it possible to take a more optimistic approach?

Optimistic Active-Active replication

An optimistic active-active replication system assumes that conflicting operations are rare, and prefers to handle conflicts after they happen rather than making conflicts impossible by mapping them to delays or overheads all of the time. The one-time cost of recovering from a conflict after it happens may be higher than the one-time cost of preventing a conflict, but this can be a win if conflicts are rare enough.

Practically, optimistic active-active replication involves allowing transactions to execute and commit at all replicas, and asynchronously propagating their effects between replicas. When applying replicated changes to a replica, checks are made to determine whether any conflicts have occurred.

Benefits of optimism include :
  • Local reads - low latency, higher availability
  • Local writes - low latency, higher availability
  • No need to route requests, failover, recover
    Recovery from network failure is the same as for normal async replication - catch up the backlog.

A pessimist is never disappointed, as they always expect the worst, but an optimist is occasionally (often?) disappointed. With active-active replication, this disappointment can include reading stale data, as with relaxed read consistency, or having committed writes later rejected due to a conflict. This is the price of optimism. Not all applications are suited to the slings and arrows inherent in optimism. Some prefer the safety of a pessimistic outlook.

Benefits of pessimism include :
  • Only durable data returned by reads
  • Committed writes are durable
MySQL Cluster replication has supported symmetric optimistic conflict detection functions since the 6.3 release. These provide detection of conflicts for optimistic active-active replication, allowing data to be written on any cluster, and write-write conflicts to be detected for handling. The functions use an application-defined comparison value to determine when a conflict has occurred, and optionally, which change should 'win'. This is very flexible, but can be difficult to understand, and requires application and schema changes before it can be used.

When presented with an either-or decision, why not ask for both? Is it possible to have the benefits of both optimistic and pessimistic replication? Can we have them both on the same data at the same time?


Asymmetric optimistic Active-Active replication

I have recently been working on new asymmetric conflict detection functions for MySQL Cluster replication. These functions do not require schema or application modifications. They are asymmetric in that one data replica is regarded as the Active replica. However, unlike a pessimistic partitioned replicated system, writes can be made at Active or Backup replicas - they do not have to be routed to the Active replica. Writes made at the Backup replica will asynchronously propagate to the Active replica and be applied, but only if they do not conflict with writes made concurrently at the Active replica.

Having a first class Active replica and a second class Backup replica may seem like a weakness. However, it allows optimistic and pessimistic replication to be mixed, on the same data for different use-cases.

Where a pessimistic approach is required, requests can be routed to the Active replica. At the Active replica, they will be guaranteed to read durable data, and once committed, writes will not be rejected later.

Where an optimistic approach is acceptable, requests can also be routed to the Backup replica. At the Backup replica, committed writes may later be rejected, and reads may return data which will later be rejected. The potential for disappointment is there, and applications must be able to cope with that, but in return, they can read and write locally, with latency and availability independent of network conditions between replicas.

A well understood application and schema can use pessimistic replication, with request routing, where appropriate, and write-anywhere active-active where the application and schema can cope with the relaxed consistency.

New conflict functions - NDB$EPOCH, NDB$EPOCH_TRANS

The new NDB$EPOCH function implements asymmetric conflict detection, on a row basis. One replica of a table is considered Active (or Primary), and the other(s) are Backup (or Secondary). Writes originating from the Backup replica are checked at the Active replica to ensure that they don't conflict with concurrent writes originating at the Active replica. If they do conflict, then they are rejected, and the Backup is realigned to the Active replica's state. In this way, data divergence is avoided, and the replicated system eventually becomes consistent.

The conflict detection, and realignment to give eventual consistency all occur asynchronously as part of the normal MySQL replication mechanisms.

As with the existing conflict detection functions, an exceptions table can be defined which will be populated with the primary keys of rows which have experienced a conflict. This can be used to take application specific actions when a conflict is detected.

Unlike the existing conflict detection functions, no schema changes or application changes are required. However, as with any optimistic replication system, applications must be able to cope with the relaxed consistency on offer. Applications which cannot cope, can still access the data, but should route their requests to Active replicas only, as with a more traditional pessimistic system.
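
For completeness, here's a hedged sketch of what enabling NDB$EPOCH for a table looks like. Conflict functions are selected per table via the mysql.ndb_replication system table, and the optional exceptions table follows a naming and layout convention - check the exact column layouts and binlog_type value against the MySQL Cluster documentation for your release. The sketch is driven from Python only for consistency with the other examples.

# Hedged sketch : enable NDB$EPOCH for table test.t1 and create an exceptions
# table. Verify column layouts and values against the MySQL Cluster docs.
import mysql.connector

conn = mysql.connector.connect(host="primary-mysqld", user="root", password="secret")
cur = conn.cursor()

# Register the conflict function for db 'test', table 't1', all server ids (0).
# binlog_type 0 means 'use the server default binlog format for this table'.
cur.execute(
    "INSERT INTO mysql.ndb_replication "
    "  (db, table_name, server_id, binlog_type, conflict_fn) "
    "VALUES ('test', 't1', 0, 0, 'NDB$EPOCH()')"
)

# Optional exceptions table, populated with the primary keys of rows that
# experience a conflict; 'id' stands in for t1's primary key column(s).
cur.execute(
    "CREATE TABLE test.t1$EX ("
    "  server_id INT UNSIGNED, master_server_id INT UNSIGNED,"
    "  master_epoch BIGINT UNSIGNED, count INT UNSIGNED,"
    "  id INT NOT NULL,"
    "  PRIMARY KEY (server_id, master_server_id, master_epoch, count)"
    ") ENGINE=NDB"
)
conn.commit()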

As these functions build on the existing MySQL Cluster asynchronous replication, the existing features are all available :
  • Slave batching performance optimisations
  • High availability - redundant replication channels
  • Transactional replication and progress tracking
  • Normal MySQL replication features : DDL replication, Binlog, replicate to other engines etc.
Ok, that's long enough for one post - I'll describe NDB$EPOCH_TRANS and its motivations in a follow-up. If you're interested in trying this out, then download the latest versions of MySQL Cluster. If you're interested in the optimistic replication concept in general, I recommend reading Saito and Shapiro's survey.

Edit 23/12/11 : Added index