I recently read an article by a DataStax solution architect stating that to get the best insert performance from unlogged BatchStatements, all statements in the batch should belong to the same partition, so that the coordinator node doesn’t have to re-route statements for partitions it doesn’t own. There is a problem with this advice, however (as noted in my comments at the bottom of that article): as of this writing, the DataStax C# driver uses RoundRobinPolicy as its default load balancing policy, in contrast to the Java driver, which uses a TokenAwarePolicy wrapping a DCAwareRoundRobinPolicy. This is now acknowledged in this bug.

From https://github.com/datastax/csharp-driver/blob/c891b57c9f3cf52a75ccb888bf76fe0dad452afd/src/Cassandra/Policies/Policies.cs:

/// <summary>
///  The default load balancing policy. <p> The default load balancing policy is
///  <link>RoundRobinPolicy</link>.</p>
/// </summary>
public static ILoadBalancingPolicy DefaultLoadBalancingPolicy
{
	get
	{
		return new RoundRobinPolicy();
	}
}

From https://github.com/datastax/java-driver/blob/2.1/driver-core/src/main/java/com/datastax/driver/core/policies/Policies.java:

/**
 * The default load balancing policy.
 * <p>
 * The default load balancing policy is {@link DCAwareRoundRobinPolicy} with token
 * awareness (so {@code new TokenAwarePolicy(new DCAwareRoundRobinPolicy())}).
 *
 * @return the default load balancing policy.
 */
public static LoadBalancingPolicy defaultLoadBalancingPolicy() {
	// Note: balancing policies are stateful, so we can't store that in a static or that would screw thing
	// up if multiple Cluster instance are started in the same JVM.
	return new TokenAwarePolicy(new DCAwareRoundRobinPolicy());
}

The problem here is that even if you do group batches by partition, RoundRobinPolicy does nothing to ensure the coordinator actually owns that partition, so the effort is wasted. Thankfully the C# driver does include a TokenAwarePolicy, but how to use it, and especially how to set the routing keys of BatchStatements, seems to be completely undocumented. I had to trace through the driver code to find out how it works, so hopefully this article will save somebody else the trouble until DataStax brings the C# driver in line with the Java driver.

First, set the policy and connect:

ILoadBalancingPolicy childPolicy = Policies.DefaultPolicies.LoadBalancingPolicy;
ILoadBalancingPolicy tokenAwarePolicy = new TokenAwarePolicy(childPolicy);

Cluster cluster = Cluster.Builder()
	.AddContactPoint("host")
	.WithLoadBalancingPolicy(tokenAwarePolicy)
	.Build();
ISession session = cluster.Connect("keyspace");
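
Note that wrapping the C# driver’s default (plain RoundRobinPolicy) is still not quite the same as the Java driver’s default, which is token awareness over a DC-aware policy. The C# driver also ships a DCAwareRoundRobinPolicy, so you can mirror the Java behaviour if you prefer; "local_dc" below is a placeholder for your own data center name:

// Token awareness over DC-aware round robin, matching the Java driver's default.
// "local_dc" is a placeholder; substitute your cluster's local data center name.
ILoadBalancingPolicy tokenAwarePolicy =
	new TokenAwarePolicy(new DCAwareRoundRobinPolicy("local_dc"));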

Create an unlogged BatchStatement and fill it with data for a single partition:

PreparedStatement stmt = session.Prepare("...");
BatchStatement batch = new BatchStatement();
batch.SetBatchType(BatchType.Unlogged);
batch.Add(stmt.Bind(...));
...
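
For concreteness, here is what that might look like. The table, column names, and bound values below are invented for illustration; the only important property is that every row in the batch shares the same compound partition key (a string and an int, matching the routing key built in the next step):

// Hypothetical table:
// CREATE TABLE sensor_readings (
//     sensor_name text, sensor_group int, reading_time timestamp, value double,
//     PRIMARY KEY ((sensor_name, sensor_group), reading_time));
PreparedStatement stmt = session.Prepare(
	"INSERT INTO sensor_readings (sensor_name, sensor_group, reading_time, value) " +
	"VALUES (?, ?, ?, ?)");

BatchStatement batch = new BatchStatement();
batch.SetBatchType(BatchType.Unlogged);

// Every row shares the partition key (keyPart1, keyPart2), so the whole
// batch lands on a single partition.
string keyPart1 = "sensor-42";
int keyPart2 = 7;
for (int i = 0; i < 100; i++)
{
	batch.Add(stmt.Bind(keyPart1, keyPart2, DateTimeOffset.UtcNow.AddSeconds(i), i * 0.5));
}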

Create and set the routing keys. This example assumes a compound partition key consisting of a string and an int, and uses my own ‘Util’ class:

// One RoutingKey component per partition key column, in the same order
// as the partition key is declared in the table
RoutingKey[] routingKey = new RoutingKey[] {
  new RoutingKey() { RawRoutingKey = Util.StringToBytes(keyPart1) },
  new RoutingKey() { RawRoutingKey = Util.Int32ToBytes(keyPart2) }
};
batch.SetRoutingKey(routingKey);
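
With the routing key in place, TokenAwarePolicy can put replicas that own the partition at the front of the query plan when you execute the batch:

// Execute the batch; ExecuteAsync is also available if you want to
// pipeline many batches without blocking.
session.Execute(batch);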

For information on how to properly encode different types for keys, have a look at TypeCodec.cs in the driver source; here are some examples to get you started:

  • Int32 is encoded as a big-endian byte array:
    public static byte[] Int32ToBytes(int value)
    {
    	return new[]
    	{
    		(byte) ((value & 0xFF000000) >> 24),
    		(byte) ((value & 0xFF0000) >> 16),
    		(byte) ((value & 0xFF00) >> 8),
    		(byte) (value & 0xFF)
    	};
    }
    
  • Strings are encoded as a UTF-8 byte array:
    public static byte[] StringToBytes(string value)
    {
    	return Encoding.UTF8.GetBytes(value);
    }
    

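Other fixed-width numeric types follow the same big-endian pattern. As one more example (my own addition, not part of the original Util class, but consistent with what TypeCodec.cs does for bigint), a long could be encoded like this:

public static byte[] Int64ToBytes(long value)
{
	// bigint is serialized as an 8-byte big-endian array
	byte[] bytes = BitConverter.GetBytes(value);
	if (BitConverter.IsLittleEndian)
		Array.Reverse(bytes);
	return bytes;
}
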
It can be hard to tell whether your routing logic is working correctly, especially when you have compound keys of different types. I ended up putting a breakpoint inside NewQueryPlan() in TokenAwarePolicy.cs and validating the chosen node against the output of the nodetool getendpoints command, and I’d suggest you do the same:

nodetool getendpoints keyspace table keyPart1:keyPart2
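
If you’d rather verify from client code than under a debugger, the RowSet returned by Execute reports which host coordinated the query, and you can compare that against the nodetool output (the exact property, Info.QueriedHost, may vary between driver versions):

RowSet result = session.Execute(batch);
// The coordinator the driver actually picked; with token-aware routing
// working, it should be one of the endpoints nodetool reports.
Console.WriteLine(result.Info.QueriedHost);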

Whether or not any of this is worth the effort is a whole other question, and my personal experience tells me that it probably isn’t. In my time working with a large Cassandra cluster, I have consistently gotten better performance from larger batches of heterogeneous data than from smaller batches of partitioned data. In the future I hope to write a whole article on the topic of Cassandra insert performance and back it up with real numbers.

That’s all for now; I hope somebody finds this useful. If you have any comments, please leave them below.