Cloud Bigtable Schema Tips: Key Salting

Cloud Bigtable is a low-latency, high-throughput NoSQL database. With NoSQL, you want to design a schema that can scale and adapt as your business grows. When working with large real-world datasets, there will likely be access-pattern outliers with significantly more activity than the rest, and these require extra planning. In this article, we'll look at how to optimize a Bigtable schema to improve performance for highly active rows in an otherwise well-balanced schema.

Row key design refresher

Bigtable performs best when the throughput is evenly distributed across the entire row key space and can spread across all the nodes. Bigtable rows are physically stored in tablets containing groups of contiguous row keys, and each tablet is distributed to the available nodes. If rows on the same tablet are receiving a disproportionately large percentage of requests compared to other tablets in that node, that can impact performance.

Typically, row keys are designed to be optimized for particular queries. For example, to have queries centered around individual users you may put a user id at the beginning, like so: user_id-xxx-yyy. When some users are significantly more active than others, such as the case for celebrity accounts, writes and reads from their rows could cause hotspotting by putting too much pressure on specific nodes.

If we can distribute the logical row by physically dividing it amongst multiple tablets, then the rows can get balanced across the available nodes and reduce the hotspot.

Key prefix salting

A well-distributed user id would typically work as a row key prefix, so we can use this as the starting point for our row key design:
user_id-xxx-yyy

One strategy to distribute this unbalanced throughput across all Bigtable nodes is to prepend an additional value to the row key design:
0-user_id-xxx-yyy
1-user_id-xxx-yyy

This example has two physical rows corresponding to one logical row which divides the throughput in half. This will distribute all the rows for a particular user id across the rest of the keyspace. Since their prefix is different, they should be able to live on different tablets and are more likely to be hosted on multiple nodes. Note that it is possible for both prefixes to be in the same node or for one prefix to be split into multiple nodes since this setup’s goal is to provide more options to the load balancing mechanism.

Choosing a prefix

It's important to choose a prefix that doesn't add much complexity to requests. If we used random prefixes, each get request would turn into multiple get requests to ensure the correct row was located. If the prefix is deterministic from the row key, single-row read and write requests need only minimal changes.

If we would like N divisions, we can take modulo N of the hash of the entire existing row key. We will also refer to N as our salt range.

int prefix = Math.floorMod(rowKey.hashCode(), saltRange); // floorMod keeps the prefix non-negative
String saltedRowKey = prefix + "-" + rowKey;

A point lookup and write will still work as the physical key can be computed from the logical key. Salting won’t eliminate the hotspots, but it spreads them into N hotspots of strength 1/N. These less severe hotspots can be more easily processed by individual nodes.
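The snippets later in this article call a getSaltedRowKey helper that isn't shown; here is a minimal sketch of what it might look like (the class name Salting is our own):

```java
public class Salting {
  // Compute the salt prefix from the entire row key. Math.floorMod keeps the
  // result in [0, saltRange) even when hashCode() returns a negative value.
  public static String getSaltedRowKey(String rowKey, int saltRange) {
    int prefix = Math.floorMod(rowKey.hashCode(), saltRange);
    return prefix + "-" + rowKey;
  }
}
```

Because the prefix is a pure function of the logical key, calling it twice with the same arguments always yields the same physical key.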

Prefix options

If you have common scans over prefixes that you would like to stay intact, you can also hash just part of the row key rather than the entire row key.

For a row key of the format user_id-site-timestamp, you might want efficient scans over user_id and site combinations. Here, we can leave off the timestamp when creating the hash, so the time-series data for those combinations will always be grouped together.

String rowKeyBase = rowKey.substring(0, rowKey.lastIndexOf("-"));
int prefix = Math.floorMod(rowKeyBase.hashCode(), saltRange); // floorMod keeps the prefix non-negative
String saltedRowKey = prefix + "-" + rowKey;

Rows that share a frequently scanned logical prefix can therefore still be scanned efficiently.

This strategy is less resistant to hotspots—the same problem that the salting strategy is supposed to mitigate can come up again if individual user_id, site combinations get significant access.
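Wrapped as a helper, the partial-key approach might look like this (the class and method names are ours); note that two keys with the same user_id-site base always receive the same prefix:

```java
public class PartialSalting {
  // Salt using only the user_id-site base of a user_id-site-timestamp key,
  // so every timestamp for that combination lands under the same prefix
  // and stays contiguous for scans.
  public static String saltOnBase(String rowKey, int saltRange) {
    String rowKeyBase = rowKey.substring(0, rowKey.lastIndexOf("-"));
    int prefix = Math.floorMod(rowKeyBase.hashCode(), saltRange);
    return prefix + "-" + rowKey;
  }
}
```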

Implementation

To implement this in your code, you’ll need to change the areas where you are making requests to Bigtable data. You can view the full source code example on GitHub.

Writing

Using this new technique, if you want to write data, follow these steps:

  1. Take the row key you intend to write to
  2. Compute the prefix using your hash function
  3. Construct the salted row key by concatenating the prefix and row key
  4. Then use the salted row key for writing your data

You will need to ensure that you integrate this flow everywhere you write data.

In Java, it would look something like this:

String saltedRowKey = getSaltedRowKey(rowKey, SALT_RANGE);
RowMutation rowMutation =
    RowMutation.create(tableId, saltedRowKey)
        .setCell(COLUMN_FAMILY, qualifier, value); // your column family, qualifier, and value
dataClient.mutateRow(rowMutation);

Reading

Gets

To read individual rows in a table with salted keys, you follow the same initial steps as when writing the data:

  1. Take the row key you intend to read
  2. Compute the prefix using your hash function
  3. Construct the salted row key by concatenating the prefix and row key
  4. Then use the salted row key for reading your data

Since the physical row key is computed deterministically from the logical row key, only one read needs to be issued for each logical key.

In Java, it would look something like this:

Row row = dataClient.readRow(tableId, getSaltedRowKey(rowKey, SALT_RANGE));

Scans

You can follow these steps for each scan:

  1. Take the row key prefix you intend to scan
  2. For each salt value from 0 to N-1 (each potential salt option):
    1. Construct a salted prefix by concatenating the salt value and the row key prefix
    2. Issue a prefix scan with that salted prefix
    3. Run these scans in parallel
  3. Combine the results of all the scans

Let’s look at an example. Say you wanted to get all the data for a user and one subcategory; you would do a prefix scan on “user_id-xxx-“. If you’re working with salted rows, you would need one prefix scan per salt value. If our salt range is 4, then we would do 4 prefix scans:

  • 0-user_id-xxx-
  • 1-user_id-xxx-
  • 2-user_id-xxx-
  • 3-user_id-xxx-

For the best performance you would want to issue each scan in parallel rather than sending all the prefixes into one request. Since the requests are done in parallel, the rows may not be returned in sorted order. If row order is important you will have to do some additional sorting once the results are received.
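As a sketch of that additional sorting: strip the salt prefix and sort by the remaining logical key (the helper names here are ours):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class ScanMerge {
  // Drop the salt prefix: everything up to and including the first '-'.
  static String logicalKey(String saltedKey) {
    return saltedKey.substring(saltedKey.indexOf('-') + 1);
  }

  // Sort the merged results of the parallel scans back into logical key order.
  public static List<String> sortByLogicalKey(List<String> saltedKeys) {
    List<String> sorted = new ArrayList<>(saltedKeys);
    sorted.sort(Comparator.comparing(ScanMerge::logicalKey));
    return sorted;
  }
}
```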

Because the physical row keys are no longer a contiguous range, these scans may consume more Bigtable CPU, which is an important consideration when choosing a salting factor for a scan-heavy workload. Large scans, however, may be more performant, as more resources can be used in parallel to serve the request.

In Java, it would look something like this:

List<Query> queries = new ArrayList<>();
for (int i = 0; i < SALT_RANGE; i++) {
  queries.add(Query.create(tableId).prefix(i + "-" + prefix));
}

List<ApiFuture<List<Row>>> futures = new ArrayList<>();
for (Query q : queries) {
  futures.add(dataClient.readRowsCallable().all().futureCall(q));
}

List<Row> rows = new ArrayList<>();
for (ApiFuture<List<Row>> future : futures) {
  rows.addAll(future.get());
}

for (Row row : rows) {
  // Access your row data here.
}

Forward-looking migrations

It can be difficult to make a large change to existing datasets, so one way to migrate is only applying the salt moving forward. If you have timestamps at the end of the key, change the code to salt row keys past a certain fixed point in time, and just use an unsalted key for old/existing keys.
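As a hedged sketch of that cutover, assuming a row key of the form user_id-site-timestamp with a millisecond timestamp as the final segment, and a hypothetical cutover instant of our own choosing:

```java
public class MigrationSalting {
  // Hypothetical cutover: keys with timestamps at or after this instant get
  // salted; older/existing keys keep their original, unsalted form.
  static final long CUTOVER_MILLIS = 1640995200000L; // assumed cutover: 2022-01-01 UTC
  static final int SALT_RANGE = 4;

  // Assumes the key ends in a millisecond timestamp after the last '-'.
  public static String maybeSalt(String rowKey) {
    long ts = Long.parseLong(rowKey.substring(rowKey.lastIndexOf('-') + 1));
    if (ts < CUTOVER_MILLIS) {
      return rowKey; // old data stays unsalted
    }
    int prefix = Math.floorMod(rowKey.hashCode(), SALT_RANGE);
    return prefix + "-" + rowKey;
  }
}
```

Reads and scans then apply the same timestamp check to decide whether to use the salted or unsalted form of a key.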


By: Greg Colella (Software Engineer, Cloud Bigtable) and Billy Jacobson (Developer Advocate, Cloud Bigtable)
Source: Google Cloud Blog


