Load balancing (connecting to multiple nodes)

Qbox clusters consist of one or more nodes, with each node running on an isolated VM (or server) in your chosen data center. Each node gets a unique public IP address and, where available, a private IP address. Each node is also assigned a hostname that resolves to its public IP; the hostnames share a common prefix, like so:

  • 522c0fae........000.qbox.io (node 1)
  • 522c0fae........001.qbox.io (node 2)
  • 522c0fae........002.qbox.io (node 3)
  • ...
  • 522c0fae........019.qbox.io (node 20)

You can use any node's endpoint to communicate with the cluster, and the responses reflect the cluster as a whole. For example, indexing requests sent to node 2 will not necessarily have node 2 as the data's destination; Elasticsearch routes the data to an appropriate node. Similarly, search requests sent specifically to node 4 return results for the entire cluster, not only the matching data on node 4.
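
As a rough illustration (a sketch using the Elasticsearch low-level Java REST client; the hostnames, port, and scheme below are placeholders, and authentication is omitted), issuing the same count request against two different node endpoints returns the same cluster-wide figure:

    import org.apache.http.HttpHost;
    import org.apache.http.util.EntityUtils;
    import org.elasticsearch.client.Request;
    import org.elasticsearch.client.Response;
    import org.elasticsearch.client.RestClient;

    public class AnyNodeExample {
        public static void main(String[] args) throws Exception {
            // Placeholder endpoints; substitute your cluster's node hostnames.
            String[] nodes = {"node-000.example.qbox.io", "node-003.example.qbox.io"};

            for (String host : nodes) {
                try (RestClient client = RestClient.builder(new HttpHost(host, 80, "http")).build()) {
                    // A count against any single node reflects the whole cluster, because
                    // Elasticsearch fans the request out to the relevant shards on other nodes.
                    Response response = client.performRequest(new Request("GET", "/_count"));
                    System.out.println(host + " -> " + EntityUtils.toString(response.getEntity()));
                }
            }
        }
    }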

It's possible to put a load balancer in front of the endpoints to distribute requests, but we don't recommend it. Even when the load balancer sits on the local (data center) network, it introduces another point of network indirection for every Elasticsearch request.

Most Elasticsearch clients support client-side load balancing and accept an array of hosts at initialization (that is, you list the endpoint for each node in your application code). This is generally much more efficient than a hosted load balancer, since it avoids additional network transit time on each request.
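
A minimal sketch of that pattern, again using the low-level Java REST client (hostnames, port, and scheme are placeholders): pass every node endpoint to the builder, and the client distributes requests across them.

    import org.apache.http.HttpHost;
    import org.elasticsearch.client.Request;
    import org.elasticsearch.client.RestClient;

    public class ClientSideLoadBalancing {
        public static void main(String[] args) throws Exception {
            // List the endpoint for each node; the client round-robins requests
            // across them and temporarily skips hosts that fail.
            RestClient client = RestClient.builder(
                    new HttpHost("node-000.example.qbox.io", 80, "http"),
                    new HttpHost("node-001.example.qbox.io", 80, "http"),
                    new HttpHost("node-002.example.qbox.io", 80, "http")
            ).build();

            client.performRequest(new Request("GET", "/_cluster/health"));

            client.close();
        }
    }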

Here's an important consideration: for Java clients only, you can start the client as a node client. A node client joins the cluster and maintains a copy of the cluster state (the routing table), so it communicates directly with the nodes holding the relevant shards and avoids unnecessary "double hops" on indexing. In other words, a Java client configured as a node client gives you an inexpensive, always-up-to-date load balancer.
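
As a sketch only, using the older NodeBuilder API (Elasticsearch 1.x/2.x, since removed from the Java client): the cluster name below is a placeholder and discovery settings are omitted.

    import org.elasticsearch.client.Client;
    import org.elasticsearch.node.Node;

    import static org.elasticsearch.node.NodeBuilder.nodeBuilder;

    public class NodeClientExample {
        public static void main(String[] args) throws Exception {
            // Join the cluster as a client-only node: it holds no data and is not
            // master-eligible, but it receives the cluster state (routing table),
            // so requests go directly to the nodes that hold the target shards.
            // Discovery settings (e.g. a unicast host list) would normally be
            // supplied via elasticsearch.yml or Settings; they are omitted here.
            Node node = nodeBuilder()
                    .clusterName("my-qbox-cluster")   // placeholder; must match your cluster
                    .client(true)
                    .node();
            Client client = node.client();

            // ... index and search with `client` as usual ...

            node.close();
        }
    }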

For some, the appeal of a remote load balancer is its ability to detect downed nodes and retry requests. Qbox implements a similar capability internally, and you can read more about failover with multiple nodes.


Note: Many hosted datastore solutions employ a "shared" architecture, in which users share the computational resources of larger host machines. This was our original approach some years ago: we abstracted the nodes from the user and provided a single endpoint. Many services still use this approach. Through extensive experience, we at Qbox have learned that a single-tenant approach is far more appropriate for Elasticsearch.

Since most cloud infrastructure is built on hypervisors, some may astutely note that we're still technically operating a "shared" architecture. That's true, but many hosted datastore solutions add yet another layer of (potentially performance-impeding) virtualization on top, such as OS containers or shared processes.
