
      Graph Sharding and Storage

      Overview

      The graph data is physically stored on the shard servers that constitute the Ultipa database deployment. Depending on your setup, you can run one or multiple shard servers.

      When creating a graph, you can designate one or multiple shard servers to store its nodes and edges in a distributed manner. This sharded architecture enables horizontal scaling of your data volume while maintaining high-performance querying.

      Graph Sharding

      The following statement creates an open graph g1 and distributes its data across three shards [1,2,3] using the CityHash64 hash function:

      CREATE GRAPH g1 ANY PARTITION BY HASH(CityHash64) SHARDS [1,2,3]
      

      The keyword PARTITION BY specifies the hash function, and SHARDS specifies the shard ID list:

      • Hash function: A hash function (Crc32, Crc64WE, Crc64XZ, or CityHash64) computes the hash value for the sharding key (i.e., nodes' _id), which is essential for sharding the graph data. For more information, refer to Crc and CityHash.
      • Shard ID list: A list of shard server IDs indicating where the graph data will be stored.

      Both keywords are optional. By default, the graph data is distributed across all shards using Crc32.
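      As a rough illustration of how hash-based sharding places data (a conceptual sketch only, not Ultipa's internal implementation), each node's `_id` is run through the hash function and the result is mapped onto the configured shard ID list, for example via a modulo:

```python
import zlib

def assign_shard(node_id: str, shard_ids: list[int]) -> int:
    """Illustrative only: hash the sharding key (the node's _id) with
    CRC32 and map the hash value onto the shard ID list."""
    h = zlib.crc32(node_id.encode("utf-8"))
    return shard_ids[h % len(shard_ids)]

shards = [1, 2, 3]
print(assign_shard("user:alice", shards))  # always one of 1, 2, or 3
```

      Because the placement depends only on the key and the shard list, every server computes the same shard for the same `_id` without any coordination.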

      The following statement creates a typed graph g2 and stores its data on shard [1] only:

      CREATE GRAPH g2 { 
        NODE User ({name STRING, age UINT32}),
        NODE Club ({name STRING, score FLOAT}),
        EDGE Joins ()-[{joinedDate DATE}]->()
      }
      SHARDS [1]
      

      Graph Data Migration

      Graph data migration may become necessary over time: you might move a graph to more shards when the existing ones become overloaded, or distribute its data across additional geographical locations. Conversely, migrating to fewer shards can free up underutilized resources, reduce costs, and simplify management.

      To migrate graph g3 to shards [1,4,5]:

      ALTER GRAPH g3 ON SHARDS [1,4,5]
      

      This is equivalent to:

      ALTER GRAPH g3 ON SHARDS [1,4,5] PARTITION CONFIG {strategy: "balance"}
      

      The default migration strategy is balance, which redistributes the graph data evenly across the new shards. In addition, you may specify one of the following strategies:

      • quickly_expand: Quickly migrates some data from existing shards to newly added shards. The new shard list must include all current shards.
      • quickly_shrink: Quickly migrates data from removed shards to the remaining shards. The new shard list must be a sublist of the current shards.
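      The shard-list preconditions above can be sketched as a simple validation step (a hypothetical helper for illustration, not part of Ultipa's API):

```python
def validate_migration(current: set[int], new: set[int], strategy: str) -> bool:
    """Check the shard-list precondition for each migration strategy."""
    if strategy == "quickly_expand":
        # The new shard list must include all current shards.
        return current <= new
    if strategy == "quickly_shrink":
        # The new shard list must be a sublist of the current shards.
        return new <= current
    # "balance" places no constraint on the new shard list.
    return strategy == "balance"

print(validate_migration({1, 2}, {1, 2, 4}, "quickly_expand"))  # True
print(validate_migration({1, 2}, {1}, "quickly_shrink"))        # True
print(validate_migration({1, 2}, {3, 4}, "quickly_expand"))     # False
```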

      Assuming graph g3 is currently distributed across shards [1,2], to quickly migrate it to [1,2,4]:

      ALTER GRAPH g3 ON SHARDS [1,2,4] PARTITION CONFIG {strategy: "quickly_expand"}
      

      To quickly migrate g3 from shards [1,2] to [1]:

      ALTER GRAPH g3 ON SHARDS [1] PARTITION CONFIG {strategy: "quickly_shrink"}
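      For intuition on why the quick strategies can move less data than balance, consider a simple modulo-placement model (an assumption for illustration; Ultipa's actual redistribution may differ): under full rebalancing, growing the shard list remaps a large fraction of keys to different shards, whereas quickly_expand only peels off enough data to populate the newly added shard.

```python
import zlib

def place(key: str, shards: list[int]) -> int:
    # Modulo placement over the shard list (illustrative model only).
    return shards[zlib.crc32(key.encode("utf-8")) % len(shards)]

keys = [f"node:{i}" for i in range(10_000)]
before = {k: place(k, [1, 2]) for k in keys}
after = {k: place(k, [1, 2, 4]) for k in keys}
moved = sum(before[k] != after[k] for k in keys)
print(f"{moved / len(keys):.0%} of keys change shard under full rebalancing")
```

      In this model roughly two thirds of the keys relocate when going from two shards to three, which is why a targeted strategy can be substantially cheaper when only the shard count changes.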
      

      Copyright © 2019-2025 Ultipa Inc. – All Rights Reserved