
Graph Query Languages: Simplicity & Speed

By Ricky Sun, Jason Zhang, and Victor Wang

We have learned that relational databases use SQL and two-dimensional tables to model the world; this stands in deep contrast to how graph databases address the problem. In short, graph databases use high-dimensional data modeling strategies to resemble the world as it really is, and because the real world is highly dimensional, graph modeling is considered 100% natural. This naturalness comes with a challenge, especially for people who are accustomed to trade-off hypotheses or zero-sum thinking: you have to be brave enough and smart enough to think outside the box and let go of the limiting belief that 'there is no better way of modeling real-world problems than SQL and RDBMS'.

In our previous essay on The Evolution of Database Query Languages, we depicted the evolution of SQL and NoSQL, and their advantages and disadvantages. In today’s business world, we are seeing more and more businesses relying on graph analytics and graph computing to empower their business decisions.

Graph-0 Deep Data and Graph Analytics – A Major Trend in 202x

In a series of reports from November 2019 to June 2020, Gartner not only listed graph analytics as one of the top 10 Data & Analytics trends (see the diagram above), but also predicted that, by 2023, 30% of business decisions worldwide will be powered by graph analytics.

Now, let’s pause for a second and ponder on the top features you would desire graph databases to offer. What are they? This is a question equivalent to asking: What features are missing in SQL and RDBMS?

As always, this is an open question and the answers can be highly diverse, with no single standard. However, out of all possible answers, three things stand out as gut feelings, and if you think deeply about them, you will likely resonate with us; they are:

  • Simplicity (i.e., Enjoy writing 100s of lines of SQL?)
  • Recursive Operations (i.e., Recursive Queries)
  • Speed (i.e., Real-time-ness)

Let's go through this three-item list in reverse order.

Speed

Speed and lightning-fast performance are deeply rooted in the original promise of what a graph database offers to the world. Because conventional table joins are insanely slow at churning out connected data, a fundamental change in the way we organize data is both necessary and pivotal. Viewed from a high-dimensional perspective, a graph is the most natural way to connect the dots: entities (vertices) are linked into a network, a tree, or a graph, and working against the graph simply means finding the neighbors of a vertex, the paths reaching to or from it, the attributes shared between any pair of vertices (similarity), the centrality of a graph or sub-graph, the ranking of a vertex within the graph, and so on. There are many ways to work against a graph and extract data from it. However, as the graph gets larger or denser, or the queries get more sophisticated, performance degradation is considered the norm by most people… Well, it doesn't have to be so.

Consider the following scenario: in a social graph of millions of users who interact with each other, find 200 pairs of users (starting from users whose age=22, ending with users whose age=33) who are 1-hop to 10-hop away from each other. Gauge the performance degradation, if there is any.

Graph-1 Real-Time Deep Graph Traversal by Ultipa Graph

The above screenshots show the turnaround time (round-trip latency) of running the queries on Ultipa Graph (a high-performance graph database capable of real-time, ultra-deep graph traversal) against a reinforced Amazon-0601 Graph Dataset (3.5 million edges and 0.5 million vertices with multiple attributes such as id, name, age, type of relationship, rank, etc.). These queries traverse the graph based on the template path query criteria, starting from 1-hop and gradually going deeper to 10-hop. The following chart captures the latency comparison:

 

Graph-2 Graph Traversal Latency & Breakdown (Ultipa Graph)

As you can see, when we traverse deeper into the graph, the overall latency growth stays flat (1-hop/16 ms to 10-hop/18 ms for total time, and 1-2 ms or lower for IN-RAM computing time). There is no obvious or significant performance degradation. Moreover, the IN-RAM computing engine greatly accelerates the processing: the IN-RAM latency accounts for only 5 to 10% of the overall latency, and the absolute cost is extremely low, considering that we are locating 200 paths in a graph, each with a formidable length of 10 (10 edges and 11 vertices along the path), plus the filtering parameters of starting users having age=22 and ending users having age=33.

This almost flat, definitely sublinear latency growth is achieved with 3 core technologies as illustrated in the below diagram:

  • High-Density Parallel Graph Computing
  • Ultra-Deep Graph Traversal (and Dynamic Graph Pruning)
  • Linear Scalable Graph Computing (and Storage, of course)

And, they jointly bring about 3 major benefits:

  • Much Lower TCO
  • Faster Time-to-Value and Time-to-Market
  • Superb Usability

 Graph-3 Core Advantages of Ultipa Graph (3x3)

A graph, unlike traditional web applications which are stateless and easily dispersible, is holistic. Being holistic means that if the entire graph is disseminated across many physical instances, any query that requires processing against all instances, each depending on the others for intermediate results, will significantly slow down overall system performance. This is the typical dilemma of any BSP (Bulk Synchronous Parallel) system. To avoid this dilemma, try as much as possible to store the skeleton of the graph in a holistic way, whereas the decorating dimensions of the graph (attributes, as many as you want) can be dispersed across the cluster!
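As a hedged illustration of this partitioning idea (the type and function names below are purely hypothetical and not Ultipa's actual architecture), one could keep the lightweight topology replicated holistically on every compute node while hash-partitioning the heavier attributes across shards:

// Hedged sketch: holistic topology, dispersed attributes (illustrative only).
#include <cstddef>
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

struct Skeleton {
    // vertex id -> neighbor ids; compact enough to keep whole on each node
    std::unordered_map<uint64_t, std::vector<uint64_t>> adj;
};

struct AttributeShard {
    // vertex id -> attribute map; holds only a slice of the graph's attributes
    std::unordered_map<uint64_t, std::unordered_map<std::string, std::string>> props;
};

// Attributes for a vertex live on shard (id % shard_count);
// pure topology traversals never have to leave the local node.
inline std::size_t shard_of(uint64_t vertex_id, std::size_t shard_count) {
    return static_cast<std::size_t>(vertex_id % shard_count);
}

With this split, a deep traversal runs entirely against the local skeleton, and attribute lookups are deferred to the relevant shards only for the final result set.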

The discussion above should enlighten readers who wonder why JanusGraph, AWS Neptune, or ArangoDB (and a handful of other commercial releases) are so sluggish and can NOT return in a timely fashion when attempting the same kind of graph queries. Their core weakness lies in unwisely and needlessly complicated distributed architecture design.

Putting distribution strategy aside, the most important techniques for accelerating graph queries (like path-finding or k-hop traversal) are high-density computing and deep traversal with runtime pruning. However, these two supposedly fundamental techniques have long been forgotten by many programmers. Admittedly, we live in a world of high-level programming languages, and millions of younger-generation programmers believe that the omnipotent power and ease of Python rule the world. However, in terms of raw processing power, Python is simply problematic, and so are Java and any other languages that depend on garbage collection (GC).

Graph-4 Parallel Architecture Design and Memory Optimization (Turing Lecture 2018)

In ACM's 2018 Turing Lecture, Turing Award recipients David Patterson and John Hennessy outlined the opportunity to speed up a Python program by over 62,000 times using parallel C on an 18-core Intel x86-64 CPU. This is achieved with:

  • Lower-level Programming Language (to maximize HW juice)
  • Memory Optimization (ditto)
  • Leverage CPU’s Advanced Instruction-Set Features (ditto)
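As a hedged illustration of what these three levers look like in practice (this is not the lecture's actual code; the function name, block size, and build flags are assumptions), here is a blocked, OpenMP-parallel matrix multiply in C++: the compiled loops replace interpreted Python, the tiling keeps working sets in cache, and the contiguous inner loop lets the compiler emit SIMD instructions:

// Hedged sketch of the three levers: compiled code, cache blocking, SIMD-friendly loops.
// Assumed build: g++ -O3 -march=native -fopenmp matmul.cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// C += A * B, where A, B, C are N x N row-major matrices.
void matmul_blocked(const std::vector<double>& A, const std::vector<double>& B,
                    std::vector<double>& C, std::size_t N, std::size_t BS = 64) {
    #pragma omp parallel for collapse(2) schedule(static)       // lever 1: use all cores
    for (std::size_t ii = 0; ii < N; ii += BS)
        for (std::size_t jj = 0; jj < N; jj += BS)
            for (std::size_t kk = 0; kk < N; kk += BS)           // lever 2: cache blocking
                for (std::size_t i = ii; i < std::min(ii + BS, N); ++i)
                    for (std::size_t k = kk; k < std::min(kk + BS, N); ++k) {
                        const double a = A[i * N + k];
                        // lever 3: contiguous inner loop the compiler can auto-vectorize
                        for (std::size_t j = jj; j < std::min(jj + BS, N); ++j)
                            C[i * N + j] += a * B[k * N + j];
                    }
}

int main() {
    const std::size_t N = 512;
    std::vector<double> A(N * N, 1.0), B(N * N, 2.0), C(N * N, 0.0);
    matmul_blocked(A, B, C, N);
    return C[0] == 2.0 * N ? 0 : 1;   // each element of C should equal N * 1.0 * 2.0
}

None of the three levers changes the algorithm; they only change how close the code runs to what the hardware can actually deliver, which is exactly the lecture's point.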

All in all, these techniques contribute to the realization of high-density computing. Moreover, runtime dynamic graph pruning and deep graph traversal are realized through novel data structure design. In a previous essay titled The Specific Evolution of Graph Data Structures, a novel data structure called Adjacency Hash* was introduced; it can be used to store graph topology effectively and to enable runtime filtering (a.k.a. dynamic graph trimming or pruning). The most dramatic part is that the majority of operations on the graph have a time complexity of O(1) instead of O(log N), O(N), or even O(N * log N). This is critically important to ensure that, when we traverse the graph deeply, we do not have to deal with exponentially growing search complexity. Here is an outline of the "Exponential Challenge", taking the boosted Amazon-0601 Graph Dataset as an example:

  • Edge-over-Vertex ratio = 8 (on average, # of edges connecting each vertex)
  • Complexity of Searching 1-hop paths ~= 8
  • Complexity of Searching 2-hop paths ~= 64
  • Complexity of Searching 5-hop paths ~= 8*8*8*8*8 = 32,768
  • Complexity of Searching 10-hop paths ~= 1,073,741,824  (1Billion Searches)
  • Complexity of Searching 200 pairs of 10-hop paths ~= 200 Billion!

Clearly, if the complexity were really over 200 billion, there would be NO WAY for any graph system to return within milliseconds. The complexity above is purely theoretical and a bit off the chart. In reality, the maximum possible search complexity for O(1) operations can by no means exceed the total number of edges in the graph, which stands at 3.5 million. Therefore, we can assume that from 7-hop (32,768 * 8 * 8) onward, the maximum complexity is capped at 3.5 million (for Amazon-0601, the real complexity is actually capped at ~0.2 million, especially around 6 to 8 hops deep).

  • Adjusted complexities for 6 to 10-hop paths = max 200,000 each hop.
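To make both ingredients concrete, here is a hedged, much simplified sketch (illustrative only, not Ultipa's Adjacency Hash implementation): a hash-based adjacency map gives O(1) average-time neighbor lookup, and a visited set keeps the per-hop work bounded by the vertex and edge counts rather than by 8^k:

// Hedged sketch: hash-based adjacency plus k-hop expansion with de-duplication.
#include <cstdint>
#include <unordered_map>
#include <unordered_set>
#include <vector>

using VertexId = std::uint64_t;

// "Adjacency hash"-style topology: O(1) average-time lookup of a vertex's neighbors.
using Adjacency = std::unordered_map<VertexId, std::vector<VertexId>>;

// Return all vertices exactly k hops away from start (level-by-level expansion).
// The visited set is what caps the work: every vertex is expanded at most once,
// so the total work is bounded by the edge count, not by (average degree)^k.
std::unordered_set<VertexId> kHop(const Adjacency& adj, VertexId start, int k) {
    std::unordered_set<VertexId> visited{start};
    std::unordered_set<VertexId> frontier{start};
    for (int hop = 0; hop < k; ++hop) {
        std::unordered_set<VertexId> next;
        for (VertexId v : frontier) {
            auto it = adj.find(v);                 // O(1) average-time lookup
            if (it == adj.end()) continue;
            for (VertexId nbr : it->second)
                if (visited.insert(nbr).second)    // skip already-expanded vertices
                    next.insert(nbr);
        }
        frontier = std::move(next);
    }
    return frontier;
}

With de-duplication in place, no single traversal can scan more than the graph's 3.5 million edges, which is exactly the cap described above.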

Therefore, it’s possible to review the breakdown of the latency as in Graph-5:

 

Latency Matrix          HDD           SSD          DRAM         IN-CPU-CACHE
O(1) Ops                ~3 ms         ~100 us      ~100 ns      ~1 ns
10 * O(1)               30 ms         1 ms         1 us         10 ns
100 * O(1)              300 ms        10 ms        10 us        0.1 us
10K * O(1)              30 s          1 s          1 ms         10 us
200K * O(1)             600 s         20 s         20 ms        0.2 ms
200 Paths               120,000 s     4,000 s      4,000 ms     40 ms
Concurrency Level=32x   3,750 s       125 s        125 ms       1.25 ms

Graph-5  O(1) Complexity and Latency Breakdown Chart
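As a quick, hedged sanity check (using only the order-of-magnitude figures from the table), the DRAM column composes directly from the per-operation latency, the adjusted per-path complexity, and the concurrency level:

// Hedged sanity check of the Graph-5 DRAM column (order-of-magnitude arithmetic only).
#include <cstdio>

int main() {
    const double dram_ns = 100.0;          // ~100 ns per O(1) operation in DRAM
    const double ops_per_path = 200000.0;  // adjusted max complexity for one deep path
    const double paths = 200.0;            // 200 paths requested
    const double concurrency = 32.0;       // 32x parallelism

    const double one_path_ms = ops_per_path * dram_ns / 1e6;   // ~20 ms
    const double all_paths_ms = one_path_ms * paths;           // ~4,000 ms
    const double parallel_ms = all_paths_ms / concurrency;     // ~125 ms

    std::printf("one path: %.0f ms, 200 paths: %.0f ms, at 32x concurrency: %.0f ms\n",
                one_path_ms, all_paths_ms, parallel_ms);
    return 0;
}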

Comparing the numbers in Graph-5 against those in Graph-2, it is NOT hard to arrive at the takeaway: by leveraging maximum concurrency (parallelization) and CPU caching, Ultipa achieves latency in close proximity to the underlying hardware platform's capabilities. The same operations on other graph databases, namely Neo4j, TigerGraph, D-Graph, ArangoDB, or other less mature or commercially ready products, would either take much longer (closer to HDD or SSD performance) or never return at all, collapsing under the computational complexity of traversing 10 hops deep (TigerGraph might stand somewhere between SSD and in-RAM performance, but definitely can NOT return for traversal operations deeper than 8 hops). It is doubtful whether any of these graph database vendors have O(1) time complexity in their core data structure design.

Recursive Operations

We have covered the SPEED and performance parts above. It's time to talk about a unique feature of graph databases, one that really should be universally available: Recursive Operations.

Let’s revisit our last query once again:

Find 200 pairs of users (starting from users whose age=22, ending with users age=33) who are exactly X-hop away from each other. Assuming X=[1-10]

From a query language design perspective, how would you define and articulate the question, and fetch the answer accordingly?

The key is to make the query narrative, meaning it naturally resembles human language and is therefore easy to understand, while also making it recursive and succinct so that you do not have to labor over writing your logic in a convoluted way.

t().n({age: 22}).e()[10].n({age: 39}).limit(200).select(name)

Graph-6  Template-based Path Search Query and List-style Results (of Paths)

The above one-line uQL is pretty self-explanatory, but just in case the gist escapes you, the recursive part lies in e()[10], which specifies the search depth to be exactly 10 hops. Of course, you can modify it to e()[5:11], which stands for a traversal depth of 5 to 11 hops inclusive, or e()[:10], which means up to 10 hops, or e()[*:10], where * stands for a shortest-path (BFS) search with a maximum traversal depth of 10.
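For contrast, here is a hedged sketch (over a hypothetical in-memory graph, not Ultipa's engine) of the imperative code you would otherwise write by hand for one fixed depth; this is roughly the bookkeeping that a single e()[k] template clause hides:

// Hedged sketch: hand-written exact-depth path search between two age filters.
#include <cstddef>
#include <cstdint>
#include <functional>
#include <unordered_map>
#include <vector>

using VertexId = std::uint64_t;

struct Graph {
    std::unordered_map<VertexId, std::vector<VertexId>> adj;  // topology
    std::unordered_map<VertexId, int> age;                    // vertex attribute
};

// Collect up to `limit` paths of exactly `depth` edges that start at a vertex
// with age == startAge and end at a vertex with age == endAge.
std::vector<std::vector<VertexId>> exactDepthPaths(const Graph& g, int depth,
                                                   int startAge, int endAge,
                                                   std::size_t limit) {
    std::vector<std::vector<VertexId>> results;
    std::vector<VertexId> path;
    std::function<void(VertexId, int)> dfs = [&](VertexId v, int remaining) {
        if (results.size() >= limit) return;
        path.push_back(v);
        if (remaining == 0) {
            auto it = g.age.find(v);
            if (it != g.age.end() && it->second == endAge) results.push_back(path);
        } else {
            auto it = g.adj.find(v);
            if (it != g.adj.end())
                for (VertexId nbr : it->second) dfs(nbr, remaining - 1);
        }
        path.pop_back();
    };
    for (const auto& [v, a] : g.age) {
        if (results.size() >= limit) break;
        if (a == startAge) dfs(v, depth);
    }
    return results;
}

Supporting depth ranges such as [5:11], shortest-path semantics, or edge filters means rewriting and retuning this kind of code every time, whereas the template query expresses each variation by editing a single clause.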

Of course, the uQL template query can be expanded in many ways, such as filtering by attributes on the edges, or doing mathematical and statistical calculations. For instance, searching along paths whose edges carry relationship="Love" yields the results captured in Graph-7, with the following updated uQL:

t().n({age: 22}).e({name:"Love"})[3:5].n({age: 39}).limit(200).return(*)

Graph-7  Template-based Path Search Query and 2D Graph-Style Results (of Paths)

Simplicity

Empowered by recursive queries, a lot can be accomplished with great simplicity. In the remaining section, we present a smart recommendation use case, comparing implementations in Neo4j, TigerGraph, and Ultipa, each purely using its own query language. As the popular Chinese saying goes: truth comes out of comparison. Let's reveal the truth next.

Most of today's systems implementing smart recommendations involve some sort of social recommendation, a technique commonly known as collaborative filtering. The core concept is actually easy, and it is essentially "graphic":

To recommend something, some products, to a customer, we usually first look through the customer’s friends’ shopping behavior, and give recommendations based on the findings…
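Before comparing the query languages, here is a hedged sketch of that friends-of-friends logic in plain code (the data structures and function names are hypothetical; each engine below expresses this same logic declaratively):

// Hedged sketch: recommend top-k products bought by a customer's 1- and 2-hop friends.
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <unordered_map>
#include <unordered_set>
#include <utility>
#include <vector>

using Id = std::uint64_t;

struct Store {
    std::unordered_map<Id, std::vector<Id>> friends;    // customer -> friends
    std::unordered_map<Id, std::vector<Id>> purchases;  // customer -> products bought
};

std::vector<Id> recommend(const Store& s, Id customer, std::size_t k) {
    // Step 1: gather 1-hop and 2-hop friends, de-duplicated, excluding the customer.
    std::unordered_set<Id> circle;
    if (auto f1 = s.friends.find(customer); f1 != s.friends.end())
        for (Id f : f1->second) {
            circle.insert(f);
            if (auto f2 = s.friends.find(f); f2 != s.friends.end())
                for (Id ff : f2->second) circle.insert(ff);
        }
    circle.erase(customer);

    // Step 2: count how often each product was bought within that circle.
    std::unordered_map<Id, std::size_t> counts;
    for (Id friendId : circle)
        if (auto p = s.purchases.find(friendId); p != s.purchases.end())
            for (Id product : p->second) ++counts[product];

    // Step 3: rank products by purchase count and keep the top k.
    std::vector<std::pair<Id, std::size_t>> ranked(counts.begin(), counts.end());
    std::sort(ranked.begin(), ranked.end(),
              [](const auto& a, const auto& b) { return a.second > b.second; });
    std::vector<Id> top;
    for (std::size_t i = 0; i < ranked.size() && i < k; ++i) top.push_back(ranked[i].first);
    return top;
}

The interesting question is not whether this can be coded by hand, but how succinctly and how fast a query language can express it; that is what the three engines below are compared on.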

Let’s see how the 3 graph systems tackle the collaborative filtering challenge.

Neo4j uses Cypher to write the below logic:

MATCH (customer:Customer {name: 'Ricky Summertime'})-[:FRIEND*1..2]->(friend:Customer)
WHERE customer <> friend
WITH DISTINCT friend
MATCH (friend)-[:BOUGHT]->(:Basket)<-[:IN]-(product:Product)
RETURN product, count(product) ORDER BY count(product) DESC LIMIT 5

The Cypher logic goes like this:

  1. Find Ricky’s friends and friends-of-friends (1st part)
  2. Collect distinct (unique) friends
  3. Find what products these friends bought (2nd part)
  4. Return the top-5 purchased products

The Neo4j logic has a prerequisite on the dataset: it assumes that a friend type of relationship is available. This is a major assumption that most e-commerce websites can hardly satisfy, not even the world's largest sites such as Taobao or T-mall, which do not keep track of users' social network information.

The other obvious limitation of Neo4j and Cypher is that deep traversal of the graph is not allowed; you have to pause after searching 1 to 2 hops deep, construct the intermediate dataset (friends), and then continue with the second part of the search. This symptom is aggravated in scenarios like UBO (Ultimate Beneficial Owner) identification, where you need a deep traversal of 10 or sometimes even 15 hops. Neo4j can NOT handle that kind of scenario with simplicity or speed.

TigerGraph uses GSQL (short for GraphSQL) to conduct "collaborative filtering"; the chunk of code below, from TigerGraph's online documentation, illustrates how this can be done. First, supply an input user (input_user); next, find all users who liked the input user; third, proceed from these users and continue to search for the users that they liked, aggregating the counts (ACCUM) and returning the results in an ordered fashion.

# cf_query.gsql - Define the collaborative filtering query
CREATE QUERY topCoLiked(vertex<User> input_user, INT topk) FOR GRAPH gsql_demo {
    SumAccum<int> @cnt = 0;
    # @cnt is a runtime attribute to be associated with each User vertex
    # to record how many times a user is liked.
    L0 = {input_user};
    L1 = SELECT tgt FROM L0-(Liked_By)->User:tgt;
    L2 = SELECT tgt FROM L1-(Liked)->:tgt
         WHERE tgt != input_user
         ACCUM tgt.@cnt += 1
         ORDER BY tgt.@cnt DESC
         LIMIT topk;
    PRINT L2;
}

First of all, this example does not look like genuine collaborative filtering from our point of view; it is merely a 2-hop template-based path search with data aggregation. Besides, the above GSQL is overly complicated, awkward, and lengthy. One key drawback of SQL is that it drains a lot of human cognitive power: a SQL stored procedure longer than 10 lines becomes unbearably difficult to interpret, and this chunk of GSQL code is a poor showing of SQL.

Just for the sake of comparison, and to showcase simplicity, here is what you would need to write if uQL were to accomplish the same:

Pretty simple and straightforward:

  • First, start a template search (t()), starting from a set of nodes n();
  • Second, traverse edges named “Liked By”;
  • Third, continue to traverse edges typed “Liked”, reaching nodes aliased c;
  • Fourth, return distinct c and group by b.

Let's get back to our original question on collaborative filtering, on an e-commerce graph dataset comprised of customers, products, and the shopping or browsing behaviors connecting them. Thinking in a very graphic way, you basically need to achieve the following steps:

  1. Starting from a user, find all products the user has interacted with (viewed, added to cart, or purchased)
  2. Find all users that have taken certain actions (e.g., purchase) on these products
  3. Find all other products that the users from Step 2 have taken actions on
  4. Return some of these products (e.g., the top-ranked ones) and recommend them to the user.

The results are shown on the following diagram:

 Graph-8  Template-based Graph Query for Collaborative Filtering

The uQL used to realize this is a template-based path query, as shown below, which narrates the path filtering criteria defined above. The eventual to-be-recommended products may be further filtered, ranked, or processed in more sophisticated ways, but the query language itself is self-evident and easy to digest:

In this whitepaper, we have demonstrated the key features of uQL:

  • Speed: it runs in genuinely real time (the above query returns in ~1 ms);
  • Simplicity: the query is clear and easy for humans to digest, and it reflects, in a 100% natural way, how we as humans define the data we want the dataset to churn out.

The beauty of an advanced query language is best manifested not in its complexity, but in its simplicity. It should be easy to understand, therefore requiring minimal cognitive load, and it should be lightning fast! All the tedious and formidable complexities should be shielded by the underpinning system and therefore be transparent to human users. It's like the ancient Greek titan Atlas carrying the world on his back so as not to burden the users living on the Earth (which, in our context, are the users of the query language and the database).

We hope that the above example graph queries show that database query languages should be as simple as possible. Writing tens or even hundreds of lines of SQL code aggravates cognitive load. The concepts we believe in are:

  • Anyone who understands the business can master a database query language; you do not have to be a data scientist or analyst;
  • The query language should be extremely easy to use, with all the complexities embedded within the underlying database and kept transparent to users;
  • Lastly, graph databases have huge potential to take over and replace a significant share of the SQL workload. A recent Gartner report (June 2020) predicts that 30% of all business decision-making will be powered by graph analytics by 2023; the growth trajectory of graph is steep!

Some people have claimed that RDBMS and SQL will never be replaced, but we find this hard to believe. RDBMS replaced navigational databases in the 1970s and has been rather successful for almost five decades; but if history has taught us anything, it is that the obsession with any one technology does not last forever, and this is especially true in the world of the Internet and information technology.

 

“What you cherish, perish”

“What you resist, persist”

If human brains were to be the ultimate database,

Graph database is the shortest path to be there.

 