GeaFlow (brand name: TuGraph-Analytics) is an open-source distributed streaming graph computing engine developed by Ant Group. It is widely used in scenarios such as data-warehouse acceleration, financial risk control, knowledge graphs, and social networks. GeaFlow's core capability is a stream computing engine that uses the graph as its data model and supports both streaming and batch processing. Compared with traditional stream computing engines such as Flink and Storm, which use tables as their data model for real-time processing, GeaFlow's graph-based data model offers significant performance advantages for relationship (join) operations, especially complex multi-degree operations such as joins across three or more degrees and complex loop searches. Compared with offline graph computing, streaming graph computing provides a high-efficiency, low-latency computing mode. GeaFlow also supports real-time analysis and processing of both graph and table data. For more information on GeaFlow usage scenarios, please refer to: GeaFlow introduction document
- Distributed streaming graph computing.
- High availability and exactly-once support.
- Integrated graph and table processing.
- Easy development with SQL + ISO/GQL.
- Pluggable UDFs, graph algorithms, and connectors.
- High-level API support.
- One-stop graph development platform.
- Cloud-native deployment support.
The similarities and differences between GeaFlow and traditional stream computing engines, such as Flink, are as follows:
Features | GeaFlow | Flink |
---|---|---|
Data Model | A graph-based stream computing engine that can handle both graph and table model data | A stream computing engine based on the table model |
State Management | Supports both stream and graph data state management | Supports stream state management |
Exactly once | Supported | Supported |
Join Support | Supports complex multi-degree join operations | Not suitable for complex joins |
Graph Algorithm Support | Native graph algorithm support | Supported via the Flink Gelly module (now removed) |
Query Language | SQL + ISO/GQL | SQL |
Why is using graphs for relational operations more appealing than table joins?
GeaFlow's design is described in the following paper: GeaFlow: A Graph Extended and Accelerated Dataflow System
You first need to fork a copy of the GeaFlow code on GitHub and then try to compile the source code. Compiling GeaFlow requires Maven and a JDK 8 environment. You can then attempt to run a real-time graph computing job on your local machine to experience how a streaming graph computing job runs. Running a GeaFlow job locally requires a Docker environment. For more detailed information on how to get started quickly, please refer to the quickstart document.
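Under the assumptions above (Maven and JDK 8 available, and the repository hosted under the TuGraph-family organization on GitHub), the fork-and-compile step typically looks like the following sketch; substitute your own fork's URL:

```shell
# Clone your fork (the upstream repository URL shown here is an assumption).
git clone https://github.com/TuGraph-family/tugraph-analytics.git
cd tugraph-analytics

# Verify the toolchain: Maven plus JDK 8.
mvn -version
java -version

# Build the project; skipping tests shortens the first compile.
mvn clean install -DskipTests
```

See the quickstart document for the exact commands and for how to launch the local Docker-based demo job afterwards.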
GeaFlow supports two sets of programming interfaces: DSL and API. You can develop streaming graph computing jobs using GeaFlow's SQL extension language, SQL + ISO/GQL, or use GeaFlow's high-level API to develop applications in Java. For more information on DSL application development, please refer to the DSL development document; for high-level API application development, please refer to the API application development document.
--GQL Style
MATCH (s:student)-[sc:selectCourse]->(c:course)
RETURN c.name
;
--SQL Style
SELECT c.name
FROM course c JOIN selectCourse sc
ON c.id = sc.targetId
JOIN student s ON sc.srcId = s.id
;
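The gap widens as the relationship gets deeper. The following sketch (using hypothetical `person`/`knows` labels that are not from the GeaFlow documentation) contrasts a 3-degree traversal expressed as a single GQL pattern with the three-way self-join the same query requires in SQL:

```sql
--GQL Style: one pattern expresses the whole 3-hop traversal
MATCH (a:person)-[:knows]->(b:person)-[:knows]->(c:person)-[:knows]->(d:person)
RETURN d.name
;
--SQL Style: the same traversal needs three joins over the knows table
SELECT p4.name
FROM person p1
JOIN knows k1 ON k1.srcId = p1.id
JOIN person p2 ON p2.id = k1.targetId
JOIN knows k2 ON k2.srcId = p2.id
JOIN person p3 ON p3.id = k2.targetId
JOIN knows k3 ON k3.srcId = p3.id
JOIN person p4 ON p4.id = k3.targetId
;
```

Each additional degree adds another join in the table model, while the graph model extends the match pattern by one edge, which is where the performance advantage for multi-degree operations comes from.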
For the complete documentation, please refer to: ReadTheDocs
Thank you very much for contributing to GeaFlow! Whether it is bug reporting, documentation improvement, or major feature development, we warmly welcome all contributions. For more information on how to contribute, please refer to our guidelines: Contributing to GeaFlow.
You can contact us through the DingTalk or WeChat group.
If you find GeaFlow useful or interesting, please ⭐️ star it on GitHub.
Email: [email protected]
Thanks to some outstanding open-source projects in the industry, such as Apache Flink, Apache Spark, and Apache Calcite: some GeaFlow modules were developed with reference to them. We would like to express our special gratitude for their contributions.