Each table shows the aggregate metrics for all RDF graph databases evaluated on the given benchmark.
Column headers: Hover over any header to see detailed explanations
of the metrics, including how failed queries are penalized.
Sorting: Click a column header to sort by that metric. Hold
Shift and click additional headers to sort by multiple
columns, in the order clicked.
Filtering: Each column has a built-in filter menu with
options for quickly narrowing down the results.
Column resizing: Drag the edges of column headers to resize. Hold
Shift while resizing to change the column width without
shifting neighboring columns.
Column reordering: Drag and drop column headers to reorder them.
Engine rows: Click a row to view the individual query details for
that system on the given benchmark.
Compare results: Use the compare button to see a per-query
performance comparison of all systems.
Download: Export the table contents as TSV for offline
analysis or reporting.
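For offline analysis, the exported TSV can be read with standard tooling. The sketch below is a minimal Python example; the file name results.tsv is an assumption, and the column names depend on the exported table.

    import csv

    # Load the exported TSV into a list of row dictionaries.
    # "results.tsv" is a placeholder for wherever the export was saved.
    with open("results.tsv", newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f, delimiter="\t"))

    # Each row maps the exported column headers to their values;
    # adjust the keys to the actual headers of the table.
    for row in rows:
        print(row)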
Execution Tree: The execution tree of a query is only available for
QLever, and only when the query is sent with the accept header
application/qlever-results+json.
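Outside the web UI, that format can be requested by sending a query directly to a QLever endpoint with the corresponding Accept header. The sketch below uses Python with the requests library; the endpoint URL and the query are placeholders, not taken from this page.

    import requests

    # Placeholder QLever SPARQL endpoint and query; replace with real values.
    ENDPOINT = "https://example.org/sparql"
    QUERY = "SELECT * WHERE { ?s ?p ?o } LIMIT 10"

    # Request the QLever-specific JSON format, which the evaluation page
    # needs in order to display the execution tree.
    response = requests.get(
        ENDPOINT,
        params={"query": QUERY},
        headers={"Accept": "application/qlever-results+json"},
        timeout=60,
    )
    response.raise_for_status()
    print(response.json())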
This table shows the per-query runtime performance comparison
of a selection of RDF graph databases on the selected benchmark.
Green cells: Best runtime performance for
a query across all systems.
Red cells: Query either failed or timed out
for that system.
Warning icon in a system column cell: The result size for that system
differs from the majority (see the sketch at the end of this section).
Warning icon in the query column cell: Result sizes differ across all
systems for that query.
Restart icon in a system column cell: The server for that system was
restarted after the query, either because it crashed or because it did
not respond within the timeout plus 30 seconds.
Click a cell: Shows additional information that can be copied.
(Copying only works in a secure context, i.e., HTTPS or localhost.)
Other features: Sorting, filtering, resizing, reordering, and TSV
export are all available.
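For illustration, the result-size markers described above can be thought of as a simple majority check per query. The sketch below is a minimal Python example with made-up numbers; it is not the page's actual implementation.

    from collections import Counter

    # Hypothetical result sizes of one query, one entry per system.
    result_sizes = {"EngineA": 1000, "EngineB": 1000, "EngineC": 997}

    # The most common result size is taken as the majority value.
    majority_size, _ = Counter(result_sizes.values()).most_common(1)[0]

    # Systems whose result size differs from the majority get the marker
    # in their column cell.
    flagged_systems = [s for s, n in result_sizes.items() if n != majority_size]

    # If every system reports a different size, the query column cell
    # itself is marked.
    query_marked = len(set(result_sizes.values())) == len(result_sizes)

    print(flagged_systems, query_marked)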