Frequently Asked Questions
Connectivity
timbr supports both JDBC and ODBC. It reuses the Thrift server protocol of Apache Hive and Spark, so you can connect to timbr's Knowledge Graph with standard Hive/Spark JDBC/ODBC drivers (most BI tools already embed them, so no installation is needed).
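For example, a Python client can connect over the same Thrift protocol using PyHive. This is only a minimal sketch: the hostname, port, and credentials are placeholders, and the authentication mechanism depends on how your timbr server is configured.

```python
# Minimal sketch: connecting to a timbr server over the Hive Thrift protocol with PyHive.
# Host, port, username, and password are placeholders for your own environment.
from pyhive import hive

conn = hive.connect(
    host="my-timbr-server.example.com",  # placeholder hostname
    port=10000,                          # default HiveServer2 Thrift port; yours may differ
    username="my_user",                  # placeholder credentials
    password="my_password",
    auth="LDAP",                         # assumption: password auth exposed as LDAP/CUSTOM
)

cursor = conn.cursor()
cursor.execute("SHOW DATABASES")
print(cursor.fetchall())
```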
Creating an ontology: You can either use our Visual Ontology Modeler (no SQL needed) or use timbr's extended SQL DDL statements.
Mapping data to the ontology: You can either use our Visual Ontology Data Mapper (no SQL needed) or use timbr’s extended SQL DDL statements.
Querying the Knowledge Graph: SQL, Python/R, dataframes, and natively in Apache Spark (SQL, Python, R, Java, Scala). GraphQL can be supported by integrating external open source projects that support the translation of GraphQL to SQL.
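As a rough illustration of the querying options, the sketch below runs a SQL query from PySpark. How the timbr catalog is attached to the Spark session depends on your deployment (that setup is elided here), and the concept name "person" is purely hypothetical.

```python
# Hedged sketch: querying natively from Apache Spark (PySpark).
# "person" is a hypothetical concept name used only for illustration.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("timbr-query-sketch").getOrCreate()

people = spark.sql("SELECT name, age FROM person WHERE age > 30")
people.show()
```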
Yes, GraphQL is supported by integrating external open source projects that support the translation of GraphQL to SQL.
Yes, this can be generated easily by creating timbr's SQL DDL statements directly from the XML hierarchy/relationships.
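A rough sketch of how such a generator might look is shown below. The CREATE CONCEPT ... INHERITS form used here is only an illustrative placeholder, not necessarily timbr's exact DDL syntax; consult the timbr SQL reference for the real statements.

```python
# Rough sketch: walking an XML hierarchy and emitting placeholder DDL statements.
import xml.etree.ElementTree as ET

XML = """
<concept name="thing">
  <concept name="person">
    <concept name="employee"/>
  </concept>
</concept>
"""

def emit_ddl(node, parent=None):
    name = node.attrib["name"]
    if parent is None:
        yield f"CREATE CONCEPT {name}"
    else:
        yield f"CREATE CONCEPT {name} INHERITS ({parent})"
    for child in node:
        yield from emit_ddl(child, parent=name)

root = ET.fromstring(XML)
for stmt in emit_ddl(root):
    print(stmt)
```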
Yes, moving from SPARQL to timbr's simplified SQL is straightforward.
Yes, timbr works extensively with SQLAlchemy. Another valid option for Python users is DataFrames.
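Here is a minimal sketch of the SQLAlchemy route, using the Hive dialect registered by PyHive. The connection URI and the concept name "person" are placeholders for your own environment and ontology.

```python
# Minimal sketch: SQLAlchemy engine over the Hive dialect, results into a pandas DataFrame.
# URI components (user, host, port, database) and the "person" concept are placeholders.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("hive://my_user@my-timbr-server.example.com:10000/default")

df = pd.read_sql("SELECT * FROM person LIMIT 10", engine)
print(df.head())
```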
Yes, timbr is compatible with OWL-DL and some OWL-2 inferences.
If there is clear business value in adding more OWL-2 inferences, we can support them as well. timbr's inference engine is based on query-rewriting techniques; when a query is slow, timbr can selectively materialize just the part of the knowledge that is required.
This is supported as part of our integration with Apache Spark/Apache Hive: https://github.com/awslabs/emr-dynamodb-connector
If a direct connection is needed, we can use the Simba Amazon DynamoDB ODBC and JDBC drivers.
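For illustration, the sketch below exposes a DynamoDB table to Hive/Spark through the emr-dynamodb-connector's storage handler so it can then be mapped like any other table. The table and column names are placeholders; see the connector's documentation for the full set of supported table properties.

```python
# Hedged sketch: registering a DynamoDB table as an external Hive table from Spark,
# using the emr-dynamodb-connector storage handler. "Orders" and the column mapping
# are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("dynamodb-external-table")
    .enableHiveSupport()
    .getOrCreate()
)

spark.sql("""
    CREATE EXTERNAL TABLE IF NOT EXISTS orders_ddb (order_id string, total double)
    STORED BY 'org.apache.hadoop.hive.dynamodb.DynamoDBStorageHandler'
    TBLPROPERTIES (
        "dynamodb.table.name"     = "Orders",
        "dynamodb.column.mapping" = "order_id:OrderId,total:Total"
    )
""")

spark.sql("SELECT * FROM orders_ddb LIMIT 10").show()
```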