✨️🐍 SPARQL endpoint built with RDFLib to serve machine learning models, or any other logic implemented in Python

Overview

SPARQL endpoint for RDFLib


rdflib-endpoint is a SPARQL endpoint based on an RDFLib Graph, making it easy to serve machine learning models, or any other logic implemented in Python, via custom SPARQL functions.

It aims to enable Python developers to easily deploy functions that can be queried in a federated fashion using SPARQL. For example: using a Python function to resolve labels for specific identifiers, or running a classifier on entities retrieved via a SERVICE query to another SPARQL endpoint.

🧑‍🏫 How it works

The user defines and registers custom SPARQL functions in Python, and/or populates the RDFLib Graph; the endpoint is then deployed with the FastAPI framework.

The deployed SPARQL endpoint can be used as a SERVICE in a federated SPARQL query from regular triplestore SPARQL endpoints. Tested with OpenLink Virtuoso and Ontotext GraphDB (RDF4J-based). The endpoint is CORS enabled.
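As a sketch of what such a federated call could look like, here is a query another triplestore might send to this endpoint through a SERVICE clause (the endpoint URL is a placeholder for your own deployment; the function IRI matches the example below):

```python
# A federated SPARQL query calling this endpoint's custom function via SERVICE.
# <https://your-endpoint-url/sparql> is a placeholder, not a real endpoint.
federated_query = """
PREFIX myfunctions: <https://w3id.org/um/sparql-functions/>
SELECT ?concat WHERE {
    SERVICE <https://your-endpoint-url/sparql> {
        BIND(myfunctions:custom_concat("First", "last") AS ?concat)
    }
}
"""
print(federated_query)
```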

Built with RDFLib and FastAPI. Tested with Python 3.7, 3.8, and 3.9.

Please create an issue or send a pull request if you run into problems or would like to see a feature implemented.

📥 Install the package

Install the package from PyPI:

```bash
pip install rdflib-endpoint
```

🐍 Define custom SPARQL functions

Check out the example folder for a complete working app to get started with, including a Docker deployment. The easiest way to create a new SPARQL endpoint is to copy this example folder and start from it.

Create an app/main.py file in your project folder with your functions and the endpoint parameters:

````python
import rdflib
from rdflib.plugins.sparql.evalutils import _eval

from rdflib_endpoint import SparqlEndpoint

def custom_concat(query_results, ctx, part, eval_part):
    """Concatenate 2 strings in both orders, and return each length as an additional Length variable"""
    # Retrieve the 2 input arguments of the function
    argument1 = str(_eval(part.expr.expr[0], eval_part.forget(ctx, _except=part.expr._vars)))
    argument2 = str(_eval(part.expr.expr[1], eval_part.forget(ctx, _except=part.expr._vars)))
    evaluation = []
    scores = []
    evaluation.append(argument1 + argument2)
    evaluation.append(argument2 + argument1)
    scores.append(len(argument1 + argument2))
    scores.append(len(argument2 + argument1))
    # Append the results for our custom function
    for i, result in enumerate(evaluation):
        query_results.append(eval_part.merge({
            part.var: rdflib.Literal(result),
            # Also bind the length, e.g. ?concatLength for ?concat
            rdflib.term.Variable(part.var + 'Length'): rdflib.Literal(scores[i])
        }))
    return query_results, ctx, part, eval_part

# Start the SPARQL endpoint based on an RDFLib Graph, and register your custom functions
g = rdflib.graph.ConjunctiveGraph()
app = SparqlEndpoint(
    graph=g,
    # Register the functions:
    functions={
        'https://w3id.org/um/sparql-functions/custom_concat': custom_concat
    },
    # CORS is enabled by default
    cors_enabled=True,
    # Metadata used for the service description and Swagger UI:
    title="SPARQL endpoint for RDFLib graph",
    description="A SPARQL endpoint to serve machine learning models, or any other logic implemented in Python. \n[Source code](https://github.com/vemonet/rdflib-endpoint)",
    version="0.1.0",
    public_url='https://your-endpoint-url/sparql',
    # Example query displayed in the Swagger UI to help users try your function
    example_query="""Example query:\n
```
PREFIX myfunctions: <https://w3id.org/um/sparql-functions/>
SELECT ?concat ?concatLength WHERE {
    BIND("First" AS ?first)
    BIND(myfunctions:custom_concat(?first, "last") AS ?concat)
}
```"""
)
````
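Stripped of the SPARQL evaluation plumbing (`_eval`, `eval_part.merge`), the core logic of custom_concat above boils down to the following plain Python, shown here as a quick way to sanity-check what the function returns:

```python
def concat_both_ways(a: str, b: str):
    """Plain-Python equivalent of the custom_concat logic above:
    both concatenation orders, each paired with its length."""
    results = []
    for combined in (a + b, b + a):
        results.append((combined, len(combined)))
    return results

print(concat_both_ways("First", "last"))  # [('Firstlast', 9), ('lastFirst', 9)]
```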

🦄 Run the SPARQL endpoint

To get started quickly, you can run the FastAPI server from the example folder with uvicorn on http://localhost:8000:

```bash
cd example
uvicorn main:app --reload --app-dir app
```
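Once the server is up, you can query it over the SPARQL protocol with any HTTP client. A minimal sketch using only the Python standard library, assuming the endpoint is listening on http://localhost:8000 and serves the /sparql path as in the example:

```python
import urllib.parse

# Build a SPARQL protocol GET URL for the local endpoint
# (http://localhost:8000/sparql is assumed from the example setup above)
query = """PREFIX myfunctions: <https://w3id.org/um/sparql-functions/>
SELECT ?concat ?concatLength WHERE {
    BIND("First" AS ?first)
    BIND(myfunctions:custom_concat(?first, "last") AS ?concat)
}"""
url = "http://localhost:8000/sparql?" + urllib.parse.urlencode({"query": query})
print(url)

# With the server running, fetch JSON results like this:
# import urllib.request
# req = urllib.request.Request(url, headers={"Accept": "application/sparql-results+json"})
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())
```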

Check out example/README.md for more details, such as deploying it with Docker.

🧑‍💻 Development

📥 Install for development

Install from the latest GitHub commit to make sure you have the latest updates:

```bash
pip install "rdflib-endpoint@git+https://github.com/vemonet/rdflib-endpoint@main"
```

Or clone and install locally for development:

```bash
git clone https://github.com/vemonet/rdflib-endpoint
cd rdflib-endpoint
pip install -e .
```

You can use a virtual environment to avoid conflicts:

```bash
# Create the virtual environment folder in your workspace
python3 -m venv .venv
# Activate it using the script in the created folder
source .venv/bin/activate
```

✅️ Run the tests

Install additional dependencies:

```bash
pip install pytest requests
```

Run the tests locally (from the root folder):

```bash
pytest -s
```

📂 Projects using rdflib-endpoint

Here are some projects using rdflib-endpoint to deploy custom SPARQL endpoints with Python:
