Turbulette
😴 Turbulette, a batteries-included framework to build high-performance, async GraphQL APIs 😴
Turbulette packages all you need to build great GraphQL APIs:
ASGI framework, GraphQL library, ORM, and data validation
Features:
- Split your API into small, independent applications
- Generate Pydantic models from GraphQL types
- JWT authentication with refresh and fresh tokens
- Declarative, powerful and extendable policy-based access control (PBAC)
- Extendable auth user model with role management
- Async caching (provided by async-caches)
- Built-in CLI to manage project, apps, and DB migrations
- Built-in pytest plugin to quickly test your resolvers
- Settings management at project and app-level (thanks to simple-settings)
- CSRF middleware
- 100% test coverage
- 100% typed, your IDE will thank you ;)
- Handcrafted with ❤️, from 🇨🇵
🔧 Requirements
Python 3.6+
👍 Turbulette makes use of great tools/frameworks and wouldn't exist without them:
- Ariadne - Schema-first GraphQL library
- Starlette - The little ASGI framework that shines
- GINO - Lightweight, async ORM
- Pydantic - Powerful data validation with type annotations
- Alembic - Lightweight database migration tool
- simple-settings - A generic settings system inspired by Django's one
- async-caches - Async caching library
- Click - A "Command Line Interface Creation Kit"
📝 Installation
$ pip install turbulette
---> 100%
You will also need an ASGI server, such as uvicorn:
$ pip install uvicorn
---> 100%
🚀 Quick Start
Here is a short example that demonstrates a minimal project setup.
We will see how to scaffold a simple Turbulette project, create a Turbulette application, and write some GraphQL schema along with a resolver. It's advisable to start the project in a virtualenv to isolate your dependencies. Here we will be using poetry:
poetry init
Then, install Turbulette from PyPI:
poetry add turbulette
For the rest of the tutorial, we will assume that commands are executed inside the virtualenv. You can either prepend all commands with poetry run, or spawn a shell inside the virtualenv:
poetry shell
1: Create a project
First, create a directory that will contain the whole project.
Now, inside this folder, create your Turbulette project using the turb CLI:
$ turb project --name eshop
You should get something like this:
.
└── 📁 eshop
    ├── 📁 alembic
    │   ├── 📄 env.py
    │   └── 📄 script.py.mako
    ├── 📄 .env
    ├── 📄 alembic.ini
    ├── 📄 app.py
    └── 📄 settings.py
Let's break down the structure:
- 📁 eshop: The Turbulette project folder; it will contain applications and project-level configuration files
- 📁 alembic: Contains the Alembic scripts used when generating/applying DB migrations
  - 📄 env.py
  - 📄 script.py.mako
- 📄 .env: The actual project settings live here
- 📄 app.py: Your API entrypoint; it contains the ASGI app
- 📄 settings.py: Loads settings from the .env file
Question
Why have both .env and settings.py?
You don't have to. You can also put all your settings in settings.py.
But Turbulette encourages you to follow the twelve-factor methodology, which recommends separating settings from code, because config varies substantially across deploys while code does not.
This way, you can untrack .env from version control and keep tracking only settings.py, which will load settings from .env using Starlette's Config object.
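To illustrate, here is a minimal sketch of how settings.py can pull values from .env with Starlette's Config object (the DEBUG and SECRET_KEY names below are hypothetical examples, not settings generated by turb project):
# settings.py (sketch): read values from the .env file with Starlette's Config.
from starlette.config import Config
from starlette.datastructures import Secret

config = Config(".env")

# Illustrative settings; the names are examples, not Turbulette defaults.
DEBUG = config("DEBUG", cast=bool, default=False)
SECRET_KEY = config("SECRET_KEY", cast=Secret)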
2: Create the first app
Now it's time to create a Turbulette application!
Run this command under the project directory (eshop):
$ turb app -n account
Info
You need to run turb app under the project directory because the CLI needs to access the alembic.ini file to create the initial database migration.
You should see your new app under the project folder:
.
└── 📁 eshop
    ...
    └── 📁 account
        ├── 📁 graphql
        ├── 📁 migrations
        │   └── 📄 20200926_1508_auto_ef7704f9741f_initial.py
        ├── 📁 resolvers
        └── 📄 models.py
Details:
- 📁 graphql: All the GraphQL schema will live here
- 📁 migrations: Will contain database migrations generated by Alembic
- 📁 resolvers: Python package where you will write resolvers bound to the schema
- 📄 models.py: Will hold GINO models for this app
Question
What is this "initial" python file under 📁 migrations
?
We won't cover database connection in this quickstart, but note that it's the initial database migration
for the account
app that creates its dedicated Alembic branch, needed to generate/apply per-app migrations.
Before writing some code, the only thing to do is make Turbulette aware of our lovely account app.
To do this, open 📄 eshop/settings.py and add "eshop.account" to INSTALLED_APPS, so the application is registered and can be picked up by Turbulette at startup:
# List installed Turbulette apps that define some GraphQL schema
INSTALLED_APPS = ["eshop.account"]
3: GraphQL schema
Now that we have our project scaffold, we can start writing actual schema/code.
Create a schema.gql file in the 📁 graphql folder and add this base schema:
extend type Mutation {
  registerCard(input: CreditCard!): SuccessOut!
}

input CreditCard {
  number: String!
  expiration: Date!
  name: String!
}

type SuccessOut {
  success: Boolean
  errors: [String]
}
Info
Note that we extend the type Mutation because Turbulette already defines it. The same goes for the Query type.
Notice that we used the Date scalar; it's one of the custom scalars provided by Turbulette. It parses strings in the ISO 8601 date format YYYY-MM-DD (e.g. "2023-05-12").
4: Add a pydantic model
We want to validate our CreditCard input to ensure the user has entered a valid card number and date.
Fortunately, Turbulette integrates with Pydantic, a data validation library that uses Python type annotations, and offers a convenient way to generate a Pydantic model from a schema type.
Create a new 📄 pyd_models.py under 📁 account:
from turbulette.validation import GraphQLModel
from pydantic import PaymentCardNumber


class CreditCard(GraphQLModel):
    class GraphQL:
        gql_type = "CreditCard"
        fields = {"number": PaymentCardNumber}
What's happening here?
The inherited GraphQLModel class is a pydantic model that knows about the GraphQL schema and can produce pydantic fields from a given GraphQL type. We specify the GraphQL type with the gql_type attribute; it's the only one required.
But we also add a fields attribute to override the type of the number field, because it is string-typed in our schema. If we don't add this, Turbulette will assume that number is a string and will annotate the number field as str.
fields is a mapping between GraphQL field names and the types that will override the schema's ones.
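To make the mapping concrete, the generated model is roughly equivalent to the hand-written pydantic model below (an illustrative sketch based on the CreditCard input type, not Turbulette's actual generated class; the exact annotation derived for the Date scalar may differ):
# Hand-written sketch of what the CreditCard GraphQLModel roughly amounts to.
# Illustrative only; this is not Turbulette's actual generated model.
from datetime import date

from pydantic import BaseModel, PaymentCardNumber


class CreditCardManual(BaseModel):
    number: PaymentCardNumber  # overridden through the `fields` mapping
    expiration: date           # assumed mapping for the custom Date scalar
    name: str                  # plain String! field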
Let's add another validation check: the expiration date. We want to ensure the user has entered a valid date (i.e., later than now):
from datetime import datetime

from pydantic import PaymentCardNumber
from turbulette.validation import GraphQLModel, validator


class CreditCard(GraphQLModel):
    class GraphQL:
        gql_type = "CreditCard"
        fields = {"number": PaymentCardNumber}

    @validator("expiration")
    def check_expiration_date(cls, value):
        if value < datetime.now():
            raise ValueError("Expiration date is invalid")
        return value
Question
Why don't we use the @validator from Pydantic?
For those who have already used Pydantic, you probably know about the @validator decorator used to add custom validation rules on fields.
But here, we use a @validator imported from turbulette.validation. Why?
They're almost identical. Turbulette's validator is just a shortcut to the Pydantic one with check_fields=False as the default instead of True, because we use an inherited BaseModel. The above snippet would work correctly if we used Pydantic's validator and explicitly set @validator("expiration", check_fields=False).
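For reference, here is the same model written with Pydantic's own validator, exactly as described above (a sketch; only the decorator line changes):
from datetime import datetime

from pydantic import PaymentCardNumber, validator
from turbulette.validation import GraphQLModel


class CreditCard(GraphQLModel):
    class GraphQL:
        gql_type = "CreditCard"
        fields = {"number": PaymentCardNumber}

    # Pydantic's validator needs check_fields=False explicitly,
    # because the field is generated on an inherited model.
    @validator("expiration", check_fields=False)
    def check_expiration_date(cls, value):
        if value < datetime.now():
            raise ValueError("Expiration date is invalid")
        return value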
5: Add a resolver
The last missing piece is the resolver for our registerCard mutation, to make the API return something when we query it.
The GraphQL part is handled by Ariadne, a schema-first GraphQL library that allows binding the logic to the schema with minimal code.
As you may have guessed, we will create a new Python module in our 📁 resolvers package.
Let's call it 📄 user.py:
from turbulette import mutation

from ..pyd_models import CreditCard


@mutation.field("registerCard")
async def register(obj, info, **kwargs):
    return {"success": True}
mutation is the base mutation type defined by Turbulette and is used to register all mutation resolvers (hence the use of extend type Mutation on the schema).
For now, our resolver is very simple: it doesn't do any data validation on inputs, nor does it handle errors.
Turbulette has a @validate decorator that can be used to validate resolver input using a pydantic model (like the one defined in Step 4).
Here's how to use it:
from turbulette import mutation
from turbulette.validation import validate

from ..pyd_models import CreditCard


@mutation.field("registerCard")
@validate(CreditCard)
async def register(obj, info, **kwargs):
    return {"success": True}
If the validation succeeds, you can access the validated input data in kwargs["_val_data"].
But what happens otherwise? Normally, if the validation fails, pydantic will raise a ValidationError, but here the @validate decorator handles the exception and will add error messages returned by pydantic into a dedicated error field in the GraphQL response.
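For example, the resolver could pick up the validated input like this (a sketch; what you then do with the card data is up to you and omitted here):
from turbulette import mutation
from turbulette.validation import validate

from ..pyd_models import CreditCard


@mutation.field("registerCard")
@validate(CreditCard)
async def register(obj, info, **kwargs):
    # Input that passed CreditCard validation; on failure, @validate handles
    # the ValidationError and reports the messages in the GraphQL error field.
    card = kwargs["_val_data"]
    # ...store or charge the card here (omitted in this sketch)...
    return {"success": True}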
6: Run it
Our registerCard mutation is now bound to the schema, so let's test it.
Start the server in the root directory (the one containing the 📁 eshop folder):
$ uvicorn eshop.app:app --port 8000
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO: Started reloader process [306818] using statreload
INFO: Started server process [306831]
INFO: Waiting for application startup.
INFO: Application startup complete.
Now, go to http://localhost:8000/graphql; you will see the GraphQL Playground IDE.
Finally, run the registerCard mutation, for example:
mutation card {
  registerCard(
    input: {
      number: "4000000000000002"
      expiration: "2023-05-12"
      name: "John Doe"
    }
  ) {
    success
    errors
  }
}
This should give you the following result:
{
  "data": {
    "registerCard": {
      "success": true,
      "errors": null
    }
  }
}
Now, try entering a wrong date (before now). You should see the validation error as expected:
{
  "data": {
    "registerCard": {
      "success": null,
      "errors": [
        "expiration: Expiration date is invalid"
      ]
    }
  }
}
Question
How did the error message end up in the errors key?
Indeed, we didn't specify anywhere that validation errors should be passed to the errors key of our SuccessOut GraphQL type.
That is because Turbulette has a setting called ERROR_FIELD, which defaults to "errors".
This setting indicates the error field on the GraphQL output type used by Turbulette when collecting query errors.
It means that if the GraphQL output type doesn't define the field named by ERROR_FIELD, you will get an exception telling you that the field is missing.
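As a sketch, keeping (or changing) the field name is just a settings entry, with the GraphQL output type defining a matching field:
# settings.py (sketch): name of the field Turbulette uses to report collected
# errors. "errors" is already the default; any output type used for error
# reporting (like SuccessOut above) must define a field with this name.
ERROR_FIELD = "errors"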
This error-collection mechanism is the default (and recommended) way of handling errors in Turbulette. Still, as all of this happens in @validate, you can always remove the decorator and manually instantiate your Pydantic models in resolvers.
Good job! 👏
That was a straightforward example, showing off a simple Turbulette API setup. To get the most out of it, follow the User Guide.
Rationale
Right after the creation of the world, two things happened within one year: Python got asyncio and GraphQL came out (well, okay, things may have happened in the meantime). But these new features have brought a breath of fresh air for Python API developers:
- GraphQL allows you to type your API, describes how to ask for data when you make a request, and supports asynchronous messaging.
- Asynchronous programming is a form of concurrent programming that makes it possible to keep the program running without waiting for a task to finish. That allows us to write more efficient applications, especially those whose workloads are I/O bound (which is the case for the majority of APIs).
Turbulette is "batteries included" which means that everything you need to build your API (GraphQL) is here.
To be more precise, Turbulette does not invent or reinvent anything. The open-source world already has many great libraries and frameworks, so not using them and starting from scratch would take time and could lead to fragmentation.
Instead, Turbulette can be considered a kind of "glue" that makes these tools work well together without adding another layer of complexity on top. One of the main goals is to make their use in Turbulette as transparent as possible, so that someone who already knows some of them has practically nothing more to learn.