Upgrade Guide
Pathom tries its best not to introduce breaking changes; in some cases we prefer to introduce a new function or namespace rather than replace an old one whose functionality differs. This part of the guide provides information and suggestions on what to do when upgrading to a given version. In the exceptional cases where we do introduce breaking changes, upgrade notes will also appear here.
2.2.0 - Upgrade guide
Supporting resolver libraries
This is not a breaking change. Pathom 2.2.0 introduces new dispatchers to call resolvers and mutations. The old dispatchers relied on a multimethod to invoke the calls; the new dispatchers simply look up a lambda stored in the resolver/mutation definition in the index. The main advantage is that this reduces the number of places that need to change when adding resolvers and mutations: previously you had three places to change (the index, the resolver dispatch and the mutation dispatch), but with the new dispatch only the index needs to change.
This facilitates the creation of shared resolver/mutation libraries that you can inject and make part of your parsing system. As an example of a shared library, I have previously written a demo project implementing the YouTube API for connect.
To enable this feature, change the dispatch functions used in the parser setup, replacing your resolver fns with the new ones provided by connect. For example:
; this is the old setup
; setup indexes atom
(def indexes (atom {}))
; setup resolver dispatch and factory
(defmulti resolver-fn pc/resolver-dispatch)
(def defresolver (pc/resolver-factory resolver-fn indexes))
; setup mutation dispatch and factory
(defmulti mutation-fn pc/mutation-dispatch)
(def defmutation (pc/mutation-factory mutation-fn indexes))
(def parser
  (p/parser {::p/env     {::p/reader [p/map-reader pc/all-readers]
                          ::pc/resolver-dispatch resolver-fn
                          ::pc/mutate-dispatch   mutation-fn
                          ::pc/indexes           @indexes
                          ::db                   (atom {})}
             ::p/mutate  pc/mutate
             ::p/plugins [p/error-handler-plugin
                          pp/profile-plugin]}))
The minimal change needed is as follows:
; minimal changes to support shared resolver libraries
; setup indexes atom
(def indexes (atom {}))
; setup resolver dispatch and factory
(defmulti resolver-fn pc/resolver-dispatch)
(def defresolver (pc/resolver-factory resolver-fn indexes))
; setup mutation dispatch and factory
(defmulti mutation-fn pc/mutation-dispatch)
(def defmutation (pc/mutation-factory mutation-fn indexes))
(def parser
  (p/parser {::p/env     {::p/reader [p/map-reader pc/reader2 pc/ident-reader] ; use reader2
                          ; replace resolver dispatch
                          ::pc/resolver-dispatch pc/resolver-dispatch-embedded
                          ; replace mutation dispatch
                          ::pc/mutate-dispatch   pc/mutation-dispatch-embedded
                          ::pc/indexes           @indexes
                          ::db                   (atom {})}
             ::p/mutate  pc/mutate
             ::p/plugins [; add connect plugin
                          (pc/connect-plugin)
                          p/error-handler-plugin
                          pp/profile-plugin]}))
The new versions of resolver-factory and mutation-factory will add the lambdas into the definition map, making those compatible with the new *-dispatch-embedded, so you get your old resolvers plus any extra ones from libs.
From now on, when I say resolver or resolvers, I mean both resolvers and mutations. Adding this note here so you don't have to read all the repetition.
From now on we will be recommending the new way of writing resolvers using the pc/defresolver macro. A few advantages of this approach are worth highlighting:
- Your resolvers become isolated building blocks on their own. Instead of spreading their definition across the index and a multimethod, the map now contains everything a resolver needs to be used.
- You get fine control over which resolvers to inject into a given parser. Previously it wasn't easy to write several parsers using subsets of resolvers; now each resolver lives in a symbol you can compose as you please.
- Simplified boilerplate: there is no longer a need to define multimethods for dispatching.
This is what the setup looks like, using the new map format:
; setup with map format
; this will generate a def for the symbol `some-resolver` and the def will
; contain a map that is the resolver definition, no external side effects
(pc/defresolver some-resolver [env input]
  {::pc/input  #{::id}
   ::pc/output [::name ::email]}
  ; ::db holds an atom in the parser env, so deref it before looking up
  (get @(::db env) (::id input)))
; define another resolver
(pc/defresolver other-resolver ...)
; now it's a good practice to create a sequence containing the resolvers
(def app-registry [some-resolver other-resolver])
(def parser
  (p/parser {::p/env     {::p/reader [p/map-reader pc/reader2 pc/ident-reader]
                          ::pc/resolver-dispatch pc/resolver-dispatch-embedded
                          ::pc/mutate-dispatch   pc/mutation-dispatch-embedded
                          ::db                   (atom {})}
             ::p/mutate  pc/mutate
             ::p/plugins [; you can use the connect plugin to register your resolvers,
                          ; but any plugin with the ::pc/register key will also be
                          ; included in the index
                          (pc/connect-plugin {::pc/register app-registry})
                          p/error-handler-plugin
                          pp/profile-plugin]}))
A possible pain point of this approach is that you now have to specify which resolvers to use, whereas in earlier versions the only option was all or nothing. If you have resolvers spread across many files, I suggest creating a list at the end of each namespace containing all the resolvers in that file; this way you can combine those lists into a single index later. The resolver list is flattened when processed, and it's OK to send lists inside lists, which makes it easy to combine lists of resolvers.
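As an illustration of this suggestion, a per-namespace registry might look like the sketch below (namespace and resolver names are made up for the example):

```clojure
; each namespace exposes a list of the resolvers defined in that file
(def user-resolvers    [user-by-id user-friends])   ; e.g. at the end of my-app.user
(def product-resolvers [product-by-id])             ; e.g. at the end of my-app.product

; the parser namespace nests those lists; connect flattens nested lists
; when building the index, so this composes freely
(def app-registry [user-resolvers product-resolvers])

; then: (pc/connect-plugin {::pc/register app-registry})
```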
The multimethod format is still OK to use; there are no plans to remove it, and it's fine if you wish to keep using it.
Parallel parser
Pathom 2.2.0 also introduces the parallel parser. Before this, all Pathom processing was done serially, one attribute at a time. The new parser adds the ability to process attributes in parallel.
Note that this benefit comes with a considerable overhead cost. After experiments with different users, I reached the conclusion that the parallel-parser is not a good fit for most users. To benefit from the parallel parser, you need to be in a position where:
- You have large queries, meaning hundreds of attributes in the same query.
- Your resolvers can be parallelized, meaning it should be OK to trigger many of them at once, for example when most resolvers hit different foreign services. If all your resolvers hit a single database, parallelism may create harmful pressure on it.
If you are using the async-parser, the only change required to move to the parallel parser is changing the parser to parallel-parser and swapping in the parallel connect readers. If you are using the regular sync parser, you may need to adapt some things to support an async environment. Here are things to watch out for:
- If you write plugins, when wrapping calls you must consider that their response may be async (a core.async channel). One of the easiest ways to handle this is the let-chan macro, which is a let that automatically handles channels and makes the process transparent.
- If you do recursive parser calls (which includes calls to functions like join and entity with arity 2), remember those calls may also return channels.
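The async-to-parallel switch described above can be sketched like this, assuming Pathom 2.2.0 names (the parallel parser returns a core.async channel):

```clojure
(def parser
  (p/parallel-parser
    {::p/env     {::p/reader [p/map-reader pc/parallel-reader pc/ident-reader]}
     ::p/mutate  pc/mutate
     ::p/plugins [(pc/connect-plugin {::pc/register app-registry})
                  p/error-handler-plugin]}))
```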
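As an illustration of the plugin point, here is a hypothetical timing plugin sketch using let-chan (from com.wsscode.common.async-clj), so it works whether the wrapped reader returns a plain value or a channel:

```clojure
(ns my-app.plugins
  (:require [com.wsscode.common.async-clj :refer [let-chan]]
            [com.wsscode.pathom.core :as p]))

(def timing-plugin
  {::p/wrap-read
   (fn [reader]
     (fn [env]
       (let [start (System/currentTimeMillis)]
         ; let-chan reads like let, but transparently awaits the value
         ; when (reader env) returns a core.async channel
         (let-chan [result (reader env)]
           (println "read took" (- (System/currentTimeMillis) start) "ms")
           result))))})
```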
Tracer
Pathom 2.2.0 includes a new tracer feature. I recommend replacing the old profiler with it: remove pp/profile-plugin and add p/trace-plugin (best added as the last plugin in your chain).
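A sketch of the swap, assuming the rest of the parser setup stays as in the earlier examples (the trace plugin lives in com.wsscode.pathom.core in Pathom 2.2.0):

```clojure
(def parser
  (p/parser {::p/env     {::p/reader [p/map-reader pc/reader2 pc/ident-reader]}
             ::p/mutate  pc/mutate
             ::p/plugins [(pc/connect-plugin {::pc/register app-registry})
                          p/error-handler-plugin
                          ; pp/profile-plugin removed; the tracer goes last
                          p/trace-plugin]}))
```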
2.2.0-beta11 → 2.2.0-RC1 - Breaking changes
In version 2.2.0-beta11 we introduced pc/connect-plugin and pc/register with the intent of providing an easier way to write shared resolvers, and also of reducing the boilerplate needed to set up connect.
This strategy failed to make setting up a registry and further integrations simple, because it relied on multiple moving parts. A better strategy emerged: embed the lambda that runs each resolver or mutation in its own map, so that each definition is complete and stands alone.
But to accommodate this, the connect plugin and pc/register had to change. Previously pc/connect-plugin was a var; now it is a fn that you must call. register used to take an index atom, a multimethod for resolvers and a multimethod for mutations, statefully mutating all three; now it is a pure function that takes the index as a map and returns a new index with the items registered.
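A minimal sketch of the new pure-function style, with illustrative resolver names, following the description above: start from an empty index map and thread it through pc/register, which returns a new index each time:

```clojure
(def indexes
  (-> {}
      (pc/register user-by-id)
      ; lists (including nested lists) of resolvers/mutations also work
      (pc/register [product-by-id product-mutations])))
```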