Customizing Avalon
This page is WORK IN PROGRESS and should not be used as a reliable reference.
For any Avalon pipeline there are two important pieces:
- Avalon Core
- Avalon Config
The core is what is shared by everyone and defines the basic behavior of Avalon: its tools, the connection with the database, the API, et cetera. You should never really need to change Avalon core unless you're building a new host integration or there's an issue with its "default behavior"; when starting your own configuration for Avalon you usually change nothing inside of Avalon core.
The config is your studio-specific configuration of your pipeline to work with Avalon. This defines what Loaders are available, how exactly you publish, and what validations are required for that.
To initialize Avalon one must always run `avalon.api.install` with the host of choice, for example `maya`:

```python
from avalon import api, maya

api.install(maya)
```
Upon calling `api.install()`, Avalon will initialize the connection with the database and trigger the installation of the application's integration, in this case: `avalon.maya.install`.
Usually the application integrations come with a setup script that initializes Avalon for you, e.g. `userSetup.py` for Maya, `123.py` for Houdini and `menu.py` for Nuke. They differ because each has to match whatever API the host application offers for triggering Python code on startup. Usually the startup scripts end up being as simple as the `api.install(maya)` example seen above.
The API allows you to uninstall the integration through `api.uninstall()`, however there is usually no reason to call this unless you want to explicitly remove the integration in the current session. The host applications should close down fine without an explicit call to `api.uninstall()`.
This is the order of steps taken on `api.install(host)`:

| Step | Description |
|---|---|
| `io.install()` | Build `api.Session` from environment variables and connect to the Avalon database |
| `host.install()` | Trigger the Avalon host integration |
| `config.host.install()` | The config's host-specific `install()` |
| `register_host(host)` | Set the registered host |
| `register_config(config)` | Set the registered config |
| `config.install()` | The config's global `install()` |
Where `host` is the integration, e.g. `maya` or `nuke`, and `config` is the studio configuration, e.g. the default `polly` or your own studio config.
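The sequence in the table can be sketched in plain Python. This is illustrative only: `io`, `registry` and the method names below are stand-ins for the real avalon internals, which differ in detail.

```python
# Illustrative sketch of the documented install order; NOT the real
# avalon.api.install implementation.
def install(host, config, io, registry):
    io.install()                 # build api.Session, connect to the database
    host.install()               # trigger the Avalon host integration
    config.host_install()        # the config's host-specific install()
    registry["host"] = host      # register_host(host)
    registry["config"] = config  # register_config(config)
    config.install()             # the config's global install()
```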
In essence, Avalon core provides the shell of tools to work with and the config defines the workflow of your pipeline. Without the config there is literally nothing you can load and manage in Avalon aside from your work files.
To define the behavior of your pipeline you will need to set up your own configuration of Avalon plug-ins inside your config.
A Creator is used to define "collections" in your work file to prepare content for publishing. This is usually persistent data that saves with your scene, so that when anyone opens your scene later the output remains consistent. It also eases workflows for batch publishing. That is the recommended workflow in Avalon and is why Creator plugins exist.
```python
# pseudocode
from avalon import api


class AlembicCreator(api.Creator):
    label = "Alembic"
    family = "pointcache"

    def process(self):
        # The settings provided by the Creator tool
        # are stored in `self.data`, like "subset".
        instance = create("container")
        instance.rename(self.data["subset"])

        # Set the data and settings from the Creator
        # interface on your container. That will be
        # the data you will read on "Publish" to identify
        # how the user wanted to publish.
        for key, value in self.data.items():
            instance.set_data(key, value)

        # Since Creators usually "contain" the nodes
        # that are meant to be published, it makes
        # life easier for the artist if we add the
        # currently selected nodes to the instance.
        selection = get_selection()
        instance.add_members(selection)

        # And often you'll add custom settings for
        # the user to customize how to publish.
        instance.set_data("startFrame", 1001)
        instance.set_data("endFrame", 1200)
        instance.set_data("writeVertexColors", True)
```
At the top of the Creator plug-in you can set class attributes that define its behavior:
| Attribute | Description |
|---|---|
| `label` | The nice label shown in the Creator tool |
| `family` | The family data for this instance. See Families |
| `defaults` | The default subset names to provide. See Subsets |
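A hypothetical example of these attributes on a Creator subclass. The base class here is a stand-in so the snippet runs on its own; in a real config you would subclass `avalon.api.Creator` instead.

```python
class Creator:  # stand-in for avalon.api.Creator
    label = None
    family = None
    defaults = []


class PointcacheCreator(Creator):
    label = "Pointcache"        # the nice label shown in the Creator tool
    family = "pointcache"       # written into the instance's data
    defaults = ["Main", "Sim"]  # suggested default subset names
```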
Loaders define what types of published content you can load and manage in the application. Without these Loader plug-ins in your config there will be no actions to perform within Avalon's Loader.
Loaders are used by the Loader tool and by `api.load`, `api.remove`, `api.switch` and `api.update`.
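A hedged sketch of a Loader plug-in. The base class below is a stand-in so the example is self-contained; in a real config you would subclass `avalon.api.Loader`, and the `context` layout shown is an assumption for illustration.

```python
class Loader:  # stand-in for avalon.api.Loader
    families = []
    representations = []


class AlembicLoader(Loader):
    """Hypothetical loader for published 'pointcache' Alembics."""
    families = ["pointcache"]
    representations = ["abc"]

    def load(self, context, name, namespace, options):
        # In a real host you would create the node/reference here and
        # then containerise it so ls() can find it afterwards.
        path = context["representation"]["data"]["path"]
        return {"name": name, "path": path}
```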
These plug-ins are used by Pyblish on publishing and can also be triggered using the `pyblish` API, e.g. `pyblish.util.publish()`.
Publishing in Avalon is done through Pyblish, providing a consistent and foolproof way of collecting, validating, extracting and integrating data from a work file to published data in Avalon. For an overview of how to create Pyblish plug-ins and build a consistent publishing pipeline, see the Pyblish documentation.
TODO Example Usage
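In the meantime, a Pyblish-style validator looks roughly like this. The base class below is a stand-in so the snippet runs without Pyblish installed; in a real plug-in you would subclass `pyblish.api.InstancePlugin` and use `pyblish.api.ValidatorOrder`, and the frame-range keys are an assumed example.

```python
class InstancePlugin:  # stand-in for pyblish.api.InstancePlugin
    order = 0


class ValidateFrameRange(InstancePlugin):
    """Fail publishing when an instance's frame range is inverted."""
    order = 1  # pyblish.api.ValidatorOrder in a real plug-in
    families = ["pointcache"]

    def process(self, instance):
        start = instance["startFrame"]
        end = instance["endFrame"]
        if end < start:
            raise ValueError(
                "endFrame (%s) is before startFrame (%s)" % (end, start))
```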
These are used by the separate Avalon Launcher tool.
Avalon allows you to customize what Applications or more generic Actions you can run from the Launcher. The launcher is basically a user interface for these Actions.
Applications are a subclass of Actions and thus inherit the same functionality. Additionally, they provide the basis for how the Launcher loads its `.toml` files: each `.toml` file is parsed and an Application type is created for it.
TODO Example Usage
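In the meantime, an Action can be sketched roughly like this. The base class and the `session` keys below are stand-ins and assumptions for illustration; a real Action would subclass `avalon.api.Action`.

```python
class Action:  # stand-in for avalon.api.Action
    name = None
    label = None


class OpenTaskFolder(Action):
    """Hypothetical Action that opens the current task's folder."""
    name = "opentaskfolder"
    label = "Open Task Folder"

    def is_compatible(self, session):
        # Only offer this action once a task has been selected.
        return "AVALON_TASK" in session

    def process(self, session):
        # A real Action would launch something here; we just report.
        return "opening folder for task: %s" % session["AVALON_TASK"]
```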
Each host integration requires some specific implementation details to be a valid Avalon integration.
TODO: Create this stub template. For now, the barebones for a host integration can be found in getavalon/avalon_stubs/host.
In essence it comes down to:
- Create a folder for the host in `avalon/`, exposed as a Python package by adding an `__init__.py` file, like `avalon/maya/__init__.py` and `avalon/houdini/__init__.py`.
- Expose an `install()` and `uninstall()` method. It's recommended to import the functions from another module inside the host's package; usually Avalon host integrations import from a file called `pipeline.py` which then implements the integration details.
- Set up the Work Files API for the host, otherwise the Work Files tool will not work.
Many of the host implementations currently expose a `containerise` method to make it simple for configs to load whatever content they need and then containerise it, so that Avalon can correctly find it as loaded content with that host's `ls()` function.
The `containerise` and `parse_container` methods are not required for a functioning integration. However, having them available across multiple integrations in a similar way does make it easier for those familiar with one integration to adopt another. Note how, for example, the Fusion integration does not currently expose `containerise` but instead has `imprint_container` doing basically the same thing.
So what do they do and why are they there?
To answer that question it is important to understand the `ls()` method of that host. Implementing that method comes down to finding the best way, in that host application, to locate the loaded content and identify all of it: how can we find the group, container or collection of loaded content efficiently, without setting very harsh requirements on the studio configuration and how a studio's pipeline would function?
- In Maya, for example, we chose to use an `objectSet`. This is perfect because we then have a simple node type we can query, it can have any members (e.g. any loaded content), and beyond Maya's referencing it means we can even track loaded non-referenced nodes. So using `avalon.maya.containerise` on your loaded content will make an `objectSet` that the `ls()` method will find as a valid container.
- In Fusion we just allow any "loader" to be found, because due to the node-based nature of Fusion whatever is loaded usually boils down to a single node.
But how do we recognize actual containers?
A container in Avalon must adhere to the `avalon-core:container` schema, so the result of `ls()` must be able to return that data. Usually the `containerise` functions embed this information on the container (e.g. the `objectSet` in Maya or the loader in Fusion) so that `ls()` can easily retrieve it. This parsing of the container data from the node is what `parse_container` does: pass it the expected node type for that host integration and it returns the valid container. The Fusion `parse_container` is a good reference.
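The idea can be sketched in a few lines. This is a hedged illustration, not the real Fusion or Maya implementation: a plain dict stands in for the host node, and the exact set of required keys is an assumption based on the `avalon-core:container` schema described above.

```python
# Hypothetical parse_container: read back the data that containerise()
# imprinted on a node and return it, or None when it is not a valid
# Avalon container.
REQUIRED_KEYS = ("schema", "id", "name", "namespace", "loader", "representation")


def parse_container(node):
    """Return the imprinted container data, or None when invalid."""
    data = node.get("avalon", {})
    if not all(key in data for key in REQUIRED_KEYS):
        return None
    return dict(data)
```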