{"location":"","title":"Hyperledger Aries Cloud Agent - Python","text":"

An easy to use Aries agent for building SSI services using any language that supports sending/receiving HTTP requests.

An organized set of all of the ACA-Py documents is available at https://aca-py.org. Check it out! It's much easier to navigate than this GitHub repo for reading the documentation.

"},{"location":"#overview","title":"Overview","text":"

Hyperledger Aries Cloud Agent Python (ACA-Py) is a foundation for building Verifiable Credential (VC) ecosystems. It operates in the second and third layers of the Trust Over IP framework (PDF) using DIDComm messaging and Hyperledger Aries protocols. The \"cloud\" in the name means that ACA-Py runs on servers (cloud, enterprise, IoT devices, and so forth), and is not designed to run on mobile devices.

ACA-Py is built on the Aries concepts and features that make up Aries Interop Profile (AIP) 2.0. ACA-Py's supported Aries protocols include, most importantly, protocols for issuing, verifying, and holding verifiable credentials using both the Hyperledger AnonCreds verifiable credential format and the W3C Standard Verifiable Credential Data Model format using JSON-LD with LD-Signatures and BBS+ Signatures. Coming soon -- issuing and presenting Hyperledger AnonCreds verifiable credentials using the W3C Standard Verifiable Credential Data Model format.

To use ACA-Py you create a business logic controller that \"talks to\" an ACA-Py instance (sending HTTP requests and receiving webhook notifications), and ACA-Py handles the Aries and DIDComm protocols and related functionality. Your controller can be built in any language that supports making and receiving HTTP requests; knowledge of Python is not needed. Together, this means you can focus on building VC solutions using familiar web development technologies, instead of having to learn the nuts and bolts of low-level cryptography and Trust over IP-type Aries protocols.
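
As a flavor of what a controller call looks like, here is a minimal sketch in Python. It assumes an agent with its admin API listening on localhost:8031, no admin API key, and the out-of-band protocol loaded; check your deployment's OpenAPI interface for the exact routes and payloads.

```python
# Minimal controller sketch: ask the agent to create an out-of-band invitation.
# Assumptions: admin API at localhost:8031, no --admin-api-key set.
import requests

ADMIN_URL = "http://localhost:8031"

resp = requests.post(
    f"{ADMIN_URL}/out-of-band/create-invitation",
    json={"handshake_protocols": ["https://didcomm.org/didexchange/1.0"]},
)
resp.raise_for_status()
invitation = resp.json()
# The invitation URL is what you deliver (QR code, deep link) to the other agent.
print(invitation["invitation_url"])
```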

This checklist-style overview document provides a full list of the features in ACA-Py. The following is a list of some of the core features needed for a production deployment, with a link to detailed information about the capability.

"},{"location":"#multi-tenant","title":"Multi-Tenant","text":"

ACA-Py supports \"multi-tenant\" scenarios. In these scenarios, one (scalable) instance of ACA-Py uses one database instance, and are together capable of managing separate secure storage (for private keys, DIDs, credentials, etc.) for many different actors. This enables (for example) an \"issuer-as-a-service\", where an enterprise may have many VC issuers, each with different identifiers, using the same instance of ACA-Py to interact with VC holders as required. Likewise, an ACA-Py instance could be a \"cloud wallet\" for many holders (e.g. people or organizations) that, for whatever reason, cannot use a mobile device for a wallet. Learn more about multi-tenant deployments here.

"},{"location":"#mediator-service","title":"Mediator Service","text":"

Startup options allow the use of an ACA-Py instance as an Aries mediator using core Aries protocols to coordinate its mediation role. Such an ACA-Py instance receives, stores, and forwards messages to Aries agents that (for example) lack an addressable endpoint on the Internet, such as a mobile wallet. A live instance of a public mediator based on ACA-Py is available here from Indicio Technologies. Learn more about deploying a mediator here. See the Aries Mediator Service for a \"best practices\" configuration of an Aries mediator.
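
From the client side, an agent that already has a connection to a mediator can ask it to coordinate routing. The sketch below assumes the /mediation/request/{conn_id} admin route from ACA-Py's coordinate-mediation support and uses a placeholder connection id.

```python
import requests

ADMIN_URL = "http://localhost:8031"
mediator_conn_id = "00000000-0000-0000-0000-000000000000"  # placeholder connection id

# Ask the connected mediator to coordinate routing for this agent.
resp = requests.post(f"{ADMIN_URL}/mediation/request/{mediator_conn_id}", json={})
mediation_id = resp.json().get("mediation_id")
# Once the grant arrives (reported via webhook), the mediator forwards inbound messages.
```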

"},{"location":"#indy-transaction-endorsing","title":"Indy Transaction Endorsing","text":"

ACA-Py supports a Transaction Endorsement protocol for agents that don't have write access to an Indy ledger. Endorser support is documented here.

"},{"location":"#scaled-deployments","title":"Scaled Deployments","text":"

ACA-Py supports deployments in scaled environments, such as Kubernetes, where ACA-Py and its storage components can be horizontally scaled as needed to handle the load.

"},{"location":"#vc-api-endpoints","title":"VC-API Endpoints","text":"

A set of endpoints conforming to the vc-api specification is included to manage W3C credentials and presentations. They are documented here and a Postman demo is available here.
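
A hedged sketch of issuing a W3C credential through those endpoints is below. The /vc/credentials/issue path follows the vc-api draft and may differ in your deployment; the DIDs shown are placeholders.

```python
import requests

ADMIN_URL = "http://localhost:8031"

credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential"],
    "issuer": "did:key:z6MkExampleIssuer",  # hypothetical issuer DID
    "issuanceDate": "2024-01-01T00:00:00Z",
    "credentialSubject": {"id": "did:key:z6MkExampleHolder"},
}

resp = requests.post(
    f"{ADMIN_URL}/vc/credentials/issue",
    json={"credential": credential, "options": {}},
)
verifiable_credential = resp.json()
```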

"},{"location":"#example-uses","title":"Example Uses","text":"

The business logic you use with ACA-Py is limited only by your imagination. Possible applications include:

  • An interface to a legacy system to issue verifiable credentials
  • An authentication service based on the presentation of verifiable credential proofs
  • An enterprise wallet to hold and present verifiable credentials about that enterprise
  • A user interface for a person to use a wallet not stored on a mobile device
  • An application embedded in an IoT device, capable of issuing verifiable credentials about collected data
  • A persistent connection to other agents that enables secure messaging and notifications
  • Custom code to implement a new service
"},{"location":"#getting-started","title":"Getting Started","text":"

For those new to SSI, Aries and ACA-Py, there are a couple of Linux Foundation edX courses that provide a good starting point.

  • Identity in Hyperledger: Indy, Aries and Ursa
  • Becoming a Hyperledger Aries Developer

The latter is the most useful for developers wanting to get a solid basis in using ACA-Py and other Aries Frameworks.

Also included here is a much more concise (but less maintained) Getting Started Guide that will take you from knowing next to nothing about decentralized identity to developing Aries-based business apps and services. You'll run an Indy ledger (with no ramp-up time), ACA-Py apps and developer-oriented demos. The guide has a table of contents so you can skip the parts you already know.

"},{"location":"#understanding-the-architecture","title":"Understanding the Architecture","text":"

There is an architectural deep dive webinar presented by the ACA-Py team, and slides from the webinar are also available. The picture below gives a quick overview of the architecture, showing an instance of ACA-Py, a controller and the interfaces between the controller and ACA-Py, and the external paths to other agents and public ledgers on the Internet.

You can extend ACA-Py using plug-ins, which can be loaded at runtime. Plug-ins are mentioned in the webinar and are described in more detail here. An ever-expanding set of ACA-Py plugins can be found in the Aries ACA-Py Plugins repository. Check them out -- it might have the very plugin you need!
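
To give a flavor of the plugin model, here is a skeletal plugin module. This is a sketch only: the module paths and the event topic pattern are assumptions based on aries_cloudagent conventions, and the module name is hypothetical.

```python
# my_plugin/__init__.py -- load with: aca-py start ... --plugin my_plugin
import re

from aries_cloudagent.config.injection_context import InjectionContext
from aries_cloudagent.core.event_bus import Event, EventBus
from aries_cloudagent.core.profile import Profile


async def setup(context: InjectionContext):
    """Entry point ACA-Py calls when loading the plugin at startup."""
    event_bus = context.inject(EventBus)
    # Subscribe to connection record events (topic pattern is an assumption).
    event_bus.subscribe(re.compile("^acapy::record::connections::.*"), on_connection_event)


async def on_connection_event(profile: Profile, event: Event):
    # React to state changes, e.g. log newly active connections.
    print(f"connection event on topic {event.topic}: {event.payload}")
```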

"},{"location":"#installation-and-usage","title":"Installation and Usage","text":"

Use the \"install and go\" page for developers if you are comfortable with Trust over IP and Aries concepts. ACA-Py can be run with Docker without installation (highly recommended), or can be installed from PyPi. In the repository /demo folder there is a full set of demos for developers to use in getting up to speed quickly. Start with the Traction Workshop to go through a complete ACA-Py-based Issuer-Holder-Verifier flow in about 20 minutes. Next, the Alice-Faber Demo is a great way for developers try a zero-install example of how to use the ACA-Py API to operate a couple of Aries Agents. The Read the Docs overview is also a way to understand the internal modules and APIs that make up an ACA-Py instance.

If you would like to develop on ACA-Py locally, note that we use Poetry for dependency management and packaging. If you are unfamiliar with Poetry, please see our cheat sheet.

"},{"location":"#about-the-aca-py-admin-api","title":"About the ACA-Py Admin API","text":"

The overview of ACA-Py's API is a great starting place for learning about the ACA-Py API when you are starting to build your own controller.

An ACA-Py instance puts together an OpenAPI-documented REST interface based on the protocols that are loaded. This is used by a controller application (written in any language) to manage the behavior of the agent. The controller can initiate actions (e.g. issuing a credential) and can respond to agent events (e.g. sending a presentation request after a connection is accepted). Agent events are delivered to the controller as webhooks to a configured URL.

Technical note: the administrative API exposed by the agent for the controller to use must be protected with an API key (using the --admin-api-key command line arg) or deliberately left unsecured using the --admin-insecure-mode command line arg. The latter should not be used outside of development unless the API is otherwise secured.
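
Putting the two halves together, here is a minimal controller sketch. It assumes aiohttp for the web server, an agent started with --webhook-url http://controller:8080/webhooks (URL assumed), and an admin API key passed in the X-API-KEY header on admin calls.

```python
from aiohttp import web

API_KEY = "replace-with-secret"          # must match the --admin-api-key value
ADMIN_HEADERS = {"X-API-KEY": API_KEY}   # sent on every admin API call


async def handle_webhook(request: web.Request) -> web.Response:
    topic = request.match_info["topic"]  # e.g. "connections", "issue_credential_v2_0"
    payload = await request.json()
    if topic == "connections" and payload.get("state") == "active":
        # A real controller would call back into the admin API here
        # (passing ADMIN_HEADERS) to, say, send a presentation request.
        print(f"connection {payload.get('connection_id')} is now active")
    return web.Response()


app = web.Application()
# ACA-Py delivers agent events as POST <webhook-url>/topic/<topic>/
app.add_routes([web.post("/webhooks/topic/{topic}/", handle_webhook)])

if __name__ == "__main__":
    web.run_app(app, port=8080)
```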

"},{"location":"#troubleshooting","title":"Troubleshooting","text":"

There are a number of resources for getting help with ACA-Py and troubleshooting any problems you might run into. The Troubleshooting document contains some guidance about issues that have been experienced in the past. Feel free to submit PRs to supplement the troubleshooting document! Searching the ACA-Py GitHub issues may uncover challenges you are having that others have experienced, often with solutions. As well, there is the \"aries-cloudagent-python\" channel on the Hyperledger Discord chat server (invitation here).

"},{"location":"#credit","title":"Credit","text":"

The initial implementation of ACA-Py was developed by the Government of British Columbia's Digital Trust Team in Canada. To learn more about what's happening with decentralized identity and digital trust in British Columbia, check out the BC Digital Trust website.

See the MAINTAINERS.md file for a list of the current ACA-Py maintainers, and the guidelines for becoming a Maintainer. We'd love to have you join the team if you are willing and able to carry out the duties of a Maintainer.

"},{"location":"#contributing","title":"Contributing","text":"

Pull requests are welcome! Please read our contributions guide and submit your PRs. We enforce developer certificate of origin (DCO) commit signing; guidance on this is available. We also welcome issues submitted about problems you encounter in using ACA-Py.

"},{"location":"#license","title":"License","text":"

Apache License Version 2.0

"},{"location":"CHANGELOG/","title":"Aries Cloud Agent Python Changelog","text":""},{"location":"CHANGELOG/#0121","title":"0.12.1","text":""},{"location":"CHANGELOG/#april-26-2024","title":"April 26, 2024","text":"

Release 0.12.1 is a small patch to clean up some edge case issues in the handling of Out of Band invitations, revocation notification webhooks, and connection querying uncovered after the 0.12.0 release. Fixes and improvements were also made to the generation of ACA-Py's OpenAPI specifications.

"},{"location":"CHANGELOG/#0121-breaking-changes","title":"0.12.1 Breaking Changes","text":"

There are no breaking changes in this release.

"},{"location":"CHANGELOG/#0121-categorized-list-of-pull-requests","title":"0.12.1 Categorized List of Pull Requests","text":"
  • Out of Band Invitations and Connection Establishment updates/fixes:

    • 🐛 Fix ServiceDecorator parsing in oob record handling #2910 ff137
    • fix: consider all resolvable dids in invites \"public\" #2900 dbluhm
    • fix: oob record their_service should be updatable #2897 dbluhm
    • fix: look up conn record by invite msg id instead of key #2891 dbluhm
  • OpenAPI/Swagger updates, fixes and cleanups:

    • Fix api schema mixup in revocation routes #2909 jamshale
    • 🎨 fix typos #2898 ff137
    • ⬆️ Upgrade codegen tools used in generate-open-api-spec #2899 ff137
    • 🐛 Fix IndyAttrValue model that was dropped from openapi spec #2894 ff137
  • Test and Demo updates:

    • fix Faber demo to use oob with aip10 to support connection reuse #2903 ianco
    • fix: integration tests should use didex 1.1 #2889 dbluhm
  • Credential Exchange updates and fixes:

    • fix: rev notifications on publish pending #2916 dbluhm
  • Endorsement of Indy Transactions fixes:

    • Prevent 500 error when re-promoting DID with endorsement #2885 jamshale
    • Fix ack during for auto endorsement #2883 jamshale
  • Documentation publishing process updates:

    • Some updates to the mkdocs publishing process #2888 swcurran
    • Update GHA so that broken image links work on docs site - without breaking them on GitHub #2852 swcurran
  • Dependencies and Internal Updates:

    • chore(deps): Bump psf/black from 24.4.0 to 24.4.2 in the all-actions group #2924 dependabot bot
    • fix: fixes a regression that requires a log file in multi-tenant mode #2918 amanji
    • Update AnonCreds to 0.2.2 #2917 swcurran
    • chore(deps): Bump aiohttp from 3.9.3 to 3.9.4 dependencies python #2902 dependabot bot
    • chore(deps): Bump idna from 3.4 to 3.7 in /demo/playground/examples dependencies python #2886 dependabot bot
    • chore(deps): Bump psf/black from 24.3.0 to 24.4.0 in the all-actions group dependencies github_actions #2893 dependabot bot
    • chore(deps): Bump idna from 3.6 to 3.7 dependencies python #2887 dependabot bot
    • refactor: logging configs setup #2870 amanji
  • Release management pull requests:

    • 0.12.1 #2926 swcurran
    • 0.12.1rc1 #2921 swcurran
    • 0.12.1rc0 #2912 swcurran
"},{"location":"CHANGELOG/#0120","title":"0.12.0","text":""},{"location":"CHANGELOG/#april-11-2024","title":"April 11, 2024","text":"

Release 0.12.0 is a large release with many new capabilities, feature improvements, upgrades, and bug fixes. Importantly, this release completes the ACA-Py implementation of Aries Interop Profile v2.0, and enables the elimination of unqualified DIDs. While unqualified DIDs are only deprecated for now, all deployments of ACA-Py SHOULD move to using only fully qualified DIDs as soon as possible.

Much progress has been made on did:peer support in this release, with the handling of inbound DID Peer 1 added, and inbound and outbound support for DID Peer 2 and 4. Much attention was also paid to making sure that the Peer DID and DID Exchange capabilities match those of Credo-TS (formerly Aries Framework JavaScript). The completion of that work eliminates the remaining places where \"unqualified\" DIDs were being used, and enables the \"connection reuse\" feature in the Out of Band protocol when using DID Peer 2 and 4 DIDs in invitations. See the document Qualified DIDs for details about how to control the use of DID Peer 2 or 4 in an ACA-Py deployment, and how to eliminate the use of unqualified DIDs. Support for DID Exchange v1.1 has been added to ACA-Py, with support for DID Exchange v1.0 retained, and we've added support for DID Rotation.

Work continues towards supporting ledger agnostic AnonCreds, and the new Hyperledger AnonCreds Rust library. Some of that work is in this release, the rest will be in the next release.

Attention was given in the release to simplifying the handling of JSON-LD Data Integrity Verifiable Credentials.

An important change in this release is the re-organization of the ACA-Py documentation, moving the vast majority of the documents to folders within the docs folder -- a long overdue change that will allow us to soon publish the documents on https://aca-py.org directly from the ACA-Py repository, rather than from the separate aries-acapy-docs repository currently being used.

A big developer improvement is a revamping of the test handling to eliminate ~2500 warnings that were previously generated in the test suite. Nice job @ff137!

"},{"location":"CHANGELOG/#0120-breaking-changes","title":"0.12.0 Breaking Changes","text":"

A deployment of this release that uses DID Peer 2 and 4 invitations may encounter problems interacting with agents deployed using older Aries protocols. Led by the Aries Working Group, the Aries community is encouraging the upgrade of all ecosystem deployments to accept all commonly used qualified DIDs, including DID Peer 2 and 4. See the document Qualified DIDs for more details about the transition to using only qualified DIDs. If deployments you interact with are still using unqualified DIDs, please encourage them to upgrade as soon as possible.

Specifically, for those upgrading an ACA-Py instance that creates Out of Band invitations with more than one handshake_protocol, the protocol for the connection has been removed. [Issue #2879] contains the details of this subtle breaking change.

New deprecation notices were added to ACA-Py on startup and in the OpenAPI/Swagger interface. Those added are listed below. As well, we anticipate 0.12.0 being the last ACA-Py release to include support for the previously deprecated Indy SDK.

  • RFC 0036 Issue Credential v1
    • Migrate to use RFC 0453 Issue Credential v2
  • RFC 0037 Present Proof v1
    • Migrate to use RFC 0454 Present Proof v2
  • RFC 0169 Connections
    • Migrate to use RFC 0023 DID Exchange and 0434 Out-of-Band
  • The use of did:sov:... as a Protocol Doc URI
    • Migrate to use https://didcomm.org/.
"},{"location":"CHANGELOG/#0120-categorized-list-of-pull-requests","title":"0.12.0 Categorized List of Pull Requests","text":"
  • DID Handling and Connection Establishment Updates/Fixes

    • fix: conn proto in invite webhook if known #2880 dbluhm
    • Emit the OOB done event even for multi-use invites #2872 ianco
    • refactor: introduce use_did and use_did_method #2862 dbluhm
    • fix(credo-interop): various didexchange and did:peer related fixes 1.0.0 #2748 dbluhm
    • Change did ↔ verkey logging on connections #2853 jamshale
    • fix: did exchange multiuse invites respond in kind #2850 dbluhm
    • Support connection re-use for did:peer:2/4 #2823 ianco
    • feat: did-rotate #2816 amanji
    • Author subwallet setup automation #2791 jamshale
    • fix: save multi_use to the DB for OOB invitations #2694 frostyfrog
    • Connection and DIDX Problem Reports #2653 usingtechnology
  • DID Peer and DID Resolver Updates and Fixes

    • Integration test for did:peer #2713 ianco
    • Feature/emit did peer 4 #2696 Jsyro
    • did peer 4 resolution #2692 Jsyro
    • Emit did:peer:2 for didexchange #2687 Jsyro
    • Add did web method type as a default option #2684 PatStLouis
    • feat: add did:jwk resolver #2645 dbluhm
    • feat: support resolving did:peer:1 received in did exchange #2611 dbluhm
  • AnonCreds and Ledger Agnostic AnonCreds RS Changes

    • Prevent revocable cred def being created without tails server #2849 jamshale
    • Anoncreds - support for anoncreds and askar wallets concurrently #2822 jamshale
    • Send revocation list instead of rev_list object - Anoncreds #2821 jamshale
    • Fix anoncreds non-endorsement revocation #2814 jamshale
    • Get and create anoncreds profile when using anoncreds subwallet #2803 jamshale
    • Add anoncreds multitenant endorsement integration tests #2801 jamshale
    • Anoncreds revoke and publish-revocations endorsement #2782 jamshale
    • Upgrade anoncreds to version 0.2.0-dev11 #2763 jamshale
    • Update anoncreds to 0.2.0-dev10 #2758 jamshale
    • Anoncreds - Cred Def and Revocation Endorsement #2752 jamshale
    • Upgrade anoncreds to 0.2.0-dev9 #2741 jamshale
    • Upgrade anoncred-rs to version 0.2.0-dev8 #2734 jamshale
    • Upgrade anoncreds to 0.2.0.dev7 #2719 jamshale
    • Improve api documentation and error handling #2690 jamshale
    • Add unit tests for anoncreds revocation #2688 jamshale
    • Return 404 when schema not found #2683 jamshale
    • Anoncreds - Add unit testing #2672 jamshale
    • Additional anoncreds integration tests AnonCreds #2660 ianco
    • Update integration tests for anoncreds-rs AnonCreds #2651 ianco
    • Initial migration of anoncreds revocation code AnonCreds #2643 ianco
    • Integrate Anoncreds rs into credential and presentation endpoints AnonCreds #2632 ianco
    • Initial code migration from anoncreds-rs branch AnonCreds #2596 ianco
  • Hyperledger Indy ledger related updates and fixes

    • Remove requirement for write ledger in read-only mode. #2836 esune
    • Add known issues section to Multiledger.md documentation #2788 esune
    • fix: update constants in TransactionRecord #2698 amanji
    • Cache TAA by wallet name #2676 jamshale
    • Fix: RevRegEntry Transaction Endorsement 0.11.0 #2558 shaangill025
  • JSON-LD Verifiable Credential/DIF Presentation Exchange updates

    • Add missing VC-DI/LD-Proof verification method option #2867 PatStLouis
    • Revert profile injection for VcLdpManager on vc-api endpoints #2794 PatStLouis
    • Add cached copy of BBS v1 context #2749 andrewwhitehead
    • Update BBS+ context to bypass redirections #2739 swcurran
    • feat: make VcLdpManager pluggable #2706 dbluhm
    • fix: minor type hint corrections for VcLdpManager #2704 dbluhm
    • Remove if condition which checks if the credential.type array is equal to 1 #2670 PatStLouis
    • Feature Suggestion: Include a Reason When Constraints Cannot Be Applied #2630 Ennovate-com
    • refactor: make ldp_vc logic reusable #2533 dbluhm
  • Credential Exchange (Issue, Present) Updates

    • Allow for crids in event payload to be integers #2819 jamshale
    • Create revocation notification after list entry written to ledger #2812 jamshale
    • Remove exception on connectionless presentation problem report handler #2723 loneil
    • Ensure \"preserve_exchange_records\" flags are set. #2664 usingtechnology
    • Slight improvement to credx proof validation error message #2655 ianco
    • Add ConnectionProblemReport handler #2600 usingtechnology
  • Multitenancy Updates and Fixes

    • feature/per tenant settings #2790 amanji
    • Improve Per Tenant Logging: Fix issues around default log file path #2659 shaangill025
  • Other Fixes, Demo, DevContainer and Documentation Fixes

    • chore: propose official deprecations of a couple of features #2856 dbluhm
    • feat: external signature suite provider interface #2835 dbluhm
    • Update GHA so that broken image links work on docs site - without breaking them on GitHub #2852 swcurran
    • Minor updates to the documentation - links #2848 swcurran
    • Update to run_demo script to support Apple M1 CPUs #2843 swcurran
    • Add functionality for building and running agents separately #2845 sarthakvijayvergiya
    • Cleanup of docs #2831 swcurran
    • Create AnonCredsMethods.md #2832 swcurran
    • FIX: GHA update for doc publishing, fix doc file that was blanked #2820 swcurran
    • More updates to get docs publishing #2810 swcurran
    • Eliminate the double workflow event #2811 swcurran
    • Publish docs GHActions tweak #2806 swcurran
    • Update publish-docs to operate on main and on branches prefixed with docs-v #2804 swcurran
    • Add index.html redirector to gh-pages branch #2802 swcurran
    • Demo description of reuse in establishing a connection #2787 swcurran
    • Reorganize the ACA-Py Documentation Files #2765 swcurran
    • Tweaks to MD files to enable aca-py.org publishing #2771 swcurran
    • Update devcontainer documentation #2729 jamshale
    • Update the SupportedRFCs Document to be up to date #2722 swcurran
    • Fix incorrect Sphinx search library version reference #2716 swcurran
    • Update RTD requirements after security vulnerability recorded #2712 swcurran
    • Update legacy bcgovimages references. #2700 WadeBarnes
    • fix: link to raw content change from master to main #2663 Ennovate-com
    • fix: open-api generator script #2661 dbluhm
    • Update the ReadTheDocs config in case we do another 0.10.x release #2629 swcurran
  • Dependencies and Internal Updates

    • Add wallet.type config to /settings endpoint #2877 jamshale
    • chore(deps): Bump pillow from 10.2.0 to 10.3.0 dependencies python #2869 dependabot bot
    • Fix run_tests script #2866 ianco
    • fix: states for discovery record to emit webhook #2858 dbluhm
    • Increase promote did retries #2854 jamshale
    • chore(deps-dev): Bump black from 24.1.1 to 24.3.0 dependencies python #2847 dependabot bot
    • chore(deps): Bump the all-actions group with 1 update dependencies github_actions #2844 dependabot bot
    • patch for #2781: User Agent header in doc loader #2824 gmulhearn-anonyome
    • chore(deps): Bump jwcrypto from 1.5.4 to 1.5.6 dependencies python #2833 dependabot bot
    • chore(deps): Bump cryptography from 42.0.3 to 42.0.4 dependencies python #2805 dependabot bot
    • chore(deps): Bump the all-actions group with 3 updates dependencies github_actions #2815 dependabot bot
    • Change middleware registration order #2796 PatStLouis
    • Bump pyld version to 2.0.4 #2795 PatStLouis
    • Revert profile inject #2789 jamshale
    • Move emit events to profile and delay sending until after commit #2760 ianco
    • fix: partial revert of ConnRecord schema change 1.0.0 #2746 dbluhm
    • chore(deps): Bump aiohttp from 3.9.1 to 3.9.2 dependencies #2745 dependabot bot
    • bump pydid to v 0.4.3 #2737 PatStLouis
    • Fix subwallet record removal #2721 andrewwhitehead
    • chore(deps): Bump jinja2 from 3.1.2 to 3.1.3 dependencies #2707 dependabot bot
    • feat: inject profile #2705 dbluhm
    • Remove tiny-vim from being added to the container image to reduce reported vulnerabilities from scanning #2699 swcurran
    • chore(deps): Bump jwcrypto from 1.5.0 to 1.5.1 dependencies #2689 dependabot bot
    • Update dependencies #2686 andrewwhitehead
    • Fix: Change To Use Timezone Aware UTC datetime #2679 Ennovate-com
    • fix: update broken demo dependency #2638 mrkaurelius
    • Bump cryptography from 41.0.5 to 41.0.6 dependencies #2636 dependabot bot
    • Bump aiohttp from 3.8.6 to 3.9.0 dependencies #2635 dependabot bot
  • CI/CD, Testing, and Developer Tools/Productivity Updates

    • Fix deprecation warnings #2756 ff137
    • chore(deps): Bump the all-actions group with 10 updates dependencies #2784 dependabot bot
    • Add Dependabot configuration #2783 WadeBarnes
    • Implement B006 rule #2775 jamshale
    • ⬆️ Upgrade pytest to 8.0 #2773 ff137
    • ⬆️ Update pytest-asyncio to 0.23.4 #2764 ff137
    • Remove asynctest dependency and fix \"coroutine not awaited\" warnings #2755 ff137
    • Fix pytest collection errors when anoncreds package is not installed #2750 andrewwhitehead
    • chore: pin black version #2747 dbluhm
    • Tweak scope of GHA integration tests #2662 ianco
    • Update snyk workflow to execute on Pull Request #2658 usingtechnology
  • Release management pull requests

    • 0.12.0 #2882 swcurran
    • 0.12.0rc3 #2878 swcurran
    • 0.12.0rc2 #2825 swcurran
    • 0.12.0rc1 #2800 swcurran
    • 0.12.0rc1 #2799 swcurran
    • 0.12.0rc0 #2732 swcurran
"},{"location":"CHANGELOG/#0110","title":"0.11.0","text":""},{"location":"CHANGELOG/#november-24-2023","title":"November 24, 2023","text":"

Release 0.11.0 is a relatively large release of new features, fixes, and internal updates. 0.11.0 is planned to be the last significant update before we begin the transition to using the ledger agnostic AnonCreds Rust in a release that is expected to bring Admin/Controller API changes. We plan to do patches to the 0.11.x branch while the transition is made to using [Anoncreds Rust].

An important addition to ACA-Py is support for signing and verifying SD-JWT verifiable credentials. We expect this to be the first of the changes to extend ACA-Py to support OpenID4VC protocols.

This release and Release 0.10.5 contain a high priority fix to correct an issue with the handling of the JSON-LD presentation verifications, where the status of the verification of the presentation.proof in the Verifiable Presentation was not included when determining the verification value (true or false) of the overall presentation. A forthcoming security advisory will cover the details. Anyone using JSON-LD presentations is recommended to upgrade to one of these versions of ACA-Py as soon as possible.

In the CI/CD realm, substantial changes were applied to the codebase in switching from:

  • pip to Poetry for packaging and dependency management,
  • Flake8 to Ruff for linting,
  • asynctest to IsolatedAsyncioTestCase and AsyncMock objects now included in Python's builtin unittest package for unit testing.

These are necessary and important modernization changes, with the latter two triggering many (largely mechanical) changes to the codebase.

"},{"location":"CHANGELOG/#0110-breaking-changes","title":"0.11.0 Breaking Changes","text":"

In addition to the impacts of the change for developers in switching from pip to Poetry, the only significant breaking change is the (overdue) transition of ACA-Py to always use the new DIDComm message type prefix, changing the DID Message prefix from the old hardcoded did:sov:BzCbsNYhMrjHiqZDTUASHg;spec to the new hardcoded https://didcomm.org value, and using the new DIDComm MIME type in place of the old. The vast majority (all?) of Aries deployments have long since been updated to accept both values, so this change just forces the use of the newer value in sending messages. In updating this, we retained the old configuration parameters most deployments were using (--emit-new-didcomm-prefix and --emit-new-didcomm-mime-type) but updated the code to set the configuration parameters to true even if the parameters were not set. See [PR #2517].

The JSON-LD verifiable credential handling of JSON-LD contexts has been updated to pre-load the base contexts into the repository code so they are not fetched at run time. This is a security best practice for JSON-LD, and prevents errors in production when, from time to time, the JSON-LD contexts are unavailable because of outages of the web servers where they are hosted. See [PR #2587].

A Problem Report message is now sent when a request for a credential is received and there is no associated Credential Exchange Record. This may happen, for example, if an issuer decides to delete a Credential Exchange Record that has not been answered for a long time, and the holder responds after the deletion. See [PR #2577].

"},{"location":"CHANGELOG/#0110-categorized-list-of-pull-requests","title":"0.11.0 Categorized List of Pull Requests","text":"
  • DIDComm Messaging Improvements/Fixes
    • Change arg_parse to always set --emit-new-didcomm-prefix and --emit-new-didcomm-mime-type to true #2517 swcurran
  • DID Handling and Connection Establishment Updates/Fixes
    • Goal and Goal Code in invitation URL. #2591 usingtechnology
    • refactor: use did-peer-2 instead of peerdid #2561 dbluhm
    • Fix: Problem Report Before Exchange Established #2519 Ennovate-com
    • fix: issue #2434: Change DIDExchange States to Match rfc160 #2461 anwalker293
  • DID Peer and DID Resolver Updates and Fixes
    • fix: unique ids for services in legacy peer #2476 dbluhm
    • peer did ⅔ resolution enhancement #2472 Jsyro
    • feat: add timeout to did resolver resolve method #2464 dbluhm
  • ACA-Py as a DIDComm Mediator Updates and Fixes
    • fix: routing behind mediator #2536 dbluhm
    • fix: mediation routing keys as did key #2516 dbluhm
    • refactor: drop mediator_terms and recipient_terms #2515 dbluhm
  • Fixes to Upgrades
    • 🐛 fix wallet_update when only extra_settings requested #2612 ff137
  • Hyperledger Indy ledger related updates and fixes
    • fix: taa rough timestamp timezone from datetime #2554 dbluhm
    • 🎨 clarify LedgerError message when TAA is required and not accepted #2545 ff137
    • Feat: Upgrade from tags and fix issue with legacy IssuerRevRegRecords [<=v0.5.2] #2486 shaangill025
    • Bugfix: Issue with write ledger pool when performing Accumulator sync #2480 shaangill025
    • Issue #2419 InvalidClientTaaAcceptanceError time too precise error if container timezone is not UTC #2420 Ennovate-com
  • OpenID4VC / SD-JWT Updates
    • chore: point to official sd-jwt lib release #2573 dbluhm
    • Feat/sd jwt implementation #2487 cjhowland
  • JSON-LD Verifiable Credential/Presentation updates
    • fix: report presentation result #2615 dbluhm
    • Fix Issue #2589 TypeError When There Are No Nested Requirements #2590 Ennovate-com
    • feat: use a local static cache for commonly used contexts #2587 chumbert
    • Issue #2488 KeyError raised when Subject ID is not a URI #2490 Ennovate-com
  • Credential Exchange (Issue, Present) Updates
    • Default connection_id to None to account for Connectionless Proofs #2605 popkinj
    • Send Problem report when CredEx not found #2577 usingtechnology
    • fix: clean up requests and invites #2560 dbluhm
  • Multitenancy Updates and Fixes
    • Feat: Support subwallet upgradation using the Upgrade command #2529 shaangill025
  • Other Fixes, Demo, DevContainer and Documentation Fixes
    • fix: wallet type help text out of date #2618 dbluhm
    • fix: typos #2614 omahs
    • black formatter extension configuration update #2603 usingtechnology
    • Update Devcontainer pytest ruff black #2602 usingtechnology
    • Issue 2570 devcontainer ruff, black and pytest #2595 usingtechnology
    • chore: correct type hints on base record #2604 dbluhm
    • Playground needs optionally external network #2564 usingtechnology
    • Issue 2555 playground scripts readme #2563 usingtechnology
    • Update demo/playground scripts #2562 usingtechnology
    • Update .readthedocs.yaml #2548 swcurran
    • Update .readthedocs.yaml #2547 swcurran
    • fix: correct minor typos #2544 Ennovate-com
    • Update steps for Manually Creating Revocation Registries #2491 WadeBarnes
  • Dependencies and Internal Updates
    • chore: bump pydid version #2626 dbluhm
    • chore: dependency updates #2565 dbluhm
    • chore(deps): Bump urllib3 from 2.0.6 to 2.0.7 dependencies #2552 dependabot bot
    • chore(deps): Bump urllib3 from 2.0.6 to 2.0.7 in /demo/playground/scripts dependencies #2551 dependabot bot
    • chore: update pydid #2527 dbluhm
    • chore(deps): Bump urllib3 from 2.0.5 to 2.0.6 dependencies #2525 dependabot bot
    • chore(deps): Bump urllib3 from 2.0.2 to 2.0.6 in /demo/playground/scripts dependencies #2524 dependabot bot
    • Avoid multiple open wallet connections #2521 andrewwhitehead
    • Remove unused dependencies #2510 andrewwhitehead
    • Use correct rust log level in dockerfiles #2499 loneil
    • fix: run tests script copying local env #2495 dbluhm
    • Update devcontainer to read version from aries-cloudagent package #2483 usingtechnology
    • Update Python image version to 3.9.18 #2456 WadeBarnes
    • Remove old routing protocol code #2466 dbluhm
  • CI/CD, Testing, and Developer Tools/Productivity Updates
    • fix: drop asynctest 0.11.0 #2566 dbluhm
    • Dockerfile.indy - Include aries_cloudagent code into build #2584 usingtechnology
    • fix: version should be set by pyproject.toml #2471 dbluhm
    • chore: add black back in as a dev dep #2465 dbluhm
    • Swap out flake8 in favor of Ruff #2438 dbluhm
  • Release management pull requests
    • 0.11.0 #2627 swcurran
    • 0.11.0rc2 #2613 swcurran
    • 0.11.0-rc1 #2576 swcurran
    • 0.11.0-rc0 #2575 swcurran
"},{"location":"CHANGELOG/#2289-migrate-to-poetry-2436-gavinok","title":"2289 Migrate to Poetry #2436 Gavinok","text":""},{"location":"CHANGELOG/#0105","title":"0.10.5","text":""},{"location":"CHANGELOG/#november-21-2023","title":"November 21, 2023","text":"

Release 0.10.5 is a high priority patch release to correct an issue with the handling of the JSON-LD presentation verifications, where the status of the verification of the presentation.proof in the Verifiable Presentation was not included when determining the verification value (true or false) of the overall presentation. A forthcoming security advisory will cover the details.

Anyone using JSON-LD presentations is recommended to upgrade to this version of ACA-Py as soon as possible.

"},{"location":"CHANGELOG/#0105-categorized-list-of-pull-requests","title":"0.10.5 Categorized List of Pull Requests","text":"
  • JSON-LD Credential Exchange (Issue, Present) Updates
    • fix(backport): report presentation result #2622 dbluhm
  • Release management pull requests
    • 0.10.5 #2623 swcurran
"},{"location":"CHANGELOG/#0104","title":"0.10.4","text":""},{"location":"CHANGELOG/#october-9-2023","title":"October 9, 2023","text":"

Release 0.10.4 is a patch release to correct an issue with the handling of did:key routing keys in some mediator scenarios, notably with the use of [Aries Framework Kotlin]. See the details in the PR and [Issue #2531 Routing for agents behind a aca-py based mediator is broken].

Thanks to codespree for raising the issue and providing the fix.


"},{"location":"CHANGELOG/#0104-categorized-list-of-pull-requests","title":"0.10.4 Categorized List of Pull Requests","text":"
  • DID Handling and Connection Establishment Updates/Fixes
    • fix: routing behind mediator #2536 dbluhm
  • Release management pull requests
    • 0.10.4 #2539 swcurran
"},{"location":"CHANGELOG/#0103","title":"0.10.3","text":""},{"location":"CHANGELOG/#september-29-2023","title":"September 29, 2023","text":"

Release 0.10.3 is a patch release to add an upgrade process for very old versions of Aries Cloud Agent Python (circa 0.5.2). If you have a long-time deployment of an issuer that uses revocation, this release could correct internal data (tags in secure storage) related to revocation registries. Details about the triggering problem can be found in [Issue #2485].

The upgrade is applied by running the following command for the ACA-Py instance to be upgraded:

./scripts/run_docker upgrade --force-upgrade --named-tag fix_issue_rev_reg

"},{"location":"CHANGELOG/#0103-categorized-list-of-pull-requests","title":"0.10.3 Categorized List of Pull Requests","text":"
  • Credential Exchange (Issue, Present) Updates
    • Feat: Upgrade from tags and fix issue with legacy IssuerRevRegRecords [<=v0.5.2] #2486 shaangill025
  • Release management pull requests
    • 0.10.3 #2522 swcurran
"},{"location":"CHANGELOG/#0102","title":"0.10.2","text":""},{"location":"CHANGELOG/#september-22-2023","title":"September 22, 2023","text":"

Release 0.10.2 is a patch release for 0.10.1 that addresses three specific regressions found in deploying Release 0.10.1. The regressions are to fix:

  • An ACA-Py instance upgraded to 0.10.1 that had an existing connection to another Aries agent where the connection has both an http and ws (websocket) service endpoint with the same ID cannot message that agent. A scenario is an ACA-Py issuer connecting to an Endorser with both http and ws service endpoints. The updates made in 0.10.1 to improve ACA-Py DID resolution did not account for this scenario and needed a tweak to work ([Issue #2474], [PR #2475]).
  • The \"fix revocation registry\" endpoint used to fix scenarios an Issuer's local revocation registry state is out of sync with the ledger was broken by some code being added to support a single ACA-Py instance writing to different ledgers ([Issue #2477], [PR #2480]).
  • The version of the PyDID library we were using did not handle some unexpected DID resolution use cases encountered with mediators. The PyDID library version dependency was updated in [PR #2500].
"},{"location":"CHANGELOG/#0102-categorized-list-of-pull-requests","title":"0.10.2 Categorized List of Pull Requests","text":"
  • DID Handling and Connection Establishment Updates/Fixes
    • LegacyPeerDIDResolver: erroneously assigning same ID to multiple services #2475 dbluhm
    • fix: update pydid #2500 dbluhm
  • Credential Exchange (Issue, Present) Updates
    • Bugfix: Issue with write ledger pool when performing Accumulator sync #2480 shaangill025
  • Release management pull requests
    • 0.10.2 #2509 swcurran
    • 0.10.2-rc0 #2484 swcurran
    • 0.10.2 Patch Release - fix issue #2475, #2477 #2482 shaangill025
"},{"location":"CHANGELOG/#0101","title":"0.10.1","text":""},{"location":"CHANGELOG/#august-29-2023","title":"August 29, 2023","text":"

Release 0.10.1 contains a breaking change, an important fix for a regression introduced in 0.8.2 that impacts certain deployments, and a number of fixes and updates. Included in the updates is a significant internal reorganization of the DID and connection management code that was done to enable more flexible uses of different DID Methods, such as being able to use did:web DIDs for DIDComm messaging connections. The work also paves the way for coming updates related to support for did:peer DIDs for DIDComm. For details on the change see [PR #2409], which includes some of the best pull request documentation ever created.

Release 0.10.1 has the same contents as 0.10.0. An error on PyPi prevented the 0.10.0 release from being properly uploaded because of an existing file of the same name. We immediately released 0.10.1 as a replacement.

The regression fix is for ACA-Py deployments that use multi-use invitations but do NOT use the --auto-accept-connection-requests flag/processing. A change in 0.8.2 (PR [#2223]) suppressed an extra webhook event firing during the processing after receiving a connection request. An unexpected side effect of that change was that the subsequent webhook event also did not fire, and as a result, the controller did not get any event signalling a new connection request had been received via the multi-use invitation. The update in this release ensures the proper event fires and the controller receives the webhook.

See below for the breaking changes and a categorized list of the pull requests included in this release.

Updates in the CI/CD area include adding the publishing of a nightly container image that includes any changes in the main branch since the last nightly was published. This allows getting the \"latest and greatest\" code via a container image vs. having to install ACA-Py from the repository. In addition, Snyk scanning was added to the CI pipeline, and Indy SDK tests were removed from the pipeline.

"},{"location":"CHANGELOG/#0101-breaking-changes","title":"0.10.1 Breaking Changes","text":"

[#2352] is a breaking change related to the storage of presentation exchange records in ACA-Py. In previous releases, presentation exchange protocol state data records were retained in ACA-Py secure storage after the completion of protocol instances. With this release the default behavior changes to deleting those records, unless the --preserve-exchange-records flag is set in the configuration. This extends the use of that flag, which previously applied only to issue credential records. The extension matches the initial intention of the flag: that it cover both issue credential and present proof exchanges. The best practice for ACA-Py is that the controller (business logic) store any long-lasting business information needed for the service that is using the Aries agent, and ACA-Py storage should be used only for data necessary for the operation of the agent. In particular, protocol state data should be held in ACA-Py only as long as the protocol is running (as it is needed by ACA-Py), and once a protocol instance completes, the controller should extract and store the business information from the protocol state before it is deleted from ACA-Py storage.
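
As an illustration of that best practice, a controller might persist presentation results from the webhook stream before the exchange record is deleted. This is a sketch; the topic and field names follow the present-proof v2.0 webhooks, and the database schema is hypothetical.

```python
import json
import sqlite3

db = sqlite3.connect("business_records.db")
db.execute(
    "CREATE TABLE IF NOT EXISTS presentations "
    "(pres_ex_id TEXT PRIMARY KEY, verified TEXT, payload TEXT)"
)


def on_present_proof_webhook(payload: dict) -> None:
    """Invoked by the controller's webhook route for the present_proof_v2_0 topic."""
    if payload.get("state") == "done":
        # Capture the business data now; ACA-Py will delete the exchange
        # record on completion unless --preserve-exchange-records is set.
        db.execute(
            "INSERT OR REPLACE INTO presentations VALUES (?, ?, ?)",
            (payload["pres_ex_id"], payload.get("verified"), json.dumps(payload)),
        )
        db.commit()
```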

"},{"location":"CHANGELOG/#0100-categorized-list-of-pull-requests","title":"0.10.0 Categorized List of Pull Requests","text":"
  • DIDComm Messaging Improvements/Fixes
    • fix: outbound send status missing on path #2393 dbluhm
    • fix: keylist update response race condition #2391 dbluhm
  • DID Handling and Connection Establishment Updates/Fixes
    • fix: handle stored afgo and findy docs in corrections #2450 dbluhm
    • chore: relax connections filter DID format #2451 chumbert
    • fix: ignore duplicate record errors on add key #2447 dbluhm
    • fix: more diddoc corrections #2446 dbluhm
    • feat: resolve connection targets and permit connecting via public DID #2409 dbluhm
    • feat: add legacy peer did resolver #2404 dbluhm
    • Fix: Ensure event/webhook is emitted for multi-use invitations #2413 esune
    • feat: add DID Exchange specific problem reports and reject endpoint #2394 dbluhm
    • fix: additional tweaks for did:web and other methods as public DIDs #2392 dbluhm
    • Fix empty ServiceDecorator in OobRecord causing 422 Unprocessable Entity Error #2362 ff137
    • Feat: Added support for Ed25519Signature2020 signature type and Ed25519VerificationKey2020 #2241 dkulic
  • Upgrading to Aries Askar Updates
    • Add symlink to /home/indy/.indy_client for backwards compatibility #2443 esune
  • Credential Exchange (Issue, Present) Updates
    • fix: ensure request matches offer in JSON-LD exchanges, if sent #2341 dbluhm
    • BREAKING Extend --preserve-exchange-records to include Presentation Exchange. #2352 usingtechnology
    • Correct the response type in send_rev_reg_def #2355 ff137
  • Multitenancy Updates and Fixes
    • Multitenant check endorser_info before saving #2395 usingtechnology
    • Feat: Support Selectable Write Ledger #2339 shaangill025
  • Other Fixes, Demo, and Documentation Fixes
    • Redis Plugins [redis_cache & redis_queue] documentation and docker related updates #1937 shaangill025
    • Chore: fix marshmallow warnings #2398 ff137
    • Upgrade pre-commit and flake8 dependencies; fix flake8 warnings #2399 ff137
    • Corrected typo on mediator invitation configuration argument #2365 jorgefl0
    • Add workaround for ARM based macs #2313 finnformica
  • Dependencies and Internal Updates
    • chore(deps): Bump certifi from 2023.5.7 to 2023.7.22 in /demo/playground/scripts dependencies #2354 dependabot bot
  • CI/CD and Developer Tools/Productivity Updates
    • Fix for nightly tests failing on Python 3.10 #2435 Gavinok
    • Don't run Snyk on forks #2429 ryjones
    • Issue #2250 Nightly publish workflow #2421 Gavinok
    • Enable Snyk scanning #2418 ryjones
    • Remove Indy tests from workflows #2415 dbluhm
  • Release management pull requests
    • 0.10.1 #2454 swcurran
    • 0.10.0 #2452 swcurran
    • 0.10.0-rc2 #2448 swcurran
    • 0.10.0-rc1 #2442 swcurran
    • 0.10.0-rc0 #2414 swcurran
"},{"location":"CHANGELOG/#0100","title":"0.10.0","text":""},{"location":"CHANGELOG/#august-29-2023_1","title":"August 29, 2023","text":"

Release 0.10.1 has the same contents as 0.10.0. An error on PyPi prevented the 0.10.0 release from being properly uploaded because of an existing file of the same name. We immediately released 0.10.1 as a replacement.

"},{"location":"CHANGELOG/#090","title":"0.9.0","text":""},{"location":"CHANGELOG/#july-24-2023","title":"July 24, 2023","text":"

Release 0.9.0 is an important upgrade that changes (PR [#2302]) the dependency on the now archived Hyperledger Ursa project to its updated, improved replacement, AnonCreds CL-Signatures. This important change is ONLY available when using Aries Askar as the wallet type, which brings in both [Indy VDR] and the CL-Signatures via the latest version of CredX from the indy-shared-rs repository. The update is NOT available to those that are using the Indy SDK. All new deployments of ACA-Py SHOULD use Aries Askar. Further, we strongly recommend that all deployments using the Indy SDK with ACA-Py upgrade their installation to use Aries Askar and the related components using the migration scripts available. An Indy SDK to Askar migration document was added to the aca-py.org documentation site, and a deprecation warning was added to the ACA-Py startup.

The second big change in this release is that we have upgraded the primary Python version from 3.6 to 3.9 (PR [#2247]). In this case, primary means that Python 3.9 is used to run the unit and integration tests on all Pull Requests. We also do nightly runs of the main branch using Python 3.10. As of this release we have dropped Python 3.6, 3.7 and 3.8, and introduced new dependencies that are not supported in those versions of Python. For those that use the published ACA-Py container images, the upgrade should be easily handled. If you are pulling ACA-Py into your own image, or a non-containerized environment, this is a breaking change that you will need to address.

Please see the next section for all breaking changes, and the subsequent section for a categorized list of all pull requests in this release.

"},{"location":"CHANGELOG/#breaking-changes","title":"Breaking Changes","text":"

In addition to the breaking Python 3.6 to 3.9 upgrade, there are two other breaking changes that may impact some deployments.

[#2034] allows for additional flexibility in using public DIDs in invitations, and adds a restriction that \"implicit\" invitations must be proactively enabled using a flag (--requests-through-public-did). Previously, such requests would always be accepted if --auto-accept was enabled, which could lead to unexpected connections being established.

[#2170] is a change to improve message handling in the face of delivery errors when using a persistent queue implementation such as the ACA-Py Redis Plugin. If you are using the Redis plugin, you MUST upgrade to Redis Plugin Release 0.1.0 in conjunction with deploying this ACA-Py release. For those using their own persistent queue solution, see the PR [#2170] comments for information about changes you might need to make to your deployment.

"},{"location":"CHANGELOG/#categorized-list-of-pull-requests","title":"Categorized List of Pull Requests","text":"
  • DIDComm Messaging Improvements/Fixes
    • BREAKING: feat: get queued outbound message in transport handle message #2170 dbluhm
  • DID Handling and Connection Establishment Updates/Fixes
    • Allow any did to be public #2295 mkempa
    • Feat: Added support for Ed25519Signature2020 signature type and Ed25519VerificationKey2020 #2241 dkulic
    • Add Goal and Goal Code to OOB and DIDex Request #2294 usingtechnology
    • Fix routing in set public did #2288 mkempa
    • Fix: Do not replace public verkey on mediator #2269 mkempa
    • BREAKING: Allow multi-use public invites and public invites with metadata #2034 mepeltier
    • fix: public did mediator routing keys as did keys #1977 dbluhm
  • Credential Exchange (Issue, Present) Updates
    • Add revocation registry rotate to faber demo #2333 usingtechnology
    • Update to indy-credx 1.0 #2302 andrewwhitehead
    • feat(anoncreds): Implement automated setup of revocation #2292 dbluhm
    • fix: schema class can set Meta.unknown #1885 dbluhm
    • Respect auto-verify-presentation flag in present proof v1 and v2 #2097 dbluhm
    • Feature: JWT Sign and Verify Admin Endpoints with DID Support #2300 burdettadam
  • Multitenancy Updates and Fixes
    • Fix: Track endorser and author roles in per-tenant settings #2331 shaangill025
    • Added base wallet provisioning details to Multitenancy.md #2328 esune
  • Other Fixes, Demo, and Documentation Fixes
    • Add more context to the ACA-Py Revocation handling documentation #2343 swcurran
    • Document the Indy SDK to Askar Migration process #2340 swcurran
    • Add revocation registry rotate to faber demo #2333 usingtechnology
    • chore: add indy deprecation warnings #2332 dbluhm
    • Fix alice/faber demo execution #2305 andrewwhitehead
    • Add .indy_client folder to Askar only image. #2308 WadeBarnes
    • Add build step for indy-base image in run_demo #2299 usingtechnology
    • Webhook over websocket clarification #2287 dbluhm
  • ACA-Py Deployment Upgrade Changes
    • Add Explicit/Offline marking mechanism for Upgrade #2204 shaangill025
  • Plugin Handling Updates
    • Feature: Add the ability to deny specific plugins from loading 0.7.4 #1737 frostyfrog
  • Dependencies and Internal Updates
    • upgrade pyjwt to latest; introduce leeway to jwt.decode #2335 ff137
    • upgrade requests to latest #2336 ff137
    • upgrade packaging to latest #2334 ff137
    • chore: update PyYAML #2329 dbluhm
    • chore(deps): Bump aiohttp from 3.8.4 to 3.8.5 in /demo/playground/scripts dependencies #2325 dependabot bot
    • ⬆️ upgrade marshmallow to latest #2322 ff137
    • fix: use python 3.9 in run_docker #2291 dbluhm
    • BREAKING!: drop python 3.6 support #2247 dbluhm
    • Minor revisions to the README.md and DevReadMe.md #2272 swcurran
  • ACA-Py Administrative Updates
    • Updating Maintainers list to be accurate and using the TOC format #2258 swcurran
  • CI/CD and Developer Tools/Productivity Updates
    • Cancel in-progress workflows when PR is updated #2303 andrewwhitehead
    • ci: add gha for pr-tests #2058 dbluhm
    • Add devcontainer for ACA-Py #2267 usingtechnology
    • Docker images and GHA for publishing images help wanted #2076 dbluhm
    • ci: test additional versions of python nightly #2059 dbluhm
  • Release management pull requests
    • 0.9.0 #2344 swcurran
    • 0.9.0-rc0 #2338 swcurran
"},{"location":"CHANGELOG/#082","title":"0.8.2","text":""},{"location":"CHANGELOG/#june-29-2023","title":"June 29, 2023","text":"

Release 0.8.2 contains a number of minor fixes and updates to ACA-Py, including the correction of a regression in Release 0.8.0 related to the use of plugins (see [#2255]). Highlights include making it easier to use tracing in a development environment to collect detailed performance information about what is going on within ACA-Py.

This release pulls in indy-shared-rs Release 3.3 which fixes a serious issue in AnonCreds verification, as described in issue [#2036], where the verification of a presentation with multiple revocable credentials fails when using Aries Askar and the other shared components. This issue occurs only when using Aries Askar and indy-credx Release 3.3.

An important new feature in this release is the ability to set some instance configuration settings at the tenant level of a multi-tenant deployment. See PR [#2233].

There are no breaking changes in this release.

"},{"location":"CHANGELOG/#categorized-list-of-pull-requests_1","title":"Categorized List of Pull Requests","text":"
  • Connections Fixes/Updates
    • Resolve definitions.py fix to fix backwards compatibility break in plugins #2255 usingtechnology
    • Add support for JsonWebKey2020 for the connection invitations #2173 dkulic
    • fix: only cache completed connection targets #2240 dbluhm
    • Connection target should not be limited only to indy dids #2229 dkulic
    • Disable webhook trigger on initial response to multi-use connection invitation #2223 esune
  • Credential Exchange (Issue, Present) Updates
    • Pass document loader to jsonld.expand #2175 andrewwhitehead
  • Multi-tenancy fixes/updates
    • Allow Configuration Settings on a per-tenant basis #2233 shaangill025
    • stand up multiple agents (single and multi) for local development and testing #2230 usingtechnology
    • Multi-tenant self-managed mediation verkey lookup #2232 usingtechnology
    • fix: route multitenant connectionless oob invitation #2243 TimoGlastra
    • Fix multitenant/mediation in demo #2075 ianco
  • Other Bug and Documentation Fixes
    • Assign ~thread.thid with thread_id value #2261 usingtechnology
    • Fix: Do not replace public verkey on mediator #2269 mkempa
    • Provide an optional Profile to the verification key strategy #2265 yvgny
    • refactor: Extract verification method ID generation to a separate class #2235 yvgny
    • Create .readthedocs.yaml file #2268 swcurran
    • feat(did creation route): reject unregistered did methods #2262 chumbert
    • ./run_demo performance -c 1 --mediation --timing --trace-log #2245 usingtechnology
    • Fix formatting and grammatical errors in different readme's #2222 ff137
    • Fix broken link in README #2221 ff137
    • fix: run only on main, forks ok #2166 anwalker293
    • Update Alice Wants a JSON-LD Credential to fix invocation #2219 swcurran
  • Dependencies and Internal Updates
    • Bump requests from 2.30.0 to 2.31.0 in /demo/playground/scripts dependencies #2238 dependabot bot
    • Upgrade codegen tools in scripts/generate-open-api-spec and publish Swagger 2.0 and OpenAPI 3.0 specs #2246 ff137
  • ACA-Py Administrative Updates
    • Propose adding Jason Sherman usingtechnology as a Maintainer #2263 swcurran
    • Updating Maintainers list to be accurate and using the TOC format #2258 swcurran
  • Message Tracing/Timing Updates
    • Add updated ELK stack for demos. #2236 usingtechnology
  • Release management pull requests
    • 0.8.2 #2285 swcurran
    • 0.8.2-rc2 #2284 swcurran
    • 0.8.2-rc1 #2282 swcurran
    • 0.8.2-rc0 #2260 swcurran
"},{"location":"CHANGELOG/#081","title":"0.8.1","text":""},{"location":"CHANGELOG/#april-5-2023","title":"April 5, 2023","text":"

Version 0.8.1 is an urgent update to Release 0.8.0 to address an inability to execute the upgrade command. The upgrade command is needed for 0.8.0 Pull Request [#2116] - \"UPGRADE: Fix multi-use invitation performance\", which is useful for (at least) deployments of ACA-Py as a mediator. In this release, the upgrade process has been revamped and documented in Upgrading ACA-Py.

Key points about upgrading for those with production, pre-0.8.1 ACA-Py deployments:

  • Upgrades now happen automatically on startup, when needed.
  • The version of the last executed upgrade, even if it is a \"no change\" upgrade, is put into secure storage and is used to detect when future upgrades are needed.
    • Upgrades are needed when the running version is greater than the version in secure storage (see the sketch following this list).
  • If you have an existing, pre-0.8.1 deployment with many connection records, there may be a delay on startup, as an upgrade will run that loads and saves every connection record, updating the data in each record in the process.
    • A mechanism is to be added (see Issue #2201) to prevent an upgrade from running automatically when it should instead be run explicitly via the upgrade command. To date, there has been no need for this feature.
  • See the Upgrading ACA-Py document for more details.
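
The version check itself is straightforward. A minimal sketch of the comparison logic described above (illustrative only, not ACA-Py's actual implementation):

  from packaging.version import Version

  def upgrade_needed(running_version: str, stored_version: str) -> bool:
      # An upgrade runs when the version now running is newer than the
      # version recorded in secure storage by the last executed upgrade.
      return Version(running_version) > Version(stored_version)

  # upgrade_needed('0.8.1', '0.8.0') -> True; upgrade_needed('0.8.1', '0.8.1') -> False
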
"},{"location":"CHANGELOG/#postgres-support-with-aries-askar","title":"Postgres Support with Aries Askar","text":"

Recent changes to Aries Askar have resulted in Askar supporting Postgres version 11 and greater. If you are on Postgres 10 or earlier and want to upgrade to use Askar, you must migrate your database to Postgres 11 or later.

We have also noted that in some container orchestration environments, such as Red Hat's OpenShift and possibly other Kubernetes distributions, Askar using Postgres versions greater than 14 does not install correctly. Please monitor [Issue #2199] for an update to this limitation. We have found that Postgres 15 does install correctly in other environments (such as in docker compose setups).
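
For reference, an Askar wallet on Postgres is configured with the standard wallet storage startup options. A minimal provisioning sketch (all names, hosts, and credentials below are placeholders):

  import json
  import subprocess

  # Provision an Askar wallet stored in Postgres (placeholder values).
  subprocess.run(
      [
          'aca-py', 'provision',
          '--wallet-type', 'askar',
          '--wallet-name', 'demo_wallet',
          '--wallet-key', 'insecure-demo-key',
          '--wallet-storage-type', 'postgres_storage',
          '--wallet-storage-config', json.dumps({'url': 'postgres-host:5432'}),
          '--wallet-storage-creds', json.dumps({
              'account': 'acapy',
              'password': 'secret',
              'admin_account': 'postgres',
              'admin_password': 'secret',
          }),
      ],
      check=True,
  )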

"},{"location":"CHANGELOG/#categorized-list-of-pull-requests_2","title":"Categorized List of Pull Requests","text":"
  • Fixes for the upgrade Command
    • Change upgrade definition file entry from 0.8.0 to 0.8.1 #2203 swcurran
    • Add Upgrading ACA-Py document #2200 swcurran
    • Fix: Indy WalletAlreadyOpenedError during upgrade process #2196 shaangill025
    • Fix: Resolve Upgrade Config file in Container #2193 shaangill025
    • Update and automate ACA-Py upgrade process #2185 shaangill025
    • Adds the upgrade command YML file to the PyPi Release #2179 swcurran
  • Test and Documentation
    • 3.7 and 3.10 unittests fix #2187 Jsyro
    • Doc update and some test scripts #2189 ianco
    • Create UnitTests.md #2183 swcurran
    • Add link to recorded session about the ACA-Py Integration tests #2184 swcurran
  • Release management pull requests
    • 0.8.1 #2207 swcurran
    • 0.8.1-rc2 #2198 swcurran
    • 0.8.1-rc1 #2194 swcurran
    • 0.8.1-rc0 #2190 swcurran
"},{"location":"CHANGELOG/#080","title":"0.8.0","text":""},{"location":"CHANGELOG/#march-14-2023","title":"March 14, 2023","text":"

0.8.0 is a breaking-change release that contains all updates since release 0.7.5. It extends the previously tagged 1.0.0-rc1 release because it is not clear when the 1.0.0 release will be finalized. Many of the PRs in this release were previously included in the 1.0.0-rc1 release. The categorized list of PRs separates those that are new from those in the 1.0.0-rc1 release candidate.

There are not a lot of new Aries Framework features in this release, as the focus has been on cleanup and optimization. The biggest addition is the inclusion in ACA-Py of a universal resolver interface, allowing an instance to have both local resolvers for some DID Methods and a call out to an external universal resolver for other DID Methods. Another significant new capability is full support for Hyperledger Indy transaction endorsement for Authors and Endorsers. A new repo, aries-endorser-service, has been created; it is a pre-configured instance of ACA-Py for use as an Endorser service.

A recently completed feature that is outside of ACA-Py is a script to migrate existing ACA-Py storage from Indy SDK format to Aries Askar format. This enables existing deployments to switch to using the newer Aries Askar components. For details see the converter in the aries-acapy-tools repository.

"},{"location":"CHANGELOG/#container-publishing-updated","title":"Container Publishing Updated","text":"

With this release, a new automated process publishes container images in the Hyperledger container image repository. New images for the release are automatically published by the GitHub Actions workflows publish.yml and publish-indy.yml. The workflows are triggered when a release is tagged, so no manual action is needed. The images are published in the Hyperledger Package Repository under aries-cloudagent-python, and a link to the packages has been added to the repository's main page (under \"Packages\"). Additional information about the container image publication process can be found in the document Container Images and Github Actions.

The ACA-Py container images are based on Python 3.6 and 3.9 slim-bullseye images, and are designed to support linux/386 (x86), linux/amd64 (x64), and linux/arm64. However, for this release, the publication of multi-architecture containers is disabled. We are working to enable that by updating some dependencies that lack that capability. There are two flavors of image built for each Python version. One contains only the Indy/Aries shared libraries (Aries Askar, Indy VDR and Indy Shared RS), supporting only the use of --wallet-type askar. The other (labelled indy) contains the Indy/Aries shared libraries and the Indy SDK (considered deprecated). For new deployments, we recommend using the Python 3.9 Shared Library images. For existing deployments, we recommend migrating to those images.

Those currently using the container images published by BC Gov on Docker Hub should change to use those published to the Hyperledger Package Repository under aries-cloudagent-python.

"},{"location":"CHANGELOG/#breaking-changes-and-upgrades","title":"Breaking Changes and Upgrades","text":""},{"location":"CHANGELOG/#pr-2034-implicit-connections","title":"PR #2034 -- Implicit connections","text":"

The break impacts existing deployments that support implicit connections, those initiated by another agent using a Public DID for this instance instead of an explicit invitation. Such deployments need to add the configuration parameter --requests-through-public-did to continue to support that feature. The use case is that an ACA-Py instance publishes a public DID on a ledger with a DIDComm service in the DIDDoc. Other agents resolve that DID, and attempt to establish a connection with the ACA-Py instance using the service endpoint. This is called an \"implicit\" connection in RFC 0023 DID Exchange.
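
From the initiating side, a controller can start such an implicit connection by passing the other agent's public DID to the Admin API. A minimal sketch (the admin URL and the DID are placeholders):

  import requests

  ADMIN_URL = 'http://localhost:8021'  # assumption: local admin endpoint

  # Start a DID Exchange using the other agent's public DID, rather than
  # an explicit out-of-band invitation.
  resp = requests.post(
      f'{ADMIN_URL}/didexchange/create-request',
      params={'their_public_did': 'did:sov:PLACEHOLDER'},
  )
  resp.raise_for_status()
  print(resp.json()['connection_id'])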

"},{"location":"CHANGELOG/#pr-1913-unrevealed-attributes-in-presentations","title":"PR #1913 -- Unrevealed attributes in presentations","text":"

Updates the handling of \"unrevealed attributes\" during verification of AnonCreds presentations, allowing them to be used in a presentation, with additional data that can be checked for unrevealed attributes. As few implementations of Aries wallets support unrevealed attributes in an AnonCreds presentation, this is unlikely to impact any deployments.

"},{"location":"CHANGELOG/#pr-2145-update-webhook-message-to-terse-form-by-default-added-startup-flag-debug-webhooks-for-full-form","title":"PR #2145 - Update webhook message to terse form by default, added startup flag --debug-webhooks for full form","text":"

The default behavior in ACA-Py has been to keep the full text of all messages in the protocol state object, and to include the full protocol state object in the webhooks sent to the controller. When the messages include a very large object that is repeated across the messages, the webhook may become too big to be passed via HTTP. For example, issuing a credential with a photo as one of the claims may result in a number of copies of the photo in the protocol state object and hence, very large webhooks. This change reduces the size of the webhook message by eliminating redundant data in the protocol state of the \"Issue Credential\" message by default, and adds a new parameter to restore the old behavior.
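
On the controller side, webhooks arrive as HTTP POSTs to <webhook-url>/topic/{topic}/. A minimal receiver sketch using aiohttp (the port is an assumption):

  from aiohttp import web

  async def handle_webhook(request: web.Request) -> web.Response:
      topic = request.match_info['topic']
      payload = await request.json()
      # With the new terse default, expect less data here than the full
      # protocol state object; start ACA-Py with --debug-webhooks to get it all.
      print(f'webhook topic={topic} keys={list(payload)}')
      return web.Response(status=200)

  app = web.Application()
  app.add_routes([web.post('/topic/{topic}/', handle_webhook)])

  if __name__ == '__main__':
      web.run_app(app, port=8022)  # assumption: controller webhook port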

"},{"location":"CHANGELOG/#upgrade-pr-2116-upgrade-fix-multi-use-invitation-performance","title":"UPGRADE PR #2116 - UPGRADE: Fix multi-use invitation performance","text":"

The way multi-use invitations were handled in previous versions of ACA-Py caused performance to degrade over time. An update was made to add state to the tag names, eliminating the need to scan the tags when querying storage for the invitation.

If you are using multi-use invitations in your existing (pre-0.8.0) deployment of ACA-Py, you can run an upgrade to apply this change. To run the upgrade from previous versions, use the following command using the 0.8.0 version of ACA-Py, adding your wallet settings:

aca-py upgrade <other wallet config settings> --from-version=v0.7.5 --upgrade-config-path ./upgrade.yml
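
To illustrate why the change helps (a sketch only; the record type and tag names here are hypothetical, not ACA-Py's actual ones): once the needed state is in the record tags, the invitation can be found with an indexed tag query instead of loading and scanning every record:

  from aries_cloudagent.storage.base import BaseStorage

  async def find_invitation(storage: BaseStorage, invitation_msg_id: str):
      # Indexed lookup: the storage backend filters on tags directly,
      # so there is no full scan of all invitation records.
      return await storage.find_all_records(
          type_filter='oob_invitation',  # hypothetical record type
          tag_query={'invitation_msg_id': invitation_msg_id},  # hypothetical tag
      )
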

"},{"location":"CHANGELOG/#categorized-list-of-pull-requests_3","title":"Categorized List of Pull Requests","text":"
  • Verifiable credential, presentation and revocation handling updates

    • BREAKING: Update webhook message to terse form (default), added startup flag --debug-webhooks for full form #2145 victorlee0505
    • Add startup flag --light-weight-webhook to trim down outbound webhook payload #1941 victorlee0505
    • feat: add verification method issue-credentials-2.0/send endpoint #2135 chumbert
    • Respect auto-verify-presentation flag in present proof v1 and v2 #2097 dbluhm
    • Feature: enabled handling VPs (request, creation, verification) with different VCs #1956 (teanas)
    • fix: update issue-credential endpoint summaries #1997 (PeterStrob)
    • fix claim format designation in presentation submission #2013 (rmnre)
    • #2041 - Issue JSON-LD has invalid Admin API documentation #2046 (jfblier-amplitude)
    • Previously flagged in release 1.0.0-rc1
    • Refactor ledger correction code and insert into revocation error handling #1892 (ianco)
    • Indy ledger fixes and cleanups #1870 (andrewwhitehead)
    • Refactoring of revocation registry creation #1813 (andrewwhitehead)
    • Fix: the type of tails file path to string. #1925 (baegjae)
    • Pre-populate revoc_reg_id on IssuerRevRegRecord #1924 (andrewwhitehead)
    • Leave credentialStatus element in the LD credential #1921 (tsabolov)
    • BREAKING: Remove aca-py check for unrevealed revealed attrs on proof validation #1913 (ianco)
    • Send webhooks upon record/credential deletion #1906 (frostyfrog)
  • Out of Band (OOB) and DID Exchange / Connection Handling / Mediator

    • UPGRADE: Fix multi-use invitation performance #2116 reflectivedevelopment
    • fix: public did mediator routing keys as did keys #1977 (dbluhm)
    • Fix for mediator load testing race condition when scaling horizontally #2009 (ianco)
    • BREAKING: Allow multi-use public invites and public invites with metadata #2034 (mepeltier)
    • Do not reject OOB invitation with unknown handshake protocol(s) #2060 (andrewwhitehead)
    • fix: fix connection timing bug #2099 (reflectivedevelopment)
    • Previously flagged in release 1.0.0-rc1
    • Fix: --mediator-invitation with OOB invitation + cleanup #1970 (shaangill025)
    • include image_url in oob invitation #1966 (Zzocker)
    • feat: OOB v1.1 support #1962 (shaangill025)
    • Fix: OOB - Handling of minor versions #1940 (shaangill025)
    • fix: failed connectionless proof request on some case #1933 (kukgini)
    • fix: propagate endpoint from mediation record #1922 (cjhowland)
    • Feat/public did endpoints for agents behind mediators #1899 (cjhowland)
  • DID Registration and Resolution related updates

    • feat: allow marking non-SOV DIDs as public #2144 chumbert
    • fix: askar exception message always displaying null DID #2155 chumbert
    • feat: enable creation of DIDs for all registered methods #2067 (chumbert)
    • fix: create local DID return schema #2086 (chumbert)
    • feat: universal resolver - configurable authentication #2095 (chumbert)
    • Previously flagged in release 1.0.0-rc1
    • feat: add universal resolver #1866 (dbluhm)
    • fix: resolve dids following new endpoint rules #1863 (dbluhm)
    • fix: didx request cannot be accepted #1881 (rmnre)
    • did method & key type registry #1986 (burdettadam)
    • Fix/endpoint attrib structure #1934 (cjhowland)
    • Simple did registry #1920 (burdettadam)
    • Use did:key for recipient keys #1886 (frostyfrog)
  • Hyperledger Indy Endorser/Author Transaction Handling

    • Update some of the demo Readme and Endorser instructions #2122 swcurran
    • Special handling for the write ledger #2030 (ianco)
    • Previously flagged in release 1.0.0-rc1
    • Fix/txn job setting #1994 (ianco)
    • chore: fix ACAPY_PROMOTE-AUTHOR-DID flag #1978 (morrieinmaas)
    • Endorser write DID transaction #1938 (ianco)
    • Endorser doc updates and some bug fixes #1926 (ianco)
  • Admin API Additions

    • fix: response type on delete-tails-files endpoint #2133 chumbert
    • OpenAPI validation fixes #2127 loneil
    • Delete tail files #2103 ramreddychalla94
  • Startup Command Line / Environment / YAML Parameter Updates

    • Update webhook message to terse form (default), added startup flag --debug-webhooks for full form #2145 victorlee0505
    • Add startup flag --light-weight-webhook to trim down outbound webhook payload #1941 victorlee0505
    • Add missing --mediator-connections-invite cmd arg info to docs #2051 (matrixik)
    • Issue #2068 boolean flag change to support HEAD requests to default route #2077 (johnekent)
    • Previously flagged in release 1.0.0-rc1
    • Add seed command line parameter but use only if also an \"allow insecure seed\" parameter is set #1714 (DaevMithran)
  • Internal Aries framework data handling updates

    • fix: resolver api schema inconsistency #2112 (TimoGlastra)
    • fix: return if return route but no response #1853 (TimoGlastra)
    • Multi-ledger/Multi-tenant issues #2022 (ianco)
    • fix: Correct typo in model -- required spelled incorrectly #2031 (swcurran)
    • Code formatting #2053 (ianco)
    • Improved validation of record state attributes #2071 (rmnre)
    • Previously flagged in release 1.0.0-rc1
    • fix: update RouteManager methods use to pass profile as parameter #1902 (chumbert)
    • Allow fully qualified class names for profile managers #1880 (chumbert)
    • fix: unable to use askar with in memory db #1878 (dbluhm)
    • Enable manually triggering keylist updates during connection #1851 (dbluhm)
    • feat: make base wallet route access configurable #1836 (dbluhm)
    • feat: event and webhook on keylist update stored #1769 (dbluhm)
    • fix: Safely shutdown when root_profile uninitialized #1960 (frostyfrog)
    • feat: include connection ids in keylist update webhook #1914 (dbluhm)
    • fix: incorrect response schema for discover features #1912 (dbluhm)
    • Fix: SchemasInputDescriptorFilter: broken deserialization renders generated clients unusable #1894 (rmnre)
    • fix: schema class can set Meta.unknown #1885 (dbluhm)
  • Unit, Integration, and Aries Agent Test Harness Test updates

    • Additional integration tests for revocation scenarios #2055 (ianco)
    • Previously flagged in release 1.0.0-rc1
    • Fixes a few AATH failures #1897 (ianco)
    • fix: warnings in tests from IndySdkProfile #1865 (dbluhm)
    • Unit test fixes for python 3.9 #1858 (andrewwhitehead)
    • Update pip-audit.yml #1945 (ryjones)
    • Update pip-audit.yml #1944 (ryjones)
  • Dependency, Python version, GitHub Actions and Container Image Changes

    • Remove CircleCI Status since we aren't using CircleCI anymore #2163 swcurran
    • Update ACA-Py docker files to produce OpenShift compatible images #2130 WadeBarnes
    • Temporarily disable multi-architecture image builds #2125 WadeBarnes
    • Fix ACA-py image builds #2123 WadeBarnes
    • Fix publish workflows #2117 WadeBarnes
    • fix: indy dependency version format #2054 (chumbert)
    • ci: add gha for pr-tests #2058 (dbluhm)
    • ci: test additional versions of python nightly #2059 (dbluhm)
    • Update github actions dependencies (for node16 support) #2066 (andrewwhitehead)
    • Docker images and GHA for publishing images #2076 (dbluhm)
    • Update dockerfiles to use python 3.9 #2109 (ianco)
    • Updating base images from slim-buster to slim-bullseye #2105 (pradeepp88)
    • Previously flagged in release 1.0.0-rc1
    • feat: update pynacl version from 1.4.0 to 1.50 #1981 (morrieinmaas)
    • Fix: web.py dependency - integration tests & demos #1973 (shaangill025)
    • chore: update pydid #1915 (dbluhm)
  • Demo and Documentation Updates

    • [fix] Removes extra comma that prevents swagger from accepting the presentation request #2149 swcurran
    • Initial plugin docs #2138 ianco
    • Acme workshop #2137 ianco
    • Fix: Performance Demo [no --revocation] #2151 shaangill025
    • Fix typos in alice-local.sh & faber-local.sh #2010 (naonishijima)
    • Added a bit about manually creating a revoc reg tails file #2012 (ianco)
    • Add ability to set docker container name #2024 (matrixik)
    • Doc updates for json demo #2026 (ianco)
    • Multitenancy demo (docker-compose with postgres and ngrok) #2089 (ianco)
    • Allow using YAML configuration file with run_docker #2091 (matrixik)
    • Previously flagged in release 1.0.0-rc1
    • Fixes to acme exercise code #1990 (ianco)
    • Fixed bug in run_demo script #1982 (pasquale95)
    • Transaction Author with Endorser demo #1975 (ianco)
    • Redis Plugins [redis_cache & redis_queue] related updates #1937 (shaangill025)
  • Release management pull requests

    • 0.8.0 release #2169 (swcurran)
    • 0.8.0-rc0 release updates #2115 (swcurran)
    • Previously flagged in release 1.0.0-rc1
    • Release 1.0.0-rc0 #1904 (swcurran)
    • Add 0.7.5 patch Changelog entry to main branch Changelog #1996 (swcurran)
    • Release 1.0.0-rc1 #2005 (swcurran)
"},{"location":"CHANGELOG/#075","title":"0.7.5","text":""},{"location":"CHANGELOG/#october-26-2022","title":"October 26, 2022","text":"

0.7.5 is a patch release, primarily to add PR #1881, which fixes \"DID Exchange in ACA-Py 0.7.4 with explicit invitations and without auto-accept broken.\" A couple of other PRs were added to the release, as listed below and in Milestone 0.7.5.

"},{"location":"CHANGELOG/#list-of-pull-requests","title":"List of Pull Requests","text":"
  • Changelog and version updates for version 0.7.5-rc1 #1985 (swcurran)
  • Endorser doc updates and some bug fixes #1926 (ianco)
  • Fix: web.py dependency - integration tests & demos #1973 (shaangill025)
  • Endorser write DID transaction #1938 (ianco)
  • fix: didx request cannot be accepted #1881 (rmnre)
  • Fix: OOB - Handling of minor versions #1940 (shaangill025)
  • fix: Safely shutdown when root_profile uninitialized #1960 (frostyfrog)
  • feat: OOB v1.1 support #1962 (shaangill025)
  • 0.7.5 Cherry Picks #1967 (frostyfrog)
  • Changelog and version updates for version 0.7.5-rc0 #1969 (swcurran)
  • Final 0.7.5 changes #1991 (swcurran)
"},{"location":"CHANGELOG/#074","title":"0.7.4","text":""},{"location":"CHANGELOG/#june-30-2022","title":"June 30, 2022","text":"

Existing multitenant JWTs invalidated when a new JWT is generated: If you have a pre-existing implementation with existing Admin API authorization JWTs, invoking the endpoint to get a JWT now invalidates the existing JWT. Previously an identical JWT would be created. Please see this comment on PR #1725 for more details.
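
For example, a controller that requests a fresh token for a tenant wallet should now expect any previously issued token for that wallet to stop working. A minimal sketch (admin URL, wallet id, and wallet key are placeholders):

  import requests

  ADMIN_URL = 'http://localhost:8021'  # assumption: local admin endpoint
  wallet_id = 'PLACEHOLDER-WALLET-ID'

  resp = requests.post(
      f'{ADMIN_URL}/multitenancy/wallet/{wallet_id}/token',
      json={'wallet_key': 'tenant-wallet-key'},  # needed for unmanaged wallets
  )
  resp.raise_for_status()
  token = resp.json()['token']  # any earlier JWT for this wallet is now invalid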

0.7.4 is a significant release focused on stability and production deployments. As the \"patch\" release number indicates, there were no breaking changes in the Admin API, but a huge volume of updates and improvements. Highlights of this release include:

  • A major performance and stability improvement resulting from the now recommended use of Aries Askar instead of the Indy-SDK.
  • There are significant improvements and tools for dealing with revocation-related issues.
  • A lot of work has gone into the handling of Hyperledger Indy transaction endorsements.
  • ACA-Py now has a pluggable persistent queues mechanism in place, with Redis and Kafka support available (albeit with work still to come on documentation).

In addition, there are a significant number of general enhancements, bug fixes, documentation updates and code management improvements.

This release is a reflection of the many groups stressing ACA-Py in production environments, reporting issues and the resulting solutions. We also have a very large number of contributors to ACA-Py, with this release having PRs from 22 different individuals. A big thank you to all of those using ACA-Py, raising issues and providing solutions.

"},{"location":"CHANGELOG/#major-enhancements","title":"Major Enhancements","text":"

A lot of work has been put into this release related to performance and load testing, with significant updates being made to the key \"shared component\" ACA-Py dependencies (Aries Askar, Indy VDR, and Indy Shared RS, including CredX). We now recommend using those components (by using --wallet-type askar in the ACA-Py startup parameters) for new ACA-Py deployments. A wallet migration tool from indy-sdk storage to Askar storage is still needed before migrating existing deployments to Askar. A big thanks to those creating/reporting on stress test scenarios, and especially the team at LISSI for creating the aries-cloudagent-loadgenerator to make load testing so easy! And of course to the core ACA-Py team for addressing the findings.

The largest enhancement is in the area of the endorsing of Hyperledger Indy ledger transactions, enabling an instance of ACA-Py to act as an Endorser for Indy authors needing endorsements to write objects to an Indy ledger. We're working on an Aries Endorser Service based on the new capabilities in ACA-Py, an Endorser to be easily operated by an organization, ideally with a controller starter kit supporting a basic human and automated approvals business workflow. Contributions welcome!

A focus towards the end of the 0.7.4 development and release cycle was on the handling of AnonCreds revocation in ACA-Py. Most importantly, a production issue was uncovered whereby an ACA-Py issuer's local Revocation Registry data could get out of sync with what was published on an Indy ledger, resulting in an inability to publish new RevRegEntry transactions -- making new revocations impossible. As a result, we have added some new endpoints to enable an update to the RevReg storage such that RevRegEntry transactions can again be published to the ledger. Other changes were added related to revocation in general and in the handling of tails files in particular.

The team has worked a lot on evolving the persistent queue (PQ) approach available in ACA-Py. We have landed on a design for the queues for inbound and outbound messages using a default in-memory implementation, and the ability to replace the default method with implementations created via an ACA-Py plugin. There are two concrete, out-of-the-box external persistent queuing solutions available for Redis and Kafka. Those ACA-Py persistent queue implementation repositories will soon be migrated to the Aries project within the Hyperledger Foundation's GitHub organization. Anyone else can implement their own queuing plugin as long as it uses the same interface.
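
To make the plugin model concrete, the sketch below shows the general shape of such a queue implementation. Every name here is hypothetical; the actual interface is defined by ACA-Py's queue base classes and the Redis/Kafka plugin repositories:

  import asyncio

  class InMemoryOutboundQueue:
      # Hypothetical shape of a persistent-queue plugin; a Redis or Kafka
      # implementation would replace the asyncio.Queue with its client.

      def __init__(self) -> None:
          self._queue: asyncio.Queue = asyncio.Queue()

      async def start(self) -> None:
          pass  # a Redis/Kafka plugin would open its connection here

      async def stop(self) -> None:
          pass  # ...and close it here

      async def enqueue(self, endpoint: str, payload: bytes) -> None:
          await self._queue.put((endpoint, payload))

      async def dequeue(self):
          return await self._queue.get()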

Several new ways to control ACA-Py configurations were added, including new startup parameters, Admin API parameters to control instances of protocols, and additional web hook notifications.

A number of fixes were made to the Credential Exchange protocols, both for V1 and V2, and for both AnonCreds and W3C format VCs. Nothing new was added, and there were no changes to the APIs.

As well there were a number of internal fixes, dependency updates, documentation and demo changes, developer tools and release management updates. All the usual stuff needed for a healthy, growing codebase.

"},{"location":"CHANGELOG/#categorized-list-of-pull-requests_4","title":"Categorized List of Pull Requests","text":"
  • Hyperledger Indy Endorser related updates:

    • Fix order of operations connecting faber to endorser #1716 (ianco)
    • Endorser support for updating DID endpoints on ledger #1696 (frostyfrog)
    • Add \"sent\" key to both Schema and Cred Defs when using Endorsers #1663 (frostyfrog)
    • Add cred_def_id to metadata when using an Endorser #1655 (frostyfrog)
    • Update Endorser documentation #1646 (chumbert)
    • Auto-promote author did to public after endorsing #1607 (ianco)
    • DID updates for endorser #1601 (ianco)
    • Qualify did exch connection lookup by role #1670 (ianco)
    • Use provided connection_id if provided #1726 (ianco)
  • Additions to the startup parameters, Admin API and Web Hooks

    • Improve typing of settings and add plugin settings object #1833 (dbluhm)
    • feat: accept taa using startup parameter --accept-taa #1643 (TimoGlastra)
    • Add auto_verify flag in present-proof protocol #1702 (DaevMithran)
    • feat: query connections by their_public_did #1637 (TimoGlastra)
    • feat: enable webhook events for mediation records #1614 (TimoGlastra)
    • Feature/undelivered events #1694 (mepeltier)
    • Allow use of SEED when creating local wallet DID Issue-1682 Issue-1682 #1705 (DaevMithran)
    • Feature: Add the ability to deny specific plugins from loading #1737 (frostyfrog)
    • feat: Add filter param to connection list for invitations #1797 (frostyfrog)
    • Fix missing webhook handler #1816 (ianco)
  • Persistent Queues

    • Redis PQ Cleanup in preparation for enabling the uses of plugin PQ implementations [Issue#1659] #1659 (shaangill025)
  • Credential Revocation and Tails File Handling

    • Fix handling of non-revocable credential when timestamp is specified (askar/credx) #1847 (andrewwhitehead)
    • Additional endpoints to get revocation details and fix \"published\" status #1783 (ianco)
    • Fix IssuerCredRevRecord state update on revocation publish #1827 (andrewwhitehead)
    • Fix put_file when the server returns a redirect #1808 (andrewwhitehead)
    • Adjust revocation registry update procedure to shorten transactions #1804 (andrewwhitehead)
    • fix: Resolve Revocation Notification environment variable name collision #1751 (frostyfrog)
    • fix: always notify if revocation notification record exists #1665 (TimoGlastra)
    • Fix for AnonCreds non-revoc proof with no timestamp #1628 (ianco)
    • Fixes for v7.3.0 - Issue #1597 #1711 (shaangill025)
    • Fixes Issue 1 from #1597: Tails file upload fails when a credDef is created and multi ledger support is enabled
    • Fix tails server upload multi-ledger mode #1785 (ianco)
    • Feat/revocation notification v2 #1734 (frostyfrog)
  • Issue Credential, Present Proof updates/fixes

    • Fix: Present Proof v2 - check_proof_vs_proposal update to support proof request with restrictions #1820 (shaangill025)
    • Fix: present-proof v1 send-proposal flow #1811 (shaangill025)
    • Prover - verification outcome from presentation ack message #1757 (shaangill025)
    • feat: support connectionless exchange #1710 (TimoGlastra)
    • Fix: DIF proof proposal when creating bound presentation request [Issue#1687] #1690 (shaangill025)
    • Fix DIF PresExch and OOB request_attach delete unused connection #1676 (shaangill025)
    • Fix DIFPresFormatHandler returning invalid V20PresExRecord on presentation verification #1645 (rmnre)
    • Update aries-askar patch version to at least 0.2.4 as 0.2.3 does not include backward compatibility #1603 (acuderman)
    • Fixes for credential details in issue-credential webhook responses #1668 (andrewwhitehead)
    • Fix: present-proof v2 send-proposal issue#1474 #1667 (shaangill025)
    • Fixes Issue 3b from #1597: V2 Credential exchange ignores the auto-respond-credential-request
    • Revert change to send_credential_ack return value #1660 (andrewwhitehead)
    • Fix usage of send_credential_ack #1653 (andrewwhitehead)
    • Replace blank credential/presentation exchange states with abandoned state #1605 (andrewwhitehead)
    • Fixes Issue 4 from #1597: Wallet type askar has issues when receiving V1 credentials
    • Fixes and cleanups for issue-credential 1.0 #1619 (andrewwhitehead)
    • Fix: Duplicated schema and cred_def - Askar and Postgres #1800 (shaangill025)
  • Mediator updates and fixes

    • feat: allow querying default mediator from base wallet #1729 (dbluhm)
    • Added async with for mediator record delete #1749 (dejsenlitro)
  • Multitenancy updates and fixes

    • feat: create new JWT tokens and invalidate older for multitenancy #1725 (TimoGlastra)
    • Multi-tenancy stale wallet clean up #1692 (dbluhm)
  • Dependencies and internal code updates/fixes

    • Update pyjwt to 2.4 #1829 (andrewwhitehead)
    • Fix external Outbound Transport loading code #1812 (frostyfrog)
    • Fix iteration over key list, update Askar to 0.2.5 #1740 (andrewwhitehead)
    • Fix: update IndyLedgerRequestsExecutor logic - multitenancy and basic base wallet type #1700 (shaangill025)
    • Move database operations inside the session context #1633 (acuderman)
    • Upgrade ConfigArgParse to version 1.5.3 #1627 (WadeBarnes)
    • Update aiohttp dependency #1606 (acuderman)
    • did-exchange implicit request pthid update & invitation key verification #1599 (shaangill025)
    • Fix auto connection response not being properly mediated #1638 (dbluhm)
    • platform target in run tests. #1697 (burdettadam)
    • Add an integration test for mixed proof with a revocable cred and a n… #1672 (ianco)
    • Fix: Inbound Transport is_external attribute #1802 (shaangill025)
    • fix: add a close statement to ensure session is closed on error #1777 (reflectivedevelopment)
    • Adds transport_id variable assignment back to outbound enqueue method #1776 (amanji)
    • Replace async workaround within document loader #1774 (frostyfrog)
  • Documentation and Demo Updates

    • Use default wallet type askar for alice/faber demo and bdd tests #1761 (ianco)
    • Update the Supported RFCs document for 0.7.4 release #1846 (swcurran)
    • Fix a typo in DevReadMe.md #1844 (feknall)
    • Add troubleshooting document, include initial examples - ledger connection, out-of-sync RevReg #1818 (swcurran)
    • Update POST /present-proof/send-request to POST /present-proof-2.0/send-request #1824 (lineko)
    • Fetch from --genesis-url likely to fail in composed container #1746 (tdiesler)
    • Fixes logic for web hook formatter in Faber demo #1739 (amanji)
    • Multitenancy Docs Update #1706 (MonolithicMonk)
    • #1674 Add basic DOCKER_ENV logging for run_demo #1675 (tdiesler)
    • Performance demo updates #1647 (ianco)
    • docs: supported features attribution #1654 (TimoGlastra)
    • Documentation on existing language wrappers for aca-py #1738 (etschelp)
    • Document impact of multi-ledger on TAA acceptance #1778 (ianco)
  • Code management and contributor/developer support updates

    • Set prefix for integration test demo agents; some code cleanup #1840 (andrewwhitehead)
    • Pin markupsafe at version 2.0.1 #1642 (andrewwhitehead)
    • style: format with stable black release #1615 (TimoGlastra)
    • Remove references to play with von #1688 (ianco)
    • Add pre-commit as optional developer tool #1671 (dbluhm)
    • run_docker start - pass environment variables #1715 (shaangill025)
    • Use local deps only #1834 (ryjones)
    • Enable pip-audit #1831 (ryjones)
    • Only run pip-audit on main repo #1845 (ryjones)
  • Release management pull requests

    • 0.7.4 Release Changelog and version update #1849 (swcurran)
    • 0.7.4-rc5 changelog, version and ReadTheDocs updates #1838 (swcurran)
    • Update changelog and version for 0.7.4-rc4 #1830 (swcurran)
    • Changelog, version and ReadTheDocs updates for 0.7.4-rc3 release #1817 (swcurran)
    • 0.7.4-rc2 update #1771 (swcurran)
    • Some ReadTheDocs File updates #1770 (swcurran)
    • 0.7.4-RC1 Changelog intro paragraph - fix copy/paste error #1753 (swcurran)
    • Fixing the intro paragraph and heading in the changelog of this 0.7.4RC1 #1752 (swcurran)
    • Updates to Changelog for 0.7.4. RC1 release #1747 (swcurran)
    • Prep for adding the 0.7.4-rc0 tag #1722 (swcurran)
    • Added missed new module -- upgrade -- to the RTD generated docs #1593 (swcurran)
    • Doh....update the date in the Changelog for 0.7.3 #1592 (swcurran)
"},{"location":"CHANGELOG/#073","title":"0.7.3","text":""},{"location":"CHANGELOG/#january-10-2022","title":"January 10, 2022","text":"

This release includes some new AIP 2.0 features (Revocation Notification and Discover Features 2.0), a major new feature for those using Indy ledgers (multi-ledger support), a new \"version upgrade\" process that automates updating data in secure storage required after a new release, and a fix for a critical bug in some mediator scenarios. The release also includes several new pieces of documentation (upgrade processing, storage database information and logging) and some other documentation updates that make the ACA-Py Read The Docs site useful again. And of course, some recent bug fixes and cleanups are included.

There is a BREAKING CHANGE for those deploying ACA-Py with an external outbound queue implementation (see PR #1501). As far as we know, there is only one organization that has such an implementation and they were involved in the creation of this PR, so we are not making this release a minor or major update. However, anyone else using an external queue should be aware of the impact of this PR that is included in the release.

For those that have an existing deployment of ACA-Py with long-lasting connection records, an upgrade is needed to use RFC 434 Out of Band and the \"reuse connection\" feature as the invitee. In PR #1453 (details below) a performance improvement was made when finding a connection for reuse. The new approach (adding a tag to the connection to enable searching) applies only to connections made using this ACA-Py release and later; \"as-is\" connections made using earlier releases of ACA-Py will not be found as reuse candidates. A new \"Upgrade deployment\" capability (#1557, described below) must be executed to update your deployment to add tags for all existing connections.

The Supported RFCs document has been updated to reflect the addition of the AIP 2.0 RFCs for which support was added.

The following is an annotated list of PRs in the release, including a link to each PR.

  • AIP 2.0 Features
    • Discover Features Protocol: v1_0 refactoring and v2_0 implementation #1500
    • Updates the Discover Features 1.0 (AIP 1.0) implementation and implements the new 2.0 version. In doing so, adds generalized support for goal codes to ACA-Py.
    • fix DiscoveryExchangeRecord RECORD_TOPIC typo fix #1566
    • Implement Revocation Notification v1.0 #1464
    • Fix integration tests (revocation notifications) #1528
    • Add Revocation notification support to alice/faber #1527
  • Other New Features
    • Multiple Indy Ledger support and State Proof verification #1425
    • Remove required dependencies from multi-ledger code that was requiring the import of Aries Askar even when not being used #1550
    • Fixed IndyDID resolver bug after Tag 0.7.3rc0 created #1569
    • Typo vdr service name #1563
    • Fixes and cleanup for multiple ledger support with Askar #1583
    • Outbound Queue - more usability improvements #1501
    • Display QR code when generating/displaying invites on startup #1526
    • Enable WS Pings for WS Inbound Transport #1530
    • Faster detection of lost Web Socket connections; implementation verified with an existing mediator.
    • Performance Improvement when using connection reuse in OOB and there are many DID connections. ConnRecord tags - their_public_did and invitation_msg_id #1543
    • In previous releases, a \"their_public_did\" was not a tag, so to see if you can reuse a connection, all connections were retrieved from the database to see if a matching public DID can be found. Now, connections created after deploying this release will have a tag on the connection such that an indexed query can be used. See \"Breaking Change\" note above and \"Update\" feature below.
    • Follow up to #1543 - Adding invitation_msg_id and their_public_did back to record_value #1553
    • A generic \"Upgrade Deployment\" capability was added to ACA-Py that operates like a database migration capability in relational databases. When executed (via a command line option), the current version of the deployment is detected and, if any storage updates need to be applied to be consistent with the new version, they are applied, and the stored \"current version\" is updated to the new version. An instance of this capability can be used to address the new feature #1543 documented above. #1557
    • Adds a \"credential_revoked\" state to the Issue Credential protocol state object. When the protocol state object is retained past the completion of the protocol, it is updated when the credential is revoked. #1545
    • Updated a missing dependency that recently caused an error when using the --version command line option #1589
  • Critical Fixes
    • Fix connection record response for mobile #1469
  • Documentation Additions and Updates
    • added documentation for wallet storage databases #1523
    • added logging documentation #1519
    • Fix warnings when generating ReadTheDocs #1509
    • Remove Streetcred references #1504
    • Add RTD configs to get generator working #1496
    • The Alice/Faber demo was updated to allow connections based on Public DIDs to be established, including reusing a connection if there is an existing connection. #1574
  • Other Fixes
    • Connection Handling / Out of Band Invitations Fixes
    • OOB: Fixes issues with multiple public explicit invitation and unused 0160 connection #1525
    • OOB added webhooks to notify the controller when a connection reuse message is used in response to an invitation #1581
    • Delete unused ConnRecord generated - OOB invitation (use_existing_connection) #1521
    • When an invitee responded with a \"reuse\" message, the connection record associated with the invitation was not being deleted. Now it is.
    • Await asyncio.sleeps to cleanup warnings in Python 3.8/3.9 #1558
    • Add alias field to didexchange invitation UI #1561
    • fix: use invitation key for connection query #1570
    • Fix the inconsistency of invitation_msg_id between invitation and response #1564
    • chore: update pydid to ^0.3.3 #1562
    • DIF Presentation Exchange Cleanups
    • Fix DIF Presentation Request Input Validation #1517
    • Some validation checking of a DIF presentation request to prevent uncaught errors later in the process.
    • DIF PresExch - ProblemReport and \"is_holder\" #1493
    • Cleanups related to when \"is_holder\" is or is not required. Related to Issue #1486
    • Indy SDK Related Fixes
    • Fix AttributeError when writing an Indy Cred Def record #1516
    • Fix TypeError when calling credential_definitions_fix_cred_def_wallet\u2026 #1515
    • Fix TypeError when writing a Schema record #1494
    • Fix validation for range checks #1538
    • Back out some of the validation checking for proof requests with predicates as they were preventing valid proof requests from being processed.
    • Aries Askar Related Fixes:
    • Fix bug when getting credentials on askar-profile #1510
    • Fix error when removing a wallet on askar-profile #1518
    • Fix error when connection request is received (askar, public invitation) #1508
    • Fix error when an error occurs while issuing a revocable credential #1591
    • Docker fixes:
    • Update docker scripts to use new & improved docker IP detection #1565
    • Release Administration:
    • Changelog and RTD updates for the pending 0.7.3 release #1553
"},{"location":"CHANGELOG/#072","title":"0.7.2","text":""},{"location":"CHANGELOG/#november-15-2021","title":"November 15, 2021","text":"

A mostly maintenance release with some key updates and cleanups based on community deployments and discovery. With usage in the field increasing, we're cleaning up edge cases and issues related to volume deployments.

The most significant new feature for users of Indy ledgers is a simplified approach for transaction authors getting their transactions signed by an endorser. Transaction author controllers now do almost nothing other than configuring their instance to use an Endorser, and ACA-Py takes care of the rest. Documentation of that feature is here.

  • Improve cloud native deployments/scaling
    • unprotect liveness and readiness endpoints #1416
    • Open askar sessions only on demand - Connections #1424
    • Fixed potential deadlocks by opening sessions only on demand (Wallet endpoints) #1472
    • Fixed potential deadlocks by opening sessions only on demand #1439
    • Make mediation invitation parameter idempotent #1413
  • Indy Transaction Endorser Support Added
    • Endorser protocol configuration, automation and demo integration #1422
    • Auto connect from author to endorser on startup #1461
    • Startup and shutdown events (prep for endorser updates) #1459
    • Endorser protocol askar fixes #1450
    • Endorser protocol updates - refactor to use event bus #1448
  • Indy verifiable credential/presentation fixes and updates
    • Update credential and proof mappings to allow negative encoded values #1475
    • Add credential validation to offer issuance step #1446
    • Fix error removing proof req entries by timestamp #1465
    • Fix issue with cred limit on presentation endpoint #1437
    • Add support for custom offers from the proposal #1426
    • Make requested attributes and predicates required on indy proof request #1411
    • Remove connection check on proof verify #1383
  • General cleanups and improvements to existing features
    • Fixes failing integration test -- JSON-LD context URL not loading because of external issue #1491
    • Update base record time-stamp to standard ISO format #1453
    • Encode DIDComm messages before sent to the queue #1408
    • Add Event bus Metadata #1429
    • Allow base wallet to connect to a mediator after startup #1463
    • Log warning when unsupported problem report code is received #1409
    • feature/inbound-transport-profile #1407
    • Import cleanups #1393
    • Add no-op handler for generic ack message (RFC 0015) #1390
    • Align OutOfBandManager.receive_invitation with other connection managers #1382
  • Bug fixes
    • fix: fixes error in use of a default mediator in connections/out of band -- mediation ID was being saved as None instead of the retrieved default mediator value #1490
    • fix: help text for open-mediation flag #1445
    • fix: incorrect return type #1438
    • Add missing param to ws protocol #1442
    • fix: create static doc use empty endpoint if None #1483
    • fix: use named tuple instead of dataclass in mediation invite store #1476
    • When fetching the admin config, don't overwrite webhook settings #1420
    • fix: return type of inject #1392
    • fix: typo in connection static result schema #1389
    • fix: don't require push on outbound queue implementations #1387
  • Updates/Fixes to the Alice/Faber demo and integration tests
    • Clarify instructions in the Acme Controller Demo #1484
    • Fix aip 20 behaviour and other cleanup #1406
    • Fix issue with startup sequence for faber agent #1415
    • Connectionless proof demo #1395
    • Typos in the demo's README.md #1405
    • Run integration tests using external ledger and tails server #1400
  • Chores
    • Update CONTRIBUTING.md #1428
    • Update to ReadMe and Supported RFCs for 0.7.2 #1489
    • Updating the RTDs code for Release 0.7.2 - Try 2 #1488
"},{"location":"CHANGELOG/#071","title":"0.7.1","text":""},{"location":"CHANGELOG/#august-31-2021","title":"August 31, 2021","text":"

A relatively minor maintenance release to address issues found since the 0.7.0 Release. Includes some cleanups of JSON-LD Verifiable Credentials and Verifiable Presentations.

  • W3C Verifiable Credential cleanups
    • Timezone inclusion [ISO 8601] for W3C VC and Proofs (#1373)
    • W3C VC handling where attachment is JSON and not Base64 encoded (#1352)
  • Refactor outbound queue interface (#1348)
  • Command line parameter handling for arbitrary plugins (#1347)
  • Add an optional parameter '--ledger-socks-proxy' (#1342)
  • OOB Protocol - CredentialOffer Support (#1316), (#1216)
  • Updated IndyCredPrecisSchema - pres_referents renamed to presentation_referents (#1334)
  • Handle unpadded protected header in PackWireFormat::get_recipient_keys (#1324)
  • Initial cut of OpenAPI Code Generation guidelines (#1339)
  • Correct revocation API in credential revocation documentation (#612)
  • Documentation updates for Read-The-Docs (#1359, #1366, #1371)
  • Add inject_or method to dynamic injection framework to resolve typing ambiguity (#1376)
  • Other fixes:
    • Indy Proof processing fix, error not raised in predicate timestamp check (#1364)
    • Problem Report handler for connection specific problems (#1356)
    • fix: error on deserializing conn record with protocol (#1325)
    • fix: failure to verify jsonld on non-conformant doc but valid vmethod (#1301)
    • fix: allow underscore in endpoints (#1378)
"},{"location":"CHANGELOG/#070","title":"0.7.0","text":""},{"location":"CHANGELOG/#july-14-2021","title":"July 14, 2021","text":"

Another significant release, this version adds support for multiple new protocols, credential formats, and extension methods.

  • Support for W3C Standard Verifiable Credentials based on JSON-LD using LD-Signatures and BBS+ Signatures, contributed by Animo Solutions - #1061
  • Present Proof V2 including support for DIF Presentation Exchange - #1125
  • Pluggable DID Resolver (with a did:web resolver) with fallback to an external DID universal resolver, contributed by Indicio - #1070
  • Updates and extensions to ledger transaction endorsement via the Sign Attachment Protocol, contributed by AyanWorks - #1134, #1200
  • Upgrades to Demos to add support for Credential Exchange 2.0 and W3C Verifiable Credentials #1235
  • Alpha support for the Indy/Aries Shared Components (indy-vdr, indy-credx and aries-askar), which enable running ACA-Py without using Indy-SDK, while still supporting the use of Indy as a ledger, and Indy AnonCreds verifiable credentials #1267
  • A new event bus for distributing internally generated ACA-Py events to controllers and other listeners, contributed by Indicio - #1063
  • Enable operation without Indy ledger support if not needed
  • Performance fix for deployments with large numbers of DIDs/connections #1249
  • Simplify the creation/handling of plugin protocols #1086, #1133, #1226
  • DID Exchange implicit invitation handling #1174
  • Add support for Indy 1.16 predicates (restrictions on predicates based on attribute name and value) #1213
  • BDD Tests run via GitHub Actions #1046
"},{"location":"CHANGELOG/#060","title":"0.6.0","text":""},{"location":"CHANGELOG/#february-25-2021","title":"February 25, 2021","text":"

This is a significant release of ACA-Py with several new features, as well as changes to the internal architecture in order to set the groundwork for using the new shared component libraries: indy-vdr, indy-credx, and aries-askar.

"},{"location":"CHANGELOG/#mediator-support","title":"Mediator support","text":"

While ACA-Py had previous support for a basic routing protocol, this was never fully developed or used in practice. Starting with this release, inbound and outbound connections can be established through a mediator agent using the Aries Mediator Coordination Protocol. This work was initially contributed by Adam Burdett and Daniel Bluhm of Indicio on behalf of SICPA. Read more about mediation support.

"},{"location":"CHANGELOG/#multi-tenancy-support","title":"Multi-Tenancy support","text":"

Started by BMW and completed by Animo Solutions and Anon Solutions on behalf of SICPA, this feature allows for a single ACA-Py instance to host multiple wallet instances. This can greatly reduce the resources required when many identities are being handled. Read more about multi-tenancy support.

"},{"location":"CHANGELOG/#new-connection-protocols","title":"New connection protocol(s)","text":"

In addition to the Aries 0160 Connections RFC, ACA-Py now supports the Aries DID Exchange Protocol for connection establishment and reuse, as well as the Aries Out-of-Band Protocol for representing connection invitations and other pre-connection requests.

"},{"location":"CHANGELOG/#issue-credential-v2","title":"Issue-Credential v2","text":"

This release includes an initial implementation of the Aries Issue Credential v2 protocol.

"},{"location":"CHANGELOG/#notable-changes-for-administrators","title":"Notable changes for administrators","text":"
  • There are several new endpoints available for controllers as well as new startup parameters related to the multi-tenancy and mediator features, see the feature description pages above in order to make use of these features. Additional admin endpoints are introduced for the DID Exchange, Issue Credential v2, and Out-of-Band protocols.

  • When running aca-py start, a new wallet will no longer be created unless the --auto-provision argument is provided. It is recommended to always use aca-py provision to initialize the wallet rather than relying on automatic behaviour, as this removes the need for repeatedly providing the wallet seed value (if any). This is a breaking change from previous versions.

  • When running aca-py provision, an existing wallet will not be removed and re-created unless the --recreate-wallet argument is provided. This is a breaking change from previous versions.

  • The logic around revocation intervals has been tightened up in accordance with Present Proof Best Practices.

"},{"location":"CHANGELOG/#notable-changes-for-plugin-writers","title":"Notable changes for plugin writers","text":"

The following are breaking changes to the internal APIs which may impact Python code extensions.

  • Manager classes generally accept a Profile instance, where previously they accepted a RequestContext.

  • Admin request handlers now receive an AdminRequestContext as app[\"context\"]. The current profile is available as app[\"context\"].profile. The admin server now generates a unique context instance per request in order to facilitate multi-tenancy, rather than reusing the same instance for each handler.

  • In order to inject the BaseStorage or BaseWallet interfaces, a ProfileSession must be used. Other interfaces can be injected at the Profile or ProfileSession level. This is obtained by awaiting profile.session() for the current Profile instance, or (preferably) using it as an async context manager:

async with profile.session() as session:
    storage = session.inject(BaseStorage)

  • The inject method of a context is no longer async.
"},{"location":"CHANGELOG/#056","title":"0.5.6","text":""},{"location":"CHANGELOG/#october-19-2020","title":"October 19, 2020","text":"
  • Fix an attempt to update the agent endpoint when configured with a read-only ledger #758
"},{"location":"CHANGELOG/#055","title":"0.5.5","text":""},{"location":"CHANGELOG/#october-9-2020","title":"October 9, 2020","text":"
  • Support interactions using the new https://didcomm.org message type prefix (currently opt-in via the --emit-new-didcomm-prefix flag) #705, #713
  • Updates to application startup arguments, adding support for YAML configuration #739, #746, #748
  • Add a new endpoint to check the revocation status of a stored credential #735
  • Clean up API documentation and OpenAPI definition, minor API adjustments #712, #726, #732, #734, #738, #741, #747
  • Add configurable support for unencrypted record tags #723
  • Retain more limited records on issued credentials #718
  • Fix handling of custom endpoint in connections accept-request API method #715, #716
  • Add restrictions around revocation registry sizes #727
  • Allow the state for revocation registry records to be set manually #708
  • Handle multiple matching credentials when satisfying a presentation request using names #706
  • Additional handling for a missing local tails file, tails file rollover process #702, #717
  • Handle unknown credential ID in create-proof API method #700
  • Improvements to revocation interval handling in presentation requests #699, #703
  • Clean up warnings on API redirects #692
  • Extensions to DID publicity status #691
  • Support Unicode text in JSON-LD credential handling #687
"},{"location":"CHANGELOG/#054","title":"0.5.4","text":""},{"location":"CHANGELOG/#august-24-2020","title":"August 24, 2020","text":"
  • Improvements to schema, cred def registration procedure #682, #683
  • Updates to align admin API output with documented interface #674, #681
  • Fix provisioning issue when ledger is configured as read-only #673
  • Add get-nym-role action #671
  • Basic support for w3c profile endpoint #667, #669
  • Improve handling of non-revocation interval #648, #680
  • Update revocation demo after changes to tails file handling #644
  • Improve handling of fatal ledger errors #643, #659
  • Improve did:key: handling in out-of-band protocol support #639
  • Fix crash when no public DID is configured #637
  • Fix high CPU usage when only messages pending retry are in the outbound queue #636
  • Additional unit tests for config, messaging, revocation, startup, transports #633, #641, #658, #661, #666
  • Allow forwarded messages to use existing connections and the outbound queue #631
"},{"location":"CHANGELOG/#053","title":"0.5.3","text":""},{"location":"CHANGELOG/#july-23-2020","title":"July 23, 2020","text":"
  • Store endpoint on provisioned DID records #610
  • More reliable delivery of outbound messages and webhooks #615
  • Improvements for OpenShift pod handling #614
  • Remove support for 'on-demand' revocation registries #605
  • Sort tags in generated swagger JSON for better consistency #602
  • Improve support for multi-credential proofs #601
  • Adjust default settings for tracing and add documentation #598, #597
  • Fix reliance on local copy of revocation tails file #590
  • Improved handling of problem reports #595
  • Remove credential preview parameter from credential issue endpoint #596
  • Looser format restrictions on dates #586
  • Support names and attribute-value specifications in present-proof protocol #587
  • Misc documentation updates and unit test coverage
"},{"location":"CHANGELOG/#052","title":"0.5.2","text":""},{"location":"CHANGELOG/#june-26-2020","title":"June 26, 2020","text":"
  • Initial out-of-band protocol support #576
  • Support provisioning a new local-only DID in the wallet, updating a DID endpoint #559, #573
  • Support pagination for holder search operation #558
  • Add raw JSON credential signing and verification admin endpoints #540
  • Catch fatal errors in admin and protocol request handlers #527, #533, #534, #539, #543, #554, #555
  • Add wallet and DID key rotation operations #525
  • Admin API documentation and usability improvements #504, #516, #570
  • Adjust the maximum number of attempts for outbound messages #501
  • Add demo support for tails server #499
  • Various credential and presentation protocol fixes and improvements #491, #494, #498, #526, #561, #563, #564, #577, #579
  • Fixes for multiple agent endpoints #495, #497
  • Additional test coverage #482, #485, #486, #487, #490, #493, #509, #553
  • Update marshmallow dependency #479
"},{"location":"CHANGELOG/#051","title":"0.5.1","text":""},{"location":"CHANGELOG/#april-23-2020","title":"April 23, 2020","text":"
  • Restore previous response format for the /credential/{id} admin route #474
"},{"location":"CHANGELOG/#050","title":"0.5.0","text":""},{"location":"CHANGELOG/#april-21-2020","title":"April 21, 2020","text":"
  • Add support for credential revocation and revocation registry handling, with thanks to Medici Ventures #306, #417, #425, #429, #432, #435, #441, #455
  • Breaking change Remove previous credential and presentation protocols (0.1 versions) #416
  • Add support for major/minor protocol version routing #443
  • Event tracing and trace reports for message exchanges #440
  • Support additional Indy restriction operators (>, <, <= in addition to >=) #457
  • Support signed attachments according to the updated Aries RFC 0017 #456
  • Increased test coverage #442, #453
  • Updates to demo agents and documentation #402, #403, #411, #415, #422, #423, #449, #450, #452
  • Use Indy generate_nonce method to create proof request nonces #431
  • Make request context available in the outbound transport handler #408
  • Contain indy-anoncreds usage in IndyIssuer, IndyHolder, IndyProver classes #406, #463
  • Fix issue with validation of proof with predicates and revocation support #400
"},{"location":"CHANGELOG/#045","title":"0.4.5","text":""},{"location":"CHANGELOG/#march-3-2020","title":"March 3, 2020","text":"
  • Added NOTICES file with license information for dependencies #398
  • Updated documentation for administration API demo #397
  • Accept self-attested attributes in presentation verification, only when no restrictions are present on the requested attribute #394, #396
"},{"location":"CHANGELOG/#044","title":"0.4.4","text":""},{"location":"CHANGELOG/#february-28-2020","title":"February 28, 2020","text":"
  • Update docker image used in demo and test containers #391
  • Fix pre-verify check on received presentations #390
  • Do not canonicalize attribute names in credential previews #389
"},{"location":"CHANGELOG/#043","title":"0.4.3","text":""},{"location":"CHANGELOG/#february-26-2020","title":"February 26, 2020","text":"
  • Fix the application of transaction author agreement acceptance to signed ledger requests #385
  • Add a command line argument to preserve connection exchange records #355
  • Allow custom credential IDs to be specified by the controller in the issue-credential protocol #384
  • Handle send timeouts in the admin server websocket implementation #377
  • Aries RFC 0348: Support the 'didcomm.org' message type prefix for incoming messages #379
  • Add support for additional postgres wallet schemes such as \"MultiWalletDatabase\" #378
  • Updates to the demo agents and documentation to support demos using the OpenAPI interface #371, #375, #376, #382, #383
  • Add a new flag for preventing writes to the ledger #364
"},{"location":"CHANGELOG/#042","title":"0.4.2","text":""},{"location":"CHANGELOG/#february-8-2020","title":"February 8, 2020","text":"
  • Adjust logging on HTTP request retries #363
  • Tweaks to run_docker/run_demo scripts for Windows #357
  • Avoid throwing exceptions on invalid or incomplete received presentations #359
  • Restore the present-proof/create-request admin endpoint for creating connectionless presentation requests #356
  • Activate the connections/create-static admin endpoint for creating static connections #354
"},{"location":"CHANGELOG/#041","title":"0.4.1","text":""},{"location":"CHANGELOG/#january-31-2020","title":"January 31, 2020","text":"
  • Update Forward messages and handlers to align with RFC 0094 for compatibility with libvcx and Streetcred #240, #349
  • Verify encoded attributes match raw attributes on proof presentation #344
  • Improve checks for existing credential definitions in the wallet and on ledger when publishing #333, #346
  • Accommodate referents in presentation proposal preview attribute specifications #333
  • Make credential proposal optional in issue-credential protocol #336
  • Handle proofs with repeated credential definition IDs #330
  • Allow side-loading of alternative inbound transports #322
  • Various fixes to documentation and message schemas, and improved unit test coverage
"},{"location":"CHANGELOG/#040","title":"0.4.0","text":""},{"location":"CHANGELOG/#december-10-2019","title":"December 10, 2019","text":"
  • Improved unit test coverage (actionmenu, basicmessage, connections, introduction, issue-credential, present-proof, routing protocols)
  • Various documentation and bug fixes
  • Add admin routes for fetching and accepting the ledger transaction author agreement #144
  • Add support for receiving connection-less proof presentations #296
  • Set attachment id explicitly in unbound proof request #289
  • Add create-proposal admin endpoint to the present-proof protocol #288
  • Remove old anon/authcrypt support #282
  • Allow additional endpoints to be specified #276
  • Allow timestamp without trailing 'Z' #275, #277
  • Display agent label and version on CLI and SwaggerUI #274
  • Remove connection activity tracking and add ping webhooks (with --monitor-ping) #271
  • Refactor message transport to track all async tasks, active message handlers #269, #287
  • Add invitation mode \"static\" for static connections #260
  • Allow for cred proposal underspecification of cred def id, only lock down cred def id at issuer on offer. Sync up api requests to Aries RFC-36 verbiage #259
  • Disable cookies on outbound requests (avoid session affinity) #258
  • Add plugin registry for managing all loaded protocol plugins, streamline ClassLoader #257, #261
  • Add support for locking a cache key to avoid repeating expensive operations #256
  • Add optional support for uvloop #255
  • Output timing information when --timing-log argument is provided #254
  • General refactoring - modules moved from messaging into new core, protocols, and utils sub-packages #250, #301
  • Switch performance demo to the newer issue-credential protocol #243
"},{"location":"CHANGELOG/#035","title":"0.3.5","text":""},{"location":"CHANGELOG/#november-1-2019","title":"November 1, 2019","text":"
  • Switch performance demo to the newer issue-credential protocol #243
  • Remove old method for reusing credential requests and replace with local caching for credential offers and requests #238, #242
  • Add statistics on HTTP requests to timing output #237
  • Reduce the number of tags on non-secrets records to reduce storage requirements and improve performance #235
"},{"location":"CHANGELOG/#034","title":"0.3.4","text":""},{"location":"CHANGELOG/#october-23-2019","title":"October 23, 2019","text":"
  • Clean up base64 handling in wallet utils and add tests #224
  • Support schema sequence numbers for lookups and caching and allow credential definition tag override via admin API #223
  • Support multiple proof referents in the present-proof protocol #222
  • Group protocol command line arguments appropriately #217
  • Don't require a signature for get_txn_request in credential_definition_id2schema_id and reduce public DID lookups #215
  • Add a role property to credential exchange and presentation exchange records #214, #218
  • Improve attachment decorator handling #210
  • Expand and correct documentation of the OpenAPI interface #208, #212
"},{"location":"CHANGELOG/#033","title":"0.3.3","text":""},{"location":"CHANGELOG/#september-27-2019","title":"September 27, 2019","text":"
  • Clean up LGTM errors and warnings and fix a message dispatch error #203
  • Avoid wrapping messages with Forward wrappers when returning them directly #199
  • Add a CLI parameter to override the base URL used in URL-formatted connection invitations #197
  • Update the feature discovery protocol to match the RFC and rename the admin API endpoint #193
  • Add CLI parameters for specifying additional properties of the printed connection invitation #192
  • Add support for explicitly setting the wallet credential ID on storage #188
  • Additional performance tracking and storage reductions #187
  • Handle connection invitations in base64 or URL format in the Alice demo agent #186
  • Add admin API methods to get and set the credential tagging policy for a credential definition ID #185
  • Allow querying of credentials for proof requests with multiple referents #181
  • Allow self-connected agents to issue credentials, present proofs #179
  • Add admin API endpoints to register a ledger nym, fetch a ledger DID verkey, or fetch a ledger DID endpoint #178
"},{"location":"CHANGELOG/#032","title":"0.3.2","text":""},{"location":"CHANGELOG/#september-3-2019","title":"September 3, 2019","text":"
  • Merge support for Aries #36 (issue-credential) and Aries #37 (present-proof) protocols #164, #167
  • Add initiator to connection record queries to ensure uniqueness in the case of a self-connection #161
  • Add connection aliases #149
  • Misc documentation updates
"},{"location":"CHANGELOG/#031","title":"0.3.1","text":""},{"location":"CHANGELOG/#august-15-2019","title":"August 15, 2019","text":"
  • Do not fail with an error when no ledger is configured #145
  • Switch to PyNaCl instead of pysodium; update dependencies #143
  • Support reusable connection invitations #142
  • Fix --version option and optimize Docker builds #136
  • Add connection_id to basicmessage webhooks #134
  • Fixes for transaction author agreements #133
"},{"location":"CHANGELOG/#030","title":"0.3.0","text":""},{"location":"CHANGELOG/#august-9-2019","title":"August 9, 2019","text":"
  • Ledger and wallet config updates; add support for transaction author agreements #127
  • Handle duplicate schema in send_schema by always fetching first #126
  • More flexible timeout support in detect_process #125
  • Add start command to run_docker invocations #119
  • Add issuer stored state #114
  • Add admin route to create a presentation request without sending it #112
  • Add -v option to aca-py executable to print version #110
  • Fix demo presentation request, optimize credential retrieval #108
  • Add pypi badge to README and make document link URLs absolute #103
  • Add admin routes for creating and listing wallet DIDs, adjusting the public DID #102
  • Update the running locally instructions based on feedback from Sam Smith #101
  • Add support for multiple invocation commands, implement start/provision/help commands #99
  • Add admin endpoint to send problem report #98
  • Add credential received state transition #97
  • Adding documentation for the routing version of the performance example #94
  • Document listing the Aries RFCs supported by ACA-Py and reference to the list in the README #89
  • Further updates to the running locally section of the demo README #86
  • Don't extract decorators with names matching the 'data_key' of defined schema fields #85
  • Allow demo scripts to run outside of Docker; add command line parsing #84
  • Connection invitation fixes and improvements; support DID-based invitations #82
"},{"location":"CHANGELOG/#021","title":"0.2.1","text":""},{"location":"CHANGELOG/#july-16-2019","title":"July 16, 2019","text":"
  • Add missing MANIFEST file #78
"},{"location":"CHANGELOG/#020","title":"0.2.0","text":""},{"location":"CHANGELOG/#july-16-2019_1","title":"July 16, 2019","text":"

This is the first PyPI release. The history begins with the transfer of aca-py from bcgov to hyperledger.

  • Prepare for version 0.2.0 release #77
  • Update von-network related references. #74
  • Fixed log_level arg, added validation error logging #73
  • fix shell inconsistency #72
  • further cleanup to the OpenAPI demo script #71
  • Updates to invitation handling and performance test #68
  • Api security #67
  • Fix line endings on Windows #66
  • Fix repository name in badge links #65
  • Connection record is_ready refactor #64
  • Fix API instructions for cred def id #58
  • Updated API demo docs to use alice/faber scripts #54
  • Updates to the readme for the demo to add PWD support #53
  • Swallow empty input in demo scripts #51
  • Set credential_exchange state when created from a cached credential request #49
  • Check for readiness instead of activeness in credential admin routes #46
  • Demo updates #43
  • Misc fixes #42
  • Readme updates #41
  • Change installed \"binary\" name to aca-py #40
  • Tweak in script to work under Linux; updates to readme for demo #33
  • New routing example document, typo corrections #31
  • More bad links #30
  • Links cleanup for the documentation #29
  • Alice-Faber demo update #28
  • Deployment Model document #27
  • Plantuml source and images for documentation; w/image generator script #26
  • Move generated documentation. #25
  • Update generated documents #24
  • Split application configuration into separate modules and add tests #23
  • Updates to the RTD configuration file #22
  • Merge DIDDoc support from von_anchor #21
  • Adding Prov of BC, Gov of Canada copyright #19
  • Update test configuration #18
  • CI updates #17
  • Transport updates #15
"},{"location":"CODE_OF_CONDUCT/","title":"Hyperledger Code of Conduct","text":"

Hyperledger is a collaborative project at The Linux Foundation. It is an open-source and open community project where participants choose to work together, and in that process experience differences in language, location, nationality, and experience. In such a diverse environment, misunderstandings and disagreements happen, which in most cases can be resolved informally. In rare cases, however, behavior can intimidate, harass, or otherwise disrupt one or more people in the community, which Hyperledger will not tolerate.

A Code of Conduct is useful to define accepted and acceptable behaviors and to promote high standards of professional practice. It also provides a benchmark for self evaluation and acts as a vehicle for better identity of the organization.

This code (CoC) applies to any member of the Hyperledger community \u2013 developers, participants in meetings, teleconferences, mailing lists, conferences or functions, etc. Note that this code complements rather than replaces legal rights and obligations pertaining to any particular situation.

"},{"location":"CODE_OF_CONDUCT/#statement-of-intent","title":"Statement of Intent","text":"

Hyperledger is committed to maintaining a positive work environment. This commitment calls for a workplace where participants at all levels behave according to the rules of the following code. A foundational concept of this code is that we all share responsibility for our work environment.

"},{"location":"CODE_OF_CONDUCT/#code","title":"Code","text":"
  1. Treat each other with respect, professionalism, fairness, and sensitivity to our many differences and strengths, including in situations of high pressure and urgency.

  2. Never harass or bully anyone verbally, physically or sexually.

  3. Never discriminate on the basis of personal characteristics or group membership.

  4. Communicate constructively and avoid demeaning or insulting behavior or language.

  5. Seek, accept, and offer objective work criticism, and acknowledge properly the contributions of others.

  6. Be honest about your own qualifications, and about any circumstances that might lead to conflicts of interest.

  7. Respect the privacy of others and the confidentiality of data you access.

  8. With respect to cultural differences, be conservative in what you do and liberal in what you accept from others, but not to the point of accepting disrespectful, unprofessional or unfair or unwelcome behavior or advances.

  9. Promote the rules of this Code and take action (especially if you are in a leadership position) to bring the discussion back to a more civil level whenever inappropriate behaviors are observed.

  10. Stay on topic: Make sure that you are posting to the correct channel and avoid off-topic discussions. Remember when you update an issue or respond to an email you are potentially sending to a large number of people.

  11. Step down considerately: Members of every project come and go, and Hyperledger is no different. When you leave or disengage from the project, in whole or in part, we ask that you do so in a way that minimizes disruption to the project. This means you should tell people you are leaving and take the proper steps to ensure that others can pick up where you left off.

"},{"location":"CODE_OF_CONDUCT/#glossary","title":"Glossary","text":""},{"location":"CODE_OF_CONDUCT/#demeaning-behavior","title":"Demeaning Behavior","text":"

is acting in a way that reduces another person's dignity, sense of self-worth or respect within the community.

"},{"location":"CODE_OF_CONDUCT/#discrimination","title":"Discrimination","text":"

is the prejudicial treatment of an individual based on criteria such as: physical appearance, race, ethnic origin, genetic differences, national or social origin, name, religion, gender, sexual orientation, family or health situation, pregnancy, disability, age, education, wealth, domicile, political view, morals, employment, or union activity.

"},{"location":"CODE_OF_CONDUCT/#insulting-behavior","title":"Insulting Behavior","text":"

is treating another person with scorn or disrespect.

"},{"location":"CODE_OF_CONDUCT/#acknowledgement","title":"Acknowledgement","text":"

is a record of the origin(s) and author(s) of a contribution.

"},{"location":"CODE_OF_CONDUCT/#harassment","title":"Harassment","text":"

is any conduct, verbal or physical, that has the intent or effect of interfering with an individual, or that creates an intimidating, hostile, or offensive environment.

"},{"location":"CODE_OF_CONDUCT/#leadership-position","title":"Leadership Position","text":"

includes group Chairs, project maintainers, staff members, and Board members.

"},{"location":"CODE_OF_CONDUCT/#participant","title":"Participant","text":"

includes the following persons:

  • Developers
  • Member representatives
  • Staff members
  • Anyone from the public partaking in the Hyperledger work environment (e.g. contributing code, commenting on our code or specs, emailing us, attending our conferences and functions, etc.)
"},{"location":"CODE_OF_CONDUCT/#respect","title":"Respect","text":"

is the genuine consideration you have for someone (if only because of their status as participant in Hyperledger, like yourself), and that you show by treating them in a polite and kind way.

"},{"location":"CODE_OF_CONDUCT/#sexual-harassment","title":"Sexual Harassment","text":"

includes visual displays of degrading sexual images, sexually suggestive conduct, offensive remarks of a sexual nature, requests for sexual favors, unwelcome physical contact, and sexual assault.

"},{"location":"CODE_OF_CONDUCT/#unwelcome-behavior","title":"Unwelcome Behavior","text":"

Hard to define? Some questions to ask yourself are:

  • how would I feel if I were in the position of the recipient?
  • would my spouse, parent, child, sibling or friend like to be treated this way?
  • would I like an account of my behavior published in the organization's newsletter?
  • could my behavior offend or hurt other members of the work group?
  • could someone misinterpret my behavior as intentionally harmful or harassing?
  • would I treat my boss or a person I admire at work like that?
  • Summary: if you are unsure whether something might be welcome or unwelcome, don't do it.
"},{"location":"CODE_OF_CONDUCT/#unwelcome-sexual-advance","title":"Unwelcome Sexual Advance","text":"

includes requests for sexual favors, and other verbal or physical conduct of a sexual nature, where:

  • submission to such conduct is made either explicitly or implicitly a term or condition of an individual's employment,
  • submission to or rejection of such conduct by an individual is used as a basis for employment decisions affecting the individual,
  • such conduct has the purpose or effect of unreasonably interfering with an individual's work performance or creating an intimidating hostile or offensive working environment.
"},{"location":"CODE_OF_CONDUCT/#workplace-bullying","title":"Workplace Bullying","text":"

is a tendency of individuals or groups to use persistent aggressive or unreasonable behavior (e.g. verbal or written abuse, offensive conduct or any interference which undermines or impedes work) against a co-worker or any professional relations.

"},{"location":"CODE_OF_CONDUCT/#work-environment","title":"Work Environment","text":"

is the set of all available means of collaboration, including, but not limited to messages to mailing lists, private correspondence, Web pages, chat channels, phone and video teleconferences, and any kind of face-to-face meetings or discussions.

"},{"location":"CODE_OF_CONDUCT/#incident-procedure","title":"Incident Procedure","text":"

To report incidents or to appeal reports of incidents, send email to Mike Dolan (mdolan@linuxfoundation.org) or Angela Brown (angela@linuxfoundation.org). Please include any available relevant information, including links to any publicly accessible material relating to the matter. Every effort will be taken to ensure a safe and collegial environment in which to collaborate on matters relating to the Project. In order to protect the community, the Project reserves the right to take appropriate action, potentially including the removal of an individual from any and all participation in the project. The Project will work towards an equitable resolution in the event of a misunderstanding.

"},{"location":"CODE_OF_CONDUCT/#credits","title":"Credits","text":"

This code is based on the W3C\u2019s Code of Ethics and Professional Conduct with some additions from the Cloud Foundry\u2019s Code of Conduct.

"},{"location":"CONTRIBUTING/","title":"How to contribute","text":"

You are encouraged to contribute to the repository by forking and submitting a pull request.

For significant changes, please open an issue first to discuss the proposed changes to avoid re-work.

(If you are new to GitHub, you might start with a basic tutorial and check out a more detailed guide to pull requests.)

Pull requests will be evaluated by the repository guardians on a schedule and if deemed beneficial will be committed to the main branch. Pull requests should have a descriptive name, include a summary of all changes made in the pull request description, and include unit tests that provide good coverage of the feature or fix. A Continuous Integration (CI) pipeline is executed on all PRs before review and contributors are expected to address all CI issues identified. Where appropriate, PRs that impact the end-user and developer demos in the repo should include updates or extensions to those demos to cover the new capabilities.

If you would like to propose a significant change, please open an issue first to discuss the work with the community.

Contributions are made pursuant to the Developer's Certificate of Origin, available at https://developercertificate.org, and licensed under the Apache License, version 2.0 (Apache-2.0).
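
If you want each commit to carry an explicit sign-off for the Developer's Certificate of Origin, git can add the Signed-off-by trailer for you (whether the CI requires this trailer is a repository-level detail to confirm; the message shown is illustrative):

# -s appends a Signed-off-by trailer built from your git user.name/user.email.\ngit commit -s -m \"fix: correct typo in demo README\"\n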

"},{"location":"CONTRIBUTING/#development-tools","title":"Development Tools","text":""},{"location":"CONTRIBUTING/#pre-commit","title":"Pre-commit","text":"

A configuration for pre-commit is included in this repository. This is an optional tool to help contributors commit code that follows the formatting requirements enforced by the CI pipeline. Additionally, it can be used to help contributors write descriptive commit messages that can be parsed by changelog generators.

On each commit, pre-commit hooks will run that verify the committed code complies with ruff and is formatted with black. To install the ruff and black checks:

pre-commit install\n

To install the commit message linter:

pre-commit install --hook-type commit-msg\n
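
For example, assuming the commit message linter follows the Conventional Commits style (an assumption -- check the repository's pre-commit configuration for the exact rules), a passing commit message might look like:

# Illustrative message only: a type prefix (\"fix\", \"feat\", \"docs\", ...) plus a short summary.\ngit commit -m \"docs: clarify pre-commit setup instructions\"\n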
"},{"location":"MAINTAINERS/","title":"Maintainers","text":""},{"location":"MAINTAINERS/#maintainer-scopes-github-roles-and-github-teams","title":"Maintainer Scopes, GitHub Roles and GitHub Teams","text":"

Maintainers are assigned the following scopes in this repository:

| Scope | Definition | GitHub Role | GitHub Team |
| --- | --- | --- | --- |
| Admin | | Admin | aries-admins |
| Maintainer | The GitHub Maintain role | Maintain | aries-cloudagent-python committers |
| Triage | The GitHub Triage role | Triage | aries triage |
| Read | The GitHub Read role | Read | Aries Contributors |
| Read | The GitHub Read role | Read | TOC |
| Read | The GitHub Read role | Read | aries-framework-go-ext committers |
"},{"location":"MAINTAINERS/#active-maintainers","title":"Active Maintainers","text":"
| GitHub ID | Name | Scope | LFID | Discord ID | Email | Company Affiliation |
| --- | --- | --- | --- | --- | --- | --- |
| andrewwhitehead | Andrew Whitehead | Admin | | | cywolf@gmail.com | BC Gov |
| dbluhm | Daniel Bluhm | Admin | | | daniel@indicio.tech | Indicio PBC |
| dhh1128 | Daniel Hardman | Admin | | | daniel.hardman@gmail.com | Provident |
| shaangill025 | Shaanjot Gill | Maintainer | | | gill.shaanjots@gmail.com | BC Gov |
| swcurran | Stephen Curran | Admin | | | swcurran@cloudcompass.ca | BC Gov |
| TelegramSam | Sam Curren | Maintainer | | | telegramsam@gmail.com | Indicio PBC |
| TimoGlastra | Timo Glastra | Admin | | | timo@animo.id | Animo Solutions |
| WadeBarnes | Wade Barnes | Admin | | | wade@neoterictech.ca | BC Gov |
| usingtechnology | Jason Sherman | Maintainer | | | tools@usingtechnolo.gy | BC Gov |
"},{"location":"MAINTAINERS/#emeritus-maintainers","title":"Emeritus Maintainers","text":"
| Name | GitHub ID | Scope | LFID | Discord ID | Email | Company Affiliation |
| --- | --- | --- | --- | --- | --- | --- |
"},{"location":"MAINTAINERS/#the-duties-of-a-maintainer","title":"The Duties of a Maintainer","text":"

Maintainers are expected to perform the following duties for this repository. The duties are listed in more or less priority order:

  • Review, respond, and act on any security vulnerabilities reported against the repository.
  • Review, provide feedback on, and merge or reject GitHub Pull Requests from Contributors.
  • Review, triage, comment on, and close GitHub Issues submitted by Contributors.
  • When appropriate, lead/facilitate architectural discussions in the community.
  • When appropriate, lead/facilitate the creation of a product roadmap.
  • Create, clarify, and label issues to be worked on by Contributors.
  • Ensure that there is a well defined (and ideally automated) product test and release pipeline, including the publication of release artifacts.
  • When appropriate, execute the product release process.
  • Maintain the repository CONTRIBUTING.md file and getting started documents to give guidance and encouragement to those wanting to contribute to the product, and those wanting to become maintainers.
  • Contribute to the product via GitHub Pull Requests.
  • Monitor requests from the Hyperledger Technical Oversight Committee about the contents and management of Hyperledger repositories, such as branch handling, required files in repositories and so on.
  • Contribute to the Hyperledger Project's Quarterly Report.
"},{"location":"MAINTAINERS/#becoming-a-maintainer","title":"Becoming a Maintainer","text":"

This community welcomes contributions. Interested contributors are encouraged to progress to become maintainers. To become a maintainer the following steps occur, roughly in order.

  • The proposed maintainer establishes their reputation in the community, including authoring five (5) significant merged pull requests, and expresses an interest in becoming a maintainer for the repository.
  • A PR is created to update this file to add the proposed maintainer to the list of active maintainers.
  • The PR is authored by an existing maintainer or has a comment on the PR from an existing maintainer supporting the proposal.
  • The PR is authored by the proposed maintainer or has a comment on the PR from the proposed maintainer confirming their interest in being a maintainer.
  • The PR or comment from the proposed maintainer must include their willingness to be a long-term (more than 6 month) maintainer.
  • Once the PR and necessary comments have been received, an approval timeframe begins.
  • The PR MUST be communicated on all appropriate communication channels, including relevant community calls, chat channels and mailing lists. Comments of support from the community are welcome.
  • The PR is merged and the proposed maintainer becomes a maintainer if either:
  • Two weeks have passed since at least three (3) Maintainer PR approvals have been recorded, OR
  • An absolute majority of maintainers have approved the PR.
  • If the PR does not get the requisite PR approvals, it may be closed.
  • Once the add maintainer PR has been merged, any necessary updates to the GitHub Teams are made.
"},{"location":"MAINTAINERS/#removing-maintainers","title":"Removing Maintainers","text":"

Being a maintainer is not a status symbol or a title to be carried indefinitely. It will occasionally be necessary and appropriate to move a maintainer to emeritus status. This can occur in the following situations:

  • Resignation of a maintainer.
  • Violation of the Code of Conduct warranting removal.
  • Inactivity.
  • A general measure of inactivity will be no commits or code review comments for one reporting quarter. This will not be strictly enforced if the maintainer expresses a reasonable intent to continue contributing.
  • Reasonable exceptions to inactivity will be granted for known long term leave such as parental leave and medical leave.
  • Other circumstances at the discretion of the other Maintainers.

The process to move a maintainer from active to emeritus status is comparable to the process for adding a maintainer, outlined above. In the case of voluntary resignation, the Pull Request can be merged following a maintainer PR approval. If the removal is for any other reason, the following steps SHOULD be followed:

  • A PR is created to update this file to move the maintainer to the list of emeritus maintainers.
  • The PR is authored by, or has a comment supporting the proposal from, an existing maintainer or Hyperledger GitHub organization administrator.
  • Once the PR and necessary comments have been received, the approval timeframe begins.
  • The PR MAY be communicated on appropriate communication channels, including relevant community calls, chat channels and mailing lists.
  • The PR is merged and the maintainer transitions to maintainer emeritus if:
  • The PR is approved by the maintainer to be transitioned, OR
  • Two weeks have passed since at least three (3) Maintainer PR approvals have been recorded, OR
  • An absolute majority of maintainers have approved the PR.
  • If the PR does not get the requisite PR approvals, it may be closed.

Returning to active status from emeritus status uses the same steps as adding a new maintainer. Note that the emeritus maintainer already has the 5 required significant changes as there is no contribution time horizon for those.

"},{"location":"PUBLISHING/","title":"How to Publish a New Version","text":"

The code to be published should be in the main branch. Make sure that all the PRs to go into the release are merged, and decide on the release tag: should it be a release candidate or the final tag, and should it be a major, minor or patch release, per semver rules?

Once ready to do a release, create a local branch that includes the following updates:

  1. Create a PR branch from an updated main branch.

  2. Update the CHANGELOG.md to add the new release. Only create a new section when working on the first release candidate for a new release. When transitioning from one release candidate to the next, or to an official release, just update the title and date of the change log section.

  3. Include details of the merged PRs included in this release. General process to follow:

  4. Gather the set of PRs since the last release and put them into a list. A good tool to use for this is the github-changelog-generator. Steps:

  5. Create a read-only GitHub token for your account on this page: https://github.com/settings/tokens with a scope of repo / public_repo.
  6. Use a command like the following, adjusting the tag parameters as appropriate. docker run -it --rm -v \"$(pwd)\":/usr/local/src/your-app githubchangeloggenerator/github-changelog-generator --user hyperledger --project aries-cloudagent-python --output 0.11.0rc2.md --since-tag 0.10.4 --future-release 0.11.1rc2 --release-branch main --token <your-token>
  7. In the generated file, use only the PR list -- we don't include the list of closed issues in the Change Log.

In some cases, the approach above fails because of too many API calls. An alternate approach to getting the list of PRs in the right format is to use OpenAI ChatGPT.

Prepare the following ChatGPT request. Don't hit enter yet--you have to add the data.

Generate from this the github pull request number, the github id of the author and the title of the pull request in a tab-delimited list

Get a list of the merged PRs since the last release by displaying the PR list in the GitHub UI, highlighting/copying the PRs and pasting them below the ChatGPT request, one page after another. Hit <Enter>, let the AI magic work, and you should have a list of the PRs in a nice table with a Copy link that you should click.

Once you have that, open this Google Sheet and highlight the A1 cell and paste in the ChatGPT data. A formula in column E will have the properly formatted changelog entries. Double check the list with the GitHub UI to make sure that ChatGPT isn't messing with you and you have the needed data.

If using ChatGPT doesn't appeal to you, try this scary sed/command line approach:

  • Put the following commands into a file called changelog.sed
/Approved/d\n/updated /d\n/^$/d\n/^ [0-9]/d\ns/was merged.*//\n/^@/d\ns# by \\(.*\\) # [\\1](https://github.com/\\1)#\ns/^ //\ns#  \\#\\([0-9]*\\)# [\\#\\1](https://github.com/hyperledger/aries-cloudagent-python/pull/\\1) #\ns/  / /g\n/^Version/d\n/tasks done/d\ns/^/- /\n
  • Navigate in your browser to the paged list of PRs merged since the last release (using in the GitHub UI a filter such as is:pr is:merged sort:updated merged:>2022-04-07) and for each page, highlight, and copy the text of only the list of PRs on the page to use in the following step.
  • For each page, run the command sed -e :a -e '$!N;s/\\n#/ #/;ta' -e 'P;D' <<EOF | sed -f changelog.sed, paste in the copied text and then type EOF. Redirect the output to a file, appending each page of output to the file.
  • The first sed command in the pipeline merges the PR title and PR number plus author lines onto a single line. The commands in the changelog.sed file just clean up the data, removing unwanted lines, etc.
  • At the end of that process, you should have a list of all of the PRs in a form you can use in the CHANGELOG.md file.
  • To verify you have the right number of PRs, you can do a wc of the file and there should be one line per PR. You should scan the file as well, looking for anomalies, such as missing \\s before # characters. It's a pretty ugly process.
  • Using a curl command and the GitHub API is probably a much better and more robust way to do this, but this was quick and dirty...
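
As a starting sketch for that more robust route, the public GitHub REST API can produce the same tab-delimited "PR number, author, title" list as the ChatGPT approach; the date, token variable, and page size below are illustrative assumptions:

# List merged PRs (number, author, title) since an illustrative date.\n# Requires curl and jq; results past 100 PRs need the API's page parameter.\ncurl -s -H \"Authorization: Bearer $GITHUB_TOKEN\" \\\n  \"https://api.github.com/repos/hyperledger/aries-cloudagent-python/pulls?state=closed&base=main&per_page=100\" \\\n  | jq -r '.[] | select(.merged_at != null and .merged_at > \"2022-04-07\") | [.number, .user.login, .title] | @tsv'\n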

Once you have the list of PRs:

  • Organize the list into suitable categories, update (if necessary) the PR description and add notes to clarify the changes. See previous release entries to understand the style -- a format that should help developers.
  • Add a narrative about the release above the PR that highlights what has gone into the release.

  • Check to see if there are any other PRs that should be included in the release.

  • Update the ReadTheDocs in the /docs folder by following the instructions in the ./UpdateRTD.md file. That will likely add a number of new and modified files to the PR. Eliminate all of the errors in the generation process, either by mocking external dependencies or by fixing ACA-Py code. If necessary, create an issue with the errors and assign it to the appropriate developer. Experience has demonstrated to us that documentation generation errors should be fixed in the code.

  • Search across the repository for the previous version number and update it everywhere that makes sense. The CHANGELOG.md entry for the previous release is a likely exception, and the pyproject.toml in the root MUST be updated. You can skip updating the files in the open-api folder (although it won't hurt), as they will be automagically updated by the next step in publishing; a quick git grep sketch for this search follows the regeneration command below. The incremented version number MUST adhere to the Semantic Versioning Specification based on the changes since the last published release. For Release Candidates, the form of the tag is \"0.11.0rc2\". As of release 0.11.0 we have dropped the previously used - in the release candidate version string to better follow the semver rules.

  • Regenerate openapi.json and swagger.json by running ../scripts/generate-open-api-spec from within the aries_cloudagent folder.

Command: cd aries_cloudagent;../scripts/generate-open-api-spec;cd ..
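
As referenced above, a quick, illustrative way to run the version-number search (the version string here is an example only):

# Find lingering occurrences of the previous version; the CHANGELOG.md entry\n# for that release is expected to match and should be left as-is.\ngit grep -n \"0.10.4\"\n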

  1. Double check all of these steps above, and then submit a PR from the branch. Add this new PR to CHANGELOG.md so that all the PRs are included. If there are still further changes to be merged, mark the PR as \"Draft\", repeat ALL of the steps again, then mark the PR as ready and wait until it is merged. It's embarrassing when you have to do a whole new release just because you missed something silly...I know!

  2. Immediately after it is merged, create a new GitHub tag representing the version. The tag name and title of the release should be the same as the version in pyproject.toml. Use the \"Generate Release Notes\" capability to get a sequential listing of the PRs in the release, to complement the manually curated Changelog. Verify on PyPi that the version is published.

  3. New images for the release are automatically published by the GitHub Actions workflows publish.yml and publish-indy.yml. The actions are triggered when a release is tagged, so no manual action is needed. The images are published in the Hyperledger Package Repository under aries-cloudagent-python, and a link to the packages is added to the repository's main page (under \"Packages\").

Additional information about the container image publication process can be found in the document Container Images and Github Actions.

  1. Update the ACA-Py Read The Docs site by building the new \"latest\" (main branch) and activating and building the new release. Appropriate permissions are required to publish the new documentation version.

  2. Update the https://aca-py.org website with the latest documentation by creating a PR and tag of the latest documentation from this site. Details are provided in the aries-acapy-docs repository.

"},{"location":"SECURITY/","title":"Hyperledger Security Policy","text":""},{"location":"SECURITY/#reporting-a-security-bug","title":"Reporting a Security Bug","text":"

If you think you have discovered a security issue in any of the Hyperledger projects, we'd love to hear from you. We take all security bugs seriously; if confirmed upon investigation, we will patch the issue within a reasonable amount of time and release a public security bulletin discussing the impact and crediting the discoverer.

There are two ways to report a security bug. The easiest is to email a description of the flaw and any related information (e.g. reproduction steps, version) to security at hyperledger dot org.

The other way is to file a confidential security bug in our JIRA bug tracking system. Be sure to set the \u201cSecurity Level\u201d to \u201cSecurity issue\u201d.

The process by which the Hyperledger Security Team handles security bugs is documented further in our Defect Response page on our wiki.

"},{"location":"UpdateRTD/","title":"Managing Aries Cloud Agent Python Read The Docs Documentation","text":"

This document describes how to maintain the Read The Docs documentation that is generated from the ACA-Py code base. As the structure of the ACA-Py code evolves, the RTD files need to be regenerated and possibly updated, as described here.

"},{"location":"UpdateRTD/#generating-aca-py-read-the-docs-rtd-documentation","title":"Generating ACA-Py Read The Docs (RTD) documentation","text":""},{"location":"UpdateRTD/#before-you-start","title":"Before you start","text":"

To generate and view the RTD documentation locally for testing, you must install Sphinx and the Sphinx RTD theme. Follow the instructions on the respective pages to install and verify the installation on your system.

"},{"location":"UpdateRTD/#generate-module-files","title":"Generate Module Files","text":"

To rebuild the project and settings from scratch (you'll need to move the generated index file up a level):

rm -rf generated; sphinx-apidoc -f -M -o ./generated ../aries_cloudagent/ $(find ../aries_cloudagent/ -name '*tests*')

Note that the find command is used to exclude any of the test python files from the RTD documentation.

Check the git status in your repo to see if the generator updates, adds or removes any existing RTD modules.

"},{"location":"UpdateRTD/#reviewing-the-files-locally","title":"Reviewing the files locally","text":"

To auto-generate the module documentation locally run:

sphinx-build -b html -a -E -c ./ ./ ./_build\n

Once generated, go into the _build folder and open index.html in a browser. Note that the _build folder is .gitignore'd and so will not be part of a git push.

"},{"location":"UpdateRTD/#look-for-errors","title":"Look for Errors","text":"

This is the hard part: looking for errors in docstrings added by devs. Some tips:

  • Missing imports (No module named 'async_timeout') can be solved by adding the module to the list of autodoc_mock_imports in the conf.py file in the ACA-Py docs folder.
  • Ignore any errors in .md files
  • Ignore the warnings about including docs/README.md
  • Ignore any dist-package errors

Other than that, please investigate and fix things that you find. If there are fixes, it's usually to adhere to the rules around processing docstrings, and especially around JSON samples.

"},{"location":"UpdateRTD/#checking-for-missing-modules","title":"Checking for missing modules","text":"

The file index.rst in the ACA-Py docs folder drives the RTD generation. It picks up all the modules in the source code, starting from the root ../aries_cloudagent folder. However, some modules are not picked up automatically from the root and have to be manually added to index.rst. To do that:

  • Get a list of all generated modules by running: ls generated | grep \"aries_cloudagent.[a-z]*.rst\"
  • Compare that list with the modules listed in the \"Subpackages\" section of the left side menu in your browser, including any listed below the \"Submodules\".

If any are missing, you likely need to add them to the index.rst file in the toctree section of the file. You will see there are already several instances of that, notably \"connections\" and \"protocols\".

"},{"location":"UpdateRTD/#updating-the-readthedocsorg-site","title":"Updating the readthedocs.org site","text":"

The RTD documentation is not currently auto-generated, so a manual re-generation of the documentation is still required.

TODO: Automate this when new tags are applied to the repository.

"},{"location":"aca-py.org/","title":"Welcome!","text":"

Welcome to the Aries Cloud Agent Python documentation site. On this site you will find documentation for recent releases of ACA-Py. You'll find a few of the older versions of ACA-Py (pre-0.8.0), all versions since 0.8.0, and the main branch, which is the latest and greatest.

All of the documentation here is extracted from the Aries Cloud Agent Python repository. If you want to contribute to the documentation, please start there.

Ready to go? Scan the tabs in the page header to find the documentation you need now!

"},{"location":"aca-py.org/#code-internals-documentation","title":"Code Internals Documentation","text":"

In addition to this documentation site, the ACA-Py community also maintains an ACA-Py internals documentation site. The internals documentation consists of the docstrings extracted from the ACA-Py Python code and covers all of the (non-test) modules in the codebase. Check it out on the Aries Cloud Agent-Python ReadTheDocs site. As with this site, the ReadTheDocs documentation is version specific.

Got questions?

  • Join us on the Hyperledger Discord Server, in the #aries-cloudagent-python channel.
  • Add an issue in the Aries Cloud Agent Python repository.
"},{"location":"assets/","title":"Assets Folder for Documentation","text":"

Put any assets (images, source for images, videos, etc.) in this folder to be referenced in the various documents for this repo.

"},{"location":"assets/#plantuml-source-and-images","title":"Plantuml Source and Images","text":"

Plantuml diagrams are stored in this folder in source form in files ending in .puml and are generated manually using the ./genPlantuml script. The script uses a docker image from docker-hub and can be run without downloading any dependencies.

If you don't want to use the script, download plantuml and a command line utility and use that for the plantuml generation. I preferred not to add any dependencies (other than docker) and couldn't find a nice way to run plantuml headless from the command line.
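
For a one-off headless render without the script, something like the following should work, assuming the plantuml/plantuml image on Docker Hub (an assumption -- the ./genPlantuml script may pin a different image, so check it first):

# Render a diagram to PNG; myDiagram.puml is a placeholder file name.\ndocker run --rm -v \"$(pwd)\":/work -w /work plantuml/plantuml -tpng myDiagram.puml\n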

"},{"location":"assets/#to-do","title":"To Do","text":"

It would be better to use a local Dockerfile vs. one found on Docker Hub. The one I did find was simple and straightforward.

I couldn't tell if the svg generation was working so just went with png. Not sure which would be better.

"},{"location":"demo/","title":"Aries Cloud Agent Python (ACA-Py) Demos","text":"

There are several demos available for ACA-Py, mostly (but not only) aimed at developers learning how to deploy an instance of the agent and an ACA-Py controller to implement an application.

"},{"location":"demo/#table-of-contents","title":"Table of Contents","text":"
  • The Alice/Faber Python demo
  • Running in a Browser
  • Running in Docker
  • Running Locally
    • Installing Prerequisites
    • Start a local Indy ledger
    • Genesis File handling
    • Run a local Postgres instance
    • Optional: Run a von-network ledger browser
    • Run the Alice and Faber Controllers/Agents
  • Follow The Script
    • Exchanging Messages
    • Issuing and Proving Credentials
  • Additional Options in the Alice/Faber demo
  • Revocation
  • DID Exchange
  • Endorser
  • Run Indy-SDK Backend
  • Mediation
  • Multi-ledger
  • Multi-tenancy
  • Multi-tenancy with Mediation!!!
  • Other Environment Settings
  • Learning about the Alice/Faber code
  • OpenAPI (Swagger) Demo
  • Performance Demo
  • Coding Challenge: Adding ACME
"},{"location":"demo/#the-alicefaber-python-demo","title":"The Alice/Faber Python demo","text":"

The Alice/Faber demo is the (in)famous first verifiable credentials demo. Alice, a former student of Faber College (\"Knowledge is Good\"), connects with the College, is issued a credential about her degree and then is asked by the College for a proof. There are a variety of ways of running the demo. The easiest is in your browser using a site (\"Play with VON\") that lets you run docker containers without installing anything. Alternatively, you can run locally on docker (our recommendation), or using python on your local machine. Each approach is covered below.

"},{"location":"demo/#running-in-a-browser","title":"Running in a Browser","text":"

In your browser, go to the docker playground service Play with Docker. On the title screen, click \"Start\". On the next screen, click (in the left menu) \"+Add a new instance\". That will start up a terminal in your browser. Run the following commands to start the Faber agent:

git clone https://github.com/hyperledger/aries-cloudagent-python\ncd aries-cloudagent-python/demo\nLEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber\n

Now to start Alice's agent. Click the \"+Add a new instance\" button again to open another terminal session. Run the following commands to start Alice's agent:

git clone https://github.com/hyperledger/aries-cloudagent-python\ncd aries-cloudagent-python/demo\nLEDGER_URL=http://test.bcovrin.vonx.io ./run_demo alice\n

Alice's agent is now running.

Jump to the Follow the Script section below for further instructions.

"},{"location":"demo/#running-in-docker","title":"Running in Docker","text":"

Running the demo in docker requires having a von-network (a Hyperledger Indy public ledger sandbox) instance running in docker locally. See the VON Network Tutorial for guidance on starting and stopping your own local Hyperledger Indy instance.

Open three bash shells. For Windows users, git-bash is highly recommended. bash is the default shell in Linux and Mac terminal sessions. For Mac users on the newer M1/2/3 Apple Silicon devices, make sure that you install Apple's Rosetta 2 software, using these installation instructions from Apple, and this even more useful guidance on how to install Rosetta 2 from the command line, which amounts to running this MacOS command: softwareupdate --install-rosetta.

In the first terminal window, start von-network by following the Building and Starting instructions.

In the second terminal, change directory into the demo directory of your clone of the Aries Cloud Agent Python repository. Start the faber agent by issuing the following command:

  ./run_demo faber\n

In the third terminal, change directory into the demo directory of your clone of the Aries Cloud Agent Python repository. Start the alice agent by issuing the following command:

  ./run_demo alice\n

Jump to the Follow the Script section below for further instructions.

"},{"location":"demo/#running-locally","title":"Running Locally","text":"

The following is an approach to running the Alice and Faber demo using Python3 running on a bare machine. There are other ways to run the components, but this covers the general approach.

We don't recommend this approach if you are just trying this demo, as you will likely run into issues with the specific setup of your machine.

"},{"location":"demo/#installing-prerequisites","title":"Installing Prerequisites","text":"

We assume you have a running Python 3 environment. To install the prerequisites specific to running the agent/controller examples in your Python environment, run the following command from this repo's demo folder. The precise command to run may vary based on your Python environment setup.

pip3 install -r demo/requirements.txt\n

While that process will include the installation of the Indy python prerequisite, you still have to build and install the libindy code for your platform. Follow the installation instructions in the indy-sdk repo for your platform.

"},{"location":"demo/#start-a-local-indy-ledger","title":"Start a local Indy ledger","text":"

Start a local von-network Hyperledger Indy network running in Docker by following the VON Network Building and Starting instructions.

We strongly recommend you use Docker for the local Indy network until you really, really need to know the details of running an Indy Node instance on a bare machine.

"},{"location":"demo/#genesis-file-handling","title":"Genesis File handling","text":"

Assuming you followed our advice and are using a VON Network instance of Hyperledger Indy, you can ignore this section. If you started the Indy ledger without using VON Network, this information might be helpful.

An Aries agent (or other client) connecting to an Indy ledger must know the contents of the genesis file for the ledger. The genesis file lets the agent/client know the IP addresses of the initial nodes of the ledger, and the agent/client sends ledger requests to those IP addresses. When using the indy-sdk ledger, look for the instructions in that repo for how to find/update the ledger genesis file, and note the path to that file on your local system.

The environment variable GENESIS_FILE is used to let the Aries demo agents know the location of the genesis file. Use the path to that file as the value of the GENESIS_FILE environment variable in the instructions below. You might want to copy that file to be local to the demo so the path is shorter.

"},{"location":"demo/#run-a-local-postgres-instance","title":"Run a local Postgres instance","text":"

The demo uses a postgres database for wallet persistence. Use the Docker Hub certified postgres image to start up a postgres instance to be used for the wallet storage:

docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d -p 5432:5432 postgres -c 'log_statement=all' -c 'logging_collector=on' -c 'log_destination=stderr'\n
"},{"location":"demo/#optional-run-a-von-network-ledger-browser","title":"Optional: Run a von-network ledger browser","text":"

If you followed our advice and are using a VON Network instance of Hyperledger Indy, you can ignore this section, as you already have a Ledger browser running, accessible on http://localhost:9000.

If you started the Indy ledger without using VON Network, and you want to be able to browse your local ledger as you run the demo, clone the von-network repo, go into the root of the cloned instance and run the following command, replacing the /path/to/local-genesis.txt with a path to the same genesis file as was used in starting the ledger.

GENESIS_FILE=/path/to/local-genesis.txt PORT=9000 REGISTER_NEW_DIDS=true python -m server.server\n
"},{"location":"demo/#run-the-alice-and-faber-controllersagents","title":"Run the Alice and Faber Controllers/Agents","text":"

With the rest of the pieces running, you can run the Alice and Faber controllers and agents. To do so, cd into the demo folder of your clone of this repo in two terminal windows.

If you are using a VON Network instance of Hyperledger Indy, run the following commands:

DEFAULT_POSTGRES=true python3 -m runners.faber --port 8020\n
DEFAULT_POSTGRES=true python3 -m runners.alice --port 8030\n

If you started the Indy ledger without using VON Network, use the following commands, replacing the /path/to/local-genesis.txt with the one for your configuration.

GENESIS_FILE=/path/to/local-genesis.txt DEFAULT_POSTGRES=true python3 -m runners.faber --port 8020\n
GENESIS_FILE=/path/to/local-genesis.txt DEFAULT_POSTGRES=true python3 -m runners.alice --port 8030\n

Note that Alice and Faber will each use 5 ports, e.g., using the parameter ... --port 8020 actually uses ports 8020 through 8024. Feel free to use different ports if you want.
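
For example, moving Faber to a different base port reserves the five ports starting at that base:

# Builds on the VON Network command above; uses ports 8040 through 8044.\nDEFAULT_POSTGRES=true python3 -m runners.faber --port 8040\n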

Everything running? See the Follow the Script section below for further instructions.

If the demo fails with an error that references the genesis file, a timeout connecting to the Indy Pool, or an Indy 307 error, it's likely a problem with the genesis file handling. Things to check (a couple of quick command-line checks follow this list):

  • Review the instructions for running the ledger with indy-sdk. Is it running properly?
  • Is the /path/to/local-genesis.txt file correct in your start commands?
  • Look at the IP addresses in the genesis file you are using, and make sure that those IP addresses are accessible from the location you are running the Aries demo
  • Check to make sure that all of the nodes of the ledger started. We've seen examples of only some of the nodes starting up, triggering an Indy 307 error.
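
A couple of quick, illustrative checks for the genesis file issues above (the path is the same placeholder used earlier, and the port assumes a von-network ledger browser on 9000):

# Which node IPs does the genesis file point at? Confirm they are reachable.\ngrep -o '\"client_ip\": *\"[^\"]*\"' /path/to/local-genesis.txt\n# If you are running von-network, its ledger browser serves the genesis file too.\ncurl -s http://localhost:9000/genesis | head -n 4\n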
"},{"location":"demo/#follow-the-script","title":"Follow The Script","text":"

With both the Alice and Faber agents started, go to the Faber terminal window. The Faber agent has created and displayed an invitation. Copy this invitation and paste it at the Alice prompt. The agents will connect and then show a menu of options:

Faber:

    (1) Issue Credential\n    (2) Send Proof Request\n    (3) Send Message\n    (4) Create New Invitation\n    (T) Toggle tracing on credential/proof exchange\n    (X) Exit?\n

Alice:

    (3) Send Message\n    (4) Input New Invitation\n    (X) Exit?\n
"},{"location":"demo/#exchanging-messages","title":"Exchanging Messages","text":"

Feel free to use the \"3\" option to send messages back and forth between the agents. Fun, eh? Those are secure, end-to-end encrypted messages.

"},{"location":"demo/#issuing-and-proving-credentials","title":"Issuing and Proving Credentials","text":"

When ready to test the credentials exchange protocols, go to the Faber prompt, enter \"1\" to send a credential, and then \"2\" to request a proof.

You don't need to do anything with Alice's agent - her agent is implemented to automatically receive credentials and respond to proof requests.

Note there is an option \"2a\" to initiate a connectionless proof - you can execute this option but it will only work end-to-end when connecting to Faber from a mobile agent.

"},{"location":"demo/#additional-options-in-the-alicefaber-demo","title":"Additional Options in the Alice/Faber demo","text":"

You can enable support for various ACA-Py features by providing additional command-line arguments when starting up alice or faber.

Note that when the controller starts up the agent, it prints out the ACA-Py startup command with all parameters - you can inspect this command to see what parameters are provided in each case. For more details on the parameters, just start ACA-Py with the --help parameter, for example:

./scripts/run_docker start --help\n
"},{"location":"demo/#revocation","title":"Revocation","text":"

To enable support for revoking credentials, run the faber demo with the --revocation option:

./run_demo faber --revocation\n

Note that you don't specify this option with alice because it's only applicable for the credential issuer (who has to enable revocation when creating a credential definition, and explicitly revoke credentials as appropriate; alice doesn't have to do anything special when revocation is enabled).

You need to run an AnonCreds revocation registry tails server in order to support revocation - the details are described in the Alice gets a Phone demo instructions.

Faber will set up support for revocation automatically, and you will see extra options in faber's menu to revoke a credential:

    (1) Issue Credential\n    (2) Send Proof Request\n    (3) Send Message\n    (4) Create New Invitation\n    (5) Revoke Credential\n    (6) Publish Revocations\n    (7) Rotate Revocation Registry\n    (8) List Revocation Registries\n    (T) Toggle tracing on credential/proof exchange\n    (X) Exit?\n

When you issue a credential, make a note of the Revocation registry ID and Credential revocation ID:

Faber      | Revocation registry ID: WGmUNAdH2ZfeGvacFoMVVP:4:WGmUNAdH2ZfeGvacFoMVVP:3:CL:38:Faber.Agent.degree_schema:CL_ACCUM:15ca49ed-1250-4608-9e8f-c0d52d7260c3\nFaber      | Credential revocation ID: 1\n

When you revoke a credential you will need to provide those values:

[1/2/3/4/5/6/7/8/T/X] 5\n\nEnter revocation registry ID: WGmUNAdH2ZfeGvacFoMVVP:4:WGmUNAdH2ZfeGvacFoMVVP:3:CL:38:Faber.Agent.degree_schema:CL_ACCUM:15ca49ed-1250-4608-9e8f-c0d52d7260c3\nEnter credential revocation ID: 1\nPublish now? [Y/N]: y\n
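
You can also revoke a credential directly via the admin API's /revocation/revoke endpoint. A minimal sketch using the IDs from the example above (the admin port 8021 is an assumption based on the demo's default setup):

curl -X POST \"http://localhost:8021/revocation/revoke\" -H \"Content-Type: application/json\" -d '{\"rev_reg_id\": \"WGmUNAdH2ZfeGvacFoMVVP:4:WGmUNAdH2ZfeGvacFoMVVP:3:CL:38:Faber.Agent.degree_schema:CL_ACCUM:15ca49ed-1250-4608-9e8f-c0d52d7260c3\", \"cred_rev_id\": \"1\", \"publish\": true}'\n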

Note that you need to publish the revocation information to the ledger. Once you've revoked a credential, any proof which uses this credential will fail to verify.

Rotating the revocation registry will decommission any \"ready\" registry records and create 2 new registry records. You can watch in the logs as the records are created and transition to 'active'. There should always be 2 'active' revocation registries - one working and one for hot-swap. Note that revocation information can still be published from decommissioned registries.

You can also list the created registries, filtering by current state: 'init', 'generated', 'posted', 'active', 'full', 'decommissioned'.

"},{"location":"demo/#did-exchange","title":"DID Exchange","text":"

You can enable DID Exchange using the --did-exchange parameter for the alice and faber demos.

This will use the new DID Exchange protocol when establishing connections between the agents, rather than the older Connection protocol. There is no other effect on the operation of the agents.

With DID Exchange, you can also enable use of the inviter's public DID for invitations, multi-use invitations, connection re-use, and use of qualified DIDs:

  • --public-did-connections - use the inviter's public DID in invitations, and allow use of implicit invitations
  • --reuse-connections - support connection re-use (the invitee will reuse an existing connection if it uses the same DID as in the new invitation)
  • --multi-use-invitations - the inviter will issue multi-use invitations
  • --emit-did-peer-4 - participants will prefer use of did:peer:4 for their pairwise connection DIDs
  • --emit-did-peer-2 - participants will prefer use of did:peer:2 for their pairwise connection DIDs

"},{"location":"demo/#endorser","title":"Endorser","text":"

This is described in Endorser.md.

"},{"location":"demo/#run-indy-sdk-backend","title":"Run Indy-SDK Backend","text":"

This runs using the older (and not recommended) indy-sdk libraries instead of Aries Askar:

./run_demo faber --wallet-type indy\n

"},{"location":"demo/#mediation","title":"Mediation","text":"

To enable mediation, run the alice or faber demo with the --mediation option:

./run_demo faber --mediation\n

This will start up a \"mediator\" agent with Alice or Faber and automatically set the alice/faber connection to use the mediator.

"},{"location":"demo/#multi-ledger","title":"Multi-ledger","text":"

To enable multiple ledger mode, run the alice or faber demo with the --multi-ledger option:

./run_demo faber --multi-ledger\n

The configuration file for setting up multiple ledgers (for the demo) can be found at ./demo/multiple_ledger_config.yml.

"},{"location":"demo/#multi-tenancy","title":"Multi-tenancy","text":"

To enable support for multi-tenancy, run the alice or faber demo with the --multitenant option:

./run_demo faber --multitenant\n

(This option can be used with both (or either) alice and/or faber.)

You will see an additional menu option to create new sub-wallets (or they can be considered to be \"virtual agents\").

Faber:

    (1) Issue Credential\n    (2) Send Proof Request\n    (3) Send Message\n    (4) Create New Invitation\n    (W) Create and/or Enable Wallet\n    (T) Toggle tracing on credential/proof exchange\n    (X) Exit?\n

Alice:

    (3) Send Message\n    (4) Input New Invitation\n    (W) Create and/or Enable Wallet\n    (X) Exit?\n

When you create a new wallet, you just need to provide the wallet name. (If you provide the name of an existing wallet then the controller will \"activate\" that wallet and make it the current wallet.)

[1/2/3/4/W/T/X] w\n\nEnter wallet name: new_wallet_12\n\nFaber      | Register or switch to wallet new_wallet_12\nFaber      | Created new profile\nFaber      | Profile backend: indy\nFaber      | Profile name: new_wallet_12\nFaber      | No public DID\n... etc\n

Note that faber will create a public DID for this wallet, and will create a schema and credential definition.

Once you have created a new wallet, you must establish a connection between alice and faber (remember that this is a new \"virtual agent\" and doesn't know anything about connections established for other \"agents\").

In faber, create a new invitation:

[1/2/3/4/W/T/X] 4\n\n(... creates a new invitation ...)\n

In alice, accept the invitation:

[1/2/3/4/W/T/X] 4\n\n(... enter the new invitation string ...)\n

You can inspect the additional multi-tenancy admin APIs (i.e. the \"agency API\") by opening either agent's swagger page in your browser:

Show me a screenshot - multi-tenancy via admin API

Note that with multi-tenancy enabled:

  • The \"base\" wallet will have access to this new \"agency API\" - the agent's admin key, if enabled, must be provided in a header
  • \"Base wallet\" API calls are handled here
  • The \"sub-wallets\" will have access to the \"normal\" ACA-Py admin API - to identify the sub-wallet, a JWT token must be provided, this token is created upon creation of the new wallet (see: this code here)
  • \"Sub-wallet\" API calls are handled here

Documentation on ACA-Py's multi-tenancy support can be found here.
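
To make that flow concrete, here is a rough sketch of the two-step pattern described above (the admin port, wallet key, and placeholder values are assumptions based on a typical demo setup; the x-api-key header is only needed if an admin key is enabled):

# create the sub-wallet via the base wallet's \"agency API\"\ncurl -X POST \"http://localhost:8021/multitenancy/wallet\" -H \"Content-Type: application/json\" -H \"x-api-key: {admin_api_key}\" -d '{\"wallet_name\": \"new_wallet_12\", \"wallet_key\": \"changeme\", \"wallet_type\": \"askar\"}'\n\n# the response includes a \"token\" - pass it as a bearer token to make sub-wallet API calls\ncurl \"http://localhost:8021/connections\" -H \"Authorization: Bearer {token}\"\n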

"},{"location":"demo/#multi-tenancy-with-mediation","title":"Multi-tenancy with Mediation!!!","text":"

There are two options for configuring mediation with multi-tenancy, documented here.

This demo implements option #2 - each sub-wallet is configured with a separate connection to the mediator.

Run the demo (Alice or Faber) specifying both options:

./run_demo faber --multitenant --mediation\n

This works exactly like vanilla multi-tenancy, except that all connections are mediated.

"},{"location":"demo/#other-environment-settings","title":"Other Environment Settings","text":"

The agents run on a pre-defined set of ports; however, occasionally your local system may already be using one of these ports. (For example, macOS recently decided to use 8021 for the ftp proxy service.)

To override the default port settings:

AGENT_PORT_OVERRIDE=8010 ./run_demo faber\n

(The agent requires up to 10 available ports.)

To pass extra arguments to the agent (for example):

DEMO_EXTRA_AGENT_ARGS=\"[\\\"--emit-did-peer-2\\\"]\" ./run_demo faber --did-exchange --reuse-connections\n

Additionally, separating the build and run functionalities in the script allows for smoother development and debugging processes. With the mounting of volumes from the host into the Docker container, code changes can be automatically reloaded without the need to repeatedly build the demo.

Build Command:

./demo/run_demo build alice --wallet-type askar-anoncreds --events\n

Run Command:

./demo/run_demo run alice --wallet-type askar-anoncreds --events\n

"},{"location":"demo/#learning-about-the-alicefaber-code","title":"Learning about the Alice/Faber code","text":"

These Alice and Faber scripts (in the demo/runners folder) implement the controller and run the agent as a sub-process (see the documentation for aca-py). The controller publishes a REST service to receive webhook callbacks from its agent. Note that this architecture, running the agent as a sub-process, is a variation on the documented architecture of running the controller and agent as separate processes/containers.
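
ACA-Py delivers those webhooks as HTTP POSTs to {webhook-url}/topic/{topic}/ paths on the controller. A minimal sketch of the kind of callback the controller receives (the controller port 8022 and the payload shown are illustrative assumptions based on the demo's port layout):

# simulate the kind of POST the agent makes to the controller when a connection changes state\ncurl -X POST \"http://localhost:8022/webhooks/topic/connections/\" -H \"Content-Type: application/json\" -d '{\"connection_id\": \"{conn_id}\", \"state\": \"active\"}'\n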

The controllers for this demo can be found in the alice.py and faber.py files. Alice and Faber are instances of the agent class found in agent.py.

"},{"location":"demo/#openapi-swagger-demo","title":"OpenAPI (Swagger) Demo","text":"

Developing an ACA-Py controller is much like developing a web app that uses a REST API. As you develop, you will want an easy way to test out the behaviour of the API. That's where the industry-standard OpenAPI (aka Swagger) UI comes in. ACA-Py (optionally) exposes an OpenAPI UI that you can use to learn the ins and outs of the API. This Aries OpenAPI demo shows how you can use the OpenAPI UI with an ACA-Py agent by walking through the connection, credential issuance, and proof presentation sequence.

"},{"location":"demo/#performance-demo","title":"Performance Demo","text":"

Another example in the demo/runners folder is performance.py, which is used to test the performance of interacting agents. The script starts up agents for Alice and Faber, initializes them, and then runs through an interaction some number of times. In this case, Faber issues a credential to Alice 300 times.

To run the demo, make sure that you shut down any running Alice/Faber agents. Then, follow the same steps to start the Alice/Faber demo, but:

  • When starting the first agent, replace the agent name (e.g. faber) with performance.
  • Don't start the second agent (alice) at all.

The script starts both agents, runs the performance test, spits out performance results and shuts down the agents. Note that this is just one demonstration of how performance metrics tracking can be done with ACA-Py.

A second version of the performance test can be run by adding the parameter --routing to the invocation above. The parameter triggers the example to run with Alice using a routing agent such that all messages pass through the routing agent between Alice and Faber. This is a good, simple example of how routing can be implemented with DIDComm agents.

You can also run the demo against a postgres database using the following:

./run_demo performance --arg-file demo/postgres-indy-args.yml\n

(Obviously you need to be running a postgres database - the command to start postgres is in the yml file provided above.)

You can tweak the number of credentials issued using the --count and --batch parameters, and you can run against an Askar database using the --wallet-type askar option (or against indy-sdk using --wallet-type indy).

An example full set of options is:

./run_demo performance --arg-file demo/postgres-indy-args.yml -c 10000 -b 10 --wallet-type askar\n

Or:

./run_demo performance --arg-file demo/postgres-indy-args.yml -c 10000 -b 10 --wallet-type indy\n
"},{"location":"demo/#coding-challenge-adding-acme","title":"Coding Challenge: Adding ACME","text":"

Now that you have a solid foundation in using ACA-Py, it's time for a coding challenge. In this challenge, we extend the Alice-Faber command line demo by adding ACME Corp, a place where Alice wants to work. The demo adds:

  • ACME inviting Alice to connect
  • ACME requesting a proof of her College degree
  • ACME issuing Alice a credential after she is hired.

The framework for the code is in the acme.py file, but the code is incomplete. Using the knowledge you gained from running the demo and viewing the alice.py and faber.py code, fill in the blanks for the code. When you are ready to test your work:

  • Use the instructions above to start the Alice/Faber demo.
  • Start another terminal session and run the same commands as for \"Alice\", but replace \"alice\" with \"acme\".

All done? Check out how we added the missing code segments here.

"},{"location":"demo/AcmeDemoWorkshop/","title":"Acme Controller Workshop","text":"

In this workshop we will add some functionality to a third participant in the Alice/Faber drama - namely, Acme Inc. After completing her education at Faber College, Alice is going to apply for a job at Acme Inc. To do this she must provide proof of education (once she has completed the interview and other non-Indy tasks), and then Acme will issue her an employment credential.

Note that an updated Acme controller is available here: https://github.com/ianco/aries-cloudagent-python/tree/acme_workshop/demo if you just want to skip ahead ... There is also an alternate solution with some additional functionality available here: https://github.com/ianco/aries-cloudagent-python/tree/agent_workshop/demo

"},{"location":"demo/AcmeDemoWorkshop/#preview-of-the-acme-controller","title":"Preview of the Acme Controller","text":"

There is already a skeleton of the Acme controller in place; you can run it as follows. (Note that beyond establishing a connection it doesn't actually do anything yet.)

To run the Acme controller template, first run Alice and Faber so that Alice can prove her education experience:

Open 2 bash shells, and in each run:

git clone https://github.com/hyperledger/aries-cloudagent-python.git\ncd aries-cloudagent-python/demo\n

In one shell run Faber:

LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber\n

... and in the second shell run Alice:

LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo alice\n

When Faber has produced an invitation, copy it over to Alice.

Then, in the Faber shell, select option 1 to issue a credential to Alice. (You can select option 2 if you like, to confirm via proof.)

Then, in the Faber shell, enter X to exit the controller, and then run the Acme controller:

LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo acme\n

In the Alice shell, select option 4 (to enter a new invitation) and then copy over Acme's invitation once it's available.

Then, in the Acme shell, you can select option 2 and then option 1, which don't do anything ... yet!!!

"},{"location":"demo/AcmeDemoWorkshop/#asking-alice-for-a-proof-of-education","title":"Asking Alice for a Proof of Education","text":"

In the Acme code acme.py we are going to add code to issue a proof request to Alice, and then validate the received proof.

First, add the following import statements and constants that we will need near the top of acme.py:

import random\n\nfrom datetime import date\nfrom uuid import uuid4\n
TAILS_FILE_COUNT = int(os.getenv(\"TAILS_FILE_COUNT\", 100))\nCRED_PREVIEW_TYPE = \"https://didcomm.org/issue-credential/2.0/credential-preview\"\n

Next locate the code that is triggered by option 2:

            elif option == \"2\":\n                log_status(\"#20 Request proof of degree from alice\")\n                # TODO presentation requests\n

Replace the # TODO comment with the following code:

                req_attrs = [\n                    {\n                        \"name\": \"name\",\n                        \"restrictions\": [{\"schema_name\": \"degree schema\"}]\n                    },\n                    {\n                        \"name\": \"date\",\n                        \"restrictions\": [{\"schema_name\": \"degree schema\"}]\n                    },\n                    {\n                        \"name\": \"degree\",\n                        \"restrictions\": [{\"schema_name\": \"degree schema\"}]\n                    }\n                ]\n                req_preds = []\n                indy_proof_request = {\n                    \"name\": \"Proof of Education\",\n                    \"version\": \"1.0\",\n                    \"nonce\": str(uuid4().int),\n                    \"requested_attributes\": {\n                        f\"0_{req_attr['name']}_uuid\": req_attr\n                        for req_attr in req_attrs\n                    },\n                    \"requested_predicates\": {}\n                }\n                proof_request_web_request = {\n                    \"connection_id\": agent.connection_id,\n                    \"presentation_request\": {\"indy\": indy_proof_request},\n                }\n                # this sends the request to our agent, which forwards it to Alice\n                # (based on the connection_id)\n                await agent.admin_POST(\n                    \"/present-proof-2.0/send-request\",\n                    proof_request_web_request\n                )\n

Now we need to handle receipt of the proof. Locate the code that handles received proofs (this is in a webhook callback):

        if state == \"presentation-received\":\n            # TODO handle received presentations\n            pass\n

then replace the # TODO comment and the pass statement:

            log_status(\"#27 Process the proof provided by X\")\n            log_status(\"#28 Check if proof is valid\")\n            proof = await self.admin_POST(\n                f\"/present-proof-2.0/records/{pres_ex_id}/verify-presentation\"\n            )\n            self.log(\"Proof = \", proof[\"verified\"])\n\n            # if presentation is a degree schema (proof of education),\n            # check values received\n            pres_req = message[\"by_format\"][\"pres_request\"][\"indy\"]\n            pres = message[\"by_format\"][\"pres\"][\"indy\"]\n            is_proof_of_education = (\n                pres_req[\"name\"] == \"Proof of Education\"\n            )\n            if is_proof_of_education:\n                log_status(\"#28.1 Received proof of education, check claims\")\n                for (referent, attr_spec) in pres_req[\"requested_attributes\"].items():\n                    if referent in pres['requested_proof']['revealed_attrs']:\n                        self.log(\n                            f\"{attr_spec['name']}: \"\n                            f\"{pres['requested_proof']['revealed_attrs'][referent]['raw']}\"\n                        )\n                    else:\n                        self.log(\n                            f\"{attr_spec['name']}: \"\n                            \"(attribute not revealed)\"\n                        )\n                for id_spec in pres[\"identifiers\"]:\n                    # just print out the schema/cred def id's of presented claims\n                    self.log(f\"schema_id: {id_spec['schema_id']}\")\n                    self.log(f\"cred_def_id {id_spec['cred_def_id']}\")\n                # TODO placeholder for the next step\n            else:\n                # in case there are any other kinds of proofs received\n                self.log(\"#28.1 Received \", pres_req[\"name\"])\n

Right now this just verifies the proof received and prints out the attributes it reveals, but in \"real life\" your application could do something useful with this information.

Now you can run the Faber/Alice/Acme script from the \"Preview of the Acme Controller\" section above, and you should see Acme receive a proof from Alice!

"},{"location":"demo/AcmeDemoWorkshop/#issuing-alice-a-work-credential","title":"Issuing Alice a Work Credential","text":"

Now we can issue a work credential to Alice!

There are two options for this. We can (a) add code under option 1 to issue the credential, or (b) we can automatically issue this credential on receipt of the education proof.

We're going to do option (a), but you can try to implement option (b) as homework. You have most of the information you need from the proof response!

First though we need to register a schema and credential definition. Find this code:

        # acme_schema_name = \"employee id schema\"\n        # acme_schema_attrs = [\"employee_id\", \"name\", \"date\", \"position\"]\n        await acme_agent.initialize(\n            the_agent=agent,\n            # schema_name=acme_schema_name,\n            # schema_attrs=acme_schema_attrs,\n        )\n\n        # TODO publish schema and cred def\n

... and uncomment the code lines. Replace the # TODO comment with the following code:

        with log_timer(\"Publish schema and cred def duration:\"):\n            # define schema\n            version = format(\n                \"%d.%d.%d\"\n                % (\n                    random.randint(1, 101),\n                    random.randint(1, 101),\n                    random.randint(1, 101),\n                )\n            )\n            # register schema and cred def\n            (schema_id, cred_def_id) = await agent.register_schema_and_creddef(\n                \"employee id schema\",\n                version,\n                [\"employee_id\", \"name\", \"date\", \"position\"],\n                support_revocation=False,\n                revocation_registry_size=TAILS_FILE_COUNT,\n            )\n

For option (1) we want to replace the # TODO comment here:

            elif option == \"1\":\n                log_status(\"#13 Issue credential offer to X\")\n                # TODO credential offers\n

with the following code:

                agent.cred_attrs[cred_def_id] = {\n                    \"employee_id\": \"ACME0009\",\n                    \"name\": \"Alice Smith\",\n                    \"date\": date.isoformat(date.today()),\n                    \"position\": \"CEO\"\n                }\n                cred_preview = {\n                    \"@type\": CRED_PREVIEW_TYPE,\n                    \"attributes\": [\n                        {\"name\": n, \"value\": v}\n                        for (n, v) in agent.cred_attrs[cred_def_id].items()\n                    ],\n                }\n                offer_request = {\n                    \"connection_id\": agent.connection_id,\n                    \"comment\": f\"Offer on cred def id {cred_def_id}\",\n                    \"credential_preview\": cred_preview,\n                    \"filter\": {\"indy\": {\"cred_def_id\": cred_def_id}},\n                }\n                await agent.admin_POST(\n                    \"/issue-credential-2.0/send-offer\", offer_request\n                )\n

... and then locate the code that handles the credential request callback:

        if state == \"request-received\":\n            # TODO issue credentials based on offer preview in cred ex record\n            pass\n

... and replace the # TODO comment and pass statement with the following code to issue the credential as Acme offered it:

            # issue credentials based on offer preview in cred ex record\n            if not message.get(\"auto_issue\"):\n                await self.admin_POST(\n                    f\"/issue-credential-2.0/records/{cred_ex_id}/issue\",\n                    {\"comment\": f\"Issuing credential, exchange {cred_ex_id}\"},\n                )\n

Now you can run the Faber/Alice/Acme steps again. You should be able to receive a proof and then issue a credential to Alice.

"},{"location":"demo/AliceGetsAPhone/","title":"Alice Gets a Mobile Agent!","text":"

In this demo, we'll again use our familiar Faber ACA-Py agent to issue credentials to Alice, but this time Alice will use a mobile wallet. To do this we need to run the Faber agent on a publicly accessible port, and Alice will need a compatible mobile wallet. We'll provide pointers to where you can get them.

This demo also introduces revocation of credentials.

"},{"location":"demo/AliceGetsAPhone/#contents","title":"Contents","text":"
  • Getting Started
  • Get a mobile agent
  • Running Locally in Docker
    • Install ngrok and jq
    • Expose services publicly using ngrok
  • Running in Play With Docker
  • Run an instance of indy-tails-server
    • Running locally in a bash shell?
    • Running in Play with Docker?
  • Run faber With Extra Parameters
    • Running locally in a bash shell?
    • Running in Play with Docker?
    • Waiting for the Faber agent to start ...
  • Accept the Invitation
  • Issue a Credential
  • Accept the Credential
  • Issue a Presentation Request
  • Present the Proof
  • Review the Proof
  • Revoke the Credential and Send Another Proof Request
  • Send a Connectionless Proof Request
  • Conclusion
"},{"location":"demo/AliceGetsAPhone/#getting-started","title":"Getting Started","text":"

This demo can be run on your local machine or on Play with Docker (PWD), and will demonstrate credential exchange and proof exchange as well as revocation with a mobile agent. Both approaches (running locally and on PWD) will be described; for the most part the commands are the same, but there are a couple of different parameters you need to provide when starting up.

If you are not familiar with how revocation is currently implemented in Hyperledger Indy, this article provides a good background on the technique. A challenge with revocation as it is currently implemented in Hyperledger Indy is the need for the prover (the agent creating the proof) to download tails files associated with the credentials it holds.

"},{"location":"demo/AliceGetsAPhone/#get-a-mobile-agent","title":"Get a mobile agent","text":"

Of course for this, you need to have a mobile agent. To find, install and setup a compatible mobile agent, follow the instructions here.

"},{"location":"demo/AliceGetsAPhone/#running-locally-in-docker","title":"Running Locally in Docker","text":"

Open a new bash shell and in a project directory run the following:

git clone https://github.com/hyperledger/aries-cloudagent-python.git\ncd aries-cloudagent-python/demo\n

We'll come back to this in a minute, when we start the faber agent!

There are a couple of extra steps you need to take to prepare to run the Faber agent locally:

"},{"location":"demo/AliceGetsAPhone/#install-ngrok-and-jq","title":"Install ngrok and jq","text":"

ngrok is used to expose public endpoints for services running locally on your computer.

jq is a json parser that is used to automatically detect the endpoints exposed by ngrok.

You can install ngrok from here

You can download jq releases here

"},{"location":"demo/AliceGetsAPhone/#expose-services-publicly-using-ngrok","title":"Expose services publicly using ngrok","text":"

Note that this is only required when running docker on your local machine. When you run on PWD a public endpoint for your agent is exposed automatically.

Since the mobile agent will need some way to communicate with the agent running on your local machine in docker, we will need to create a publicly accessible url for some services on your machine. The easiest way to do this is with ngrok. Once ngrok is installed, create a tunnel to your local machine:

ngrok http 8020\n

This service is used for your local aca-py agent - it is the endpoint that is advertised for other Aries agents to connect to.

You will see something like this:

Forwarding                    http://abc123.ngrok.io -> http://localhost:8020\nForwarding                    https://abc123.ngrok.io -> http://localhost:8020\n

This creates a public url for port 8020 on your local machine.

Note that an ngrok process is created automatically for your tails server.

Keep this process running as we'll come back to it in a moment.

"},{"location":"demo/AliceGetsAPhone/#running-in-play-with-docker","title":"Running in Play With Docker","text":"

To run the necessary terminal sessions in your browser, go to the Docker playground service Play with Docker. Don't know about Play with Docker? Check this out to learn more.

Open a new bash shell and in a project directory run the following:

git clone https://github.com/hyperledger/aries-cloudagent-python.git\ncd aries-cloudagent-python/demo\n

We'll come back to this in a minute, when we start the faber agent!

"},{"location":"demo/AliceGetsAPhone/#run-an-instance-of-indy-tails-server","title":"Run an instance of indy-tails-server","text":"

For revocation to function, we need another component running that is used to store what are called tails files.

If you are not running with revocation enabled you can skip this step.

"},{"location":"demo/AliceGetsAPhone/#running-locally-in-a-bash-shell","title":"Running locally in a bash shell?","text":"

Open a new bash shell, and in a project directory, run:

git clone https://github.com/bcgov/indy-tails-server.git\ncd indy-tails-server/docker\n./manage build\n./manage start\n

This will run the required components for the tails server to function and make a tails server available on port 6543.

This will also automatically start an ngrok server that will expose a public url for your tails server - this is required to support mobile agents. The docker output will look something like this:

ngrok-tails-server_1  | t=2020-05-13T22:51:14+0000 lvl=info msg=\"started tunnel\" obj=tunnels name=\"command_line (http)\" addr=http://tails-server:6543 url=http://c5789aa0.ngrok.io\nngrok-tails-server_1  | t=2020-05-13T22:51:14+0000 lvl=info msg=\"started tunnel\" obj=tunnels name=command_line addr=http://tails-server:6543 url=https://c5789aa0.ngrok.io\n

Note the server name in the url=https://c5789aa0.ngrok.io parameter (https://c5789aa0.ngrok.io) - this is the external url for your tails server. Make sure you use the https url!

"},{"location":"demo/AliceGetsAPhone/#running-in-play-with-docker_1","title":"Running in Play with Docker?","text":"

Run the same steps on PWD as you would run locally (see above). Open a new shell (click on \"ADD NEW INSTANCE\") to run the tails server.

Note that with Play with Docker it can be challenging to capture the information you need from the log file as it scrolls by; you can try leaving off the --events option when you run the Faber agent to reduce the quantity of information logged to the screen.

"},{"location":"demo/AliceGetsAPhone/#run-faber-with-extra-parameters","title":"Run faber With Extra Parameters","text":""},{"location":"demo/AliceGetsAPhone/#running-locally-in-a-bash-shell_1","title":"Running locally in a bash shell?","text":"

If you are running in a local bash shell, navigate to the demo directory in your fork/clone of the Aries Cloud Agent Python repository and run:

TAILS_NETWORK=docker_tails-server LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber --aip 10 --revocation --events\n

(Note that we have to start faber with --aip 10 for compatibility with mobile clients.)

The TAILS_NETWORK parameter lets the demo script know how to connect to the tails server (which should be running in a separate shell on the same machine).

"},{"location":"demo/AliceGetsAPhone/#running-in-play-with-docker_2","title":"Running in Play with Docker?","text":"

If you are running in Play with Docker, navigate to the demo folder in the clone of Aries Cloud Agent Python and run the following:

PUBLIC_TAILS_URL=https://c4f7fbb85911.ngrok.io LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber --aip 10 --revocation --events\n

The PUBLIC_TAILS_URL parameter lets the demo script know how to connect to the tails server. This can be running in another PWD session, or even on your local machine - the ngrok endpoint is public and will map to the correct location.

Use the ngrok url for the tails server that you noted earlier.

Note that you must use the https url for the tails server endpoint.

Note: you may want to leave off the --events option when you run the Faber agent if you are finding you are getting too much logging output.

"},{"location":"demo/AliceGetsAPhone/#waiting-for-the-faber-agent-to-start","title":"Waiting for the Faber agent to start ...","text":"

The Preparing agent image... step on the first run takes a bit of time, so while we wait, let's look at the details of the commands. Running Faber is similar to the instructions in the Aries OpenAPI Demo \"Play with Docker\" section, except:

  • We are using the BCovrin Test network because that is a network that the mobile agents can be configured to use.
  • We are running in \"auto\" mode, so we will make no manual acknowledgements.
  • The revocation-related changes:
  • The TAILS_NETWORK parameter tells the ./run_demo script how to connect to the tails server and determine the public ngrok endpoint.
  • The PUBLIC_TAILS_URL environment variable is the address of your tails server (must be https).
  • The --revocation parameter to the ./run_demo script activates the ACA-Py revocation issuance.

As part of its startup process, the agent will publish a revocation registry to the ledger.

Click here to view screenshot of the revocation registry on the ledger"},{"location":"demo/AliceGetsAPhone/#accept-the-invitation","title":"Accept the Invitation","text":"

When the Faber agent starts up it automatically creates an invitation and generates a QR code on the screen. On your mobile app, select \"SCAN CODE\" (or equivalent) and point your camera at the generated QR code. The mobile agent should automatically capture the code and ask you to confirm the connection. Confirm it.

Click here to view screenshot

The mobile agent will give you feedback on the connection process, something like \"A connection was added to your wallet\".

Click here to view screenshot Click here to view screenshot

Switch your browser back to Play with Docker. You should see that the connection has been established, and there is a prompt for what actions you want to take, e.g. \"Issue Credential\", \"Send Proof Request\" and so on.

Tip: If your screen is too small to display the QR code (this can happen in Play With Docker because the shell is only given a small portion of the browser) you can copy the invitation url to a site like https://www.the-qrcode-generator.com/ to convert the invitation url into a QR code that you can scan. Make sure you select the URL option, and copy the invitation_url, which will look something like:

https://abfde260.ngrok.io?c_i=eyJAdHlwZSI6ICJkaWQ6c292OkJ6Q2JzTlloTXJqSGlxWkRUVUFTSGc7c3BlYy9jb25uZWN0aW9ucy8xLjAvaW52aXRhdGlvbiIsICJAaWQiOiAiZjI2ZjA2YTItNWU1Mi00YTA5LWEwMDctOTNkODBiZTYyNGJlIiwgInJlY2lwaWVudEtleXMiOiBbIjlQRFE2alNXMWZwZkM5UllRWGhCc3ZBaVJrQmVKRlVhVmI0QnRQSFdWbTFXIl0sICJsYWJlbCI6ICJGYWJlci5BZ2VudCIsICJzZXJ2aWNlRW5kcG9pbnQiOiAiaHR0cHM6Ly9hYmZkZTI2MC5uZ3Jvay5pbyJ9\n

Or this:

http://ip10-0-121-4-bquqo816b480a4bfn3kg-8020.direct.play-with-docker.com?c_i=eyJAdHlwZSI6ICJkaWQ6c292OkJ6Q2JzTlloTXJqSGlxWkRUVUFTSGc7c3BlYy9jb25uZWN0aW9ucy8xLjAvaW52aXRhdGlvbiIsICJAaWQiOiAiZWI2MTI4NDUtYmU1OC00YTNiLTk2MGUtZmE3NDUzMGEwNzkyIiwgInJlY2lwaWVudEtleXMiOiBbIkFacEdoMlpIOTJVNnRFRTlmYk13Z3BqQkp3TEUzRFJIY1dCbmg4Y2FqdzNiIl0sICJzZXJ2aWNlRW5kcG9pbnQiOiAiaHR0cDovL2lwMTAtMC0xMjEtNC1icXVxbzgxNmI0ODBhNGJmbjNrZy04MDIwLmRpcmVjdC5wbGF5LXdpdGgtdm9uLnZvbnguaW8iLCAibGFiZWwiOiAiRmFiZXIuQWdlbnQifQ==\n

Note that this will use the ngrok endpoint if you are running locally, or your PWD endpoint if you are running on PWD.

"},{"location":"demo/AliceGetsAPhone/#issue-a-credential","title":"Issue a Credential","text":"

We will use the Faber console to issue a credential. This could be done using the Swagger API as we have done in the connection process. We'll leave that as an exercise for the user.

In the Faber console, select option 1 to send a credential to the mobile agent.

Click here to view screenshot

The Faber agent outputs details to the console; e.g.,

Faber      | Credential: state = credential-issued, cred_ex_id = ba3089d6-92da-4cb7-9062-7f24066b2a2a\nFaber      | Revocation registry ID: CMqNjZ8e59jDuBYcquce4D:4:CMqNjZ8e59jDuBYcquce4D:3:CL:50:faber.agent.degree_schema:CL_ACCUM:4f4fb2e4-3a59-45b1-8921-578d005a7ff6\nFaber      | Credential revocation ID: 1\nFaber      | Credential: state = done, cred_ex_id = ba3089d6-92da-4cb7-9062-7f24066b2a2a\n

The revocation registry ID and credential revocation ID only appear if revocation is active. If you are doing revocation, you need the revocation registry ID later, so we recommend that you copy it now and paste it into a text file or some place that you can access later. If you don't write it down, you can get the ID from the Admin API using the GET /revocation/active-registry/{cred_def_id} endpoint, passing in the credential definition ID (which you can get from the GET /credential-definitions/created endpoint).
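
For example, a sketch of those two lookups (the admin port 8021 is an assumption based on the demo's default setup, and {cred_def_id} is a placeholder for the credential definition ID returned by the first call):

curl \"http://localhost:8021/credential-definitions/created\"\ncurl \"http://localhost:8021/revocation/active-registry/{cred_def_id}\"\n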

"},{"location":"demo/AliceGetsAPhone/#accept-the-credential","title":"Accept the Credential","text":"

The credential offer should automatically show up in the mobile agent. Accept the offered credential following the instructions provided by the mobile agent. That will look something like this:

Click here to view screenshot Click here to view screenshot Click here to view screenshot"},{"location":"demo/AliceGetsAPhone/#issue-a-presentation-request","title":"Issue a Presentation Request","text":"

We will use the Faber console to ask the mobile agent for a proof. This could be done using the Swagger API, but we'll leave that as an exercise for the user.

In the Faber console, select option 2 to send a proof request to the mobile agent.

Click here to view screenshot"},{"location":"demo/AliceGetsAPhone/#present-the-proof","title":"Present the Proof","text":"

The presentation (proof) request should automatically show up in the mobile agent. Follow the instructions provided by the mobile agent to prepare and send the proof back to Faber. That will look something like this:

Click here to view screenshot Click here to view screenshot Click here to view screenshot

If the mobile agent is able to successfully prepare and send the proof, you can go back to the Play with Docker terminal to see the status of the proof.

The process should \"just work\" for the non-revocation use case. If you are using revocation, your results may vary. As of writing this, we get failures on the wallet side with some mobile wallets, and on the Faber side with others (an error in the Indy SDK). As the results improve, we'll update this. Please let us know through GitHub issues if you have any problems running this.

"},{"location":"demo/AliceGetsAPhone/#review-the-proof","title":"Review the Proof","text":"

In the Faber console window, the proof should be received as validated.

Click here to view screenshot"},{"location":"demo/AliceGetsAPhone/#revoke-the-credential-and-send-another-proof-request","title":"Revoke the Credential and Send Another Proof Request","text":"

If you have enabled revocation, you can try revoking the credential and publishing its pending revoked status (faber options 5 and 6). For the revocation step, you will need the revocation registry identifier and the credential revocation identifier (which is 1 for the first credential you issued), as the Faber agent logged them to the console at credential issue.

Once that is done, try sending another proof request and see what happens! Experiment with immediate and pending publication. Note that immediate publication also publishes any pending revocations on its revocation registry.

Click here to view screenshot"},{"location":"demo/AliceGetsAPhone/#send-a-connectionless-proof-request","title":"Send a Connectionless Proof Request","text":"

A connectionless proof request works the same way as a regular proof request, however it does not require a connection to be established between the Verifier and Holder/Prover.

This is supported in the Faber demo, however note that it will only work when running Faber on the Docker playground service Play with Docker. (This is because the Faber agent and controller both need to be exposed to the mobile agent.)

If you have gone through the above steps, you can delete the Faber connection in your mobile agent (however do not delete the credential that Faber issued to you).

Then in the faber demo, select option 2a - Faber will display a QR code which you can scan with your mobile agent. You will see the same proof request displayed in your mobile agent, which you can respond to.

Behind the scenes, the Faber controller delivers the proof request information (linked from the url encoded in the QR code) directly to your mobile agent, without establishing an agent-to-agent connection first. If you are interested in the underlying mechanics, you can review the faber.py code in the repository.
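
If you want to experiment with the underlying admin API call, the starting point is the endpoint that creates a presentation request without a connection. A minimal sketch (the admin port, the attribute restrictions, and the payload shape are assumptions based on the demo's defaults; the controller still has to wrap the result into the URL encoded in the QR code):

curl -X POST \"http://localhost:8021/present-proof-2.0/create-request\" -H \"Content-Type: application/json\" -d '{\"presentation_request\": {\"indy\": {\"name\": \"Proof of Education\", \"version\": \"1.0\", \"requested_attributes\": {\"0_degree_uuid\": {\"name\": \"degree\", \"restrictions\": [{\"schema_name\": \"degree schema\"}]}}, \"requested_predicates\": {}}}}'\n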

"},{"location":"demo/AliceGetsAPhone/#conclusion","title":"Conclusion","text":"

That\u2019s the Faber-Mobile Alice demo. Feel free to play with the Swagger API and experiment further and figure out what an instance of a controller has to do to make things work.

"},{"location":"demo/AliceWantsAJsonCredential/","title":"How to Issue JSON-LD Credentials using ACA-Py","text":"

ACA-Py has the capability to issue and verify both Indy and JSON-LD (W3C compliant) credentials.

The JSON-LD support is documented here - this document provides some additional detail on how to use the demo and admin API to issue and prove JSON-LD credentials.

"},{"location":"demo/AliceWantsAJsonCredential/#setup-agents-to-issue-json-ld-credentials","title":"Setup Agents to Issue JSON-LD Credentials","text":"

Clone this repository to a directory on your local machine:

git clone https://github.com/hyperledger/aries-cloudagent-python.git\ncd aries-cloudagent-python/demo\n

Open up a second shell (so you have 2 shells open in the demo directory) and in one shell:

LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber --did-exchange --aip 20 --cred-type json-ld\n

... and in the other:

LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo alice\n

Note that you start the faber agent with AIP2.0 options. (When you specify --cred-type json-ld, faber will set aip to 20 automatically, so the --aip option is not strictly required.) Note as well the use of the LEDGER_URL. Technically, that should not be needed if we aren't doing anything with Indy ledger-based credentials. However, there must be something in the way that the Faber and Alice controllers are starting up that requires access to a ledger.

Also note that the above will only work with the /issue-credential-2.0/create-offer endpoint. If you want to use the /issue-credential-2.0/send endpoint - which automates each step of the credential exchange - you will need to include the --no-auto option when starting each of the alice and faber agents (since the alice and faber controllers also automatically respond to each step in the credential exchange).

(Alternately you can run the Alice and Faber agents locally; see the ./faber-local.sh and ./alice-local.sh scripts in the demo directory.)

Copy the \"invitation\" json text from the Faber shell and paste into the Alice shell to establish a connection between the two agents.

(If you are running with --no-auto you will also need to call the /connections/{conn_id}/accept-invitation endpoint in alice's admin api swagger page.)
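
That call can also be made outside the swagger page; a minimal sketch, assuming Alice's admin API is on port 8031 (the demo's default) and {conn_id} is the connection in question:

curl -X POST \"http://localhost:8031/connections/{conn_id}/accept-invitation\"\n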

Now open up two browser windows to the Faber and Alice admin api swagger pages.

Using the Faber admin api, you have to create a DID with the appropriate:

  • DID method (\"key\" or \"sov\")
  • key type \"ed25519\" or \"bls12381g2\" (corresponding to signature types \"Ed25519Signature2018\" or \"BbsBlsSignature2020\")
  • if you use DID method \"sov\" you must use key type \"ed25519\"

Note that \"did:sov\" must be a public DID (i.e. registered on the ledger) but \"did:key\" is not.

For example, in Faber's swagger page call the /wallet/did/create endpoint with the following payload:

{\n  \"method\": \"key\",\n  \"options\": {\n    \"key_type\": \"bls12381g2\" // or ed25519\n  }\n}\n

This will return something like:

{\n  \"result\": {\n    \"did\": \"did:key:zUC71KdwBhq1FioWh53VXmyFiGpewNcg8Ld42WrSChpMzzskRWwHZfG9TJ7hPj8wzmKNrek3rW4ZkXNiHAjVchSmTr9aNUQaArK3KSkTySzjEM73FuDV62bjdAHF7EMnZ27poCE\",\n    \"verkey\": \"mV6482Amu6wJH8NeMqH3QyTjh6JU6N58A8GcirMZG7Wx1uyerzrzerA2EjnhUTmjiSLAp6CkNdpkLJ1NTS73dtcra8WUDDBZ3o455EMrkPyAtzst16RdTMsGe3ctyTxxJav\",\n    \"posture\": \"wallet_only\",\n    \"key_type\": \"bls12381g2\",\n    \"method\": \"key\"\n  }\n}\n

You do not create a schema or cred def for a JSON-LD credential (these are only required for \"indy\" credentials).

You will need to create a DID as above for Alice as well (/wallet/did/create etc ...).

Congratulations, you are now ready to start issuing JSON-LD credentials!

  • You have two agents with a connection established between the agents - you will need to copy Faber's connection_id into the examples below.
  • You have created a (non-public) DID for Faber to use to sign/issue the credentials - you will need to copy the DID that you created above into the examples below (as issuer).
  • You have created a (non-public) DID for Alice to use as her credentialSubject.id - this is required for Alice to sign the proof (the credentialSubject.id is not strictly required, but without it the provided presentation can't be verified).

To issue a credential, use the /issue-credential-2.0/send-offer endpoint. (You can also use the /issue-credential-2.0/send endpoint if, as mentioned above, you have included the --no-auto option when starting both of the agents.)

You can test with this example payload (just replace the \"connection_id\", \"issuer\" key, \"credentialSubject.id\" and \"proofType\" with appropriate values):

{\n  \"connection_id\": \"4fba2ce5-b411-4ecf-aa1b-ec66f3f6c903\",\n  \"filter\": {\n    \"ld_proof\": {\n      \"credential\": {\n        \"@context\": [\n          \"https://www.w3.org/2018/credentials/v1\",\n          \"https://www.w3.org/2018/credentials/examples/v1\"\n        ],\n        \"type\": [\"VerifiableCredential\", \"UniversityDegreeCredential\"],\n        \"issuer\": \"did:key:zUC71KdwBhq1FioWh53VXmyFiGpewNcg8Ld42WrSChpMzzskRWwHZfG9TJ7hPj8wzmKNrek3rW4ZkXNiHAjVchSmTr9aNUQaArK3KSkTySzjEM73FuDV62bjdAHF7EMnZ27poCE\",\n        \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n        \"credentialSubject\": {\n          \"id\": \"did:key:aksdkajshdkajhsdkjahsdkjahsdj\",\n          \"givenName\": \"Sally\",\n          \"familyName\": \"Student\",\n          \"degree\": {\n            \"type\": \"BachelorDegree\",\n            \"degreeType\": \"Undergraduate\",\n            \"name\": \"Bachelor of Science and Arts\"\n          },\n          \"college\": \"Faber College\"\n        }\n      },\n      \"options\": {\n        \"proofType\": \"BbsBlsSignature2020\"\n      }\n    }\n  }\n}\n

Note that if you have the \"auto\" settings on, this is all you need to do. Otherwise you need to call the /send-request, /store, etc endpoints to complete the protocol.

To see the issued credential, call the /credentials/w3c endpoint on Alice's admin api - this will return something like:

{\n  \"results\": [\n    {\n      \"contexts\": [\n        \"https://w3id.org/security/bbs/v1\",\n        \"https://www.w3.org/2018/credentials/examples/v1\",\n        \"https://www.w3.org/2018/credentials/v1\"\n      ],\n      \"types\": [\n        \"UniversityDegreeCredential\",\n        \"VerifiableCredential\"\n      ],\n      \"schema_ids\": [],\n      \"issuer_id\": \"did:key:zUC71KdwBhq1FioWh53VXmyFiGpewNcg8Ld42WrSChpMzzskRWwHZfG9TJ7hPj8wzmKNrek3rW4ZkXNiHAjVchSmTr9aNUQaArK3KSkTySzjEM73FuDV62bjdAHF7EMnZ27poCE\",\n      \"subject_ids\": [],\n      \"proof_types\": [\n        \"BbsBlsSignature2020\"\n      ],\n      \"cred_value\": {\n        \"@context\": [\n          \"https://www.w3.org/2018/credentials/v1\",\n          \"https://www.w3.org/2018/credentials/examples/v1\",\n          \"https://w3id.org/security/bbs/v1\"\n        ],\n        \"type\": [\n          \"VerifiableCredential\",\n          \"UniversityDegreeCredential\"\n        ],\n        \"issuer\": \"did:key:zUC71Kd...poCE\",\n        \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n        \"credentialSubject\": {\n          \"id\": \"did:key:aksdkajshdkajhsdkjahsdkjahsdj\",\n          \"givenName\": \"Sally\",\n          \"familyName\": \"Student\",\n          \"degree\": {\n            \"type\": \"BachelorDegree\",\n            \"degreeType\": \"Undergraduate\",\n            \"name\": \"Bachelor of Science and Arts\"\n          },\n          \"college\": \"Faber College\"\n        },\n        \"proof\": {\n          \"type\": \"BbsBlsSignature2020\",\n          \"proofPurpose\": \"assertionMethod\",\n          \"verificationMethod\": \"did:key:zUC71Kd...poCE#zUC71Kd...poCE\",\n          \"created\": \"2021-05-19T16:19:44.458170\",\n          \"proofValue\": \"g0weLyw2Q+niQ4pGfiXB...tL9C9ORhy9Q==\"\n        }\n      },\n      \"cred_tags\": {},\n      \"record_id\": \"365ab87b12f74b2db784fdd4db8419f5\"\n    }\n  ]\n}\n

If you don't see the credential in your wallet, look up the credential exchange record (in alice's admin api - /issue-credential-2.0/records) and check the state. If the state is credential-received, then the credential has been received but not stored; in this case, just call the /store endpoint for this credential exchange.
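
A sketch of that check-and-store sequence (Alice's admin port 8031 and the {cred_ex_id} placeholder are assumptions based on the demo's default setup):

curl \"http://localhost:8031/issue-credential-2.0/records\"\n\n# if the record's state is credential-received, store the credential:\ncurl -X POST \"http://localhost:8031/issue-credential-2.0/records/{cred_ex_id}/store\" -H \"Content-Type: application/json\" -d '{}'\n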

"},{"location":"demo/AliceWantsAJsonCredential/#building-more-realistic-json-ld-credentials","title":"Building More Realistic JSON-LD Credentials","text":"

The above example uses the https://www.w3.org/2018/credentials/examples/v1 context, which should never be used in a real application.

To build credentials in real life, you first determine which attributes you need and then include the appropriate contexts.

"},{"location":"demo/AliceWantsAJsonCredential/#context-schemaorg","title":"Context schema.org","text":"

You can use attributes defined on schema.org. Note that this is NOT RECOMMENDED (it is included here for illustrative purposes only), because individual attributes can't be validated (see the comment later on).

You first include https://schema.org in the @context block of the credential as follows:

\"@context\": [\n  \"https://www.w3.org/2018/credentials/v1\",\n  \"https://schema.org\"\n],\n

Then you review the attributes and objects defined by https://schema.org and decide what you need to include in your credential.

For example, to issue a credential with givenName, familyName and alumniOf attributes, submit the following:

{\n  \"connection_id\": \"ad35a4d8-c84b-4a4f-a83f-1afbf134b8b9\",\n  \"filter\": {\n    \"ld_proof\": {\n      \"credential\": {\n        \"@context\": [\n          \"https://www.w3.org/2018/credentials/v1\",\n          \"https://schema.org\"\n        ],\n        \"type\": [\"VerifiableCredential\", \"Person\"],\n        \"issuer\": \"did:key:zUC71pj2gpDLfcZ9DE1bMtjZGWCSLhkQsUCaKjqXtCftGkz27894pEX9VvGNiFsaV67gqv2TEPQ2aDaDDdTDNp42LfDdK1LaWSBCfzsQEyaiR1zjZm1RtoRu1ZM6v6vz4TiqDgU\",\n        \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n        \"credentialSubject\": {\n          \"id\": \"did:key:aksdkajshdkajhsdkjahsdkjahsdj\",\n          \"givenName\": \"Sally\",\n          \"familyName\": \"Student\",\n          \"alumniOf\": \"Example University\"\n        }\n      },\n      \"options\": {\n        \"proofType\": \"BbsBlsSignature2020\"\n      }\n    }\n  }\n}\n

Note that with https://schema.org, if you include attributes that aren't defined by any context, you will not get an error. For example you can try replacing the credentialSubject in the above with:

\"credentialSubject\": {\n  \"id\": \"did:key:aksdkajshdkajhsdkjahsdkjahsdj\",\n  \"givenName\": \"Sally\",\n  \"familyName\": \"Student\",\n  \"alumniOf\": \"Example University\",\n  \"someUndefinedAttribute\": \"the value of the attribute\"\n}\n

... and the credential issuance would be expected to fail; however, https://schema.org defines a @vocab from which, by default, all terms derive (see here), so the undefined attribute is accepted.

You can include more complex schemas, for example to use the schema.org Person schema (which includes givenName and familyName):

{\n  \"connection_id\": \"ad35a4d8-c84b-4a4f-a83f-1afbf134b8b9\",\n  \"filter\": {\n    \"ld_proof\": {\n      \"credential\": {\n        \"@context\": [\n          \"https://www.w3.org/2018/credentials/v1\",\n          \"https://schema.org\"\n        ],\n        \"type\": [\"VerifiableCredential\", \"Person\"],\n        \"issuer\": \"did:key:zUC71pj2gpDLfcZ9DE1bMtjZGWCSLhkQsUCaKjqXtCftGkz27894pEX9VvGNiFsaV67gqv2TEPQ2aDaDDdTDNp42LfDdK1LaWSBCfzsQEyaiR1zjZm1RtoRu1ZM6v6vz4TiqDgU\",\n        \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n        \"credentialSubject\": {\n          \"id\": \"did:key:aksdkajshdkajhsdkjahsdkjahsdj\",\n          \"student\": {\n            \"type\": \"Person\",\n            \"givenName\": \"Sally\",\n            \"familyName\": \"Student\",\n            \"alumniOf\": \"Example University\"\n          }\n        }\n      },\n      \"options\": {\n        \"proofType\": \"BbsBlsSignature2020\"\n      }\n    }\n  }\n}\n
"},{"location":"demo/AliceWantsAJsonCredential/#credential-specific-contexts","title":"Credential-Specific Contexts","text":"

The recommended approach to defining credentials is to define a credential-specific vocabulary (or make use of existing ones). (Note that these can include references to https://schema.org, you just shouldn't use it directly in your credential.)

"},{"location":"demo/AliceWantsAJsonCredential/#credential-issue-example","title":"Credential Issue Example","text":"

The following example uses the W3C citizenship context to issue a PermanentResident credential (replace the connection_id, issuer and credentialSubject.id with your local values):

{\n    \"connection_id\": \"41acd909-9f45-4c69-8641-8146e0444a57\",\n    \"filter\": {\n        \"ld_proof\": {\n            \"credential\": {\n                \"@context\": [\n                    \"https://www.w3.org/2018/credentials/v1\",\n                    \"https://w3id.org/citizenship/v1\"\n                ],\n                \"type\": [\n                    \"VerifiableCredential\",\n                    \"PermanentResident\"\n                ],\n                \"id\": \"https://credential.example.com/residents/1234567890\",\n                \"issuer\": \"did:key:zUC7Dus47jW5Avcne8LLsUvJSdwspmErgehxMWqZZy8eSSNoHZ4x8wgs77sAmQtCADED5RQP1WWhvt7KFNm6GGMxdSGpKu3PX6R9a61G9VoVsiFoRf1yoK6pzhq9jtFP3e2SmU9\",\n                \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n                \"credentialSubject\": {\n                    \"type\": [\n                        \"PermanentResident\"\n                    ],\n                    \"id\": \"did:key:zUC7CXi82AXbkv4SvhxDxoufrLwQSAo79qbKiw7omCQ3c4TyciDdb9s3GTCbMvsDruSLZX6HNsjGxAr2SMLCNCCBRN5scukiZ4JV9FDPg5gccdqE9nfCU2zUcdyqRiUVnn9ZH83\",\n                    \"givenName\": \"ALICE\",\n                    \"familyName\": \"SMITH\",\n                    \"gender\": \"Female\",\n                    \"birthCountry\": \"Bahamas\",\n                    \"birthDate\": \"1958-07-17\"\n                }\n            },\n            \"options\": {\n                \"proofType\": \"BbsBlsSignature2020\"\n            }\n        }\n    }\n}\n

Copy and paste this content into Faber's /issue-credential-2.0/send-offer endpoint, and it will kick off the exchange process to issue a W3C credential to Alice.

In Alice's swagger page, submit the /credentials/records/w3c endpoint to see the issued credential.

"},{"location":"demo/AliceWantsAJsonCredential/#request-presentation-example","title":"Request Presentation Example","text":"

To request a proof, submit the following (with appropriate connection_id) to Faber's /present-proof-2.0/send-request endpoint:

{\n    \"comment\": \"string\",\n    \"connection_id\": \"41acd909-9f45-4c69-8641-8146e0444a57\",\n    \"presentation_request\": {\n        \"dif\": {\n            \"options\": {\n                \"challenge\": \"3fa85f64-5717-4562-b3fc-2c963f66afa7\",\n                \"domain\": \"4jt78h47fh47\"\n            },\n            \"presentation_definition\": {\n                \"id\": \"32f54163-7166-48f1-93d8-ff217bdb0654\",\n                \"format\": {\n                    \"ldp_vp\": {\n                        \"proof_type\": [\n                            \"BbsBlsSignature2020\"\n                        ]\n                    }\n                },\n                \"input_descriptors\": [\n                    {\n                        \"id\": \"citizenship_input_1\",\n                        \"name\": \"EU Driver's License\",\n                        \"schema\": [\n                            {\n                                \"uri\": \"https://www.w3.org/2018/credentials#VerifiableCredential\"\n                            },\n                            {\n                                \"uri\": \"https://w3id.org/citizenship#PermanentResident\"\n                            }\n                        ],\n                        \"constraints\": {\n                            \"limit_disclosure\": \"required\",\n                            \"is_holder\": [\n                                {\n                                    \"directive\": \"required\",\n                                    \"field_id\": [\n                                        \"1f44d55f-f161-4938-a659-f8026467f126\"\n                                    ]\n                                }\n                            ],\n                            \"fields\": [\n                                {\n                                    \"id\": \"1f44d55f-f161-4938-a659-f8026467f126\",\n                                    \"path\": [\n                                        \"$.credentialSubject.familyName\"\n                                    ],\n                                    \"purpose\": \"The claim must be from one of the specified issuers\",\n                                    \"filter\": {\n                                        \"const\": \"SMITH\"\n                                    }\n                                },\n                                {\n                                    \"path\": [\n                                        \"$.credentialSubject.givenName\"\n                                    ],\n                                    \"purpose\": \"The claim must be from one of the specified issuers\"\n                                }\n                            ]\n                        }\n                    }\n                ]\n            }\n        }\n    }\n}\n

Note that the is_holder property can be used by Faber to verify that the holder of the credential is the same as the subject of the attribute (familyName). Later on, the received presentation will be signed and verifiable only if is_holder with \"directive\": \"required\" is included in the presentation request.

There are several ways that Alice can respond with a presentation. The simplest just tells ACA-Py to put the presentation together and send it to Faber - submit the following to Alice's /present-proof-2.0/records/{pres_ex_id}/send-presentation endpoint:

{\n  \"dif\": {\n  }\n}\n

There are two ways that Alice can provide some constraints to tell ACA-Py which credential(s) to include in the presentation.

First, Alice can include the received presentation request in the body sent to the /send-presentation endpoint, adding additional constraints on the fields:

{\n  \"dif\": {\n    \"issuer_id\": \"did:key:zUC7Dus47jW5Avcne8LLsUvJSdwspmErgehxMWqZZy8eSSNoHZ4x8wgs77sAmQtCADED5RQP1WWhvt7KFNm6GGMxdSGpKu3PX6R9a61G9VoVsiFoRf1yoK6pzhq9jtFP3e2SmU9\",\n    \"presentation_definition\": {\n      \"format\": {\n        \"ldp_vp\": {\n          \"proof_type\": [\n            \"BbsBlsSignature2020\"\n          ]\n        }\n      },\n      \"id\": \"32f54163-7166-48f1-93d8-ff217bdb0654\",\n      \"input_descriptors\": [\n        {\n          \"id\": \"citizenship_input_1\",\n          \"name\": \"Some kind of citizenship check\",\n          \"schema\": [\n            {\n              \"uri\": \"https://www.w3.org/2018/credentials#VerifiableCredential\"\n            },\n            {\n              \"uri\": \"https://w3id.org/citizenship#PermanentResident\"\n            }\n          ],\n          \"constraints\": {\n            \"limit_disclosure\": \"required\",\n            \"is_holder\": [\n                {\n                    \"directive\": \"required\",\n                    \"field_id\": [\n                        \"1f44d55f-f161-4938-a659-f8026467f126\",\n                        \"332be361-823a-4863-b18b-c3b930c5623e\"\n                    ],\n                }\n            ],\n            \"fields\": [\n              {\n                \"id\": \"1f44d55f-f161-4938-a659-f8026467f126\",\n                \"path\": [\n                  \"$.credentialSubject.familyName\"\n                ],\n                \"purpose\": \"The claim must be from one of the specified issuers\",\n                \"filter\": {\n                  \"const\": \"SMITH\"\n                }\n              },\n              {\n                  \"id\": \"332be361-823a-4863-b18b-c3b930c5623e\",\n                  \"path\": [\n                      \"$.id\"\n                  ],\n                  \"purpose\": \"Specify the id of the credential to present\",\n                  \"filter\": {\n                      \"const\": \"https://credential.example.com/residents/1234567890\"\n                  }\n              }\n            ]\n          }\n        }\n      ]\n    }\n  }\n}\n

Note the additional constraint on \"path\": [ \"$.id\" ] - this restricts the presented credential to the one with the matching credential.id. Any credential attribute can be used this way; however, this presumes that the issued credentials contain a uniquely identifying attribute.

Another option is for Alice to specify the credential record_id - this is an internal value within ACA-Py:

{\n  \"dif\": {\n    \"issuer_id\": \"did:key:zUC7Dus47jW5Avcne8LLsUvJSdwspmErgehxMWqZZy8eSSNoHZ4x8wgs77sAmQtCADED5RQP1WWhvt7KFNm6GGMxdSGpKu3PX6R9a61G9VoVsiFoRf1yoK6pzhq9jtFP3e2SmU9\",\n    \"presentation_definition\": {\n      \"format\": {\n        \"ldp_vp\": {\n          \"proof_type\": [\n            \"BbsBlsSignature2020\"\n          ]\n        }\n      },\n      \"id\": \"32f54163-7166-48f1-93d8-ff217bdb0654\",\n      \"input_descriptors\": [\n        {\n          \"id\": \"citizenship_input_1\",\n          \"name\": \"Some kind of citizenship check\",\n          \"schema\": [\n            {\n              \"uri\": \"https://www.w3.org/2018/credentials#VerifiableCredential\"\n            },\n            {\n              \"uri\": \"https://w3id.org/citizenship#PermanentResident\"\n            }\n          ],\n          \"constraints\": {\n            \"limit_disclosure\": \"required\",\n            \"fields\": [\n              {\n                \"path\": [\n                  \"$.credentialSubject.familyName\"\n                ],\n                \"purpose\": \"The claim must be from one of the specified issuers\",\n                \"filter\": {\n                  \"const\": \"SMITH\"\n                }\n              }\n            ]\n          }\n        }\n      ]\n    },\n    \"record_ids\": {\n      \"citizenship_input_1\": [ \"1496316f972e40cf9b46b35971182337\" ]\n    }\n  }\n}\n
"},{"location":"demo/AliceWantsAJsonCredential/#another-credential-issue-example","title":"Another Credential Issue Example","text":"

The following credential offer is based on the W3C Vaccination schema:

{\n  \"connection_id\": \"ad35a4d8-c84b-4a4f-a83f-1afbf134b8b9\",\n  \"filter\": {\n    \"ld_proof\": {\n      \"credential\": {\n        \"@context\": [\n          \"https://www.w3.org/2018/credentials/v1\",\n          \"https://w3id.org/vaccination/v1\"\n        ],\n        \"type\": [\"VerifiableCredential\", \"VaccinationCertificate\"],\n        \"issuer\": \"did:key:zUC71pj2gpDLfcZ9DE1bMtjZGWCSLhkQsUCaKjqXtCftGkz27894pEX9VvGNiFsaV67gqv2TEPQ2aDaDDdTDNp42LfDdK1LaWSBCfzsQEyaiR1zjZm1RtoRu1ZM6v6vz4TiqDgU\",\n        \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n        \"credentialSubject\": {\n            \"id\": \"did:key:aksdkajshdkajhsdkjahsdkjahsdj\",\n            \"type\": \"VaccinationEvent\",\n            \"batchNumber\": \"1183738569\",\n            \"administeringCentre\": \"MoH\",\n            \"healthProfessional\": \"MoH\",\n            \"countryOfVaccination\": \"NZ\",\n            \"recipient\": {\n              \"type\": \"VaccineRecipient\",\n              \"givenName\": \"JOHN\",\n              \"familyName\": \"SMITH\",\n              \"gender\": \"Male\",\n              \"birthDate\": \"1958-07-17\"\n            },\n            \"vaccine\": {\n              \"type\": \"Vaccine\",\n              \"disease\": \"COVID-19\",\n              \"atcCode\": \"J07BX03\",\n              \"medicinalProductName\": \"COVID-19 Vaccine Moderna\",\n              \"marketingAuthorizationHolder\": \"Moderna Biotech\"\n            }\n        }\n      },\n      \"options\": {\n        \"proofType\": \"BbsBlsSignature2020\"\n      }\n    }\n  }\n}\n
"},{"location":"demo/Aries-Workshop/","title":"A Hyperledger Aries/AnonCreds Workshop Using Traction Sandbox","text":""},{"location":"demo/Aries-Workshop/#introduction","title":"Introduction","text":"

Welcome! This workshop contains a sequence of four labs that gets you from nothing to issuing, receiving, holding, requesting, presenting, and verifying AnonCreds Verifiable Credentials--no technical experience required! If you just walk through the steps exactly as laid out, it only takes about 20 minutes to complete the whole process. Of course, we hope you get curious, experiment, and learn a lot more about the information provided in the labs.

To run the labs, you\u2019ll need a Hyperledger Aries agent to be able to issue and verify verifiable credentials. For that, we're providing you with your very own tenant in a BC Gov \"sandbox\" deployment of an open source tool called Traction, a managed, production-ready, multi-tenant Aries agent built on Hyperledger Aries Cloud Agent Python (ACA-Py). Sandbox in this context means that you can do whatever you want with your tenant agent, but we make no promises about the stability of the environment (but it\u2019s pretty robust, so chances are, things will work...), and on the 1st and 15th of each month, we\u2019ll reset the entire sandbox and all your work will be gone \u2014 poof! Keep that in mind as you use the Traction sandbox. We recommend you keep a notebook at your side, tracking the important learnings you want to remember. As you create code that uses your sandbox agent, make sure you create simple-to-update configurations so that after a reset, you can create a new tenant agent, recreate the objects you need (each of which will have new identifiers), update your configuration, and off you go.

The four labs in this workshop are laid out as follows:

  • Lab 1: Getting a Traction Tenant Agent and Mobile Wallet
  • Lab 2: Getting Ready To Be An Issuer
  • Lab 3: Issuing Credentials to a Mobile Wallet
  • Lab 4: Requesting and Sending Presentations

Once you are done the labs, there are suggestions for next steps for developers, such as experimenting with the Traction/ACA-Py OpenAPI and a simple issuer web app (see the \"What's Next\" sections below).

Jump in!

"},{"location":"demo/Aries-Workshop/#lab-1-getting-a-traction-tenant-agent-and-mobile-wallet","title":"Lab 1: Getting a Traction Tenant Agent and Mobile Wallet","text":"

Let\u2019s start by getting your two agents \u2014 an Aries Mobile Wallet and an Aries Issuer/Verifier agent.

"},{"location":"demo/Aries-Workshop/#lab-1-steps-to-follow","title":"Lab 1: Steps to Follow","text":"
  1. Get a compatible Aries Mobile Wallet to use with your Aries Traction tenant. There are a number to choose from. We suggest that you use one of these:
    1. BC Wallet from the Government of British Columbia
    2. Orbit Wallet from Northern Block
  2. Click this Traction Sandbox link to go to the Sandbox login page to create your own Traction Tenant Aries agent. Once there, do the following:
    1. Click \"Create Request!\", fill in at least the required form fields, and click \"Submit\".
    2. Your new Traction Tenant's Wallet ID and Wallet Key will be displayed. SAVE THOSE IMMEDIATELY SO THAT YOU HAVE THEM TO ACCESS YOUR TENANT. You only get to see/save them once!
      1. You will need those each time you open your Traction Tenant agent. Putting them into a Password Manager is a great idea!
      2. We can't recover your Wallet ID and Wallet Key, so if you lose them you have to start the entire process again.
  3. Go back to the Traction Sandbox login and this time, use your Wallet ID/Key to log in to your brand new Traction Tenant agent. You might want to bookmark the site.
  4. Make your new Traction Tenant a verifiable credential issuer by:
    1. Clicking on the \"User\" (folder icon) menu (top right), and choosing \"Profile\"
    2. Clicking the \u201cBCovrin Test\u201d Action in the Endorser section.
      1. When done, you will have your own public DID (displayed on the page) that has been published on the BCovrin Test Ledger (can you find it?). Your DID will be used to publish other AnonCreds transactions so you can issue verifiable credentials.
  5. Connect from your Traction Tenant to your mobile Wallet app by:
    1. Selecting \"Connections\" and then \"Invitations\" on the left menu
    2. Click the \"Single Use Connection\" button, give the connection an alias (maybe \"My Wallet\"), and click \"Submit.\"
    3. Scan the resulting QR code with your initialized mobile Wallet and follow the prompts. Once you connect, type a quick \"Hi!\" message to the Traction Agent and you should get an automated message back.
    4. Check the Traction Tenant menu item \"Connections\u2192Connections\" to see the status of your connection \u2013 it should be active.
    5. If anything didn't work in the sequence, here are some things to try:
      1. If the Traction Tenant connection is not active, it's possible that your wallet was not able to message back to your Traction Tenant. Check your wallet internet connection.
      2. We've created a Traction Sandbox Workshop FAQ and Questions GitHub issue that you can check to see if your question is already answered, and if not, you can add your question as a comment on the issue, and we'll get back to you.

That's it--you should be ready to start issuing and receiving verifiable credentials.

"},{"location":"demo/Aries-Workshop/#lab-2-getting-ready-to-be-an-issuer","title":"Lab 2: Getting Ready To Be An Issuer","text":"

::: todo To Do: Update lab to use this schema: H7W22uhD4ueQdGaGeiCgaM:2:student id:1.0.0 :::

In this lab we will use our Traction Tenant agent to create and publish an AnonCreds Schema object (or two), and then use that Schema to create and publish a Credential Definition. All of the AnonCreds objects will be published on the BCovrin (pronounced \u201cBe Sovereign\u201d) Test network. For those new to AnonCreds:

  • A Schema defines the list of attributes (claims) in a credential. An issuer often publishes their own schema, but they may also use one published by someone else. For example, a group of universities all might use the schema published by the \"Association of Universities and Colleges\" to which they belong.
  • A Credential Definition (CredDef) is published by the issuer, linking together the Issuer's DID with the schema upon which the credentials will be issued, and containing the public key material needed to verify presentations of the credential. Revocation Registries are also linked to the Credential Definition, enabling an issuer to revoke credentials when necessary.
"},{"location":"demo/Aries-Workshop/#lab-2-steps-to-follow","title":"Lab 2: Steps to Follow","text":"
  1. Log into your Traction Sandbox. You did record your Wallet ID and Key, right?
    1. If not \u2014 jump back to Lab 1 to create a new Traction Tenant and a connection to your mobile Wallet.
  2. Create a Schema:
    1. Click the menu item \u201cConfiguration\u201d and then \u201cSchema Storage\u201d.
    2. Click \u201cAdd Schema From Ledger\u201d and fill in the Schema Id with the value H7W22uhD4ueQdGaGeiCgaM:2:student id:1.0.0.
      1. By doing this, you (as the issuer) will be using a previously published schema. Click here to see the schema on the ledger.
    3. To see the details about your schema, hit the Expand (>) link, and then the subsequent > to \u201cView Raw Content.\"
  3. With the schema in place, it's time to become an issuer. To do that, you have to create a Credential Definition. Click on the \u201cCredential\u201d icon in the \u201cCredential Definition\u201d column of your schema to create the Credential Definition (CredDef) for the Schema. The \u201cTag\u201d can be any value you want \u2014 it is an issuer-defined part of the identifier for the Credential Definition. Wait for the operation to complete. Click the \u201cRefresh\u201d button if needed to see that the Create icon has been replaced with the identifier for your CredDef.
  4. Move to the menu item \"Configuration \u2192 Credential Definition Storage\" to see the CredDef you created. If you want, expand it to view the raw data. In this case, the raw data does not show the actual CredDef, but rather the Traction data about the CredDef. You can again use the BCovrin Test ledger browser to see your new, published CredDef.

Completed all the steps? Great! Feel free to create a second Schema and Cred Def, ideally one related to your first. That way you can try out a presentation request that pulls data from both credentials! When you create the second schema, use the \"Create Schema\" button, and add the claims you want to have in your new type of credential.

"},{"location":"demo/Aries-Workshop/#lab-3-issuing-credentials-to-a-mobile-wallet","title":"Lab 3: Issuing Credentials to a Mobile Wallet","text":"

In this lab we will use our Traction Tenant agent to issue instances of the credentials we created in Lab 2 to our Mobile Wallet we downloaded in Lab 1.

"},{"location":"demo/Aries-Workshop/#lab-3-steps-to-follow","title":"Lab 3: Steps to Follow","text":"
  1. If necessary, log into your Traction Sandbox with your Wallet ID and Key.
  2. Issue a Credential:
    1. Click the menu item \u201cIssuance\u201d and then \u201cOffer a Credential\u201d.
    2. Select the Credential Definition of the credential you want to issue.
    3. Select the Contact Name to whom you are issuing the credential\u2014the alias of the connection you made to your mobile Wallet.
    4. Click the \u201cEnter Credential Value\u201d to popup a data entry form for the attributes to populate.
      1. When you enter the date values that you want to use in predicates (e.g., \u201cOlder than 19\u201d), put the date into the following format: YYYYMMDD, e.g., 20231001. You cannot use a string date format, such as \u201cYYYY-MM-DD\u201d if you want to use the attribute for predicate checking -- the value must be an integer.
      2. We suggest you use realistic dates for Date of Birth (DOB) (e.g., 20-ish years in the past) and expiry (e.g., 3 years in the future) to make using them in predicates easier.
    5. Click \u201cSave\u201d when you are finished entering the attributes and review the information you have entered.
    6. When you are ready, click \u201cSend Offer\u201d to initiate the issuance of the credential.
  3. Receive the Credential:
    1. Open up your mobile Wallet and look for a notification about the credential offer. Where that appears may vary based on the Wallet you are using.
    2. Review the offer and then click the \u201cAccept\u201d button.
    3. Your new credential should be saved to your wallet.
  4. Review the Issuance Data:
    1. Back in your Traction Tenant, refresh the list to see the updated status of the issuance you just completed (should be \u201ccredential_issued\u201d or \u201ccredential_acked\u201d, depending on the Wallet you are using).
    2. Expand the issuance and then \u201cView Raw Content\u201d to see the data that was exchanged between the Traction Issuer and the Wallet.
  5. If you want, repeat the process for other credential types your Traction Tenant is capable of issuing.

That\u2019s it! Pretty easy, eh? Of course, in a real issuer, the data would (very, very) likely not be hand-entered, but instead come from a backend system. Traction has an HTTP API (protected by the same Wallet ID and Key) that can be used from an application to do things like this automatically. The Traction API embeds the ACA-Py API, so everything you can do in \u201cplain ACA-Py\u201d can also be done in Traction.

"},{"location":"demo/Aries-Workshop/#lab-4-requesting-and-sending-presentations","title":"Lab 4: Requesting and Sending Presentations","text":"

In this lab we will use our Traction Tenant agent as a verifier, requesting presentations, and your mobile Wallet as the holder responding with presentations that satisfy the requests. The user interface is a little rougher for this lab (you\u2019ll be dealing with JSON), but it should still be easy enough to do.

"},{"location":"demo/Aries-Workshop/#lab-4-steps-to-follow","title":"Lab 4: Steps to Follow","text":"
  1. If necessary, log into your Traction Sandbox with your Wallet ID and Key.
  2. Create and send a presentation request:
    1. Click the menu item \u201cVerification\u201d and then the button \u201cCreate Presentation Request\u201d.
    2. Select the Connection to whom you are sending the request\u2014the alias of the connection you made to your mobile Wallet.
    3. Update the example Presentation Request to match the credential that you want to request. Keep it simple for your first request\u2014it\u2019s easy to iterate in Traction to make your request more complicated. If you used the schema we suggested in Lab 2, just use the default presentation request. It should just work! If not, start from it, and:
      1. Update the value of \u201cschema_name\u201d to the name(s) of the schema for the credential(s) you issued.
      2. Update the group name(s) to something that makes sense for your credential(s) and make sure the attributes listed match your credential(s).
      3. Update (or perhaps remove) the \u201crequest_predicates\u201d JSON item, if it is not applicable to your credential.
    4. Update the optional fields (\u201cAuto Verify\u201d and \u201cOptional Comment\u201d) as you see fit. The \u201cOptional Comment\u201d goes into the list of Verifications so you can keep track of the different presentation requests you create.
    5. Click \u201cSubmit\u201d when your presentation request is ready.
  3. Respond to the Presentation Request:
    1. Open up your mobile Wallet and look for a notification about receiving a presentation request. Where that appears may vary based on the Wallet you are using.
    2. Review the information you are being asked to share, and then click the \u201cShare\u201d button to send the presentation.
  4. Review the Presentation Request Result:
    1. Back in your Traction Tenant, refresh the Verifications list to see the updated status of the presentation request you just completed. It should be something positive, like \u201cpresentation_received\u201d if all went well. It may be different depending on the Wallet you are using.
    2. If you want, expand the presentation request and \u201cView Raw Content\u201d to see the presentation request and presentation data exchanged between the Traction Verifier and the Wallet.
  5. Repeat the process, making the presentation request more complicated:
    1. From the list of presentations, use the arrow icon action to copy an existing presentation request and just re-run it, or evolve it.
    2. Ideas:
      1. Add predicates using date of birth (\u201colder than\u201d) and expiry (\u201cnot expired today\u201d).
        1. The p_value should be a relevant date \u2014 e.g., 19 (or whatever) years ago today for \u201colder than\u201d, and today for \u201cnot expired\u201d, both in the YYYYMMDD format (the integer form of the date).
        2. The p_type should be >= for \u201colder than\u201d, and <= for \u201cnot expired\u201d. See the table below for the form of the expression.
      2. Add a second credential group with a restriction for a different credential to the request, so the presentation is derived from two source credentials.

p_value | p_type | credential_data
20230527 | <= | expiry_dateint
20030527 | >= | dob_dateint

That completes this lab \u2014 although feel free to continue to play with all of the steps (setup, issuing and presenting). You should have a pretty solid handle on exactly what you can and can\u2019t do with AnonCreds!

"},{"location":"demo/Aries-Workshop/#whats-next","title":"What's Next","text":"

The following are a couple of things that you might want to do next--if you are a developer. Unlike the labs you have just completed, these \"next steps\" are geared towards developers, providing details about building the use of verifiable credentials (issuing, verifying) into your own application.

Want to use Traction in your own environment? Feel free! It's open source, and comes with Helm Charts for easy deployment in container-orchestrated environments. Contributions back to the project are always welcome!

"},{"location":"demo/Aries-Workshop/#whats-next-the-aca-py-openapi","title":"What\u2019s Next: The ACA-Py OpenAPI","text":"

Are you going to build an app that uses Traction or an instance of the Aries Cloud Agent Python (ACA-Py)? If so, your next step is to try out the ACA-Py OpenAPI (aka Swagger)\u2014by hand at first, and then from your application. This is a VERY high level overview, assuming a developer is following this, and knows a bunch about Aries protocols, using HTTP APIs, and using OpenAPI interfaces.

To access and use your Tenant's OpenAPI (aka Swagger) interface:

  • In your Traction Tenant, click the User icon (top right) and choose \u201cDeveloper\u201d
  • Scroll to the bottom and expand the \u201cEncoded JWT\u201d, and click the \u201cCopy\u201d icon to the right to get the JWT into your clipboard.
  • By using the \u201ccopy\u201d icon, the JWT is prefixed with \u201cBearer \u201d, which is needed in the OpenAPI authorization. If you just highlight and copy the JWT, you don\u2019t get the prefix.
  • Click on \u201cAbout\u201d from the left menu and then click \u201cTraction.\u201d
  • Click on the link with the \u201cSwagger URL\u201d label to open up the OpenAPI (Swagger) API.
  • The URL is just the normal Traction Tenant API with “api/doc” added to it.
  • Click Authorize in the top right, click in the second box \u201cAuthorizationHeader (apiKey)\u201d and paste in your previously copied encoded JWT.
  • Close the authorization window and try out an Endpoint. For example, scroll down to the \u201cGET /connections\u201d endpoint, \u201cTry It Out\u201d and \u201cExecute\u201d. You should get back a list of the connections you have established in your Tenant.

The ACA-Py/Traction API is pretty large, but it is reasonably well organized, and you should recognize many of the items from using the Traction interface. Try some of the \u201cGET\u201d endpoints to see what data they return.
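
Once authorized, the same calls can be made from the command line. Here is a sketch: the base URL is a placeholder (use your tenant's Swagger URL without the trailing \"api/doc\"), and <your-JWT> stands for the encoded JWT you copied above (without the \"Bearer \" prefix, since curl supplies it here):

# list the connections in your Traction Tenant\ncurl -s -H \"Authorization: Bearer <your-JWT>\" \"https://<your-traction-tenant-api>/connections\"\n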

We\u2019re still working on a good demo for the OpenAPI from Traction, but this one from ACA-Py is a good outline of the process. It doesn't use your Traction Tenant, but you should get the idea about the sequence of calls to make to accomplish Aries-type activities. For example, see if you can carry out the steps to do Lab 4 with your mobile agent by invoking the right sequence of OpenAPI calls.

"},{"location":"demo/Aries-Workshop/#whats-next-experiment-with-an-issuer-web-app","title":"What's Next: Experiment With an Issuer Web App","text":"

If you are challenged to use Traction or Aries Cloud Agent Python to become an issuer, you will likely be building API calls into your line of business web application. To get an idea of what that will entail, we're delighted to direct you to a very simple web app that one of your predecessors on this same journey created (and contributed!) to demonstrate using the Traction OpenAPI. Check out this Traction Issuance Demo and try it out yourself, with your Sandbox tenant. Once you review the code, you should have an excellent idea of how you can add these same capabilities to your line of business application.

"},{"location":"demo/AriesOpenAPIDemo/","title":"Aries OpenAPI Demo","text":"

What better way to learn about controllers than by actually being one yourself! In this demo, that\u2019s just what happens\u2014you are the controller. You have access to the full set of API endpoints exposed by an ACA-Py instance, and you will see the events coming from ACA-Py as they happen. Using that information, you'll help Alice's and Faber's agents connect, Faber's agent issue an education credential to Alice, and then ask Alice to prove she possesses the credential. Who knows why Faber needs to get the proof, but it lets us show off more protocols.

"},{"location":"demo/AriesOpenAPIDemo/#contents","title":"Contents","text":"
  • Getting Started
  • Running in a Browser
  • Start the Faber Agent
  • Start the Alice Agent
  • Running in Docker
  • Start the Faber Agent
  • Start the Alice Agent
  • Restarting the Docker Containers
  • Using the OpenAPI/Swagger User Interface
  • Establishing a Connection
  • Use the Faber Agent to Create an Invitation
  • Copy the Invitation created by the Faber Agent
  • Use the Alice Agent to Receive Faber's Invitation
  • Tell Alice's Agent to Accept the Invitation
  • The Faber Agent Gets the Request
  • The Faber Agent Completes the Connection
  • Review the Connection Status in Alice's Agent
  • Review the Connection Status in Faber's Agent
  • Basic Messaging Between Agents
  • Sending a message from Alice to Faber
  • Receiving a Basic Message (Faber)
  • Alice's Agent Verifies that Faber has Received the Message
  • Preparing to Issue a Credential
  • Confirming your Schema and Credential Definition
  • Notes
  • Issuing a Credential
  • Faber - Preparing to Issue a Credential
  • Faber - Issuing the Credential
  • Alice Receives Credential
  • Alice Stores Credential in her Wallet
  • Faber Receives Acknowledgment that the Credential was Received
  • Issue Credential Notes
  • Bonus Points
  • Requesting/Presenting a Proof
  • Faber sends a Proof Request
  • Alice - Responding to the Proof Request
  • Faber - Verifying the Proof
  • Present Proof Notes
  • Bonus Points
  • Conclusion
"},{"location":"demo/AriesOpenAPIDemo/#getting-started","title":"Getting Started","text":"

We will get started by opening three browser tabs that will be used throughout the lab. Two will be Swagger UIs for the Faber and Alice agents, and one will be for the public ledger (showing the Hyperledger Indy ledger). As well, we'll keep the terminal sessions where we started the demos handy, as we'll be grabbing information from them as well.

Let's start with the ledger browser. For this demo, we're going to use an open public ledger operated by the BC Government's VON Team. In your first browser tab, go to: http://test.bcovrin.vonx.io. This will be called the \"ledger tab\" in the instructions below.

For the rest of the set up, you can choose to run the terminal sessions in your browser (no local resources needed) or in Docker on your local system. Your choice; each is covered in the next two sections.

Note: In the following, when we start the agents we use several special demo settings. The command we use is this: LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber --events --no-auto --bg. In that:

  • The LEDGER_URL environment variable informs the agent what ledger to use.
  • The --events option indicates that we want the controller to display the webhook events from ACA-Py in the log displayed on the terminal.
  • The --no-auto option indicates that we don't want the ACA-Py agent to automatically handle some events such as connecting. We want the controller (you!) to handle each step of the protocol.
  • The --bg option indicates that the docker container will run in the background, so accidentally hitting Ctrl-C won't stop the process.
"},{"location":"demo/AriesOpenAPIDemo/#running-in-a-browser","title":"Running in a Browser","text":"

To run the necessary terminal sessions in your browser, go to the Docker playground service Play with Docker. Don't know about Play with Docker? Check this out to learn more.

"},{"location":"demo/AriesOpenAPIDemo/#start-the-faber-agent","title":"Start the Faber Agent","text":"

In a browser, go to the Play with Docker home page, Login (if necessary) and click \"Start.\" On the next screen, click (in the left menu) \"+Add a new instance.\" That will start up a terminal in your browser. Run the following commands to start the Faber agent.

git clone https://github.com/hyperledger/aries-cloudagent-python\ncd aries-cloudagent-python/demo\nLEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber --events --no-auto --bg\n

Once you are back at the command prompt, we'll use docker's logging capability to see what's being written to the terminal:

docker logs -f faber\n

Once the Faber agent has started up (with the invite displayed), click the link near the top of the screen 8021. That will start an instance of the OpenAPI/Swagger user interface connected to the Faber instance. Note that the URL on the OpenAPI/Swagger instance is: http://ip....8021.direct....

Remember that the OpenAPI/Swagger browser tab with an address containing 8021 is the Faber agent.

NOTE: Hit \"Ctrl-C\" at any time to get back to the command line. When you are done with the command line, you can return to seeing the logs from the Faber agent by running docker logs -f faber

Show me a screenshot!"},{"location":"demo/AriesOpenAPIDemo/#start-the-alice-agent","title":"Start the Alice Agent","text":"

Now to start Alice's agent. Click the \"+Add a new instance\" button again to open another terminal session. Run the following commands to start Alice's agent:

git clone https://github.com/hyperledger/aries-cloudagent-python\ncd aries-cloudagent-python/demo\nLEDGER_URL=http://test.bcovrin.vonx.io ./run_demo alice --events --no-auto --bg\n

Once you are back at the command prompt, we'll use docker's logging capability to see what's being written to the terminal:

docker logs -f alice\n

You can ignore a message like WARNING: your terminal doesn't support cursor position requests (CPR).

Once the Alice agent has started up (with the invite: prompt displayed), click the link near the top of the screen 8031. That will start an instance of the OpenAPI/Swagger User Interface connected to the Alice instance. Note that the URL on the OpenAPI/Swagger instance is: http://ip....8031.direct....

NOTE: Hit \"Ctrl-C\" to get back to the command line. When you are done with the command line, you can return to seeing the logs from the Faber agent by running docker logs -f faber

Remember that the OpenAPI/Swagger browser tab with an address containing 8031 is Alice's agent.

Show me a screenshot!

You are ready to go. Skip down to the Using the OpenAPI/Swagger User Interface section.

"},{"location":"demo/AriesOpenAPIDemo/#running-in-docker","title":"Running in Docker","text":"

To run the demo on your local system, you must have git, a running Docker installation, and terminal windows running bash. Need more information about getting set up? Click here to learn more.

"},{"location":"demo/AriesOpenAPIDemo/#start-the-faber-agent_1","title":"Start the Faber Agent","text":"

To begin running the demo in Docker, open up two terminal windows, one each for Faber\u2019s and Alice\u2019s agent.

In the first terminal window, clone the ACA-Py repo, change into the demo folder and start the Faber agent:

git clone https://github.com/hyperledger/aries-cloudagent-python\ncd aries-cloudagent-python/demo\nLEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber --events --no-auto --bg\n

Once you are back at the command prompt, we'll use docker's logging capability to see what's being written to the terminal:

docker logs -f faber\n

If all goes well, the agent will show a message indicating it is running. Use the second browser tab to navigate to http://localhost:8021. You should see an OpenAPI/Swagger user interface with a (long-ish) list of API endpoints. These are the endpoints exposed by the Faber agent.

NOTE: Hit \"Ctrl-C\" to get back to the command line. When you are done with the command line, you can return to seeing the logs from the Faber agent by running docker logs -f faber

Remember that the OpenAPI/Swagger browser tab with an address containing 8021 is the Faber agent.

Show me a screenshot!"},{"location":"demo/AriesOpenAPIDemo/#start-the-alice-agent_1","title":"Start the Alice Agent","text":"

To start Alice's agent, open up a second terminal window and in it, change to the same demo directory as where Faber's agent was started above. Once there, start Alice's agent:

LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo alice --events --no-auto --bg\n

Once you are back at the command prompt, we'll use docker's logging capability to see what's being written to the terminal:

docker logs -f alice\n

You can ignore a message like WARNING: your terminal doesn't support cursor position requests (CPR) that may appear.

If all goes well, the agent will show a message indicating it is running. Open a third browser tab and navigate to http://localhost:8031. Again, you should see the OpenAPI/Swagger user interface with a list of API endpoints, this time the endpoints for Alice\u2019s agent.

NOTE: Hit \"Ctrl-C\" to get back to the command line. When you are done with the command line, you can return to seeing the logs from the Alice agent by running docker logs -f alice

Remember that the OpenAPI/Swagger browser tab with an address containing 8031 is Alice's agent.

Show me a screenshot!"},{"location":"demo/AriesOpenAPIDemo/#restarting-the-docker-containers","title":"Restarting the Docker Containers","text":"

When you complete the entire demo (not now!!), you will need to stop the two agents. To do that, get to the command line by hitting Ctrl-C and running:

docker stop faber\ndocker stop alice\n
"},{"location":"demo/AriesOpenAPIDemo/#using-the-openapiswagger-user-interface","title":"Using the OpenAPI/Swagger User Interface","text":"

Try to organize what you see on your screen to include both the Alice and Faber OpenAPI/Swagger tabs, and both (Alice and Faber) terminal sessions, all at the same time. After you execute an API call in one of the browser tabs, you will see a webhook event from the ACA-Py instance in the terminal window of the other agent. That's a controller's life. See an event, process it, send a response.

From time to time you will want to see what's happening on the ledger, so keep that tab handy as well. Also, if you make an error with one of the commands (e.g. bad data, improperly structured JSON), you will see the errors in the terminals.

In the instructions that follow, we\u2019ll let you know if you need to be in the Faber, Alice or Indy browser tab. We\u2019ll leave it to you to track which is which.

Using the OpenAPI/Swagger user interface is pretty simple. In the steps below, we\u2019ll indicate what API endpoint you need to use, such as POST /connections/create-invitation. That means you must:

  1. scroll to and find that endpoint;
  2. click on the endpoint name to expand its section of the UI;
  3. click on the Try it out button;
  4. fill in any data necessary to run the command;
  5. click Execute;
  6. check the response to see if the request worked.

So, the mechanical steps are easy. It\u2019s the fourth step from the list above that can be tricky. Supplying the right data and, where JSON is involved, getting the syntax correct - braces and quotes can be a pain. When steps don\u2019t work, start your debugging by looking at your JSON.

Enough with the preliminaries, let\u2019s get started!

"},{"location":"demo/AriesOpenAPIDemo/#establishing-a-connection","title":"Establishing a Connection","text":"

We\u2019ll start the demo by establishing a connection between the Alice and Faber agents. We\u2019re starting there to demonstrate that you can use agents without having a ledger. We won\u2019t be using the Indy public ledger at all for this step. Since the agents communicate using DIDComm messaging and connect by exchanging pairwise DIDs and DIDDocs based on (an early version of) the did:peer DID method, a public ledger is not needed.

"},{"location":"demo/AriesOpenAPIDemo/#use-the-faber-agent-to-create-an-invitation","title":"Use the Faber Agent to Create an Invitation","text":"

In the Faber browser tab, navigate to the POST /connections/create-invitation endpoint. Replace the sample body with an empty JSON object ({}) and execute the call. If successful, you should see a connection id, an invitation, and the invitation URL. The connection ids will be different on each run.

Hint: set an Alias on the Invitation; this makes it easier to find the Connection later on.
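
If you prefer the command line, a rough curl equivalent is shown below (a sketch assuming the Faber admin API at http://localhost:8021, as in this demo; the alias query parameter is the optional Alias from the hint above):

# create a new invitation on the Faber agent\ncurl -X POST \"http://localhost:8021/connections/create-invitation?alias=Alice\" -H \"Content-Type: application/json\" -d '{}'\n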

Show me a screenshot - Create Invitation Request Show me a screenshot - Create Invitation Response"},{"location":"demo/AriesOpenAPIDemo/#copy-the-invitation-created-by-the-faber-agent","title":"Copy the Invitation created by the Faber Agent","text":"

Copy the entire block of the invitation object, from the curly brackets {}, excluding the trailing comma.

Show me a screenshot - Create Invitation Response

Before switching over to the Alice browser tab, scroll to and execute the GET /connections endpoint to see the list of Faber's connections. You should see a connection whose connection_id is identical to the one returned when you created the invitation, and whose state is invitation.

Show me a screenshot - Faber Connection Status"},{"location":"demo/AriesOpenAPIDemo/#use-the-alice-agent-to-receive-fabers-invitation","title":"Use the Alice Agent to Receive Faber's Invitation","text":"

Switch to the Alice browser tab and get ready to execute the POST /connections/receive-invitation endpoint. Select all of the pre-populated text and replace it with the invitation object from the Faber tab. When you click Execute you should get back a connection response with a connection Id, an invitation key, and the state of the connection, which should be invitation.

Hint: set an Alias on the Invitation; this makes it easier to find the Connection later on.
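
The curl equivalent is sketched below (assuming Alice's admin API at http://localhost:8031, with the copied invitation object saved to a hypothetical file named invitation.json):

# receive Faber's invitation on the Alice agent\ncurl -X POST \"http://localhost:8031/connections/receive-invitation\" -H \"Content-Type: application/json\" -d @invitation.json\n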

Show me a screenshot - Receive Invitation Request Show me a screenshot - Receive Invitation Response

A key observation to make here: the \"copy and paste\" we are doing from Faber's agent to Alice's agent is what is called an \"out of band\" message. Because we don't yet have a DIDComm connection between the two agents, we have to convey the invitation in plaintext (we can't encrypt it - no channel) using some mechanism other than DIDComm. With mobile agents, that's where QR codes often come in. Once we have the invitation in the receiver's agent, we can get back to using DIDComm.

"},{"location":"demo/AriesOpenAPIDemo/#tell-alices-agent-to-accept-the-invitation","title":"Tell Alice's Agent to Accept the Invitation","text":"

At this point Alice has simply stored the invitation in her wallet. You can see the status using the GET /connections endpoint.

Show me a screenshot

To complete a connection with Faber, she must accept the invitation and send a corresponding connection request to Faber. Find the connection_id in the connection response from the previous POST /connections/receive-invitation endpoint call. You may note that the same data was sent to the controller as an event from ACA-Py and is visible in the terminal. Scroll to the POST /connections/{conn_id}/accept-invitation endpoint and paste the connection_id in the id parameter field (you will have to click the Try it out button to see the available URL parameters). The response from clicking Execute should show that the connection has a state of request.
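
From the command line, this step is a single POST with no body (a sketch; <conn_id> is a placeholder for Alice's connection_id from the receive-invitation response):

# accept the stored invitation, sending a connection request to Faber\ncurl -X POST \"http://localhost:8031/connections/<conn_id>/accept-invitation\"\n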

Show me a screenshot - Accept Invitation Request Show me a screenshot - Accept Invitation Response"},{"location":"demo/AriesOpenAPIDemo/#the-faber-agent-gets-the-request","title":"The Faber Agent Gets the Request","text":"

In the Faber terminal session, an event (a web service callback from ACA-Py to the controller) has been received about the request from Alice. Copy the connection_id from the event for the next step.

Show me the event

Note that the connection ID held by Alice is different from the one held by Faber. That makes sense, as both independently created connection objects, each with a unique, self-generated GUID.

"},{"location":"demo/AriesOpenAPIDemo/#the-faber-agent-completes-the-connection","title":"The Faber Agent Completes the Connection","text":"

To complete the connection process, Faber will respond to the connection request from Alice. Scroll to the POST /connections/{conn_id}/accept-request endpoint and paste the connection_id you previously copied into the id parameter field (you will have to click the Try it out button to see the available URL parameters). The response from clicking the Execute button should show that the connection has a state of response, which indicates that Faber has accepted Alice's connection request.
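
The curl sketch for this step (assuming the Faber admin API at http://localhost:8021; <conn_id> is a placeholder for the connection_id copied from Faber's event):

# accept Alice's connection request\ncurl -X POST \"http://localhost:8021/connections/<conn_id>/accept-request\"\n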

Show me a screenshot - Accept Connection Request Show me a screenshot - Accept Connection Request"},{"location":"demo/AriesOpenAPIDemo/#review-the-connection-status-in-alices-agent","title":"Review the Connection Status in Alice's Agent","text":"

Switch over to the Alice browser tab.

Scroll to and execute GET /connections to see a list of Alice's connections, and the information tracked about each connection. You should see the one connection Alice\u2019s agent has, that it is with the Faber agent, and that its state is active.
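
The same check from the command line (a sketch assuming Alice's admin API at http://localhost:8031):

# list Alice's connections and their states\ncurl -s \"http://localhost:8031/connections\"\n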

Show me a screenshot - Alice Connection Status

As with Faber's side of the connection, Alice received a notification that Faber had accepted her connection request.

Show me the event"},{"location":"demo/AriesOpenAPIDemo/#review-the-connection-status-in-fabers-agent","title":"Review the Connection Status in Faber's Agent","text":"

You are connected! Switch to the Faber browser tab and run the same GET /connections endpoint to see Faber's view of the connection. Its state is also active. Note the connection_id; you\u2019ll need it later in the tutorial.

Show me a screenshot - Faber Connection Status"},{"location":"demo/AriesOpenAPIDemo/#basic-messaging-between-agents","title":"Basic Messaging Between Agents","text":"

Once you have a connection between two agents, you have a channel to exchange secure, encrypted messages. In fact, these underlying encrypted messages (similar to envelopes in a postal system) enable the delivery of messages that form the higher level protocols, such as issuing Credentials and providing Proofs. So, let's send a couple of messages that contain the simplest of content\u2014text. For this we will use the Basic Message protocol, Aries RFC 0095.

"},{"location":"demo/AriesOpenAPIDemo/#sending-a-message-from-alice-to-faber","title":"Sending a message from Alice to Faber","text":"

On Alice's swagger page, scroll to the POST /connections/{conn_id}/send-message endpoint. Click on Try it Out and enter a message in the body provided (for example {\"content\": \"Hello Faber\"}). Enter the connection id of Alice's connection in the field provided. Then click on Execute.
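
The curl equivalent (a sketch; <conn_id> is a placeholder for Alice's connection id, and the admin port is assumed to be 8031 as elsewhere in this demo):

# send a basic message from Alice to Faber\ncurl -X POST \"http://localhost:8031/connections/<conn_id>/send-message\" -H \"Content-Type: application/json\" -d '{\"content\": \"Hello Faber\"}'\n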

Show me a screenshot"},{"location":"demo/AriesOpenAPIDemo/#receiving-a-basic-message-faber","title":"Receiving a Basic Message (Faber)","text":"

How does Faber know that a message was sent? If you take a look at Faber's console window, you can see that Faber's agent has raised an Event that the message was received:

Show me a screenshot

Faber's controller application can take whatever action is necessary to process this message. It could trigger some application code, or it might just be something the Faber application needs to display to its user (for example a reminder about some action the user needs to take).

"},{"location":"demo/AriesOpenAPIDemo/#alices-agent-verifies-that-faber-has-received-the-message","title":"Alice's Agent Verifies that Faber has Received the Message","text":"

How does Alice get feedback that Faber has received the message? The same way - when Faber's agent acknowledges receipt of the message, Alice's agent raises an Event to let the Alice controller know:

Show me a screenshot

Again, Alice's agent can take whatever action is necessary, possibly just flagging the message as having been received.

"},{"location":"demo/AriesOpenAPIDemo/#preparing-to-issue-a-credential","title":"Preparing to Issue a Credential","text":"

The next thing we want to do in the demo is have the Faber agent issue a credential to Alice\u2019s agent. To this point, we have not used the Indy ledger at all. Establishing the connection and messaging has been done with pairwise DIDs based on the did:peer method. Verifiable credentials must be rooted in a public DID ledger to enable the presentation of proofs.

Before the Faber agent can issue a credential, it must register a DID on the Indy public ledger, publish a schema, and create a credential definition. In the \u201creal world\u201d, the Faber agent would do this before connecting with any other agents. And, since we are using the handy \"./run_demo faber\" (and \"./run_demo alice\") scripts to start up our agents, the Faber version of the script has already:

  1. registered a public DID and stored it on the ledger;
  2. created a schema and registered it on the ledger;
  3. created a credential definition and registered it on the ledger.

The schema and credential definition could also be created through this swagger interface.

We don't cover the details of those actions in this tutorial, but there are other materials available that go through these details.

To Do: Add a link to directions for doing this manually, and to where in the controller Python code this is done.

"},{"location":"demo/AriesOpenAPIDemo/#confirming-your-schema-and-credential-definition","title":"Confirming your Schema and Credential Definition","text":"

You can confirm the schema and credential definition were published by going back to the Indy ledger browser tab using Faber's public DID. You may have saved that from a previous step, but if not, here is an API call you can make to get that information. On Faber's swagger page, scroll to the GET /wallet/did/public endpoint. Click on Try it Out and Execute and you will see Faber's public DID.
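
Or, from the command line (a sketch assuming the Faber admin API at http://localhost:8021):

# fetch Faber's public DID\ncurl -s \"http://localhost:8021/wallet/did/public\"\n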

Show me a screenshot

On the ledger browser of the BCovrin ledger, click the Domain page, refresh, and paste the Faber public DID into the Filter: field:

Show me a screenshot

The ledger browser should refresh and display the four (4) transactions on the ledger related to this DID:

  • the initial DID registration
  • registration of the DID endpoint (Faber is an issuer so it has a public endpoint)
  • the registered schema
  • the registered credential definition
Show me the ledger transactions

You can also look up the Schema and Credential Definition information using Faber's swagger page. Use the GET /schemas/created endpoint to get a list of schemas, including the one schema_id that the Faber agent has defined. Keep this section of the Swagger page expanded as we'll need to copy the Id as part of starting the issue credential protocol coming next.

Show me a screenshot

Likewise use the GET /credential-definitions/created endpoint to get the list of the one (in this case) credential definition id created by Faber. Keep this section of the Swagger page expanded as we'll also need to copy the Id as part of starting the issue credential protocol coming next.
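
Both lookups can also be done with curl (a sketch assuming the Faber admin API at http://localhost:8021):

# list the schema and credential definition identifiers Faber created\ncurl -s \"http://localhost:8021/schemas/created\"\ncurl -s \"http://localhost:8021/credential-definitions/created\"\n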

Show me a screenshot

Hint: Remember how the schema and credential definitions were created for you as Faber started up? To do it yourself, use the POST versions of these endpoints. Now you know!

"},{"location":"demo/AriesOpenAPIDemo/#notes","title":"Notes","text":"

The one time setup work for issuing a credential is complete\u2014creating a DID, schema and credential definition. We can now issue 1 or 1 million credentials without having to do those steps again. Astute readers might note that we did not setup a revocation registry, so we cannot revoke the credentials we issue with that credential definition. You can\u2019t have everything in an \"easy\" tutorial!

"},{"location":"demo/AriesOpenAPIDemo/#issuing-a-credential","title":"Issuing a Credential","text":"

Triggering the issuance of a credential from the Faber agent to Alice\u2019s agent is done with another API call. In the Faber browser tab, scroll down to the POST /issue-credential-2.0/send endpoint and get ready to (but don\u2019t yet) execute the request. Before execution, you need to update most of the data elements in the JSON. We now cover how to update all the fields.

"},{"location":"demo/AriesOpenAPIDemo/#faber-preparing-to-issue-a-credential","title":"Faber - Preparing to Issue a Credential","text":"

First, get the connection Id for Faber's connection with Alice. You can copy that from the Faber terminal (the last received event includes it), or scroll up on the Faber swagger tab to the GET /connections API endpoint, execute, copy it and paste the connection_id value into the same field in the issue credential JSON.

Click here to see a screenshot

For the following fields, scroll on Faber's Swagger page to the listed endpoint, execute (if necessary), copy the response value and paste as the values of the following JSON items:

  • issuer_did the Faber public DID (use GET /wallet/did/public),
  • schema_id the Id of the schema Faber created (use GET /schemas/created) and,
  • cred_def_id the Id of the credential definition Faber created (use GET /credential-definitions/created)

into the filter section's indy subsection. Remove the \"dif\" subsection of the filter section within the JSON, and specify the remaining indy filter criteria as follows:

  • schema_version: set to the last segment of the schema_id, a three part version number that was randomly generated on startup of the Faber agent. Segments of the schema_id are separated by \":\"s.
  • schema_issuer_did: set to the same the value as in issuer_did,
  • schema_name: set to the second last segment of the schema_id, in this case degree schema

Finally, set the remaining values as follows:

  • auto_remove: set to true (no quotes); see note below
  • comment: set to any string. It's intended to let Alice know something about the credential being offered.
  • trace: set to false (no quotes). It's for troubleshooting, performance profiling, and/or diagnostics.

By setting auto_remove to true, ACA-Py will automatically remove the credential exchange record after the protocol completes. When implementing a controller, this is the likely setting to use to reduce agent storage usage, but implies if a record of the issuance of the credential is needed, the controller must save it somewhere else. For example, Faber College might extend their Student Information System, where they track all their students, to record when credentials are issued to students, and the Ids of the issued credentials.

"},{"location":"demo/AriesOpenAPIDemo/#faber-issuing-the-credential","title":"Faber - Issuing the Credential","text":"

Finally, we need to put the data values for the credential_preview section into the JSON. Copy the following and paste it between the square brackets of the attributes item, replacing what is there. Feel free to change the attribute value items, but don't change the labels or names:

      {\n        \"name\": \"name\",\n        \"value\": \"Alice Smith\"\n      },\n      {\n        \"name\": \"timestamp\",\n        \"value\": \"1234567890\"\n      },\n      {\n        \"name\": \"date\",\n        \"value\": \"2018-05-28\"\n      },\n      {\n        \"name\": \"degree\",\n        \"value\": \"Maths\"\n      },\n      {\n        \"name\": \"birthdate_dateint\",\n        \"value\": \"19640101\"\n      }\n

(Note that the birthdate above is used to present later on to pass an \"age proof\".)

OK, finally, you are ready to click Execute. The request should work, but if it doesn\u2019t - check your JSON! Did you get all the quotes and commas right?

Show me a screenshot - credential offer

To confirm the issuance worked, scroll up on the Faber Swagger page to the issue-credential v2.0 section and execute the GET /issue-credential-2.0/records endpoint. You should see a lot of information about the exchange just initiated.

"},{"location":"demo/AriesOpenAPIDemo/#alice-receives-credential","title":"Alice Receives Credential","text":"

Let\u2019s look at it from Alice\u2019s side. Alice's agent source code automatically handles credential offers by immediately responding with a credential request. Scroll back in the Alice terminal to where the credential issuance started. If you've followed the full script, that is just after where we used the basic message protocol to send text messages between Alice and Faber.

Alice's agent first received a notification of a Credential Offer, to which it responded with a Credential Request. Faber received the Credential Request and responded in turn with an Issue Credential message. Scroll down through the events from ACA-Py to the controller to see the notifications of those messages. Make sure you scroll all the way to the bottom of the terminal so you can continue with the process.

Show me a screenshot - issue credential"},{"location":"demo/AriesOpenAPIDemo/#alice-stores-credential-in-her-wallet","title":"Alice Stores Credential in her Wallet","text":"

We can check (via Alice's Swagger interface) the issue credential status by hitting the GET /issue-credential-2.0/records endpoint. Note that within the results, the cred_ex_record just received has a state of credential-received, but not yet done. Let's address that.

Show me a screenshot - check credential exchange status

First, we need the cred_ex_id from the API call response above, or from the event in the terminal; use the endpoint POST /issue-credential-2.0/records/{cred_ex_id}/store to tell Alice's ACA-Py instance to store the credential in agent storage (aka the Indy Wallet). Note that in the JSON for that endpoint we can provide a credential Id to store in the wallet by setting a value in the credential_id string. A real controller might use the cred_ex_id for that, or use something else that makes sense in the agent's business scenario (but the agent generates a random credential identifier by default).
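
A curl sketch of the store call (assuming Alice's admin API at http://localhost:8031; <cred_ex_id> is a placeholder, and the credential_id value is just an illustrative name you might choose):

# store the received credential in Alice's wallet under a chosen id\ncurl -X POST \"http://localhost:8031/issue-credential-2.0/records/<cred_ex_id>/store\" -H \"Content-Type: application/json\" -d '{\"credential_id\": \"my-faber-credential\"}'\n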

Show me a screenshot - store credential

Now, in Alice\u2019s swagger browser tab, find the credentials section and within that, execute the GET /credentials endpoint. There should be a list of credentials held by Alice, with just a single entry, the credential issued from the Faber agent. Note that the element referent is the value of the credential_id element used in other calls. referent is the name returned in the indy-sdk call to get the set of credentials for the wallet and ACA-Py code does not change it in the response.
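
Or with curl (a sketch assuming Alice's admin API at http://localhost:8031):

# list the credentials held in Alice's wallet\ncurl -s \"http://localhost:8031/credentials\"\n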

"},{"location":"demo/AriesOpenAPIDemo/#faber-receives-acknowledgment-that-the-credential-was-received","title":"Faber Receives Acknowledgment that the Credential was Received","text":"

On the Faber side, we can see by scanning back in the terminal that it received events notifying it that the credential was issued and accepted.

Show me Faber's event activity

Note that once the credential processing completed, Faber's agent deleted the credential exchange record from its wallet. This can be confirmed by executing the endpoint GET /issue-credential-2.0/records.

Show me a screenshot

You\u2019ve done it, issued a credential! w00t!

"},{"location":"demo/AriesOpenAPIDemo/#issue-credential-notes","title":"Issue Credential Notes","text":"

Those that know something about the Indy process for issuing a credential and the DIDComm Issue Credential protocol know that there are multiple steps to issuing credentials, a back and forth between the issuer and the holder to (at least) offer, request, and issue the credential. All of those messages happened, but the two agents took care of those details rather than bothering the controller (you, in this case) with managing the back and forth.

  • On the Faber agent side, this is because we used the POST /issue-credential-2.0/send administrative message, which handles the back and forth for the issuer automatically. We could have used the other /issue-credential-2.0/ endpoints to allow the controller to handle each step of the protocol.
  • On Alice's agent side, this is because the handler for the issue_credential_v2_0 event always responds to credential offers with corresponding credential requests. A minimal sketch of such a handler follows this list.
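
To make the holder side concrete, here is a minimal, hypothetical sketch of such a handler as a tiny Flask controller. It assumes Alice's ACA-Py was started with --webhook-url pointing at this server (ACA-Py posts webhook events to <webhook-url>/topic/<topic>/) and that her admin API is on localhost:8031; the port 8032 is illustrative:

import requests\nfrom flask import Flask, request\n\napp = Flask(__name__)\nADMIN_URL = \"http://localhost:8031\"  # Alice's admin API\n\n# ACA-Py posts webhook events to <webhook-url>/topic/<topic>/\n@app.route(\"/topic/issue_credential_v2_0/\", methods=[\"POST\"])\ndef handle_cred_event():\n    event = request.get_json()\n    if event.get(\"state\") == \"offer-received\":\n        cred_ex_id = event[\"cred_ex_id\"]\n        # Respond to the offer with a credential request\n        requests.post(\n            f\"{ADMIN_URL}/issue-credential-2.0/records/{cred_ex_id}/send-request\",\n            json={},\n        )\n    return \"\", 200\n\nif __name__ == \"__main__\":\n    app.run(port=8032)  # port is illustrative\n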
"},{"location":"demo/AriesOpenAPIDemo/#bonus-points","title":"Bonus Points","text":"

If you would like to perform all of the issuance steps manually on the Faber agent side, use a sequence of the other /issue-credential-2.0/ messages. Use the GET /issue-credential-2.0/records endpoint to both check the credential exchange state as you progress through the protocol and to find some of the data you'll need in executing the sequence of requests.

The following table lists endpoints that you need to call (\"REST service\") and callbacks that your agent will receive (\"callback\") that you need to respond to. See the detailed API docs.

| Protocol Step | Faber (Issuer) | Alice (Holder) | Notes |
| --- | --- | --- | --- |
| Send Credential Offer | POST /issue-credential-2.0/send-offer |  | REST service |
| Receive Offer |  | /issue_credential_v2_0/ | callback |
| Send Credential Request |  | POST /issue-credential-2.0/records/{cred_ex_id}/send-request | REST service |
| Receive Request | /issue_credential_v2_0/ |  | callback |
| Issue Credential | POST /issue-credential-2.0/records/{cred_ex_id}/issue |  | REST service |
| Receive Credential |  | /issue_credential_v2_0/ | callback |
| Store Credential |  | POST /issue-credential-2.0/records/{cred_ex_id}/store | REST service |
| Receive Acknowledgement | /issue_credential_v2_0/ |  | callback |
| Store Credential Id |  |  | application function |
"},{"location":"demo/AriesOpenAPIDemo/#requestingpresenting-a-proof","title":"Requesting/Presenting a Proof","text":"

Alice now has her Faber credential. Let\u2019s have the Faber agent send a request for a presentation (a proof) using that credential. This should be pretty easy for you at this point.

"},{"location":"demo/AriesOpenAPIDemo/#faber-sends-a-proof-request","title":"Faber sends a Proof Request","text":"

From the Faber browser tab, get ready to execute the POST /present-proof-2.0/send-request endpoint. After hitting Try it Now, erase the data in the block labelled \"Edit Value Model\", replacing it with the text below. Once that is done, replace in the JSON each instance of cred_def_id (there are four instances) and connection_id with the values found using the same techniques we've used earlier in this tutorial. Both can be found by scrolling back a little in the Faber terminal, or you can execute API endpoints we've already covered. You can also change the value of the comment item to whatever you want.

{\n  \"comment\": \"This is a comment about the reason for the proof\",\n  \"connection_id\": \"e469e0f3-2b4d-4b12-9ac7-293f23e8a816\",\n  \"presentation_request\": {\n    \"indy\": {\n      \"name\": \"Proof of Education\",\n      \"version\": \"1.0\",\n      \"requested_attributes\": {\n        \"0_name_uuid\": {\n          \"name\": \"name\",\n          \"restrictions\": [\n            {\n              \"cred_def_id\": \"SsX9siFWXJyCAmXnHY514N:3:CL:8:faber.agent.degree_schema\"\n            }\n          ]\n        },\n        \"0_date_uuid\": {\n          \"name\": \"date\",\n          \"restrictions\": [\n            {\n              \"cred_def_id\": \"SsX9siFWXJyCAmXnHY514N:3:CL:8:faber.agent.degree_schema\"\n            }\n          ]\n        },\n        \"0_degree_uuid\": {\n          \"name\": \"degree\",\n          \"restrictions\": [\n            {\n              \"cred_def_id\": \"SsX9siFWXJyCAmXnHY514N:3:CL:8:faber.agent.degree_schema\"\n            }\n          ]\n        },\n        \"0_self_attested_thing_uuid\": {\n          \"name\": \"self_attested_thing\"\n        }\n      },\n      \"requested_predicates\": {\n        \"0_age_GE_uuid\": {\n          \"name\": \"birthdate_dateint\",\n          \"p_type\": \"<=\",\n          \"p_value\": 20030101,\n          \"restrictions\": [\n            {\n              \"cred_def_id\": \"SsX9siFWXJyCAmXnHY514N:3:CL:8:faber.agent.degree_schema\"\n            }\n          ]\n        }\n      }\n    }\n  }\n}\n

(Note that the birthdate requested above is used as an \"age proof\", the calculation is something like now() - years(18), and the presented birthdate must be on or before this date. You can see the calculation in action in the faber.py demo code.)
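
For reference, here is a sketch of that calculation in Python (the faber.py code may differ in detail):

from datetime import date\n\n# \"18 or older\": anyone born on or before this date is at least 18 today\ntoday = date.today()\ncutoff = today.replace(year=today.year - 18)  # ignoring the Feb 29 edge case\np_value = int(cutoff.strftime(\"%Y%m%d\"))\nprint(p_value)  # a dateint like the 20030101 used above\n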

Notice that the proof request is using a predicate to check if Alice is older than 18 without asking for her age. Not sure what this has to do with her education level! Click Execute and cross your fingers. If the request fails check your JSON!

Show me a screenshot - send proof request"},{"location":"demo/AriesOpenAPIDemo/#alice-responding-to-the-proof-request","title":"Alice - Responding to the Proof Request","text":"

As before, Alice receives a webhook event from her agent telling her she has received a Proof Request. In our scenario, the ACA-Py instance automatically selects a matching credential and responds with a Proof.

Show me Alice's event activity

In a real scenario, for example if Alice had a mobile agent on her smartphone, the agent would prompt Alice whether she wanted to respond or not.
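
If the controller did not auto-respond, it could handle the decision itself. The following is a rough sketch using the endpoints listed in the table further below; the exact body shape of the indy presentation spec is an assumption here, so check the Swagger page for the schema:

import requests\n\nADMIN_URL = \"http://localhost:8031\"  # Alice's admin API\npres_ex_id = \"paste-the-pres_ex_id-here\"\n\n# Find wallet credentials that satisfy the proof request\ncreds = requests.get(\n    f\"{ADMIN_URL}/present-proof-2.0/records/{pres_ex_id}/credentials\"\n).json()\ncred_id = creds[0][\"cred_info\"][\"referent\"]  # pick the first match\n\n# Send the presentation (indy format); repeat entries for each\n# requested attribute (name, date, degree) and predicate\nrequests.post(\n    f\"{ADMIN_URL}/present-proof-2.0/records/{pres_ex_id}/send-presentation\",\n    json={\n        \"indy\": {\n            \"requested_attributes\": {\"0_name_uuid\": {\"cred_id\": cred_id, \"revealed\": True}},\n            \"requested_predicates\": {\"0_age_GE_uuid\": {\"cred_id\": cred_id}},\n            \"self_attested_attributes\": {\"0_self_attested_thing_uuid\": \"my value\"},\n        }\n    },\n)\n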

"},{"location":"demo/AriesOpenAPIDemo/#faber-verifying-the-proof","title":"Faber - Verifying the Proof","text":"

Note that in the response, the state is request-sent. That is because when the HTTP response was generated (immediately after sending the request), Alice's agent had not yet responded to the request. We\u2019ll have to do another request to verify the presentation worked. Copy the value of the pres_ex_id field from the event in the Faber terminal and use it in executing the GET /present-proof-2.0/records/{pres_ex_id} endpoint. That should return a result showing the state as done and verified as true. Proof positive!
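
In controller code that second request is typically a short poll; a sketch (assuming Faber's admin API is on localhost:8021, its usual port in this demo):

import time\nimport requests\n\nADMIN_URL = \"http://localhost:8021\"  # Faber's admin API (assumed demo port)\npres_ex_id = \"paste-the-pres_ex_id-here\"\n\n# Poll until Alice's presentation has arrived and been verified\nwhile True:\n    record = requests.get(\n        f\"{ADMIN_URL}/present-proof-2.0/records/{pres_ex_id}\"\n    ).json()\n    if record[\"state\"] == \"done\":\n        print(\"verified:\", record[\"verified\"])  # expect \"true\"\n        break\n    time.sleep(1)\n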

You can see some of Faber's activity below:

Show me Faber's event activity"},{"location":"demo/AriesOpenAPIDemo/#present-proof-notes","title":"Present Proof Notes","text":"

As with the issue credential process, the agents handled some of the presentation steps without bothering the controller. In this case, Alice's agent processed the presentation request automatically through its handler for the present_proof_v2_0 event, and her wallet contained exactly one credential that satisfied the presentation request from the Faber agent. Similarly, the Faber agent's handler for the event responds automatically, so on receipt of the presentation it verifies the presentation and updates the status accordingly.

"},{"location":"demo/AriesOpenAPIDemo/#bonus-points_1","title":"Bonus Points","text":"

If you would like to perform all of the proof request/response steps manually, you can call all of the individual /present-proof-2.0 messages.

The following table lists endpoints that you need to call (\"REST service\") and callbacks that your agent will receive (\"callback\") that you need to respond to. See the detailed API docs.

| Protocol Step | Faber (Verifier) | Alice (Holder/Prover) | Notes |
| --- | --- | --- | --- |
| Send Proof Request | POST /present-proof-2.0/send-request |  | REST service |
| Receive Proof Request |  | /present_proof_v2_0 | callback (webhook) |
| Find Credentials |  | GET /present-proof-2.0/records/{pres_ex_id}/credentials | REST service |
| Select Credentials |  |  | application or user function |
| Send Proof |  | POST /present-proof-2.0/records/{pres_ex_id}/send-presentation | REST service |
| Receive Proof | /present_proof_v2_0 |  | callback (webhook) |
| Validate Proof | POST /present-proof-2.0/records/{pres_ex_id}/verify-presentation |  | REST service |
| Save Proof |  |  | application data |
"},{"location":"demo/AriesOpenAPIDemo/#conclusion","title":"Conclusion","text":"

That\u2019s the OpenAPI-based tutorial. Feel free to play with the API and learn how it works. More importantly, as you implement a controller, use the OpenAPI user interface to test out the calls you will be using as you go. The list of API calls is grouped by protocol and if you are familiar with the protocols (Aries RFCs) the API call names should be pretty obvious.

One limitation of you being the controller is that you don't see the events from the agent that a controller program sees. For example, you, as Alice's agent, are not notified when Faber initiates the sending of a Credential. Some of those things show up in the terminal as messages, but others you just have to know have happened based on a successful API call.

"},{"location":"demo/AriesPostmanDemo/","title":"Aries Postman Demo","text":"

In these demos we will use Postman as our controller client.

"},{"location":"demo/AriesPostmanDemo/#contents","title":"Contents","text":"
  • Getting Started
  • Installing Postman
  • Creating a workspace
  • Importing the environment
  • Importing the collections
  • Postman basics
  • Experimenting with the vc-api endpoints
  • Register new dids
  • Issue credentials
  • Store and retrieve credentials
  • Verify credentials
  • Prove a presentation
  • Verify a presentation
"},{"location":"demo/AriesPostmanDemo/#getting-started","title":"Getting Started","text":"

Welcome to the Postman demo. This is an addition to the available OpenAPI demo, providing a set of collections to test and demonstrate various ACA-Py functionalities.

"},{"location":"demo/AriesPostmanDemo/#installing-postman","title":"Installing Postman","text":"

Download, install, and launch Postman.

"},{"location":"demo/AriesPostmanDemo/#creating-a-workspace","title":"Creating a workspace","text":"

Create a new Postman workspace labeled \"acapy-demo\".

"},{"location":"demo/AriesPostmanDemo/#importing-the-environment","title":"Importing the environment","text":"

In the Environments tab on the left, click the Import button. You can paste this link, which points to the environment file in the ACA-Py repository.

Make sure you have the environment set as your active environment.

"},{"location":"demo/AriesPostmanDemo/#importing-the-collections","title":"Importing the collections","text":"

In the Collections tab on the left, click the Import button.

The following collections are available:

  • vc-api
"},{"location":"demo/AriesPostmanDemo/#postman-basics","title":"Postman basics","text":"

Once you are set up, you will be ready to run Postman requests. The order of the requests is important, since some values are saved dynamically as environment variables for subsequent calls.

You have your environment where you define variables to be accessed by your collections.

Each collection consists of a series of requests which can be configured independently.

"},{"location":"demo/AriesPostmanDemo/#experimenting-with-the-vc-api-endpoints","title":"Experimenting with the vc-api endpoints","text":"

Make sure you have a demo agent available. You can use the following command to deploy one:

LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber --bg\n

When running for the first time, please allow some time for the images to build.

"},{"location":"demo/AriesPostmanDemo/#register-new-dids","title":"Register new dids","text":"

The first 2 requests for this collection will create 2 did:keys. We will use those in subsequent calls to issue Ed25519Signature2020 and BbsBlsSignature2020 credentials. Run the 2 did creation requests. These requests will use the /wallet/did/create endpoint.
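
The same creation calls can be scripted outside Postman; a sketch, assuming the demo agent's admin API is on localhost:8021:

import requests\n\nADMIN_URL = \"http://localhost:8021\"  # demo agent's admin API (assumed)\n\n# A did:key for Ed25519Signature2020 credentials\ned_did = requests.post(\n    f\"{ADMIN_URL}/wallet/did/create\",\n    json={\"method\": \"key\", \"options\": {\"key_type\": \"ed25519\"}},\n).json()[\"result\"][\"did\"]\n\n# A did:key for BbsBlsSignature2020 credentials\nbbs_did = requests.post(\n    f\"{ADMIN_URL}/wallet/did/create\",\n    json={\"method\": \"key\", \"options\": {\"key_type\": \"bls12381g2\"}},\n).json()[\"result\"][\"did\"]\nprint(ed_did, bbs_did)\n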

"},{"location":"demo/AriesPostmanDemo/#issue-credentials","title":"Issue credentials","text":"

For issuing, you must input a w3c compliant json-ld credential and issuance options in your request body. The issuer field must be a registered did from the agent's wallet. The suite will be derived from the did method.

{\n    \"credential\":   { \n        \"@context\": [\n            \"https://www.w3.org/2018/credentials/v1\"\n        ],\n        \"type\": [\n            \"VerifiableCredential\"\n        ],\n        \"issuer\": \"did:example:123\",\n        \"issuanceDate\": \"2022-05-01T00:00:00Z\",\n        \"credentialSubject\": {\n            \"id\": \"did:example:123\"\n        }\n    },\n    \"options\": {}\n}\n

Some examples have been pre-configured in the collection. Run the requests and inspect the results. Experiment with different credentials.
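
Equivalently, a controller can post the same body itself. A sketch, assuming the vc-api routes are mounted under /vc (as in recent ACA-Py releases) and that the issuer is replaced with a did registered in your wallet:

import requests\n\nADMIN_URL = \"http://localhost:8021\"  # demo agent's admin API (assumed)\n\ncredential = {\n    \"@context\": [\"https://www.w3.org/2018/credentials/v1\"],\n    \"type\": [\"VerifiableCredential\"],\n    \"issuer\": \"did:key:replace-with-a-registered-did\",\n    \"issuanceDate\": \"2022-05-01T00:00:00Z\",\n    \"credentialSubject\": {\"id\": \"did:example:123\"},\n}\nresp = requests.post(\n    f\"{ADMIN_URL}/vc/credentials/issue\",\n    json={\"credential\": credential, \"options\": {}},\n)\nprint(resp.json())  # the signed verifiable credential\n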

"},{"location":"demo/AriesPostmanDemo/#store-and-retrieve-credentials","title":"Store and retrieve credentials","text":"

Your last issued credential will be stored as an environment variable for subsequent calls, such as storing, verifying and including in a presentation.

Try running the store credential request, then retrieve the credential with the list and fetch requests. Try going back and forth between the issuance endpoints and the storage endpoints to store multiple different credentials.

"},{"location":"demo/AriesPostmanDemo/#verify-credentials","title":"Verify credentials","text":"

You can verify your last issued credential with this endpoint or any issued credential you provide to it.

"},{"location":"demo/AriesPostmanDemo/#prove-a-presentation","title":"Prove a presentation","text":"

Proving a presentation is an action where a holder will prove ownership of a credential by signing or demonstrating authority over the document.

"},{"location":"demo/AriesPostmanDemo/#verify-a-presentation","title":"Verify a presentation","text":"

The final request is to verify a presentation.

"},{"location":"demo/Endorser/","title":"Endorser Demo","text":"

There are two ways to run the alice/faber demo with endorser support enabled.

"},{"location":"demo/Endorser/#run-faber-as-an-author-with-a-dedicated-endorser-agent","title":"Run Faber as an Author, with a dedicated Endorser agent","text":"

This approach runs Faber as an un-privileged agent, and starts a dedicated Endorser Agent in a sub-process (an instance of ACA-Py) to endorse Faber's transactions.

Start a VON Network instance and a Tails server:

  • Follow the Building and Starting section of the VON Network Tutorial to get the ledger started. You can leave off the --logs option if you want to use the same terminal for running both VON Network and the Tails server. When you are finished with VON Network, follow the Stopping And Removing a VON Network instructions.
  • Run an AnonCreds revocation registry tails server in order to support revocation by following the instructions in the Alice gets a Phone demo.

Start up Faber as Author (note the tails file size override, to allow testing of the revocation registry roll-over):

TAILS_FILE_COUNT=5 ./run_demo faber --endorser-role author --revocation\n

Start up Alice as normal:

./run_demo alice\n

You can run all of Faber's functions as normal - if you watch the console you will see that all ledger operations go through the endorser workflow.

If you issue more than 5 credentials, you will see Faber creating a new revocation registry (including endorser operations).

"},{"location":"demo/Endorser/#run-alice-as-an-author-and-faber-as-an-endorser","title":"Run Alice as an Author and Faber as an Endorser","text":"

This approach sets up the endorser roles to allow manual testing using the agents' swagger pages:

  • Faber runs as an Endorser (all of Faber's functions - issue credential, request proof, etc.) run normally, since Faber has ledger write access
  • Alice starts up with a DID with Author privileges (no ledger write access) and Faber is set up as Alice's Endorser

Start a VON Network and a Tails server using the instructions above.

Start up Faber as Endorser:

TAILS_FILE_COUNT=5 ./run_demo faber --endorser-role endorser --revocation\n

Start up Alice as Author:

TAILS_FILE_COUNT=5 ./run_demo alice --endorser-role author --revocation\n

Copy the invitation from Faber to Alice to complete the connection.

Then in the Alice shell, select option \"D\" and copy Faber's DID (it is the DID displayed on Faber agent startup).

This starts up the ACA-Py agents with the endorser role set (via the new command-line args) and sets up the connection between the 2 agents with appropriate configuration.

Then, in the Alice swagger page you can create a schema and cred def, and all the endorser steps will happen automatically. You don't need to specify a connection id or explicitly request endorsement (ACA-Py does it all automatically based on the startup args).
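
For example, the schema creation you would do from the swagger page looks like this from a controller (a sketch; Alice's admin API is on port 8031 in the demo, and the schema contents are illustrative):

import requests\n\nADMIN_URL = \"http://localhost:8031\"  # Alice's admin API\n\n# With the endorser startup args, this ledger write is automatically\n# routed through Faber for endorsement before being written\nresp = requests.post(\n    f\"{ADMIN_URL}/schemas\",\n    json={\n        \"schema_name\": \"prefs\",\n        \"schema_version\": \"1.0\",\n        \"attributes\": [\"score\"],\n    },\n)\nprint(resp.json())\n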

If you check the endorser transaction records in either Alice or Faber you can see that the endorser protocol executed automatically and that the appropriate transactions were endorsed before being written to the ledger.

"},{"location":"demo/ReusingAConnection/","title":"Reusing a Connection","text":"

The Aries RFC 0434 Out of Band protocol enables the concept of reusing a connection such that when using RFC 0023 DID Exchange to establish a connection with an agent with which you already have a connection, you can reuse the existing connection instead of creating a new one. This is something you couldn't do with the older RFC 0160 Connection Protocol that we used in the early days of Aries. It was a pain, and made for a lousy user experience, as on every visit to an existing contact, the invitee got a new connection.

The requirements on your invitations (such as in the example below) are:

  • The invitation services item MUST be a resolvable DID.
  • Put another way, the invitation services item MUST NOT be an inline service definition.
  • The DID in the invitation services item MUST be the same one in every invitation.

Example invitation:

{\n    \"@type\": \"https://didcomm.org/out-of-band/1.1/invitation\",\n    \"@id\": \"77489d63-caff-41fe-a4c1-ec7e2ff00695\",\n    \"label\": \"faber.agent\",\n    \"handshake_protocols\": [\n        \"https://didcomm.org/didexchange/1.0\"\n    ],\n    \"services\": [\n        \"did:sov:4JiUsoK85pVkkB1bAPzFaP\"\n    ]\n}\n

Here's the flow that demonstrates where reuse helps. For simplicity, we'll use the terms \"Issuer\" and \"Wallet\" in this example, but it applies to any connection between any two agents (the inviter and the invitee) that establish connections with one another.

  • The Wallet user is using a browser on the Issuer's website and gets to the point where they are going to be offered a credential. As part of that flow, they are presented with a QR code that they scan with their wallet app.
  • The QR contains an RFC 0434 Out of Band invitation to connect that the Wallet processes as the invitee.
  • The Wallet uses the information in the invitation to send an RFC 0023 DID Exchange request DIDComm message back to the Issuer to initiate establishing a connection.
  • The Issuer responds back to the request with a response message, and the connection is established.
  • Later, the Wallet user returns to the Issuer's website, and does something (perhaps starts the process to get another credential) that results in the same QR code being displayed, and again the user scans the QR code with their Wallet app.
  • The Wallet recognizes (based on the DID in the services item in the invitation -- see example below) that it already has a connection to the Issuer, so instead of sending a DID Exchange request message back to the Issuer, they send an RFC 0434 Out of Band reuse DIDComm message, and both parties know to use the existing connection.
  • Had the Wallet used the DID Exchange request message, a new connection would have been established.

The RFC 0434 Out of Band protocol requirement that enables the reuse message to be used by the invitee (the Wallet in the flow above) is that the service in the invitation MUST be a resolvable DID that is the same in all of the invitations. In the example invitation above, the DID is a did:sov DID that is resolvable on a public Hyperledger Indy network. The DID could also be a Peer DID of type 2 or 4, both of which encode the entire DIDDoc contents into the DID identifier (thus they are \"resolvable DIDs\"). What cannot be used are the old \"unqualified\" DIDs that were commonly used in Aries prior to 2024, and Peer DID type 1. Both of those DID types include both an identifier and a DIDDoc in the services item of the Out of Band invitation. As noted in the Out of Band specification, reuse cannot be used with such DID types even if the contents are the same.

The use of connection reuse can be demonstrated with the Alice / Faber demos as follows. We assume you are already somewhat familiar with your options for running the Alice Faber Demo (e.g. locally or in a browser). Follow those instructions up to the point where you are about to start the Faber and Alice agents.

  1. On a command line, run Faber with these parameters: ./run_demo faber --reuse-connections --public-did-connections --events.
  2. On a second command line, run Alice as normal, perhaps with the events option: ./run_demo alice --reuse-connections --events
  3. Copy the invitation from the Faber terminal and paste it into the Alice terminal at the prompt.
  4. Verify that the connection was established.
  5. If you want, go to the Alice OpenAPI screen (port 8031, path api/docs), and then use the GET Connections to see that Alice has one connection to Faber.
  6. In the Faber terminal, type 4 to get a prompt for a new connection. This will generate a new invitation with the same public DID.
  7. In the Alice terminal, type 4 to get a prompt for a new connection, and paste the new invitation.
  8. Note from the webhook events in the Faber terminal that the reuse message is received from Alice, and as a result, no new connection was created.
  9. Execute again the GET Connections endpoint on the Alice OpenAPI screen to confirm that there is still just one established connection.
  10. Try running the demo again without the --reuse-connections parameter and compare the services value in the new invitation vs. what was generated in Steps 3 and 7. It is not a DID, but rather a one time use, inline DIDDoc item.

While in the demo Faber uses in its invitations the same DID it publishes as an issuer (and uses in creating the schema and Cred Def for the demo), Faber could use any resolvable (not inline) DID, including DID Peer type 2 or 4 DIDs, as long as the DID is the same in every invitation. It is the fact that the DID is always the same that tells the invitee that it can reuse an existing connection.

For example, to run faber with connection reuse using a non-public DID:

./run_demo faber --reuse-connections --events\n

To run faber using a did:peer and reusable connections:

./run_demo faber --reuse-connections --emit-did-peer-2 --events\n

To run this demo using a multi-use invitation (from Faber):

./run_demo faber --reuse-connections --emit-did-peer-2 --multi-use-invitations --events\n
"},{"location":"deploying/AnonCredsWalletType/","title":"AnonCreds-RS Support","text":"

A new wallet type has been added to ACA-Py to support the new anoncreds-rs library:

--wallet-type askar-anoncreds\n

When ACA-Py is run with this wallet type it will run with an Askar format wallet (and Askar libraries) but will use anoncreds-rs instead of credx.

There is a new package under aries_cloudagent/anoncreds with code that supports the new library.

There are new endpoints (under /anoncreds) for creating a Schema and Credential Definition. However, the new anoncreds code is integrated into the existing Credential and Presentation endpoints (V2.0 endpoints only).
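
A sketch of calling the new schema endpoint (the body shape is an assumption based on the anoncreds data model; check the Swagger page of an askar-anoncreds agent for the exact schema):

import requests\n\nADMIN_URL = \"http://localhost:8021\"  # admin API of an askar-anoncreds agent (assumed)\nissuer_did = \"replace-with-your-issuer-did\"\n\n# Create a schema via the new /anoncreds routes\nschema = requests.post(\n    f\"{ADMIN_URL}/anoncreds/schema\",\n    json={\n        \"schema\": {\n            \"issuerId\": issuer_did,\n            \"name\": \"degree schema\",\n            \"version\": \"1.0\",\n            \"attrNames\": [\"name\", \"degree\", \"date\"],\n        }\n    },\n)\nprint(schema.json())\n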

Within the protocols, there are new handler libraries to support the new anoncreds format (these are in parallel to the existing indy libraries).

The existing indy code is in:

aries_cloudagent/protocols/issue_credential/v2_0/formats/indy/handler.py\naries_cloudagent/protocols/indy/anoncreds/pres_exch_handler.py\naries_cloudagent/protocols/present_proof/v2_0/formats/indy/handler.py\n

The new anoncreds code is in:

aries_cloudagent/protocols/issue_credential/v2_0/formats/anoncreds/handler.py\naries_cloudagent/protocols/present_proof/anoncreds/pres_exch_handler.py\naries_cloudagent/protocols/present_proof/v2_0/formats/anoncreds/handler.py\n

The Indy handler checks to see if the wallet type is askar-anoncreds and if so delegates the calls to the anoncreds handler, for example:

        # Temporary shim while the new anoncreds library integration is in progress\n        wallet_type = profile.settings.get_value(\"wallet.type\")\n        if wallet_type == \"askar-anoncreds\":\n            self.anoncreds_handler = AnonCredsPresExchangeHandler(profile)\n

... and then:

        # Temporary shim while the new anoncreds library integration is in progress\n        if self.anoncreds_handler:\n            return self.anoncreds_handler.get_format_identifier(message_type)\n

To run the alice/faber demo using the new anoncreds library, start the demo with:

--wallet-type askar-anoncreds\n

There are no anoncreds-specific integration tests; for the new anoncreds functionality, the agents within the integration tests are started with:

--wallet-type askar-anoncreds\n

Everything should just work!!!

Theoretically, the Aries Agent Test Harness (ATH) should work with anoncreds as well, by setting the wallet type (see https://github.com/hyperledger/aries-agent-test-harness#extra-backchannel-specific-parameters).

"},{"location":"deploying/AnonCredsWalletType/#revocation-new-in-anoncreds","title":"Revocation (new in anoncreds)","text":"

The changes are significant. Notably:

  • The old way: from Indy you got the timestamp of the RevRegEntry used, the accumulator, and the \"deltas\" -- a list of revoked and a list of unrevoked credentials for a given range. (Exactly what was passed to the AnonCreds library code for building the presentation is not entirely clear.)
  • In the new way, the AnonCreds library expects the identifier for the RevRegEntry used (aka the timestamp), the accumulator, and the full state (0s and 1s) of the revocation status of all credentials in the registry.
  • The conversion from delta to full state must be handled in the Indy resolver -- not in the \"generic\" ACA-Py code, since the other ledgers automagically provide the full state. In fact, we're likely to update Indy VDR to always provide the full state. The \"common\" (post resolver) code should get back from the resolver the full state (a sketch of the delta-to-full-state idea follows this list).
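
Here is a sketch of what that delta-to-full-state conversion amounts to (illustrative only; the real logic lives in the Indy resolver, and Indy credential indices are 1-based):

def delta_to_full_state(max_cred_num, revoked, unrevoked=()):\n    # Start from \"all unrevoked\" and apply the delta lists\n    state = [0] * max_cred_num\n    for idx in revoked:\n        state[idx - 1] = 1\n    for idx in unrevoked:\n        state[idx - 1] = 0\n    return state\n\nprint(delta_to_full_state(5, revoked=[2, 4]))  # [0, 1, 0, 1, 0]\n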

The Tails File changes are minimal -- nothing about the file itself changed. What changed:

  • the tails-file-server can be published to WITHOUT knowing the ID of the RevRegEntry, since that is not known when the tails file is generated/published. See: https://github.com/bcgov/indy-tails-server/pull/53 -- basically, by publishing based on the hash.
  • The tails-file is not needed by the issuer after generation. It used to be needed for issuing and revoking credentials. Those are now done without the tails file. See: https://github.com/hyperledger/aries-cloudagent-python/pull/2302/files. That code is already in Main, so you should have it.
"},{"location":"deploying/AnonCredsWalletType/#outstanding-work","title":"Outstanding work","text":"
  • revocation notifications (not sure if they're included in anoncreds-rs updates, haven't tested them ...)
  • revocation support - complete the revocation implementation (support for unhappy path scenarios)
  • testing - various scenarios like mediation, multitenancy etc.

  • unit tests (in the new anoncreds package) (see https://github.com/hyperledger/aries-cloudagent-python/pull/2596/commits/229ffbba209aff0ea7def5bad6556d93057f3c2a)

  • unit tests (review and possibly update unit tests for the credential and presentation integration)
  • endorsement (not implemented with new anoncreds code)
  • wallet upgrade (askar to askar-anoncreds)
  • update V1.0 versions of the Credential and Presentation endpoints to use anoncreds
  • any other anoncreds issues - https://github.com/hyperledger/aries-cloudagent-python/issues?q=is%3Aopen+is%3Aissue+label%3AAnonCreds
"},{"location":"deploying/AnonCredsWalletType/#retiring-old-indy-and-askar-credx-code","title":"Retiring old Indy and Askar (credx) Code","text":"

The main changes for the Credential and Presentation support are in the following two files:

aries_cloudagent/protocols/issue_credential/v2_0/messages/cred_format.py\naries_cloudagent/protocols/present_proof/v2_0/messages/pres_format.py\n

The INDY handler just needs to be re-pointed to the new anoncreds handler, and then all the old Indy code can be retired.

The new code is already in place (in comments). For example for the Credential handler:

        To make the switch from indy to anoncreds replace the above with the following\n        INDY = FormatSpec(\n            \"hlindy/\",\n            DeferLoad(\n                \"aries_cloudagent.protocols.present_proof.v2_0\"\n                \".formats.anoncreds.handler.AnonCredsPresExchangeHandler\"\n            ),\n        )\n

There is a bunch of duplicated code, i.e. the new anoncreds code was added either as new classes (as above) or as new methods within an existing class.

Some new methods were added within the Ledger class.

New unit tests were added - in some cases as methods within existing test classes, and in some cases as new classes (whichever was easiest at the time).

"},{"location":"deploying/ContainerImagesAndGithubActions/","title":"Container Images and Github Actions","text":"

Aries Cloud Agent - Python is most frequently deployed using containers. From the first release of ACA-Py up through 0.7.4, much of the community has built their Aries stack using the container images graciously provided by BC Gov and hosted through their bcgovimages docker hub account. These images have been critical to the adoption of not only ACA-Py but also Hyperledger Aries and SSI more generally.

Recognizing how critical these images are to the success of ACA-Py and consistent with Hyperledger's commitment to open collaboration, container images are now built and published directly from the Aries Cloud Agent - Python project repository and made available through the Github Packages Container Registry.

"},{"location":"deploying/ContainerImagesAndGithubActions/#image","title":"Image","text":"

This project builds and publishes the ghcr.io/hyperledger/aries-cloudagent-python image. Multiple variants are available; see Tags.

"},{"location":"deploying/ContainerImagesAndGithubActions/#tags","title":"Tags","text":"

ACA-Py is a foundation for building decentralized identity applications; to this end, there are multiple variants of ACA-Py built to suit the needs of a variety of environments and workflows. The following variants exist:

  • \"Standard\" - The default configuration of ACA-Py, including:
      • Aries Askar for secure storage
      • Indy VDR for Indy ledger communication
      • Indy Shared Libraries for AnonCreds

In the past, two image variants were published. These two variants are largely distinguished by providers for Indy Network and AnonCreds support. The Standard variant is recommended for new projects. Migration from an Indy based image (whether the new Indy image variant or the original BC Gov images) to the Standard image is outside of the scope of this document.

The ACA-Py images built by this project are tagged to indicate which of the above variants they are. Other tags may also be generated for use by developers.

Below is a table of all generated images and their tags:

| Tag | Variant | Example | Description |
| --- | --- | --- | --- |
| py3.9-X.Y.Z | Standard | py3.9-0.7.4 | Standard image variant built on Python 3.9 for ACA-Py version X.Y.Z |
| py3.10-X.Y.Z | Standard | py3.10-0.7.4 | Standard image variant built on Python 3.10 for ACA-Py version X.Y.Z |
"},{"location":"deploying/ContainerImagesAndGithubActions/#image-comparison","title":"Image Comparison","text":"

There are several key differences that should be noted between the two image variants and the BC Gov ACA-Py images.

  • Standard Image
      • Based on slim variant of Debian
      • Does NOT include libindy
      • Default user is aries
      • Uses container's system python environment rather than pyenv
      • Askar and Indy Shared libraries are installed as dependencies of ACA-Py through pip from pre-compiled binaries included in the python wrappers
      • Built from repo contents
  • Indy Image (no longer produced but included here for clarity)
      • Based on slim variant of Debian
      • Built from multi-stage build step (indy-base in the Dockerfile) which includes Indy dependencies; this could be replaced with an explicit indy-python image from the Indy SDK repo
      • Includes libindy but does NOT include the Indy CLI
      • Default user is indy
      • Uses container's system python environment rather than pyenv
      • Askar and Indy Shared libraries are installed as dependencies of ACA-Py through pip from pre-compiled binaries included in the python wrappers
      • Built from repo contents
      • Includes Indy postgres storage plugin
  • bcgovimages/aries-cloudagent
      • (Usually) based on Ubuntu
      • Based on von-image
      • Default user is indy
      • Includes libindy and Indy CLI
      • Uses pyenv
      • Askar and Indy Shared libraries built from source
      • Built from ACA-Py python package uploaded to PyPI
      • Includes Indy postgres storage plugin
"},{"location":"deploying/ContainerImagesAndGithubActions/#github-actions","title":"Github Actions","text":"
  • Tests (.github/workflows/tests.yml) - A reusable workflow that runs tests for the Standard ACA-Py variant for a given python version.
  • PR Tests (.github/workflows/pr-tests.yml) - Run on pull requests; runs tests for the Standard ACA-Py variant for a \"default\" python version. Check this workflow for the current default python version in use.
  • Nightly Tests (.github/workflows/nightly-tests.yml) - Run nightly; runs tests for the Standard ACA-Py variant for all currently supported python versions. Check this workflow for the set of currently supported versions in use.
  • Publish (.github/workflows/publish.yml) - Run on new release published or when manually triggered; builds and pushes the Standard ACA-Py variant to the Github Container Registry.
  • Integration Tests (.github/workflows/integrationtests.yml) - Run on pull requests (to the hyperledger fork only); runs BDD integration tests.
  • Black Format (.github/workflows/blackformat.yml) - Run on pull requests; checks formatting of files modified by the PR.
  • CodeQL (.github/workflows/codeql.yml) - Run on pull requests; performs CodeQL analysis.
  • Python Publish (.github/workflows/pythonpublish.yml) - Run on release created; publishes ACA-Py python package to PyPI.
  • PIP Audit (.github/workflows/pipaudit.yml) - Run when manually triggered; performs pip audit.
"},{"location":"deploying/Databases/","title":"Databases","text":"

Your wallet stores secret keys, connections, and other information. You have different choices for where to store this information. The wallet supports two different databases, SQLite and PostgreSQL.

"},{"location":"deploying/Databases/#sqlite","title":"SQLite","text":"

If the wallet is configured the default way (e.g. in demo-args.yaml, without explicit wallet-storage), a SQLite database file is used.

# demo-args.yaml\nwallet-type: indy\nwallet-name: wallet\nwallet-key: wallet-password\n

For this configuration, a folder called wallet will be created which contains a file called sqlite.db.

"},{"location":"deploying/Databases/#postgresql","title":"PostgreSQL","text":"

The wallet can be configured to use PostgreSQL as storage.

# demo-args.yaml\nwallet-type: indy\nwallet-name: wallet\nwallet-key: wallet-password\n\nwallet-storage-type: postgres_storage\nwallet-storage-config: \"{\\\"url\\\":\\\"db:5432\\\",\\\"wallet_scheme\\\":\\\"DatabasePerWallet\\\"}\"\nwallet-storage-creds: \"{\\\"account\\\":\\\"postgres\\\",\\\"password\\\":\\\"mysecretpassword\\\",\\\"admin_account\\\":\\\"postgres\\\",\\\"admin_password\\\":\\\"mysecretpassword\\\"}\"\n

In this case the hostname for the database is db on port 5432.

A docker-compose file could look like this:

# docker-compose.yml\nversion: '3'\nservices:\n  # acapy ...\n  # database\n  db:\n    image: postgres:10\n    environment:\n      POSTGRES_PASSWORD: mysecretpassword\n      POSTGRES_USER: postgres\n      POSTGRES_DB: postgres\n    ports:\n      - \"5432:5432\"\n
"},{"location":"deploying/IndySDKtoAskarMigration/","title":"Migrating from Indy SDK to Askar","text":"

This document summarizes why the Indy SDK is being deprecated, its replacement (Aries Askar and the \"shared components\"), how to use Aries Askar in a new ACA-Py deployment, and the migration process for an ACA-Py instance that is already deployed using the Indy SDK.

"},{"location":"deploying/IndySDKtoAskarMigration/#the-time-has-come-archiving-indy-sdk","title":"The Time Has Come! Archiving Indy SDK","text":"

Yes, it\u2019s time. Indy SDK needs to be archived! In this article we\u2019ll explain why this change is needed, why Aries Askar is a faster, better replacement, and how to transition your Indy SDK-based ACA-Py deployment to Askar as soon as possible.

"},{"location":"deploying/IndySDKtoAskarMigration/#history-of-indy-sdk","title":"History of Indy SDK","text":"

Indy SDK has been the basis of Hyperledger Indy and Hyperledger Aries clients accessing Indy networks for a long time. It has done an excellent job at exactly what you might imagine: being the SDK that enables clients to leverage the capabilities of a Hyperledger Indy ledger.

Its continued use has been all the more remarkable given that the last published release of the Indy SDK was in 2020. This speaks to the quality of the implementation \u2014 it just kept getting used, doing what it was supposed to do, and without major bugs, vulnerabilities or demands for new features.

However, the architecture of Indy SDK has critical bottlenecks. Most notably, as load increases, Indy SDK performance drops. And with Indy-based ecosystems flourishing and loads exponentially increasing, this means the Aries/Indy community needed to make a change.

"},{"location":"deploying/IndySDKtoAskarMigration/#aries-askar-and-the-shared-components","title":"Aries Askar and the Shared Components","text":"

The replacement for the Indy SDK is a set of four components, each replacing a part of Indy SDK. (In retrospect, Indy SDK ought to have been split up this way from the start.)

The components are:

  1. Aries Askar: the replacement for the \u201cindy-wallet\u201d part of Indy SDK. Askar is a key management service, handling the creation and use of private keys managed by Aries agents. It\u2019s also the secure storage for DIDs, verifiable credentials, and data used by issuers of verifiable credentials for signing. As the Aries moniker indicates, Askar is suitable for use with any Aries agent, and for managing any keys, whether for use with Indy or any other Verifiable Data Registry (VDR).
  2. Indy VDR: the interface to publishing to and retrieving data from Hyperledger Indy networks. Indy VDR is scoped at the appropriate level for any client application using Hyperledger Indy networks.
  3. CredX: a Rust implementation of AnonCreds that evolved from the Indy SDK implementation. CredX is within the indy-shared-rs repository. It has significant performance enhancements over the version in the Indy SDK, particularly for Issuers.
  4. Hyperledger AnonCreds: a newer implementation of AnonCreds that is \u201cledger-agnostic\u201d \u2014 it can be used with Hyperledger Indy and any other suitable verifiable data registry.

In ACA-Py, we are currently using CredX, but will be moving to Hyperledger AnonCreds soon.

If you\u2019re involved in the community, you\u2019ll know we\u2019ve been planning this replacement for almost three years. The first release of the Aries Askar and related components was in 2021. At the end of 2022 there was a concerted effort to eliminate the Indy SDK by creating migration scripts, and removing the Indy SDK from various tools in the community (the Indy CLI, the Indy Test Automation pipeline, and so on). This step is to finish the task.

"},{"location":"deploying/IndySDKtoAskarMigration/#performance","title":"Performance","text":"

What\u2019s the performance and stability of the replacement? In short, it\u2019s dramatically better. Overall Aries Askar performance is faster, and as the load increases the performance remains constant. Combined with added flexibility and modularization, the community is very positive about the change.

"},{"location":"deploying/IndySDKtoAskarMigration/#new-aca-py-deployments","title":"New ACA-Py Deployments","text":"

If you are new to ACA-Py, the instructions are easy. Use Aries Askar and the shared components from the start. To do that, simply make sure that you are using the --wallet-type askar configuration parameter. You will automatically be using all of the shared components.

As of release 0.9.0, you will get a deprecation warning when you start ACA-Py with the Indy SDK. Switch to Aries Askar to eliminate that warning.

"},{"location":"deploying/IndySDKtoAskarMigration/#migrating-existing-indy-sdk-aca-py-deployments-to-askar","title":"Migrating Existing Indy SDK ACA-Py Deployments to Askar","text":"

If you have an existing deployment, then in changing the --wallet-type configuration setting, your database must be migrated from the Indy SDK format to the Aries Askar format. To facilitate the migration, an Indy SDK to Askar migration script has been published in the aries-acapy-tools repository. There is lots of information in that repository about the migration tool and how to use it. The following is a summary of the steps you will have to perform. Of course, all deployments are a little (or a lot!) different, and your exact steps will depend on where and how you have deployed ACA-Py.

Note that in these steps you will have to take your ACA-Py instance offline, so scheduling the maintenance must be a part of your migration plan. You will also want to script the entire process so that downtime and risk of manual mistakes are minimized.

We hope that you have one or two test environments (e.g., Dev and Test) to run through these steps before upgrading your production deployment. As well, it is good if you can make a copy of your production database and test the migration on the real (copy) database before the actual upgrade.

  • Prepare a way to run the Askar Upgrade script from the aries-acapy-tools repository. For example, you might want to prepare a container that you can run in the same environment that you run ACA-Py (e.g., within Kubernetes or OpenShift).
  • Shut down your ACA-Py instance.
  • Back up the existing wallet using the usual tools you have for backing up the database.
  • If you are running in a cloud native environment such as Kubernetes, deploy the Askar Upgrade container and, as needed, update the network policies to allow the Askar Upgrade container to connect with the wallet database.
  • Run the askar-upgrade script. For example:
askar-upgrade \\\n  --strategy dbpw \\\n  --uri postgres://<username>:<password>@<hostname>:<port>/<dbname> \\\n  --wallet-name <wallet name> \\\n  --wallet-key <wallet key>\n
  • Switch the ACA-Py instance's --wallet-type configuration setting to askar
  • Start up the ACA-Py instances.
  • Trouble? Restore the initial database and revert the --wallet-type change to roll back to the pre-migration state.
  • Check the data.
  • Test the deployment.

It is very important that the Askar Upgrade script has direct access to the database. In our very first upgrade attempt, we ran the Askar Upgrade script from a container running outside of our container orchestration platform (OpenShift) using port forwarding. The script ran EXTREMELY slowly, taking literally hours before we finally stopped it. Once we ran the script inside the OpenShift environment, the script ran (for the same database) in about 7 minutes. The entire app downtime was less than 20 minutes.

"},{"location":"deploying/IndySDKtoAskarMigration/#questions","title":"Questions?","text":"

If you have questions, comments, or suggestions about the upgrade process, please use the Aries Cloud Agent Python channel on Hyperledger Discord, or submit a GitHub issue to the ACA-Py repository.

"},{"location":"deploying/Poetry/","title":"Poetry Cheat Sheet for Developers","text":""},{"location":"deploying/Poetry/#introduction-to-poetry","title":"Introduction to Poetry","text":"

Poetry is a dependency management and packaging tool for Python that aims to simplify and enhance the development process. It offers features for managing dependencies, virtual environments, and building and publishing Python packages.

"},{"location":"deploying/Poetry/#virtual-environments-with-poetry","title":"Virtual Environments with Poetry","text":"

Poetry manages virtual environments for your projects to ensure clean and isolated development environments.

"},{"location":"deploying/Poetry/#creating-a-virtual-environment","title":"Creating a Virtual Environment","text":"
poetry install\n
"},{"location":"deploying/Poetry/#activating-the-virtual-environment","title":"Activating the Virtual Environment","text":"
poetry shell\n

Alternatively, you can source the environment settings in the current shell:

source $(poetry env info --path)/bin/activate\n

For PowerShell users this would be:

& ((poetry env info --path) + \"\\Scripts\\activate.ps1\")\n
"},{"location":"deploying/Poetry/#deactivating-the-virtual-environment","title":"Deactivating the Virtual Environment","text":"

When using poetry shell

exit\n

When using the activate script

deactivate\n
"},{"location":"deploying/Poetry/#dependency-management","title":"Dependency Management","text":"

Poetry uses the pyproject.toml file to manage dependencies. Add new dependencies to this file and update existing ones as needed.

"},{"location":"deploying/Poetry/#adding-a-dependency","title":"Adding a Dependency","text":"
poetry add package-name\n
"},{"location":"deploying/Poetry/#adding-a-development-dependency","title":"Adding a Development Dependency","text":"
poetry add --dev package-name\n
"},{"location":"deploying/Poetry/#removing-a-dependency","title":"Removing a Dependency","text":"
poetry remove package-name\n
"},{"location":"deploying/Poetry/#updating-dependencies","title":"Updating Dependencies","text":"
poetry update\n
"},{"location":"deploying/Poetry/#running-tasks-with-poetry","title":"Running Tasks with Poetry","text":"

Poetry provides a way to run scripts and commands without activating the virtual environment explicitly.

"},{"location":"deploying/Poetry/#running-a-command","title":"Running a Command","text":"
poetry run command-name\n
"},{"location":"deploying/Poetry/#running-a-script","title":"Running a Script","text":"
poetry run python script.py\n
"},{"location":"deploying/Poetry/#building-and-publishing-with-poetry","title":"Building and Publishing with Poetry","text":"

Poetry streamlines the process of building and publishing Python packages.

"},{"location":"deploying/Poetry/#building-the-package","title":"Building the Package","text":"
poetry build\n
"},{"location":"deploying/Poetry/#publishing-the-package","title":"Publishing the Package","text":"
poetry publish\n
"},{"location":"deploying/Poetry/#using-extras","title":"Using Extras","text":"

Extras allow you to specify additional dependencies based on project requirements.

"},{"location":"deploying/Poetry/#installing-with-extras","title":"Installing with Extras","text":"
poetry install -E extras-name\n

for example

poetry install -E \"askar bbs indy\"\n
"},{"location":"deploying/Poetry/#managing-development-dependencies","title":"Managing Development Dependencies","text":"

Development dependencies are useful for tasks like testing, linting, and documentation generation.

"},{"location":"deploying/Poetry/#installing-development-dependencies","title":"Installing Development Dependencies","text":"
poetry install --with dev\n
"},{"location":"deploying/Poetry/#additional-resources","title":"Additional Resources","text":"
  • Poetry Documentation
  • PyPI: The Python Package Index
"},{"location":"deploying/RedisPlugins/","title":"ACA-Py Redis Plugins","text":""},{"location":"deploying/RedisPlugins/#aries-acapy-plugin-redis-events-redis_queue","title":"aries-acapy-plugin-redis-events redis_queue","text":"

It provides mechanisms to persist both inbound and outbound messages using Redis, deliver messages and webhooks, and dispatch events.

More details can be found here.

"},{"location":"deploying/RedisPlugins/#redis-queue-configuration-yaml","title":"Redis Queue configuration yaml","text":"
redis_queue:\n  connection: \n    connection_url: \"redis://default:test1234@172.28.0.103:6379\"\n\n  ### For Inbound ###\n  inbound:\n    acapy_inbound_topic: \"acapy_inbound\"\n    acapy_direct_resp_topic: \"acapy_inbound_direct_resp\"\n\n  ### For Outbound ###\n  outbound:\n    acapy_outbound_topic: \"acapy_outbound\"\n    mediator_mode: false\n\n  ### For Event ###\n  event:\n    event_topic_maps:\n      ^acapy::webhook::(.*)$: acapy-webhook-$wallet_id\n      ^acapy::record::([^:]*)::([^:]*)$: acapy-record-with-state-$wallet_id\n      ^acapy::record::([^:])?: acapy-record-$wallet_id\n      acapy::basicmessage::received: acapy-basicmessage-received\n      acapy::problem_report: acapy-problem_report\n      acapy::ping::received: acapy-ping-received\n      acapy::ping::response_received: acapy-ping-response_received\n      acapy::actionmenu::received: acapy-actionmenu-received\n      acapy::actionmenu::get-active-menu: acapy-actionmenu-get-active-menu\n      acapy::actionmenu::perform-menu-action: acapy-actionmenu-perform-menu-action\n      acapy::keylist::updated: acapy-keylist-updated\n      acapy::revocation-notification::received: acapy-revocation-notification-received\n      acapy::revocation-notification-v2::received: acapy-revocation-notification-v2-received\n      acapy::forward::received: acapy-forward-received\n    event_webhook_topic_maps:\n      acapy::basicmessage::received: basicmessages\n      acapy::problem_report: problem_report\n      acapy::ping::received: ping\n      acapy::ping::response_received: ping\n      acapy::actionmenu::received: actionmenu\n      acapy::actionmenu::get-active-menu: get-active-menu\n      acapy::actionmenu::perform-menu-action: perform-menu-action\n      acapy::keylist::updated: keylist\n    deliver_webhook: true\n
  • redis_queue.connection.connection_url: This is required and is expected in redis://{username}:{password}@{host}:{port} format.
  • redis_queue.inbound.acapy_inbound_topic: This is the topic prefix for the inbound message queues. The recipient key of the message is also included in the complete topic name. The final topic will be in the following format: acapy_inbound_{recip_key}
  • redis_queue.inbound.acapy_direct_resp_topic: Queue topic name for direct responses to inbound messages.
  • redis_queue.outbound.acapy_outbound_topic: Queue topic name for the outbound messages. Used by Deliverer service to deliver the payloads to specified endpoint.
  • redis_queue.outbound.mediator_mode: Set to true, if using Redis as a http bridge when setting up a mediator agent. By default, it is set to false.
  • event.event_topic_maps: Map of ACA-Py event topic patterns to queue topics (a sketch of how these maps are applied follows this list)
  • event.event_webhook_topic_maps: Map of ACA-Py event topics to webhook topics
  • event.deliver_webhook: When set to true, this will deliver webhooks to endpoints specified in admin.webhook_urls. By default, this is set to true.
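
Loosely, the plugin applies these maps by regular-expression matching on the event topic; a simplified sketch of the idea (the real implementation also substitutes variables such as $wallet_id from the event metadata):

import re\n\nevent_topic_maps = {\n    r\"^acapy::webhook::(.*)$\": \"acapy-webhook-$wallet_id\",\n    r\"acapy::basicmessage::received\": \"acapy-basicmessage-received\",\n}\n\ndef map_topic(event_topic, wallet_id):\n    for pattern, queue_topic in event_topic_maps.items():\n        if re.search(pattern, event_topic):\n            return queue_topic.replace(\"$wallet_id\", wallet_id)\n    return None\n\nprint(map_topic(\"acapy::webhook::connections\", \"wallet1\"))  # acapy-webhook-wallet1\n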
"},{"location":"deploying/RedisPlugins/#redis-plugin-usage","title":"Redis Plugin Usage","text":""},{"location":"deploying/RedisPlugins/#redis-plugin-with-docker","title":"Redis Plugin With Docker","text":"

Running the plugin with docker is simple. An example docker-compose.yml file is available which launches both ACA-Py with redis and an accompanying Redis cluster.

docker-compose up --build -d\n

More details can be found here.

"},{"location":"deploying/RedisPlugins/#without-docker","title":"Without Docker","text":"

Installation

pip install git+https://github.com/bcgov/aries-acapy-plugin-redis-events.git\n

Start up ACA-Py with the redis_queue plugin loaded

docker network create --subnet=172.28.0.0/24 `network_name`\nexport REDIS_PASSWORD=\" ... As specified in redis_cluster.conf ... \"\nexport NETWORK_NAME=\"`network_name`\"\naca-py start \\\n    --plugin redis_queue.v1_0.events \\\n    --plugin-config plugins-config.yaml \\\n    -it redis_queue.v1_0.inbound redis 0 -ot redis_queue.v1_0.outbound\n    # ... the remainder of your startup arguments\n

Regardless of the options above, you will need to start up a deliverer and a relay/mediator service as a bridge to receive inbound messages. Consider the following when building your docker-compose file, which should also start up your redis cluster:

  • Relay + Deliverer

    relay:\n    image: redis-relay\n    build:\n        context: ..\n        dockerfile: redis_relay/Dockerfile\n    ports:\n        - 7001:7001\n        - 80:80\n    environment:\n        - REDIS_SERVER_URL=redis://default:test1234@172.28.0.103:6379\n        - TOPIC_PREFIX=acapy\n        - STATUS_ENDPOINT_HOST=0.0.0.0\n        - STATUS_ENDPOINT_PORT=7001\n        - STATUS_ENDPOINT_API_KEY=test_api_key_1\n        - INBOUND_TRANSPORT_CONFIG=[[\"http\", \"0.0.0.0\", \"80\"]]\n        - TUNNEL_ENDPOINT=http://relay-tunnel:4040\n        - WAIT_BEFORE_HOSTS=15\n        - WAIT_HOSTS=redis-node-3:6379\n        - WAIT_HOSTS_TIMEOUT=120\n        - WAIT_SLEEP_INTERVAL=1\n        - WAIT_HOST_CONNECT_TIMEOUT=60\n    depends_on:\n        - redis-cluster\n        - relay-tunnel\n    networks:\n        - acapy_default\ndeliverer:\n    image: redis-deliverer\n    build:\n        context: ..\n        dockerfile: redis_deliverer/Dockerfile\n    ports:\n        - 7002:7002\n    environment:\n        - REDIS_SERVER_URL=redis://default:test1234@172.28.0.103:6379\n        - TOPIC_PREFIX=acapy\n        - STATUS_ENDPOINT_HOST=0.0.0.0\n        - STATUS_ENDPOINT_PORT=7002\n        - STATUS_ENDPOINT_API_KEY=test_api_key_2\n        - WAIT_BEFORE_HOSTS=15\n        - WAIT_HOSTS=redis-node-3:6379\n        - WAIT_HOSTS_TIMEOUT=120\n        - WAIT_SLEEP_INTERVAL=1\n        - WAIT_HOST_CONNECT_TIMEOUT=60\n    depends_on:\n        - redis-cluster\n    networks:\n        - acapy_default\n
  • Mediator + Deliverer

    mediator:\n    image: acapy-redis-queue\n    build:\n        context: ..\n        dockerfile: docker/Dockerfile\n    ports:\n        - 3002:3001\n    depends_on:\n        - deliverer\n    volumes:\n        - ./configs:/home/indy/configs:z\n        - ./acapy-endpoint.sh:/home/indy/acapy-endpoint.sh:z\n    environment:\n        - WAIT_BEFORE_HOSTS=15\n        - WAIT_HOSTS=redis-node-3:6379\n        - WAIT_HOSTS_TIMEOUT=120\n        - WAIT_SLEEP_INTERVAL=1\n        - WAIT_HOST_CONNECT_TIMEOUT=60\n        - TUNNEL_ENDPOINT=http://mediator-tunnel:4040\n    networks:\n        - acapy_default\n    entrypoint: /bin/sh -c '/wait && ./acapy-endpoint.sh poetry run aca-py \"$$@\"' --\n    command: start --arg-file ./configs/mediator.yml\n\ndeliverer:\n    image: redis-deliverer\n    build:\n        context: ..\n        dockerfile: redis_deliverer/Dockerfile\n    depends_on:\n        - redis-cluster\n    ports:\n        - 7002:7002\n    environment:\n        - REDIS_SERVER_URL=redis://default:test1234@172.28.0.103:6379\n        - TOPIC_PREFIX=acapy\n        - STATUS_ENDPOINT_HOST=0.0.0.0\n        - STATUS_ENDPOINT_PORT=7002\n        - STATUS_ENDPOINT_API_KEY=test_api_key_2\n        - WAIT_BEFORE_HOSTS=15\n        - WAIT_HOSTS=redis-node-3:6379\n        - WAIT_HOSTS_TIMEOUT=120\n        - WAIT_SLEEP_INTERVAL=1\n        - WAIT_HOST_CONNECT_TIMEOUT=60\n    networks:\n        - acapy_default\n

Both relay and mediator demos are also available.

"},{"location":"deploying/RedisPlugins/#aries-acapy-cache-redis-redis_cache","title":"aries-acapy-cache-redis redis_cache","text":"

ACA-Py uses a modular cache layer to store key-value pairs of data. The purpose of this plugin is to allow ACA-Py to use Redis as the storage medium for its caching needs.

More details can be found here.

"},{"location":"deploying/RedisPlugins/#redis-cache-plugin-configuration-yaml","title":"Redis Cache Plugin configuration yaml","text":"
redis_cache:\n  connection: \"redis://default:test1234@172.28.0.103:6379\"\n  max_connection: 50\n  credentials:\n    username: \"default\"\n    password: \"test1234\"\n  ssl:\n    cacerts: ./ca.crt\n
  • redis_cache.connection: This is required and is expected in redis://{username}:{password}@{host}:{port} format.
  • redis_cache.max_connection: Maximum number of redis pool connections. Default: 50
  • redis_cache.credentials.username: Redis instance username
  • redis_cache.credentials.password: Redis instance password
  • redis_cache.ssl.cacerts: Path to the CA certificate file used to secure the Redis connection (TLS)
"},{"location":"deploying/RedisPlugins/#redis-cache-usage","title":"Redis Cache Usage","text":""},{"location":"deploying/RedisPlugins/#redis-cache-using-docker","title":"Redis Cache Using Docker","text":"
  • Running the plugin with Docker is simple and straightforward. There is an example docker-compose.yml file in the root of the project that launches both ACA-Py and an accompanying Redis instance. Running it is as simple as:

    docker-compose up --build -d\n
  • To launch ACA-Py with an accompanying Redis cluster of 6 nodes (3 primaries and 3 replicas), refer to the example docker-compose.cluster.yml and run the following:

    Note: the cluster requires an external Docker network with the specified subnet.

    docker network create --subnet=172.28.0.0/24 `network_name`\nexport REDIS_PASSWORD=\" ... As specified in redis_cluster.conf ... \"\nexport NETWORK_NAME=\"`network_name`\"\ndocker-compose -f docker-compose.cluster.yml up --build -d\n
"},{"location":"deploying/RedisPlugins/#redis-cache-without-docker","title":"Redis Cache Without Docker","text":"

Installation

pip install git+https://github.com/Indicio-tech/aries-acapy-cache-redis.git\n

Start up ACA-Py with the redis_cache plugin loaded

aca-py start \\\n    --plugin acapy_cache_redis.v0_1 \\\n    --plugin-config plugins-config.yaml \\\n    # ... the remainder of your startup arguments\n

or

aca-py start \\\n    --plugin acapy_cache_redis.v0_1 \\\n    --plugin-config-value \"redis_cache.connection=redis://redis-host:6379/0\" \\\n    --plugin-config-value \"redis_cache.max_connections=90\" \\\n    --plugin-config-value \"redis_cache.credentials.username=username\" \\\n    --plugin-config-value \"redis_cache.credentials.password=password\" \\\n    # ... the remainder of your startup arguments\n
"},{"location":"deploying/RedisPlugins/#redis-cluster","title":"Redis Cluster","text":"

If you start up a Redis cluster and an ACA-Py agent loaded with the redis_queue plugin, the redis_cache plugin, or both, then during plugin initialization an instance of redis.asyncio.RedisCluster is bound onto the root_profile. Other plugins then have access to this Redis client, which is done for efficiency and to avoid duplicating resources.
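
As a minimal sketch (assuming the standard ACA-Py injector pattern and the binding described above; the fallback URL is a placeholder), another plugin could look up the shared client like this:

from redis.asyncio import RedisCluster\n\nasync def get_shared_redis(profile):\n    # The redis_queue/redis_cache plugins bind one RedisCluster instance\n    # on the root profile during initialization; reuse it if present.\n    client = profile.inject_or(RedisCluster)\n    if client is None:\n        # Placeholder fallback when no plugin has bound a client\n        client = RedisCluster.from_url(\"redis://default:test1234@172.28.0.103:6379\")\n    return client\n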

"},{"location":"deploying/UpgradingACA-Py/","title":"Upgrading ACA-Py Data","text":"

Some releases of ACA-Py may be improved by, or even require, an upgrade when moving to a new version. Such changes are documented in the CHANGELOG.md, and those with ACA-Py deployments should take note of those upgrades. This document summarizes the upgrade system in ACA-Py.

"},{"location":"deploying/UpgradingACA-Py/#version-information-and-automatic-upgrades","title":"Version Information and Automatic Upgrades","text":"

The file version.py contains the current version of a running instance of ACA-Py. In addition, a record is made in the ACA-Py secure storage (database) about the \"most recently upgraded\" version. When deploying a new version of ACA-Py, the version.py value will be higher than the version in secure storage. When that happens, an upgrade is executed, and on successful completion, the version is updated in secure storage to match what is in version.py.

Upgrades are defined in the Upgrade Definition YML file. For a given version listed in the file, the corresponding entry lists the actions required when upgrading from a previous version. If a version is not listed in the file, there is no upgrade defined for that version from its immediate predecessor version.
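
For illustration only - the field names here are recalled from one version of the actual file and may have changed, so consult the file itself for the authoritative format - a version entry looks roughly like this, pairing a version with the actions to run:

v0.7.2:\n  resave_records:\n    base_record_paths:\n      - \"aries_cloudagent.connections.models.conn_record.ConnRecord\"\n  update_existing_records: true\n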

Once an upgrade is identified as needed, the process is:

  • Collect (if any) the actions to be taken to get from the version recorded in secure storage to the current version.py
  • Execute the actions from oldest to newest.
  • If the same action is collected more than once (e.g., \"Resave the Connection Records\" is defined for two different versions), perform the action only once.
  • Store the current ACA-Py version (from version.py) in the secure storage database.
"},{"location":"deploying/UpgradingACA-Py/#forced-offline-upgrades","title":"Forced Offline Upgrades","text":"

In some cases, it may be necessary to do an offline upgrade, where ACA-Py is taken offline temporarily, the database upgraded explicitly, and then ACA-Py re-deployed as normal. As yet, we do not have any use cases for this, but those deploying ACA-Py should be aware of this possibility. For example, we may at some point need an upgrade that MUST NOT be executed by more than one ACA-Py instance. In that case, a "normal" upgrade could be dangerous for deployments on container orchestration platforms like Kubernetes.

If the Maintainers of ACA-Py recognize a case where ACA-Py must be upgraded while offline, a new Upgrade feature will be added that will prevent the \"auto upgrade\" process from executing. See Issue 2201 and Pull Request 2204 for the status of that feature.

Those deploying ACA-Py upgrades for production installations (forced offline or not) should check in each CHANGELOG.md release entry about what upgrades (if any) will be run when upgrading to that version, and consider how they want those upgrades to run in their ACA-Py installation. In most cases, simply deploying the new version should be OK. If the number of records to be upgraded is high (such as a \"resave connections\" upgrade to a deployment with many, many connections), you may want to do a test upgrade offline first, to see if there is likely to be a service disruption during the upgrade. Plan accordingly!

"},{"location":"deploying/UpgradingACA-Py/#tagged-upgrades","title":"Tagged upgrades","text":"

Upgrades are defined in the Upgrade Definition YML file. In addition to specifying upgrade actions by version, they can also be specified by named tags. Unlike version-based upgrades, where all applicable version-based actions are performed in sorted version order, with named tags only the actions corresponding to the provided tags are performed. Note: --force-upgrade is required when running a named-tag-based upgrade (i.e., when providing --named-tag).

Tags are specified in the YML file as below:

fix_issue_rev_reg:\n  fix_issue_rev_reg_records: true\n

Example:

 ./scripts/run_docker upgrade --force-upgrade --named-tag fix_issue_rev_reg\n\n# In case, running multiple tags [say test1 & test2]:\n ./scripts/run_docker upgrade --force-upgrade --named-tag test1 --named-tag test2\n
"},{"location":"deploying/UpgradingACA-Py/#subwallet-upgrades","title":"Subwallet upgrades","text":"

With multitenancy enabled, there is a subwallet associated with each tenant profile, so those subwallets need to be upgraded in addition to the base wallet associated with the root profile.

There are two options for performing such upgrades:

  • --upgrade-all-subwallets

This will apply the upgrade steps to all subwallets (tenant profiles) and the base wallet (root profile).

  • --upgrade-subwallet

This will apply the upgrade steps to the specified subwallets (identified by wallet id) and the base wallet.

Note: this option can be specified multiple times to upgrade several subwallets; an example follows.
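
For example, following the pattern of the tagged-upgrade commands above (wallet ids are placeholders):

# Upgrade all subwallets and the base wallet\n ./scripts/run_docker upgrade --force-upgrade --upgrade-all-subwallets\n\n# Upgrade two specific subwallets (and the base wallet)\n ./scripts/run_docker upgrade --force-upgrade \\\n    --upgrade-subwallet <wallet_id_1> --upgrade-subwallet <wallet_id_2>\n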

"},{"location":"deploying/UpgradingACA-Py/#exceptions","title":"Exceptions","text":"

There are a couple of upgrade exception conditions to consider, as outlined in the following sections.

"},{"location":"deploying/UpgradingACA-Py/#no-version-in-secure-storage","title":"No version in secure storage","text":"

Versions prior to ACA-Py 0.8.1 did not automatically populate the secure storage \"version\" record. That only occurred if an upgrade was explicitly executed. As of ACA-Py 0.8.1, the version record is added immediately after the secure storage database is created. If you are upgrading to ACA-Py 0.8.1 or later, and there is no version record in the secure storage, ACA-Py will assume you are running version 0.7.5, and execute the upgrades from version 0.7.5 to the current version. The choice of 0.7.5 as the default is safe because the same upgrades will be run on any version of ACA-Py up to and including 0.7.5, as can be seen in the Upgrade Definition YML file. Thus, even if you are really upgrading from (for example) 0.6.2, the same upgrades are needed as from 0.7.5 to a post-0.8.1 version.

"},{"location":"deploying/UpgradingACA-Py/#forcing-an-upgrade","title":"Forcing an upgrade","text":"

If you need to force an upgrade from a given version of ACA-Py, a pair of configuration options can be used together. If you specify "--from-version <ver>" and "--force-upgrade", the --from-version version will override what is found (or not) in secure storage, and the upgrade will be from that version to the current one, as sketched below. For example, if you have "0.8.1" in your secure storage version record, and you know that the upgrade for version 0.8.1 has not been executed, you can use the parameters --from-version v0.7.5 --force-upgrade to force the upgrade the next time an ACA-Py instance starts. However, given the few upgrades defined prior to version 0.8.1, and the "no version in secure storage" handling, it is unlikely this capability will ever be needed. We expect to deprecate and remove these options in future (post-0.8.1) ACA-Py versions.
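
A minimal sketch, following the startup examples earlier in this document:

aca-py start \\\n    --from-version v0.7.5 \\\n    --force-upgrade \\\n    # ... the remainder of your startup arguments\n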

"},{"location":"deploying/deploymentModel/","title":"Deployment Model","text":""},{"location":"deploying/deploymentModel/#aries-cloud-agent-python-aca-py-deployment-model","title":"Aries Cloud Agent-Python (ACA-Py) - Deployment Model","text":"

This document is a \"concept of operations\" for an instance of an Aries cloud agent deployed from the primary artifact (a PyPi package) produced by this repo. In such a deployment there are always two components - a configured agent itself, and a controller that injects into that agent the business rules for the particular agent instance (see diagram).

The deployed agent messages with other agents via DIDComm protocols, and as events associated with those messages occur, sends webhook HTTP notifications to the controller. The agent also exposes for the controller's exclusive use an HTTP API covering all of the administrative handlers for those events. The controller receives the notifications from the agent, decides (with business rules - possibly by asking a person using a UI) how to respond to the event, and calls back to the agent via the HTTP API. Of course, the controller may also initiate events (e.g. messaging another agent) by calling that same API.

The following is an example of the interactions involved in creating a connection using the DIDComm "Establish Connection" protocol. The controller requests a connection invitation from the agent (via the administrative API), and receives one back. The controller provides it to another agent (perhaps by displaying it in a QR code). Shortly after, the agent receives a DIDComm "Connection Request" message, which it passes to the controller. The controller decides to accept the connection and calls the API with instructions to the agent to send a "Connection Response" message to the other agent. Since the controller always wants to know with whom a connection has been created, the controller also sends instructions to the agent (via the API, of course) to send a request presentation message to the new connection. And so on... During the interactions, the agent tracks the state of the connections and the state of the protocol instances (threads). Likewise, the controller may also retain state - after all, it's an application that could do anything. A sketch of the controller side of the first step is shown below.
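
The following is a minimal, illustrative controller fragment for that first step, assuming a local agent with the Admin API on port 8031 and an API key (both placeholders):

import requests\n\nADMIN_URL = \"http://localhost:8031\"  # placeholder Admin API endpoint\nHEADERS = {\"X-API-Key\": \"my-api-key\"}  # placeholder key\n\n# Ask the agent to create a connection invitation to share with another agent\nresp = requests.post(\n    f\"{ADMIN_URL}/connections/create-invitation\", headers=HEADERS\n)\ninvitation = resp.json()[\"invitation\"]  # display as a QR code, send out of band, etc.\n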

Most developers will configure a "black box" instance of ACA-Py. They need to know how it works, the DIDComm protocols it supports, the events it will generate and the administrative API it exposes. However, they don't need to drill into and maintain the ACA-Py code. Such developers will build controller applications (basically, traditional web apps) that, at their simplest, use an HTTP interface to receive notifications from and send HTTP requests to the agent. It's the business logic implemented in, or accessed by, the controller that gives the deployment its personality and role.

Note: the ACA-Py agent is designed to be stateless, persisting connection and protocol state to storage (such as a Postgres database). As such, agents can be deployed to support horizontal scaling as necessary. Controllers can also be implemented to support horizontal scaling.

The sections below detail the internals of ACA-Py and its configurable elements, and the conceptual elements of a controller. There is no "Aries controller" repo to fork, as a controller is essentially just a web app. There are demos of using the elements in this repo, and several sample applications that you can use to get started on your own controller.

"},{"location":"deploying/deploymentModel/#aries-cloud-agent","title":"Aries Cloud Agent","text":"

Aries cloud agents implement services to manage the execution of DIDComm messaging protocols for interacting with other DIDComm agents, and expose an administrative HTTP API that supports a controller in directing how the agent should respond to messaging events. The agent relies on the controller to provide the business rules for handling the messaging events, and to initiate the execution of new DIDComm protocol instances. The internals of an ACA-Py instance are diagrammed below.

Instances of the Aries cloud agents are configured with the following sub-components:

  • Transport Plugins - pluggable transport-specific message sender/receiver modules that interact with other agents. Messages outside the plugins are transport-agnostic JSON structures. Current modules include HTTP and WebSockets. In the future, we might add ZMQ, SMTP and so on.
  • Conductor receives inbound messages from, and sends outbound messages to, the transport plugins. After internal processing, the conductor passes inbound messages to, and receives outbound messages from, the Dispatcher. In processing the messages, the conductor manages the message\u2019s protocol instance thread state, retrieving the state on inbound messages and saving the state on outbound messages. The conductor handles generic decorators in messages such as verifying and generating signatures on message data elements, internationalization and so on.
  • Dispatcher handles the distribution of messages to the DIDComm protocol message handlers and the responses received. The dispatcher passes to the conductor the thread state to be persisted and the message data (if any) to be sent out from the Aries cloud agent instance.
  • DIDComm Protocols - implement the DIDComm protocols supported by the agent instance, including the state object for the protocol, the DIDComm message handlers and the admin message handlers. Protocols are bundled as Python modules and loaded during agent deployment. Each protocol contributes the admin messages for the protocol to the controller REST interface. The protocols implement a number of events that invoke the controller via webhooks so that the controller's business logic can respond to the event.
  • Controller REST API - a dynamically generated REST API (with a Swagger/OpenAPI user interface) based on the set of DIDComm protocols included in the agent deployment. The controller, activated via the webhooks from the protocol DIDComm message handlers, controls the Aries cloud agent by calling the REST API endpoints that invoke the protocol admin message handlers.
  • Handler API - provides abstract interfaces to various handlers needed by the protocols and core Aries cloud agent components for accessing the secure storage (wallet), other storage, the ledger and so on. The API calls the handler implementations configured into the agent deployment.
  • Handler Plugins - are the handler implementations called from the Handler API. The plugins may be internal to the Agent (in the same process space) or could be external (for example, in other processes/containers).
  • Secure Storage Plugin - the Indy SDK is embedded in the Aries cloud agent and implements the default secure storage. An Aries cloud agent can be configured to use one of a number of indy-sdk storage implementations - in-memory, SQLite and Postgres at this time.
  • Ledger Interface Plugin - In the current Aries cloud agent implementation, the Indy SDK provides an interface to an Indy-based public ledger for verifiable credential protocols. In future, ledger implementations (including those other than Indy) might be moved into the DIDComm protocol modules to be included as needed within a configured Aries cloud agent instance based on the DIDComm protocols used by the agent.
"},{"location":"deploying/deploymentModel/#controller","title":"Controller","text":"

A controller provides the personality of an Aries cloud agent instance - the business logic (human, machine or rules driven) that drives the behaviour of the agent. The controller's "Business Logic" in a cloud agent could be built into the controller app, could be an integration back to an enterprise system, or could even be a user interface for an individual. In all cases, the business logic provides responses to agent events or initiates agent actions. A deployed controller talks to a single Aries cloud agent deployment and manages the configuration of that agent. Both can be configured and deployed to support horizontal scaling.

Generically, a controller is a web app invoked by HTTP webhook calls from its corresponding Aries cloud agent and invoking the DIDComm administration capabilities of the Aries cloud agent by calling the REST API exposed by that cloud agent. As well as responding to Aries cloud agent events, the controller initiates DIDComm protocol instances using the same REST API.

The controller and Aries cloud agent deployment MUST secure the HTTP interface between the two components. The interface provides the same HTTP integration between services as modern apps found in any enterprise today, and must be correspondingly secured.

A controller implements the following capabilities.

  • Initiator - provides a mechanism to initiate new DIDComm protocol instances. The initiator invokes the REST API exposed by the Aries cloud agent to initiate the creation of a DIDComm protocol instance. For example, a permit-issuing service uses this mechanism to issue a Verifiable Credential associated with the issuance of a new permit.
  • Responder - subscribes to and responds to events from the Aries cloud agent protocol message handlers, providing business-driven responses. The responder might respond immediately, or the event might cause a delay while the decision is determined, perhaps by sending the request to a person to decide. The controller may persist the event response state if the event is asynchronous - for example, when the event is passed to a person who may respond when they next use the web app.
  • Configuration - manages the controller configuration data and the configuration of the Aries cloud agent. Configuration in this context includes things like:
  • Credentials and Proof Requests to be Issued/Verified (respectively) by the Aries cloud agent.
  • The configuration of the webhook handler to which the responder subscribes.

While there are several examples of controllers, there is no \u201ccookie cutter\u201d repository to fork and customize. A controller is just a web service that receives HTTP requests (webhooks) and sends HTTP messages to the Aries cloud agent it controls via the REST API exposed by that agent.

"},{"location":"deploying/deploymentModel/#deployment","title":"Deployment","text":"

The Aries cloud agent CI pipeline configured into the repository generates a PyPi package as an artifact. Implementers will generally have a controller repository, possibly copied from an existing controller instance, that has the code (business logic) for the controller and the configuration (transports, handlers, DIDComm protocols, etc.) for the Aries cloud agent instance. In the most common scenario, the Aries cloud agent and controller instances will be deployed based on the artifacts (e.g. container images) generated from that controller repository. With the simple HTTP-based interface between the controller and Aries cloud agent, both components can be horizontally scaled as needed, with a load balancer between the components. The configuration of the Aries cloud agent to use the Postgres wallet supports enterprise scale agent deployments.

Current examples of deployed instances of Aries cloud agent and controllers include:

  • indy-email-verification - a web app Controller that sends an email to a given email address with an embedded DIDComm invitation and on establishment of a connection, offers and provides the connected agent with an email control verifiable credential.
  • iiwbook - a web app Controller that on creation of a DIDComm connection, requests a proof of email control, and then sends to the connection a verifiable credential proving attendance at IIW. In between the proof and issuance is a human approval step using a simple web-based UI that implements a request queue.
"},{"location":"design/AnoncredsW3CCompatibility/","title":"Supporting AnonCreds in W3C VC/VP Formats in Aries Cloud Agent Python","text":"

This design proposes to extend the Aries Cloud Agent Python (ACA-Py) to support Hyperledger AnonCreds credentials and presentations in the W3C Verifiable Credentials (VC) and Verifiable Presentations (VP) formats. The aim is to transition from the legacy AnonCreds format specified in Aries-Legacy-Method to the W3C VC format.

"},{"location":"design/AnoncredsW3CCompatibility/#overview","title":"Overview","text":"

The pre-requisites for the work are:

  • The availability of the AnonCreds RS library supporting the generation and processing of AnonCreds VCs in W3C VC format.
  • The availability of the AnonCreds RS library supporting the generation and verification of AnonCreds VPs in W3C VP format.
  • The availability of support in the AnonCreds RS Python Wrapper for the W3C VC/VP capabilities in AnonCreds RS.
  • Agreement on the Aries Issue Credential v2.0 and Present Proof v2.0 protocol attachment formats to use when issuing AnonCreds W3C VC format credentials, and when presenting AnonCreds W3C VP format presentations.
  • For issuing, use the (proposed) RFC 0809 VC-DI Attachments
  • For presenting, use the RFC 0510 DIF Presentation Exchange Attachments

As of 2024-01-15, these pre-requisites have been met.

"},{"location":"design/AnoncredsW3CCompatibility/#impacts-on-aca-py","title":"Impacts on ACA-Py","text":""},{"location":"design/AnoncredsW3CCompatibility/#issuer","title":"Issuer","text":"

Issuer support needs to be added for using the RFC 0809 VC-DI attachment format when sending Issue Credential v2.0 protocol offer and issue messages and when receiving request messages.

Related notes:

  • The Issue Credential v1.0 protocol will not be updated to support AnonCreds W3C VC format credentials.
  • Once an instance of the Issue Credential v2.0 protocol is started using RFC 0809 VC-DI format attachments, subsequent messages in the protocol MUST use RFC 0809 VC-DI attachments.
  • The ACA-Py maintainers are discussing the possibility of making pluggable the Issue Credential v2.0 and Present Proof v2.0 attachment formats, to simplify supporting additional formats, including RFC 0809 VC-DI.

A mechanism must be defined such that an Issuer controller can use the ACA-Py Admin API to initiate the sending of an AnonCreds credential Offer using the RFC 0809 VC-DI attachment format.

A credential's encoded attributes are not included in the issued AnonCreds W3C VC format credential. To be determined how that impacts the issuing process.

"},{"location":"design/AnoncredsW3CCompatibility/#verifier","title":"Verifier","text":"

A verifier wanting a W3C VP Format presentation will send the Present Proof v2.0 request message with an RFC 0510 DIF Presentation Exchange format attachment.

If needed, the RFC 0510 DIF Presentation Exchange document will be clarified and possibly updated to enable its use for handling AnonCreds W3C VP format presentations.

An AnonCreds W3C VP format presentation does not include the encoded revealed attributes, and the encoded values must be calculated as needed. To be determined where those would be needed.

"},{"location":"design/AnoncredsW3CCompatibility/#holder","title":"Holder","text":"

A holder must support RFC 0809 VC-DI attachments when receiving Issue Credential v2.0 offer and issue messages, and when sending request messages.

On receiving an Issue Credential v2.0 offer message with an RFC 0809 VC-DI attachment, the holder MUST respond using the RFC 0809 VC-DI format on the subsequent request message.

On receiving a credential from an issuer in an RFC 0809 VC-DI attachment, the holder must process and store the credential for subsequent use in presentations.

  • The AnonCreds verifiable credential MUST support being used in both legacy AnonCreds and W3C VP format (DIF Presentation Exchange) presentations.

On receiving an RFC 0510 DIF Presentation Exchange request message, a holder must include AnonCreds verifiable credentials in the search for credentials satisfying the request, and if found and selected for use, must construct the presentation using the RFC 0510 DIF Presentation Exchange presentation format, with an embedded AnonCreds W3C VP format presentation.

"},{"location":"design/AnoncredsW3CCompatibility/#issues-to-consider","title":"Issues to consider","text":"
  • If and how the W3C VC Format attachments for the Issue Credential V2.0 and Present Proof V2 Aries DIDComm Protocols should be used when using AnonCreds W3C VC Format credentials. Anticipated triggers:
  • An Issuer Controller invokes the Admin API to trigger an Issue Credential v2.0 protocol instance such that the RFC 0809 VC-DI will be used.
  • A Holder receives an Issue Credential v2.0 offer message with an RFC 0809 VC-DI attachment.
  • A Verifier initiates a Present Proof v2.0 protocol instance with an RFC 0510 DIF Presentation Exchange that can be satisfied by AnonCreds VCs held by the holder.
  • A Holder receives a present proof request message with an RFC 0510 DIF Presentation Exchange format attachment that can be satisfied with AnonCreds credentials held by the holder.
    • How are the restrictions and revocation data elements conveyed?
  • How AnonCreds W3C VC Format verifiable credentials are stored by the holder such that they will be discoverable when needed for creating verifiable presentations.
  • How and when multiple signatures can/should be added to a W3C VC Format credential, enabling both AnonCreds and non-AnonCreds signatures on a single credential and their use in presentations. Completing a multi-signature controller is out of scope; however, we want to consider and ensure the design is fundamentally compatible with multi-sig credentials.
"},{"location":"design/AnoncredsW3CCompatibility/#flow-chart","title":"Flow Chart","text":""},{"location":"design/AnoncredsW3CCompatibility/#key-questions","title":"Key Questions","text":""},{"location":"design/AnoncredsW3CCompatibility/#what-is-the-roadmap-for-delivery-what-will-we-build-first-then-second","title":"What is the roadmap for delivery? What will we build first, then second?","text":"

It appears that the issue and presentation sides can be approached independently, assuming that any stored AnonCreds VC can be used in an AnonCreds W3C VP format presentation.

"},{"location":"design/AnoncredsW3CCompatibility/#issue-credential","title":"Issue Credential","text":"
  1. Update Admin API endpoints to initiate an Issue Credential v2.0 protocol to issue an AnonCreds credential in W3C VC format using RFC 0809 VC-DI format attachments.
  2. Add support for the RFC 0809 VC-DI message attachment formats.
  3. Should the attachment format be made pluggable as part of this? From the maintainers: If we did make it pluggable, this would be the point where that would take place. Since these values are hard coded, it is not pluggable currently, as noted. I've been dissatisfied with how this particular piece works for a while. I think making it pluggable, if done right, could help clean it up nicely. A plugin would then define their own implementation of V20CredFormatHandler. (@dbluhm)
  4. Update the v2.0 Issue Credential protocol handler to support a \"RFC 0809 VC-DI mode\" such that when a protocol instance starts with that format, it continues with it until completion, supporting issuing AnonCreds credentials in the process. This includes both the sending and receiving of all protocol message types.
"},{"location":"design/AnoncredsW3CCompatibility/#present-proof","title":"Present Proof","text":"
  1. Adjust as needed the sending of a Present Proof request using the RFC 0510 DIF Presentation Exchange with support (to be defined) for requesting AnonCreds VCs.
  2. Adjust as needed the processing of a Present Proof request message with an RFC 0510 DIF Presentation Exchange attachment so that AnonCreds VCs can be found and used in the subsequent response.
  3. AnonCreds VCs issued as legacy or W3C VC format credentials should be usable in AnonCreds W3C VP format presentations.
  4. Update the creation of an RFC 0510 DIF Presentation Exchange presentation submission to support the use of AnonCreds VCs as the source of the VPs.
  5. Update the verifier receipt of a Present Proof v2.0 presentation message with an RFC 0510 DIF Presentation Exchange containing AnonCreds W3C VP(s) derived from AnonCreds source VCs.
"},{"location":"design/AnoncredsW3CCompatibility/#what-are-the-functions-we-are-going-to-wrap","title":"What are the functions we are going to wrap?","text":"

After thoroughly reviewing the upcoming changes in anoncreds-rs PR273, the classes (AnoncredsObjects) impacted by the changes are as follows:

W3CCredential

  • class methods (create, load)
  • instance methods (process, to_legacy, add_non_anoncreds_integrity_proof, set_id, set_subject_id, add_context, add_type)
  • class properties (schema_id, cred_def_id, rev_reg_id, rev_reg_index)
  • bindings functions (create_w3c_credential, process_w3c_credential, _object_from_json, _object_get_attribute, w3c_credential_add_non_anoncreds_integrity_proof, w3c_credential_set_id, w3c_credential_set_subject_id, w3c_credential_add_context, w3c_credential_add_type)

W3CPresentation

  • class methods (create, load)
  • instance methods (verify)
  • bindings functions (create_w3c_presentation, _object_from_json, verify_w3c_presentation)

They will be added to __init__.py as additional exports of AnoncredsObject.

We also have to consider which classes or anoncreds objects have been modified.

The classes modified according to the same PR mentioned above are:

Credential

  • added class methods (from_w3c)
  • added instance methods (to_w3c)
  • added bindings functions (credential_from_w3c, credential_to_w3c)

PresentCredential

  • modified instance methods (_get_entry, add_attributes, add_predicates)
"},{"location":"design/AnoncredsW3CCompatibility/#creating-a-w3c-vc-credential-from-credential-definition-and-issuing-and-presenting-it-as-is","title":"Creating a W3C VC credential from credential definition, and issuing and presenting it as is","text":"

The issuance, presentation and verification of legacy anoncreds are implemented in this ./aries_cloudagent/anoncreds directory. Therefore, we will also start from there.

Let us walk through these implementation examples for the respective processes of the agents concerned - Issuer and Holder - as described in https://github.com/hyperledger/anoncreds-rs/blob/main/README.md. We will proceed through the following processes in comparison with the legacy anoncreds implementations, watching out for signature differences between the two. Looking at the /anoncreds/issuer.py file, from the AnonCredsIssuer class:

Create VC_DI Credential Offer

According to this DI credential offer attachment format - didcomm/w3c-di-vc-offer@v0.1,

  • binding_required
  • binding_method
  • credential_definition

could be the parameters for the create_offer method, as sketched below.
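
As a rough sketch only (the method name, signature, and binding_method structure are assumptions, not a final design), the offer creation could look like:

async def create_offer(\n    self,\n    credential_definition: dict,\n    binding_required: bool = True,\n    binding_method: dict = None,\n) -> str:\n    # Sketch: assemble a DI credential offer per didcomm/w3c-di-vc-offer@v0.1\n    ...\n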

Create VC_DI Credential

NOTE: There have been some changes to the encoding of attribute values when creating a credential, so we have to adjust to those changes.

async def create_credential(\n        self,\n        credential_offer: dict,\n        credential_request: dict,\n        credential_values: dict,\n    ) -> str:\n...\n...\n  try:\n    credential = await asyncio.get_event_loop().run_in_executor(\n        None,\n        lambda: W3CCredential.create(\n            cred_def.raw_value,\n            cred_def_private.raw_value,\n            credential_offer,\n            credential_request,\n            raw_values,\n            None,\n            None,\n            None,\n            None,\n        ),\n    )\n...\n

Create VC_DI Credential Request

async def create_vc_di_credential_request(\n        self, credential_offer: dict, credential_definition: CredDef, holder_did: str\n    ) -> Tuple[str, str]:\n...\n...\ntry:\n  secret = await self.get_master_secret()\n  (\n      cred_req,\n      cred_req_metadata,\n  ) = await asyncio.get_event_loop().run_in_executor(\n      None,\n      W3CCredentialRequest.create,\n      None,\n      holder_did,\n      credential_definition.to_native(),\n      secret,\n      AnonCredsHolder.MASTER_SECRET_ID,\n      credential_offer,\n  )\n...\n

Create VC_DI Credential Presentation

async def create_vc_di_presentation(\n        self,\n        presentation_request: dict,\n        requested_credentials: dict,\n        schemas: Dict[str, AnonCredsSchema],\n        credential_definitions: Dict[str, CredDef],\n        rev_states: dict = None,\n    ) -> str:\n...\n...\n  try:\n    secret = await self.get_master_secret()\n    presentation = await asyncio.get_event_loop().run_in_executor(\n        None,\n        Presentation.create,\n        presentation_request,\n        present_creds,\n        self_attest,\n        secret,\n        {\n            schema_id: schema.to_native()\n            for schema_id, schema in schemas.items()\n        },\n        {\n            cred_def_id: cred_def.to_native()\n            for cred_def_id, cred_def in credential_definitions.items()\n        },\n    )\n...\n
"},{"location":"design/AnoncredsW3CCompatibility/#converting-an-already-issued-legacy-anoncreds-to-vc_di-formatvice-versa","title":"Converting an already issued legacy anoncreds to VC_DI format(vice versa)","text":"

In this case, we can use the to_w3c method of the Credential class to convert from legacy to W3C format, and the to_legacy method of the W3CCredential class to convert from W3C to legacy format.

We could call to_w3c method like this:

vc_di_cred = Credential.to_w3c(cred_def)\n

and for to_legacy:

legacy_cred = w3c_cred.to_legacy()  # w3c_cred: a W3CCredential instance\n

We don't need to pass any parameters to it, as it calls the Credential.from_w3c() method under the hood.

"},{"location":"design/AnoncredsW3CCompatibility/#format-handler-for-issue_credential-v2_0-protocol","title":"Format Handler for Issue_credential V2_0 Protocol","text":"

Keeping in mind that we are trying to create anoncreds (not another type of VC) in W3C format, we can add protocol-level vc_di format support by adding a new format VC_DI in ./protocols/issue_credential/v2_0/messages/cred_format.py -

# /protocols/issue_credential/v2_0/messages/cred_format.py\n\nclass Format(Enum):\n    \"\"\"Attachment Format\"\"\"\n    INDY = FormatSpec(...)\n    LD_PROOF = FormatSpec(...)\n    VC_DI = FormatSpec(\n        \"vc_di/\",\n        CredExRecordVCDI,\n        DeferLoad(\n            \"aries_cloudagent.protocols.issue_credential.v2_0\"\n            \".formats.vc_di.handler.AnonCredsW3CFormatHandler\"\n        ),\n    )\n

And create a new CredExRecordVCDI in reference to V20CredExRecordLDProof

# /protocols/issue_credential/v2_0/models/detail/w3c.py\n\nclass CredExRecordW3C(BaseRecord):\n    \"\"\"Credential exchange W3C detail record.\"\"\"\n\n    class Meta:\n        \"\"\"CredExRecordW3C metadata.\"\"\"\n\n        schema_class = \"CredExRecordW3CSchema\"\n\n    RECORD_ID_NAME = \"cred_ex_w3c_id\"\n    RECORD_TYPE = \"w3c_cred_ex_v20\"\n    TAG_NAMES = {\"~cred_ex_id\"} if UNENCRYPTED_TAGS else {\"cred_ex_id\"}\n    RECORD_TOPIC = \"issue_credential_v2_0_w3c\"\n

Based on the proposed credential attachment format with the new Data Integrity proof in aries-rfcs 809 -

{\n  \"@id\": \"284d3996-ba85-45d9-964b-9fd5805517b6\",\n  \"@type\": \"https://didcomm.org/issue-credential/2.0/issue-credential\",\n  \"comment\": \"<some comment>\",\n  \"formats\": [\n    {\n      \"attach_id\": \"5b38af88-d36f-4f77-bb7a-2f04ab806eb8\",\n      \"format\": \"didcomm/w3c-di-vc@v0.1\"\n    }\n  ],\n  \"credentials~attach\": [\n    {\n      \"@id\": \"5b38af88-d36f-4f77-bb7a-2f04ab806eb8\",\n      \"mime-type\": \"application/ld+json\",\n      \"data\": {\n        \"base64\": \"ewogICAgICAgICAgIkBjb250ZXogWwogICAgICAg...(clipped)...RNVmR0SXFXZhWXgySkJBIgAgfQogICAgICAgIH0=\"\n      }\n    }\n  ]\n}\n

Assuming VCDIDetail and VCDIOptions are already in place, VCDIDetailSchema can be created like so:

# /protocols/issue_credential/v2_0/formats/vc_di/models/cred_detail.py\n\nclass VCDIDetailSchema(BaseModelSchema):\n    \"\"\"VC_DI verifiable credential detail schema.\"\"\"\n\n    class Meta:\n        \"\"\"Accept parameter overload.\"\"\"\n\n        unknown = INCLUDE\n        model_class = VCDIDetail\n\n    credential = fields.Nested(\n        CredentialSchema(),\n        required=True,\n        metadata={\n            \"description\": \"Detail of the VC_DI Credential to be issued\",\n            \"example\": {\n                \"@id\": \"284d3996-ba85-45d9-964b-9fd5805517b6\",\n                \"@type\": \"https://didcomm.org/issue-credential/2.0/issue-credential\",\n                \"comment\": \"<some comment>\",\n                \"formats\": [\n                    {\n                        \"attach_id\": \"5b38af88-d36f-4f77-bb7a-2f04ab806eb8\",\n                        \"format\": \"didcomm/w3c-di-vc@v0.1\"\n                    }\n                ],\n                \"credentials~attach\": [\n                    {\n                        \"@id\": \"5b38af88-d36f-4f77-bb7a-2f04ab806eb8\",\n                        \"mime-type\": \"application/ld+json\",\n                        \"data\": {\n                            \"base64\": \"ewogICAgICAgICAgIkBjb250ZXogWwogICAgICAg...(clipped)...RNVmR0SXFXZhWXgySkJBIgAgfQogICAgICAgIH0=\"\n                        }\n                    }\n                ]\n            }\n        },\n    )\n

Then create w3c format handler with mapping like so:

# /protocols/issue_credential/v2_0/formats/w3c/handler.py\n\nmapping = {\n            CRED_20_PROPOSAL: VCDIDetailSchema,\n            CRED_20_OFFER: VCDIDetailSchema,\n            CRED_20_REQUEST: VCDIDetailSchema,\n            CRED_20_ISSUE: VerifiableCredentialSchema,\n        }\n

Doing so would allow us to be more independent in defining a schema suited to anoncreds in W3C format. Once the proposal protocol can handle the W3C format, the rest of the flow can probably be implemented easily by adding a vc_di flag to the corresponding routes.

"},{"location":"design/AnoncredsW3CCompatibility/#admin-api-attachments","title":"Admin API Attachments","text":"

To make sure that, once an endpoint has been called to trigger the Issue Credential flow with RFC 0809 VC-DI attachment formats, the subsequent endpoints also follow this format, we can extend the ATTACHMENT_FORMAT dictionary with the proposed VC_DI format.

# Format specifications\nATTACHMENT_FORMAT = {\n    CRED_20_PROPOSAL: {\n        V20CredFormat.Format.INDY.api: \"hlindy/cred-filter@v2.0\",\n        V20CredFormat.Format.LD_PROOF.api: \"aries/ld-proof-vc-detail@v1.0\",\n        V20CredFormat.Format.VC_DI.api: \"aries/vc-di-detail@v2.0\",\n    },\n    CRED_20_OFFER: {\n        V20CredFormat.Format.INDY.api: \"hlindy/cred-abstract@v2.0\",\n        V20CredFormat.Format.LD_PROOF.api: \"aries/ld-proof-vc-detail@v1.0\",\n        V20CredFormat.Format.VC_DI.api: \"aries/vc-di-detail@v2.0\",\n    },\n    CRED_20_REQUEST: {\n        V20CredFormat.Format.INDY.api: \"hlindy/cred-req@v2.0\",\n        V20CredFormat.Format.LD_PROOF.api: \"aries/ld-proof-vc-detail@v1.0\",\n        V20CredFormat.Format.VC_DI.api: \"aries/vc-di-detail@v2.0\",\n    },\n    CRED_20_ISSUE: {\n        V20CredFormat.Format.INDY.api: \"hlindy/cred@v2.0\",\n        V20CredFormat.Format.LD_PROOF.api: \"aries/ld-proof-vc@v1.0\",\n        V20CredFormat.Format.VC_DI.api: \"aries/vc-di@v2.0\",\n    },\n}\n

The _formats_filter function takes care of keeping the attachment formats uniform across the steps of the flow. This function gets called in:

  • _create_free_offer function that gets called in the handler function of /issue-credential-2.0/send-offer route (in addition to other offer routes)
  • credential_exchange_send_free_request handler function of /issue-credential-2.0/send-request route
  • credential_exchange_create handler function of /issue-credential-2.0/create route
  • credential_exchange_send handler function of /issue-credential-2.0/send route

The same goes for the ATTACHMENT_FORMAT dictionary of the Present Proof flow. In this case, the DIF Presentation Exchange formats in these test vectors, which are influenced by RFC 0510 DIF Presentation Exchange, will be implemented. Here, the _formats_attach function is the key for the same purpose as above. It gets called in:

  • present_proof_send_proposal handler function of /present-proof-2.0/send-proposal route
  • present_proof_create_request handler function of /present-proof-2.0/create-request route
  • present_proof_send_free_request handler function of /present-proof-2.0/send-request route
"},{"location":"design/AnoncredsW3CCompatibility/#credential-exchange-admin-routes","title":"Credential Exchange Admin Routes","text":"
  • /issue-credential-2.0/create-offer

This route indirectly calls the _formats_filter function to create a credential proposal, which is in turn used to create a credential offer in the filter format. The request body for this route might look like this:

{\n    \"filter\": [\"vc_di\"],\n    \"comment: <some_comment>,\n    \"auto-issue\": true,\n    \"auto-remove\": true,\n    \"replacement_id\": <replacement_id>,\n    \"credential_preview\": {\n        \"@type\": \"issue-credential/2.0/credential-preview\",\n        \"attributes\": {\n            ...\n            ...\n        }\n    }\n}\n
  • /issue-credential-2.0/create

This route indirectly calls _format_result_with_details function to generate a cred_ex_record in the specified format, which is then returned. The request body for this route might look like this:

{\n    \"filter\": [\"vc_di\"],\n    \"comment: <some_comment>,\n    \"auto-remove\": true,\n    \"credential_preview\": {\n        \"@type\": \"issue-credential/2.0/credential-preview\",\n        \"attributes\": {\n           ...\n           ...\n        }\n    }\n}\n
  • /issue-credential-2.0/send

The request body for this route might look like this:

{\n    \"connection_id\": <connection_id>,\n    \"filter\": [\"vc_di\"],\n    \"comment: <some_comment>,\n    \"auto-remove\": true,\n    \"replacement_id\": <replacement_id>,\n    \"credential_preview\": {\n        \"@type\": \"issue-credential/2.0/credential-preview\",\n        \"attributes\": {\n           ...\n           ...\n        }\n    }\n}\n
  • /issue-credential-2.0/send-offer

The request body for this route might look like this:

{\n    \"connection_id\": <connection_id>,\n    \"filter\": [\"vc_di\"],\n    \"comment: <some_comment>,\n    \"auto-issue\": true,\n    \"auto-remove\": true,\n    \"replacement_id\": <replacement_id>,\n    \"holder_did\": <holder_did>,\n    \"credential_preview\": {\n        \"@type\": \"issue-credential/2.0/credential-preview\",\n        \"attributes\": {\n           ...\n           ...\n        }\n    }\n}\n
  • /issue-credential-2.0/send-request

The request body for this route might look like this:

{\n    \"connection_id\": <connection_id>,\n    \"filter\": [\"vc_di\"],\n    \"comment: <some_comment>,\n    \"auto-remove\": true,\n    \"replacement_id\": <replacement_id>,\n    \"holder_did\": <holder_did>,\n    \"credential_preview\": {\n        \"@type\": \"issue-credential/2.0/credential-preview\",\n        \"attributes\": {\n           ...\n           ...\n        }\n    }\n}\n
"},{"location":"design/AnoncredsW3CCompatibility/#presentation-admin-routes","title":"Presentation Admin Routes","text":"
  • /present-proof-2.0/send-proposal

The request body for this route might look like this:

{\n    ...\n    ...\n    \"connection_id\": <connection_id>,\n    \"presentation_proposal\": [\"vc_di\"],\n    \"comment\": <some_comment>,\n    \"auto-present\": true,\n    \"auto-remove\": true,\n    \"trace\": false\n}\n
  • /present-proof-2.0/create-request

The request body for this route might look like this:

{\n    ...\n    ...\n    \"connection_id\": <connection_id>,\n    \"presentation_proposal\": [\"vc_di\"],\n    \"comment\": <some_comment>,\n    \"auto-verify\": true,\n    \"auto-remove\": true,\n    \"trace\": false\n}\n
  • /present-proof-2.0/send-request

The request body for this route might look like this:

{\n    ...\n    ...\n    \"connection_id\": <connection_id>,\n    \"presentation_proposal\": [\"vc_di\"],\n    \"comment\": <some_comment>,\n    \"auto-verify\": true,\n    \"auto-remove\": true,\n    \"trace\": false\n}\n
  • /present-proof-2.0/records/{pres_ex_id}/send-presentation

The request body for this route might look like this:

{\n    \"presentation_definition\": <presentation_definition_schema>,\n    \"auto_remove\": true,\n    \"dif\": {\n        issuer_id: \"<issuer_id>\",\n        record_ids: {\n            \"<input descriptor id_1>\": [\"<record id_1>\", \"<record id_2>\"],\n            \"<input descriptor id_2>\": [\"<record id>\"],\n        }\n    },\n    \"reveal_doc\": {\n        // vc_di dict\n    }\n\n}\n
"},{"location":"design/AnoncredsW3CCompatibility/#how-a-w3c-credential-is-stored-in-the-wallet","title":"How a W3C credential is stored in the wallet","text":"

Storing a credential in the wallet is somewhat dependent on the kinds of metadata that are relevant. The metadata mapping between the W3C credential and an AnonCreds credential is not fully clear yet.

One of the questions we need to answer is whether the preferred approach is to modify the existing store credential function so that any credential type is a valid input, or whether there should be a special function just for storing W3C credentials.

We will duplicate this store_credential function and modify it:

async def store_w3c_credential(...) {\n    ...\n    ...\n    try:\n        cred = W3CCredential.load(credential_data)\n    ...\n    ...\n}\n

Question: Would it also be possible to generate the credentials on the fly to eliminate the need for storage?

Answer: I don't think it is possible to eliminate the need for storage, and notably the secure storage (encrypted at rest) supported in Askar.

"},{"location":"design/AnoncredsW3CCompatibility/#how-can-we-handle-multiple-signatures-on-a-w3c-vc-format-credential","title":"How can we handle multiple signatures on a W3C VC Format credential?","text":"

Only one of the signature types (CL) is allowed in the AnonCreds format, so if a W3C VC is converted with to_legacy(), all signature types that can't be turned into a CL signature will be dropped. This would make the conversion lossy. Similarly, an AnonCreds credential carries only the CL signature, limiting the output of to_w3c() to signature types that can be derived from the source CL signature. A possible future enhancement would be to add an extra field to the AnonCreds data structure in which additional signatures could be stored, even if they are not used. This could eliminate the lossiness, but it adds extra complexity and may not be worth doing.

  • Unlike a \"typical\" non-AnonCreds W3C VC, an AnonCreds VC is never directly presented to a verifier. Rather, a derivation of the credential is generated, and it is the derivation that is shared with the verifier as a presentation. The derivation:
  • Generates presentation-specific signatures to be verified.
  • Selectively reveals attributes.
  • Generates proofs of the requested predicates.
  • Generates a proof of knowledge of the link secret blinded in the verifiable credential.
"},{"location":"design/AnoncredsW3CCompatibility/#compatibility-with-afj-how-can-we-make-sure-that-we-are-compatible","title":"Compatibility with AFJ: how can we make sure that we are compatible?","text":"

We will write a test for the Aries Agent Test Framework that issues a W3C VC instead of an AnonCreds credential, and then run that test where one of the agents is ACA-Py and the other is based on AFJ -- and vice versa. We will also write a test where a W3C VC is presented after an AnonCreds issuance, and run it with the two roles played by the two different agents. This is a simple approach, but if the tests pass, it should eliminate almost all risk of incompatibility.

"},{"location":"design/AnoncredsW3CCompatibility/#will-we-introduce-new-dependencies-and-what-is-risky-or-easy","title":"Will we introduce new dependencies, and what is risky or easy?","text":"

Any significant bugs in the Rust implementation may prevent our wrappers from working, which would also prevent progress (or at least confirmed test results) on the higher-level code.

If AFJ lags behind in delivering equivalent functionality, we may not be able to demonstrate compatibility with the test harness.

"},{"location":"design/AnoncredsW3CCompatibility/#where-should-the-new-issuance-code-go","title":"Where should the new issuance code go?","text":"

The vc directory contains code to verify VCs; is this a logical place to add the code for issuance?

"},{"location":"design/AnoncredsW3CCompatibility/#what-do-we-call-the-new-things-flexcreds-or-just-w3c_xxx","title":"What do we call the new things? Flexcreds? or just W3C_xxx","text":"

Are we defining a concept called Flexcreds - that is, a credential with a proof array from which you can generate more specific or limited credentials? If so, should this be included in the naming?

  • I don't think naming comes into play. We are creating and deriving presentations from VC Data Integrity Proofs using an AnonCreds cryptosuite. As such, these are \"stock\" W3C verifiable credentials.
"},{"location":"design/AnoncredsW3CCompatibility/#how-can-a-wallet-retain-the-capability-to-present-only-an-anoncred-credential","title":"How can a wallet retain the capability to present ONLY an anoncred credential?","text":"

If the wallet receives a "Flexcred" credential object with an array of proofs, the wallet may wish to present ONLY the more zero-knowledge anoncreds proof.

How will wallets support that in a way that is developer-friendly to wallet devs?

  • The trigger for wallets to generate a W3C VP format presentation is that they have received an RFC 0510 DIF Presentation Exchange request that can be satisfied with an AnonCreds verifiable credential in their storage. Once we decide to use one or more AnonCreds VCs to satisfy a presentation, we'll derive such a presentation and send it using the RFC 0510 DIF Presentation Exchange format for the presentation message of the Present Proof v2.0 protocol.
"},{"location":"design/UpgradeViaApi/","title":"Upgrade via API Design","text":""},{"location":"design/UpgradeViaApi/#to-isolate-an-upgrade-process-and-trigger-it-via-api-the-following-pattern-was-designed-to-handle-multitenant-scenarios-it-includes-an-is_upgrading-record-in-the-walletdb-and-a-middleware-to-prevent-requests-during-the-upgrade-process","title":"To isolate an upgrade process and trigger it via API the following pattern was designed to handle multitenant scenarios. It includes an is_upgrading record in the wallet(DB) and a middleware to prevent requests during the upgrade process.","text":""},{"location":"design/UpgradeViaApi/#the-diagam-below-descripes-the-sequence-of-events-for-the-anoncreds-upgrade-process-which-it-was-designed-for-but-the-architecture-can-be-used-for-any-upgrade-process","title":"The diagam below descripes the sequence of events for the anoncreds upgrade process which it was designed for, but the architecture can be used for any upgrade process.","text":"
sequenceDiagram\n    participant A1 as Agent 1\n    participant M1 as Middleware\n    participant IAS1 as IsAnoncredsSingleton Set\n    participant UIPS1 as UpgradeInProgressSingleton Set\n    participant W as Wallet (DB)\n    participant UIPS2 as UpgradeInProgressSingleton Set\n    participant IAS2 as IsAnoncredsSingleton Set\n    participant M2 as Middleware\n    participant A2 as Agent 2\n\n    Note over A1,A2: Start upgrade for non-anoncreds wallet\n    A1->>M1: POST /anoncreds/wallet/upgrade\n    M1-->>IAS1: check if wallet is in set\n    IAS1-->>M1: wallet is not in set\n    M1-->>UIPS1: check if wallet is in set\n    UIPS1-->>M1: wallet is not in set\n    M1->>A1: OK\n    A1-->>W: Add is_upgrading = anoncreds_in_progress record\n    A1->>A1: Upgrade wallet\n    A1-->>UIPS1: Add wallet to set\n\n    Note over A1,A2: Attempted Requests During Upgrade\n\n    Note over A1: Attempted Request\n    A1->>M1: GET /any-endpoint\n    M1-->>IAS1: check if wallet is in set\n    IAS1-->>M1: wallet is not in set\n    M1-->>UIPS1: check if wallet is in set\n    UIPS1-->>M1: wallet is in set\n    M1->>A1: 503 Service Unavailable\n\n    Note over A2: Attempted Request\n    A2->>M2: GET /any-endpoint\n    M2-->>IAS2: check if wallet is in set\n    IAS2->>M2: wallet is not in set\n    M2-->>UIPS2: check if wallet is in set\n    UIPS2-->>M2: wallet is not in set\n    A2-->>W: Query is_upgrading = anoncreds_in_progress record\n    W-->>A2: record = anoncreds_in_progress\n    A2->>A2: Loop until upgrade is finished in separate process\n    A2-->>UIPS2: Add wallet to set\n    M2->>A2: 503 Service Unavailable\n\n    Note over A1,A2: Agent Restart During Upgrade\n    A1-->>W: Get is_upgrading record for wallet or all subwallets\n    W-->>A1: \n    A1->>A1: Resume upgrade if in progress\n    A1-->>UIPS1: Add wallet to set\n\n    Note over A2: Same as Agent 1\n\n    Note over A1,A2: Upgrade Completes\n\n    Note over A1: Finish Upgrade\n    A1-->>W: set is_upgrading = anoncreds_finished\n    A1-->>UIPS1: Remove wallet from set\n    A1-->>IAS1: Add wallet to set\n    A1->>A1: update subwallet or restart\n\n    Note over A2: Detect Upgrade Complete\n    A2-->>W: Check is_upgrading = anoncreds_finished\n    W-->>A2: record = anoncreds_in_progress\n    A2->>A2: Wait 1 second\n    A2-->>W: Check is_upgrading = anoncreds_finished\n    W-->>A2: record = anoncreds_finished\n    A2-->>UIPS2: Remove wallet from set\n    A2-->>IAS2: Add wallet to set\n    A2->>A2: update subwallet or restart\n\n    Note over A1,A2: Restarted Agents After Upgrade\n\n    A1-->>W: Get is_upgrading record for wallet or all subwallets\n    W-->>A1: \n    A1->>IAS1: Add wallet to set if record = anoncreds_finished\n\n    Note over A2: Same as Agent 1\n\n    Note over A1,A2: Attempted Requests After Upgrade\n\n    Note over A1: Attempted Request\n    A1->>M1: GET /any-endpoint\n    M1-->>IAS1: check if wallet is in set\n    IAS1-->>M1: wallet is in set\n    M1-->>A1: OK\n\n    Note over A2: Same as Agent 1
"},{"location":"design/UpgradeViaApi/#an-example-of-the-implementation-can-be-found-via-the-anoncreds-upgrade-components","title":"An example of the implementation can be found via the anoncreds upgrade components.","text":"
- `aries_cloudagent/wallet/routes.py` in the `upgrade_anoncreds` controller \n- the upgrade code in `wallet/anoncreds_upgrade.py`\n- the middleware in `admin/server.py` in the `upgrade_middleware` function\n- the singleton sets in `wallet/singletons.py`\n- the startup process in `core/conductor.py` in the `check_for_wallet_upgrades_in_progress` function\n
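
As a sketch of the middleware idea only (not the actual code in admin/server.py; the wallet lookup and set handling are simplified placeholders), an aiohttp middleware that rejects requests during an upgrade could look like:

from aiohttp import web\n\nUPGRADE_IN_PROGRESS = set()  # stand-in for the UpgradeInProgressSingleton set\n\n@web.middleware\nasync def upgrade_middleware(request, handler):\n    # Simplified: the real code resolves the wallet/profile from the request context\n    wallet_id = request.headers.get(\"x-wallet-id\", \"base\")\n    if wallet_id in UPGRADE_IN_PROGRESS:\n        raise web.HTTPServiceUnavailable(reason=\"Wallet upgrade in progress\")\n    return await handler(request)\n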
"},{"location":"features/AdminAPI/","title":"ACA-Py Administration API","text":""},{"location":"features/AdminAPI/#using-the-openapi-swagger-interface","title":"Using the OpenAPI (Swagger) Interface","text":"

ACA-Py provides an OpenAPI-documented REST interface for administering the agent's internal state and initiating communication with connected agents.

To see the specifics of the supported endpoints, as well as the expected request and response formats, it is recommended to run the aca-py agent with the --admin {HOST} {PORT} and --admin-insecure-mode command line parameters. This exposes the OpenAPI UI on the provided port for interaction via a web browser. For production deployments, run the agent with --admin-api-key {KEY} and add the X-API-Key: {KEY} header to all requests instead of using the --admin-insecure-mode parameter.
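
If you prefer to script against the Admin API rather than use the browser UI, the calls are plain HTTP requests. Below is a minimal sketch using Python's requests library; the admin URL and API key are assumptions for illustration only:

import requests\n\nADMIN_URL = \"http://localhost:8031\"  # assumption: local admin server\nHEADERS = {\"X-API-Key\": \"my-api-key\"}  # hypothetical key set via --admin-api-key\n\n# List the agent's connection records via the Admin API\nresp = requests.get(f\"{ADMIN_URL}/connections\", headers=HEADERS)\nresp.raise_for_status()\nfor conn in resp.json()[\"results\"]:\n    print(conn[\"connection_id\"], conn[\"state\"])\n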

To invoke a specific method:

  • Scroll to and find that endpoint;
  • Click on the endpoint name to expand its section of the UI;
  • Click on the Try it out button;
  • Fill in any data necessary to run the command;
  • Click Execute;
  • Check the response to see if the request worked as expected.

The mechanical steps are easy; however, the fourth step from the list above can be tricky. You must supply the right data and, where JSON is involved, get the syntax correct; braces and quotes can be a pain. When steps don't work, start your debugging by looking at your JSON. You may also choose to use a REST client like Postman or Insomnia, which will provide syntax highlighting and other features to simplify the process.

Because API methods often initiate asynchronous processes, the JSON response provided by an endpoint is not always sufficient to determine the next action. To handle this situation, as well as events triggered by external inputs (such as new connection requests), it is necessary to implement a webhook processor, as detailed in the next section.

The combination of an OpenAPI client and webhook processor is referred to as an ACA-Py Controller and is the recommended method to define custom behaviors for your ACA-Py-based agent application.

"},{"location":"features/AdminAPI/#administration-api-webhooks","title":"Administration API Webhooks","text":"

When ACA-Py is started with the --webhook-url {URL} command line parameter, state-management records are sent to the provided URL via POST requests whenever a record is created or its state property is updated.

When a webhook is dispatched, the record topic is appended as a path component to the URL. For example, https://webhook.host.example becomes https://webhook.host.example/topic/connections when a connection record is updated. A POST request is made to the resulting URL with the body of the request comprising a serialized JSON object. The full set of properties for the current set of webhook payloads is listed below. Note that empty (null-value) properties are omitted.
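
As a hedged sketch of a webhook processor, a minimal aiohttp server like the one below would receive those POSTs; the port is an assumption, and the route simply mirrors the /topic/{topic} URL convention described above:

from aiohttp import web\n\nasync def handle_webhook(request: web.Request):\n    topic = request.match_info[\"topic\"]\n    payload = await request.json()\n    print(f\"webhook: topic={topic} state={payload.get('state')}\")\n    return web.Response(status=200)\n\napp = web.Application()\n# ACA-Py appends the record topic to the configured --webhook-url\napp.add_routes([\n    web.post(\"/topic/{topic}\", handle_webhook),\n    web.post(\"/topic/{topic}/\", handle_webhook),  # tolerate a trailing slash\n])\n\nif __name__ == \"__main__\":\n    web.run_app(app, port=8022)  # assumption: --webhook-url http://localhost:8022\n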

"},{"location":"features/AdminAPI/#webhooks-over-websocket","title":"Webhooks over WebSocket","text":"

ACA-Py's Admin API also supports delivering webhooks over WebSocket. This can be especially useful when working with scripts that interact with the Admin API but don't have a web server listening to receive webhooks in response to their actions. No additional command line parameters are required to enable WebSocket support.

Webhooks received over WebSocket contain the same data as webhooks posted over HTTP, but the structure differs in order to communicate details that would otherwise have been received as part of the HTTP request path and headers.

  • topic: The topic of the webhook, such as connections or basicmessages
  • payload: The payload of the webhook; this is the data usually received in the request body when webhooks are delivered over HTTP
  • wallet_id: If using multitenancy, this is the wallet ID of the subwallet that emitted the webhook. This value will be omitted if not using multitenancy.

To open a WebSocket, connect to the /ws endpoint of the Admin API.
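
For example, a script could consume events over WebSocket with a sketch like the following (aiohttp client shown; the admin URL is an assumption, and an x-api-key header would be added if the admin server is secured):

import asyncio\n\nimport aiohttp\n\nADMIN_URL = \"http://localhost:8031\"  # assumption: local admin server\n\nasync def listen():\n    async with aiohttp.ClientSession() as session:\n        async with session.ws_connect(f\"{ADMIN_URL}/ws\") as ws:\n            async for msg in ws:\n                if msg.type == aiohttp.WSMsgType.TEXT:\n                    event = msg.json()\n                    print(event.get(\"topic\"), event.get(\"payload\"))\n\nasyncio.run(listen())\n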

"},{"location":"features/AdminAPI/#pairwise-connection-record-updated-connections","title":"Pairwise Connection Record Updated (/connections)","text":"
  • connection_id: the unique connection identifier
  • state: init / invitation / request / response / active / error / inactive
  • my_did: the DID this agent is using in the connection
  • their_did: the DID the other agent in the connection is using
  • their_label: a connection label provided by the other agent
  • their_role: a role assigned to the other agent in the connection
  • inbound_connection_id: a connection identifier for the related inbound routing connection
  • initiator: self / external / multiuse
  • invitation_key: a verification key used to identify the source connection invitation
  • request_id: the @id property from the connection request message
  • routing_state: none / request / active / error
  • accept: manual / auto
  • error_msg: the most recent error message
  • invitation_mode: once / multi
  • alias: a local alias for the connection record
"},{"location":"features/AdminAPI/#basic-message-received-basicmessages","title":"Basic Message Received (/basicmessages)","text":"
  • connection_id: the identifier of the related pairwise connection
  • message_id: the @id of the incoming agent message
  • content: the contents of the agent message
  • state: received
"},{"location":"features/AdminAPI/#forward-message-received-forward","title":"Forward Message Received (/forward)","text":"

Enabled using the --monitor-forward command line parameter.

  • connection_id: the identifier of the connection associated with the recipient key
  • recipient_key: the recipient key of the forward message (to field of the forward message)
  • status: The delivery status of the received forward message. Possible values:
  • sent_to_session: Message is sent directly to the connection over an active transport session
  • sent_to_external_queue: Message is sent to an external queue. No information is known on the delivery of the message
  • queued_for_delivery: Message is queued for delivery using outbound transport (recipient connection has an endpoint)
  • waiting_for_pickup: The connection has no reachable endpoint. Need to wait for the recipient to connect with return routing for delivery
  • undeliverable: The connection has no reachable endpoint, and the internal queue for messages is not enabled (--enable-undelivered-queue).
"},{"location":"features/AdminAPI/#credential-exchange-record-updated-issue_credential","title":"Credential Exchange Record Updated (/issue_credential)","text":"
  • credential_exchange_id: the unique identifier of the credential exchange
  • connection_id: the identifier of the related pairwise connection
  • thread_id: the thread ID of the previously received credential proposal or offer
  • parent_thread_id: the parent thread ID of the previously received credential proposal or offer
  • initiator: issue-credential exchange initiator self / external
  • state: proposal_sent / proposal_received / offer_sent / offer_received / request_sent / request_received / issued / credential_received / credential_acked
  • credential_definition_id: the ledger identifier of the related credential definition
  • schema_id: the ledger identifier of the related credential schema
  • credential_proposal_dict: the credential proposal message
  • credential_offer: (Indy) credential offer
  • credential_request: (Indy) credential request
  • credential_request_metadata: (Indy) credential request metadata
  • credential_id: the wallet identifier of the stored credential
  • raw_credential: the credential record as received
  • credential: the credential record as stored in the wallet
  • auto_offer: (boolean) whether to automatically offer the credential
  • auto_issue: (boolean) whether to automatically issue the credential
  • error_msg: the previous error message
"},{"location":"features/AdminAPI/#presentation-exchange-record-updated-present_proof","title":"Presentation Exchange Record Updated (/present_proof)","text":"
  • presentation_exchange_id: the unique identifier of the presentation exchange
  • connection_id: the identifier of the related pairwise connection
  • thread_id: the thread ID of the previously received presentation proposal or offer
  • initiator: present-proof exchange initiator: self / external
  • state: proposal_sent / proposal_received / request_sent / request_received / presentation_sent / presentation_received / verified
  • presentation_proposal_dict: the presentation proposal message
  • presentation_request: (Indy) presentation request (also known as proof request)
  • presentation: (Indy) presentation (also known as proof)
  • verified: (string) whether the presentation is verified: true or false
  • auto_present: (boolean) prover choice to auto-present proof as verifier requests
  • error_msg: the previous error message
"},{"location":"features/AdminAPI/#api-standard-behavior","title":"API Standard Behavior","text":"

The best way to develop a new admin API or protocol is to follow one of the existing protocols, such as the Credential Exchange or Presentation Exchange.

The routes.py file contains the API definitions - API endpoints and payload schemas (note that these are not the Aries message schemas).

The payload schemas are defined using marshmallow and are validated automatically by middleware when the API is executed. If schema validation fails, an HTTP 422 response is raised with an error message.

API endpoints are defined using aiohttp_apispec decorators (e.g. @docs, @request_schema, @response_schema, etc.), which define the input and output parameters of the endpoint. API URL paths are defined in the register() method and added to the Swagger page in the post_process_routes() method.
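
As a hedged sketch of that pattern (the schema and route names below are invented for illustration; an existing protocol's routes.py is the authoritative reference):

from aiohttp import web\nfrom aiohttp_apispec import docs, request_schema, response_schema\nfrom marshmallow import Schema, fields\n\nclass ExampleCreateRequestSchema(Schema):\n    \"\"\"Request schema; validated automatically by the middleware.\"\"\"\n\n    name = fields.Str(required=True)\n\nclass ExampleCreateResponseSchema(Schema):\n    \"\"\"Response schema.\"\"\"\n\n    record_id = fields.Str()\n\n@docs(tags=[\"example\"], summary=\"Create an example record\")\n@request_schema(ExampleCreateRequestSchema())\n@response_schema(ExampleCreateResponseSchema(), 200)\nasync def example_create(request: web.BaseRequest):\n    body = await request.json()\n    if not body.get(\"name\"):\n        raise web.HTTPBadRequest(reason=\"name is required\")  # 4xx for input errors\n    return web.json_response({\"record_id\": \"...\"})\n\nasync def register(app: web.Application):\n    \"\"\"Register routes, following the convention described above.\"\"\"\n    app.add_routes([web.post(\"/example\", example_create)])\n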

The APIs should return the following HTTP statuses:

  • HTTP 200 for successful API completion, with an appropriate response
  • HTTP 400 (or appropriate 4xx code) (with an error message) for errors on input parameters (i.e., the user can retry with different parameters and potentially get a successful API call)
  • HTTP 404 if a record is expected and not found (generally for GET requests that fetch a single record)
  • HTTP 500 (or appropriate 5xx code) if there is some other processing error (i.e., it won't make any difference what parameters the user tries) with an error message

...and should not return:

  • HTTP 500 with a stack trace due to an untrapped error (we should handle error conditions with a 400 or 404 response and catch errors, providing a meaningful error message)
"},{"location":"features/AnonCredsMethods/","title":"Adding AnonCreds Methods to ACA-Py","text":"

ACA-Py was originally developed to be used with Hyperledger AnonCreds objects (Schemas, Credential Definitions and Revocation Registries) published on Hyperledger Indy networks. However, with the evolution of \"ledger-agnostic\" AnonCreds, ACA-Py supports publishing AnonCreds objects wherever you want to put them. If you want to add a new \"AnonCreds Method\" to publish AnonCreds objects to a new Verifiable Data Registry (VDR) (perhaps to your favorite blockchain, or using a web-based DID method), you'll find the details of how to do that here. We often use the term \"ledger\" for the location where AnonCreds objects are published, but here we will use \"VDR\", since a VDR does not have to be a ledger.

The information in this document was discussed on an ACA-Py Maintainers call in March 2024. You can watch the call recording by clicking here.

This is an early version of this document and we assume those reading it are quite familiar with using ACA-Py, have a good understanding of ACA-Py internals, and are Python experts. See the Questions or Comments section below for how to get help as you work through this.

"},{"location":"features/AnonCredsMethods/#create-a-plugin","title":"Create a Plugin","text":"

We recommend that if you are adding a new AnonCreds method, you do so by creating an ACA-Py plugin. See the documentation on ACA-Py plugins and use the set of plugins available in the aries-acapy-plugins repository to help you get started. When you finish your AnonCreds method, we recommend that you publish the plugin in the aries-acapy-plugins repository. If you think the AnonCreds method you create should be part of ACA-Py core, complete your plugin and raise the question of adding it to ACA-Py. The Maintainers will be happy to discuss the merits of the idea. No promises though.

Your AnonCreds plugin will have an initialization routine that registers your AnonCreds implementation. It registers the identifier constructs (such as the identifier prefixes or patterns) that your method uses; those constructs determine which AnonCreds Registrar and Resolver are invoked for any given AnonCreds object identifier. Check out this example of the registration of the \"legacy\" Indy AnonCreds method for more details.
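
In rough outline, the setup routine looks something like the sketch below. The class and method names are recalled from the ACA-Py codebase and are not guaranteed; treat the linked legacy Indy registration as the authoritative example:

from aries_cloudagent.anoncreds.registry import AnonCredsRegistry\n\nasync def setup(context):\n    \"\"\"Plugin setup (sketch): register a hypothetical AnonCreds method.\n\n    Names here are recalled from the ACA-Py codebase and may differ;\n    the linked legacy Indy registration is authoritative.\n    \"\"\"\n    registry = context.inject(AnonCredsRegistry)\n    my_method = MyMethodRegistry()  # hypothetical resolver/registrar class\n    await my_method.setup(context)\n    registry.register(my_method)\n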

"},{"location":"features/AnonCredsMethods/#the-implementation","title":"The Implementation","text":"

The basic work involved in creating an AnonCreds method is the implementation of both a \"registrar\" to write AnonCreds objects to a VDR, and a \"resolver\" to read AnonCreds objects from a VDR. To do that for your new AnonCreds method, you will need to:

  • Implement BaseAnonCredsResolver - here
  • Implement BaseAnonCredsRegistrar - here

The links above are to a specific commit and the code may have been updated since. You might want to look at the methods in the current version of aries_cloudagent/anoncreds/base.py in the main branch.

The interfaces for those methods are very clean, and there are currently two implementations of the methods in the ACA-Py codebase -- the \"legacy\" Indy implementation, and the did:indy Indy implementation. There is also a did:web resolver implementation.
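
As an orientation aid only, an implementation skeleton looks roughly like the following; the method list is abbreviated and the signatures are paraphrased, so treat aries_cloudagent/anoncreds/base.py as authoritative:

from aries_cloudagent.anoncreds.base import BaseAnonCredsRegistrar, BaseAnonCredsResolver\n\nclass MyMethodRegistry(BaseAnonCredsResolver, BaseAnonCredsRegistrar):\n    \"\"\"Resolver and registrar for a hypothetical VDR (sketch only).\"\"\"\n\n    async def setup(self, context):\n        \"\"\"Prepare any VDR client connections.\"\"\"\n\n    async def get_schema(self, profile, schema_id):\n        \"\"\"Read a schema from the VDR and return the result model.\"\"\"\n        ...\n\n    async def register_schema(self, profile, schema, options=None):\n        \"\"\"Write a schema to the VDR and return its registration state.\"\"\"\n        ...\n\n    # ...plus the credential definition, revocation registry definition,\n    # and revocation list counterparts defined in anoncreds/base.py\n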

Models for the API are defined here

"},{"location":"features/AnonCredsMethods/#events","title":"Events","text":"

When you create your AnonCreds method registrar, make sure that your implementation calls the appropriate finish_* method (e.g., AnonCredsIssuer.finish_schema, AnonCredsIssuer.finish_cred_def, etc.) on the AnonCreds Issuer. These calls are necessary to trigger the AnonCreds automation that ACA-Py performs, particularly around the handling of Revocation Registries. As you (should) know, when an Issuer uses ACA-Py to create a Credential Definition that supports revocation, ACA-Py automatically creates and publishes two Revocation Registries related to the Credential Definition, publishes the tails file for each, makes one active, and sets the other to be activated as soon as the active one runs out of credentials. Your AnonCreds method implementation doesn't have to do much to make that happen -- ACA-Py does it automatically -- but your implementation must call the finish_* methods to trigger ACA-Py to continue the automation. You can see the automation setup in Revocation Setup.
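
For example (illustrative only -- confirm the exact names and signatures in the AnonCreds Issuer before relying on them):

# Hedged sketch: once your registrar's write to the VDR completes, notify\n# ACA-Py so its automation continues. See aries_cloudagent/anoncreds/issuer.py\n# for the actual finish_* method signatures.\nissuer = AnonCredsIssuer(profile)\nawait issuer.finish_schema(job_id, schema_id)\n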

"},{"location":"features/AnonCredsMethods/#questions-or-comments","title":"Questions or Comments","text":"

The ACA-Py maintainers welcome questions from those new to the community who have the skills to implement a new AnonCreds method. Use the #aries-cloudagent-python channel on the Hyperledger Discord Server or open an issue in this repo to get help.

Pull Requests to the ACA-Py repository to improve this content are welcome!

"},{"location":"features/AnoncredsControllerMigration/","title":"Anoncreds Controller Migration","text":"

To upgrade an agent to use anoncreds, a controller should implement the required changes to endpoints and payloads in a way that is backwards compatible. The controller can then trigger the upgrade via the upgrade endpoint.

"},{"location":"features/AnoncredsControllerMigration/#step-1-endpoint-payload-and-response-changes","title":"Step 1 - Endpoint Payload and Response Changes","text":"

There are endpoint and payload changes involved in creating schema, credential definition and revocation objects. Your controller will need to implement these changes for any endpoints it uses.

A good way to implement this with backwards compatibility is to get the wallet type via /settings and use the existing endpoints when wallet.type is askar and the new anoncreds endpoints when wallet.type is askar-anoncreds. In this way, the controller will handle both types of wallets in case the upgrade fails. After the upgrade is successful and stable, the controller can be updated to handle only the new anoncreds endpoints.
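
A hedged sketch of that branching logic in Python (assuming a local admin server and that the /settings response exposes wallet.type as shown):

import requests\n\nADMIN_URL = \"http://localhost:8031\"  # assumption: local admin server\n\nsettings = requests.get(f\"{ADMIN_URL}/settings\").json()\nif settings.get(\"wallet.type\") == \"askar-anoncreds\":\n    schemas = requests.get(f\"{ADMIN_URL}/anoncreds/schemas\").json()\nelse:  # \"askar\" -- the wallet has not been upgraded yet\n    schemas = requests.get(f\"{ADMIN_URL}/schemas/created\").json()\n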

"},{"location":"features/AnoncredsControllerMigration/#schemas","title":"Schemas","text":""},{"location":"features/AnoncredsControllerMigration/#creating-a-schema","title":"Creating a Schema:","text":"
  • Change endpoint from POST /schemas to POST /anoncreds/schema
  • Change payload and parameters from
params\n - conn_id\n - create_transaction_for_endorser\n
{\n  \"attributes\": [\"score\"],\n  \"schema_name\": \"simple\",\n  \"schema_version\": \"1.0\"\n}\n

to

{\n  \"options\": {\n    \"create_transaction_for_endorser\": false,\n    \"endorser_connection_id\": \"3fa85f64-5717-4562-b3fc-2c963f66afa6\"\n  },\n  \"schema\": {\n    \"attrNames\": [\"score\"],\n    \"issuerId\": \"WgWxqztrNooG92RXvxSTWv\",\n    \"name\": \"Example schema\",\n    \"version\": \"1.0\"\n  }\n}\n
  • options are not required
  • issuerId is the public did to be used on the ledger
  • The payload responses have changed

Responses

Without endorsement:

{\n  \"sent\": {\n    \"schema_id\": \"PzmGpSeCznzfPmv9B1EBqa:2:simple:1.0\",\n    \"schema\": {\n      \"ver\": \"1.0\",\n      \"id\": \"PzmGpSeCznzfPmv9B1EBqa:2:simple:1.0\",\n      \"name\": \"simple\",\n      \"version\": \"1.0\",\n      \"attrNames\": [\"score\"],\n      \"seqNo\": 541\n    }\n  },\n  \"schema_id\": \"PzmGpSeCznzfPmv9B1EBqa:2:simple:1.0\",\n  \"schema\": {\n    \"ver\": \"1.0\",\n    \"id\": \"PzmGpSeCznzfPmv9B1EBqa:2:simple:1.0\",\n    \"name\": \"simple\",\n    \"version\": \"1.0\",\n    \"attrNames\": [\"score\"],\n    \"seqNo\": 541\n  }\n}\n

to

{\n  \"job_id\": \"string\",\n  \"registration_metadata\": {},\n  \"schema_metadata\": {},\n  \"schema_state\": {\n    \"schema\": {\n      \"attrNames\": [\"score\"],\n      \"issuerId\": \"WgWxqztrNooG92RXvxSTWv\",\n      \"name\": \"Example schema\",\n      \"version\": \"1.0\"\n    },\n    \"schema_id\": \"WgWxqztrNooG92RXvxSTWv:2:schema_name:1.0\",\n    \"state\": \"finished\"\n  }\n}\n

With endorsement:

{\n  \"sent\": {\n    \"schema\": {\n      \"attrNames\": [\n        \"score\"\n      ],\n      \"id\": \"WgWxqztrNooG92RXvxSTWv:2:schema_name:1.0\",\n      \"name\": \"schema_name\",\n      \"seqNo\": 10,\n      \"ver\": \"1.0\",\n      \"version\": \"1.0\"\n    },\n    \"schema_id\": \"WgWxqztrNooG92RXvxSTWv:2:schema_name:1.0\"\n  },\n  \"txn\": {...}\n}\n

to

{\n  \"job_id\": \"12cb896d648242c8b9b0fff3b870ed00\",\n  \"schema_state\": {\n    \"state\": \"wait\",\n    \"schema_id\": \"RbyPM1EP8fKCrf28YsC1qK:2:simple:1.1\",\n    \"schema\": {\n      \"issuerId\": \"RbyPM1EP8fKCrf28YsC1qK\",\n      \"attrNames\": [\n        \"score\"\n      ],\n      \"name\": \"simple\",\n      \"version\": \"1.1\"\n    }\n  },\n  \"registration_metadata\": {\n    \"txn\": {...}\n  },\n  \"schema_metadata\": {}\n}\n
"},{"location":"features/AnoncredsControllerMigration/#getting-schemas","title":"Getting schemas:","text":"
  • Change endpoint from GET /schemas/created to GET /anoncreds/schemas
  • Response payloads have no change
"},{"location":"features/AnoncredsControllerMigration/#getting-a-schema","title":"Getting a schema:","text":"
  • Change endpoint from GET /schemas/{schema_id} to GET /anoncreds/schema/{schema_id}
  • Response payload changed from
{\n  \"schema\": {\n    \"attrNames\": [\"score\"],\n    \"id\": \"WgWxqztrNooG92RXvxSTWv:2:schema_name:1.0\",\n    \"name\": \"schema_name\",\n    \"seqNo\": 10,\n    \"ver\": \"1.0\",\n    \"version\": \"1.0\"\n  }\n}\n

to

{\n  \"resolution_metadata\": {},\n  \"schema\": {\n    \"attrNames\": [\"score\"],\n    \"issuerId\": \"WgWxqztrNooG92RXvxSTWv\",\n    \"name\": \"Example schema\",\n    \"version\": \"1.0\"\n  },\n  \"schema_id\": \"WgWxqztrNooG92RXvxSTWv:2:schema_name:1.0\",\n  \"schema_metadata\": {}\n}\n
"},{"location":"features/AnoncredsControllerMigration/#credential-definitions","title":"Credential Definitions","text":""},{"location":"features/AnoncredsControllerMigration/#creating-a-credential-definition","title":"Creating a credential definition:","text":"
  • Change endpoint from POST /credential-definitions to POST /anoncreds/credential-definition
  • Change payload and parameters from
params\n - conn_id\n - create_transaction_for_endorser\n
{\n  \"revocation_registry_size\": 1000,\n  \"schema_id\": \"WgWxqztrNooG92RXvxSTWv:2:simple:1.0\",\n  \"support_revocation\": true,\n  \"tag\": \"default\"\n}\n

to

{\n  \"credential_definition\": {\n    \"issuerId\": \"WgWxqztrNooG92RXvxSTWv\",\n    \"schemaId\": \"WgWxqztrNooG92RXvxSTWv:2:schema_name:1.0\",\n    \"tag\": \"default\"\n  },\n  \"options\": {\n    \"create_transaction_for_endorser\": false,\n    \"endorser_connection_id\": \"3fa85f64-5717-4562-b3fc-2c963f66afa6\",\n    \"revocation_registry_size\": 1000,\n    \"support_revocation\": true\n  }\n}\n
  • options are not required; revocation will default to false
  • issuerId is the public did to be used on the ledger
  • schemaId is the schema id on the ledger
  • The payload responses have changed

Responses

Without endorsement:

{\n  \"sent\": {\n    \"credential_definition_id\": \"CZGamdZoKhxiifjbdx3GHH:3:CL:558:default\"\n  },\n  \"credential_definition_id\": \"CZGamdZoKhxiifjbdx3GHH:3:CL:558:default\"\n}\n

to

{\n  \"schema_state\": {\n    \"state\": \"finished\",\n    \"schema_id\": \"BpGaCdTwgEKoYWm6oPbnnj:2:simple:1.0\",\n    \"schema\": {\n      \"issuerId\": \"BpGaCdTwgEKoYWm6oPbnnj\",\n      \"attrNames\": [\"score\"],\n      \"name\": \"simple\",\n      \"version\": \"1.0\"\n    }\n  },\n  \"registration_metadata\": {},\n  \"schema_metadata\": {\n    \"seqNo\": 555\n  }\n}\n

With Endorsement:

{\n  \"sent\": {\n    \"credential_definition_id\": \"WgWxqztrNooG92RXvxSTWv:3:CL:20:tag\"\n  },\n  \"txn\": {...}\n}\n
{\n  \"job_id\": \"7082e58aa71d4817bb32c3778596b012\",\n  \"credential_definition_state\": {\n    \"state\": \"wait\",\n    \"credential_definition_id\": \"RbyPM1EP8fKCrf28YsC1qK:3:CL:547:default\",\n    \"credential_definition\": {\n      \"issuerId\": \"RbyPM1EP8fKCrf28YsC1qK\",\n      \"schemaId\": \"RbyPM1EP8fKCrf28YsC1qK:2:simple:1.1\",\n      \"type\": \"CL\",\n      \"tag\": \"default\",\n      \"value\": {\n        \"primary\": {...},\n        \"revocation\": {...}\n      }\n    }\n  },\n  \"registration_metadata\": {\n    \"txn\": {...}\n  },\n  \"credential_definition_metadata\": {}\n}\n
"},{"location":"features/AnoncredsControllerMigration/#getting-credential-definitions","title":"Getting credential definitions:","text":"
  • Change endpoint from GET /credential-definitions/created to GET /anoncreds/credential-definitions
  • Response payloads have no change
"},{"location":"features/AnoncredsControllerMigration/#getting-a-credential-definition","title":"Getting a credential definition:","text":"
  • Change endpoint from GET /credential-definitions/{cred_def_id} to GET /anoncreds/credential-definition/{cred_def_id}
  • Response payload changed from
{\n  \"credential_definition\": {\n    \"id\": \"WgWxqztrNooG92RXvxSTWv:3:CL:20:tag\",\n    \"schemaId\": \"20\",\n    \"tag\": \"tag\",\n    \"type\": \"CL\",\n    \"value\": {...},\n      \"revocation\": {...}\n    },\n    \"ver\": \"1.0\"\n  }\n}\n

to

{\n  \"credential_definition\": {\n    \"issuerId\": \"WgWxqztrNooG92RXvxSTWv\",\n    \"schemaId\": \"WgWxqztrNooG92RXvxSTWv:2:schema_name:1.0\",\n    \"tag\": \"default\",\n    \"type\": \"CL\",\n    \"value\": {...},\n      \"revocation\": {...}\n    }\n  },\n  \"credential_definition_id\": \"WgWxqztrNooG92RXvxSTWv:3:CL:20:tag\",\n  \"credential_definitions_metadata\": {},\n  \"resolution_metadata\": {}\n}\n
"},{"location":"features/AnoncredsControllerMigration/#revocation","title":"Revocation","text":"

Most of the changes with revocation endpoints only require prepending /anoncreds to the path. There are some other subtle changes listed below.

"},{"location":"features/AnoncredsControllerMigration/#create-and-publish-registry-definition","title":"Create and publish registry definition","text":"
  • The endpoints POST /revocation/create-registry and POST /revocation/registry/{rev_reg_id}/definition have been replaced by the single endpoint POST /anoncreds/revocation-registry-definition
  • Instead of creating the registry with POST /revocation/create-registry and payload
{\n  \"credential_definition_id\": \"WgWxqztrNooG92RXvxSTWv:3:CL:20:tag\",\n  \"max_cred_num\": 1000\n}\n
  • And then publishing with POST /revocation/registry/{rev_reg_id}/definition
params\n - conn_id\n - create_transaction_for_endorser\n
  • Use POST /anoncreds/revocation-registry-definition with payload
{\n  \"options\": {\n    \"create_transaction_for_endorser\": false,\n    \"endorser_connection_id\": \"3fa85f64-5717-4562-b3fc-2c963f66afa6\"\n  },\n  \"revocation_registry_definition\": {\n    \"credDefId\": \"WgWxqztrNooG92RXvxSTWv:2:schema_name:1.0\",\n    \"issuerId\": \"WgWxqztrNooG92RXvxSTWv\",\n    \"maxCredNum\": 777,\n    \"tag\": \"default\"\n  }\n}\n
  • options are not required
  • issuerId is the public did to be used on the ledger
  • credDefId is the cred def id on the ledger
  • The payload responses have changed

Responses

Without endorsement:

{\n  \"sent\": {\n    \"revocation_registry_id\": \"CZGamdZoKhxiifjbdx3GHH:4:CL:558:default\"\n  },\n  \"revocation_registry_id\": \"CZGamdZoKhxiifjbdx3GHH:4:CL:558:default\"\n}\n

to

{\n  \"revocation_registry_definition_state\": {\n    \"state\": \"finished\",\n    \"revocation_registry_definition_id\": \"BpGaCdTwgEKoYWm6oPbnnj:4:BpGaCdTwgEKoYWm6oPbnnj:3:CL:555:default:CL_ACCUM:default\",\n    \"revocation_registry_definition\": {\n      \"issuerId\": \"BpGaCdTwgEKoYWm6oPbnnj\",\n      \"revocDefType\": \"CL_ACCUM\",\n      \"credDefId\": \"BpGaCdTwgEKoYWm6oPbnnj:3:CL:555:default\",\n      \"tag\": \"default\",\n      \"value\": {...}\n    }\n  },\n  \"registration_metadata\": {},\n  \"revocation_registry_definition_metadata\": {\n    \"seqNo\": 569\n  }\n}\n

With endorsement:

{\n  \"sent\": {\n    \"result\": {\n      \"created_at\": \"2021-12-31T23:59:59Z\",\n      \"cred_def_id\": \"WgWxqztrNooG92RXvxSTWv:3:CL:20:tag\",\n      \"error_msg\": \"Revocation registry undefined\",\n      \"issuer_did\": \"WgWxqztrNooG92RXvxSTWv\",\n      \"max_cred_num\": 1000,\n      \"pending_pub\": [\n        \"23\"\n      ],\n      \"record_id\": \"3fa85f64-5717-4562-b3fc-2c963f66afa6\",\n      \"revoc_def_type\": \"CL_ACCUM\",\n      \"revoc_reg_def\": {\n        \"credDefId\": \"WgWxqztrNooG92RXvxSTWv:3:CL:20:tag\",\n        \"id\": \"WgWxqztrNooG92RXvxSTWv:4:WgWxqztrNooG92RXvxSTWv:3:CL:20:tag:CL_ACCUM:0\",\n        \"revocDefType\": \"CL_ACCUM\",\n        \"tag\": \"string\",\n        \"value\": {...},\n        \"ver\": \"1.0\"\n      },\n      \"revoc_reg_entry\": {...},\n      \"revoc_reg_id\": \"WgWxqztrNooG92RXvxSTWv:4:WgWxqztrNooG92RXvxSTWv:3:CL:20:tag:CL_ACCUM:0\",\n      \"state\": \"active\",\n      \"tag\": \"string\",\n      \"tails_hash\": \"H3C2AVvLMv6gmMNam3uVAjZpfkcJCwDwnZn6z3wXmqPV\",\n      \"tails_local_path\": \"string\",\n      \"tails_public_uri\": \"string\",\n      \"updated_at\": \"2021-12-31T23:59:59Z\"\n    }\n  },\n  \"txn\": {...}\n}\n

to

{\n  \"job_id\": \"25dac53a1fb84cb8a5bf1b4362fbca11\",\n  \"revocation_registry_definition_state\": {\n    \"state\": \"wait\",\n    \"revocation_registry_definition_id\": \"RbyPM1EP8fKCrf28YsC1qK:4:RbyPM1EP8fKCrf28YsC1qK:3:CL:547:default:CL_ACCUM:default\",\n    \"revocation_registry_definition\": {\n      \"issuerId\": \"RbyPM1EP8fKCrf28YsC1qK\",\n      \"revocDefType\": \"CL_ACCUM\",\n      \"credDefId\": \"RbyPM1EP8fKCrf28YsC1qK:3:CL:547:default\",\n      \"tag\": \"default\",\n      \"value\": {...}\n    }\n  },\n  \"registration_metadata\": {\n    \"txn\": {...}\n  },\n  \"revocation_registry_definition_metadata\": {}\n}\n
"},{"location":"features/AnoncredsControllerMigration/#send-revocation-entry-or-list-to-ledger","title":"Send revocation entry or list to ledger","text":"
  • Changes from POST /revocation/registry/{rev_reg_id}/entry to POST /anoncreds/revocation-list
  • Change from
params\n - conn_id\n - create_transaction_for_endorser\n

to

{\n  \"options\": {\n    \"create_transaction_for_endorser\": false,\n    \"endorser_connection_id\": \"3fa85f64-5717-4562-b3fc-2c963f66afa6\"\n  },\n  \"rev_reg_def_id\": \"WgWxqztrNooG92RXvxSTWv:4:WgWxqztrNooG92RXvxSTWv:3:CL:20:tag:CL_ACCUM:0\"\n}\n
  • options are not required
  • rev_reg_def_id is the revocation registry definition id on the ledger
  • The payload responses have changed

Responses

Without endorsement:

{\n  \"sent\": {\n    \"revocation_registry_id\": \"BpGaCdTwgEKoYWm6oPbnnj:4:BpGaCdTwgEKoYWm6oPbnnj:3:CL:555:default:CL_ACCUM:default\"\n  },\n  \"revocation_registry_id\": \"BpGaCdTwgEKoYWm6oPbnnj:4:BpGaCdTwgEKoYWm6oPbnnj:3:CL:555:default:CL_ACCUM:default\"\n}\n

to

\n
"},{"location":"features/AnoncredsControllerMigration/#get-current-active-registry","title":"Get current active registry:","text":"
  • Change from GET /revocation/active-registry/{cred_def_id} to GET /anoncreds/revocation/active-registry/{cred_def_id}
  • No payload changes
"},{"location":"features/AnoncredsControllerMigration/#rotate-active-registry","title":"Rotate active registry","text":"
  • Change from POST /revocation/active-registry/{cred_def_id}/rotate to POST /anoncreds/revocation/active-registry/{cred_def_id}/rotate
  • No payload changes
"},{"location":"features/AnoncredsControllerMigration/#get-credential-revocation-status","title":"Get credential revocation status","text":"
  • Change from GET /revocation/credential-record to GET /anoncreds/revocation/credential-record
  • No payload changes
"},{"location":"features/AnoncredsControllerMigration/#publish-revocations","title":"Publish revocations","text":"
  • Change from POST /revocation/publish-revocations to POST /anoncreds/revocation/publish-revocations
  • Change payload and parameters from
params\n - conn_id\n - create_transaction_for_endorser\n
{\n  \"rrid2crid\": {\n    \"additionalProp1\": [\"12345\"],\n    \"additionalProp2\": [\"12345\"],\n    \"additionalProp3\": [\"12345\"]\n  }\n}\n

to

{\n  \"options\": {\n    \"create_transaction_for_endorser\": false,\n    \"endorser_connection_id\": \"3fa85f64-5717-4562-b3fc-2c963f66afa6\"\n  },\n  \"rrid2crid\": {\n    \"additionalProp1\": [\"12345\"],\n    \"additionalProp2\": [\"12345\"],\n    \"additionalProp3\": [\"12345\"]\n  }\n}\n
  • options are not required
"},{"location":"features/AnoncredsControllerMigration/#get-registries","title":"Get registries","text":"
  • Change from GET /revocation/registries/created to GET /anoncreds/revocation/registries
  • No payload changes
"},{"location":"features/AnoncredsControllerMigration/#get-registry","title":"Get registry","text":"
  • Changes from GET /revocation/registry/{rev_reg_id} to GET /anoncreds/revocation/registry/{rev_reg_id}
  • No payload changes
"},{"location":"features/AnoncredsControllerMigration/#fix-reocation-state","title":"Fix reocation state","text":"
  • Changes from POST /revocation/registry/{rev_reg_id}/fix-revocation-entry-state to POST /anoncreds/revocation/registry/{rev_reg_id}/fix-revocation-state
  • No payload changes
"},{"location":"features/AnoncredsControllerMigration/#get-number-of-issued-credentials","title":"Get number of issued credentials","text":"
  • Changes from GET /revocation/registry/{rev_reg_id}/issued to GET /anoncreds/revocation/registry/{rev_reg_id}/issued
  • No payload changes
"},{"location":"features/AnoncredsControllerMigration/#get-credential-details","title":"Get credential details","text":"
  • Changes from GET /revocation/registry/{rev_reg_id}/issued/details to GET /anoncreds/revocation/registry/{rev_reg_id}/issued/details
  • No payload changes
"},{"location":"features/AnoncredsControllerMigration/#get-revoked-credential-details","title":"Get revoked credential details","text":"
  • Changes from GET /revocation/registry/{rev_reg_id}/issued/indy_recs to GET /anoncreds/revocation/registry/{rev_reg_id}/issued/indy_recs
  • No payload changes
"},{"location":"features/AnoncredsControllerMigration/#set-state-manually","title":"Set state manually","text":"
  • Changes from PATCH /revocation/registry/{rev_reg_id}/set-state to PATCH /anoncreds/revocation/registry/{rev_reg_id}/set-state
  • No payload changes
"},{"location":"features/AnoncredsControllerMigration/#upload-tails-file","title":"Upload tails file","text":"
  • Changes from PUT /revocation/registry/{rev_reg_id}/tails-file to PUT /anoncreds/registry/{rev_reg_id}/tails-file
  • No payload changes
"},{"location":"features/AnoncredsControllerMigration/#download-tails-file","title":"Download tails file","text":"
  • Changes from GET /revocation/registry/{rev_reg_id}/tails-file to GET /anoncreds/revocation/registry/{rev_reg_id}/tails-file
  • No payload changes
"},{"location":"features/AnoncredsControllerMigration/#revoke-a-credential","title":"Revoke a credential","text":"
  • Changes from POST /revocation/revoke to POST /anoncreds/revocation/revoke
  • Change payload and parameters from
"},{"location":"features/AnoncredsControllerMigration/#clear-pending-revocations","title":"Clear pending revocations","text":"
  • POST /revocation/clear-pending-revocations has been removed.
"},{"location":"features/AnoncredsControllerMigration/#delete-tails-file","title":"Delete tails file","text":"
  • Endpoint DELETE /revocation/delete-tails-server has been removed
"},{"location":"features/AnoncredsControllerMigration/#update-tails-file","title":"Update tails file","text":"
  • Endpoint PATCH /revocation/registry/{rev_reg_id} has been removed
"},{"location":"features/AnoncredsControllerMigration/#additional-endpoints","title":"Additional Endpoints","text":"
  • PUT /anoncreds/registry/{rev_reg_id}/active is available to set the active registry
"},{"location":"features/AnoncredsControllerMigration/#step-2-trigger-the-upgrade-via-the-upgrade-endpoint","title":"Step 2 - Trigger the upgrade via the upgrade endpoint","text":"

The upgrade endpoint is at POST /anoncreds/wallet/upgrade.

You need to be careful doing this, as there is no way to downgrade the wallet. It is highly recommended to back up any wallets and to test the upgrade in a development environment before upgrading a production wallet.

Params: wallet_name is the name of the wallet to upgrade. Used to prevent accidental upgrades.
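
For example, triggering the upgrade from a controller is a single call; a sketch (the admin URL and wallet name are assumptions):

import requests\n\nADMIN_URL = \"http://localhost:8031\"  # assumption: local admin server\n\nresp = requests.post(\n    f\"{ADMIN_URL}/anoncreds/wallet/upgrade\",\n    params={\"wallet_name\": \"my_wallet\"},  # hypothetical wallet name\n)\nresp.raise_for_status()\n# Expect 503 responses from other endpoints until the upgrade completes.\n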

The behavior for a base wallet (standalone) or admin wallet in multitenant mode is slightly different from the behavior of a subwallet (or tenant) in multitenancy mode. However, the upgrade process is the same.

  1. Back up the wallet
  2. Scale down any controller instances on old endpoints
  3. Call the upgrade endpoint
  4. Scale up the controller instances to handle new endpoints
"},{"location":"features/AnoncredsControllerMigration/#base-wallet-standalone-or-admin-wallet-in-multitenant-mode","title":"Base wallet (standalone) or admin wallet in multitenant mode:","text":"

The agent will get a 503 error during the upgrade process. Any agent instance will shut down when the upgrade is complete; it is up to the deployment to start the aca-py agent again. After the upgrade is complete, the old endpoints will no longer be available and will result in a 400 error.

The aca-py agent will work after the restart. However, it will log a warning about having the wrong wallet type configured. It is recommended to change the wallet-type to askar-anoncreds in the agent configuration file or start-up command.

"},{"location":"features/AnoncredsControllerMigration/#subwallet-tenant-in-multitenancy-mode","title":"Subwallet (tenant) in multitenancy mode:","text":"

The sub-tenant being upgraded will get a 503 error during the upgrade process. All other sub-tenants will continue to operate normally. After the upgrade is complete, the sub-tenant will be able to use the new endpoints; the old endpoints will no longer be available and will result in a 403 error. Any aca-py agents will remain running after the upgrade, and it is not required that they restart.

"},{"location":"features/AnoncredsProofValidation/","title":"Anoncreds Proof Validation in ACA-Py","text":"

ACA-Py performs pre-validation when verifying Anoncreds presentations (proofs). Some scenarios are rejected (such as those indicative of tampering), while some attributes are removed before running the anoncreds validation (e.g., removing superfluous non-revocation timestamps). Any ACA-Py validations or presentation modifications are indicated by the \"verify_msgs\" attribute in the final presentation exchange object.

The list of possible verification messages can be found here, and consists of:

class PresVerifyMsg(str, Enum):\n    \"\"\"Credential verification codes.\"\"\"\n\n    RMV_REFERENT_NON_REVOC_INTERVAL = \"RMV_RFNT_NRI\"\n    RMV_GLOBAL_NON_REVOC_INTERVAL = \"RMV_GLB_NRI\"\n    TSTMP_OUT_NON_REVOC_INTRVAL = \"TS_OUT_NRI\"\n    CT_UNREVEALED_ATTRIBUTES = \"UNRVL_ATTR\"\n    PRES_VALUE_ERROR = \"VALUE_ERROR\"\n    PRES_VERIFY_ERROR = \"VERIFY_ERROR\"\n

If there is additional information, it will be included like this: TS_OUT_NRI::19_uuid, which means the attribute identified by 19_uuid contained a timestamp outside of the non-revocation interval (this is just a warning).

A presentation verification may include multiple messages, for example:

    ...\n    \"verified\": \"true\",\n    \"verified_msgs\": [\n        \"TS_OUT_NRI::18_uuid\",\n        \"TS_OUT_NRI::18_id_GE_uuid\",\n        \"TS_OUT_NRI::18_busid_GE_uuid\"\n    ],\n    ...\n

... or it may include a single message, for example:

    ...\n    \"verified\": \"false\",\n    \"verified_msgs\": [\n        \"VALUE_ERROR::Encoded representation mismatch for 'Preferred Name'\"\n    ],\n    ...\n

... or the verified_msgs may be null or an empty array.

"},{"location":"features/AnoncredsProofValidation/#presentation-modifications-and-warnings","title":"Presentation Modifications and Warnings","text":"

The following modifications/warnings may be made by ACA-Py, which shouldn't affect the verification of the received proof:

  • \"RMV_RFNT_NRI\": Referent contains a non-revocation interval for a non-revocable credential (timestamp is removed)
  • \"RMV_GLB_NRI\": Presentation contains a global interval for a non-revocable credential (timestamp is removed)
  • \"TS_OUT_NRI\": Presentation contains a non-revocation timestamp outside of the requested non-revocation interval (warning)
  • \"UNRVL_ATTR\": Presentation contains attributes with unrevealed values (warning)
"},{"location":"features/AnoncredsProofValidation/#presentation-pre-validation-errors","title":"Presentation Pre-validation Errors","text":"

The following pre-verification checks are performed; a failure causes the proof to fail (before anoncreds validation is called) and results in the following message:

VALUE_ERROR::<description of the failed validation>\n

These validations are all performed within the Indy verifier class - to see the detailed validation, look for any occurrences of raise ValueError(...) in the code.

A summary of the possible errors includes:

  • Information missing in presentation exchange record
  • Timestamp provided for irrevocable credential
  • Referenced revocation registry not found on ledger
  • Timestamp outside of reasonable range (future date or pre-dates revocation registry)
  • Mismatch between provided and requested timestamps for non-revocation
  • Mismatch between requested and provided attributes or predicates
  • Self-attested attribute provided for a requested attribute with restrictions
  • Encoded value doesn't match raw value
"},{"location":"features/AnoncredsProofValidation/#anoncreds-verification-exceptions","title":"Anoncreds Verification Exceptions","text":"

Typically, when you call the anoncreds verifier_verify_proof() method, it will return a True or False based on whether the presentation cryptographically verifies. However, in the case where anoncreds throws an exception, the exception text will be included in a verification message as follows:

VERIFY_ERROR::<the exception text>\n
"},{"location":"features/DIDMethods/","title":"DID Methods in ACA-Py","text":"

Decentralized Identifiers, or DIDs, are URIs that point to documents that describe cryptographic primitives and protocols used in decentralized identity management. DIDs include methods that describe where and how documents can be retrieved. DID methods support specific types of keys and may or may not require the holder to specify the DID itself.

ACA-Py provides a DIDMethods registry holding all of the DID methods supported for storage in a wallet.

Askar and InMemory are the only wallets supporting this registry.

"},{"location":"features/DIDMethods/#registering-a-did-method","title":"Registering a DID method","text":"

By default, ACA-Py supports did:key and did:sov. Plugins can register additional DID methods to make them available to holders. Here's a snippet, from a plugin setup method, adding support for did:web to the registry.

from aries_cloudagent.config.injection_context import InjectionContext\nfrom aries_cloudagent.wallet.did_method import DIDMethod, DIDMethods, HolderDefinedDid\nfrom aries_cloudagent.wallet.key_type import BLS12381G2, ED25519\n\nWEB = DIDMethod(\n    name=\"web\",\n    key_types=[ED25519, BLS12381G2],\n    rotation=True,\n    holder_defined_did=HolderDefinedDid.REQUIRED  # did:web is not derived from key material but from a user-provided repository name\n)\n\nasync def setup(context: InjectionContext):\n    methods = context.inject(DIDMethods)\n    methods.register(WEB)\n
"},{"location":"features/DIDMethods/#creating-a-did","title":"Creating a DID","text":"

POST /wallet/did/create can be provided with parameters for any registered DID method. Here's a follow-up to the did:web method example:

{\n    \"method\": \"web\",\n    \"options\": {\n        \"did\": \"did:web:doma.in\",\n        \"key_type\": \"ed25519\"\n    }\n}\n
"},{"location":"features/DIDMethods/#resolving-dids","title":"Resolving DIDs","text":"

For specifics on how DIDs are resolved in ACA-Py, see: DID Resolution.

"},{"location":"features/DIDResolution/","title":"DID Resolution in ACA-Py","text":"

Decentralized Identifiers, or DIDs, are URIs that point to documents that describe cryptographic primitives and protocols used in decentralized identity management. DIDs include methods that describe where and how documents can be retrieved. DID resolution is the process of \"resolving\" a DID Document from a DID as dictated by the DID method.

A DID Resolver is a piece of software that implements the methods for resolving a document from a DID.

For example, given the DID did:example:1234abcd, a DID Resolver that supports did:example might return:

{\n \"@context\": \"https://www.w3.org/ns/did/v1\",\n \"id\": \"did:example:1234abcd\",\n \"verificationMethod\": [{\n  \"id\": \"did:example:1234abcd#keys-1\",\n  \"type\": \"Ed25519VerificationKey2018\",\n  \"controller\": \"did:example:1234abcd\",\n  \"publicKeyBase58\": \"H3C2AVvLMv6gmMNam3uVAjZpfkcJCwDwnZn6z3wXmqPV\"\n }],\n \"service\": [{\n  \"id\": \"did:example:1234abcd#did-communication\",\n  \"type\": \"did-communication\",\n  \"serviceEndpoint\": \"https://agent.example.com/8377464\"\n }]\n}\n

For more details on DIDs and DID Resolution, see the W3C DID Specification.

In practice, DIDs and DID Documents are used for a variety of purposes but especially to help establish connections between Agents and verify credentials.

"},{"location":"features/DIDResolution/#didresolver","title":"DIDResolver","text":"

In ACA-Py, the DIDResolver provides the interface to resolve DIDs using registered method resolvers. Method resolver registration happens on startup in a did_resolvers list. This registry enables additional resolvers to be loaded via plugin.

"},{"location":"features/DIDResolution/#example-usage","title":"Example usage","text":"
class ExampleMessageHandler:\n    async def handle(self, context: RequestContext, responder: BaseResponder):\n        \"\"\"Handle example message.\"\"\"\n        resolver = context.inject(DIDResolver)\n\n        doc: dict = await resolver.resolve(\"did:example:123\")\n        assert doc[\"id\"] == \"did:example:123\"\n\n        verification_method = await resolver.dereference(\"did:example:123#keys-1\")\n\n        # ...\n
"},{"location":"features/DIDResolution/#method-resolver-selection","title":"Method Resolver Selection","text":"

On DIDResolver.resolve or DIDResolver.dereference, the resolver interface will select the most appropriate method resolver to handle the given DID. In this selection process, method resolvers are distinguished from each other by:

  • Type. The resolver's type falls into one of two categories: native or non-native. A \"native\" resolver will perform all resolution steps directly. A \"non-native\" resolver delegates all or part of resolution to another service or entity.
  • Self-reported supported DIDs. Each method resolver implements a supports method or a supported_did_regex method. These methods are used to determine whether the given DID can be handled by the method resolver.

The selection algorithm roughly follows these steps (restated as a code sketch after the list):

  1. Filter out all resolvers where resolver.supports(did) returns false.
  2. Partition remaining resolvers by type with all native resolvers followed by non-native resolvers (registration order preserved within partitions).
  3. For each resolver in the resulting list, attempt to resolve the DID and return the first successful result.
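
A simplified sketch of those steps; the supports/native attribute names are stand-ins for the real interface, not ACA-Py's actual implementation:

class DIDNotFound(Exception):\n    \"\"\"Stand-in for ACA-Py's resolver \"not found\" error.\"\"\"\n\ndef rank_resolvers(resolvers, did):\n    # Step 1: keep only resolvers that report support for this DID\n    supported = [r for r in resolvers if r.supports(did)]\n    # Step 2: native resolvers first; registration order preserved\n    native = [r for r in supported if r.native]\n    non_native = [r for r in supported if not r.native]\n    return native + non_native\n\nasync def resolve(resolvers, did):\n    # Step 3: the first successful resolution wins\n    for resolver in rank_resolvers(resolvers, did):\n        try:\n            return await resolver.resolve(did)\n        except DIDNotFound:\n            continue\n    raise DIDNotFound(did)\n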
"},{"location":"features/DIDResolution/#resolver-plugins","title":"Resolver Plugins","text":"

Extending ACA-Py with additional Method Resolvers should be relatively simple. Supposing that you want to resolve DIDs for the did:cool method, this should be as simple as installing a method resolver into your python environment and loading the resolver on startup. If no method resolver exists yet for did:cool, writing your own should require minimal overhead.

"},{"location":"features/DIDResolution/#writing-a-resolver-plugin","title":"Writing a resolver plugin","text":"

Method resolver plugins are composed of two primary pieces: plugin injection and resolution logic. The resolution logic dictates how a DID becomes a DID Document, following the given DID Method Specification. This logic is implemented using the BaseDIDResolver class as the base. BaseDIDResolver is an abstract base class that defines the interface that the core DIDResolver expects for Method resolvers.

The following is an example method resolver implementation. In this example, we have 2 files, one for each piece (injection and resolution). The __init__.py will be in charge of injecting the plugin, and example_resolver.py will have the logic implementation to resolve for a fabricated did:example method.

"},{"location":"features/DIDResolution/#__init-__py","title":"__init __.py","text":"

from aries_cloudagent.config.injection_context import InjectionContext\nfrom ..resolver.did_resolver import DIDResolver\n\nfrom .example_resolver import ExampleResolver\n\nasync def setup(context: InjectionContext):\n    \"\"\"Setup the plugin.\"\"\"\n    registry = context.inject(DIDResolver)\n    resolver = ExampleResolver()\n    await resolver.setup(context)\n    registry.append(resolver)\n
"},{"location":"features/DIDResolution/#example_resolverpy","title":"example_resolver.py","text":"

import re\nfrom typing import Pattern\n\nfrom aries_cloudagent.core.profile import Profile\nfrom aries_cloudagent.resolver.base import BaseDIDResolver, DIDNotFound, ResolverType\n\nclass ExampleResolver(BaseDIDResolver):\n    \"\"\"ExampleResolver class.\"\"\"\n\n    def __init__(self):\n        super().__init__(ResolverType.NATIVE)\n        # Alternatively, ResolverType.NON_NATIVE\n        self._supported_did_regex = re.compile(\"^did:example:.*$\")\n\n    @property\n    def supported_did_regex(self) -> Pattern:\n        \"\"\"Return compiled regex matching supported DIDs.\"\"\"\n        return self._supported_did_regex\n\n    async def setup(self, context):\n        \"\"\"Setup the example resolver (none required).\"\"\"\n\n    async def _resolve(self, profile: Profile, did: str) -> dict:\n        \"\"\"Resolve example DIDs.\"\"\"\n        if did != \"did:example:1234abcd\":\n            raise DIDNotFound(\n                \"We only actually resolve did:example:1234abcd. Sorry!\"\n            )\n\n        return {\n            \"@context\": \"https://www.w3.org/ns/did/v1\",\n            \"id\": \"did:example:1234abcd\",\n            \"verificationMethod\": [{\n                \"id\": \"did:example:1234abcd#keys-1\",\n                \"type\": \"Ed25519VerificationKey2018\",\n                \"controller\": \"did:example:1234abcd\",\n                \"publicKeyBase58\": \"H3C2AVvLMv6gmMNam3uVAjZpfkcJCwDwnZn6z3wXmqPV\"\n            }],\n            \"service\": [{\n                \"id\": \"did:example:1234abcd#did-communication\",\n                \"type\": \"did-communication\",\n                \"serviceEndpoint\": \"https://agent.example.com/\"\n            }]\n        }\n

"},{"location":"features/DIDResolution/#errors","title":"Errors","text":"

There are three different errors associated with resolution in ACA-Py that can be used for development purposes.

  • ResolverError
  • Base class for resolver exceptions.
  • DIDNotFound
  • Raised when DID is not found using DID method specific algorithm.
  • DIDMethodNotSupported
  • Raised when no resolver is registered for a given did method.
"},{"location":"features/DIDResolution/#using-resolver-plugins","title":"Using Resolver Plugins","text":"

In this section, the Github Resolver Plugin found here will be used as an example plugin to work with. This resolver resolves did:github DIDs.

The resolution algorithm is simple: for the github DID did:github:dbluhm, the method specific identifier dbluhm (a GitHub username) is used to look up an index.jsonld file in the ghdid repository in that GitHub user's profile. See the GitHub DID Method Specification for more details.

To use this plugin, first install it into your project's python environment:

pip install git+https://github.com/dbluhm/acapy-resolver-github\n

Then, invoke ACA-Py as you normally do with the addition of:

$ aca-py start \\\n    --plugin acapy_resolver_github \\\n    # ... the remainder of your startup arguments\n

Or add the following to your configuration file:

plugin:\n  - acapy_resolver_github\n

The following is a fully functional Dockerfile encapsulating this setup:

FROM ghcr.io/hyperledger/aries-cloudagent-python:py3.9-0.12.1\nRUN pip3 install git+https://github.com/dbluhm/acapy-resolver-github\n\nCMD [\"aca-py\", \"start\", \"-it\", \"http\", \"0.0.0.0\", \"3000\", \"-ot\", \"http\", \"-e\", \"http://localhost:3000\", \"--admin\", \"0.0.0.0\", \"3001\", \"--admin-insecure-mode\", \"--no-ledger\", \"--plugin\", \"acapy_resolver_github\"]\n

To use the above Dockerfile:

docker build -t resolver-example .\ndocker run --rm -it -p 3000:3000 -p 3001:3001 resolver-example\n

"},{"location":"features/DIDResolution/#directory-of-resolver-plugins","title":"Directory of Resolver Plugins","text":"
  • Github Resolver
  • Universal Resolver
  • DIDComm Resolver
"},{"location":"features/DIDResolution/#references","title":"References","text":"

  • https://www.w3.org/TR/did-core/
  • https://w3c-ccg.github.io/did-resolution/

"},{"location":"features/DevReadMe/","title":"Developer's Read Me for Hyperledger Aries Cloud Agent - Python","text":"

See the README for details about this repository and information about how the Aries Cloud Agent - Python fits into the Aries project and relates to Indy.

"},{"location":"features/DevReadMe/#table-of-contents","title":"Table of Contents","text":"
  • Introduction
  • Developer Demos
  • Running
  • Configuring ACA-PY: Command Line Parameters
  • Docker
  • Locally Installed
  • About ACA-Py Command Line Parameters
  • Provisioning Secure Storage
  • Mediation
  • Multi-tenancy
  • JSON-LD Credentials
  • Developing
  • Prerequisites
  • Running In A Dev Container
  • Running Locally
  • Logging
  • Running Tests
  • Running Aries Agent Test Harness Tests
  • Development Workflow
  • Publishing Releases
  • Dynamic Injection of Services
"},{"location":"features/DevReadMe/#introduction","title":"Introduction","text":"

Aries Cloud Agent Python (ACA-Py) is a configurable, extensible, non-mobile Aries agent that implements an easy way for developers to build decentralized identity services that use verifiable credentials.

The information on this page assumes you are a developer with a background in decentralized identity, Aries, DID Methods, and verifiable credentials, especially AnonCreds. If you aren't familiar with those concepts and projects, please use our Getting Started Guide to learn more.

"},{"location":"features/DevReadMe/#developer-demos","title":"Developer Demos","text":"

To put ACA-Py through its paces at the command line, check out our demos page.

"},{"location":"features/DevReadMe/#running","title":"Running","text":""},{"location":"features/DevReadMe/#configuring-aca-py-command-line-parameters","title":"Configuring ACA-PY: Command Line Parameters","text":"

ACA-Py agent instances are configured through the use of command line parameters, environment variables and/or YAML files. All of the configuration settings can be managed using any combination of the three methods (command line parameters override environment variables, which override YAML). Use the --help option to discover the available command line parameters. There are a lot of them--for good and bad.

"},{"location":"features/DevReadMe/#docker","title":"Docker","text":"

To run a docker container based on the code in the current repo, use the following commands from the root folder of the repository to check the version, list the available modes of operation, and see all of the command line parameters:

scripts/run_docker --version\nscripts/run_docker --help\nscripts/run_docker provision --help\nscripts/run_docker start --help\n
"},{"location":"features/DevReadMe/#locally-installed","title":"Locally Installed","text":"

If you installed the PyPi package, the executable aca-py should be available on your PATH.

Use the following commands from the root folder of the repository to check the version, list the available modes of operation, and see all of the command line parameters:

aca-py --version\naca-py --help\naca-py provision --help\naca-py start --help\n

If you get an error about a missing module indy (e.g. ModuleNotFoundError: No module named 'indy') when running aca-py, you will need to install the Indy libraries from the command line:

pip install python3_indy\n

Once that completes successfully, you should be able to run aca-py --version and the other examples above.

"},{"location":"features/DevReadMe/#about-aca-py-command-line-parameters","title":"About ACA-Py Command Line Parameters","text":"

ACA-Py invocations are separated into two types - initially provisioning an agent (provision) and starting a new agent process (start). This separation means that encryption-related parameters required only for provisioning do not have to be passed in when starting an agent instance. This improves security in production deployments.

When starting an agent instance, at least one inbound and one outbound transport MUST be specified.

For example:

aca-py start    --inbound-transport http 0.0.0.0 8000 \\\n                --outbound-transport http\n

or

aca-py start    --inbound-transport http 0.0.0.0 8000 \\\n                --inbound-transport ws 0.0.0.0 8001 \\\n                --outbound-transport ws \\\n                --outbound-transport http\n

ACA-Py ships with both inbound and outbound transport drivers for http and ws (websockets). Additional transport drivers can be added as pluggable implementations. See the existing implementations in the transports module for getting started on adding a new transport.

Most configuration parameters are provided to the agent at startup. Refer to the Running sections above for details on listing the available command line parameters.

"},{"location":"features/DevReadMe/#provisioning-secure-storage","title":"Provisioning Secure Storage","text":"

It is possible to provision a secure storage (sometimes called a wallet--but not the same as a mobile wallet app) before running an agent to avoid passing in the secure storage seed on every invocation of an agent (e.g. on every aca-py start ...).

aca-py provision --wallet-type askar --seed $SEED\n

For additional provision options, execute aca-py provision --help.

Additional information about secure storage options and configuration settings can be found here.

"},{"location":"features/DevReadMe/#mediation","title":"Mediation","text":"

ACA-Py can also run in mediator mode - ACA-Py can be run as a mediator (it can mediate connections for other agents), or it can connect to an external mediator to mediate its own connections. See the docs on mediation for more info.

"},{"location":"features/DevReadMe/#multi-tenancy","title":"Multi-tenancy","text":"

ACA-Py can also be started in multi-tenant mode. This allows the agent to serve multiple tenants, each with their own wallet. See the docs on multi-tenancy for more info.

"},{"location":"features/DevReadMe/#json-ld-credentials","title":"JSON-LD Credentials","text":"

ACA-Py can issue W3C Verifiable Credentials using Linked Data Proofs. See the docs on JSON-LD Credentials for more info.

"},{"location":"features/DevReadMe/#developing","title":"Developing","text":""},{"location":"features/DevReadMe/#prerequisites","title":"Prerequisites","text":"

Docker must be installed to run software locally and to run the test suite.

"},{"location":"features/DevReadMe/#running-in-a-dev-container","title":"Running In A Dev Container","text":"

The dev container environment is a great way to deploy agents quickly with code changes and an interactive debug session. Detailed information can be found in the Docs On Devcontainers. It is specific to VS Code, so if you prefer another code editor or IDE you will need to figure it out on your own, but it is highly recommended that you give this a try.

One thing to be aware of: unlike the demo, none of the steps are automated. You will need to create public DIDs, establish connections, and perform all the other steps yourself. Running the demo, studying the flow, and then reproducing it in your dev container debug session is a great way to learn how everything works.

"},{"location":"features/DevReadMe/#running-locally","title":"Running Locally","text":"

Another way to develop locally is by using the provided Docker scripts to run the ACA-Py software.

./scripts/run_docker start <args>\n

For example:

./scripts/run_docker start --inbound-transport http 0.0.0.0 10000 --outbound-transport http --debug --log-level DEBUG\n

To enable the ptvsd Python debugger for Visual Studio/VSCode use the --debug command line parameter.

Any ports you will be using from the docker container should be published using the PORTS environment variable. For example:

PORTS=\"5000:5000 8000:8000 10000:10000\" ./scripts/run_docker start --inbound-transport http 0.0.0.0 10000 --outbound-transport http --debug --log-level DEBUG\n

Refer to the previous section for instructions on how to run ACA-Py.

"},{"location":"features/DevReadMe/#logging","title":"Logging","text":"

You can find more details about logging and log levels here.

"},{"location":"features/DevReadMe/#running-tests","title":"Running Tests","text":"

To run the ACA-Py test suite, use the following script:

./scripts/run_tests\n

To run the ACA-Py test suite with ptvsd debugger enabled:

./scripts/run_tests --debug\n

To run specific tests pass parameters as defined by pytest:

./scripts/run_tests aries_cloudagent/protocols/connections\n

To run the tests including Indy SDK and related dependencies, run the script:

./scripts/run_tests_indy\n
"},{"location":"features/DevReadMe/#running-aries-agent-test-harness-tests","title":"Running Aries Agent Test Harness Tests","text":"

You can run a full suite of integration tests using the Aries Agent Test Harness (AATH).

Check out and run AATH tests as follows (this tests the aca-py main branch):

git clone https://github.com/hyperledger/aries-agent-test-harness.git\ncd aries-agent-test-harness\n./manage build -a acapy-main\n./manage run -d acapy-main -t @AcceptanceTest -t ~@wip\n

The manage script is described in detail here, including how to modify the AATH code to run the tests against your aca-py repo/branch.

"},{"location":"features/DevReadMe/#development-workflow","title":"Development Workflow","text":"

We use Ruff to enforce a coding style guide.

We use Black to automatically format code.

Please write tests for the work that you submit.

Tests should reside in a directory named tests alongside the code under test. Generally, there is one test file for each module under test. Test files must have a name starting with test_ to be automatically picked up by the test runner.

There are some good examples of various test scenarios for you to work from, including mocking external imports and working with async code, so take a look around!

The test suite also displays the current code coverage after each run so you can see how much of your work is covered by tests. Use your best judgement for how much coverage is sufficient.

Please also refer to the contributing guidelines and code of conduct.

"},{"location":"features/DevReadMe/#publishing-releases","title":"Publishing Releases","text":"

The publishing document provides information on tagging a release and publishing the release artifacts to PyPI.

"},{"location":"features/DevReadMe/#dynamic-injection-of-services","title":"Dynamic Injection of Services","text":"

The Agent employs a dynamic injection system whereby providers of base classes are registered with the RequestContext instance, currently within conductor.py. Message handlers and services request an instance of the selected implementation using context.inject(BaseClass); for instance the wallet instance may be injected using wallet = context.inject(BaseWallet). The inject method normally throws an exception if no implementation of the base class is provided, but can be called with required=False for optional dependencies (in which case a value of None may be returned).

Providers are registered with either context.injector.bind_instance(BaseClass, instance) for previously-constructed (singleton) object instances, or context.injector.bind_provider(BaseClass, provider) for dynamic providers. In some cases it may be desirable to write a custom provider which switches implementations based on configuration settings, such as the wallet provider.

The BaseProvider classes in the config.provider module include ClassProvider, which can perform dynamic module inclusion when given the combined module and class name as a string (for instance aries_cloudagent.wallet.indy.IndyWallet). ClassProvider accepts additional positional and keyword arguments to be passed into the class constructor. Any of these arguments may be an instance of ClassProvider.Inject(BaseClass), allowing dynamic injection of dependencies when the class instance is instantiated.
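
As a minimal sketch of these patterns (the bind_wallet and handle functions here are hypothetical; the inject, bind_instance, bind_provider, and ClassProvider calls follow the descriptions above):

from aries_cloudagent.config.provider import ClassProvider\nfrom aries_cloudagent.wallet.base import BaseWallet\n\ndef bind_wallet(context, wallet_instance):\n    # Register a previously-constructed (singleton) instance ...\n    context.injector.bind_instance(BaseWallet, wallet_instance)\n    # ... or a dynamic provider, resolved by combined module and class name\n    context.injector.bind_provider(\n        BaseWallet,\n        ClassProvider(\"aries_cloudagent.wallet.indy.IndyWallet\"),\n    )\n\nasync def handle(context):\n    # Required dependency: raises an exception if no implementation is bound\n    wallet = context.inject(BaseWallet)\n    # Optional dependency: returns None instead of raising\n    maybe_wallet = context.inject(BaseWallet, required=False)\n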

"},{"location":"features/Endorser/","title":"Transaction Endorser Support","text":"

ACA-Py supports an Endorser Protocol that allows an unprivileged agent (an \"Author\") to request that another agent (the \"Endorser\") sign their transactions so they can write those transactions to the ledger. This is required on Indy ledgers, where new agents will typically be granted only \"Author\" privileges.

Transaction Endorsement is built into the protocols for Schema, Credential Definition and Revocation, and endorsements can be explicitly requested, or ACA-Py can be configured to automate the endorsement workflow.

"},{"location":"features/Endorser/#setting-up-connections-between-authors-and-endorsers","title":"Setting up Connections between Authors and Endorsers","text":"

Since endorsement involves message exchange between two agents, these agents must establish and configure a connection before any endorsements can be provided or requested.

Once the connection is established and active, the \"role\" (either Author or Endorser) is attached to the connection using the /transactions/{conn_id}/set-endorser-role endpoint. Authors must additionally configure the DID of the Endorser, as this is required when the Author signs the transaction (prior to sending it to the Endorser for endorsement) - this is done using the /transactions/{conn_id}/set-endorser-info endpoint.
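
For example, the Author-side setup could look like the following (a sketch using Python requests; the admin URL, connection id, and Endorser DID are illustrative, and the x-api-key header is only needed if the admin API is protected):

import requests\n\nADMIN_URL = \"http://localhost:8031\"\nHEADERS = {\"x-api-key\": \"<admin api key>\"}\nconn_id = \"<endorser connection id>\"\n\n# Attach the Author role to the connection\nrequests.post(\n    f\"{ADMIN_URL}/transactions/{conn_id}/set-endorser-role\",\n    params={\"transaction_my_job\": \"TRANSACTION_AUTHOR\"},\n    headers=HEADERS,\n)\n\n# Authors must also record the Endorser's DID for this connection\nrequests.post(\n    f\"{ADMIN_URL}/transactions/{conn_id}/set-endorser-info\",\n    params={\"endorser_did\": \"<endorser public DID>\"},\n    headers=HEADERS,\n)\n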

"},{"location":"features/Endorser/#requesting-transaction-endorsement","title":"Requesting Transaction Endorsement","text":"

Transaction Endorsement is built into the protocols for Schema, Credential Definition and Revocation. When executing one of the endpoints that will trigger a ledger write, an endorsement protocol can be explicitly requested by specifying the connection_id (of the Endorser connection) and create_transaction_for_endorser.

(Note that endorsement requests can be automated, see the section on \"Configuring ACA-Py\" below.)

If transaction endorsement is requested, then ACA-Py will create a transaction record (this will be returned by the endpoint, rather than the Schema, Cred Def, etc) and the following endpoints must be invoked:

  • Request Endorsement (Author): /transactions/create-request
  • Endorse Transaction (Endorser): /transactions/{tran_id}/endorse
  • Write Transaction (Author): /transactions/{tran_id}/write

Additional endpoints allow the Endorser to reject the endorsement request, or for the Author to re-submit or cancel a request.

Webhooks will be triggered to notify each ACA-Py agent of any transaction requests, endorsements, etc. to allow the controller to react to the event, or the process can be automated via command-line parameters (see below).
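
As an illustration, explicitly requesting endorsement when creating a schema could look like this (a sketch; the admin URL and ids are illustrative, and the response shape should be confirmed against your ACA-Py version):

import requests\n\nADMIN_URL = \"http://localhost:8031\"\n\n# Creating a schema with endorsement requested returns a transaction record\nresp = requests.post(\n    f\"{ADMIN_URL}/schemas\",\n    params={\n        \"conn_id\": \"<endorser connection id>\",\n        \"create_transaction_for_endorser\": \"true\",\n    },\n    json={\n        \"schema_name\": \"degree\",\n        \"schema_version\": \"1.0\",\n        \"attributes\": [\"name\", \"date\", \"degree\"],\n    },\n)\ntxn_id = resp.json()[\"txn\"][\"transaction_id\"]\n\n# The Author then sends the endorsement request to the Endorser\nrequests.post(\n    f\"{ADMIN_URL}/transactions/create-request\",\n    params={\"tran_id\": txn_id},\n)\n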

"},{"location":"features/Endorser/#configuring-aca-py-for-auto-or-manual-endorsement","title":"Configuring ACA-Py for Auto or Manual Endorsement","text":"

The following start-up parameters are supported by ACA-Py:

Endorsement:\n  --endorser-protocol-role <endorser-role>\n                        Specify the role ('author' or 'endorser') which this agent will participate. Authors will request transaction endorsement from an Endorser. Endorsers will endorse transactions from\n                        Authors, and may write their own transactions to the ledger. If no role (or 'none') is specified then the endorsement protocol will not be used and this agent will write transactions to\n                        the ledger directly. [env var: ACAPY_ENDORSER_ROLE]\n  --endorser-public-did <endorser-public-did>\n                        For transaction Authors, specify the public DID of the Endorser agent who will be endorsing transactions. Note this requires that the connection be made using the Endorser's public\n                        DID. [env var: ACAPY_ENDORSER_PUBLIC_DID]\n  --endorser-alias <endorser-alias>\n                        For transaction Authors, specify the alias of the Endorser connection that will be used to endorse transactions. [env var: ACAPY_ENDORSER_ALIAS]\n  --auto-request-endorsement\n                        For Authors, specify whether to automatically request endorsement for all transactions. (If not specified, the controller must invoke the request endorse operation for each\n                        transaction.) [env var: ACAPY_AUTO_REQUEST_ENDORSEMENT]\n  --auto-endorse-transactions\n                        For Endorsers, specify whether to automatically endorse any received endorsement requests. (If not specified, the controller must invoke the endorsement operation for each transaction.)\n                        [env var: ACAPY_AUTO_ENDORSE_TRANSACTIONS]\n  --auto-write-transactions\n                        For Authors, specify whether to automatically write any endorsed transactions. (If not specified, the controller must invoke the write transaction operation for each transaction.) [env\n                        var: ACAPY_AUTO_WRITE_TRANSACTIONS]\n  --auto-create-revocation-transactions\n                        For Authors, specify whether to automatically create transactions for a cred def's revocation registry. (If not specified, the controller must invoke the endpoints required to create\n                        the revocation registry and assign to the cred def.) [env var: ACAPY_CREATE_REVOCATION_TRANSACTIONS]\n  --auto-promote-author-did\n                        For Authors, specify whether to automatically promote a DID to the wallet public DID after writing to the ledger. [env var: ACAPY_AUTO_PROMOTE_AUTHOR_DID]\n
"},{"location":"features/Endorser/#how-aca-py-handles-endorsements","title":"How Aca-py Handles Endorsements","text":"

Internally, the Endorsement functionality is implemented as a protocol, and is implemented consistently with other protocols:

  • a routes.py file exposes the admin endpoints
  • handler files implement responses to any received Endorse protocol messages
  • a manager.py file implements common functionality that is called from both the routes.py and handler classes (as well as from other classes that need to interact with Endorser functionality)

The Endorser makes use of the Event Bus (links to the PR which links to a hackmd doc) to notify other protocols of any Endorser events of interest. For example, after a Credential Definition endorsement is received, the TransactionManager writes the endorsed transaction to the ledger and uses the Event Bus to notify the Credential Definition manager that it can do any required post-processing (such as writing the cred def record to the wallet, initiating the revocation registry, etc.).

The overall architecture can be illustrated as:

"},{"location":"features/Endorser/#create-credential-definition-and-revocation-registry","title":"Create Credential Definition and Revocation Registry","text":"

An example of an Endorser flow is as follows, showing how a credential definition endorsement is received and processed, and optionally kicks off the revocation registry process:

You can see that there is a standard endorser flow happening each time there is a ledger write (illustrated in the \"Endorser\" process).

At the end of each endorse sequence, the TransactionManager sends a notification via the EventBus so that any dependent processing can continue. Each Router is responsible for listening and responding to these notifications if necessary.

For example:

  • Once the credential definition is created, a revocation registry must be created (for revocable cred defs)
  • Once the revocation registry is created, a revocation entry must be created
  • Potentially, the cred def status could be updated once the revocation entry is completed

Using the EventBus decouples the event sequence. Any functions triggered by an event notification are typically also available directly via Admin endpoints.

"},{"location":"features/Endorser/#create-did-and-promote-to-public","title":"Create DID and Promote to Public","text":"

... and an example of creating a DID and promoting it to public (and creating an ATTRIB transaction for the endpoint):

You can see the same endorsement processes in this sequence.

Once the DID is written, the DID can (optionally) be promoted to the public DID, which will also invoke an ATTRIB transaction to write the endpoint.

"},{"location":"features/JsonLdCredentials/","title":"JSON-LD Credentials in ACA-Py","text":"

By design, Hyperledger Aries is credential format agnostic. This means you can use it for any credential format, as long as an RFC is defined for the specific credential format. ACA-Py currently supports two types of credentials: Indy and JSON-LD credentials. This document describes how to use the latter by making use of W3C Verifiable Credentials using Linked Data Proofs.

"},{"location":"features/JsonLdCredentials/#table-of-contents","title":"Table of Contents","text":"
  • General Concept
  • BBS+
  • Preparing to Issue a Credential
  • JSON-LD Context
    • Writing JSON-LD Contexts
  • Signature Suite
  • Did Method
    • did:sov
    • did:key
  • Issuing Credentials
  • Retrieving Issued Credentials
  • Present Proof
  • VC-API
"},{"location":"features/JsonLdCredentials/#general-concept","title":"General Concept","text":"

The rest of this guide assumes some basic understanding of W3C Verifiable Credentials, JSON-LD and Linked Data Proofs. If you're not familiar with some of these concepts, the following resources can help you get started:

  • Verifiable Credentials Data Model
  • JSON-LD Articles and Presentations
  • Linked Data Proofs
"},{"location":"features/JsonLdCredentials/#bbs","title":"BBS+","text":"

BBS+ credentials offer a lot of privacy-preserving features over non-ZKP credentials. Therefore we recommend always using BBS+ credentials rather than non-ZKP credentials. To get started with BBS+ credentials it is recommended to at least read RFC 0646: W3C Credential Exchange using BBS+ Signatures for a general overview.

Some other resources that can help you get started with BBS+ credentials:

  • BBS+ Signatures 2020
  • Video: BBS+ Credential Exchange in Hyperledger Aries
"},{"location":"features/JsonLdCredentials/#preparing-to-issue-a-credential","title":"Preparing to Issue a Credential","text":"

Contrary to Indy credentials, JSON-LD credentials do not need a schema or credential definition to issue credentials. Everything required to issue the credential is embedded into the credential itself using Linked Data Contexts.

"},{"location":"features/JsonLdCredentials/#json-ld-context","title":"JSON-LD Context","text":"

It is required that every property key in the document can be mapped to an IRI. This means the property key must either be an IRI by default, or have the shorthand property mapped in the @context of the document. If you have properties that are not mapped to IRIs, the Issue Credential API will throw the following error:

<x> attributes dropped. Provide definitions in context to correct. [<missing-properties>]

For credentials the https://www.w3.org/2018/credentials/v1 context MUST always be the first context. In addition, when issuing BBS+ credentials the https://w3id.org/security/bbs/v1 URL MUST be present in the context. For convenience this URL will be automatically added to the @context of the credential if not present.

{\n  \"@context\": [\n    \"https://www.w3.org/2018/credentials/v1\",\n    \"https://other-contexts.com\"\n  ]\n}\n
"},{"location":"features/JsonLdCredentials/#writing-json-ld-contexts","title":"Writing JSON-LD Contexts","text":"

Writing JSON-LD contexts can be a daunting task and is out of scope of this guide. Generally you should try to make use of already existing vocabularies. Some examples are the vocabularies defined in the W3C Credentials Community Group:

  • Vaccination Certificate Vocabulary
  • Citizenship Vocabulary
  • Traceability Vocabulary

Verifiable credentials have not been around that long, so there aren't that many vocabularies ready to use. If you can't use one of the existing vocabularies it is still beneficial to lean on already defined lower level contexts. http://schema.org has a large registry of definitions that can be used to build new contexts. The example vocabularies linked above all make use of types from http://schema.org.

For the remainder of this guide, we will be using the example UniversityDegreeCredential type and https://www.w3.org/2018/credentials/examples/v1 context from the Verifiable Credential Data Model. You should not use this for production use cases.

"},{"location":"features/JsonLdCredentials/#signature-suite","title":"Signature Suite","text":"

Before issuing a credential you must determine a signature suite to use. ACA-Py currently supports three signature suites for issuing credentials:

  • Ed25519Signature2018 - Very well supported. No zero knowledge proofs or selective disclosure.
  • Ed25519Signature2020 - Updated version of 2018 suite.
  • BbsBlsSignature2020 - Newer, but supports zero knowledge proofs and selective disclosure.

Generally you should always use BbsBlsSignature2020, as it allows the holder to derive a new credential during proving, meaning it doesn't have to disclose all fields and doesn't have to reveal the signature.

"},{"location":"features/JsonLdCredentials/#did-method","title":"DID Method","text":"

Besides the JSON-LD context, we need a DID to use for issuing the credential. ACA-Py currently supports two DID methods for issuing credentials:

  • did:sov - Can only be used for Ed25519Signature2018 signature suite.
  • did:key - Can be used for both Ed25519Signature2018 and BbsBlsSignature2020 signature suites.
"},{"location":"features/JsonLdCredentials/#didsov","title":"did:sov","text":"

When using did:sov you need to make sure to use a public did so other agents can resolve the did. It is also important that the other agent is using the same Indy ledger for resolving the did. You can get the public did using the /wallet/did/public endpoint. For backwards compatibility the did is returned without the did:sov prefix. When using the did for issuance, make sure to prepend this prefix to the did (so DViYrCMPWfuLiY7LLs8giB becomes did:sov:DViYrCMPWfuLiY7LLs8giB).

"},{"location":"features/JsonLdCredentials/#didkey","title":"did:key","text":"

A did:key did is not anchored to a ledger, but embeds the key directly in the identifier part of the did. See the did:key Method Specification for more information.

You can create a did:key using the /wallet/did/create endpoint with the following body. Use ed25519 for Ed25519Signature2018, bls12381g2 for BbsBlsSignature2020.

{\n  \"method\": \"key\",\n  \"options\": {\n    \"key_type\": \"bls12381g2\" // or ed25519\n  }\n}\n

The above call will return a did that looks something like this: did:key:zUC7FsmhhifDTuYXdwYES2UpCpWwYieJRapC6oEWqyt5KfJ3ztfLzYnbWjuXQ5drYaKaho3FjxrfDB81gtAJKjbM4yAmBuNoj3YKDXqW151KkkYarpEoEVWMMcN5zPfjCrQ8Saj

"},{"location":"features/JsonLdCredentials/#issuing-credentials","title":"Issuing Credentials","text":"

Issuing JSON-LD credentials is only possible with the issue credential v2 protocol (/issue-credential-2.0).

The format used for exchanging JSON-LD credentials is defined in RFC 0593: JSON-LD Credential Attachment format. The API in ACA-Py exactly matches the formats as described in this RFC, with the most important (from the ACA-Py API perspective) being aries/ld-proof-vc-detail@v1.0. Read the RFC to see the exact properties required to construct a valid Linked Data Proof VC Detail.

All endpoints in the API use the aries/ld-proof-vc-detail@v1.0 format. We'll use /issue-credential-2.0/send as an example, but it works the same for the other endpoints. Contrary to issuing Indy credentials, JSON-LD credentials do not require a credential preview. All properties should be directly embedded in the credential.

The detail should be included under the filter.ld_proof property. To issue a credential call the /issue-credential-2.0/send endpoint, with the example body below and the connection_id and issuer keys replaced. The value of issuer should be the did that you created in the Did Method paragraph above.

If you don't have auto-respond-credential-offer and auto-store-credential enabled in the ACA-Py config, you will need to call /issue-credential-2.0/records/{cred_ex_id}/send-request and /issue-credential-2.0/records/{cred_ex_id}/store to finalize the credential issuance.

See the example body
{\n  \"connection_id\": \"ddc23de9-359f-465c-b66e-f7c5a0cc9a57\",\n  \"filter\": {\n    \"ld_proof\": {\n      \"credential\": {\n        \"@context\": [\n          \"https://www.w3.org/2018/credentials/v1\",\n          \"https://www.w3.org/2018/credentials/examples/v1\"\n        ],\n        \"type\": [\"VerifiableCredential\", \"UniversityDegreeCredential\"],\n        \"issuer\": \"did:key:zUC7FsmhhifDTuYXdwYES2UpCpWwYieJRapC6oEWqyt5KfJ3ztfLzYnbWjuXQ5drYaKaho3FjxrfDB81gtAJKjbM4yAmBuNoj3YKDXqW151KkkYarpEoEVWMMcN5zPfjCrQ8Saj\",\n        \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n        \"credentialSubject\": {\n          \"degree\": {\n            \"type\": \"BachelorDegree\",\n            \"name\": \"Bachelor of Science and Arts\"\n          },\n          \"college\": \"Faber College\"\n        }\n      },\n      \"options\": {\n        \"proofType\": \"BbsBlsSignature2020\"\n      }\n    }\n  }\n}\n
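
If the auto flags mentioned above are not enabled, finalizing on the holder side could look like this (a sketch; the admin URL and exchange id are illustrative):

import requests\n\nHOLDER_ADMIN_URL = \"http://localhost:8041\"\ncred_ex_id = \"<credential exchange id from the issue_credential_v2_0 webhook>\"\n\n# Accept the received offer by sending a credential request\nrequests.post(f\"{HOLDER_ADMIN_URL}/issue-credential-2.0/records/{cred_ex_id}/send-request\")\n\n# After the credential is received, store it in the wallet\nrequests.post(f\"{HOLDER_ADMIN_URL}/issue-credential-2.0/records/{cred_ex_id}/store\", json={})\n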
"},{"location":"features/JsonLdCredentials/#retrieving-issued-credentials","title":"Retrieving Issued Credentials","text":"

After issuance, the credential will be stored inside the wallet. Because the structure of JSON-LD credentials is so different from that of Indy credentials, a new endpoint has been added to retrieve W3C credentials.

Call the /credentials/w3c endpoint to retrieve all JSON-LD credentials in your wallet. See the detail below for an example response based on the issued credential from the Issuing Credentials paragraph above.

See the example response
{\n  \"results\": [\n    {\n      \"contexts\": [\n        \"https://www.w3.org/2018/credentials/examples/v1\",\n        \"https://www.w3.org/2018/credentials/v1\",\n        \"https://w3id.org/security/bbs/v1\"\n      ],\n      \"types\": [\"UniversityDegreeCredential\", \"VerifiableCredential\"],\n      \"schema_ids\": [],\n      \"issuer_id\": \"did:key:zUC7FsmhhifDTuYXdwYES2UpCpWwYieJRapC6oEWqyt5KfJ3ztfLzYnbWjuXQ5drYaKaho3FjxrfDB81gtAJKjbM4yAmBuNoj3YKDXqW151KkkYarpEoEVWMMcN5zPfjCrQ8Saj\",\n      \"subject_ids\": [],\n      \"proof_types\": [\"BbsBlsSignature2020\"],\n      \"cred_value\": {\n        \"@context\": [\n          \"https://www.w3.org/2018/credentials/v1\",\n          \"https://www.w3.org/2018/credentials/examples/v1\",\n          \"https://w3id.org/security/bbs/v1\"\n        ],\n        \"type\": [\"VerifiableCredential\", \"UniversityDegreeCredential\"],\n        \"issuer\": \"did:key:zUC7FsmhhifDTuYXdwYES2UpCpWwYieJRapC6oEWqyt5KfJ3ztfLzYnbWjuXQ5drYaKaho3FjxrfDB81gtAJKjbM4yAmBuNoj3YKDXqW151KkkYarpEoEVWMMcN5zPfjCrQ8Saj\",\n        \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n        \"credentialSubject\": {\n          \"degree\": {\n            \"type\": \"BachelorDegree\",\n            \"name\": \"Bachelor of Science and Arts\"\n          },\n          \"college\": \"Faber College\"\n        },\n        \"proof\": {\n          \"type\": \"BbsBlsSignature2020\",\n          \"proofPurpose\": \"assertionMethod\",\n          \"verificationMethod\": \"did:key:zUC7FsmhhifDTuYXdwYES2UpCpWwYieJRapC6oEWqyt5KfJ3ztfLzYnbWjuXQ5drYaKaho3FjxrfDB81gtAJKjbM4yAmBuNoj3YKDXqW151KkkYarpEoEVWMMcN5zPfjCrQ8Saj#zUC7FsmhhifDTuYXdwYES2UpCpWwYieJRapC6oEWqyt5KfJ3ztfLzYnbWjuXQ5drYaKaho3FjxrfDB81gtAJKjbM4yAmBuNoj3YKDXqW151KkkYarpEoEVWMMcN5zPfjCrQ8Saj\",\n          \"created\": \"2021-05-03T12:31:28.561945\",\n          \"proofValue\": \"iUFtRGdLLCWxKx8VD3oiFBoRMUFKhSitTzMsfImXm6OF0d8il+Z40aLz8S7m8EcXPQhRjcWWL9jkfcf1SDifD4CvxVg69NvB7hZyIIz9hwAyi3LmTm0ez4NDRCKyieBuzqKbfM2eACWn/ilhOJBm6w==\"\n        }\n      },\n      \"cred_tags\": {},\n      \"record_id\": \"541ddbce5760497d98e68917be8c05bd\"\n    }\n  ]\n}\n
"},{"location":"features/JsonLdCredentials/#present-proof","title":"Present Proof","text":"

\u26a0\ufe0f TODO: https://github.com/hyperledger/aries-cloudagent-python/pull/1125

"},{"location":"features/JsonLdCredentials/#vc-api","title":"VC-API","text":"

In order to support these functions (issuing, verifying, and storing credentials and presentations) outside of the respective DIDComm protocols, a set of endpoints conforming to the vc-api specification is available. These endpoints should be used by a controller when building an identity platform.

These endpoints include:

  • GET /vc/credentials -> returns a list of all stored json-ld credentials
  • GET /vc/credentials/{id} -> returns a json-ld credential based on its ID
  • POST /vc/credentials/issue -> signs a credential
  • POST /vc/credentials/verify -> verifies a credential
  • POST /vc/credentials/store -> stores an issued credential
  • POST /vc/presentations/prove -> proves a presentation
  • POST /vc/presentations/verify -> verifies a presentation

To learn more about using these endpoints, please refer to the available postman collection.
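
For example, signing a credential through the vc-api issue endpoint could look like this (a sketch; the admin URL, DIDs, and response handling are illustrative and should be checked against the API definition):

import requests\n\nADMIN_URL = \"http://localhost:8031\"\n\nresp = requests.post(\n    f\"{ADMIN_URL}/vc/credentials/issue\",\n    json={\n        \"credential\": {\n            \"@context\": [\"https://www.w3.org/2018/credentials/v1\"],\n            \"type\": [\"VerifiableCredential\"],\n            \"issuer\": \"<a DID held by this wallet>\",\n            \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n            \"credentialSubject\": {\"id\": \"<subject DID>\"},\n        },\n        \"options\": {},\n    },\n)\nsigned_credential = resp.json()[\"verifiableCredential\"]\n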

"},{"location":"features/JsonLdCredentials/#external-suite-provider","title":"External Suite Provider","text":"

It is possible to extend the signature suite support, including outsourcing signing JSON-LD Credentials to some other component (KMS, HSM, etc.), using the ExternalSuiteProvider interface. This interface can be implemented and registered via plugin. The plugged in provider will be used by ACA-Py's LDP-VC subsystem to create a LinkedDataProof object, which is responsible for signing normalized credential values.

This interface enables taking advantage of ACA-Py's JSON-LD processing to construct and format the credential while exposing a simple interface to a plugin to make it responsible for signatures. This can also be combined with plugged in DID Methods, VerificationKeyStrategy, and other pluggable components.

See this example project here for more details on the interface and its usage: https://github.com/dbluhm/acapy-ld-signer

"},{"location":"features/Mediation/","title":"Mediation docs","text":""},{"location":"features/Mediation/#concepts","title":"Concepts","text":"
  • DIDComm Message Forwarding - Sending an encrypted message to its recipient by first sending it to a third party responsible for forwarding the message on. Message contents are encrypted once for the recipient then wrapped in a forward message encrypted to the third party.
  • Mediator - An agent that forwards messages to a client over a DIDComm connection.
  • Mediated Agent or Mediation client - The agent(s) to which a mediator is willing to forward messages.
  • Mediation Request - A message from a client to a mediator requesting mediation or forwarding.
  • Keylist - The list of public keys used by the mediator to look up the connection to which a forward message should be sent. Each mediated agent is responsible for maintaining the keylist with the mediator.
  • Keylist Update - A message from a client to a mediator informing the mediator of changes to the keylist.
  • Default Mediator - A mediator to be used with every newly created DIDComm connection.
  • Mediation Connection - Connection between the mediator and the mediated agent or client. Agents can use as many mediators as the identity owner sees fit. Requests for mediation are handled on a per connection basis.
  • See Aries RFC 0211: Coordinate Mediation Protocol for additional details on message attributes and more.
"},{"location":"features/Mediation/#command-line-arguments","title":"Command Line Arguments","text":"
  • --open-mediation - Instructs mediators to automatically grant all incoming mediation requests.
  • --mediator-invitation - Receive invitation, send mediation request and set as default mediator.
  • --mediator-connections-invite - Connect to mediator through a connection invitation. If not specified, connect using an OOB invitation.
  • --default-mediator-id - Set pre-existing mediator as default mediator.
  • --clear-default-mediator - Clear the stored default mediator.

The minimum set of arguments required to enable mediation are:

aca-py start ... \\\n    --open-mediation\n

To automate the mediation process on startup, additionally specify the following argument on the mediated agent (not the mediator):

aca-py start ... \\\n    --mediator-invitation \"<a multi-use invitation url from the mediator>\"\n

If a default mediator has already been established, then the --default-mediator-id argument can be used instead of the --mediator-invitation.

"},{"location":"features/Mediation/#didcomm-messages","title":"DIDComm Messages","text":"

See Aries RFC 0211: Coordinate Mediation Protocol.

"},{"location":"features/Mediation/#admin-api","title":"Admin API","text":"
  • GET mediation/requests
  • Return a list of all mediation records. Filter by conn_id, state, mediator_terms and recipient_terms.
  • GET mediation/requests/{mediation_id}
  • Retrieve a mediation record by id.
  • DELETE mediation/requests/{mediation_id}
  • Delete mediation record by id.
  • POST mediation/requests/{mediation_id}/grant
  • As a mediator, grant a stored mediation request and send granted message to client.
  • POST mediation/requests/{mediation_id}/deny
  • As a mediator, deny a stored mediation request and send denied message to client.
  • POST mediation/request/{conn_id}
  • Send a mediation request to connection identified by the given connection ID.
  • GET mediation/keylists
  • Returns key list associated with a connection. Filter on client for keys mediated by other agents and server for keys mediated by this agent.
  • POST mediation/keylists/{mediation_id}/send-keylist-update
  • Send a keylist update message to the mediator identified by the given mediation ID. Updates are contained in the body of the request (see the sketch after this list).
  • POST mediation/keylists/{mediation_id}/send-keylist-query
  • Send keylist query message to mediator identified by the given mediation ID.
  • GET mediation/default-mediator (PR pending)
  • Retrieve the currently set default mediator.
  • PUT mediation/{mediation_id}/default-mediator (PR pending)
  • Set the mediator identified by the given mediation ID as the default mediator.
  • DELETE mediation/default-mediator (PR pending)
  • Clear the currently set default mediator (mediation status is maintained and remains functional, just not used as the default).
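
For example, a keylist update could be sent like this (a sketch; the admin URL, mediation id, and key are illustrative):

import requests\n\nADMIN_URL = \"http://localhost:8031\"\nmediation_id = \"<granted mediation record id>\"\n\nrequests.post(\n    f\"{ADMIN_URL}/mediation/keylists/{mediation_id}/send-keylist-update\",\n    json={\"updates\": [{\"recipient_key\": \"<base58 verkey>\", \"action\": \"add\"}]},\n)\n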
"},{"location":"features/Mediation/#mediator-message-flow-overview","title":"Mediator Message Flow Overview","text":""},{"location":"features/Mediation/#using-a-mediator","title":"Using a Mediator","text":"

After establishing a connection with a mediator and having mediation granted, you can use that mediation id for future DIDComm connections. When creating, receiving or accepting an invitation intended to be mediated, you provide mediation_id with the desired mediator id. If using a single mediator for all future connections, you can set a default mediation id. If no mediation_id is provided, the default mediation id will be used instead.
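
For example, creating an out-of-band invitation that uses a specific mediator could look like this (a sketch; the admin URL and mediation id are illustrative, and the body shape should be confirmed against the out-of-band API definition):

import requests\n\nADMIN_URL = \"http://localhost:8031\"\n\nrequests.post(\n    f\"{ADMIN_URL}/out-of-band/create-invitation\",\n    json={\n        \"handshake_protocols\": [\"https://didcomm.org/didexchange/1.0\"],\n        \"mediation_id\": \"<granted mediation record id>\",\n    },\n)\n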

"},{"location":"features/Multicredentials/","title":"Multi-Credentials","text":"

It is well known that multiple AnonCreds credentials can be combined to present a presentation proof with an \"and\" logical operator: for instance, a verifier can ask for the \"name\" claim from an eID and the \"address\" claim from a bank statement to have a single proof that is either valid or invalid. With the Present Proof Protocol v2, it is possible to have \"and\" and \"or\" logical operators for AnonCreds and/or W3C Verifiable Credentials.

With the Present Proof Protocol v2, verifiers can ask for a combination of credentials as proof. For instance, a verifier can ask for a claim from an AnonCreds credential and a verifiable presentation from a W3C Verifiable Credential, which opens the possibility of using Aries Cloud Agent Python for rather complex presentation proof requests that wouldn't be possible with support for only one of AnonCreds or W3C Verifiable Credentials.

Moreover, it is possible to make similar presentation proof requests using the or logical operator. For instance, a verifier can ask for either an eID in AnonCreds format or an eID in W3C Verifiable Credential format. This has the potential to solve the interoperability problem of different credential formats and ecosystems from a user point of view by shifting the requirement of holding/accepting different credential formats from identity holders to verifiers. Here again, Aries Cloud Agent Python as the underlying verifier agent can tackle such complex presentation proof requests, since the agent is capable of verifying both types of credential formats and proof types.

In the future, it may even be possible to include an mDoc as an attachment with an \"and\" or \"or\" logical operation, along with AnonCreds and/or W3C Verifiable Credentials. For this to happen, ACA-Py either needs the capability to validate mDocs internally or to connect to third-party endpoints that validate them and return a response.

"},{"location":"features/Multiledger/","title":"Multi-ledger in ACA-Py","text":"

ACA-Py supports using multiple Indy ledgers (both IndySdk and IndyVdr) for resolving a DID. For read requests, checking of multiple ledgers in parallel is done dynamically according to logic detailed in Read Requests Ledger Selection. For write requests, dynamic allocation of the write_ledger is supported. Configurable write ledgers can be assigned using is_write in the configuration or using any of the --genesis-url, --genesis-file, and --genesis-transactions startup (ACA-Py) arguments. If no write ledger is assigned then a ConfigError is raised.

More background information including problem statement, design (algorithm) and more can be found here.

"},{"location":"features/Multiledger/#table-of-contents","title":"Table of Contents","text":"
  • Usage
  • Example config file
  • Config properties
  • Multi-ledger Admin API
  • Ledger Selection
  • Read Requests
    • For checking ledger in parallel
  • Write Requests
  • A Special Warning for TAA Acceptance
  • Impact on other ACA-Py function
  • Known Issues
"},{"location":"features/Multiledger/#usage","title":"Usage","text":"

Multi-ledger is disabled by default. You can enable support for multiple ledgers using the --genesis-transactions-list startup parameter. This parameter accepts a string which is the path to the YAML configuration file. For example:

--genesis-transactions-list ./aries_cloudagent/config/multi_ledger_config.yml

If --genesis-transactions-list is specified, then --genesis-url, --genesis-file, --genesis-transactions should not be specified.

"},{"location":"features/Multiledger/#example-config-file","title":"Example config file","text":"
- id: localVON\n  is_production: false\n  genesis_url: \"http://host.docker.internal:9000/genesis\"\n- id: bcovrinTest\n  is_production: true\n  is_write: true\n  genesis_url: \"http://test.bcovrin.vonx.io/genesis\"\n
- id: localVON\n  is_production: false\n  genesis_url: \"http://host.docker.internal:9000/genesis\"\n- id: bcovrinTest\n  is_production: true\n  is_write: true\n  genesis_url: \"http://test.bcovrin.vonx.io/genesis\"\n  endorser_did: \"9QPa6tHvBHttLg6U4xvviv\"\n  endorser_alias: \"endorser_test\"\n- id: greenlightDev\n  is_production: true\n  is_write: true\n  genesis_url: \"http://test.bcovrin.vonx.io/genesis\"\n

Note: the is_write property means that the ledger is write configurable. With reference to the above config example, both the bcovrinTest and greenlightDev ledgers (the latter no longer available -- in the above it points to BCovrin Test as well) are write configurable. By default, on startup bcovrinTest will be the write ledger, as it is the topmost write configurable production ledger (see the write ledger selection rule for more details). Using the PUT /ledger/{ledger_id}/set-write-ledger endpoint, either greenlightDev or bcovrinTest can be set as the write ledger.

Note 2: The greenlightDev ledger is no longer available, so both ledger entries in the example above and below intentionally point to the same ledger URL.

- id: localVON\n  is_production: false\n  is_write: true\n  genesis_url: \"http://host.docker.internal:9000/genesis\"\n- id: bcovrinTest\n  is_production: true\n  genesis_url: \"http://test.bcovrin.vonx.io/genesis\"\n- id: greenlightDev\n  is_production: true\n  genesis_url: \"http://test.bcovrin.vonx.io/genesis\"\n

Note: For instance, with regards to the example config above, localVON will be the write ledger; as there are no write configurable production ledgers, the topmost write configurable non-production ledger is chosen.

"},{"location":"features/Multiledger/#config-properties","title":"Config properties","text":"

For each ledger, the required properties are as follows:

  • id*: The id (or name) of the ledger; it can also be used as the pool name if none is provided
  • is_production*: Whether the ledger is a production ledger. This is used by the pool selector algorithm to know which ledger to use for certain interactions (i.e. prefer production ledgers over non-production ledgers)

For connecting to a ledger, one of the following needs to be specified:

  • genesis_file: The path to the genesis file to use for connecting to an Indy ledger.
  • genesis_transactions: String of genesis transactions to use for connecting to an Indy ledger.
  • genesis_url: The url from which to download the genesis transactions to use for connecting to an Indy ledger.
  • is_write: Whether this ledger is writable. At least one write ledger must be specified, unless running in read-only mode. Multiple write ledgers can be specified in config.

Optional properties:

  • pool_name: name of the indy pool to be opened
  • keepalive: how many seconds to keep the ledger open
  • socks_proxy
  • endorser_did: Endorser public DID registered on the ledger, needed for supporting Endorser protocol at multi-ledger level.
  • endorser_alias: Endorser alias for this ledger, needed for supporting Endorser protocol at multi-ledger level.

Note: Both endorser_did and endorser_alias are part of the endorser info. Whenever a write ledger is selected using PUT /ledger/{ledger_id}/set-write-ledger, the endorser info associated with that ledger in the config updates the endorser.endorser_public_did and endorser.endorser_alias profile setting respectively.

"},{"location":"features/Multiledger/#multi-ledger-admin-api","title":"Multi-ledger Admin API","text":"

Multi-ledger related actions are grouped under the ledger topic in the SwaggerUI.

  • GET /ledger/config: Returns the multiple ledger configuration currently in use
  • GET /ledger/get-write-ledger: Returns the current active/set write_ledger's ledger_id
  • GET /ledger/get-write-ledgers: Returns list of available write_ledger's ledger_id
  • PUT /ledger/{ledger_id}/set-write-ledger: Set active write_ledger's ledger_id
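
For example, switching the active write ledger at runtime could look like this (a sketch; the admin URL is illustrative, and the ledger id must match one from your configuration):

import requests\n\nADMIN_URL = \"http://localhost:8031\"\n\n# List the ledgers that are write configurable\nprint(requests.get(f\"{ADMIN_URL}/ledger/get-write-ledgers\").json())\n\n# Promote one of them to be the active write ledger\nrequests.put(f\"{ADMIN_URL}/ledger/bcovrinTest/set-write-ledger\")\n\n# Confirm the change\nprint(requests.get(f\"{ADMIN_URL}/ledger/get-write-ledger\").json())\n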
"},{"location":"features/Multiledger/#ledger-selection","title":"Ledger Selection","text":""},{"location":"features/Multiledger/#read-requests","title":"Read Requests","text":"

The following process is executed for these functions in ACA-Py:

  1. get_schema
  2. get_credential_definition
  3. get_revoc_reg_def
  4. get_revoc_reg_entry
  5. get_key_for_did
  6. get_all_endpoints_for_did
  7. get_endpoint_for_did
  8. get_nym_role
  9. get_revoc_reg_delta

If multiple ledgers are configured then IndyLedgerRequestsExecutor service extracts DID from the record identifier and executes the check below, else it returns the BaseLedger instance.

"},{"location":"features/Multiledger/#for-checking-ledger-in-parallel","title":"For checking ledger in parallel","text":"
  • lookup_did_in_configured_ledgers function
  • If the calling function (above) is in items 1-4, then check the DID in cache for a corresponding applicable ledger_id. If found, return the ledger info, else continue.
  • Otherwise, launch parallel _get_ledger_by_did tasks for each of the configured ledgers.
  • As these tasks get finished, construct applicable_prod_ledgers and applicable_non_prod_ledgers dictionaries, each with self_certified and non_self_certified inner dict which are sorted by the original order or index.
  • Order/preference for selection: self_certified > production > non_production
    • Checks production ledger where the DID is self_certified
    • Checks non_production ledger where the DID is self_certified
    • Checks production ledger where the DID is not self_certified
    • Checks non_production ledger where the DID is not self_certified
  • Return an applicable ledger if found, else raise an exception.
  • _get_ledger_by_did function
  • Build and submit GET_NYM
  • Wait for a response for 10 seconds; if timed out, return None
  • Parse response
  • Validate state proof
  • Check if DID is self certified
  • Returns ledger info to lookup_did_in_configured_ledgers
"},{"location":"features/Multiledger/#write-requests","title":"Write Requests","text":"

On startup, the first configured applicable ledger is assigned as the write_ledger (BaseLedger), the selection is dependent on the order (top-down) and whether it is production or non_production. For instance, considering this example configuration, ledger bcovrinTest will be set as write_ledger as it is the topmost production ledger. If no production ledgers are included in configuration then the topmost non_production ledger is selected.

"},{"location":"features/Multiledger/#a-special-warning-for-taa-acceptance","title":"A Special Warning for TAA Acceptance","text":"

When you run in multi-ledger mode, ACA-Py will use the pool-name (or id) specified in the ledger configuration file for each ledger.

(When running in single-ledger mode, ACA-Py uses default as the ledger name.)

If you are running against a ledger in write mode, and the ledger requires you to accept a Transaction Author Agreement (TAA), ACA-Py stores the TAA acceptance status in the wallet in a non-secrets record, using the ledger's pool_name as a key.

This means that if you are upgrading from single-ledger to multi-ledger mode, you will need to either:

  • set the id for your writable ledger to default (in your ledgers.yaml file)

or:

  • re-accept the TAA once you restart your ACA-Py in multi-ledger mode

Once you re-start ACA-Py, you can check the GET /ledger/taa endpoint to verify your TAA acceptance status.
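
For example (a sketch; the admin URL is illustrative):

import requests\n\nprint(requests.get(\"http://localhost:8031/ledger/taa\").json())\n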

"},{"location":"features/Multiledger/#impact-on-other-aca-py-function","title":"Impact on other ACA-Py function","text":"

There should be no impact/change in functionality to any ACA-Py protocols.

IndySdkLedger was refactored by replacing the wallet: IndySdkWallet instance variable with profile: Profile, and accordingly .aries_cloudagent/indy/credex/verifier, .aries_cloudagent/indy/models/pres_preview, .aries_cloudagent/indy/sdk/profile.py, .aries_cloudagent/indy/sdk/verifier, and ./aries_cloudagent/indy/verifier were also updated.

Added build_and_return_get_nym_request and submit_get_nym_request helper functions to IndySdkLedger and IndyVdrLedger.

Best practice/feedback emerging from Askar session deadlock issue and endorser refactoring PR was also addressed here by not leaving sessions open unnecessarily and changing context.session to context.profile.session, etc.

These changes are made here:

  • ./aries_cloudagent/ledger/routes.py
  • ./aries_cloudagent/messaging/credential_definitions/routes.py
  • ./aries_cloudagent/messaging/schemas/routes.py
  • ./aries_cloudagent/protocols/actionmenu/v1_0/routes.py
  • ./aries_cloudagent/protocols/actionmenu/v1_0/util.py
  • ./aries_cloudagent/protocols/basicmessage/v1_0/routes.py
  • ./aries_cloudagent/protocols/coordinate_mediation/v1_0/handlers/keylist_handler.py
  • ./aries_cloudagent/protocols/coordinate_mediation/v1_0/routes.py
  • ./aries_cloudagent/protocols/endorse_transaction/v1_0/routes.py
  • ./aries_cloudagent/protocols/introduction/v0_1/handlers/invitation_handler.py
  • ./aries_cloudagent/protocols/introduction/v0_1/routes.py
  • ./aries_cloudagent/protocols/issue_credential/v1_0/handlers/credential_issue_handler.py
  • ./aries_cloudagent/protocols/issue_credential/v1_0/handlers/credential_offer_handler.py
  • ./aries_cloudagent/protocols/issue_credential/v1_0/handlers/credential_proposal_handler.py
  • ./aries_cloudagent/protocols/issue_credential/v1_0/handlers/credential_request_handler.py
  • ./aries_cloudagent/protocols/issue_credential/v1_0/routes.py
  • ./aries_cloudagent/protocols/issue_credential/v2_0/routes.py
  • ./aries_cloudagent/protocols/present_proof/v1_0/handlers/presentation_handler.py
  • ./aries_cloudagent/protocols/present_proof/v1_0/handlers/presentation_proposal_handler.py
  • ./aries_cloudagent/protocols/present_proof/v1_0/handlers/presentation_request_handler.py
  • ./aries_cloudagent/protocols/present_proof/v1_0/routes.py
  • ./aries_cloudagent/protocols/trustping/v1_0/routes.py
  • ./aries_cloudagent/resolver/routes.py
  • ./aries_cloudagent/revocation/routes.py
"},{"location":"features/Multiledger/#known-issues","title":"Known Issues","text":"
  • When in multi-ledger mode and switching ledgers (e.g.: the agent is registered on Ledger A and has published its DID there, and now wants to \"move\" to Ledger B) there is an issue that will cause the registration to the new ledger to fail.
"},{"location":"features/Multitenancy/","title":"Multi-tenancy in ACA-Py","text":"

Most deployments of ACA-Py use a single wallet for all operations. This means all connections, credentials, keys, and everything else is stored in the same wallet and shared between all controllers of the agent. Multi-tenancy in ACA-Py allows multiple tenants to use the same ACA-Py instance with a different context. All tenants get their own encrypted wallet that only holds their own data.

This allows ACA-Py to be used for a wider range of use cases. One use case could be a company that creates a wallet for each department. Each department has full control over the actions they perform while having a shared instance for easy maintenance. Another use case could be an Issuer-Hosted Custodial Agent, where it is required to host the agent on behalf of someone else.

"},{"location":"features/Multitenancy/#table-of-contents","title":"Table of Contents","text":"
  • General Concept
  • Base and Sub Wallets
  • Usage
  • Multi-tenant Admin API
  • Managed vs Unmanaged Mode
  • Managed Mode
  • Unmanaged Mode
  • Mode Usage
  • Message Routing
  • Relaying
  • Mediation
  • Webhooks
  • Webhook URLs
  • Identifying the wallet
  • Authentication
  • Getting a token
    • Method 1: Register new tenant
    • Method 2: Get tenant token
  • JWT Secret
  • SwaggerUI
  • Tenant Management
  • Update a tenant
  • Remove a tenant
  • Per tenant settings
"},{"location":"features/Multitenancy/#general-concept","title":"General Concept","text":"

When multi-tenancy is enabled in ACA-Py there is still a single agent running, however, some of the resources are now shared between the tenants of the agent. Each tenant has their own wallet, with their own DIDs, connections, and credentials. Transports and most of the settings are still shared between agents. Each wallet uses the same endpoint, so to the outside world, it is not obvious multiple tenants are using the same agent.

"},{"location":"features/Multitenancy/#base-and-sub-wallets","title":"Base and Sub Wallets","text":"

Multi-tenancy in ACA-Py makes a distinction between a base wallet and sub wallets.

The wallets used by the different tenants are called sub wallets. A sub wallet is almost identical to a wallet when multi-tenancy is disabled. This means that you can do everything with it that a single-tenant ACA-Py instance can also do.

The base wallet, however, takes on a different role and has limited functionality. Its main function is to manage the sub wallets, which can be done using the Multi-tenant Admin API. It stores all settings and information about the different sub wallets and will route incoming messages to the corresponding sub wallets. See Message Routing for more details. All other features are disabled for the base wallet. This means it cannot issue credentials, present proof, or do any of the other actions sub wallets can do. This is to keep a clear hierarchical difference between base and sub wallets. For this reason, the base wallet should generally not be provisioned using the --wallet-seed argument: not only is it unnecessary for sub wallet management operations, but it would also require the corresponding DID to be correctly registered on the ledger for the service to start up correctly.

"},{"location":"features/Multitenancy/#usage","title":"Usage","text":"

Multi-tenancy is disabled by default. You can enable support for multiple wallets using the --multitenant startup parameter. To also be able to manage wallets for the tenants, the multi-tenant admin API can be enabled using the --multitenant-admin startup parameter. See Multi-tenant Admin API below for more info on the admin API.

The --jwt-secret startup parameter is required when multi-tenancy is enabled. This is used for JWT creation and verification. See Authentication below for more info.

Example:

# This enables multi-tenancy in ACA-Py\nmultitenant: true\n\n# This enables the admin API for multi-tenancy. More information below\nmultitenant-admin: true\n\n# This sets the secret used for JWT creation/verification for sub wallets\njwt-secret: Something very secret\n
"},{"location":"features/Multitenancy/#multi-tenant-admin-api","title":"Multi-tenant Admin API","text":"

The multi-tenant admin API allows you to manage wallets in ACA-Py. Only the base wallet can manage wallets, so you can't, for example, create a wallet in the context of a sub wallet (using the Authorization header as specified in Authentication).

Multi-tenancy related actions are grouped under the /multitenancy path or the multitenancy topic in the SwaggerUI. As mentioned above, the multi-tenant admin API is disabled by default, even when multi-tenancy is enabled. This is to allow for more flexible agent configuration (e.g. horizontal scaling where only a single instance exposes the admin API). To enable the multi-tenant admin API, the --multitenant-admin startup parameter can be used.

See the SwaggerUI for the exact API definition for multi-tenancy.

"},{"location":"features/Multitenancy/#managed-vs-unmanaged-mode","title":"Managed vs Unmanaged Mode","text":"

Multi-tenancy in ACA-Py is designed with two key management modes in mind.

"},{"location":"features/Multitenancy/#managed-mode","title":"Managed Mode","text":"

In managed mode, ACA-Py will manage the key for the wallet. This is the easiest configuration as it allows ACA-Py to fully control the wallet. When a message is received from another agent it can immediately unlock the wallet and process the message. The wallet key is stored encrypted in the base wallet.

"},{"location":"features/Multitenancy/#unmanaged-mode","title":"Unmanaged Mode","text":"

In unmanaged mode, ACA-Py won't manage the key for the wallet. The key is not stored in the base wallet, which means the key to unlock the wallet needs to be provided whenever the wallet is used. When a message from another agent is received, ACA-Py cannot immediately unlock the wallet and process the message. See Authentication for more info.

It is important to note that unmanaged mode doesn't provide much additional security over managed mode. The key is still processed by the agent, and therefore trust is required. It could, however, provide some benefit in the case that a multi-tenant agent is compromised, as the agent doesn't store the key to unlock the wallet.

Although support for unmanaged mode is mostly in place, the receiving of messages from other agents in unmanaged mode is not supported yet. This means unmanaged mode cannot be used yet.

"},{"location":"features/Multitenancy/#mode-usage","title":"Mode Usage","text":"

The mode used can be specified when creating a wallet using the key_management_mode parameter.

// POST /multitenancy/wallet\n{\n  // ... other params ...\n  \"key_management_mode\": \"managed\" // or \"unmanaged\"\n}\n
"},{"location":"features/Multitenancy/#message-routing","title":"Message Routing","text":"

In multi-tenant mode, when ACA-Py receives a message from another agent, it will need to determine which tenant to route the message to. Hyperledger Aries defines two types of routing methods, mediation and relaying.

See the Mediators and Relays RFC for an in-depth description of the difference between the two concepts.

"},{"location":"features/Multitenancy/#relaying","title":"Relaying","text":"

In multi-tenant mode, ACA-Py still exposes a single endpoint for each transport. This means it can't route messages to sub wallets based on the endpoint. To resolve this, the base wallet acts as a relay for all sub wallets. As can be seen in the architecture diagram above, all messages go through the base wallet. Whenever a sub wallet creates a new key or connection, it will be registered at the base wallet. This allows the base wallet to look at the recipient keys for a message and determine which wallet it needs to route to.

"},{"location":"features/Multitenancy/#mediation","title":"Mediation","text":"

ACA-Py allows messages to be routed through a mediator, and multi-tenancy can be used in combination with external mediators. The following scenarios are possible:

  1. The base wallet has a default mediator set that will be used by sub wallets.
    • Use --mediator-invitation to connect to the mediator, request mediation, and set it as the default mediator
    • Use default-mediator-id if you're already connected to the mediator and mediation is granted (e.g. after restart).
    • When a sub wallet creates a connection or key it will be registered at the mediator via the base wallet connection. The base wallet will still act as a relay and route the messages to the correct sub wallets.
    • Pro: Not every wallet needs to create a connection with the mediator
    • Con: Sub wallets have no control over the mediator.
  2. Sub wallet creates a connection with the mediator and requests mediation
    • Use mediation as you would in a non-multi-tenant agent, however, the base wallet will still act as a relay.
    • You can set the default mediator to use for connections (using the mediation API).
    • Pro: Sub wallets have control over the mediator.
    • Con: Every wallet needs to create a connection with the mediator and request mediation.

The main tradeoff between option 1. and 2. is redundancy and control. Option 1. doesn't require every sub wallet to create a new connection with the mediator and request mediation. When all sub wallets are going to use the same mediator, this can be a huge benefit. Option 2. gives more control over the mediator being used. This could be useful if e.g. all wallets use a different mediator.

A combination of options 1 and 2 is also possible. In this case, two mediators will be used and the sub wallet mediator will forward to the base wallet mediator, which will, in turn, forward to the ACA-Py instance.

+---------------------+      +----------------------+      +--------------------+\n| Sub wallet mediator | ---> | Base wallet mediator | ---> | Multi-tenant agent |\n+---------------------+      +----------------------+      +--------------------+\n
"},{"location":"features/Multitenancy/#webhooks","title":"Webhooks","text":""},{"location":"features/Multitenancy/#webhook-urls","title":"Webhook URLs","text":"

ACA-Py makes use of webhook events to call back to the controller. Multiple webhook targets can be specified; however, in multi-tenant mode, it may be desirable to specify different webhook targets per wallet.

When creating a wallet, the wallet_dispatch_type parameter can be used to specify how webhooks for the wallet should be dispatched. The options are:

  • default: Dispatch only to webhooks associated with this wallet.
  • base: Dispatch only to webhooks associated with the base wallet.
  • both: Dispatch to both webhook targets.

If either default or both is specified, you can set the webhook URLs specific to this wallet using the wallet_webhook_urls option.

Example:

// POST /multitenancy/wallet\n{\n  // ... other params ...\n  \"wallet_dispatch_type\": \"default\",\n  \"wallet_webhook_urls\": [\n    \"https://webhook-url.com/path\",\n    \"https://another-url.com/site\"\n  ]\n}\n
"},{"location":"features/Multitenancy/#identifying-the-wallet","title":"Identifying the wallet","text":"

When the webhook URLs of the base wallet are used, or when multiple wallets specify the same webhook URL, it can be hard to identify the wallet an event belongs to. To resolve this, each webhook event will include the wallet id the event corresponds to.

For HTTP events the wallet id is included as the x-wallet-id header. For WebSockets, the wallet id is included in the enclosing JSON object.

HTTP example:

POST <webhook-url>/{topic} [headers=x-wallet-id]\n{\n    // event payload\n}\n

WebSocket example:

{\n  \"topic\": \"{topic}\",\n  \"wallet_id\": \"{wallet_id}\",\n  \"payload\": {\n    // event payload\n  }\n}\n
"},{"location":"features/Multitenancy/#authentication","title":"Authentication","text":"

When multi-tenancy is not enabled you can authenticate with the agent using the x-api-key header. As there is only a single wallet, this provides sufficient authentication and authorization.

For sub wallets, an additional authentication method is introduced using JSON Web Tokens (JWTs). A token parameter is returned after creating a wallet or calling the get token endpoint. This token must be provided for every admin API call you want to perform for the wallet using the Bearer authorization scheme.

Example

GET /connections [headers=\"Authorization: Bearer {token}\"]\n

The Authorization header is in addition to the Admin API key. So if the admin-api-key is enabled (which should be enabled in production) both the Authorization and the x-api-key headers should be provided when making calls to a sub wallet. For calls to a base wallet, only the x-api-key should be provided.
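For example, a Python controller would supply both headers when calling a sub wallet's admin API (a minimal sketch using the requests library; the admin URL, API key, and token values are placeholders):

import requests

ACAPY_ADMIN_URL = "http://localhost:8031"  # placeholder admin host/port
ADMIN_API_KEY = "insecure-api-key"         # the value configured via admin-api-key
TENANT_TOKEN = "eyJ0eXAiOiJKV1Qi..."       # token returned when the sub wallet was created

# Sub wallet calls need both the Admin API key and the tenant's Bearer token.
response = requests.get(
    f"{ACAPY_ADMIN_URL}/connections",
    headers={
        "x-api-key": ADMIN_API_KEY,
        "Authorization": f"Bearer {TENANT_TOKEN}",
    },
)
print(response.json())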

"},{"location":"features/Multitenancy/#getting-a-token","title":"Getting a token","text":"

A token can be obtained in two ways. The first is the token parameter in the response of the create wallet endpoint (POST /multitenancy/wallet). The second is the get wallet token endpoint (POST /multitenancy/wallet/{wallet_id}/token).

"},{"location":"features/Multitenancy/#method-1-register-new-tenant","title":"Method 1: Register new tenant","text":"

This is the method to use when you haven't already registered a tenant. In this process, you first register a tenant; an object containing your tenant token, along with other useful information such as your wallet id, is then returned to you.

Example

new_tenant='{\n  \"image_url\": \"https://aries.ca/images/sample.png\",\n  \"key_management_mode\": \"managed\",\n  \"label\": \"example-label-02\",\n  \"wallet_dispatch_type\": \"default\",\n  \"wallet_key\": \"example-encryption-key-02\",\n  \"wallet_name\": \"example-name-02\",\n  \"wallet_type\": \"askar\",\n  \"wallet_webhook_urls\": [\n    \"https://example.com/webhook\"\n  ]\n}'\n
echo $new_tenant | curl -X POST \"${ACAPY_ADMIN_URL}/multitenancy/wallet\" \\\n   -H \"Content-Type: application/json\" \\\n   -H \"X-Api-Key: $ACAPY_ADMIN_URL_API_KEY\" \\\n   -d @-\n

Response

{\n  \"settings\": {\n    \"wallet.type\": \"askar\",\n    \"wallet.name\": \"example-name-02\",\n    \"wallet.webhook_urls\": [\n      \"https://example.com/webhook\"\n    ],\n    \"wallet.dispatch_type\": \"default\",\n    \"default_label\": \"example-label-02\",\n    \"image_url\": \"https://aries.ca/images/sample.png\",\n    \"wallet.id\": \"3b64ad0d-f556-4c04-92bc-cd95bfde58cd\"\n  },\n  \"key_management_mode\": \"managed\",\n  \"updated_at\": \"2022-04-01T15:12:35.474975Z\",\n  \"wallet_id\": \"3b64ad0d-f556-4c04-92bc-cd95bfde58cd\",\n  \"created_at\": \"2022-04-01T15:12:35.474975Z\",\n  \"token\": \"eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ3YWxsZXRfaWQiOiIzYjY0YWQwZC1mNTU2LTRjMDQtOTJiYy1jZDk1YmZkZTU4Y2QifQ.A4eWbSR2M1Z6mbjcSLOlciBuUejehLyytCVyeUlxI0E\"\n}\n
"},{"location":"features/Multitenancy/#method-2-get-tenant-token","title":"Method 2: Get tenant token","text":"

This method allows you to retrieve a tenant token for an already registered tenant. To retrieve a token you will need an Admin API key (if your admin is protected with one), the wallet_key, and the wallet_id of the tenant. Note that calling the get tenant token endpoint invalidates the old token. This is useful if the old token needs to be revoked, but it does mean that you can't have multiple valid authentication tokens for the same wallet: only the most recently generated token is valid.

Example

curl -X POST \"${ACAPY_ADMIN_URL}/multitenancy/wallet/{wallet_id}/token\" \\\n   -H \"Content-Type: application/json\" \\\n   -H \"X-Api-Key: $ACAPY_ADMIN_URL_API_KEY\" \\\n   -d '{ \"wallet_key\": \"example-encryption-key-02\" }'\n

Response

{\n  \"token\": \"eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ3YWxsZXRfaWQiOiIzYjY0YWQwZC1mNTU2LTRjMDQtOTJiYy1jZDk1YmZkZTU4Y2QifQ.A4eWbSR2M1Z6mbjcSLOlciBuUejehLyytCVyeUlxI0E\"\n}\n

In unmanaged mode, the get token endpoint also requires the wallet_key parameter to be included in the request body. The wallet key will be included in the JWT so the wallet can be unlocked when making requests to the admin API.

{\n  \"wallet_id\": \"wallet_id\",\n  // \"wallet_key\" is only present in unmanaged mode\n  \"wallet_key\": \"wallet_key\"\n}\n

In unmanaged mode, sending the wallet_key to unlock the wallet in every request is not \u201csecure\u201d, but it keeps things simple for now. Eventually, the authentication method should be pluggable, and unmanaged mode would simply mean that the key to unlock the wallet is not managed by ACA-Py.

"},{"location":"features/Multitenancy/#jwt-secret","title":"JWT Secret","text":"

For deterministic JWT creation and verification across restarts and multiple instances, the same JWT secret must be used. Therefore, a --jwt-secret param is added to the ACA-Py agent that will be used for JWT creation and verification.
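To illustrate: the tenant tokens in the examples above are HS256-signed and carry the wallet id in their payload, so a controller that knows the agent's JWT secret could decode one as follows (a hedged sketch using the PyJWT library; the secret and token values are placeholders):

import jwt  # PyJWT

JWT_SECRET = "my-jwt-secret"       # the value passed via --jwt-secret (placeholder)
token = "eyJ0eXAiOiJKV1QiLCJh..."  # a tenant token from the create wallet response

# Verifies the signature and recovers the wallet id embedded in the token.
payload = jwt.decode(token, JWT_SECRET, algorithms=["HS256"])
print(payload["wallet_id"])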

"},{"location":"features/Multitenancy/#swaggerui","title":"SwaggerUI","text":"

When using the SwaggerUI you can click the icon next to each of the endpoints or the Authorize button at the top to set the correct authentication headers. Make sure to also include the Bearer part in the input field. This won't be automatically added.

"},{"location":"features/Multitenancy/#tenant-management","title":"Tenant Management","text":"

After registering a tenant, which effectively creates a subwallet, you may need to update the tenant information or delete it. The following describes how to accomplish both goals.

"},{"location":"features/Multitenancy/#update-a-tenant","title":"Update a tenant","text":"

The following properties of a multitenancy wallet's tenants can be updated: image_url, label, wallet_dispatch_type, and wallet_webhook_urls. To update these properties, PUT a request JSON containing the properties you wish to update, along with the updated values, to the /multitenancy/wallet/${TENANT_WALLET_ID} admin endpoint. If the Admin API endpoint is protected, also include the Admin API Key in the request header.

Example

update_tenant='{\n  \"image_url\": \"https://aries.ca/images/sample-updated.png\",\n  \"label\": \"example-label-02-updated\",\n  \"wallet_webhook_urls\": [\n    \"https://example.com/webhook/updated\"\n  ]\n}'\n
echo $update_tenant | curl  -X PUT \"${ACAPY_ADMIN_URL}/multitenancy/wallet/${TENANT_WALLET_ID}\" \\\n   -H \"Content-Type: application/json\" \\\n   -H \"x-api-key: $ACAPY_ADMIN_URL_API_KEY\" \\\n   -d @-\n

Response

{\n  \"settings\": {\n    \"wallet.type\": \"askar\",\n    \"wallet.name\": \"example-name-02\",\n    \"wallet.webhook_urls\": [\n      \"https://example.com/webhook/updated\"\n    ],\n    \"wallet.dispatch_type\": \"default\",\n    \"default_label\": \"example-label-02-updated\",\n    \"image_url\": \"https://aries.ca/images/sample-updated.png\",\n    \"wallet.id\": \"3b64ad0d-f556-4c04-92bc-cd95bfde58cd\"\n  },\n  \"key_management_mode\": \"managed\",\n  \"updated_at\": \"2022-04-01T16:23:58.642004Z\",\n  \"wallet_id\": \"3b64ad0d-f556-4c04-92bc-cd95bfde58cd\",\n  \"created_at\": \"2022-04-01T15:12:35.474975Z\"\n}\n

An Admin API Key is all that is ALLOWED to be included in a request header during an update. Including the Bearer token header will result in a 401: Unauthorized error.

"},{"location":"features/Multitenancy/#remove-a-tenant","title":"Remove a tenant","text":"

The following information is required to delete a tenant:

  • wallet_id
  • wallet_key
  • {Admin_Api_Key} if admin is protected

Example

curl -X POST \"${ACAPY_ADMIN_URL}/multitenancy/wallet/{wallet_id}/remove\" \\\n   -H \"Content-Type: application/json\" \\\n   -H \"x-api-key: $ACAPY_ADMIN_URL_API_KEY\" \\\n   -d '{ \"wallet_key\": \"example-encryption-key-02\" }'\n

Response

{}\n
"},{"location":"features/Multitenancy/#per-tenant-settings","title":"Per tenant settings","text":"

PR #2233 adds the ability to configure ACA-Py startup parameters/environment variables at a tenant/subwallet level. The following subset of settings can be updated when creating or updating the subwallet:

| Labels | Setting |
| --- | --- |
| ACAPY_LOG_LEVEL / log-level | log.level |
| ACAPY_INVITE_PUBLIC / invite-public | debug.invite_public |
| ACAPY_PUBLIC_INVITES / public-invites | public_invites |
| ACAPY_AUTO_ACCEPT_INVITES / auto-accept-invites | debug.auto_accept_invites |
| ACAPY_AUTO_ACCEPT_REQUESTS / auto-accept-requests | debug.auto_accept_requests |
| ACAPY_AUTO_PING_CONNECTION / auto-ping-connection | auto_ping_connection |
| ACAPY_MONITOR_PING / monitor-ping | debug.monitor_ping |
| ACAPY_AUTO_RESPOND_MESSAGES / auto-respond-messages | debug.auto_respond_messages |
| ACAPY_AUTO_RESPOND_CREDENTIAL_OFFER / auto-respond-credential-offer | debug.auto_respond_credential_offer |
| ACAPY_AUTO_RESPOND_CREDENTIAL_REQUEST / auto-respond-credential-request | debug.auto_respond_credential_request |
| ACAPY_AUTO_VERIFY_PRESENTATION / auto-verify-presentation | debug.auto_verify_presentation |
| ACAPY_NOTIFY_REVOCATION / notify-revocation | revocation.notify |
| ACAPY_AUTO_REQUEST_ENDORSEMENT / auto-request-endorsement | endorser.auto_request |
| ACAPY_AUTO_WRITE_TRANSACTIONS / auto-write-transactions | endorser.auto_write |
| ACAPY_CREATE_REVOCATION_TRANSACTIONS / auto-create-revocation-transactions | endorser.auto_create_rev_reg |
| ACAPY_ENDORSER_ROLE / endorser-protocol-role | endorser.protocol_role |
  • POST /multitenancy/wallet

Added extra_settings dict field to request schema. extra_settings can be configured in the request body as below:

Example Request

{\n    \"wallet_name\": \" ... \",\n    \"default_label\": \" ... \",\n    \"wallet_type\": \" ... \",\n    \"wallet_key\": \" ... \",\n    \"key_management_mode\": \"managed\",\n    \"wallet_webhook_urls\": [],\n    \"wallet_dispatch_type\": \"base\",\n    \"extra_settings\": {\n        \"ACAPY_LOG_LEVEL\": \"INFO\",\n        \"ACAPY_INVITE_PUBLIC\": true,\n        \"public-invites\": true\n    }\n}\n
echo $new_tenant | curl -X POST \"${ACAPY_ADMIN_URL}/multitenancy/wallet\" \\\n  -H \"Content-Type: application/json\" \\\n  -H \"X-Api-Key: $ACAPY_ADMIN_URL_API_KEY\" \\\n  -d @-\n
  • PUT /multitenancy/wallet/{wallet_id}

Added extra_settings dict field to request schema.

Example Request

  {\n    \"wallet_webhook_urls\": [ ... ],\n    \"wallet_dispatch_type\": \"default\",\n    \"label\": \" ... \",\n    \"image_url\": \" ... \",\n    \"extra_settings\": {\n        \"ACAPY_LOG_LEVEL\": \"INFO\",\n        \"ACAPY_INVITE_PUBLIC\": true,\n        \"ACAPY_PUBLIC_INVITES\": false\n    }\n  }\n
  echo $update_tenant | curl  -X PUT \"${ACAPY_ADMIN_URL}/multitenancy/wallet/${WALLET_ID}\" \\\n   -H \"Content-Type: application/json\" \\\n   -H \"x-api-key: $ACAPY_ADMIN_URL_API_KEY\" \\\n   -d @-\n
"},{"location":"features/PlugIns/","title":"Deeper Dive: Aca-Py Plug-Ins","text":""},{"location":"features/PlugIns/#whats-in-a-plug-in-and-how-does-it-work","title":"What's in a Plug-In and How does it Work?","text":"

Plug-ins are loaded on Aca-Py startup based on the following parameters:

  • --plugin - identifies the plug-in library to load
  • --block-plugin - identifies plug-ins (including built-ins) that are not to be loaded
  • --plugin-config - identifies a configuration parameter for a plug-in
  • --plugin-config-value - identifies a value for a plug-in configuration

The --plugin parameter specifies a package that is loaded by Aca-Py at runtime and extends Aca-Py by adding support for additional protocols and message types, and/or extending the Admin API with additional endpoints.

The original plug-in design (which we will call the \"old\" model) explicitly included message_types.py and routes.py (to add Admin APIs). Functionality was added later (we'll call this the \"new\" model) to allow the plug-in to include a generic setup package that can perform arbitrary initialization. The \"new\" model also supports a definition.py file that can specify plug-in version information: the major/minor plug-in version, as well as the minimum supported version (if another agent is running an older version of the plug-in).

You can discover which plug-ins are installed in an aca-py instance by calling the GET /plugins endpoint (in the \"server\" section). Note that this will return all loaded protocols, including the built-ins. You can call GET /status/config to inspect the Aca-Py configuration, which will include the configuration for the external plug-ins.
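For example (a minimal sketch; the admin host, port, and API key are placeholders):

import requests

ACAPY_ADMIN_URL = "http://localhost:8031"  # placeholder admin host/port

# Lists all loaded plug-ins, including the built-in protocols.
resp = requests.get(
    f"{ACAPY_ADMIN_URL}/plugins",
    headers={"x-api-key": "insecure-api-key"},
)
print(resp.json())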

"},{"location":"features/PlugIns/#setup-method","title":"setup method","text":"

If a setup method is provided, it will be called. If not, the message_types.py and routes.py will be explicitly loaded.

This would be in the package/module __init__.py:

from aries_cloudagent.config.injection_context import InjectionContext\n\nasync def setup(context: InjectionContext):\n    pass\n

TODO I couldn't find an implementation of a custom setup in any of the existing plug-ins, so I'm not completely sure what the best practices are for this option.
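Pending better guidance, here is a minimal sketch of what a custom setup could do, mirroring the registration that ACA-Py performs by default; the message type URI and class path are hypothetical, not established best practice:

from aries_cloudagent.config.injection_context import InjectionContext
from aries_cloudagent.core.protocol_registry import ProtocolRegistry

async def setup(context: InjectionContext):
    # Resolve the protocol registry from the injection context and register
    # this plug-in's message types, much as the default loading path does.
    protocol_registry = context.inject(ProtocolRegistry)
    protocol_registry.register_message_types(
        {
            "https://didcomm.org/my-protocol/1.0/my-message":
                "my_plugin.v1_0.messages.my_message.MyMessage"
        }
    )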

"},{"location":"features/PlugIns/#message_typespy","title":"message_types.py","text":"

When loading a plug-in, if there is a message_types.py available, Aca-Py will check the following attributes to initialize the protocol(s); a sketch follows the list:

  • MESSAGE_TYPES - identifies message types supported by the protocol
  • CONTROLLERS - identifies protocol controllers
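A minimal sketch of a message_types.py (the protocol URI and module paths are hypothetical):

# message_types.py
PROTOCOL_URI = "https://didcomm.org/my-protocol/1.0"

# Maps message type URIs to the class paths of their message implementations.
MESSAGE_TYPES = {
    f"{PROTOCOL_URI}/my-message": "my_plugin.v1_0.messages.my_message.MyMessage",
}

# Maps the protocol URI to the class path of its protocol controller.
CONTROLLERS = {
    PROTOCOL_URI: "my_plugin.v1_0.controller.Controller",
}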
"},{"location":"features/PlugIns/#routespy","title":"routes.py","text":"

If routes.py is available, then Aca-Py will call the following functions to initialize the Admin endpoints; a sketch follows the list:

  • register() - registers routes for the new Admin endpoints
  • register_events() - registers the events this package will listen for/respond to
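A minimal sketch of a routes.py (ACA-Py's admin server is aiohttp-based; the route path and event pattern are hypothetical):

# routes.py
import re

from aiohttp import web

from aries_cloudagent.core.event_bus import Event, EventBus
from aries_cloudagent.core.profile import Profile

async def my_endpoint(request: web.Request):
    """Handle a request to the new Admin endpoint."""
    return web.json_response({"status": "ok"})

async def on_my_event(profile: Profile, event: Event):
    """React to an event this package listens for."""
    print(f"received event: {event.topic}")

async def register(app: web.Application):
    """Register routes for the new Admin endpoints."""
    app.add_routes([web.get("/my-plugin/status", my_endpoint, allow_head=False)])

def register_events(event_bus: EventBus):
    """Subscribe to the events this package will listen for/respond to."""
    event_bus.subscribe(re.compile("^acapy::my-plugin::.*$"), on_my_event)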
"},{"location":"features/PlugIns/#definitionpy","title":"definition.py","text":"

If definition.py is available, Aca-Py will read this package to determine protocol version information. An example specifying two protocol versions follows:

versions = [\n    {\n        \"major_version\": 1,\n        \"minimum_minor_version\": 0,\n        \"current_minor_version\": 0,\n        \"path\": \"v1_0\",\n    },\n    {\n        \"major_version\": 2,\n        \"minimum_minor_version\": 0,\n        \"current_minor_version\": 0,\n        \"path\": \"v2_0\",\n    },\n]\n

The attributes are:

  • major_version - specifies the protocol major version
  • current_minor_version - specifies the protocol minor version
  • minimum_minor_version - specifies the minimum supported version (if a lower version is installed in another agent)
  • path - specifies the sub-path within the package for this version
"},{"location":"features/PlugIns/#loading-aca-py-plug-ins-at-runtime","title":"Loading Aca-Py Plug-Ins at Runtime","text":"

The load sequence for a plug-in is shown below (the \"Startup\" participant depends on how Aca-Py is running: upgrade, provision or start):

sequenceDiagram\n  participant Startup\n  Note right of Startup: Configuration is loaded on startup<br/>from aca-py config params\n    Startup->>+ArgParse: configure\n    ArgParse->>settings:  [\"external_plugins\"]\n    ArgParse->>settings:  [\"blocked_plugins\"]\n\n    Startup->>+Conductor: setup()\n      Note right of Conductor: Each configured plug-in is validated and loaded\n      Conductor->>DefaultContext:  build_context()\n      DefaultContext->>DefaultContext:  load_plugins()\n      DefaultContext->>+PluginRegistry:  register_package() (for built-in protocols)\n        PluginRegistry->>PluginRegistry:  register_plugin() (for each sub-package)\n      DefaultContext->>PluginRegistry:  register_plugin() (for non-protocol built-ins)\n      loop for each external plug-in\n      DefaultContext->>PluginRegistry:  register_plugin()\n      alt if a setup method is provided\n        PluginRegistry->>ExternalPlugIn:  has setup\n      else if routes and/or message_types are provided\n        PluginRegistry->>ExternalPlugIn:  has routes\n        PluginRegistry->>ExternalPlugIn:  has message_types\n      end\n      opt if definition is provided\n        PluginRegistry->>ExternalPlugIn:  definition()\n      end\n      end\n      DefaultContext->>PluginRegistry:  init_context()\n        loop for each external plug-in\n        alt if a setup method is provided\n          PluginRegistry->>ExternalPlugIn:  setup()\n        else if a setup method is NOT provided\n          PluginRegistry->>PluginRegistry:  load_protocols()\n          PluginRegistry->>PluginRegistry:  load_protocol_version()\n          PluginRegistry->>ProtocolRegistry:  register_message_types()\n          PluginRegistry->>ProtocolRegistry:  register_controllers()\n        end\n        PluginRegistry->>PluginRegistry:  register_protocol_events()\n      end\n\n      Conductor->>Conductor:  load_transports()\n\n      Note right of Conductor: If the admin server is enabled, plug-in routes are added\n      Conductor->>AdminServer:  create admin server if enabled\n\n    Startup->>Conductor: start()\n      Conductor->>Conductor:  start_transports()\n      Conductor->>AdminServer:  start()\n\n    Note right of Startup: the following represents an<br/>admin server api request\n    Startup->>AdminServer:  setup_context() (called on each request)\n      AdminServer->>PluginRegistry:  register_admin_routes()\n      loop for each external plug-in\n        PluginRegistry->>ExternalPlugIn:  routes.register() (to register endpoints)\n      end
"},{"location":"features/PlugIns/#developing-a-new-plug-in","title":"Developing a New Plug-In","text":"

When developing a new plug-in:

  • If you are providing a new protocol or defining message types, you should include a definition.py file.
  • If you are providing a new protocol or defining message types, you should include a message_types.py file.
  • If you are providing additional Admin endpoints, you should include a routes.py file.
  • If you are providing any other functionality, you should provide a setup method (in your package's __init__.py, as described above) to initialize the custom functionality. No guidance is currently available for this option.
"},{"location":"features/PlugIns/#pip-vs-poetry-support","title":"PIP vs Poetry Support","text":"

Most Aca-Py plug-ins support installation using poetry. It is recommended that your package support installation using either pip or poetry, to provide maximum flexibility for users of your plug-in.

"},{"location":"features/PlugIns/#plug-in-demo","title":"Plug-In Demo","text":"

TBD

"},{"location":"features/PlugIns/#aca-py-plug-ins","title":"Aca-Py Plug-ins","text":"

This list was originally published in this hackmd document.

| Maintainer | Name | Features | Last Update | Link |
| --- | --- | --- | --- | --- |
| BCGov | Redis Events | Inbound/Outbound message queue | Sep 2022 | https://github.com/bcgov/aries-acapy-plugin-redis-events |
| Hyperledger | Aries Toolbox | UI for ACA-py | Aug 2022 | https://github.com/hyperledger/aries-toolbox |
| Hyperledger | Aries ACApy Plugin Toolbox | Protocol Handlers | Aug 2022 | https://github.com/hyperledger/aries-acapy-plugin-toolbox |
| Indicio | Data Transfer | Specific Data import | Aug 2022 | https://github.com/Indicio-tech/aries-acapy-plugin-data-transfer |
| Indicio | Question & Answer | Non-Aries Protocol | Aug 2022 | https://github.com/Indicio-tech/acapy-plugin-qa |
| Indicio | Acapy-plugin-pickup | Fetching Messages from Mediator | Aug 2022 | https://github.com/Indicio-tech/acapy-plugin-pickup |
| Indicio | Machine Readable GF | Governance Framework | Mar 2022 | https://github.com/Indicio-tech/mrgf |
| Indicio | Cache Redis | Cache for Scalability | Jul 2022 | https://github.com/Indicio-tech/aries-acapy-cache-redis |
| SICPA Dlab | Kafka Events | Event Bus Integration | Aug 2022 | https://github.com/sicpa-dlab/aries-acapy-plugin-kafka-events |
| SICPA Dlab | DidComm Resolver | Universal Resolver for DIDComm | Aug 2022 | https://github.com/sicpa-dlab/acapy-resolver-didcomm |
| SICPA Dlab | Universal Resolver | Multi-ledger Reading | Jul 2021 | https://github.com/sicpa-dlab/acapy-resolver-universal |
| DDX | mydata-did-protocol | | Oct 2022 | https://github.com/decentralised-dataexchange/acapy-mydata-did-protocol |
| BCGov | Basic Message Storage | Basic message storage (traction) | Dec 2022 | https://github.com/bcgov/traction/tree/develop/plugins/basicmessage_storage |
| BCGov | Multi-tenant Provider | Multi-tenant Provider (traction) | Dec 2022 | https://github.com/bcgov/traction/tree/develop/plugins/multitenant_provider |
| BCGov | Traction Innkeeper | Innkeeper (traction) | Feb 2023 | https://github.com/bcgov/traction/tree/develop/plugins/traction_innkeeper |
"},{"location":"features/PlugIns/#references","title":"References","text":"

The following links may be helpful or provide additional context for the current plug-in support. (These are links to issues or pull requests that were raised during plug-in development.)

Configuration params:

  • https://github.com/hyperledger/aries-cloudagent-python/issues/1121
  • https://hackmd.io/ROUzENdpQ12cz3UB9qk1nA
  • https://github.com/hyperledger/aries-cloudagent-python/pull/1226

Loading plug-ins:

  • https://github.com/hyperledger/aries-cloudagent-python/pull/1086

Versioning for plug-ins:

  • https://github.com/hyperledger/aries-cloudagent-python/pull/443
"},{"location":"features/QualifiedDIDs/","title":"Qualified DIDs In ACA-Py","text":""},{"location":"features/QualifiedDIDs/#context","title":"Context","text":"

In the past, ACA-Py has used \"unqualified\" DIDs by convention established early on in the Aries ecosystem, before the concept of Peer DIDs, or DIDs that existed only between peers and were not (necessarily) published to a distributed ledger, fully matured. These \"unqualified\" DIDs were effectively Indy Nyms that had not been published to an Indy network. Key material and service endpoints were communicated by embedding the DID Document for the \"DID\" in DID Exchange request and response messages.

For those familiar with the DID Core Specification, it is a stretch to refer to these unqualified DIDs as DIDs. Their usage is being phased out, as dictated by Aries RFC 0793: Unqualified DID Transition, in favor of the did:peer DID Method. ACA-Py's support for this method and its use in DID Exchange and DID Rotation is described below.

"},{"location":"features/QualifiedDIDs/#did-exchange","title":"DID Exchange","text":"

When using DID Exchange as initiated by an Out-of-Band invitation:

  • POST /out-of-band/create-invitation accepts two parameters (in addition to others):
  • use_did_method: a DID Method (options: did:peer:2 or did:peer:4) indicating that a DID of that type is created (if necessary) and used in the invitation. If a DID of the type has to be created, it is flagged as the \"invitation\" DID and used in all future invitations so that connection reuse is the default behaviour.
    • This is the recommended approach, and we further recommend using did:peer:4.
  • use_did: a complete DID, which will be used for the invitation being established. This supports the edge case of an entity wanting to use a new DID for every invitation. It is the responsibility of the controller to create the DID before passing it in.
  • If not provided, the 0.11.0 behaviour of an unqualified DID is used.
    • We expect this behaviour will change in a later release to be that use_did_method=\"did:peer:4\" is the default, which is created and (re)used.
  • The provided handshake protocol list must also include didexchange/1.1. Optionally, didexchange/1.0 may also be provided, enabling backwards compatibility with agents that do not yet support didexchange/1.1 and still use unqualified DIDs. (A sketch of such a request follows this list.)
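For example, from a Python controller (hedged: the body shown reflects our reading of the request schema; the admin URL and API key are placeholders):

import requests

ACAPY_ADMIN_URL = "http://localhost:8031"  # placeholder admin host/port

body = {
    "handshake_protocols": [
        "https://didcomm.org/didexchange/1.1",
        "https://didcomm.org/didexchange/1.0",  # optional, for backwards compatibility
    ],
    "use_did_method": "did:peer:4",
}
resp = requests.post(
    f"{ACAPY_ADMIN_URL}/out-of-band/create-invitation",
    json=body,
    headers={"x-api-key": "insecure-api-key"},
)
print(resp.json()["invitation"])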

When receiving an OOB invitation or creating a DID Exchange request to a known Public DID:

  • POST /didexchange/create-request and POST /didexchange/{conn_id}/accept-invitation accepts two parameters (in addition to others):
  • use_did_method: a DID Method (options: did:peer:2 or did:peer:4) indicating that a DID of that type should be created and used for the connection.
    • This is the recommended approach, and we further recommend using did:peer:4.
  • use_did: a complete DID, which will be used for the connection being established. This supports the edge case of an entity wanting to use the same DID for more than one connection. It is the responsibility of the controller to create the DID before passing it in.
  • If neither option is provided, the 0.11.0 behaviour of an unqualified DID is created if DID Exchange 1.0 is used, and a DID Peer 4 is used if DID Exchange 1.1 is used.
    • We expect this behaviour will change in a later release to be that a did:peer:4 is created and DID Exchange 1.1 is always used.
  • When auto-accept is used with DID Exchange, then an unqualified DID is created if DID Exchange 1.0 is being used, and a DID Peer 4 is used if DID Exchange 1.1 is used.

With these changes, an existing ACA-Py installation using unqualified DIDs can upgrade to use qualified DIDs:

  • Reactively in 0.12.0 and later, by using the same type of DID as the other agent.
  • Proactively, by adding the use_did or use_did_method parameter on the POST /out-of-band/create-invitation, POST /didexchange/create-request, and POST /didexchange/{conn_id}/accept-invitation endpoints and specifying did:peer:2 or did:peer:4.
  • The other agent must be able to process the selected DID Method.
  • Proactively, by updating to use DID Exchange v1.1 and having the other side auto-accept the connection.
"},{"location":"features/QualifiedDIDs/#did-rotation","title":"DID Rotation","text":"

As part of the transition to qualified DIDs, existing connections may be updated to qualified DIDs using the DID Rotate protocol. This is not strictly required; since DIDComm v1 depends on recipient keys for correlating a received message back to a connection, the DID itself is mostly ignored. However, as we transition to DIDComm v2 or if it is desired to update the keys associated with a connection, DID Rotate may be used to update keys and service endpoints.

The steps to do so are:

  • The rotating party creates a new DID using POST /wallet/did/create (or through the endpoints provided by a plugged in DID Method, if relevant).
  • For example, the rotating party will likely create a new did:peer:4.
  • The rotating party initiates the rotation with POST /did-rotate/{conn_id}/rotate, providing the created DID as the to_did in the body of the Admin API request (sketched after this list).
  • If the receiving party supports DID rotation, a did_rotate webhook will be emitted indicating success.
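A sketch of the rotation call (the to_did field is as described above; the connection id, DID, and admin details are placeholders):

import requests

ACAPY_ADMIN_URL = "http://localhost:8031"  # placeholder admin host/port
conn_id = "3fa85f64-5717-4562-b3fc-2c963f66afa6"  # an existing connection id
new_did = "did:peer:4zQm..."  # the DID created in the first step (truncated)

# Initiates the rotation; the other party is informed via the DID Rotate protocol.
resp = requests.post(
    f"{ACAPY_ADMIN_URL}/did-rotate/{conn_id}/rotate",
    json={"to_did": new_did},
    headers={"x-api-key": "insecure-api-key"},
)
print(resp.json())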
"},{"location":"features/SelectiveDisclosureJWTs/","title":"SD-JWT Implementation in ACA-Py","text":"

This document describes the implementation of SD-JWTs in ACA-Py according to the Selective Disclosure for JWTs (SD-JWT) Specification, which defines a mechanism for selective disclosure of individual elements of a JSON object used as the payload of a JSON Web Signature structure.

This implementation adds an important privacy-preserving feature to JWTs, since the receiver of an unencrypted JWT can view all claims within. This feature allows the holder to present only a relevant subset of the claims for a given presentation. The issuer includes plaintext claims, called disclosures, outside of the JWT. Each disclosure corresponds to a hidden claim within the JWT. When a holder prepares a presentation, they include along with the JWT only the disclosures corresponding to the claims they wish to reveal. The verifier verifies that the disclosures in fact correspond to claim values within the issuer-signed JWT. The verifier cannot view the claim values not disclosed by the holder.

In addition, this implementation includes an optional mechanism for key binding, which is the concept of binding an SD-JWT to a holder's public key and requiring that the holder prove possession of the corresponding private key when presenting the SD-JWT.

"},{"location":"features/SelectiveDisclosureJWTs/#issuer-instructions","title":"Issuer Instructions","text":"

The issuer determines which claims in an SD-JWT can be selectively disclosable. In this implementation, all claims at all levels of the JSON structure are by default selectively disclosable. If the issuer wishes for certain claims to always be visible, they can indicate which claims should not be selectively disclosable, as described below. Essential verification data such as iss, iat, exp, and cnf are always visible.

The issuer creates a list of JSON paths for the claims that will not be selectively disclosable. Here is an example payload:

{\n    \"birthdate\": \"1940-01-01\",\n    \"address\": {\n        \"street_address\": \"123 Main St\",\n        \"locality\": \"Anytown\",\n        \"region\": \"Anystate\",\n        \"country\": \"US\"\n    },\n    \"nationalities\": [\"US\", \"DE\", \"SA\"]\n}\n
| Attribute to access | JSON path |
| --- | --- |
| \"birthdate\" | \"birthdate\" |
| The country attribute within the address dictionary | \"address.country\" |
| The second item in the nationalities list | \"nationalities[1]\" |
| All items in the nationalities list | \"nationalities[0:2]\" |

The specification defines options for how the issuer can handle nested structures with respect to selective disclosability. As mentioned, all claims at all levels of the JSON structure are by default selectively disclosable.

"},{"location":"features/SelectiveDisclosureJWTs/#option-1-flat-sd-jwt","title":"Option 1: Flat SD-JWT","text":"

The issuer can decide to treat the address claim in the above example payload as a block that can either be disclosed completely or not at all.

The issuer lists out all the claims inside \"address\" in the non_sd_list, but not address itself:

non_sd_list = [\n    \"address.street_address\",\n    \"address.locality\",\n    \"address.region\",\n    \"address.country\",\n]\n
"},{"location":"features/SelectiveDisclosureJWTs/#option-2-structured-sd-jwt","title":"Option 2: Structured SD-JWT","text":"

The issuer may instead decide to make the address claim contents selectively disclosable individually.

The issuer lists only \"address\" in the non_sd_list.

non_sd_list = [\"address\"]\n
"},{"location":"features/SelectiveDisclosureJWTs/#option-3-sd-jwt-with-recursive-disclosures","title":"Option 3: SD-JWT with Recursive Disclosures","text":"

The issuer may also decide to make the address claim contents selectively disclosable recursively, i.e., the address claim is made selectively disclosable as well as its sub-claims.

The issuer lists neither address nor the subclaims of address in the non_sd_list, leaving all with their default selective disclosability. If all claims can be selectively disclosable, the non_sd_list need not be defined explicitly.

"},{"location":"features/SelectiveDisclosureJWTs/#walk-through-of-sd-jwt-implementation","title":"Walk-Through of SD-JWT Implementation","text":""},{"location":"features/SelectiveDisclosureJWTs/#signing-sd-jwts","title":"Signing SD-JWTs","text":""},{"location":"features/SelectiveDisclosureJWTs/#example-input-to-walletsd-jwtsign-endpoint","title":"Example input to /wallet/sd-jwt/sign endpoint","text":"
{\n  \"did\": \"WpVJtxKVwGQdRpQP8iwJZy\",\n  \"headers\": {},\n  \"payload\": {\n    \"sub\": \"user_42\",\n    \"given_name\": \"John\",\n    \"family_name\": \"Doe\",\n    \"email\": \"johndoe@example.com\",\n    \"phone_number\": \"+1-202-555-0101\",\n    \"phone_number_verified\": true,\n    \"address\": {\n      \"street_address\": \"123 Main St\",\n      \"locality\": \"Anytown\",\n      \"region\": \"Anystate\",\n      \"country\": \"US\"\n    },\n    \"birthdate\": \"1940-01-01\",\n    \"updated_at\": 1570000000,\n    \"nationalities\": [\"US\", \"DE\", \"SA\"],\n    \"iss\": \"https://example.com/issuer\",\n    \"iat\": 1683000000,\n    \"exp\": 1883000000\n  },\n  \"non_sd_list\": [\n    \"given_name\",\n    \"family_name\",\n    \"nationalities\"\n  ]\n}\n
"},{"location":"features/SelectiveDisclosureJWTs/#output","title":"Output","text":"
\"eyJ0eXAiOiAiSldUIiwgImFsZyI6ICJFZERTQSIsICJraWQiOiAiZGlkOnNvdjpXcFZKdHhLVndHUWRScFFQOGl3Slp5I2tleS0xIn0.eyJfc2QiOiBbIkR0a21ha3NkZGtHRjFKeDBDY0kxdmxRTmZMcGFnQWZ1N3p4VnBGRWJXeXciLCAiSlJLb1E0QXVHaU1INWJIanNmNVV4YmJFeDh2YzFHcUtvX0l3TXE3Nl9xbyIsICJNTTh0TlVLNUstR1lWd0swX01kN0k4MzExTTgwVi13Z0hRYWZvRkoxS09JIiwgIlBaM1VDQmdadVRMMDJkV0pxSVY4elUtSWhnalJNX1NTS3dQdTk3MURmLTQiLCAiX294WGNuSW5Yai1SV3BMVHNISU5YaHFrRVAwODkwUFJjNDBISWE1NElJMCIsICJhdnRLVW5Sdnc1clV0TnZfUnAwUll1dUdkR0RzcnJPYWJfVjR1Y05RRWRvIiwgInByRXZJbzBseTVtNTVsRUpTQUdTVzMxWGdVTElOalo5ZkxiRG81U1pCX0UiXSwgImdpdmVuX25hbWUiOiAiSm9obiIsICJmYW1pbHlfbmFtZSI6ICJEb2UiLCAibmF0aW9uYWxpdGllcyI6IFt7Ii4uLiI6ICJPdU1wcEhpYzEySjYzWTBIY2Ffd1BVeDJCTGdUQVdZQjJpdXpMY3lvcU5JIn0sIHsiLi4uIjogIlIxczlaU3NYeVV0T2QyODdEYy1DTVYyMEdvREF3WUVHV3c4ZkVKd1BNMjAifSwgeyIuLi4iOiAid0lJbjdhQlNDVkFZcUF1Rks3Nmpra3FjVGFvb3YzcUhKbzU5WjdKWHpnUSJ9XSwgImlzcyI6ICJodHRwczovL2V4YW1wbGUuY29tL2lzc3VlciIsICJpYXQiOiAxNjgzMDAwMDAwLCAiZXhwIjogMTg4MzAwMDAwMCwgIl9zZF9hbGciOiAic2hhLTI1NiJ9.cIsuGTIPfpRs_Z49nZcn7L6NUgxQumMGQpu8K6rBtv-YRiFyySUgthQI8KZe1xKyn5Wc8zJnRcWbFki2Vzw6Cw~WyJmWURNM1FQcnZicnZ6YlN4elJsUHFnIiwgIlNBIl0~WyI0UGc2SmZ0UnRXdGFPcDNZX2tscmZRIiwgIkRFIl0~WyJBcDh1VHgxbVhlYUgxeTJRRlVjbWV3IiwgIlVTIl0~WyJ4dkRYMDBmalpmZXJpTmlQb2Q1MXFRIiwgInVwZGF0ZWRfYXQiLCAxNTcwMDAwMDAwXQ~WyJYOTlzM19MaXhCY29yX2hudFJFWmNnIiwgInN1YiIsICJ1c2VyXzQyIl0~WyIxODVTak1hM1k3QlFiWUpabVE3U0NRIiwgInBob25lX251bWJlcl92ZXJpZmllZCIsIHRydWVd~WyJRN1FGaUpvZkhLSWZGV0kxZ0Vaal93IiwgInBob25lX251bWJlciIsICIrMS0yMDItNTU1LTAxMDEiXQ~WyJOeWtVcmJYN1BjVE1ubVRkUWVxZXl3IiwgImVtYWlsIiwgImpvaG5kb2VAZXhhbXBsZS5jb20iXQ~WyJlemJwQ2lnVlhrY205RlluVjNQMGJ3IiwgImJpcnRoZGF0ZSIsICIxOTQwLTAxLTAxIl0~WyJvd3ROX3I5Z040MzZKVnJFRWhQU05BIiwgInN0cmVldF9hZGRyZXNzIiwgIjEyMyBNYWluIFN0Il0~WyJLQXktZ0VaWmRiUnNHV1dNVXg5amZnIiwgInJlZ2lvbiIsICJBbnlzdGF0ZSJd~WyJPNnl0anM2SU9HMHpDQktwa0tzU1pBIiwgImxvY2FsaXR5IiwgIkFueXRvd24iXQ~WyI0Nzg5aG5GSjhFNTRsLW91RjRaN1V3IiwgImNvdW50cnkiLCAiVVMiXQ~WyIyaDR3N0FuaDFOOC15ZlpGc2FGVHRBIiwgImFkZHJlc3MiLCB7Il9zZCI6IFsiTXhKRDV5Vm9QQzFIQnhPRmVRa21TQ1E0dVJrYmNrellza1Z5RzVwMXZ5SSIsICJVYkxmVWlpdDJTOFhlX2pYbS15RHBHZXN0ZDNZOGJZczVGaVJpbVBtMHdvIiwgImhsQzJEYVBwT2t0eHZyeUFlN3U2YnBuM09IZ193Qk5heExiS3lPRDVMdkEiLCAia2NkLVJNaC1PaGFZS1FPZ2JaajhmNUppOXNLb2hyYnlhYzNSdXRqcHNNYyJdfV0~\"\n

The sd_jwt_sign() method:

  • Creates the list of claims that are selectively disclosable
  • Uses the non_sd_list compared against the list of JSON paths for all claims to create the list of JSON paths for selectively disclosable claims
  • Separates list slices if necessary
  • Sorts the sd_list so that the claims deepest in the structure are handled first
    • Since we will wrap the selectively disclosable claim keys, the JSON paths for nested structures do not work properly when the claim key is wrapped in an object
  • Uses the JSON paths in the sd_list to find each selectively disclosable claim and wrap it in the SDObj defined by the sd-jwt Python library and removes/replaces the original entry
  • For list items, the element itself is wrapped
  • For other objects, the dictionary key is wrapped
  • With this modified payload, the SDJWTIssuerACAPy.issue() method:
  • Checks if there are selectively disclosable claims at any level in the payload
  • Assembles the SD-JWT payload and creates the disclosures
  • Calls SDJWTIssuerACAPy._create_signed_jws(), which is redefined in order to use the ACA-Py jwt_sign method and which creates the JWT
  • Combines and returns the signed JWT with its disclosures and optional key binding JWT, as indicated in the specification
"},{"location":"features/SelectiveDisclosureJWTs/#verifying-sd-jwts","title":"Verifying SD-JWTs","text":""},{"location":"features/SelectiveDisclosureJWTs/#example-input-to-walletsd-jwtverify-endpoint","title":"Example input to /wallet/sd-jwt/verify endpoint","text":"

Using the output from the /wallet/sd-jwt/sign example above, we have decided to reveal only two of the selectively disclosable claims (sub and updated_at), achieved by including only the disclosures for those claims. We have also included a key binding JWT following the disclosures.

{\n  \"sd_jwt\": \"eyJ0eXAiOiAiSldUIiwgImFsZyI6ICJFZERTQSIsICJraWQiOiAiZGlkOnNvdjpXcFZKdHhLVndHUWRScFFQOGl3Slp5I2tleS0xIn0.eyJfc2QiOiBbIkR0a21ha3NkZGtHRjFKeDBDY0kxdmxRTmZMcGFnQWZ1N3p4VnBGRWJXeXciLCAiSlJLb1E0QXVHaU1INWJIanNmNVV4YmJFeDh2YzFHcUtvX0l3TXE3Nl9xbyIsICJNTTh0TlVLNUstR1lWd0swX01kN0k4MzExTTgwVi13Z0hRYWZvRkoxS09JIiwgIlBaM1VDQmdadVRMMDJkV0pxSVY4elUtSWhnalJNX1NTS3dQdTk3MURmLTQiLCAiX294WGNuSW5Yai1SV3BMVHNISU5YaHFrRVAwODkwUFJjNDBISWE1NElJMCIsICJhdnRLVW5Sdnc1clV0TnZfUnAwUll1dUdkR0RzcnJPYWJfVjR1Y05RRWRvIiwgInByRXZJbzBseTVtNTVsRUpTQUdTVzMxWGdVTElOalo5ZkxiRG81U1pCX0UiXSwgImdpdmVuX25hbWUiOiAiSm9obiIsICJmYW1pbHlfbmFtZSI6ICJEb2UiLCAibmF0aW9uYWxpdGllcyI6IFt7Ii4uLiI6ICJPdU1wcEhpYzEySjYzWTBIY2Ffd1BVeDJCTGdUQVdZQjJpdXpMY3lvcU5JIn0sIHsiLi4uIjogIlIxczlaU3NYeVV0T2QyODdEYy1DTVYyMEdvREF3WUVHV3c4ZkVKd1BNMjAifSwgeyIuLi4iOiAid0lJbjdhQlNDVkFZcUF1Rks3Nmpra3FjVGFvb3YzcUhKbzU5WjdKWHpnUSJ9XSwgImlzcyI6ICJodHRwczovL2V4YW1wbGUuY29tL2lzc3VlciIsICJpYXQiOiAxNjgzMDAwMDAwLCAiZXhwIjogMTg4MzAwMDAwMCwgIl9zZF9hbGciOiAic2hhLTI1NiJ9.cIsuGTIPfpRs_Z49nZcn7L6NUgxQumMGQpu8K6rBtv-YRiFyySUgthQI8KZe1xKyn5Wc8zJnRcWbFki2Vzw6Cw~WyJ4dkRYMDBmalpmZXJpTmlQb2Q1MXFRIiwgInVwZGF0ZWRfYXQiLCAxNTcwMDAwMDAwXQ~WyJYOTlzM19MaXhCY29yX2hudFJFWmNnIiwgInN1YiIsICJ1c2VyXzQyIl0~eyJhbGciOiAiRWREU0EiLCAidHlwIjogImtiK2p3dCIsICJraWQiOiAiZGlkOnNvdjpXcFZKdHhLVndHUWRScFFQOGl3Slp5I2tleS0xIn0.eyJub25jZSI6ICIxMjM0NTY3ODkwIiwgImF1ZCI6ICJodHRwczovL2V4YW1wbGUuY29tL3ZlcmlmaWVyIiwgImlhdCI6IDE2ODgxNjA0ODN9.i55VeR7bNt7T8HWJcfj6jSLH3Q7vFk8N0t7Tb5FZHKmiHyLrg0IPAuK5uKr3_4SkjuGt1_iNl8Wr3atWBtXMDA\"\n}\n
"},{"location":"features/SelectiveDisclosureJWTs/#verify-output","title":"Verify Output","text":"

Note that attributes in the non_sd_list (given_name, family_name, and nationalities), as well as essential verification data (iss, iat, exp) are visible directly within the payload. The disclosures include only the values for the sub and updated_at claims, since those are the only selectively disclosable claims that the holder presented. The corresponding hashes for those disclosures appear in the payload[\"_sd\"] list.

{\n  \"headers\": {\n    \"typ\": \"JWT\",\n    \"alg\": \"EdDSA\",\n    \"kid\": \"did:sov:WpVJtxKVwGQdRpQP8iwJZy#key-1\"\n  },\n  \"payload\": {\n    \"_sd\": [\n      \"DtkmaksddkGF1Jx0CcI1vlQNfLpagAfu7zxVpFEbWyw\",\n      \"JRKoQ4AuGiMH5bHjsf5UxbbEx8vc1GqKo_IwMq76_qo\",\n      \"MM8tNUK5K-GYVwK0_Md7I8311M80V-wgHQafoFJ1KOI\",\n      \"PZ3UCBgZuTL02dWJqIV8zU-IhgjRM_SSKwPu971Df-4\",\n      \"_oxXcnInXj-RWpLTsHINXhqkEP0890PRc40HIa54II0\",\n      \"avtKUnRvw5rUtNv_Rp0RYuuGdGDsrrOab_V4ucNQEdo\",\n      \"prEvIo0ly5m55lEJSAGSW31XgULINjZ9fLbDo5SZB_E\"\n    ],\n    \"given_name\": \"John\",\n    \"family_name\": \"Doe\",\n    \"nationalities\": [\n      {\n        \"...\": \"OuMppHic12J63Y0Hca_wPUx2BLgTAWYB2iuzLcyoqNI\"\n      },\n      {\n        \"...\": \"R1s9ZSsXyUtOd287Dc-CMV20GoDAwYEGWw8fEJwPM20\"\n      },\n      {\n        \"...\": \"wIIn7aBSCVAYqAuFK76jkkqcTaoov3qHJo59Z7JXzgQ\"\n      }\n    ],\n    \"iss\": \"https://example.com/issuer\",\n    \"iat\": 1683000000,\n    \"exp\": 1883000000,\n    \"_sd_alg\": \"sha-256\"\n  },\n  \"valid\": true,\n  \"kid\": \"did:sov:WpVJtxKVwGQdRpQP8iwJZy#key-1\",\n  \"disclosures\": [\n    [\n      \"xvDX00fjZferiNiPod51qQ\",\n      \"updated_at\",\n      1570000000\n    ],\n    [\n      \"X99s3_LixBcor_hntREZcg\",\n      \"sub\",\n      \"user_42\"\n    ]\n  ]\n}\n

The sd_jwt_verify() method:

  • Parses the SD-JWT presentation into its component parts: JWT, disclosures, and optional key binding
  • The JWT payload is parsed from its headers and signature
  • Creates a list of plaintext disclosures
  • Calls SDJWTVerifierACAPy._verify_sd_jwt, which is redefined in order to use the ACA-Py jwt_verify method, and which returns the verified JWT
  • If key binding is used, the key binding JWT is verified and checked against the expected audience and nonce values
"},{"location":"features/SupportedRFCs/","title":"Aries AIP and RFCs Supported in Aries Cloud Agent Python","text":"

This document provides a summary of the adherence of ACA-Py to the Aries Interop Profiles, and an overview of the ACA-Py feature set. This document is manually updated and as such, may not be up to date with the most recent release of ACA-Py or the repository main branch. Reminders (and PRs!) to update this page are welcome! If you have any questions, please contact us on the #aries channel on Hyperledger Discord or through an issue in this repo.

Last Update: 2024-05-01, Release 0.12.1

The checklist version of this document was created as a joint effort between Northern Block, Animo Solutions and the Ontario government, on behalf of the Ontario government.

"},{"location":"features/SupportedRFCs/#aip-support-and-interoperability","title":"AIP Support and Interoperability","text":"

See the Aries Agent Test Harness and the Aries Interoperability Status for daily interoperability test run results between ACA-Py and other Aries Frameworks and Agents.

| AIP Version | Supported | Notes |
| --- | --- | --- |
| AIP 1.0 | ✅ | Fully supported. |
| AIP 2.0 | ✅ | Fully supported, with a couple of very minor exceptions noted below. |

A summary of the Aries Interop Profiles and Aries RFCs supported in ACA-Py can be found later in this document.

"},{"location":"features/SupportedRFCs/#platform-support","title":"Platform Support","text":"Platform Supported Notes Server Kubernetes BC Gov has extensive experience running ACA-Py on Red Hat's OpenShift Kubernetes Distribution. Docker Official docker images are published to the GitHub container repository at ghcr.io/hyperledger/aries-cloudagent-python. Desktop Could be run as a local service on the computer iOS Android Browser"},{"location":"features/SupportedRFCs/#agent-types","title":"Agent Types","text":"Role Supported Notes Issuer Holder Verifier Mediator Service See the aries-mediator-service, a pre-configured, production ready Aries Mediator Service based on a released version of ACA-Py. Mediator Client Indy Transaction Author Indy Transaction Endorser Indy Endorser Service See the aries-endorser-service, a pre-configured, production ready Aries Endorser Service based on a released version of ACA-Py."},{"location":"features/SupportedRFCs/#credential-types","title":"Credential Types","text":"Credential Type Supported Notes Hyperledger AnonCreds Includes full issue VC, present proof, and revoke VC support. W3C Verifiable Credentials Data Model Supports JSON-LD Data Integrity Proof Credentials using the Ed25519Signature2018, BbsBlsSignature2020 and BbsBlsSignatureProof2020 signature suites.Supports the DIF Presentation Exchange data format for presentation requests and presentation submissions.Work currently underway to add support for Hyperledger AnonCreds in W3C VC JSON-LD Format"},{"location":"features/SupportedRFCs/#did-methods","title":"DID Methods","text":"Method Supported Notes \"unqualified\" Deprecated Pre-DID standard identifiers. Used either in a peer-to-peer context, or as an alternate form of a did:sov DID published on an Indy network. did:sov did:web Resolution only did:key did:peer Algorithms 2/3 and 4 Universal Resolver A plug in from SICPA is available that can be added to an ACA-Py installation to support a universal resolver capability, providing support for most DID methods in the W3C DID Method Registry."},{"location":"features/SupportedRFCs/#secure-storage-types","title":"Secure Storage Types","text":"Secure Storage Types Supported Notes Aries Askar Recommended - Aries Askar provides equivalent/evolved secure storage and cryptography support to the \"indy-wallet\" part of the Indy SDK. When using Askar (via the --wallet-type askar startup parameter), other functionality is handled by CredX (AnonCreds) and Indy VDR (Indy ledger interactions). Aries Askar-AnonCreds Recommended - When using Askar/AnonCreds (via the --wallet-type askar-anoncreds startup parameter), other functionality is handled by AnonCreds RS (AnonCreds) and Indy VDR (Indy ledger interactions).This wallet-type will eventually be the same as askar when we have fully integrated the AnonCreds RS library into ACA-Py. Indy SDK Deprecated To be removed in the next Major/Minor release of ACA-Py Full support for the features of the \"indy-wallet\" secure storage capabilities found in the Indy SDK.

New installations of ACA-Py should NOT use the Indy SDK. Existing deployments using the Indy SDK should transition to Aries Askar and related components as soon as possible.

"},{"location":"features/SupportedRFCs/#miscellaneous-features","title":"Miscellaneous Features","text":"Feature Supported Notes ACA-Py Plugins The ACA-Py Plugins repository contains a growing set of plugins that are maintained and (mostly) tested against new releases of ACA-Py. Multi use invitations Invitations using public did Invitations using peer dids supporting connection reuse Implicit pickup of messages in role of mediator Revocable AnonCreds Credentials Multi-Tenancy Documentation Multi-Tenant Management The Traction open source project from BC Gov is a layer on top of ACA-Py that enables the easy management of ACA-Py tenants, with an Administrative UI (\"The Innkeeper\") and a Tenant UI for using ACA-Py in a web UI (setting up, issuing, holding and verifying credentials) Connection-less (non OOB protocol / AIP 1.0) Only for issue credential and present proof Connection-less (OOB protocol / AIP 2.0) Only for present proof Signed Attachments Used for OOB Multi Indy ledger support (with automatic detection) Support added in the 0.7.3 Release. Persistence of mediated messages Plugins in the ACA-Py Plugins repository are available for persistent queue support using Redis and Kafka. Without persistent queue support, messages are stored in an in-memory queue and so are subject to loss in the case of a sudden termination of an ACA-Py process. The in-memory queue is properly handled in the case of a graceful shutdown of an ACA-Py process (e.g. processing of the queue completes and no new messages are accepted). Storage Import & Export Supported by directly interacting with the Aries Askar (e.g., no Admin API endpoint available for wallet import & export). Aries Askar support includes the ability to import storage exported from the Indy SDK's \"indy-wallet\" component. Documentation for migrating from Indy SDK storage to Askar can be found in the Indy SDK to Askar Migration Guide. SD-JWTs Signing and verifying SD-JWTs is supported"},{"location":"features/SupportedRFCs/#supported-rfcs","title":"Supported RFCs","text":""},{"location":"features/SupportedRFCs/#aip-10","title":"AIP 1.0","text":"

All RFCs listed in AIP 1.0 are fully supported in ACA-Py. The following table provides notes about the implementation of specific RFCs.

| RFC | Supported | Notes |
| --- | --- | --- |
| 0025-didcomm-transports | ✅ | ACA-Py currently supports HTTP and WebSockets for both inbound and outbound messaging. Transports are pluggable and an agent instance can use multiple inbound and outbound transports. |
| 0160-connection-protocol | ✅ | The agent supports Connection/DID exchange initiated from both plaintext invitations and public DIDs that enable bypassing the invitation message. |
"},{"location":"features/SupportedRFCs/#aip-20","title":"AIP 2.0","text":"

All RFCs listed in AIP 2.0 (including the sub-targets) are fully supported in ACA-Py EXCEPT as noted in the table below.

RFC Supported Notes Fully Supported"},{"location":"features/SupportedRFCs/#other-supported-rfcs","title":"Other Supported RFCs","text":"
| RFC | Supported | Notes |
| --- | --- | --- |
| 0031-discover-features | ✅ | Rarely (never?) used, and in implementing the V2 version of the protocol, the V1 version was found to be incomplete and was updated as part of Release 0.7.3 |
| 0028-introduce | ✅ | |
| 0509-action-menu | ✅ | |
"},{"location":"features/UsingOpenAPI/","title":"Aries Cloud Agent-Python (ACA-Py) - OpenAPI Code Generation Considerations","text":"

ACA-Py provides an OpenAPI-documented REST interface for administering the agent's internal state and initiating communication with connected agents.

The running agent provides a Swagger User Interface that can be browsed and used to test various scenarios manually (see the Admin API Readme for details). However, it is often desirable to produce native language interfaces rather than coding Controllers using HTTP primitives. This is possible using several public code generation (codegen) tools. This page provides some suggestions based on experience with these tools when trying to generate Typescript wrappers. The information should be useful to those trying to generate other languages. Updates to this page based on experience are encouraged.

"},{"location":"features/UsingOpenAPI/#aca-py-openapi-raw-output-characteristics","title":"ACA-Py, OpenAPI Raw Output Characteristics","text":"

ACA-Py uses aiohttp_apispec tags in code to produce the OpenAPI spec file at runtime dependent on what features have been loaded. How these tags are created is documented in the API Standard Behavior section of the Admin API Readme. The OpenAPI spec is available in raw, unformatted form from a running ACA-Py instance using a route of http://<acapy host and port>/api/docs/swagger.json or from the browser Swagger User Interface directly.

The ACA-Py Admin API evolves across releases. To track these changes and ensure conformance with the OpenAPI specification, we provide a tool located at scripts/generate-open-api-spec. This tool starts ACA-Py, retrieves the swagger.json file, and runs codegen tools to generate specifications in both Swagger and OpenAPI formats with json language output. The output of this tool enables comparison with the checked-in open-api/swagger.json and open-api/openapi.json, and also serves as a useful resource for identifying any non-conformance to the OpenAPI specification. At the moment, validation is turned off via the open-api/openAPIJSON.config file, so warning messages are printed for non-conformance, but the json is still output. Most of the warnings reported by generate-open-api-spec relate to missing operationId fields which results in manufactured method names being created by codegen tools. At the moment, aiohttp_apispec does not support adding operationId annotations via tags.

The generate-open-api-spec tool was initially created to help identify issues with method parameters not being sorted, resulting in somewhat random ordering each time a codegen operation was performed. This is relevant for languages which do not have support for named parameters such as Javascript. It is recommended that the generate-open-api-spec is run prior to each release, and the resulting open-api/openapi.json file checked in to allow tracking of API changes over time. At the moment, this process is not automated as part of the release pipeline.

"},{"location":"features/UsingOpenAPI/#generating-language-wrappers-for-aca-py","title":"Generating Language Wrappers for ACA-Py","text":"

There are inevitably differences around best practice for method naming based on coding language and organization standards.

Best practice for generating ACA-Py language wrappers is to obtain the raw OpenAPI file from a configured/running ACA-Py instance and then post-process it with a merge utility to match routes and insert desired operationId fields. This allows the greatest flexibility in conforming to external naming requirements.
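A minimal sketch of such a post-processing step (not an official ACA-Py tool; the route-to-operationId mapping is something you would define to match your own naming standards):

import json

# Hypothetical mapping of (HTTP method, route) to the desired operationId.
OPERATION_IDS = {
    ("get", "/connections"): "listConnections",
    ("post", "/out-of-band/create-invitation"): "createOobInvitation",
}

with open("swagger.json") as f:
    spec = json.load(f)

# Insert operationId fields so codegen tools produce stable method names.
for path, operations in spec.get("paths", {}).items():
    for method, operation in operations.items():
        op_id = OPERATION_IDS.get((method, path))
        if op_id:
            operation["operationId"] = op_id

with open("swagger-with-operation-ids.json", "w") as f:
    json.dump(spec, f, indent=2)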

Two major open-source code generation tools are Swagger and OpenAPI Tools. Which of these to use can be very dependent on language support required and preference for the style of code generated.

OpenAPI Tools was found to offer some nice features when generating Typescript: it creates separate files for each class and allows the use of a .openapi-generator-ignore file to override generation if there is a spec file issue that needs to be maintained manually.

If generating code for languages that do not support named parameters, it is recommended to specify the useSingleRequestParameter or equivalent in your code generator of choice. The reason is that, as mentioned previously, there have been instances where parameters were not sorted when output into the raw ACA-Py API spec file, and this approach helps remove that risk.

Another suggestion for code generation is to keep modelPropertyNaming set to original. Although it is tempting to enable marshalling into standard naming formats such as camelCase, the models represent what is sent on the wire and documented in the Aries Protocol RFCs. It has proven handy to see code references correspond directly with the protocol RFCs when debugging, and they will also correspond directly with what the model shows in the ACA-Py Swagger UI in a browser if you need to try something out manually before coding. Finally, the code generation tools have occasionally been found to get the marshalling wrong when the model name format is changed.

"},{"location":"features/UsingOpenAPI/#existing-language-wrappers-for-aca-py","title":"Existing Language Wrappers for ACA-Py","text":""},{"location":"features/UsingOpenAPI/#python","title":"Python","text":"
  • Aries Cloud Controller Python (GitHub / didx-xyz)
  • Aries Cloud Controller (PyPi)
  • Traction (GitHub / bcgov)
  • acapy-client (GitHub / Indicio-tech)
"},{"location":"features/UsingOpenAPI/#go","title":"Go","text":"
  • go-acapy-client (GitHub / Idej)
"},{"location":"features/UsingOpenAPI/#java","title":"Java","text":"
  • ACA-Py Java Client Library (GitHub / hyperledger-labs)
"},{"location":"features/devcontainer/","title":"ACA-Py Development with Dev Container","text":"

The following guide will get you up and running and developing/debugging ACA-Py as quickly as possible. We provide a devcontainer and will use VS Code to illustrate.

By no means is ACA-Py limited to these tools; they are merely examples.

For information on running demos and tests using provided shell scripts, see DevReadMe readme.

"},{"location":"features/devcontainer/#caveats","title":"Caveats","text":"

The primary use case for this devcontainer is for developing, debugging and unit testing (pytest) the aries_cloudagent source code.

There are limitations to running this devcontainer; for example, all networking is confined to the container. The container has docker-in-docker, which allows running demos, building docker images, and running docker compose, all within the container.

"},{"location":"features/devcontainer/#files","title":"Files","text":"

The .devcontainer folder contains the devcontainer.json file, which defines this container. We use a Dockerfile and post-install.sh to build and configure the container run image. The Dockerfile is simple but is in place to simplify future image enhancements (e.g. adding poetry to the image). The post-install.sh script installs some additional development libraries (including for BDD support).

"},{"location":"features/devcontainer/#devcontainer","title":"Devcontainer","text":"

What are Development Containers?

A Development Container (or Dev Container for short) allows you to use a container as a full-featured development environment. It can be used to run an application, to separate tools, libraries, or runtimes needed for working with a codebase, and to aid in continuous integration and testing. Dev containers can be run locally or remotely, in a private or public cloud.

see https://containers.dev.

In this guide, we will use Docker and Visual Studio Code with the Dev Containers Extension installed; please set your machine up with those. As of writing, we used the following:

  • Docker Version: 20.10.24
  • VS Code Version: 1.79.0
  • Dev Container Extension Version: v0.295.0
"},{"location":"features/devcontainer/#open-aca-py-in-the-devcontainer","title":"Open ACA-Py in the devcontainer","text":"

To open ACA-Py in a devcontainer, we open the root of this repository. We can open it in one of two ways:

  1. Open Visual Studio Code, open the Command Palette, and use Dev Containers: Open Folder in Container...
  2. Open Visual Studio Code, choose File|Open Folder..., and you should be prompted to Reopen in Container.

NOTE: follow any prompts to install the Python Extension, or to reload the window for Pylance, when first building the container.

ADDITIONAL NOTE: we advise that after each time you rebuild the container, you also run Developer: Reload Window, as some extensions seem to require this in order to work as expected.

"},{"location":"features/devcontainer/#devcontainerjson","title":"devcontainer.json","text":"

When the .devcontainer/devcontainer.json is opened, you will see it building... it is building a Python 3.9 image (bash shell) and loading it with all the ACA-Py requirements (and black). We also load a few Visual Studio Code settings (for running pytests and formatting with Flake8 and Black).

"},{"location":"features/devcontainer/#poetry","title":"Poetry","text":"

The Python libraries / dependencies are installed using poetry. For the devcontainer, we DO NOT use virtual environments. This means you will not see or need venv prompts in the terminals, and you will not need to run tasks through poetry (i.e., no poetry run black .). If you need to add a new dependency, add it via poetry AND rebuild your devcontainer.

In VS Code, open a Terminal; you should be able to run the following commands:

python -m aries_cloudagent -v\ncd aries_cloudagent\nruff check .\nblack . --check\npoetry --version\n

The first command should show you that the aries_cloudagent module is loaded (ACA-Py). The others are examples of the code quality checks that ACA-Py performs on commits (if you have pre-commit installed) and on Pull Requests.

When running ruff check . in the terminal, you may see error: Failed to initialize cache at /.ruff_cache: Permission denied (os error 13) - that's ok. If there are actual ruff errors, you should see something like:

error: Failed to initialize cache at /.ruff_cache: Permission denied (os error 13)\nadmin/base_server.py:7:7: D101 Missing docstring in public class\nFound 1 error.\n
"},{"location":"features/devcontainer/#extensions","title":"extensions","text":"

We have added Black formatter and Ruff extensions. Although we have added launch settings for both ruff and black, you can also use the extension commands from the command palette.

  • Ruff: Format Document
  • Ruff: Fix all auto-fixable problems

More importantly, these extensions now run on document save, so files will be formatted and checked as you save them. We advise that after each container rebuild you also run Developer: Reload Window to ensure the extensions are loaded correctly.

"},{"location":"features/devcontainer/#running-docker-in-docker-demos","title":"Running docker-in-docker demos","text":"

Start by running a von-network inside your dev container, or connect to a hosted ledger (in which case you will need to adjust the ledger configuration).

git clone https://github.com/bcgov/von-network\ncd von-network\n./manage build\n./manage start\ncd ..\n

If you want revocation, start up a tails server in your dev container, or connect to a hosted tails server (again, adjusting the configuration accordingly).

git clone https://github.com/bcgov/indy-tails-server.git\ncd indy-tails-server/docker\n./manage build\n./manage start\ncd ../..\n
With the ledger (and, if you want revocation, the tails server) running, run the demo in two terminals:

# open a terminal in VS Code...\ncd demo\n./run_demo faber\n# open a second terminal in VS Code...\ncd demo\n./run_demo alice\n# follow the script...\n
"},{"location":"features/devcontainer/#further-reading-and-links","title":"Further Reading and Links","text":"
  • Development Containers (devcontainers): https://containers.dev
  • Visual Studio Code: https://code.visualstudio.com
  • Dev Containers Extension: marketplace.visualstudio.com
  • Docker: https://www.docker.com
  • Docker Compose: https://docs.docker.com/compose/
"},{"location":"features/devcontainer/#aca-py-debugging","title":"ACA-Py Debugging","text":"

To better illustrate debugging pytests and ACA-Py runtime code, let's add some run/debug configurations to VS Code. If you have your own launch.json and settings.json, please cut and paste what you want/need.

cp -R .vscode-sample .vscode\n

This will add a launch.json, settings.json and multiple ACA-Py configuration files for developing with different scenarios.

  • Faber: Simple agent to simulate an issuer
  • Alice: Simple agent to simulate a holder
  • Endorser: Simulates the endorser agent in an endorsement-required environment
  • Author: Simulates an author agent in an endorsement-required environment
  • Multitenant Admin: Includes settings for a multitenant/wallet scenario

Multiple agents are included to demonstrate launching several agents in a single debug session. Any of the config files and the launch file can be changed and customized to meet your needs. They are all set up to run on different ports so they don't interfere with each other. Running the debug session from inside the dev container allows you to contact other services such as a local ledger or tails server using localhost, while still being able to access the Swagger admin API through your browser.

For all of the agents: if you want to use a ledger (von-network) other than localhost, you will need to change the genesis-url config; if you don't want to support revocation, remove or comment out the tails-server-base-url config; and if you want to use a non-localhost tails server, change that URL as well.

"},{"location":"features/devcontainer/#faber","title":"Faber","text":"
  • admin api url = http://localhost:9041
  • study the demo to understand the steps needed to get the agent into the correct state: make your public DIDs, schemas, cred-defs, etc.
"},{"location":"features/devcontainer/#alice","title":"Alice","text":"
  • admin api url = http://localhost:9011
  • study the demo to establish a connection with Faber
"},{"location":"features/devcontainer/#endorser","title":"Endorser","text":"
  • admin api url = http://localhost:9031
  • This config is useful if you want to develop in an environment that requires endorsement. You can run the demo with ./run_demo faber --endorser-role author to see all the steps to become an endorser.
"},{"location":"features/devcontainer/#author","title":"Author","text":"
  • admin api url = http://localhost:9021
  • This config is useful if you want to develop in an environment that requires endorsement. You can run the demo with ./run_demo faber --endorser-role author to see all the steps to become an author. You need to uncomment the configurations for automating the connection to the endorser.
"},{"location":"features/devcontainer/#multitenant-admin","title":"Multitenant-Admin","text":"
  • admin api url = http://localhost:9051
  • This is for a multitenant environment in which you can create multiple tenants, each with a subwallet, on one agent. See Multitenancy.
"},{"location":"features/devcontainer/#try-running-faber-and-alice-at-the-same-time-and-add-break-points-and-recreate-the-demo","title":"Try running Faber and Alice at the same time and add break points and recreate the demo","text":"

To run your ACA-Py code in debug mode, go to the Run and Debug view, select the agent(s) you want to start and click Start Debugging (F5).

This will start your source code as a running ACA-Py instance; all configuration is in the *.yml files. Note that we are not using a database and are joining a local VON Network (by default, http://localhost:9000); you could change this to another ledger such as http://test.bcovrin.vonx.io. These are purposefully very simple sample configurations.

For example, open aries_cloudagent/admin/server.py and set a breakpoint in async def status_handler(self, request: web.BaseRequest):, then call GET /status in the Admin Console and hit your breakpoint.
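If you would rather trigger the endpoint from code than from the Swagger page, here is a quick sketch (it assumes the sample Faber configuration's admin port of 9041 and an admin API running in insecure mode, i.e. no API key):

import requests

# Assumes the sample Faber configuration: admin API on port 9041, no API key
resp = requests.get("http://localhost:9041/status")
resp.raise_for_status()
print(resp.json())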

"},{"location":"features/devcontainer/#pytest","title":"Pytest","text":"

Pytest is installed and almost ready; however, we must build the test list. In the Command Palette, Test: Refresh Tests will scan and find the tests.

See Python Testing for more details, and Test Commands for usage.

WARNING: our pytests include coverage, which will prevent the debugger from working. One way around this is to have a .vscode/settings.json that says not to use coverage (see above). This will allow you to set breakpoints in the pytests and the code under test, and use commands such as Test: Debug Tests in Current File to start debugging.

WARNING: the project configuration found in pyproject.toml includes performing ruff checks when we run pytest. Including ruff does not play nice with the Testing view. In order to have our pytests discoverable AND available in the Testing view, we create a .pytest.ini when we build the devcontainer. This file is not committed to the repo and does not impact ./scripts/run_tests, but it will affect manually running pytest commands locally outside of the devcontainer. Just be aware that the file will remain on your file system after you shut down the devcontainer.

"},{"location":"features/devcontainer/#next-steps","title":"Next Steps","text":"

At this point, you have a development environment where you can add pytests, add ACA-Py code, and run and debug it all. Be aware that there are limitations with the devcontainer and other docker networks. You may need to adjust other docker-compose files not to start their own networks, and you may need to reference containers using host.docker.internal. This isn't a panacea, but it should get you going in the right direction and provide you with some development tools.

"},{"location":"gettingStarted/","title":"Becoming an Indy/Aries Developer","text":"

This guide is to get you from (pretty much) zero to developing code for issuing (and verifying) credentials with your own Aries agent. On the way, you'll look at Hyperledger Indy and how it works, and find out about the architecture and components of an Aries agent and its underlying messaging protocols. Scan the list of topics below and jump in as soon as you hit a topic you don't know.

Note that in the guidance we have here, we include not only links to look at, but also call out material to which you might naturally gravitate that we recommend you not look at. That's because that material is out of date and will take you down unnecessary rabbit holes. Keep your eyes on the goal - developing with Aries to interact with other agents to (amongst other things) connect, issue, hold, present and verify verifiable credentials.

  • I've heard of Indy, but I don't know the basics
  • I know about Indy, but what is Aries?
  • Demos - Business Level
  • Aries Agents in Context: The Big Picture
  • Aries Internals - Deployment Components
  • An overview of Aries messaging
  • Demos - Aries Developer
  • Establishing a connection between Aries Agents
  • Issuing an AnonCreds credential: From Issuer to Holder/Prover
  • Presenting an Indy credential: From Holder/Prover to Verifier
  • Next steps: Creating your own Aries Agent
  • What should I work on? Options for Aries/Indy Developers
  • Deeper Dive: DIDComm Messages
  • Deeper Dive: DIDComm Message Routing and Encryption
  • Deeper Dive: Routing Example
  • To Do: Deeper Dive: Running and Connecting to an Indy Network
  • Steps and APIs to support credential revocation with Aries agent
  • Deeper Dive: Aca-Py Plug-Ins

Want to help with this guide? Please add issues or submit a pull request to improve the document. Point out things that are missing, things to improve and especially things that are wrong.

"},{"location":"gettingStarted/AgentConnections/","title":"Establishing a connection between Aries Agents","text":"

Use an ACA-Py issuer/verifier to establish a connection with an Aries mobile wallet. Run the Traction AnonCreds Workshop. Get your own (temporary -- it will be gone in a few weeks!) Aries Cloud Agent Python-based issuer/verifier agent. Connect to the wallet on your mobile phone, issue a credential and then present it back. Lots to learn, without ever leaving your browser!

"},{"location":"gettingStarted/AriesAgentArchitecture/","title":"Aries Cloud Agent Internals: Agent and Controller","text":"

This section talks in particular about the architecture of this Aries cloud agent implementation. An instance of an Aries agent is actually made up of two parts - the agent itself and a controller.

The agent handles all of the core Aries functionality such as interacting with other agents, managing secure storage, sending event notifications to, and receiving directions from, the controller. The controller provides the business logic that defines how that particular agent instance behaves--how to respond to events in the agent, and when to trigger the agent to initiate events. The controller might be a web or native user interface for a person or it might be coded business rules driven by an enterprise system.

Between the two is a simple interface: the agent sends event notifications to the controller, and the controller sends administrative messages to the agent. The controller registers a webhook with the agent so that event notifications are delivered as HTTP callbacks, and the agent exposes a REST API to the controller for all of the administrative messages it is configured to handle. Each of the DIDComm protocols supported by the agent adds a set of administrative messages for the controller to use in responding to events. The Aries cloud agent includes an OpenAPI (aka Swagger) user interface for a developer to use to explore the API for a specific agent.
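To make that interface concrete, here is a minimal, hypothetical controller sketch (not an official client library). It assumes an agent started with its admin API on port 8031 in insecure mode and with --webhook-url http://localhost:8022/webhooks; the agent then POSTs each event to <webhook-url>/topic/<topic>/.

import requests
from flask import Flask, jsonify, request

ADMIN_URL = "http://localhost:8031"  # assumed admin API address

app = Flask(__name__)

@app.route("/webhooks/topic/<topic>/", methods=["POST"])
def handle_event(topic):
    """Event notifications arrive from the agent as HTTP callbacks."""
    event = request.get_json()
    # Business logic goes here, e.g. react to a connection becoming active
    if topic == "connections" and event.get("state") == "active":
        print("Connection active:", event.get("connection_id"))
    return jsonify({}), 200

if __name__ == "__main__":
    # An administrative message: ask the agent for its current connections
    print(requests.get(f"{ADMIN_URL}/connections").json())
    app.run(port=8022)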

As such, the agent is just a configured dependency in an Aries cloud agent deployment. Thus, the vast majority of Aries developers will focus on building controllers (business logic) and perhaps some custom plugins (protocols, as we'll discuss soon) for the agent. Only a relatively small group of Aries cloud agent maintainers will focus on adding and maintaining the agent dependency.

Want more details about the agent and controller internals? Take a look at the Aries cloud agent deployment model document.

Back to the Aries Developer - Getting Started Guide.

"},{"location":"gettingStarted/AriesBasics/","title":"What is Aries?","text":"

Hyperledger Aries provides a shared, reusable, interoperable tool kit designed for initiatives and solutions focused on creating, transmitting and storing verifiable digital credentials. It is infrastructure for blockchain-rooted, peer-to-peer interactions. It includes a shared cryptographic wallet for blockchain clients as well as a communications protocol for allowing off-ledger interaction between those clients.

A Hyperledger Aries agent (such as the one in this repository):

  • enables establishing connections with other DIDComm-based agents (using DIDComm encryption envelopes),
  • exchanges messages between connected agents to execute message protocols (using DIDComm protocols)
  • sends notifications about protocol events to a controller, and
  • exposes an API through which the controller responds with direction for handling protocol events.

The concepts and features that make up the Aries project are documented in the aries-rfcs - but don't dive in there yet! We'll get to the features and concepts to be found there with a guided tour of the key RFCs. The Aries Working Group meets weekly to expand the design and components of Aries.

The Aries Cloud Agent Python currently supports only Hyperledger Indy-based verifiable credentials and public ledgers. Longer term (as we'll see later in this guide), protocols will be extended or added to support other verifiable credential implementations and public ledgers.

Back to the Aries Developer - Getting Started Guide.

"},{"location":"gettingStarted/AriesBigPicture/","title":"Aries Agents in context: The Big Picture","text":"

Aries agents can be used in a lot of places. This classic Indy Architecture picture shows five agents - the four around the outside (on a phone, a tablet, a laptop and an enterprise server) are referred to as \"edge agents\", and many cloud agents in the blue circle.

The agents in the picture share many attributes:

  • They have some sort of storage for keys and other data related to their role as an agent
  • They interact with other agents using secure, peer-to-peer messaging protocols
  • They have some associated mechanism to provide \"business rules\" to control the behavior of the agent
  • That is often a person for phone, tablet, laptop, etc. based agents
  • That is often backend enterprise systems for enterprise agents
  • Business rules for cloud agents are often about the routing of messages to and from edge agents

While there can be many other agent setups, the picture above shows the most common ones - edge agents for people, edge agents for organizations and cloud agents for routing messages (although cloud agents could be edge agents. Sigh...). A significant emerging use case missing from that picture is agents embedded within/associated with IoT devices. In the common IoT case, IoT device agents are just variants of other edge agents, connected to the rest of the ecosystem through a cloud agent. All the same principles apply.

Misleading in the picture is that (almost) all agents connect directly to the Ledger network. In this picture it's the Sovrin ledger, but that could be any Indy network (e.g. set of nodes running indy-node software) and in future, ledgers from other providers. That implies most agents embed the ledger SDK (e.g. indy-sdk) and make calls to the ledger SDK to interact with the ledger and other SDK-controlled resources (e.g. secure storage). Thus, unlike what is implied in the picture, edge agents (commonly) do not call a cloud agent to interact with the ledger - they do it directly. Super small IoT devices are an instance of an exception to that - lacking compute/storage resources and/or connectivity, they might communicate with a cloud agent that would communicate with the ledger.

While Aries agents currently support only Indy-based ledgers, the intention is to add support for other ledgers.

The (most common) purpose of cloud agents is to enable secure and privacy preserving routing of messages between edge agents. Rather than messages going directly from edge agent to edge agent (which is often impossible - for example sending to a mobile agent), messages sent from edge agent to edge agent are routed through a sequence of cloud agents. Some of those cloud agents might be controlled by the sender, some by the receiver and others might be gateways owned by agent vendors (called \"Agencies\"). In all cases, an edge agent tells routing agents \"here's how to send messages to me\", so a routing agent sending a message only has to know how to send a peer-to-peer message. While quite complicated, the protocols used by the agents largely take care of this complexity, and most developers don't have to know much about it.

Note the many caveats in this section - \"most common\", \"commonly\", etc. There are many small building blocks available in Aries and underlying components that can be combined in infinite ways. We recommend not worrying about the alternate use cases for now. Focus on understanding the common use cases while remembering that other configurations are possible.

We also recommend not digging into all the layers described here. Just as you don't have to know how TCP/IP works to write a web app, you don't need to know how indy-node or indy-sdk work to be able to build your first Aries-based application. Later in this guide we'll cover the starting points you do need to know.

Back to the Aries Developer - Getting Started Guide.

"},{"location":"gettingStarted/AriesDeveloperDemos/","title":"Developer Demos and Samples of Aries Agent","text":"

Here are some demos that developers can use to get up to speed on Aries. You don't have to be a developer to use these. If you can use docker and JSON, then that's enough to give these a try.

"},{"location":"gettingStarted/AriesDeveloperDemos/#open-api-demo","title":"Open API demo","text":"

This demo uses agents (and an Indy ledger), but doesn't implement a controller at all. Instead it uses the OpenAPI (aka Swagger) user interface to let you be the controller to connect agents, issue a credential and then present a proof of that credential.

Collaborating Agents OpenAPI Demo

"},{"location":"gettingStarted/AriesDeveloperDemos/#python-controller-demo","title":"Python Controller demo","text":"

Run this demo to see a couple of simple Python controller implementations for Alice and Faber. Like the previous demo, this shows the agents connecting, Faber issuing a credential to Alice and then requesting a proof based on the credential. Running the demo is simple, but there's a lot for a developer to learn from the code.

Python-based Alice/Faber Demo

"},{"location":"gettingStarted/AriesDeveloperDemos/#mobile-app-and-web-sample-bc-gov-showcase","title":"Mobile App and Web Sample - BC Gov Showcase","text":"

Try out the BC Gov Showcase to download a production Wallet for holding Verifiable Credentials, and then use your new wallet to get and present credentials in some sample scenarios. The end-to-end verifiable credential experience in 30 minutes or less.

"},{"location":"gettingStarted/AriesDeveloperDemos/#indicio-developer-demo","title":"Indicio Developer Demo","text":"

A minimal ACA-Py demo that developers can use to isolate and test features:

  • Minimal Setup (everything runs in containers)
  • Quickly reproduce an issue or demonstrate a feature by writing one simple script or pytest tests.

Indicio Aca-Py Minimal Example

"},{"location":"gettingStarted/AriesMessaging/","title":"An overview of Aries messaging","text":"

Aries Agents communicate with each other via a message mechanism called DIDComm (DID Communication). DIDComm enables secure, asynchronous, end-to-end encrypted messaging between agents, with messages (usually) routed through some configuration of intermediary agents. Aries agents use (an early instance of) the did:peer DID method, which uses DIDs that are not published to a public ledger, but only shared privately between the communicating parties - usually just two agents.

Given the underlying secure messaging layer (routing and encryption covered later in the \"Deeper Dive\" sections), DIDComm protocols define standard sets of messages to accomplish a task. For example:

  • The \"establish connection\" protocol enables two agents to establish a connection through a series of messages - an invitation, a connection request and a connection response.
  • The \"issue credential\" protocol enables an agent to issue a credential to another agent.
  • The \"present proof\" protocol enables an agent to request and receive a proof from another agent.

Each protocol has a specification that defines the protocol's messages, one or more roles for the different participants, and a state machine that defines the state transitions triggered by the messages. For example, in the connection protocol, the messages are \"invitation\", \"connectionRequest\" and \"connectionResponse\", the roles are \"inviter\" and \"invitee\", and the states are \"invited\", \"requested\" and \"connected\". Each participant in an instance of a protocol tracks the state based on the messages they've seen.

Code for protocols is implemented as externalized modules, separate from the core agent code, so that protocols can be included (or not) in an agent deployment. The protocol code must include the definition of a state object for the protocol, handlers for the protocol messages, and the events and administrative messages that are available to the controller to inject business logic into the running of the protocol. Each administrative message becomes part of the REST API exposed by the agent instance.

Developers building Aries agents for a particular use case will generally focus on building controllers. They must understand the protocols that they are going to need, including the events the controller will receive, and the protocol's administrative messages exposed via the REST API. From time to time, such Aries agent developers might need to implement their own protocols.

Back to the Aries Developer - Getting Started Guide.

"},{"location":"gettingStarted/AriesRoutingExample/","title":"Aries Routing - an example","text":"

In this section, we'll walk through an example of complex routing in Aries, outlining some of the possibilities that can be implemented.

We'll start with the Alice and Bob example from the Cross Domain Messaging Aries RFC.

What are the DIDs involved, what's in their DIDDocs, and what communications are happening between the agents as the connections are made?

"},{"location":"gettingStarted/AriesRoutingExample/#the-scenario","title":"The Scenario","text":"

Bob and Alice want to establish a connection so that they can communicate. Bob uses an Agency endpoint (https://agents-r-us.ca), labelled as 9, and has an agent used for routing, labelled as 3. We'll also focus on Bob's messages from his main iPhone, labelled as 4. We'll ignore Bob's other agents (5 and 6) and we won't worry about Alice's configuration (agents 1, 2 and 8). While the process below is all about Bob, Alice and her agents are doing the same interactions within her domain.

"},{"location":"gettingStarted/AriesRoutingExample/#all-the-dids","title":"All the DIDs","text":"

A DID and DIDDoc are generated by each participant in each relationship. For Bob's agents (iPhone and Routing), that includes:

  • Bob and Alice
  • Bob and his Routing Agent
  • Bob and Agency
  • Bob's Routing Agent and Agency

That's a lot more than just the Bob and Alice relationship we usually think about!

"},{"location":"gettingStarted/AriesRoutingExample/#diddoc-data","title":"DIDDoc Data","text":"

From a routing perspective the important information in the DIDDoc is the following (as defined in the DIDDoc Conventions Aries RFC):

  • The public keys for agents referenced in the routing
  • The services of type did-communication, including:
  • the one serviceEndpoint
  • the recipientKeys array of referenced keys for the ultimate target(s) of the message
  • the routingKeys array of referenced keys for the mediators

Let's look at the did-communication service data in the DIDDocs generated by Bob's iPhone and Routing agents, listed above:

  • Bob and Alice:
  • The serviceEndpoint that Bob tells Alice about is the endpoint for the Agency.

    • We'll use the Agency's public DID for the endpoint. That way the Agency can rotate the keys for the endpoint without all of its clients having to update every DIDDoc with the new key.
  • The recipientKeys entry is a key reference for Bob's iPhone specifically for Alice.

  • The routingKeys entry is a reference to the public key for the Routing Agent.

  • Bob and his Routing Agent:

  • The serviceEndpoint is empty because Bob's iPhone has no endpoint. See the note below for more on this.
  • The recipientKeys entry is a key reference for Bob's iPhone specifically for the Routing Agent.
  • The routingKeys array is empty.

  • Bob and Agency:

  • The serviceEndpoint is the endpoint for Bob's Routing Agent.
  • The recipientKeys entry is a key reference for Bob's iPhone specifically for the Agency.
  • The routingKeys array has a single entry: the key reference for the Routing Agent key.

  • Bob's Routing Agent and Agency:

  • The serviceEndpoint is the endpoint for Bob's Routing Agent.
  • The recipientKeys entry is a key reference for Bob's Routing Agent specifically for the Agency.
  • The routingKeys array is empty.

The null serviceEndpoint for Bob's iPhone is worth a comment. Mobile apps work by sending requests to servers, but cannot be accessed directly from a server. A DIDComm mechanism (Transports Return Route) enables a server to send messages to a Mobile agent by putting the messages into the response to a request from the mobile agent. While not formalized in an Aries RFC (yet), cloud agents can use mobile platforms' (Apple and Google) notification mechanisms to trigger a user interface event.

"},{"location":"gettingStarted/AriesRoutingExample/#preparing-bobs-diddoc-for-alice","title":"Preparing Bob's DIDDoc for Alice","text":"

Given that background, let's go through the sequence of events and messages that occur in building a DIDDoc for Bob's edge agent to send to Alice's edge agent. We'll start the sequence with all of the Agents in place as the bootstrapping of the Agency, Routing Agent and Bob's iPhone is trickier than we need to go through here. We'll call that an \"exercise left for the reader\".

We'll start the process with Alice sending an out of band connection invitation message to Bob, e.g. through a QR code or a link in an email. Here's one possible sequence for creating the DIDDoc. Note that there are other ways this could be done:

  • Bob's iPhone agent generates a new DID for Alice and prepares, and partially completes, a DIDDoc
  • Bob messages the Routing Agent to send the newly created DID and to get a new public key for the Alice relationship.
  • The Routing Agent records the DID for Alice and the keypair to be used for messages from Alice.
  • The Routing Agent sends the DID to the Agency to let the Agency know that messages for the new DID are to go to the Routing Agent.
  • The Routing Agent sends the data to Bob's iPhone agent.
  • Bob's iPhone agent fills in the rest of the DIDDoc:
  • the public key for the Routing Agent for the Alice relationship
  • the did-communication service endpoint is set to the Agency public DID and
  • the routing keys array with the values of the Agency public DID key reference and the Routing Agent key reference

Note: Instead of using the DID Bob created, the Agency and Routing Agent might use the public key used to encrypt the messages as the lookup in their internal routing tables for where to send a message. In that case, Bob and the Routing Agent share the public key, instead of the DID, with their respective upstream routers.

With the DIDDoc ready, Bob uses the path provided in the invitation to send a connection-request message to Alice with the new DID and DIDDoc. Alice now knows how to get any DIDComm message to Bob in a secure, end-to-end encrypted manner. Subsequently, when Alice sends messages to Bob's agent, she uses the information in the DIDDoc to securely send the message to the Agency endpoint; from there it is sent through to the Routing Agent and on to Bob's iPhone agent for processing. Now Bob has the information he needs to send any DIDComm message to Alice in a secure, end-to-end encrypted manner.

At this time, there are no specific DIDComm protocols for the \"set up the routing\" messages between the agents in Bob's domain (Agency, Routing and iPhone). Those could be implemented as proprietary protocols by each agent provider (since it's possible one vendor would write the code for each of those agents), but it's likely they will eventually be specified as open standard DIDComm protocols.

Based on the DIDDoc that Bob has sent Alice, for Alice to send a DIDComm message to Bob, she must do the following (a schematic sketch follows the list):

  • Prepare the message for Bob's Agent.
  • Encrypt and place that message into a \"Forward\" message for Bob's Routing Agent.
  • Encrypt and send the \"Forward\" message to Bob's Agency endpoint.
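Schematically, the nesting might look like the sketch below. This is illustrative only: each layer would actually be encrypted (packed) for its recipient rather than sent as plain JSON, and the key references are placeholders.

# Schematic only: in practice each "msg" value is an encrypted (packed)
# payload, and the "to" values are real key references from the DIDDoc.
message_for_bob = {
    "@type": "https://didcomm.org/basicmessage/1.0/message",
    "content": "Hello Bob!",
}

# Wrapped for Bob's Routing Agent: deliver the payload to Bob's iPhone key
forward_for_routing_agent = {
    "@type": "https://didcomm.org/routing/1.0/forward",
    "to": "<key reference for Bob's iPhone>",
    "msg": message_for_bob,  # would be pack(message_for_bob)
}

# Wrapped again for the Agency endpoint: deliver to the Routing Agent
forward_for_agency = {
    "@type": "https://didcomm.org/routing/1.0/forward",
    "to": "<key reference for Bob's Routing Agent>",
    "msg": forward_for_routing_agent,  # would also be packed
}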
"},{"location":"gettingStarted/ConnectIndyNetwork/","title":"Connecting to an Indy Network","text":"

To be completed.

"},{"location":"gettingStarted/CredentialRevocation/","title":"Credential Revocation in ACA-Py","text":""},{"location":"gettingStarted/CredentialRevocation/#overview","title":"Overview","text":"

Revocation is perhaps the most difficult aspect of verifiable credentials to manage. This is true in AnonCreds, particularly in the management of AnonCreds revocation registries (RevRegs). Through experience in deploying use cases with ACA-Py, we have found that it is very difficult for the controller (the application code) to manage revocation registries, and as such, we have changed the implementation in ACA-Py so that it handles almost all of the work in revoking credentials. The only things the controller writer has to track are those necessary to implement the business rules around revocation: whose credentials should be revoked, and how close to real time should revocations be published?

Here is a summary of all of the AnonCreds revocation activities performed by issuers. After this, we'll provide a (much shorter) list of what an ACA-Py issuer controller has to do. For those interested, there is a more complete overview of AnonCreds revocation, including all of the roles, and some details of the cryptography behind the approach:

  • Issuers indicate that a credential will support revocation when creating the credential definition (CredDef).
  • Issuers create a Revocation Registry definition object of a given size (MaxSize -- the number of credentials that can use the RevReg) and publish it to the ledger (or more precisely, the verifiable data registry). In doing that, a Tails file is also created and published somewhere on the Internet, accessible to all Holders.
  • Issuers create and publish an initial Revocation Registry Entry that defines the state of all credentials within the RevReg, either all active or all revoked. It's a really bad idea to create a RevReg starting with \"all revoked\", so don't do that.
  • Issuers issue credentials and note the \"revocation ID\" of each credential. The \"revocation ID\" is a compound key consisting of the RevRegId from which the credential was issued, and the index within that registry of that credential. An index (from 1 to Max Size of the registry -- or perhaps 0 to Max Size - 1) can only be associated with one issued credential.
  • At some point, a RevReg is all used up (full), and the Issuer must create another one. Ideally, this does not cause an extra delay in the process of issuing credentials.
  • At some point, the Issuer revokes the credential of a holder, using the revocation Id of the relevant credential.
  • At some point, either in conjunction with each revocation, or for a batch of revocations, the Issuer publishes the RevReg(s) associated with a CredDef to the ledger. If there are multiple revocations spread across multiple RevRegs, there may be multiple writes to the ledger.

Since managing RevRegs is really hard for an ACA-Py controller, we have tried to minimize what an ACA-Py Issuer controller has to do, leaving everything else to be handled by ACA-Py. Of the items in the previous list, here is what an ACA-Py issuer controller does:

  • Issuers flag that revocation will be used when creating the CredDef, along with the desired size of the RevReg. ACA-Py takes care of creating the initial RevReg(s) without further action by the controller.
  • Two RevRegs are initially created, so there is no delay when one fills up, and another is needed. In ongoing operations, when one RevReg fills up, the other active RevReg is used, and a new RevReg is created.
  • On creation of each RevReg, its corresponding tails file is published by ACA-Py.
  • On issuance, the controller receives the logical \"revocation ID\" (a combination of RevRegId+Index) of the issued credential to track.
  • On revocation, the controller passes in the logical \"revocation ID\" of the credential to be revoked, along with a \"notify holder\" flag. ACA-Py records the revocation as pending and, if asked, sends a notification to the holder using a DIDComm message (Aries RFC 0183: Revocation Notification).
  • The Issuer requests that the revocations for a CredDefId be published. ACA-Py figures out which RevRegs contain pending revocations and so need to be published, and publishes each.

That is the minimum amount of tracking the controller must do while still being able to execute the business rules around revoking credentials.

From experience, we've added two extra features to deal with unexpected conditions:

  • When using an Indy (or similar) ledger, if the local copy of a RevReg gets out of sync with the ledger copy (perhaps due to a failed ledger write), the framework can create an update transaction to \"fix\" the issue. This is needed for a deltas-based revocation state solution (like Indy's), but not for a ledger that publishes revocation states containing the entire state of each credential.
  • From time to time there may be a need to \"rotate\" a RevReg: to mark existing, active RevRegs as \"decommissioned\" and create new ones in their place. We've added an endpoint (API call) for that.
"},{"location":"gettingStarted/CredentialRevocation/#using-aca-py-revocation","title":"Using ACA-Py Revocation","text":"

The following are the ACA-Py steps and APIs involved in handling credential revocation.

To try these out, use the ACA-Py Alice/Faber demo with tails server support enabled. You will need to have the URL of a running instance of https://github.com/bcgov/indy-tails-server.

Include the command line parameter --tails-server-base-url <indy-tails-server url>

  1. Publish credential definition

The credential definition is created. All required revocation collateral is also created and managed, including the revocation registry definition, entry, and tails file.

POST /credential-definitions\n{\n  \"schema_id\": schema_id,\n  \"support_revocation\": true,\n  # Only needed if support_revocation is true. Defaults to 100\n  \"revocation_registry_size\": size_int,\n  \"tag\": cred_def_tag # Optional\n}\nResponse:\n{\n  \"credential_definition_id\": \"credential_definition_id\"\n}\n
  2. Issue credential

    This endpoint manages revocation data. If new revocation registry data is required, it is automatically managed in the background.

POST /issue-credential/send-offer\n{\n    \"cred_def_id\": credential_definition_id,\n    \"revoc_reg_id\": revocation_registry_id,\n    \"auto_remove\": False, # We need the credential exchange record when revoking\n    ...\n}\nResponse:\n{\n    \"credential_exchange_id\": credential_exchange_id\n}\n
  3. Revoking credential

POST /revocation/revoke\n{\n    \"rev_reg_id\": <revocation_registry_id>,\n    \"cred_rev_id\": <credential_revocation_id>,\n    \"publish\": <true|false>\n}\n

If publish=false, you must use /issue-credential/publish-revocations to publish pending revocations in batches; revocations are not written to the ledger until this is called. (A Python sketch combining revoking and publishing follows this list.)

  4. When asking for proof, specify the time span when the credential is NOT revoked

 POST /present-proof/send-request\n {\n   \"connection_id\": ...,\n   \"proof_request\": {\n     \"requested_attributes\": {\n       \"<attr_referent>\": {\n         \"name\": ...,\n         \"restrictions\": ...,\n         ...\n         \"non_revoked\": # Optional, overrides the global one when specified\n         {\n           \"from\": <seconds from Unix Epoch>, # Optional, default is 0\n           \"to\": <seconds from Unix Epoch>\n         }\n       },\n       ...\n     },\n     \"requested_predicates\": {\n       \"<pred_referent>\": {\n         \"name\": ...,\n         ...\n         \"non_revoked\": # Optional, overrides the global one when specified\n         {\n           \"from\": <seconds from Unix Epoch>, # Optional, default is 0\n           \"to\": <seconds from Unix Epoch>\n         }\n       },\n       ...\n     },\n     \"non_revoked\": # Optional, only check revocation if specified\n     {\n       \"from\": <seconds from Unix Epoch>, # Optional, default is 0\n       \"to\": <seconds from Unix Epoch>\n     }\n   }\n }\n
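Putting steps 3 and 4 together, a controller might drive deferred revocation like the sketch below (Python with the requests library; the admin URL and IDs are placeholders, an insecure-mode admin API is assumed, and you should confirm the request bodies against the Swagger UI of your ACA-Py version):

import requests

ADMIN_URL = "http://localhost:8031"         # assumed admin API address
REV_REG_ID = "<revocation_registry_id>"     # tracked at issuance time
CRED_REV_ID = "<credential_revocation_id>"  # tracked at issuance time

# Step 3: revoke, deferring the ledger write (publish=false)
requests.post(f"{ADMIN_URL}/revocation/revoke", json={
    "rev_reg_id": REV_REG_ID,
    "cred_rev_id": CRED_REV_ID,
    "publish": False,
})

# Later: write all pending revocations to the ledger in one batch
requests.post(f"{ADMIN_URL}/issue-credential/publish-revocations", json={})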
"},{"location":"gettingStarted/CredentialRevocation/#revocation-notification","title":"Revocation Notification","text":"

ACA-Py supports Revocation Notification v1.0.

Note: The optional ~please_ack is not currently supported.

"},{"location":"gettingStarted/CredentialRevocation/#issuer-role","title":"Issuer Role","text":"

To notify the connection to which a credential has been issued, include the following attributes in the request body when revoking the credential (step 3 above):

  • notify - A boolean value indicating whether or not a notification should be sent. If the argument --notify-revocation is used on startup, this value defaults to true. Otherwise, it will default to false. This value overrides the --notify-revocation flag; the value of notify always takes precedence.
  • connection_id - Connection ID for the connection of the credential holder. This is required when notify is true.
  • thread_id - Message Thread ID of the credential exchange message that resulted in the credential now being revoked. This is required when notify is true.
  • comment - An optional comment presented to the credential holder as part of the revocation notification. This field might contain the reason for revocation or some other human readable information about the revocation.

Your request might look something like:

POST /revocation/revoke\n{\n    \"rev_reg_id\": <revocation_registry_id>,\n    \"cred_rev_id\": <credential_revocation_id>,\n    \"publish\": <true|false>,\n    \"notify\": true,\n    \"connection_id\": <connection id>,\n    \"thread_id\": <thread id>,\n    \"comment\": \"optional comment\"\n}\n
"},{"location":"gettingStarted/CredentialRevocation/#holder-role","title":"Holder Role","text":"

On receipt of a revocation notification, an event with topic acapy::revocation-notification::received and payload containing the thread ID and comment is emitted on the event bus. This can be handled in plugins to further customize notification handling.

If the argument --monitor-revocation-notification is used on startup, a webhook with the topic revocation-notification and a payload containing the thread ID and comment is emitted to registered webhook URLs.
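As a sketch of plugin-based handling, a plugin module might subscribe to that event bus topic roughly as follows. This assumes ACA-Py's usual plugin setup() convention and event bus API; check the module paths and signatures against your version.

import re

from aries_cloudagent.config.injection_context import InjectionContext
from aries_cloudagent.core.event_bus import Event, EventBus
from aries_cloudagent.core.profile import Profile

TOPIC = re.compile("^acapy::revocation-notification::received$")

async def setup(context: InjectionContext):
    """Plugin entry point: register a handler on the event bus."""
    event_bus = context.inject(EventBus)
    event_bus.subscribe(TOPIC, on_revocation_notification)

async def on_revocation_notification(profile: Profile, event: Event):
    """Customize notification handling, e.g. alert a user or update state."""
    thread_id = event.payload.get("thread_id")
    comment = event.payload.get("comment")
    print(f"Credential revoked (thread {thread_id}): {comment}")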

"},{"location":"gettingStarted/CredentialRevocation/#manually-creating-revocation-registries","title":"Manually Creating Revocation Registries","text":"

NOTE: This capability is deprecated and will likely be removed entirely in an upcoming release of ACA-Py.

The process for creating revocation registries is completely automated - when you create a Credential Definition with revocation enabled, a revocation registry is automatically created (in fact 2 registries are created), and when a registry fills up, a new one is automatically created.

However, the ACA-Py admin API supports endpoints to explicitly create a new revocation registry, if you desire.

There are several endpoints that must be called, and they must be called in this order:

  1. Create the revocation registry: POST /revocation/create-registry

    • you need to provide the credential definition id and the size of the registry

  2. Fix the tails file URI: PATCH /revocation/registry/{rev_reg_id}

    • here you need to provide the full URI that will be written to the ledger, for example:

{\n  \"tails_public_uri\": \"http://host.docker.internal:6543/VDKEEMMSRTEqK4m7iiq5ZL:4:VDKEEMMSRTEqK4m7iiq5ZL:3:CL:8:faber.agent.degree_schema:CL_ACCUM:3cb5c439-928c-483c-a9a8-629c307e6b2d\"\n}\n

  3. Post the revocation registry definition to the ledger: POST /revocation/registry/{rev_reg_id}/definition

    • if you are an author (i.e. have a DID with restricted ledger write access), this transaction may need to go through an endorser

  4. Write the tails file: PUT /revocation/registry/{rev_reg_id}/tails-file

    • the tails server will check that the registry definition is already written to the ledger

  5. Post the initial accumulator value to the ledger: POST /revocation/registry/{rev_reg_id}/entry

    • if you are an author (i.e. have a DID with restricted ledger write access), this transaction may need to go through an endorser
    • this operation MUST be performed on the new revocation registry definition BEFORE any revocation operations are performed (a sketch of the full sequence follows this list)
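For illustration, the five calls in order might look like the following sketch. Remember that this flow is deprecated; the request and response field names are indicative only (confirm them against the Swagger UI of your ACA-Py version), and the admin URL and credential definition id are placeholders.

import requests

ADMIN_URL = "http://localhost:8031"         # assumed admin API address
CRED_DEF_ID = "<credential_definition_id>"  # placeholder

# 1. Create the revocation registry record
reg = requests.post(f"{ADMIN_URL}/revocation/create-registry", json={
    "credential_definition_id": CRED_DEF_ID,
    "max_cred_num": 1000,  # the size of the registry
}).json()
rev_reg_id = reg["result"]["revoc_reg_id"]

# 2. Fix the tails file URI
requests.patch(f"{ADMIN_URL}/revocation/registry/{rev_reg_id}", json={
    "tails_public_uri": f"https://tails.example.com/{rev_reg_id}",
})

# 3. Post the registry definition to the ledger
requests.post(f"{ADMIN_URL}/revocation/registry/{rev_reg_id}/definition")

# 4. Upload the tails file to the tails server
requests.put(f"{ADMIN_URL}/revocation/registry/{rev_reg_id}/tails-file")

# 5. Post the initial accumulator entry (MUST precede any revocations)
requests.post(f"{ADMIN_URL}/revocation/registry/{rev_reg_id}/entry")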
"},{"location":"gettingStarted/CredentialRevocation/#revocation-registry-rotation","title":"Revocation Registry Rotation","text":"

From time to time an Issuer may want to issue credentials from a new Revocation Registry. That can be done by changing the Credential Definition, but that could impact verifiers. Revocation Registries go through a series of state changes: init, generated, posted, active, full, decommissioned. When issuing revocable credentials, the work is done with the active registry record. There are always 2 active registry records: one for tracking revocation until it is full, and the second to act as a \"hot swap\" in case issuance is done when the primary is full and being replaced. This ensures that there is always an active registry. When rotating, all registry records (except records in init state) are decommissioned and a new pair of active registry records are created.

Issuers can rotate their Credential Definition Revocation Registry records with a simple call: POST /revocation/active-registry/{cred_def_id}/rotate

It is advised that Issuers ensure the active registry is ready by calling GET /revocation/active-registry/{cred_def_id} after rotation and before issuance (if possible).
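A rotation sketch, again assuming a local insecure-mode admin API and using the two endpoints above (the admin URL and cred def id are placeholders):

import requests

ADMIN_URL = "http://localhost:8031"  # assumed admin API address
CRED_DEF_ID = "<cred_def_id>"        # placeholder

# Decommission the current registries and create a fresh active pair
requests.post(f"{ADMIN_URL}/revocation/active-registry/{CRED_DEF_ID}/rotate")

# Confirm the new active registry is ready before issuing again
active = requests.get(f"{ADMIN_URL}/revocation/active-registry/{CRED_DEF_ID}")
print(active.json())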

"},{"location":"gettingStarted/DIDcommMsgs/","title":"Deeper Dive: DIDComm Messaging","text":"

DIDComm peer-to-peer messages are asynchronous messages that one agent sends to another - for example, Faber would send to Alice. In between, there may be other agents and message processing, but at the edges, Faber appears to be messaging directly with Alice using encryption based on the DIDs and DIDDocs that the two shared when establishing a connection. The messages are JSON-LD-friendly messages with a \"type\" that defines the namespace, protocol, protocol version and type of the message, an \"id\" that is a GUID for the message, and additional fields as required by the message type. The namespace is currently defined to be a public DID that should be globally resolvable to a protocol specification. Currently, \"core\" messages use a DID that is not yet globally resolvable - Daniel Hardman has the keys associated with the DID.
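As an illustration (the id, timestamp and content values are made up), a message might look like the sketch below; note the not-yet-resolvable core DID in the type, and the GUID id:

# Illustrative only: a basic message carrying the fields described above
message = {
    "@type": "did:sov:BzCbsNYhMrjHiqZDTUASHg;spec/basicmessage/1.0/message",
    "@id": "a2f4bc9e-8b55-4f9c-b3a5-8e7f3d2c1a90",  # a GUID for the message
    "~thread": {"thid": "a2f4bc9e-8b55-4f9c-b3a5-8e7f3d2c1a90"},  # a decorator (see below)
    "sent_time": "2024-06-05T20:46:39Z",
    "content": "Hello from Faber!",
}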

Link: Message Types

As protocols are executed, the data associated with the protocol is stored in the (currently named) wallet of the agent. The data primarily consists of the state object for that instance of the protocol, plus any artifacts of running the protocol. For example, when establishing a connection, the metadata associated with the connection (DIDs, DID Documents and private keys) is stored in the agent's wallet. Likewise, ledger data (DIDs, schemas, credential definitions, etc.) and credentials are cached in the wallet. This is taken care of by the Aries agent and the protocols configured into the agent.

"},{"location":"gettingStarted/DIDcommMsgs/#message-decorators","title":"Message Decorators","text":"

In addition to protocol specific data elements in messages, messages can include \"decorators\", standardized message elements that define cross-cutting behavior. The most common example is the \"thread\" decorator, which is used to link the messages in a protocol instance. As messages go back and forth between agents to complete an instance of a protocol (e.g. issuing a credential), the thread decorator data elements let the agents know to which protocol instance the message belongs. Other currently defined examples of decorators include attachments, localization, tracing and timing. Decorators are often processed by the core of the agent, but some are processed by the protocol message handlers. For example, the thread decorator is processed to retrieve the protocol state object for that instance (thread) of the protocol before control is passed to the protocol message handler.

"},{"location":"gettingStarted/DecentralizedIdentityDemos/","title":"Decentralized Identity Use Case Demos","text":"

The following are some demos that you can go through to see verifiable credentials in action. For each of the demos, we've included some guidance on what you should get out of the demo - and where you should stop exploring the demos. Later on in this guide we have some command line demos built on current generation code for developers wanting to look at what's going on under the hood.

"},{"location":"gettingStarted/DecentralizedIdentityDemos/#bc-gov-showcase","title":"BC Gov Showcase","text":"

Try out the BC Gov Showcase to download a production Wallet for holding Verifiable Credentials, and then use your new wallet to get and present credentials in some sample scenarios. The end-to-end verifiable credential experience in 30 minutes or less.

"},{"location":"gettingStarted/DecentralizedIdentityDemos/#traction-anoncreds-workshop","title":"Traction AnonCreds Workshop","text":"

Now that you have a wallet, how about being an issuer, and experience what is needed on that side of an exchange? To do that, try the Traction AnonCreds Workshop. Get your own (temporary -- it will be gone in a few weeks!) Aries Cloud Agent Python-based issuer/verifier agent. Connect to the wallet on your mobile phone, issue a credential and then present it back. Lots to learn, without ever leaving your browser!

"},{"location":"gettingStarted/DecentralizedIdentityDemos/#more-demos-please","title":"More demos, please","text":"

Interested in seeing your demos/use cases added to this list? Submit an issue or a PR and we'll see about including it in this list.

"},{"location":"gettingStarted/IndyAriesDevOptions/","title":"What should I work on? Options for Aries/Indy Developers","text":"

Now that you know the basics of the Indy/Aries eco-system, what do you want to work on? There are many projects at different levels of the eco-system you could choose to work on, and many ways to contribute to the community.

This is an important summary for newcomers, as often the temptation is to start at a level far below where you plan to focus your attention. Too often devs coming into the community start at \"the blockchain\"; at indy-node (the Indy public ledger) or the indy-sdk. That is far below where the majority of developers will work and is not really that helpful if what you really want to do is build decentralized identity applications.

In the following, we go through the layers from the top of the stack to the bottom. Our expectation is that the majority of developers will work at the application level, and there will be fewer contributing developers each layer down you go. This is not to dissuade anyone from contributing at the lower levels, but rather to say that if you are not going to contribute at the lower levels, you don't need to know everything about them. It's much like web development - you don't need to know TCP/IP to build web apps.

"},{"location":"gettingStarted/IndyAriesDevOptions/#building-decentralized-identity-applications","title":"Building Decentralized Identity Applications","text":"

If you just want to build enterprise applications on top of the decentralized identity-related Hyperledger projects, you can start with building cloud-based controller apps using any language you want, and deploying your code with an instance of the code in this repository (aries-cloudagent-python).

If you want to build a mobile agent, there are open source options available, including Aries-MobileAgent-Xamarin (aka \"Aries MAX\"), which is built on Aries Framework .NET, and Aries Mobile Agent React Native, which is built on Aries Framework JavaScript.

As a developer building applications that use/embed Aries agents, you should join the Aries Working Group's weekly calls and watch the aries-rfcs repo to see what protocols are being added and extended. In some cases, you may need to create your own protocols to be added to this repository, and if you are looking for interoperability, you should specify those protocols in an open way, involving the community.

Note that if building apps is what you want to do, you don't need to do a deep dive into the Aries SDK, the Indy SDK or the Indy Node public ledger. You need to know the concepts, but it's not a requirement that you know the code base intimately.

"},{"location":"gettingStarted/IndyAriesDevOptions/#contributing-to-aries-cloudagent-python","title":"Contributing to aries-cloudagent-python","text":"

Of course as you build applications using aries-cloudagent-python, you will no doubt find deficiencies in the code and features you want added. Contributions to this repo will always be welcome.

"},{"location":"gettingStarted/IndyAriesDevOptions/#supporting-additional-ledgers","title":"Supporting Additional Ledgers","text":"

aries-cloudagent-python currently supports only Hyperledger Indy-based public ledgers and verifiable credentials exchange. A goal of Hyperledger Aries is to be ledger-agnostic, and to support other ledgers. We're experimenting with adding support for other ledgers, and would welcome assistance in doing that.

"},{"location":"gettingStarted/IndyAriesDevOptions/#other-agent-frameworks","title":"Other Agent Frameworks","text":"

Although controllers for an aries-cloudagent-python instance can be written in any language, there is definitely a place for functionality equivalent (and better) to what is in this repo in other languages. Use the example provided by the aries-cloudagent-python, evolve that using a different language, and as you discover better ways to do things, discuss and share those improvements in the broader Aries community so that this and other codebases improve.

"},{"location":"gettingStarted/IndyAriesDevOptions/#improving-aries-sdk","title":"Improving Aries SDK","text":"

This code base and other Aries agent implementations currently embed the indy-sdk. However, much of the code in the indy-sdk is being migrated into a variety of Aries language specific repositories. How this migration is to be done is still being decided, but it makes sense that the agent-type things be moved to Aries repositories. A number of language specific Aries SDK repos have been created and are being populated.

"},{"location":"gettingStarted/IndyAriesDevOptions/#improving-the-indy-sdk","title":"Improving the Indy SDK","text":"

Dropping down a level from Aries and into Indy, the indy-sdk needs to continue to evolve. The code base is robust, of high quality and well thought out, but it needs to continue to add new capabilities and improve existing features. The indy-sdk is implemented in Rust, to produce a C-callable library that can be used by client libraries built in a variety of languages.

"},{"location":"gettingStarted/IndyAriesDevOptions/#improving-indy-node","title":"Improving Indy Node","text":"

If you are interested in getting into the public ledger part of Indy, particularly if you are going to be a Sovrin Steward, you should take a deep look into indy-node. Like the indy-sdk, indy-node is robust, of high quality and is well thought out. As the network grows, use cases change and new cryptographic primitives move into the mainstream, indy-node capabilities will need to evolve. indy-node is coded in Python.

"},{"location":"gettingStarted/IndyAriesDevOptions/#working-in-cryptography","title":"Working in Cryptography","text":"

Finally, at the deepest level, and core to all of the projects is the cryptography in Hyperledger Ursa. If you are a cryptographer, that's where you want to be - and we want you there.

"},{"location":"gettingStarted/IndyBasics/","title":"Indy, Verifiable Credentials and Decentralized Identity Basics","text":"

NOTE: If you are a developer building apps on top of Aries and Indy, you DO NOT need to know the nuts and bolts of Indy to build applications. You need to know about verifiable credentials and the concepts of self-sovereign identity. But as an app developer, you don't need to do the Indy getting started pieces. Aries takes care of those details for you. The introduction linked here should be sufficient.

If you are new to Indy and verifiable credentials and want to learn the core concepts, this link provides a solid foundation in the goals and purpose of Indy, including verifiable credentials, DIDs, decentralized/self-sovereign identity, the Sovrin Foundation and more. The document is the content of the Indy chapter of the Hyperledger edX Blockchain for Business course (which you could also go through).

Feel free to do the demo that is referenced in the material, but we recommend that you not dig into that codebase. It's pretty old now - almost a year! We've got much more relevant examples later in this guide.

As well, don't use the guidance in the course to dive into the content about \"Getting Started\" with Indy. Come back here as this content is far more relevant to the current state of Indy and Aries.

"},{"location":"gettingStarted/IndyBasics/#tldr","title":"tl;dr","text":"

Indy provides an implementation of the basic functions required to implement a network for self-sovereign identity (SSI) - a ledger, client SDKs for interacting with the ledger, DIDs, and capabilities for issuing, holding and proving verifiable credentials.

Back to the Aries Developer - Getting Started Guide.

"},{"location":"gettingStarted/IssuingAnonCredsCredentials/","title":"Issuing AnonCreds Credentials","text":"

Become an issuer, and define, publish and issue verifiable credentials to a mobile wallet. Run the Traction AnonCreds Workshop. Get your own (temporary -- it will be gone in a few weeks!) Aries Cloud Agent Python-based issuer/verifier agent. Connect to the wallet on your mobile phone, issue a credential and then present it back. Lots to learn, without ever leaving your browser!

"},{"location":"gettingStarted/PresentingAnonCredsProofs/","title":"Presenting AnonCreds Proofs","text":"

Become a verifier, and construct a presentation request, send the request to a mobile wallet, get a presentation derived from AnonCreds verifiable credentials and verify the presentation. Run the Traction AnonCreds Workshop. Get your own (temporary -- it will be gone in a few weeks!) Aries Cloud Agent Python-based issuer/verifier agent. Connect to the wallet on your mobile phone, issue a credential and then present it back. Lots to learn, without ever leaving your browser!

"},{"location":"gettingStarted/RoutingEncryption/","title":"Deeper Dive: DIDComm Message Routing and Encryption","text":"

Many Aries edge agents do not directly receive messages from a peer edge agent - they have agents in between that route messages to them. This is done for many reasons, such as:

  • The agent is on a mobile device that does not have a persistent connection and so uses a cloud agent.
  • The person does not want to allow correlation of their agent across relationships and so uses a shared, common endpoint (e.g. https://agents-R-Us.ca) so that they are \"hidden in a crowd\".
  • An enterprise wants a single gateway to the many enterprise agents they have in their organization.

Thus, when a DIDComm message is sent from one edge agent to another, it is routed per the instructions of the receiver and for the needs of the sender. For example, in the following picture, Alice might be told by Bob to send messages to his phone (agent 4) via agents 9 and 3, and Alice might always send out messages via agent 2.

The following looks at how those requirements are met with mediators (for example, agents 9 and 3) and relays (agent 2).

"},{"location":"gettingStarted/RoutingEncryption/#inbound-routing-mediators","title":"Inbound Routing - Mediators","text":"

To tell a sender how to get a message to it, an agent puts into the DIDDoc for that sender a service endpoint for the recipient (with an encryption key) and an ordered list (possibly empty) of routing keys (called \"mediators\") to use when sending the message. To send the message, the sender must do the following (a sketch of the wrapping appears after this list):

  • Prepare the message to be sent to the recipient
  • Successively encrypt and wrap the message for each intermediate mediator in a \"forward\" message - an envelope.
  • Encrypt and send the message to the first agent in the routing
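
As an illustration of the wrapping steps above (a sketch only - pack() here is a hypothetical stand-in for the agent's real encryption primitive, not ACA-Py's API), the nested \"forward\" envelopes can be built like this:

def pack(message, recipient_key):\n    # Hypothetical stand-in for the agent's encryption function\n    return {\"recipient\": recipient_key, \"ciphertext\": message}\n\ndef wrap_for_routing(message, recipient_key, routing_keys):\n    # Encrypt the message for the final recipient first\n    envelope = pack(message, recipient_key)\n    next_to = recipient_key\n    # Wrap successively for each mediator, innermost (closest to the\n    # recipient) first, per the ordered routing key list in the DIDDoc\n    for routing_key in reversed(routing_keys):\n        forward = {\"@type\": \"forward\", \"to\": next_to, \"msg\": envelope}\n        envelope = pack(forward, routing_key)\n        next_to = routing_key\n    # The result is sent to the first agent in the routing\n    return envelope\n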

Note that when an agent uses mediators, it is their responsibility to notify any mediators that need to know of a new relationship formed using the connection protocol, and of the routing needs of that relationship - where to send messages that arrive destined for a given verkey. Mediator agents maintain what amounts to a routing table so that when they receive a forward message for a given verkey, they know where it should go.

Link: DIDDoc conventions for inbound routing

"},{"location":"gettingStarted/RoutingEncryption/#relays","title":"Relays","text":"

Inbound routing described above covers mediators for the receiver that the sender must know about. In addition, either the sender or the receiver may also have relays they use for outbound messages. Relays are routing agents not known to other parties, but that participate in message routing. For example, an enterprise agent might send all outbound traffic to a single gateway in the organization. When sending to a relay, the sender just wraps the message in another \"forward\" message envelope.

Link: Mediators and Relays

"},{"location":"gettingStarted/RoutingEncryption/#message-encryption","title":"Message Encryption","text":"

DIDComm encryption is handled within the Aries agent, and is not really something a developer building applications using an agent needs to worry about. Further, within an Aries agent, the encryption is left to libraries - ultimately calling dependencies from Hyperledger Ursa. To encrypt a message, the agent code calls a pack() function, and to decrypt a message, the agent code calls a corresponding unpack() function. The \"wire messages\" (as originally called) are described in detail here, including variations for sender-authenticated and anonymous encryption. Wire messages were meant to indicate the handling of a message from one agent directly to another, versus the higher level concept of routing a message from an edge agent to a peer edge agent.
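
As a loose analogy only (this is not ACA-Py's actual pack()/unpack() implementation), the difference between sender-authenticated and anonymous encryption can be sketched with the PyNaCl library, whose Box and SealedBox constructions resemble the primitives used under the hood:

from nacl.public import PrivateKey, Box, SealedBox\n\nsender = PrivateKey.generate()\nrecipient = PrivateKey.generate()\n\n# Sender-authenticated encryption (roughly \"authcrypt\"): the recipient\n# can verify which sender key produced the message\nauthenticated = Box(sender, recipient.public_key).encrypt(b\"hello\")\nassert Box(recipient, sender.public_key).decrypt(authenticated) == b\"hello\"\n\n# Anonymous encryption (roughly \"anoncrypt\"): no sender key is involved,\n# so the recipient learns nothing about who sent the message\nanonymous = SealedBox(recipient.public_key).encrypt(b\"hello\")\nassert SealedBox(recipient).decrypt(anonymous) == b\"hello\"\n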

Much thought has also gone into repudiable and non-repudiable messaging, as described here.

"},{"location":"gettingStarted/YourOwnAriesAgent/","title":"Creating Your Own Aries Agent","text":"

Use the \"next steps\" in the Traction AnonCreds Workshop and create your own controller. The Aries ACA-Py Controllers repository has some samples to get you started.

"},{"location":"testing/AgentTracing/","title":"Using Tracing in ACA-PY","text":"

The aca-py agent supports message tracing, according to the Tracing RFC.

Tracing can be enabled globally, for all messages/events, or it can be enabled on an exchange-by-exchange basis.

The tracing configuration (destination, tag and label) is set globally for the agent.

"},{"location":"testing/AgentTracing/#aca-py-configuration","title":"ACA-PY Configuration","text":"

The following options can be specified when starting the aca-py agent:

  --trace               Generate tracing events.\n  --trace-target <trace-target>\n                        Target for trace events (\"log\", \"message\", or http\n                        endpoint).\n  --trace-tag <trace-tag>\n                        Tag to be included when logging events.\n  --trace-label <trace-label>\n                        Label (agent name) used when logging events.\n

The --trace option enables tracing globally for the agent; the other options configure the trace destination and content (the default is log).

Tracing can be enabled on an exchange-by-exchange basis, by including { ... \"trace\": True, ...} in the JSON payload to the API call (for credential and proof exchanges).
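
For example, a controller might enable tracing for a single credential exchange like this (a sketch using the requests library; the admin URL and endpoint path are assumptions - adjust them to your deployment and ACA-Py version):

import requests\n\nADMIN_URL = \"http://localhost:8031\"  # assumed admin endpoint\n\npayload = {\n    \"connection_id\": \"...\",  # elided - use a real connection id\n    \"credential_proposal\": {},  # elided - the usual proposal content\n    \"trace\": True,  # enable tracing for just this exchange\n}\nresp = requests.post(f\"{ADMIN_URL}/issue-credential/send\", json=payload)\nresp.raise_for_status()\n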

"},{"location":"testing/AgentTracing/#enabling-tracing-in-the-alicefaber-demo","title":"Enabling Tracing in the Alice/Faber Demo","text":"

The run_demo script supports the following parameters and environment variables.

Environment variables:

TRACE_ENABLED          Flag to enable tracing\n\nTRACE_TARGET_URL       Host:port of endpoint to log trace events (e.g. logstash:9700)\n\nDOCKER_NET             Docker network to join (must be used if ELK stack is running in docker)\n\nTRACE_TAG              Tag to be included in all logged trace events\n

Parameters:

--trace-log            Enables tracing to the standard log output\n                       (sets TRACE_ENABLED, TRACE_TARGET, TRACE_TAG)\n\n--trace-http           Enables tracing to an HTTP endpoint (specified by TRACE_TARGET_URL)\n                       (sets TRACE_ENABLED, TRACE_TARGET, TRACE_TAG)\n

When running the Faber controller, tracing can be enabled using the T menu option:

Faber      | Connected\n    (1) Issue Credential\n    (2) Send Proof Request\n    (3) Send Message\n    (T) Toggle tracing on credential/proof exchange\n    (X) Exit?\n[1/2/3/T/X] t\n\n>>> Credential/Proof Exchange Tracing is ON\n    (1) Issue Credential\n    (2) Send Proof Request\n    (3) Send Message\n    (T) Toggle tracing on credential/proof exchange\n    (X) Exit?\n\n[1/2/3/T/X] t\n\n>>> Credential/Proof Exchange Tracing is OFF\n    (1) Issue Credential\n    (2) Send Proof Request\n    (3) Send Message\n    (T) Toggle tracing on credential/proof exchange\n    (X) Exit?\n\n[1/2/3/T/X]\n

When Exchange Tracing is ON, all exchanges will include tracing.

"},{"location":"testing/AgentTracing/#logging-trace-events-to-an-elk-stack","title":"Logging Trace Events to an ELK Stack","text":"

You can use the ELK stack in the ELK Stack sub-directory as a target for trace events. Just start the ELK stack using the docker-compose file and then, in two separate bash shells, start up the demo as follows:

DOCKER_NET=elknet TRACE_TARGET_URL=logstash:9700 ./run_demo faber --trace-http\n
DOCKER_NET=elknet TRACE_TARGET_URL=logstash:9700 ./run_demo alice --trace-http\n
"},{"location":"testing/AgentTracing/#hooking-into-event-messaging","title":"Hooking into event messaging","text":"

ACA-Py supports sending events to webhooks, which allows the demo agents to display them in the CLI. To also send them to another endpoint, use the --webhook-url option, which reads the WEBHOOK_URL environment variable. To configure an endpoint running on the docker host system at port 8888, use the following (a sketch of a matching listener follows the command):

WEBHOOK_URL=host.docker.internal:8888 ./run_demo faber --webhook-url\n
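
A minimal listener for those webhook events, sketched with aiohttp (ACA-Py POSTs events to paths of the form /topic/<topic>/ under the configured URL; the port matches the example above):

from aiohttp import web\n\nasync def handle_webhook(request):\n    topic = request.match_info[\"topic\"]\n    payload = await request.json()\n    print(f\"webhook topic={topic} payload={payload}\")\n    return web.Response(status=200)\n\napp = web.Application()\napp.add_routes([web.post(\"/topic/{topic}/\", handle_webhook)])\nweb.run_app(app, port=8888)\n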
"},{"location":"testing/INTEGRATION-TESTS/","title":"Integration Tests for Aca-py using Behave","text":"

Integration tests for aca-py are implemented using Behave functional tests to drive aca-py agents based on the alice/faber demo framework.

If you are new to the ACA-Py integration test suite, this video from ACA-Py Maintainer @ianco describes the Integration Tests in ACA-Py, how to run them and how to add more tests. See also the video at the end of this document about running Aries Agent Test Harness tests before you submit your pull requests.

"},{"location":"testing/INTEGRATION-TESTS/#getting-started","title":"Getting Started","text":"

To run the aca-py Behave tests, open a bash shell and run the following:

git clone https://github.com/bcgov/von-network\ncd von-network\n./manage build\n./manage start\ncd ..\ngit clone https://github.com/bcgov/indy-tails-server.git\ncd indy-tails-server/docker\n./manage build\n./manage start\ncd ../..\ngit clone https://github.com/hyperledger/aries-cloudagent-python\ncd aries-cloudagent-python/demo\n./run_bdd -t ~@taa_required\n

Note that an Indy ledger and tails server are both required (these can also be specified using environment variables).

Note also that some tests require a ledger with TAA enabled; how to run these tests is described later in this document.

By default the test suite runs using a SQLite wallet; to run the tests using postgres, run the following:

# run the above commands, up to cd aries-cloudagent-python/demo\ndocker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d -p 5432:5432 postgres:10\nACAPY_ARG_FILE=postgres-indy-args.yml ./run_bdd\n

To run the tests against the back-end askar libraries (as opposed to indy-sdk) run the following:

BDD_EXTRA_AGENT_ARGS=\"{\\\"wallet-type\\\":\\\"askar\\\"}\" ./run_bdd -t ~@taa_required\n

(Note that wallet-type is currently the only extra argument supported.)

You can run individual tests by specifying the tag(s):

./run_bdd -t @T001-AIP10-RFC0037\n
"},{"location":"testing/INTEGRATION-TESTS/#running-integration-tests-which-require-taa","title":"Running Integration Tests which require TAA","text":"

To run a local von-network with TAA enabled, run the following:

git clone https://github.com/bcgov/von-network\ncd von-network\n./manage build\n./manage start --taa-sample --logs\n

You can then run the TAA-enabled tests as follows:

./run_bdd -t @taa_required\n

or:

BDD_EXTRA_AGENT_ARGS=\"{\\\"wallet-type\\\":\\\"askar\\\"}\" ./run_bdd -t @taa_required\n

The agents run on a pre-defined set of ports; however, occasionally your local system may already be using one of these ports. (For example, macOS recently decided to use 8021 for the ftp proxy service.)

To override the default port settings:

AGENT_PORT_OVERRIDE=8030 ./run_bdd -t <some tags>\n

(Note that since the tests run multiple agents, up to 60 available ports are required.)

"},{"location":"testing/INTEGRATION-TESTS/#aca-py-integration-tests-vs-aries-agent-test-harness-aath","title":"Aca-py Integration Tests vs Aries Agent Test Harness (AATH)","text":"

Aca-py Behave tests are based on the interoperability tests that are implemented in the Aries Agent Test Harness (AATH). Both use Behave (Gherkin) to execute tests against a running aca-py agent (or, in the case of AATH, against any compatible Aries agent); however, the aca-py integration tests focus on aca-py-specific features.

AATH:

  • Main purpose is to test interoperability between Aries agents
  • Implements detailed tests based on Aries RFCs (runs different scenarios, tests exception paths, etc.)
  • Runs Aries agents using Docker images (agents run for the duration of the tests)
  • Uses a standard \"backchannel\" to support integration of any Aries agent

Aca-py integration tests:

  • Main purpose is to test aca-py
  • Implements tests based on Aries RFCs, but not to the same level of detail as AATH (runs (mostly) happy-path scenarios against multiple agent configurations)
  • Tests aca-py specific configurations and features
  • Starts and stops agents for each test to exercise different aca-py configurations
  • Uses the same Python framework as used for the interactive Alice/Faber demo
"},{"location":"testing/INTEGRATION-TESTS/#configuration-driven-tests","title":"Configuration-driven Tests","text":"

Aca-py integration tests use the same configuration approach as AATH, documented here.

In addition to supporting external schemas, credential data, etc., the aca-py integration tests support configuration of the aca-py agents that are used to run the test. For example:

Scenario Outline: Present Proof where the prover does not propose a presentation of the proof and is acknowledged\n  Given \"3\" agents\n     | name  | role     | capabilities        |\n     | Acme  | issuer   | <Acme_capabilities> |\n     | Faber | verifier | <Acme_capabilities> |\n     | Bob   | prover   | <Bob_capabilities>  |\n  And \"<issuer>\" and \"Bob\" have an existing connection\n  And \"Bob\" has an issued <Schema_name> credential <Credential_data> from <issuer>\n  ...\n\n  Examples:\n     | issuer | Acme_capabilities        | Bob_capabilities | Schema_name    | Credential_data          | Proof_request  |\n     | Acme   | --public-did             |                  | driverslicense | Data_DL_NormalizedValues | DL_age_over_19 |\n     | Faber  | --public-did  --mediator | --mediator       | driverslicense | Data_DL_NormalizedValues | DL_age_over_19 |\n

In the above example, the test will run twice using the parameters specified in the \"Examples\" section. The Acme, Faber and Bob agents will be started for the test and then shut down when the test is completed.

The agent's \"capabilities\" are specified using the same command-line parameters that are supported for the Alice/Faber demo agents.

"},{"location":"testing/INTEGRATION-TESTS/#global-configuration-for-all-aca-py-agents-under-test","title":"Global Configuration for All Aca-py Agents Under Test","text":"

You can specify parameters that are applied to all aca-py agents using the ACAPY_ARG_FILE environment variable, for example:

ACAPY_ARG_FILE=postgres-indy-args.yml ./run_bdd\n

... will apply the parameters in the postgres-indy-args.yml file (which just happens to configure a postgres wallet) to all agents under test.

Or the following:

ACAPY_ARG_FILE=askar-indy-args.yml ./run_bdd\n

... will run all the tests against an askar wallet (the new shared components, which replace indy-sdk).

Any aca-py argument can be included in the yml file, and order-of-precedence applies (see https://pypi.org/project/ConfigArgParse/).

"},{"location":"testing/INTEGRATION-TESTS/#specifying-environment-parameters-when-running-integration-tests","title":"Specifying Environment Parameters when Running Integration Tests","text":"

Aca-py integration tests support the following environment-driven configuration:

  • LEDGER_URL - specify the ledger url
  • TAILS_NETWORK - specify the docker network the tails server is running on
  • PUBLIC_TAILS_URL - specify the public url of the tails server
  • ACAPY_ARG_FILE - specify global aca-py parameters (see above)
"},{"location":"testing/INTEGRATION-TESTS/#running-specific-test-scenarios","title":"Running specific test scenarios","text":"

Behave tests are tagged using the same standard tags as used in AATH.

To run a specific set of Aca-py integration tests (or exclude specific tests):

./run_bdd -t tag1 -t ~tag2\n

(All command line parameters are passed to the behave command, so all parameters supported by behave can be used.)

"},{"location":"testing/INTEGRATION-TESTS/#aries-agent-test-harness-aca-py-tests","title":"Aries Agent Test Harness ACA-Py Tests","text":"

This video is a presentation by Aries Cloud Agent Python (ACA-Py) developer @ianco about using the Aries Agent Test Harness for local pre-release testing of ACA-Py. Have a big change that you want to test with other Aries Frameworks? Follow this guidance to run AATH tests with your under-development branch of ACA-Py.

"},{"location":"testing/Logging/","title":"Logging docs","text":"

ACA-Py supports multiple configurations of logging.

"},{"location":"testing/Logging/#log-level","title":"Log level","text":"

ACA-Py's logging is based on Python's standard logging library. Log levels DEBUG, INFO and WARNING are available; other log levels fall back to WARNING.

"},{"location":"testing/Logging/#per-tenant-logging","title":"Per Tenant Logging","text":"

ACA-Py supports writing log messages to a file, with the wallet_id as the tenant identifier for each record. To enable this, both multitenant mode (--multitenant) and the write-to-log-file option (--log-file) are required. If --multitenant and --log-file are not both passed when starting up ACA-Py, then it will use the default_logging_config.ini config (backward compatible) and will not log at a per-tenant level.
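
Conceptually, per-tenant logging works by attaching the wallet_id to every log record so a formatter can reference %(wallet_id)s (as in the config files shown below). A minimal sketch of that idea using only the standard library (not ACA-Py's actual handler) is:

import logging\n\nlogging.basicConfig(format=\"%(asctime)s %(wallet_id)s %(levelname)s %(message)s\")\nlog = logging.LoggerAdapter(logging.getLogger(__name__), extra={\"wallet_id\": \"tenant-123\"})\nlog.warning(\"per-tenant log line\")  # wallet_id appears in the formatted output\n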

"},{"location":"testing/Logging/#command-line-arguments","title":"Command Line Arguments","text":"
  • --log-level - The log level to log on std out
  • --log-file - Enables writing of logs to a file. The provided value becomes the path of the file to log to. If no value or an empty string is provided, the path is taken from the config file
  • --log-config - Specifies a custom logging configuration file

Example:

./bin/aca-py start --log-level debug --log-file acapy.log --log-config aries_cloudagent.config:default_per_tenant_logging_config.ini\n\n./bin/aca-py start --log-level debug --log-file --multitenant --log-config ./aries_cloudagent/config/default_per_tenant_logging_config.yml\n
"},{"location":"testing/Logging/#environment-variables","title":"Environment Variables","text":"

The log level can be configured using the environment variable ACAPY_LOG_LEVEL. The log file can be set by ACAPY_LOG_FILE. The log config can be set by ACAPY_LOG_CONFIG.

Example:

ACAPY_LOG_LEVEL=info ACAPY_LOG_FILE=./acapy.log ACAPY_LOG_CONFIG=./acapy_log.ini ./bin/aca-py start\n
"},{"location":"testing/Logging/#acapy-config-file","title":"Acapy Config File","text":"

The following parameters can be used in a configuration file, like this:

log-level: WARNING\ndebug-connections: false\ndebug-presentations: false\n

Warning: debug-connections and debug-presentations must not be used in a production environment, as they also log credential claim values. Both parameters are independent of the log level, meaning that even if log-level is set to WARNING, connections and presentations will still be logged as they would be at debug level.

"},{"location":"testing/Logging/#log-config-file","title":"Log config file","text":"

The path to config file is provided via --log-config.

Find an example in default_logging_config.ini.

You can find a more detailed description in the logging documentation.

For per tenant logging, find an example in default_per_tenant_logging_config.ini, which sets up TimedRotatingFileMultiProcessHandler and StreamHandler handlers. The custom TimedRotatingFileMultiProcessHandler supports cleaning up logs by time, maintaining backup logs, and a custom JSON formatter for logs. Its arguments, such as file name, when, interval and backupCount, can be passed as args=('acapy.log', 'd', 7, 1,) (also shown below). Note: a backupCount of 0 means all backup log files will be retained and never deleted. More details about these attributes can be found here

[loggers]\nkeys=root\n\n[handlers]\nkeys=stream_handler, timed_file_handler\n\n[formatters]\nkeys=formatter\n\n[logger_root]\nlevel=ERROR\nhandlers=stream_handler, timed_file_handler\n\n[handler_stream_handler]\nclass=StreamHandler\nlevel=DEBUG\nformatter=formatter\nargs=(sys.stderr,)\n\n[handler_timed_file_handler]\nclass=logging.handlers.TimedRotatingFileMultiProcessHandler\nlevel=DEBUG\nformatter=formatter\nargs=('acapy.log', 'd', 7, 1,)\n\n[formatter_formatter]\nformat=%(asctime)s %(wallet_id)s %(levelname)s %(pathname)s:%(lineno)d %(message)s\n

For DictConfig (dict logging config file), find an example in default_per_tenant_logging_config.yml, with the same attributes as the default_per_tenant_logging_config.ini file.

version: 1\nformatters:\n  default:\n    format: '%(asctime)s %(wallet_id)s %(levelname)s %(pathname)s:%(lineno)d %(message)s'\nhandlers:\n  console:\n    class: logging.StreamHandler\n    level: DEBUG\n    formatter: default\n    stream: ext://sys.stderr\n  rotating_file:\n    class: logging.handlers.TimedRotatingFileMultiProcessHandler\n    level: DEBUG\n    filename: 'acapy.log'\n    when: 'd'\n    interval: 7\n    backupCount: 1\n    formatter: default\nroot:\n  level: INFO\n  handlers:\n    - console\n    - rotating_file\n
"},{"location":"testing/Troubleshooting/","title":"Troubleshooting Aries Cloud Agent Python","text":"

This document contains some troubleshooting information that contributors to the community think may be helpful. Most of the content here assumes the reader has gotten started with ACA-Py and has arrived here because of an issue that came up in their use of ACA-Py.

Contributions (via pull request) to this document are welcome. Topics added here will mostly come from reported issues that contributors think would be helpful to the larger community.

"},{"location":"testing/Troubleshooting/#table-of-contents","title":"Table of Contents","text":"
  • Unable to Connect to Ledger
  • Local ledger running?
  • Any Firewalls
  • Damaged, Unpublishable Revocation Registry
"},{"location":"testing/Troubleshooting/#unable-to-connect-to-ledger","title":"Unable to Connect to Ledger","text":"

The most common issue hit by first time users is getting an error on startup \"unable to connect to ledger\". Here is a list of things to check when you see that error.

"},{"location":"testing/Troubleshooting/#local-ledger-running","title":"Local ledger running?","text":"

Unless you specify via startup parameters or environment variables that you are using a public Hyperledger Indy ledger, ACA-Py assumes that you are running a local ledger -- an instance of von-network. If that is the case: have you started your local ledger, and did it start up properly? Things to check:

  • Any errors in the startup of von-network?
  • Is the von-network webserver (usually at http://localhost:9000) accessible? If so, can you click on and see the Genesis File? (A programmatic version of this check is sketched after this list.)
  • Do you even need a local ledger? If not, you can use a public sandbox ledger, such as the BCovrin Test ledger, likely by just prefacing your ACA-Py command with LEDGER_URL=http://test.bcovrin.vonx.io. For example, when running the Alice-Faber demo in the demo folder, you can run (for example), the Faber agent using the command: LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber
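
A quick programmatic version of the genesis-file check (a sketch; the von-network webserver normally serves the file at /genesis):

import requests\n\nresp = requests.get(\"http://localhost:9000/genesis\", timeout=5)\nresp.raise_for_status()\nprint(resp.text.splitlines()[0])  # first genesis transaction, if the ledger is up\n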
"},{"location":"testing/Troubleshooting/#any-firewalls","title":"Any Firewalls","text":"

Do you have any firewalls in play that might be blocking the ports that are used by the ledger, notably 9701-9708? To access a ledger, the ACA-Py instance must be able to reach those ports of the ledger, regardless of whether the ledger is local or remote.
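
A quick sketch for checking that those ports are reachable from the machine running ACA-Py (replace localhost with your ledger's host):

import socket\n\nfor port in range(9701, 9709):\n    try:\n        socket.create_connection((\"localhost\", port), timeout=2).close()\n        print(f\"port {port}: reachable\")\n    except OSError:\n        print(f\"port {port}: blocked or closed\")\n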

"},{"location":"testing/Troubleshooting/#damaged-unpublishable-revocation-registry","title":"Damaged, Unpublishable Revocation Registry","text":"

We have discovered that in the ACA-Py AnonCreds implementation, it is possible to get into a state where the publishing of updates to a Revocation Registry (RevReg) is impossible. This can happen where ACA-Py starts to publish an update to the RevReg, but the write transaction to the Hyperledger Indy ledger fails for some reason. When a credential revocation is published, aca-py (via indy-sdk or askar/credx) updates the revocation state in the wallet as well as on the ledger. The revocation state is dependent on whatever the previous revocation state is/was, so if the ledger and wallet are mismatched, the publish will fail. (Andrew's PR #1804 (merged) should mitigate, but probably won't completely eliminate, this problem.)

For example, in a case we've seen, the write RevRegEntry transaction failed at the ledger because there was a problem with accepting the TAA (Transaction Author Agreement). Once the error occurred, the RevReg state held by the ACA-Py agent and the RevReg state on the ledger were different. Even after the ability to write to the ledger was restored, the RevReg could still not be published because of the differences in the RevReg state. Such a situation can now be corrected, as follows:

To address this issue, some new endpoints were added to ACA-Py in Release 0.7.4, as follows:

  • GET /revocation/registry/<id>/issued - counts of the number of issued/revoked within a registry
  • GET /revocation/registry/<id>/issued/details - details of all credentials issued/revoked within a registry
  • GET /revocation/registry/<id>/issued/indy_recs - calculated rev_reg_delta from the ledger
  • This is used to compare ledger revoked vs wallet revoked credentials, which is essentially the state of the RevReg on the ledger and in ACA-Py. Where there is a difference, we have an error.
  • PUT /revocation/registry/<id>/fix-revocation-entry-state - publish an update to the RevReg state on the ledger to bring it into alignment with what is in the ACA-Py instance.
  • There is a boolean parameter (apply_ledger_update) to control whether the ledger entry actually gets published, so, if you are so inclined, you can call the endpoint to see what the transaction would be before you actually try to do a ledger update (see the sketch after this list). This will return:
    • rev_reg_delta - same as the \".../indy_recs\" endpoint
    • accum_calculated - transaction to write to ledger
    • accum_fixed - If apply_ledger_update, the transaction actually written to the ledger
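
A dry run of the repair using these endpoints might look like the following controller-side sketch (the admin URL and registry id are placeholders to fill in):

import requests\n\nADMIN_URL = \"http://localhost:8031\"  # assumed admin endpoint\nREV_REG_ID = \"...\"  # elided - the id of the damaged registry\n\n# Compare the ledger state with the wallet state\nledger_recs = requests.get(f\"{ADMIN_URL}/revocation/registry/{REV_REG_ID}/issued/indy_recs\").json()\n\n# Dry run: with apply_ledger_update false, nothing is written to the ledger\npreview = requests.put(\n    f\"{ADMIN_URL}/revocation/registry/{REV_REG_ID}/fix-revocation-entry-state\",\n    params={\"apply_ledger_update\": \"false\"},\n).json()\nprint(preview[\"accum_calculated\"])  # the transaction that would be written\n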

Note that there is (currently) a backlog item to prevent the wallet and ledger from getting out of sync (e.g. don't update the ACA-Py RevReg state if the ledger write fails), but even after that change is made, having this ability will be retained for use if needed.

We originally ran into this due to the TAA acceptance getting lost when switching to multi-ledger (as described here). Note that this is one way this \"out of sync\" scenario can occur, but there may be others.

We added an integration test that demonstrates/tests this issue here.

To run the scenario either manually or using the integration tests, you can do the following:

  • Start von-network in TAA mode:
  • ./manage start --taa-sample --logs
  • Start the tails server as usual:
  • ./manage start --logs
  • To run the scenario manually, start faber and let the agent know it needs to TAA-accept before doing any ledger writes:
  • ./run_demo faber --revocation --taa-accept, and then you can run through all the transactions using the Swagger page.
  • To run the scenario via an integration test, run:
  • ./run_bdd -t @taa_required
"},{"location":"testing/UnitTests/","title":"ACA-Py Unit Tests","text":"

The following covers the Unit Testing framework in ACA-Py, how to run the tests, and how to add unit tests.

This video is a presentation of the material covered in this document by developer @shaangill025.

"},{"location":"testing/UnitTests/#running-unit-tests-in-aca-py","title":"Running unit tests in ACA-Py","text":"
  • ./scripts/run_tests
  • ./scripts/run_tests aries_cloudagent/protocols/out_of_band/v1_0/tests
  • ./scripts/run_tests_indy includes Indy specific tests
"},{"location":"testing/UnitTests/#pytest","title":"Pytest","text":"

Example: aries_cloudagent/core/tests/test_event_bus.py

@pytest.fixture\ndef event_bus():\n    yield EventBus()\n\n\n@pytest.fixture\ndef profile():\n    yield async_mock.MagicMock()\n\n\n@pytest.fixture\ndef event():\n    event = Event(topic=\"anything\", payload=\"payload\")\n    yield event\n\nclass MockProcessor:\n    def __init__(self):\n        self.profile = None\n        self.event = None\n\n    async def __call__(self, profile, event):\n        self.profile = profile\n        self.event = event\n\n\n@pytest.fixture\ndef processor():\n    yield MockProcessor()\n
def test_sub_unsub(event_bus: EventBus, processor):\n    \"\"\"Test subscribe and unsubscribe.\"\"\"\n    event_bus.subscribe(re.compile(\".*\"), processor)\n    assert event_bus.topic_patterns_to_subscribers\n    assert event_bus.topic_patterns_to_subscribers[re.compile(\".*\")] == [processor]\n    event_bus.unsubscribe(re.compile(\".*\"), processor)\n    assert not event_bus.topic_patterns_to_subscribers\n

From aries_cloudagent/core/event_bus.py

class EventBus:\n    def __init__(self):\n        self.topic_patterns_to_subscribers: Dict[Pattern, List[Callable]] = {}\n\n    def subscribe(self, pattern: Pattern, processor: Callable):\n        if pattern not in self.topic_patterns_to_subscribers:\n            self.topic_patterns_to_subscribers[pattern] = []\n        self.topic_patterns_to_subscribers[pattern].append(processor)\n\n    def unsubscribe(self, pattern: Pattern, processor: Callable):\n        if pattern in self.topic_patterns_to_subscribers:\n            try:\n                index = self.topic_patterns_to_subscribers[pattern].index(processor)\n            except ValueError:\n                return\n            del self.topic_patterns_to_subscribers[pattern][index]\n            if not self.topic_patterns_to_subscribers[pattern]:\n                del self.topic_patterns_to_subscribers[pattern]\n
@pytest.mark.asyncio\nasync def test_sub_notify(event_bus: EventBus, profile, event, processor):\n    \"\"\"Test subscriber receives event.\"\"\"\n    event_bus.subscribe(re.compile(\".*\"), processor)\n    await event_bus.notify(profile, event)\n    assert processor.profile == profile\n    assert processor.event == event\n
async def notify(self, profile: \"Profile\", event: Event):\n    partials = []\n    for pattern, subscribers in self.topic_patterns_to_subscribers.items():\n        match = pattern.match(event.topic)\n\n        if not match:\n            continue\n\n        for subscriber in subscribers:\n            partials.append(\n                partial(\n                    subscriber,\n                    profile,\n                    event.with_metadata(EventMetadata(pattern, match)),\n                )\n            )\n\n    for processor in partials:\n        try:\n            await processor()\n        except Exception:\n            LOGGER.exception(\"Error occurred while processing event\")\n
"},{"location":"testing/UnitTests/#asynctest","title":"asynctest","text":"

From: aries_cloudagent/protocols/didexchange/v1_0/tests/test_manager.py

class TestDidExchangeManager(AsyncTestCase, TestConfig):\n    async def setUp(self):\n        self.responder = MockResponder()\n\n        self.oob_mock = async_mock.MagicMock(\n            clean_finished_oob_record=async_mock.AsyncMock(return_value=None)\n        )\n\n        self.route_manager = async_mock.MagicMock(RouteManager)\n        ...\n        self.profile = InMemoryProfile.test_profile(\n            {\n                \"default_endpoint\": \"http://aries.ca/endpoint\",\n                \"default_label\": \"This guy\",\n                \"additional_endpoints\": [\"http://aries.ca/another-endpoint\"],\n                \"debug.auto_accept_invites\": True,\n                \"debug.auto_accept_requests\": True,\n                \"multitenant.enabled\": True,\n                \"wallet.id\": True,\n            },\n            bind={\n                BaseResponder: self.responder,\n                OobMessageProcessor: self.oob_mock,\n                RouteManager: self.route_manager,\n                ...\n            },\n        )\n        ...\n\n    async def test_receive_invitation_no_auto_accept(self):\n        async with self.profile.session() as session:\n            mediation_record = MediationRecord(\n                role=MediationRecord.ROLE_CLIENT,\n                state=MediationRecord.STATE_GRANTED,\n                connection_id=self.test_mediator_conn_id,\n                routing_keys=self.test_mediator_routing_keys,\n                endpoint=self.test_mediator_endpoint,\n            )\n            await mediation_record.save(session)\n            with async_mock.patch.object(\n                self.multitenant_mgr, \"get_default_mediator\"\n            ) as mock_get_default_mediator:\n                mock_get_default_mediator.return_value = mediation_record\n                invi_rec = await self.oob_manager.create_invitation(\n                    my_endpoint=\"testendpoint\",\n                    hs_protos=[HSProto.RFC23],\n                )\n\n                invitee_record = await self.manager.receive_invitation(\n                    invi_rec.invitation,\n                    auto_accept=False,\n                )\n                assert invitee_record.state == ConnRecord.State.INVITATION.rfc23\n
async def receive_invitation(\n    self,\n    invitation: OOBInvitationMessage,\n    their_public_did: Optional[str] = None,\n    auto_accept: Optional[bool] = None,\n    alias: Optional[str] = None,\n    mediation_id: Optional[str] = None,\n) -> ConnRecord:\n    ...\n    accept = (\n        ConnRecord.ACCEPT_AUTO\n        if (\n            auto_accept\n            or (\n                auto_accept is None\n                and self.profile.settings.get(\"debug.auto_accept_invites\")\n            )\n        )\n        else ConnRecord.ACCEPT_MANUAL\n    )\n    service_item = invitation.services[0]\n    # Create connection record\n    conn_rec = ConnRecord(\n        invitation_key=(\n            DIDKey.from_did(service_item.recipient_keys[0]).public_key_b58\n            if isinstance(service_item, OOBService)\n            else None\n        ),\n        invitation_msg_id=invitation._id,\n        their_label=invitation.label,\n        their_role=ConnRecord.Role.RESPONDER.rfc23,\n        state=ConnRecord.State.INVITATION.rfc23,\n        accept=accept,\n        alias=alias,\n        their_public_did=their_public_did,\n        connection_protocol=DIDX_PROTO,\n    )\n\n    async with self.profile.session() as session:\n        await conn_rec.save(\n            session,\n            reason=\"Created new connection record from invitation\",\n            log_params={\n                \"invitation\": invitation,\n                \"their_role\": ConnRecord.Role.RESPONDER.rfc23,\n            },\n        )\n\n        # Save the invitation for later processing\n        ...\n\n    return conn_rec\n
"},{"location":"testing/UnitTests/#other-details","title":"Other details","text":"
  • Error catching
  with self.assertRaises(DIDXManagerError) as ctx:\n      ...\n  assert \" ... error ...\" in str(ctx.exception)\n
  • function.assert_called_once_with(parameters) and function.assert_called_once()

  • pytest.mark markers set up in setup.cfg can be applied at the function or class level; for example, @pytest.mark.askar

  • Code coverage
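
A tiny self-contained illustration of the mock-assertion style mentioned above:

from unittest import mock\n\ndef notify(sender):\n    sender(\"hello\")\n\ndef test_notify_calls_sender_once():\n    sender = mock.MagicMock()\n    notify(sender)\n    sender.assert_called_once_with(\"hello\")\n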

"}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Hyperledger Aries Cloud Agent - Python","text":"

An easy to use Aries agent for building SSI services using any language that supports sending/receiving HTTP requests.

Full access to an organized set of all of the ACA-Py documents is available at https://aca-py.org. Check it out! It's much easier to navigate than this GitHub repo for reading the documentation.

"},{"location":"#overview","title":"Overview","text":"

Hyperledger Aries Cloud Agent Python (ACA-Py) is a foundation for building Verifiable Credential (VC) ecosystems. It operates in the second and third layers of the Trust Over IP framework (PDF) using DIDComm messaging and Hyperledger Aries protocols. The \"cloud\" in the name means that ACA-Py runs on servers (cloud, enterprise, IoT devices, and so forth), and is not designed to run on mobile devices.

ACA-Py is built on the Aries concepts and features that make up Aries Interop Profile (AIP) 2.0. ACA-Py\u2019s supported Aries protocols include, most importantly, protocols for issuing, verifying, and holding verifiable credentials using both Hyperledger AnonCreds verifiable credential format, and the W3C Standard Verifiable Credential Data Model format using JSON-LD with LD-Signatures and BBS+ Signatures. Coming soon -- issuing and presenting Hyperledger AnonCreds verifiable credentials using the W3C Standard Verifiable Credential Data Model format.

To use ACA-Py you create a business logic controller that \"talks to\" an ACA-Py instance (sending HTTP requests and receiving webhook notifications), and ACA-Py handles the Aries and DIDComm protocols and related functionality. Your controller can be built in any language that supports making and receiving HTTP requests; knowledge of Python is not needed. Together, this means you can focus on building VC solutions using familiar web development technologies, instead of having to learn the nuts and bolts of low-level cryptography and Trust over IP-type Aries protocols.

This checklist-style overview document provides a full list of the features in ACA-Py. The following is a list of some of the core features needed for a production deployment, with a link to detailed information about the capability.

"},{"location":"#multi-tenant","title":"Multi-Tenant","text":"

ACA-Py supports \"multi-tenant\" scenarios. In these scenarios, one (scalable) instance of ACA-Py uses one database instance, and are together capable of managing separate secure storage (for private keys, DIDs, credentials, etc.) for many different actors. This enables (for example) an \"issuer-as-a-service\", where an enterprise may have many VC issuers, each with different identifiers, using the same instance of ACA-Py to interact with VC holders as required. Likewise, an ACA-Py instance could be a \"cloud wallet\" for many holders (e.g. people or organizations) that, for whatever reason, cannot use a mobile device for a wallet. Learn more about multi-tenant deployments here.

"},{"location":"#mediator-service","title":"Mediator Service","text":"

Startup options allow the use of an ACA-Py as an Aries mediator using core Aries protocols to coordinate its mediation role. Such an ACA-Py instance receives, stores and forwards messages to Aries agents that (for example) lack an addressable endpoint on the Internet such as a mobile wallet. A live instance of a public mediator based on ACA-Py is available here from Indicio Technologies. Learn more about deploying a mediator here. See the Aries Mediator Service for a \"best practices\" configuration of an Aries mediator.

"},{"location":"#indy-transaction-endorsing","title":"Indy Transaction Endorsing","text":"

ACA-Py supports a Transaction Endorsement protocol, for agents that don't have write access to an Indy ledger. Endorser support is documented here.

"},{"location":"#scaled-deployments","title":"Scaled Deployments","text":"

ACA-Py supports deployments in scaled environments such as Kubernetes, where ACA-Py and its storage components can be horizontally scaled as needed to handle the load.

"},{"location":"#vc-api-endpoints","title":"VC-API Endpoints","text":"

A set of endpoints conforming to the vc-api specification are included to manage w3c credentials and presentations. They are documented here and a postman demo is available here.

"},{"location":"#example-uses","title":"Example Uses","text":"

The business logic you use with ACA-Py is limited only by your imagination. Possible applications include:

  • An interface to a legacy system to issue verifiable credentials
  • An authentication service based on the presentation of verifiable credential proofs
  • An enterprise wallet to hold and present verifiable credentials about that enterprise
  • A user interface for a person to use a wallet not stored on a mobile device
  • An application embedded in an IoT device, capable of issuing verifiable credentials about collected data
  • A persistent connection to other agents that enables secure messaging and notifications
  • Custom code to implement a new service.
"},{"location":"#getting-started","title":"Getting Started","text":"

For those new to SSI, Aries and ACA-Py, there are a couple of Linux Foundation edX courses that provide a good starting point.

  • Identity in Hyperledger: Indy, Aries and Ursa
  • Becoming a Hyperledger Aries Developer

The latter is the most useful for developers wanting to get a solid basis in using ACA-Py and other Aries Frameworks.

Also included here is a much more concise (but less maintained) Getting Started Guide that will take you from knowing next to nothing about decentralized identity to developing Aries-based business apps and services. You\u2019ll run an Indy ledger (with no ramp-up time), ACA-Py apps and developer-oriented demos. The guide has a table of contents so you can skip the parts you already know.

"},{"location":"#understanding-the-architecture","title":"Understanding the Architecture","text":"

There is an architectural deep dive webinar presented by the ACA-Py team, and slides from the webinar are also available. The picture below gives a quick overview of the architecture, showing an instance of ACA-Py, a controller and the interfaces between the controller and ACA-Py, and the external paths to other agents and public ledgers on the Internet.

You can extend ACA-Py using plug-ins, which can be loaded at runtime. Plug-ins are mentioned in the webinar and are described in more detail here. An ever-expanding set of ACA-Py plugins can be found in the Aries ACA-Py Plugins repository. Check them out -- it might have the very plugin you need!

"},{"location":"#installation-and-usage","title":"Installation and Usage","text":"

Use the \"install and go\" page for developers if you are comfortable with Trust over IP and Aries concepts. ACA-Py can be run with Docker without installation (highly recommended), or can be installed from PyPi. In the repository /demo folder there is a full set of demos for developers to use in getting up to speed quickly. Start with the Traction Workshop to go through a complete ACA-Py-based Issuer-Holder-Verifier flow in about 20 minutes. Next, the Alice-Faber Demo is a great way for developers try a zero-install example of how to use the ACA-Py API to operate a couple of Aries Agents. The Read the Docs overview is also a way to understand the internal modules and APIs that make up an ACA-Py instance.

If you would like to develop on ACA-Py locally, note that we use Poetry for dependency management and packaging. If you are unfamiliar with Poetry, please see our cheat sheet

"},{"location":"#about-the-aca-py-admin-api","title":"About the ACA-Py Admin API","text":"

The overview of ACA-Py\u2019s API is a great starting place for learning about the ACA-Py API when you are starting to build your own controller.

An ACA-Py instance puts together an OpenAPI-documented REST interface based on the protocols that are loaded. This is used by a controller application (written in any language) to manage the behavior of the agent. The controller can initiate actions (e.g. issuing a credential) and can respond to agent events (e.g. sending a presentation request after a connection is accepted). Agent events are delivered to the controller as webhooks to a configured URL.

Technical note: the administrative API exposed by the agent for the controller to use must be protected with an API key (using the --admin-api-key command line arg) or deliberately left unsecured using the --admin-insecure-mode command line arg. The latter should not be used other than in development if the API is not otherwise secured.
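
For example, a controller would include the key on every request to the admin interface; a minimal sketch with the requests library (the admin URL is an assumption, and the x-api-key header value must match what was passed via --admin-api-key):

import requests\n\nADMIN_URL = \"http://localhost:8031\"  # assumed admin endpoint\nHEADERS = {\"x-api-key\": \"my-admin-api-key\"}  # must match --admin-api-key\n\nresp = requests.get(f\"{ADMIN_URL}/status\", headers=HEADERS)\nresp.raise_for_status()\nprint(resp.json())\n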

"},{"location":"#troubleshooting","title":"Troubleshooting","text":"

There are a number of resources for getting help with ACA-Py and troubleshooting any problems you might run into. The Troubleshooting document contains some guidance about issues that have been experienced in the past. Feel free to submit PRs to supplement the troubleshooting document! Searching the ACA-Py GitHub issues may uncover challenges you are having that others have experienced, often with solutions. As well, there is the \"aries-cloudagent-python\" channel on the Hyperledger Discord chat server (invitation here).

"},{"location":"#credit","title":"Credit","text":"

The initial implementation of ACA-Py was developed by the Government of British Columbia\u2019s Digital Trust Team in Canada. To learn more about what\u2019s happening with decentralized identity and digital trust in British Columbia, check out the BC Digital Trust website.

See the MAINTAINERS.md file for a list of the current ACA-Py maintainers, and the guidelines for becoming a Maintainer. We'd love to have you join the team if you are willing and able to carry out the duties of a Maintainer.

"},{"location":"#contributing","title":"Contributing","text":"

Pull requests are welcome! Please read our contributions guide and submit your PRs. We enforce developer certificate of origin (DCO) commit signing \u2014\u00a0guidance on this is available. We also welcome issues submitted about problems you encounter in using ACA-Py.

"},{"location":"#license","title":"License","text":"

Apache License Version 2.0

"},{"location":"CHANGELOG/","title":"Aries Cloud Agent Python Changelog","text":""},{"location":"CHANGELOG/#0121","title":"0.12.1","text":""},{"location":"CHANGELOG/#april-26-2024","title":"April 26, 2024","text":"

Release 0.12.1 is a small patch to cleanup some edge case issues in the handling of Out of Band invitations, revocation notification webhooks, and connection querying uncovered after the 0.12.0 release. Fixes and improvements were also made to the generation of ACA-Py's OpenAPI specifications.

"},{"location":"CHANGELOG/#0121-breaking-changes","title":"0.12.1 Breaking Changes","text":"

There are no breaking changes in this release.

"},{"location":"CHANGELOG/#0121-categorized-list-of-pull-requests","title":"0.12.1 Categorized List of Pull Requests","text":"
  • Out of Band Invitations and Connection Establishment updates/fixes:

    • \ud83d\udc1b Fix ServiceDecorator parsing in oob record handling #2910 ff137
    • fix: consider all resolvable dids in invites \"public\" #2900 dbluhm
    • fix: oob record their_service should be updatable #2897 dbluhm
    • fix: look up conn record by invite msg id instead of key #2891 dbluhm
  • OpenAPI/Swagger updates, fixes and cleanups:

    • Fix api schema mixup in revocation routes #2909 jamshale
    • \ud83c\udfa8 fix typos #2898 ff137
    • \u2b06\ufe0f Upgrade codegen tools used in generate-open-api-specols #2899 ff137
    • \ud83d\udc1b Fix IndyAttrValue model that was dropped from openapi spec #2894 ff137
  • Test and Demo updates:

    • fix Faber demo to use oob with aip10 to support connection reuse #2903 ianco
    • fix: integration tests should use didex 1.1 #2889 dbluhm
  • Credential Exchange updates and fixes:

    • fix: rev notifications on publish pending #2916 dbluhm
  • Endorsement of Indy Transactions fixes:

    • Prevent 500 error when re-promoting DID with endorsement #2885 jamshale
    • Fix ack during for auto endorsement #2883 jamshale
  • Documentation publishing process updates:

    • Some updates to the mkdocs publishing process #2888 swcurran
    • Update GHA so that broken image links work on docs site - without breaking them on GitHub #2852 swcurran
  • Dependencies and Internal Updates:

    • chore(deps): Bump psf/black from 24.4.0 to 24.4.2 in the all-actions group #2924 dependabot bot
    • fix: fixes a regression that requires a log file in multi-tenant mode #2918 amanji
    • Update AnonCreds to 0.2.2 #2917 swcurran
    • chore(deps): Bump aiohttp from 3.9.3 to 3.9.4 dependencies python #2902 dependabot bot
    • chore(deps): Bump idna from 3.4 to 3.7 in /demo/playground/examples dependencies python #2886 dependabot bot
    • chore(deps): Bump psf/black from 24.3.0 to 24.4.0 in the all-actions group dependencies github_actions #2893 dependabot bot
    • chore(deps): Bump idna from 3.6 to 3.7 dependencies python #2887 dependabot bot
    • refactor: logging configs setup #2870 amanji
  • Release management pull requests:

    • 0.12.1 #2926 swcurran
    • 0.12.1rc1 #2921 swcurran
    • 0.12.1rc0 #2912 swcurran
"},{"location":"CHANGELOG/#0120","title":"0.12.0","text":""},{"location":"CHANGELOG/#april-11-2024","title":"April 11, 2024","text":"

Release 0.12.0 is a large release with many new capabilities, feature improvements, upgrades, and bug fixes. Importantly, this release completes the ACA-Py implementation of Aries Interop Profile v2.0, and enables the elimination of unqualified DIDs. While only deprecated for now, all deployments of ACA-Py SHOULD move to using only fully qualified DIDs as soon as possible.

Much progress has been made on did:peer support in this release, with the handling of inbound DID Peer 1 added, and inbound and outbound support for DID Peer 2 and 4. Much attention was also paid to making sure that the Peer DID and DID Exchange capabilities match those of Credo-TS (formerly Aries Framework JavaScript). The completion of that work eliminates the remaining places where \"unqualified\" DIDs were being used, and enables the \"connection reuse\" feature in the Out of Band protocol when using DID Peer 2 and 4 DIDs in invitations. See the document Qualified DIDs for details about how to control the use of DID Peer 2 or 4 in an ACA-Py deployment, and how to eliminate the use of unqualified DIDs. Support for DID Exchange v1.1 has been added to ACA-Py, with support for DID Exchange v1.0 retained, and we've added support for DID Rotation.

Work continues towards supporting ledger agnostic AnonCreds, and the new Hyperledger AnonCreds Rust library. Some of that work is in this release, the rest will be in the next release.

Attention was given in the release to simplifying the handling of JSON-LD Data Integrity Verifiable Credentials.

An important change in this release is the re-organization of the ACA-Py documentation, moving the vast majority of the documents to the folders within the docs folder -- a long overdue change that will allow us to soon publish the documents on https://aca-py.org directly from the ACA-Py repository, rather than from the separate aries-acapy-docs repository currently being used.

A big developer improvement is a revamping of the test handling to eliminate ~2500 warnings that were previously generated in the test suite. Nice job @ff137!

"},{"location":"CHANGELOG/#0120-breaking-changes","title":"0.12.0 Breaking Changes","text":"

A deployment of this release that uses DID Peer 2 and 4 invitations may encounter problems interacting with agents deployed using older Aries protocols. Led by the Aries Working Group, the Aries community is encouraging the upgrade of all ecosystem deployments to accept all commonly used qualified DIDs, including DID Peer 2 and 4. See the document Qualified DIDs for more details about the transition to using only qualified DIDs. If deployments you interact with are still using unqualified DIDs, please encourage them to upgrade as soon as possible.

Specifically, for those upgrading their ACA-Py instance that create Out of Band invitations with more than one handshake_protocol, the protocol for the connection has been removed. See [Issue #2879] for the details of this subtle breaking change.

New deprecation notices were added to ACA-Py on startup and in the OpenAPI/Swagger interface. Those added are listed below. As well, we anticipate 0.12.0 being the last ACA-Py release to include support for the previously deprecated Indy SDK.

  • RFC 0036 Issue Credential v1
    • Migrate to use RFC 0453 Issue Credential v2
  • RFC 0037 Present Proof v1
    • Migrate to use RFC 0454 Present Proof v2
  • RFC 0169 Connections
    • Migrate to use RFC 0023 DID Exchange and 0434 Out-of-Band
  • The use of did:sov:... as a Protocol Doc URI
    • Migrate to use https://didcomm.org/.
"},{"location":"CHANGELOG/#0120-categorized-list-of-pull-requests","title":"0.12.0 Categorized List of Pull Requests","text":"
  • DID Handling and Connection Establishment Updates/Fixes

    • fix: conn proto in invite webhook if known #2880 dbluhm
    • Emit the OOB done event even for multi-use invites #2872 ianco
    • refactor: introduce use_did and use_did_method #2862 dbluhm
    • fix(credo-interop): various didexchange and did:peer related fixes 1.0.0 #2748 dbluhm
    • Change did \u2194 verkey logging on connections #2853 jamshale
    • fix: did exchange multiuse invites respond in kind #2850 dbluhm
    • Support connection re-use for did:peer:2/4 #2823 ianco
    • feat: did-rotate #2816 amanji
    • Author subwallet setup automation #2791 jamshale
    • fix: save multi_use to the DB for OOB invitations #2694 frostyfrog
    • Connection and DIDX Problem Reports #2653 usingtechnology
  • DID Peer and DID Resolver Updates and Fixes

    • Integration test for did:peer #2713 ianco
    • Feature/emit did peer 4 #2696 Jsyro
    • did peer 4 resolution #2692 Jsyro
    • Emit did:peer:2 for didexchange #2687 Jsyro
    • Add did web method type as a default option #2684 PatStLouis
    • feat: add did:jwk resolver #2645 dbluhm
    • feat: support resolving did:peer:1 received in did exchange #2611 dbluhm
  • AnonCreds and Ledger Agnostic AnonCreds RS Changes

    • Prevent revocable cred def being created without tails server #2849 jamshale
    • Anoncreds - support for anoncreds and askar wallets concurrently #2822 jamshale
    • Send revocation list instead of rev_list object - Anoncreds #2821 jamshale
    • Fix anoncreds non-endorsement revocation #2814 jamshale
    • Get and create anoncreds profile when using anoncreds subwallet #2803 jamshale
    • Add anoncreds multitenant endorsement integration tests #2801 jamshale
    • Anoncreds revoke and publish-revocations endorsement #2782 jamshale
    • Upgrade anoncreds to version 0.2.0-dev11 #2763 jamshale
    • Update anoncreds to 0.2.0-dev10 #2758 jamshale
    • Anoncreds - Cred Def and Revocation Endorsement #2752 jamshale
    • Upgrade anoncreds to 0.2.0-dev9 #2741 jamshale
    • Upgrade anoncred-rs to version 0.2.0-dev8 #2734 jamshale
    • Upgrade anoncreds to 0.2.0.dev7 #2719 jamshale
    • Improve api documentation and error handling #2690 jamshale
    • Add unit tests for anoncreds revocation #2688 jamshale
    • Return 404 when schema not found #2683 jamshale
    • Anoncreds - Add unit testing #2672 jamshale
    • Additional anoncreds integration tests AnonCreds #2660 ianco
    • Update integration tests for anoncreds-rs AnonCreds #2651 ianco
    • Initial migration of anoncreds revocation code AnonCreds #2643 ianco
    • Integrate Anoncreds rs into credential and presentation endpoints AnonCreds #2632 ianco
    • Initial code migration from anoncreds-rs branch AnonCreds #2596 ianco
  • Hyperledger Indy ledger related updates and fixes

    • Remove requirement for write ledger in read-only mode. #2836 esune
    • Add known issues section to Multiledger.md documentation #2788 esune
    • fix: update constants in TransactionRecord #2698 amanji
    • Cache TAA by wallet name #2676 jamshale
    • Fix: RevRegEntry Transaction Endorsement 0.11.0 #2558 shaangill025
  • JSON-LD Verifiable Credential/DIF Presentation Exchange updates

    • Add missing VC-DI/LD-Proof verification method option #2867 PatStLouis
    • Revert profile injection for VcLdpManager on vc-api endpoints #2794 PatStLouis
    • Add cached copy of BBS v1 context #2749 andrewwhitehead
    • Update BBS+ context to bypass redirections #2739 swcurran
    • feat: make VcLdpManager pluggable #2706 dbluhm
    • fix: minor type hint corrections for VcLdpManager #2704 dbluhm
    • Remove if condition which checks if the credential.type array is equal to 1 #2670 PatStLouis
    • Feature Suggestion: Include a Reason When Constraints Cannot Be Applied #2630 Ennovate-com
    • refactor: make ldp_vc logic reusable #2533 dbluhm
  • Credential Exchange (Issue, Present) Updates

    • Allow for crids in event payload to be integers #2819 jamshale
    • Create revocation notification after list entry written to ledger #2812 jamshale
    • Remove exception on connectionless presentation problem report handler #2723 loneil
    • Ensure \"preserve_exchange_records\" flags are set. #2664 usingtechnology
    • Slight improvement to credx proof validation error message #2655 ianco
    • Add ConnectionProblemReport handler #2600 usingtechnology
  • Multitenancy Updates and Fixes

    • feature/per tenant settings #2790 amanji
    • Improve Per Tenant Logging: Fix issues around default log file path #2659 shaangill025
  • Other Fixes, Demo, DevContainer and Documentation Fixes

    • chore: propose official deprecations of a couple of features #2856 dbluhm
    • feat: external signature suite provider interface #2835 dbluhm
    • Update GHA so that broken image links work on docs site - without breaking them on GitHub #2852 swcurran
    • Minor updates to the documentation - links #2848 swcurran
    • Update to run_demo script to support Apple M1 CPUs #2843 swcurran
    • Add functionality for building and running agents separately #2845 sarthakvijayvergiya
    • Cleanup of docs #2831 swcurran
    • Create AnonCredsMethods.md #2832 swcurran
    • FIX: GHA update for doc publishing, fix doc file that was blanked #2820 swcurran
    • More updates to get docs publishing #2810 swcurran
    • Eliminate the double workflow event #2811 swcurran
    • Publish docs GHActions tweak #2806 swcurran
    • Update publish-docs to operate on main and on branches prefixed with docs-v #2804 swcurran
    • Add index.html redirector to gh-pages branch #2802 swcurran
    • Demo description of reuse in establishing a connection #2787 swcurran
    • Reorganize the ACA-Py Documentation Files #2765 swcurran
    • Tweaks to MD files to enable aca-py.org publishing #2771 swcurran
    • Update devcontainer documentation #2729 jamshale
    • Update the SupportedRFCs Document to be up to date #2722 swcurran
    • Fix incorrect Sphinx search library version reference #2716 swcurran
    • Update RTD requirements after security vulnerability recorded #2712 swcurran
    • Update legacy bcgovimages references. #2700 WadeBarnes
    • fix: link to raw content change from master to main #2663 Ennovate-com
    • fix: open-api generator script #2661 dbluhm
    • Update the ReadTheDocs config in case we do another 0.10.x release #2629 swcurran
  • Dependencies and Internal Updates

    • Add wallet.type config to /settings endpoint #2877 jamshale
    • chore(deps): Bump pillow from 10.2.0 to 10.3.0 dependencies python #2869 dependabot bot
    • Fix run_tests script #2866 ianco
    • fix: states for discovery record to emit webhook #2858 dbluhm
    • Increase promote did retries #2854 jamshale
    • chore(deps-dev): Bump black from 24.1.1 to 24.3.0 dependencies python #2847 dependabot bot
    • chore(deps): Bump the all-actions group with 1 update dependencies github_actions #2844 dependabot bot
    • patch for #2781: User Agent header in doc loader #2824 gmulhearn-anonyome
    • chore(deps): Bump jwcrypto from 1.5.4 to 1.5.6 dependencies python #2833 dependabot bot
    • chore(deps): Bump cryptography from 42.0.3 to 42.0.4 dependencies python #2805 dependabot bot
    • chore(deps): Bump the all-actions group with 3 updates dependencies github_actions #2815 dependabot bot
    • Change middleware registration order #2796 PatStLouis
    • Bump pyld version to 2.0.4 #2795 PatStLouis
    • Revert profile inject #2789 jamshale
    • Move emit events to profile and delay sending until after commit #2760 ianco
    • fix: partial revert of ConnRecord schema change 1.0.0 #2746 dbluhm
    • chore(deps): Bump aiohttp from 3.9.1 to 3.9.2 dependencies #2745 dependabot bot
    • bump pydid to v 0.4.3 #2737 PatStLouis
    • Fix subwallet record removal #2721 andrewwhitehead
    • chore(deps): Bump jinja2 from 3.1.2 to 3.1.3 dependencies #2707 dependabot bot
    • feat: inject profile #2705 dbluhm
    • Remove tiny-vim from being added to the container image to reduce reported vulnerabilities from scanning #2699 swcurran
    • chore(deps): Bump jwcrypto from 1.5.0 to 1.5.1 dependencies #2689 dependabot bot
    • Update dependencies #2686 andrewwhitehead
    • Fix: Change To Use Timezone Aware UTC datetime #2679 Ennovate-com
    • fix: update broken demo dependency #2638 mrkaurelius
    • Bump cryptography from 41.0.5 to 41.0.6 dependencies #2636 dependabot bot
    • Bump aiohttp from 3.8.6 to 3.9.0 dependencies #2635 dependabot bot
  • CI/CD, Testing, and Developer Tools/Productivity Updates

    • Fix deprecation warnings #2756 ff137
    • chore(deps): Bump the all-actions group with 10 updates dependencies #2784 dependabot bot
    • Add Dependabot configuration #2783 WadeBarnes
    • Implement B006 rule #2775 jamshale
    • ⬆️ Upgrade pytest to 8.0 #2773 ff137
    • ⬆️ Update pytest-asyncio to 0.23.4 #2764 ff137
    • Remove asynctest dependency and fix \"coroutine not awaited\" warnings #2755 ff137
    • Fix pytest collection errors when anoncreds package is not installed #2750 andrewwhitehead
    • chore: pin black version #2747 dbluhm
    • Tweak scope of GHA integration tests #2662 ianco
    • Update snyk workflow to execute on Pull Request #2658 usingtechnology
  • Release management pull requests

    • 0.12.0 #2882 swcurran
    • 0.12.0rc3 #2878 swcurran
    • 0.12.0rc2 #2825 swcurran
    • 0.12.0rc1 #2800 swcurran
    • 0.12.0rc1 #2799 swcurran
    • 0.12.0rc0 #2732 swcurran
"},{"location":"CHANGELOG/#0110","title":"0.11.0","text":""},{"location":"CHANGELOG/#november-24-2023","title":"November 24, 2023","text":"

Release 0.11.0 is a relatively large release of new features, fixes, and internal updates. 0.11.0 is planned to be the last significant update before we begin the transition to using the ledger-agnostic AnonCreds Rust in a release that is expected to bring Admin/Controller API changes. We plan to do patches to the 0.11.x branch while the transition is made to using [Anoncreds Rust].

An important addition to ACA-Py is support for signing and verifying SD-JWT verifiable credentials. We expect this to be the first of the changes to extend ACA-Py to support OpenID4VC protocols.

This release and Release 0.10.5 contain a high-priority fix to correct an issue in the handling of JSON-LD presentation verifications, where the verification status of the presentation.proof in the Verifiable Presentation was not included when determining the verification value (true or false) of the overall presentation. A forthcoming security advisory will cover the details. We recommend that anyone using JSON-LD presentations upgrade to one of these versions of ACA-Py as soon as possible.

In the CI/CD realm, substantial changes were applied to the codebase in switching from:

  • pip to Poetry for packaging and dependency management,
  • Flake8 to Ruff for linting,
  • asynctest to IsolatedAsyncioTestCase and AsyncMock objects now included in Python's built-in unittest package for unit testing.

These are necessary and important modernization changes, with the latter two triggering many (largely mechanical) changes to the codebase.

"},{"location":"CHANGELOG/#0110-breaking-changes","title":"0.11.0 Breaking Changes","text":"

In addition to the impacts of the change for developers in switching from pip to Poetry, the only significant breaking change is the (overdue) transition of ACA-Py to always use the new DIDComm message type prefix, changing the DID Message prefix from the old hardcoded did:sov:BzCbsNYhMrjHiqZDTUASHg;spec to the new hardcoded https://didcomm.org value, and using the new DIDComm MIME type in place of the old. The vast majority (if not all) of Aries deployments have long since been updated to accept both values, so this change simply forces the use of the newer value when sending messages. In making this update, we retained the configuration parameters most deployments were already using (--emit-new-didcomm-prefix and --emit-new-didcomm-mime-type) but updated the code to treat them as true even when they are not set. See [PR #2517].
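
For example, the message type URI of an Issue Credential v1 offer message (a standard Aries message type, shown here only to illustrate the prefix change) moves from the old form to the new:

did:sov:BzCbsNYhMrjHiqZDTUASHg;spec/issue-credential/1.0/offer-credential
https://didcomm.org/issue-credential/1.0/offer-credential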

ACA-Py's JSON-LD verifiable credential handling has been updated to pre-load the base JSON-LD contexts into the repository code so they are not fetched at run time. This is a security best practice for JSON-LD, and prevents errors in production when, from time to time, the JSON-LD contexts are unavailable because of outages of the web servers where they are hosted. See [PR #2587].
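
The technique looks roughly like the following minimal sketch, which is not ACA-Py's actual implementation: it uses the pyld library, and the STATIC_CONTEXTS mapping and its file path are hypothetical.

import json
from pyld import jsonld

# Hypothetical local copies of base contexts, keyed by context URL.
STATIC_CONTEXTS = {
    "https://www.w3.org/2018/credentials/v1": "static/credentials_v1.jsonld",
}

# Anything not in the static cache falls back to a normal network fetch.
_network_loader = jsonld.requests_document_loader()

def static_cache_loader(url, options=None):
    path = STATIC_CONTEXTS.get(url)
    if path is not None:
        # Serve the context from the bundled static copy; no network call.
        with open(path) as f:
            return {
                "contentType": "application/ld+json",
                "contextUrl": None,
                "documentUrl": url,
                "document": json.load(f),
            }
    return _network_loader(url, options or {})

jsonld.set_document_loader(static_cache_loader)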

A Problem Report message is now sent when a request for a credential is received and there is no associated Credential Exchange Record. This may happen, for example, if an issuer decides to delete a Credential Exchange Record that has not been answered for a long time, and the holder responds after the deletion. See [PR #2577].

"},{"location":"CHANGELOG/#0110-categorized-list-of-pull-requests","title":"0.11.0 Categorized List of Pull Requests","text":"
  • DIDComm Messaging Improvements/Fixes
    • Change arg_parse to always set --emit-new-didcomm-prefix and --emit-new-didcomm-mime-type to true #2517 swcurran
  • DID Handling and Connection Establishment Updates/Fixes
    • Goal and Goal Code in invitation URL. #2591 usingtechnology
    • refactor: use did-peer-2 instead of peerdid #2561 dbluhm
    • Fix: Problem Report Before Exchange Established #2519 Ennovate-com
    • fix: issue #2434: Change DIDExchange States to Match rfc160 #2461 anwalker293
  • DID Peer and DID Resolver Updates and Fixes
    • fix: unique ids for services in legacy peer #2476 dbluhm
    • peer did ⅔ resolution enhancement #2472 Jsyro
    • feat: add timeout to did resolver resolve method #2464 dbluhm
  • ACA-Py as a DIDComm Mediator Updates and Fixes
    • fix: routing behind mediator #2536 dbluhm
    • fix: mediation routing keys as did key #2516 dbluhm
    • refactor: drop mediator_terms and recipient_terms #2515 dbluhm
  • Fixes to Upgrades
    • 🐛 fix wallet_update when only extra_settings requested #2612 ff137
  • Hyperledger Indy ledger related updates and fixes
    • fix: taa rough timestamp timezone from datetime #2554 dbluhm
    • 🎨 clarify LedgerError message when TAA is required and not accepted #2545 ff137
    • Feat: Upgrade from tags and fix issue with legacy IssuerRevRegRecords [<=v0.5.2] #2486 shaangill025
    • Bugfix: Issue with write ledger pool when performing Accumulator sync #2480 shaangill025
    • Issue #2419 InvalidClientTaaAcceptanceError time too precise error if container timezone is not UTC #2420 Ennovate-com
  • OpenID4VC / SD-JWT Updates
    • chore: point to official sd-jwt lib release #2573 dbluhm
    • Feat/sd jwt implementation #2487 cjhowland
  • JSON-LD Verifiable Credential/Presentation updates
    • fix: report presentation result #2615 dbluhm
    • Fix Issue #2589 TypeError When There Are No Nested Requirements #2590 Ennovate-com
    • feat: use a local static cache for commonly used contexts #2587 chumbert
    • Issue #2488 KeyError raised when Subject ID is not a URI #2490 Ennovate-com
  • Credential Exchange (Issue, Present) Updates
    • Default connection_id to None to account for Connectionless Proofs #2605 popkinj
    • Send Problem report when CredEx not found #2577 usingtechnology
    • fix: clean up requests and invites #2560 dbluhm
  • Multitenancy Updates and Fixes
    • Feat: Support subwallet upgradation using the Upgrade command #2529 shaangill025
  • Other Fixes, Demo, DevContainer and Documentation Fixes
    • fix: wallet type help text out of date #2618 dbluhm
    • fix: typos #2614 omahs
    • black formatter extension configuration update #2603 usingtechnology
    • Update Devcontainer pytest ruff black #2602 usingtechnology
    • Issue 2570 devcontainer ruff, black and pytest #2595 usingtechnology
    • chore: correct type hints on base record #2604 dbluhm
    • Playground needs optionally external network #2564 usingtechnology
    • Issue 2555 playground scripts readme #2563 usingtechnology
    • Update demo/playground scripts #2562 usingtechnology
    • Update .readthedocs.yaml #2548 swcurran
    • Update .readthedocs.yaml #2547 swcurran
    • fix: correct minor typos #2544 Ennovate-com
    • Update steps for Manually Creating Revocation Registries #2491 WadeBarnes
  • Dependencies and Internal Updates
    • chore: bump pydid version #2626 dbluhm
    • chore: dependency updates #2565 dbluhm
    • chore(deps): Bump urllib3 from 2.0.6 to 2.0.7 dependencies #2552 dependabot bot
    • chore(deps): Bump urllib3 from 2.0.6 to 2.0.7 in /demo/playground/scripts dependencies #2551 dependabot bot
    • chore: update pydid #2527 dbluhm
    • chore(deps): Bump urllib3 from 2.0.5 to 2.0.6 dependencies #2525 dependabot bot
    • chore(deps): Bump urllib3 from 2.0.2 to 2.0.6 in /demo/playground/scripts dependencies #2524 dependabot bot
    • Avoid multiple open wallet connections #2521 andrewwhitehead
    • Remove unused dependencies #2510 andrewwhitehead
    • Use correct rust log level in dockerfiles #2499 loneil
    • fix: run tests script copying local env #2495 dbluhm
    • Update devcontainer to read version from aries-cloudagent package #2483 usingtechnology
    • Update Python image version to 3.9.18 #2456 WadeBarnes
    • Remove old routing protocol code #2466 dbluhm
  • CI/CD, Testing, and Developer Tools/Productivity Updates
    • fix: drop asynctest 0.11.0 #2566 dbluhm
    • Dockerfile.indy - Include aries_cloudagent code into build #2584 usingtechnology
    • fix: version should be set by pyproject.toml #2471 dbluhm
    • chore: add black back in as a dev dep #2465 dbluhm
    • Swap out flake8 in favor of Ruff #2438 dbluhm
  • Release management pull requests
    • 0.11.0 #2627 swcurran
    • 0.11.0rc2 #2613 swcurran
    • 0.11.0-rc1 #2576 swcurran
    • 0.11.0-rc0 #2575 swcurran
"},{"location":"CHANGELOG/#2289-migrate-to-poetry-2436-gavinok","title":"2289 Migrate to Poetry #2436 Gavinok","text":""},{"location":"CHANGELOG/#0105","title":"0.10.5","text":""},{"location":"CHANGELOG/#november-21-2023","title":"November 21, 2023","text":"

Release 0.10.5 is a high-priority patch release to correct an issue in the handling of JSON-LD presentation verifications, where the verification status of the presentation.proof in the Verifiable Presentation was not included when determining the verification value (true or false) of the overall presentation. A forthcoming security advisory will cover the details.

We recommend that anyone using JSON-LD presentations upgrade to this version of ACA-Py as soon as possible.

"},{"location":"CHANGELOG/#0105-categorized-list-of-pull-requests","title":"0.10.5 Categorized List of Pull Requests","text":"
  • JSON-LD Credential Exchange (Issue, Present) Updates
    • fix(backport): report presentation result #2622 dbluhm
  • Release management pull requests
    • 0.10.5 #2623 swcurran
"},{"location":"CHANGELOG/#0104","title":"0.10.4","text":""},{"location":"CHANGELOG/#october-9-2023","title":"October 9, 2023","text":"

Release 0.10.4 is a patch release to correct an issue with the handling of did:key routing keys in some mediator scenarios, notably with the use of [Aries Framework Kotlin]. See the details in the PR and [Issue #2531 Routing for agents behind a aca-py based mediator is broken].

Thanks to codespree for raising the issue and providing the fix.


"},{"location":"CHANGELOG/#0104-categorized-list-of-pull-requests","title":"0.10.4 Categorized List of Pull Requests","text":"
  • DID Handling and Connection Establishment Updates/Fixes
    • fix: routing behind mediator #2536 dbluhm
  • Release management pull requests
    • 0.10.4 #2539 swcurran
"},{"location":"CHANGELOG/#0103","title":"0.10.3","text":""},{"location":"CHANGELOG/#september-29-2023","title":"September 29, 2023","text":"

Release 0.10.3 is a patch release to add an upgrade process for very old versions of Aries Cloud Agent Python (circa 0.5.2). If you have a long-time deployment of an issuer that uses revocation, this release can correct internal data (tags in secure storage) related to revocation registries. Details about the triggering problem can be found in [Issue #2485].

The upgrade is applied by running the following command for the ACA-Py instance to be upgraded:

./scripts/run_docker upgrade --force-upgrade --named-tag fix_issue_rev_reg

"},{"location":"CHANGELOG/#0103-categorized-list-of-pull-requests","title":"0.10.3 Categorized List of Pull Requests","text":"
  • Credential Exchange (Issue, Present) Updates
    • Feat: Upgrade from tags and fix issue with legacy IssuerRevRegRecords [<=v0.5.2] #2486 shaangill025
  • Release management pull requests
    • 0.10.3 #2522 swcurran
"},{"location":"CHANGELOG/#0102","title":"0.10.2","text":""},{"location":"CHANGELOG/#september-22-2023","title":"September 22, 2023","text":"

Release 0.10.2 is a patch release that addresses three specific regressions found in deploying Release 0.10.1. The regressions fixed are:

  • An ACA-Py instance upgraded to 0.10.1 could not message an agent with which it had an existing connection if that connection had both an http and a ws (websocket) service endpoint with the same ID. One such scenario is an ACA-Py issuer connecting to an Endorser that has both http and ws service endpoints. The updates made in 0.10.1 to improve ACA-Py DID resolution did not account for this scenario and needed a tweak to work ([Issue #2474], [PR #2475]).
  • The "fix revocation registry" endpoint, used to fix scenarios where an Issuer's local revocation registry state is out of sync with the ledger, was broken by some code added to support a single ACA-Py instance writing to different ledgers ([Issue #2477], [PR #2480]).
  • The version of the PyDID library we were using did not handle some unexpected DID resolution use cases encountered with mediators. The PyDID library version dependency was updated in [PR #2500].
"},{"location":"CHANGELOG/#0102-categorized-list-of-pull-requests","title":"0.10.2 Categorized List of Pull Requests","text":"
  • DID Handling and Connection Establishment Updates/Fixes
    • LegacyPeerDIDResolver: erroneously assigning same ID to multiple services #2475 dbluhm
    • fix: update pydid #2500 dbluhm
  • Credential Exchange (Issue, Present) Updates
    • Bugfix: Issue with write ledger pool when performing Accumulator sync #2480 shaangill025
  • Release management pull requests
    • 0.10.2 #2509 swcurran
    • 0.10.2-rc0 #2484 swcurran
    • 0.10.2 Patch Release - fix issue #2475, #2477 #2482 shaangill025
"},{"location":"CHANGELOG/#0101","title":"0.10.1","text":""},{"location":"CHANGELOG/#august-29-2023","title":"August 29, 2023","text":"

Release 0.10.1 contains a breaking change, an important fix for a regression introduced in 0.8.2 that impacts certain deployments, and a number of fixes and updates. Included in the updates is a significant internal reorganization of the DID and connection management code that was done to enable more flexible uses of different DID Methods, such as being able to use did:web DIDs for DIDComm messaging connections. The work also paves the way for coming updates related to support for did:peer DIDs for DIDComm. For details on the change see [PR #2409], which includes some of the best pull request documentation ever created.

Release 0.10.1 has the same contents as 0.10.0. An error on PyPi prevented the 0.10.0 release from being properly uploaded because of an existing file of the same name. We immediately released 0.10.1 as a replacement.

The regression fix is for ACA-Py deployments that use multi-use invitations but do NOT use the --auto-accept-connection-requests flag/processing. A change in 0.8.2 (PR [#2223]) suppressed an extra webhook event firing during the processing after receiving a connection request. An unexpected side effect of that change was that the subsequent webhook event also did not fire, and as a result, the controller did not get any event signalling a new connection request had been received via the multi-use invitation. The update in this release ensures the proper event fires and the controller receives the webhook.

See below for the breaking changes and a categorized list of the pull requests included in this release.

Updates in the CI/CD area include adding the publishing of a nightly container image that includes any changes in the main branch since the last nightly was published. This allows getting the \"latest and greatest\" code via a container image vs. having to install ACA-Py from the repository. In addition, Snyk scanning was added to the CI pipeline, and Indy SDK tests were removed from the pipeline.

"},{"location":"CHANGELOG/#0101-breaking-changes","title":"0.10.1 Breaking Changes","text":"

[#2352] is a breaking change related to the storage of presentation exchange records in ACA-Py. In previous releases, presentation exchange protocol state data records were retained in ACA-Py secure storage after the completion of protocol instances. With this release, the default behavior changes to deleting those records unless the --preserve-exchange-records flag is set in the configuration. This extends the use of that flag, which previously applied only to issue credential records, to match its initial intention: that it cover both issue credential and present proof exchanges. The best practice for ACA-Py is that the controller (business logic) store any long-lasting business information needed for the service that is using the Aries agent, and that ACA-Py storage be used only for data necessary for the operation of the agent. In particular, protocol state data should be held in ACA-Py only as long as the protocol is running (as it is needed by ACA-Py); once a protocol instance completes, the controller should extract and store the business information from the protocol state before it is deleted from ACA-Py storage.
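
For example, a deployment that wants to keep completed exchange records would add the flag at startup (a minimal sketch; all other startup settings are elided):

aca-py start <other startup settings> --preserve-exchange-records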

"},{"location":"CHANGELOG/#0100-categorized-list-of-pull-requests","title":"0.10.0 Categorized List of Pull Requests","text":"
  • DIDComm Messaging Improvements/Fixes
    • fix: outbound send status missing on path #2393 dbluhm
    • fix: keylist update response race condition #2391 dbluhm
  • DID Handling and Connection Establishment Updates/Fixes
    • fix: handle stored afgo and findy docs in corrections #2450 dbluhm
    • chore: relax connections filter DID format #2451 chumbert
    • fix: ignore duplicate record errors on add key #2447 dbluhm
    • fix: more diddoc corrections #2446 dbluhm
    • feat: resolve connection targets and permit connecting via public DID #2409 dbluhm
    • feat: add legacy peer did resolver #2404 dbluhm
    • Fix: Ensure event/webhook is emitted for multi-use invitations #2413 esune
    • feat: add DID Exchange specific problem reports and reject endpoint #2394 dbluhm
    • fix: additional tweaks for did:web and other methods as public DIDs #2392 dbluhm
    • Fix empty ServiceDecorator in OobRecord causing 422 Unprocessable Entity Error #2362 ff137
    • Feat: Added support for Ed25519Signature2020 signature type and Ed25519VerificationKey2020 #2241 dkulic
  • Upgrading to Aries Askar Updates
    • Add symlink to /home/indy/.indy_client for backwards compatibility #2443 esune
  • Credential Exchange (Issue, Present) Updates
    • fix: ensure request matches offer in JSON-LD exchanges, if sent #2341 dbluhm
    • BREAKING Extend --preserve-exchange-records to include Presentation Exchange. #2352 usingtechnology
    • Correct the response type in send_rev_reg_def #2355 ff137
  • Multitenancy Updates and Fixes
    • Multitenant check endorser_info before saving #2395 usingtechnology
    • Feat: Support Selectable Write Ledger #2339 shaangill025
  • Other Fixes, Demo, and Documentation Fixes
    • Redis Plugins [redis_cache & redis_queue] documentation and docker related updates #1937 shaangill025
    • Chore: fix marshmallow warnings #2398 ff137
    • Upgrade pre-commit and flake8 dependencies; fix flake8 warnings #2399 ff137
    • Corrected typo on mediator invitation configuration argument #2365 jorgefl0
    • Add workaround for ARM based macs #2313 finnformica
  • Dependencies and Internal Updates
    • chore(deps): Bump certifi from 2023.5.7 to 2023.7.22 in /demo/playground/scripts dependencies #2354 dependabot bot
  • CI/CD and Developer Tools/Productivity Updates
    • Fix for nightly tests failing on Python 3.10 #2435 Gavinok
    • Don't run Snyk on forks #2429 ryjones
    • Issue #2250 Nightly publish workflow #2421 Gavinok
    • Enable Snyk scanning #2418 ryjones
    • Remove Indy tests from workflows #2415 dbluhm
  • Release management pull requests
    • 0.10.1 #2454 swcurran
    • 0.10.0 #2452 swcurran
    • 0.10.0-rc2 #2448 swcurran
    • 0.10.0-rc1 #2442 swcurran
    • 0.10.0-rc0 #2414 swcurran
"},{"location":"CHANGELOG/#0100","title":"0.10.0","text":""},{"location":"CHANGELOG/#august-29-2023_1","title":"August 29, 2023","text":"

Release 0.10.1 has the same contents as 0.10.0. An error on PyPi prevented the 0.10.0 release from being properly uploaded because of an existing file of the same name. We immediately released 0.10.1 as a replacement.

"},{"location":"CHANGELOG/#090","title":"0.9.0","text":""},{"location":"CHANGELOG/#july-24-2023","title":"July 24, 2023","text":"

Release 0.9.0 is an important upgrade that changes (PR [#2302]) the dependency on the now-archived Hyperledger Ursa project to its updated, improved replacement, AnonCreds CL-Signatures. This important change is ONLY available when using Aries Askar as the wallet type, which brings in both [Indy VDR] and the CL-Signatures via the latest version of CredX from the indy-shared-rs repository. The update is NOT available to those that are using the Indy SDK. All new deployments of ACA-Py SHOULD use Aries Askar. Further, we strongly recommend that all deployments using the Indy SDK with ACA-Py upgrade their installation to use Aries Askar and the related components using the migration scripts available. An Indy SDK to Askar migration document has been added to the aca-py.org documentation site, and a deprecation warning has been added to the ACA-Py startup.

The second big change in this release is that we have upgraded the primary Python version from 3.6 to 3.9 (PR [#2247]). In this case, "primary" means that Python 3.9 is used to run the unit and integration tests on all Pull Requests. We also do nightly runs of the main branch using Python 3.10. As of this release we have dropped support for Python 3.6, 3.7, and 3.8, and introduced new dependencies that are not supported in those versions of Python. For those that use the published ACA-Py container images, the upgrade should be easily handled. If you are pulling ACA-Py into your own image, or a non-containerized environment, this is a breaking change that you will need to address.

Please see the next section for all breaking changes, and the subsequent section for a categorized list of all pull requests in this release.

"},{"location":"CHANGELOG/#breaking-changes","title":"Breaking Changes","text":"

In addition to the breaking Python 3.6 to 3.9 upgrade, there are two other breaking changes that may impact some deployments.

[#2034] allows for additional flexibility in using public DIDs in invitations, and adds a restriction that \"implicit\" invitations must be proactively enabled using a flag (--requests-through-public-did). Previously, such requests would always be accepted if --auto-accept was enabled, which could lead to unexpected connections being established.

[#2170] is a change to improve message handling in the face of delivery errors when using a persistent queue implementation such as the ACA-Py Redis Plugin. If you are using the Redis plugin, you MUST upgrade to Redis Plugin Release 0.1.0 in conjunction with deploying this ACA-Py release. For those using their own persistent queue solution, see the PR [#2170] comments for information about changes you might need to make to your deployment.

"},{"location":"CHANGELOG/#categorized-list-of-pull-requests","title":"Categorized List of Pull Requests","text":"
  • DIDComm Messaging Improvements/Fixes
    • BREAKING: feat: get queued outbound message in transport handle message #2170 dbluhm
  • DID Handling and Connection Establishment Updates/Fixes
    • Allow any did to be public #2295 mkempa
    • Feat: Added support for Ed25519Signature2020 signature type and Ed25519VerificationKey2020 #2241 dkulic
    • Add Goal and Goal Code to OOB and DIDex Request #2294 usingtechnology
    • Fix routing in set public did #2288 mkempa
    • Fix: Do not replace public verkey on mediator #2269 mkempa
    • BREAKING: Allow multi-use public invites and public invites with metadata #2034 mepeltier
    • fix: public did mediator routing keys as did keys #1977 dbluhm
  • Credential Exchange (Issue, Present) Updates
    • Add revocation registry rotate to faber demo #2333 usingtechnology
    • Update to indy-credx 1.0 #2302 andrewwhitehead
    • feat(anoncreds): Implement automated setup of revocation #2292 dbluhm
    • fix: schema class can set Meta.unknown #1885 dbluhm
    • Respect auto-verify-presentation flag in present proof v1 and v2 #2097 dbluhm
    • Feature: JWT Sign and Verify Admin Endpoints with DID Support #2300 burdettadam
  • Multitenancy Updates and Fixes
    • Fix: Track endorser and author roles in per-tenant settings #2331 shaangill025
    • Added base wallet provisioning details to Multitenancy.md #2328 esune
  • Other Fixes, Demo, and Documentation Fixes
    • Add more context to the ACA-Py Revocation handling documentation #2343 swcurran
    • Document the Indy SDK to Askar Migration process #2340 swcurran
    • Add revocation registry rotate to faber demo #2333 usingtechnology
    • chore: add indy deprecation warnings #2332 dbluhm
    • Fix alice/faber demo execution #2305 andrewwhitehead
    • Add .indy_client folder to Askar only image. #2308 WadeBarnes
    • Add build step for indy-base image in run_demo #2299 usingtechnology
    • Webhook over websocket clarification #2287 dbluhm
  • ACA-Py Deployment Upgrade Changes
    • Add Explicit/Offline marking mechanism for Upgrade #2204 shaangill025
  • Plugin Handling Updates
    • Feature: Add the ability to deny specific plugins from loading 0.7.4 #1737 frostyfrog
  • Dependencies and Internal Updates
    • upgrade pyjwt to latest; introduce leeway to jwt.decode #2335 ff137
    • upgrade requests to latest #2336 ff137
    • upgrade packaging to latest #2334 ff137
    • chore: update PyYAML #2329 dbluhm
    • chore(deps): Bump aiohttp from 3.8.4 to 3.8.5 in /demo/playground/scripts dependencies #2325 dependabot bot
    • ⬆️ upgrade marshmallow to latest #2322 ff137
    • fix: use python 3.9 in run_docker #2291 dbluhm
    • BREAKING!: drop python 3.6 support #2247 dbluhm
    • Minor revisions to the README.md and DevReadMe.md #2272 swcurran
  • ACA-Py Administrative Updates
    • Updating Maintainers list to be accurate and using the TOC format #2258 swcurran
  • CI/CD and Developer Tools/Productivity Updates
    • Cancel in-progress workflows when PR is updated #2303 andrewwhitehead
    • ci: add gha for pr-tests #2058 dbluhm
    • Add devcontainer for ACA-Py #2267 usingtechnology
    • Docker images and GHA for publishing images help wanted #2076 dbluhm
    • ci: test additional versions of python nightly #2059 dbluhm
  • Release management pull requests
    • 0.9.0 #2344 swcurran
    • 0.9.0-rc0 #2338 swcurran
"},{"location":"CHANGELOG/#082","title":"0.8.2","text":""},{"location":"CHANGELOG/#june-29-2023","title":"June 29, 2023","text":"

Release 0.8.2 contains a number of minor fixes and updates to ACA-Py, including the correction of a regression in Release 0.8.0 related to the use of plugins (see [#2255]). Highlights include making it easier to use tracing in a development environment to collect detailed performance information about what is going on within ACA-Py.

This release pulls in indy-shared-rs Release 3.3, which fixes a serious issue in AnonCreds verification, as described in issue [#2036], where the verification of a presentation with multiple revocable credentials fails when using Aries Askar and the other shared components. This issue occurs only when using Aries Askar and indy-credx Release 3.3.

An important new feature in this release is the ability to set some instance configuration settings at the tenant level of a multi-tenant deployment. See PR [#2233].

There are no breaking changes in this release.

"},{"location":"CHANGELOG/#categorized-list-of-pull-requests_1","title":"Categorized List of Pull Requests","text":"
  • Connections Fixes/Updates
    • Resolve definitions.py fix to fix backwards compatibility break in plugins #2255 usingtechnology
    • Add support for JsonWebKey2020 for the connection invitations #2173 dkulic
    • fix: only cache completed connection targets #2240 dbluhm
    • Connection target should not be limited only to indy dids #2229 dkulic
    • Disable webhook trigger on initial response to multi-use connection invitation #2223 esune
  • Credential Exchange (Issue, Present) Updates
    • Pass document loader to jsonld.expand #2175 andrewwhitehead
  • Multi-tenancy fixes/updates
    • Allow Configuration Settings on a per-tenant basis #2233 shaangill025
    • stand up multiple agents (single and multi) for local development and testing #2230 usingtechnology
    • Multi-tenant self-managed mediation verkey lookup #2232 usingtechnology
    • fix: route multitenant connectionless oob invitation #2243 TimoGlastra
    • Fix multitenant/mediation in demo #2075 ianco
  • Other Bug and Documentation Fixes
    • Assign ~thread.thid with thread_id value #2261 usingtechnology
    • Fix: Do not replace public verkey on mediator #2269 mkempa
    • Provide an optional Profile to the verification key strategy #2265 yvgny
    • refactor: Extract verification method ID generation to a separate class #2235 yvgny
    • Create .readthedocs.yaml file #2268 swcurran
    • feat(did creation route): reject unregistered did methods #2262 chumbert
    • ./run_demo performance -c 1 --mediation --timing --trace-log #2245 usingtechnology
    • Fix formatting and grammatical errors in different readme's #2222 ff137
    • Fix broken link in README #2221 ff137
    • fix: run only on main, forks ok #2166 anwalker293
    • Update Alice Wants a JSON-LD Credential to fix invocation #2219 swcurran
  • Dependencies and Internal Updates
    • Bump requests from 2.30.0 to 2.31.0 in /demo/playground/scripts dependencies #2238 dependabot bot
    • Upgrade codegen tools in scripts/generate-open-api-spec and publish Swagger 2.0 and OpenAPI 3.0 specs #2246 ff137
  • ACA-Py Administrative Updates
    • Propose adding Jason Sherman usingtechnology as a Maintainer #2263 swcurran
    • Updating Maintainers list to be accurate and using the TOC format #2258 swcurran
  • Message Tracing/Timing Updates
    • Add updated ELK stack for demos. #2236 usingtechnology
  • Release management pull requests
    • 0.8.2 #2285 swcurran
    • 0.8.2-rc2 #2284 swcurran
    • 0.8.2-rc1 #2282 swcurran
    • 0.8.2-rc0 #2260 swcurran
"},{"location":"CHANGELOG/#081","title":"0.8.1","text":""},{"location":"CHANGELOG/#april-5-2023","title":"April 5, 2023","text":"

Version 0.8.1 is an urgent update to Release 0.8.0 to address an inability to execute the upgrade command. The upgrade command is needed for 0.8.0 Pull Request [#2116] - "UPGRADE: Fix multi-use invitation performance", which is useful for (at least) deployments of ACA-Py as a mediator. In this release, the upgrade process has been revamped and documented in Upgrading ACA-Py.

Key points about upgrading for those with production, pre-0.8.1 ACA-Py deployments:

  • Upgrades now happen automatically on startup, when needed.
  • The version of the last executed upgrade, even if it is a \"no change\" upgrade, is put into secure storage and is used to detect when future upgrades are needed.
    • Upgrades are needed when the running version is greater than the version in secure storage.
  • If you have an existing, pre-0.8.1 deployment with many connection records, there may be a delay in starting as an upgrade will be run that loads and saves every connection record, updating the data in the record in the process.
    • A mechanism is to be added (see Issue #2201) to prevent an upgrade from running automatically when it should instead be run via the upgrade command. To date, there has been no need for this feature.
  • See the Upgrading ACA-Py document for more details.
"},{"location":"CHANGELOG/#postgres-support-with-aries-askar","title":"Postgres Support with Aries Askar","text":"

Recent changes to Aries Askar have resulted in Askar supporting Postgres version 11 and greater. If you are on Postgres 10 or earlier and want to upgrade to use Askar, you must migrate your database to at least Postgres 11.

We have also noted that in some container orchestration environments, such as Red Hat's OpenShift and possibly other Kubernetes distributions, Askar using Postgres versions greater than 14 does not install correctly. Please monitor [Issue #2199] for an update to this limitation. We have found that Postgres 15 does install correctly in other environments (such as in docker compose setups).

"},{"location":"CHANGELOG/#categorized-list-of-pull-requests_2","title":"Categorized List of Pull Requests","text":"
  • Fixes for the upgrade Command
    • Change upgrade definition file entry from 0.8.0 to 0.8.1 #2203 swcurran
    • Add Upgrading ACA-Py document #2200 swcurran
    • Fix: Indy WalletAlreadyOpenedError during upgrade process #2196 shaangill025
    • Fix: Resolve Upgrade Config file in Container #2193 shaangill025
    • Update and automate ACA-Py upgrade process #2185 shaangill025
    • Adds the upgrade command YML file to the PyPi Release #2179 swcurran
  • Test and Documentation
    • 3.7 and 3.10 unittests fix #2187 Jsyro
    • Doc update and some test scripts #2189 ianco
    • Create UnitTests.md #2183 swcurran
    • Add link to recorded session about the ACA-Py Integration tests #2184 swcurran
  • Release management pull requests
    • 0.8.1 #2207 swcurran
    • 0.8.1-rc2 #2198 swcurran
    • 0.8.1-rc1 #2194 swcurran
    • 0.8.1-rc0 #2190 swcurran
"},{"location":"CHANGELOG/#080","title":"0.8.0","text":""},{"location":"CHANGELOG/#march-14-2023","title":"March 14, 2023","text":"

0.8.0 is a breaking change that contains all updates since release 0.7.5. It extends the previously tagged 1.0.0-rc1 release because it is not clear when the 1.0.0 release will be finalized. Many of the PRs in this release were previously included in the 1.0.0-rc1 release. The categorized list of PRs separates those that are new from those in the 1.0.0-rc1 release candidate.

There are not a lot of new Aries Framework features in this release, as the focus has been on cleanup and optimization. The biggest addition is the inclusion in ACA-Py of a universal resolver interface, allowing an instance to have both local resolvers for some DID Methods and a call out to an external universal resolver for other DID Methods. Another significant new capability is full support for Hyperledger Indy transaction endorsement for Authors and Endorsers. A new repo, aries-endorser-service, has been created as a pre-configured instance of ACA-Py for use as an Endorser service.

A recently completed feature that is outside of ACA-Py is a script to migrate existing ACA-Py storage from Indy SDK format to Aries Askar format. This enables existing deployments to switch to using the newer Aries Askar components. For details see the converter in the aries-acapy-tools repository.

"},{"location":"CHANGELOG/#container-publishing-updated","title":"Container Publishing Updated","text":"

With this release, a new automated process publishes container images in the Hyperledger container image repository. New images for the release are automatically published by the GitHub Actions workflows: publish.yml and publish-indy.yml. The workflows are triggered when a release is tagged, so no manual action is needed. The images are published in the Hyperledger Package Repository under aries-cloudagent-python, and a link to the packages is added to the repository's main page (under "Packages"). Additional information about the container image publication process can be found in the document Container Images and Github Actions.

The ACA-Py container images are based on Python 3.6 and 3.9 slim-bullseye images, and are designed to support linux/386 (x86), linux/amd64 (x64), and linux/arm64. However, for this release, the publication of multi-architecture containers is disabled. We are working to enable that through the updating of some dependencies that lack that capability. There are two flavors of image built for each Python version. One contains only the Indy/Aries shared libraries (Aries Askar, Indy VDR, and Indy Shared RS), supporting only the use of --wallet-type askar. The other (labelled indy) contains the Indy/Aries shared libraries and the Indy SDK (considered deprecated). For new deployments, we recommend using the Python 3.9 Shared Library images. For existing deployments, we recommend migrating to those images.

Those currently using the container images published by BC Gov on Docker Hub should change to use those published to the Hyperledger Package Repository under aries-cloudagent-python.

"},{"location":"CHANGELOG/#breaking-changes-and-upgrades","title":"Breaking Changes and Upgrades","text":""},{"location":"CHANGELOG/#pr-2034-implicit-connections","title":"PR #2034 -- Implicit connections","text":"

The break impacts existing deployments that support implicit connections, that is, connections initiated by another agent using a public DID for this instance instead of an explicit invitation. Such deployments need to add the configuration parameter --requests-through-public-did to continue to support that feature. The use case is that an ACA-Py instance publishes a public DID on a ledger with a DIDComm service in the DIDDoc. Other agents resolve that DID and attempt to establish a connection with the ACA-Py instance using the service endpoint. This is called an "implicit" connection in RFC 0023 DID Exchange.
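
For example (a minimal sketch; all other startup settings are elided), such a deployment would start ACA-Py with:

aca-py start <other startup settings> --requests-through-public-did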

"},{"location":"CHANGELOG/#pr-1913-unrevealed-attributes-in-presentations","title":"PR #1913 -- Unrevealed attributes in presentations","text":"

Updates the handling of "unrevealed attributes" during verification of AnonCreds presentations, allowing them to be used in a presentation and including additional data in the verification result that can be checked for any unrevealed attributes. As few implementations of Aries wallets support unrevealed attributes in an AnonCreds presentation, this is unlikely to impact any deployments.

"},{"location":"CHANGELOG/#pr-2145-update-webhook-message-to-terse-form-by-default-added-startup-flag-debug-webhooks-for-full-form","title":"PR #2145 - Update webhook message to terse form by default, added startup flag --debug-webhooks for full form","text":"

The default behavior in ACA-Py has been to keep the full text of all messages in the protocol state object, and to include the full protocol state object in the webhooks sent to the controller. When the messages include a very large object that is repeated in each message of the protocol, the webhook may become too big to be passed via HTTP. For example, issuing a credential with a photo as one of the claims may result in several copies of the photo in the protocol state object and hence, very large webhooks. This change reduces the size of the webhook message by eliminating redundant data in the protocol state of the "Issue Credential" message as the default, and adds a new parameter to use the old behavior.
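
Deployments that need the old full-form webhooks can opt back in at startup (a minimal sketch; all other startup settings are elided):

aca-py start <other startup settings> --debug-webhooks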

"},{"location":"CHANGELOG/#upgrade-pr-2116-upgrade-fix-multi-use-invitation-performance","title":"UPGRADE PR #2116 - UPGRADE: Fix multi-use invitation performance","text":"

The way multiuse invitations were handled in previous versions of ACA-Py caused performance to degrade over time. An update was made to add state into the tag names, eliminating the need to scan the tags when querying storage for the invitation.

If you are using multiuse invitations in your existing (pre-0.8.0) deployment of ACA-Py, you can run an upgrade to apply this change. To run the upgrade from previous versions, use the following command with the 0.8.0 version of ACA-Py, adding your wallet settings:

aca-py upgrade <other wallet config settings> --from-version=v0.7.5 --upgrade-config-path ./upgrade.yml

"},{"location":"CHANGELOG/#categorized-list-of-pull-requests_3","title":"Categorized List of Pull Requests","text":"
  • Verifiable credential, presentation and revocation handling updates

    • BREAKING: Update webhook message to terse form [default], added startup flag --debug-webhooks for full form #2145 by victorlee0505
    • Add startup flag --light-weight-webhook to trim down outbound webhook payload #1941 victorlee0505
    • feat: add verification method issue-credentials-2.0/send endpoint #2135 chumbert
    • Respect auto-verify-presentation flag in present proof v1 and v2 #2097 dbluhm
    • Feature: enabled handling VPs (request, creation, verification) with different VCs #1956 (teanas)
    • fix: update issue-credential endpoint summaries #1997 (PeterStrob)
    • fix claim format designation in presentation submission #2013 (rmnre)
    • #2041 - Issue JSON-LD has invalid Admin API documentation #2046 (jfblier-amplitude)
    • Previously flagged in release 1.0.0-rc1
    • Refactor ledger correction code and insert into revocation error handling #1892 (ianco)
    • Indy ledger fixes and cleanups #1870 (andrewwhitehead)
    • Refactoring of revocation registry creation #1813 (andrewwhitehead)
    • Fix: the type of tails file path to string. #1925 (baegjae)
    • Pre-populate revoc_reg_id on IssuerRevRegRecord #1924 (andrewwhitehead)
    • Leave credentialStatus element in the LD credential #1921 (tsabolov)
    • BREAKING: Remove aca-py check for unrevealed revealed attrs on proof validation #1913 (ianco)
    • Send webhooks upon record/credential deletion #1906 (frostyfrog)
  • Out of Band (OOB) and DID Exchange / Connection Handling / Mediator

    • UPGRADE: Fix multi-use invitation performance #2116 reflectivedevelopment
    • fix: public did mediator routing keys as did keys #1977 (dbluhm)
    • Fix for mediator load testing race condition when scaling horizontally #2009 (ianco)
    • BREAKING: Allow multi-use public invites and public invites with metadata #2034 (mepeltier)
    • Do not reject OOB invitation with unknown handshake protocol(s) #2060 (andrewwhitehead)
    • fix: fix connection timing bug #2099 (reflectivedevelopment)
    • Previously flagged in release 1.0.0-rc1
    • Fix: --mediator-invitation with OOB invitation + cleanup #1970 (shaangill025)
    • include image_url in oob invitation #1966 (Zzocker)
    • feat: OOB v1.1 support #1962 (shaangill025)
    • Fix: OOB - Handling of minor versions #1940 (shaangill025)
    • fix: failed connectionless proof request on some case #1933 (kukgini)
    • fix: propagate endpoint from mediation record #1922 (cjhowland)
    • Feat/public did endpoints for agents behind mediators #1899 (cjhowland)
  • DID Registration and Resolution related updates

    • feat: allow marking non-SOV DIDs as public #2144 chumbert
    • fix: askar exception message always displaying null DID #2155 chumbert
    • feat: enable creation of DIDs for all registered methods #2067 (chumbert)
    • fix: create local DID return schema #2086 (chumbert)
    • feat: universal resolver - configurable authentication #2095 (chumbert)
    • Previously flagged in release 1.0.0-rc1
    • feat: add universal resolver #1866 (dbluhm)
    • fix: resolve dids following new endpoint rules #1863 (dbluhm)
    • fix: didx request cannot be accepted #1881 (rmnre)
    • did method & key type registry #1986 (burdettadam)
    • Fix/endpoint attrib structure #1934 (cjhowland)
    • Simple did registry #1920 (burdettadam)
    • Use did:key for recipient keys #1886 (frostyfrog)
  • Hyperledger Indy Endorser/Author Transaction Handling

    • Update some of the demo Readme and Endorser instructions #2122 swcurran
    • Special handling for the write ledger #2030 (ianco)
    • Previously flagged in release 1.0.0-rc1
    • Fix/txn job setting #1994 (ianco)
    • chore: fix ACAPY_PROMOTE-AUTHOR-DID flag #1978 (morrieinmaas)
    • Endorser write DID transaction #1938 (ianco)
    • Endorser doc updates and some bug fixes #1926 (ianco)
  • Admin API Additions

    • fix: response type on delete-tails-files endpoint #2133 chumbert
    • OpenAPI validation fixes #2127 loneil
    • Delete tail files #2103 ramreddychalla94
  • Startup Command Line / Environment / YAML Parameter Updates

    • Update webhook message to terse form [default], added startup flag --debug-webhooks for full form #2145 by victorlee0505
    • Add startup flag --light-weight-webhook to trim down outbound webhook payload #1941 victorlee0505
    • Add missing --mediator-connections-invite cmd arg info to docs #2051 (matrixik)
    • Issue #2068 boolean flag change to support HEAD requests to default route #2077 (johnekent)
    • Previously flagged in release 1.0.0-rc1
    • Add seed command line parameter but use only if also an \"allow insecure seed\" parameter is set #1714 (DaevMithran)
  • Internal Aries framework data handling updates

    • fix: resolver api schema inconsistency #2112 (TimoGlastra)
    • fix: return if return route but no response #1853 (TimoGlastra)
    • Multi-ledger/Multi-tenant issues #2022 (ianco)
    • fix: Correct typo in model -- required spelled incorrectly #2031 (swcurran)
    • Code formatting #2053 (ianco)
    • Improved validation of record state attributes #2071 (rmnre)
    • Previously flagged in release 1.0.0-rc1
    • fix: update RouteManager methods use to pass profile as parameter #1902 (chumbert)
    • Allow fully qualified class names for profile managers #1880 (chumbert)
    • fix: unable to use askar with in memory db #1878 (dbluhm)
    • Enable manually triggering keylist updates during connection #1851 (dbluhm)
    • feat: make base wallet route access configurable #1836 (dbluhm)
    • feat: event and webhook on keylist update stored #1769 (dbluhm)
    • fix: Safely shutdown when root_profile uninitialized #1960 (frostyfrog)
    • feat: include connection ids in keylist update webhook #1914 (dbluhm)
    • fix: incorrect response schema for discover features #1912 (dbluhm)
    • Fix: SchemasInputDescriptorFilter: broken deserialization renders generated clients unusable #1894 (rmnre)
    • fix: schema class can set Meta.unknown #1885 (dbluhm)
  • Unit, Integration, and Aries Agent Test Harness Test updates

    • Additional integration tests for revocation scenarios #2055 (ianco)
    • Previously flagged in release 1.0.0-rc1
    • Fixes a few AATH failures #1897 (ianco)
    • fix: warnings in tests from IndySdkProfile #1865 (dbluhm)
    • Unit test fixes for python 3.9 #1858 (andrewwhitehead)
    • Update pip-audit.yml #1945 (ryjones)
    • Update pip-audit.yml #1944 (ryjones)
  • Dependency, Python version, GitHub Actions and Container Image Changes

    • Remove CircleCI Status since we aren't using CircleCI anymore #2163 swcurran
    • Update ACA-Py docker files to produce OpenShift compatible images #2130 WadeBarnes
    • Temporarily disable multi-architecture image builds #2125 WadeBarnes
    • Fix ACA-py image builds #2123 WadeBarnes
    • Fix publish workflows #2117 WadeBarnes
    • fix: indy dependency version format #2054 (chumbert)
    • ci: add gha for pr-tests #2058 (dbluhm)
    • ci: test additional versions of python nightly #2059 (dbluhm)
    • Update github actions dependencies (for node16 support) #2066 (andrewwhitehead)
    • Docker images and GHA for publishing images #2076 (dbluhm)
    • Update dockerfiles to use python 3.9 #2109 (ianco)
    • Updating base images from slim-buster to slim-bullseye #2105 (pradeepp88)
    • Previously flagged in release 1.0.0-rc1
    • feat: update pynacl version from 1.4.0 to 1.5.0 #1981 (morrieinmaas)
    • Fix: web.py dependency - integration tests & demos #1973 (shaangill025)
    • chore: update pydid #1915 (dbluhm)
  • Demo and Documentation Updates

    • [fix] Removes extra comma that prevents swagger from accepting the presentation request #2149 swcurran
    • Initial plugin docs #2138 ianco
    • Acme workshop #2137 ianco
    • Fix: Performance Demo [no --revocation] #2151 shaangill025
    • Fix typos in alice-local.sh & faber-local.sh #2010 (naonishijima)
    • Added a bit about manually creating a revoc reg tails file #2012 (ianco)
    • Add ability to set docker container name #2024 (matrixik)
    • Doc updates for json demo #2026 (ianco)
    • Multitenancy demo (docker-compose with postgres and ngrok) #2089 (ianco)
    • Allow using YAML configuration file with run_docker #2091 (matrixik)
    • Previously flagged in release 1.0.0-rc1
    • Fixes to acme exercise code #1990 (ianco)
    • Fixed bug in run_demo script #1982 (pasquale95)
    • Transaction Author with Endorser demo #1975 (ianco)
    • Redis Plugins [redis_cache & redis_queue] related updates #1937 (shaangill025)
  • Release management pull requests

    • 0.8.0 release #2169 (swcurran)
    • 0.8.0-rc0 release updates #2115 (swcurran)
    • Previously flagged in release 1.0.0-rc1
    • Release 1.0.0-rc0 #1904 (swcurran)
    • Add 0.7.5 patch Changelog entry to main branch Changelog #1996 (swcurran)
    • Release 1.0.0-rc1 #2005 (swcurran)
"},{"location":"CHANGELOG/#075","title":"0.7.5","text":""},{"location":"CHANGELOG/#october-26-2022","title":"October 26, 2022","text":"

0.7.5 is a patch release primarily to add PR #1881, fixing "DID Exchange in ACA-Py 0.7.4 with explicit invitations and without auto-accept broken". A couple of other PRs were added to the release, as listed below and in Milestone 0.7.5.

"},{"location":"CHANGELOG/#list-of-pull-requests","title":"List of Pull Requests","text":"
  • Changelog and version updates for version 0.7.5-rc1 #1985 (swcurran)
  • Endorser doc updates and some bug fixes #1926 (ianco)
  • Fix: web.py dependency - integration tests & demos #1973 (shaangill025)
  • Endorser write DID transaction #1938 (ianco)
  • fix: didx request cannot be accepted #1881 (rmnre)
  • Fix: OOB - Handling of minor versions #1940 (shaangill025)
  • fix: Safely shutdown when root_profile uninitialized #1960 (frostyfrog)
  • feat: OOB v1.1 support #1962 (shaangill025)
  • 0.7.5 Cherry Picks #1967 (frostyfrog)
  • Changelog and version updates for version 0.7.5-rc0 #1969 (swcurran)
  • Final 0.7.5 changes #1991 (swcurran)
"},{"location":"CHANGELOG/#074","title":"0.7.4","text":""},{"location":"CHANGELOG/#june-30-2022","title":"June 30, 2022","text":"

Existing multitenant JWTs are invalidated when a new JWT is generated: If you have a pre-existing implementation with existing Admin API authorization JWTs, invoking the endpoint to get a JWT now invalidates the existing JWT. Previously, an identical JWT would be created. Please see this comment on PR #1725 for more details.
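
For illustration, a sketch of the kind of call affected (the admin URL, wallet ID, and wallet key are placeholders; the wallet_key body field applies to unmanaged wallets). As of this release, each call to the token endpoint issues a fresh JWT and invalidates the one issued before:

curl -X POST http://localhost:8031/multitenancy/wallet/<wallet_id>/token \
  -H "Content-Type: application/json" \
  -d '{"wallet_key": "<wallet key>"}'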

0.7.4 is a significant release focused on stability and production deployments. As the \"patch\" release number indicates, there were no breaking changes in the Admin API, but a huge volume of updates and improvements. Highlights of this release include:

  • A major performance and stability improvement resulting from the now recommended use of Aries Askar instead of the Indy-SDK.
  • There are significant improvements and tools for dealing with revocation-related issues.
  • A lot of work has been on the handling of Hyperledger Indy transaction endorsements.
  • ACA-Py now has a pluggable persistent queues mechanism in place, with Redis and Kafka support available (albeit with work still to come on documentation).

In addition, there are a significant number of general enhancements, bug fixes, documentation updates and code management improvements.

This release is a reflection of the many groups stressing ACA-Py in production environments, reporting issues, and contributing the resulting solutions. We also have a very large number of contributors to ACA-Py, with this release including PRs from 22 different individuals. A big thank you to all of those using ACA-Py, raising issues, and providing solutions.

"},{"location":"CHANGELOG/#major-enhancements","title":"Major Enhancements","text":"

A lot of work has been put into this release related to performance and load testing, with significant updates being made to the key \"shared component\" ACA-Py dependencies (Aries Askar, Indy VDR, and Indy Shared RS, including CredX). We now recommend using those components (by using --wallet-type askar in the ACA-Py startup parameters) for new ACA-Py deployments. A wallet migration tool from indy-sdk storage to Askar storage is still needed before existing deployments can be migrated to Askar. A big thanks to those creating/reporting on stress test scenarios, and especially the team at LISSI for creating the aries-cloudagent-loadgenerator to make load testing so easy! And of course to the core ACA-Py team for addressing the findings.

The largest enhancement is in the area of endorsing Hyperledger Indy ledger transactions, enabling an instance of ACA-Py to act as an Endorser for Indy authors needing endorsements to write objects to an Indy ledger. We're working on an Aries Endorser Service based on the new capabilities in ACA-Py: an Endorser service that can be easily operated by an organization, ideally with a controller starter kit supporting a basic human and automated approvals business workflow. Contributions welcome!

A focus towards the end of the 0.7.4 development and release cycle was on the handling of AnonCreds revocation in ACA-Py. Most importantly, a production issue was uncovered whereby an ACA-Py issuer's local Revocation Registry data could get out of sync with what was published on an Indy ledger, resulting in an inability to publish new RevRegEntry transactions -- making new revocations impossible. As a result, we have added some new endpoints to enable an update to the RevReg storage such that RevRegEntry transactions can again be published to the ledger. Other changes were added related to revocation in general, and to the handling of tails files in particular.

The team has worked a lot on evolving the persistent queue (PQ) approach available in ACA-Py. We have landed on a design for the queues for inbound and outbound messages using a default in-memory implementation, and the ability to replace the default method with implementations created via an ACA-Py plugin. There are two concrete, out-of-the-box external persistent queuing solutions available for Redis and Kafka. Those ACA-Py persistent queue implementation repositories will soon be migrated to the Aries project within the Hyperledger Foundation's GitHub organization. Anyone else can implement their own queuing plugin as long as it uses the same interface.
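To illustrate the shape of that design (and only the shape: the class and method names below are stand-ins defined within the sketch itself, not the actual ACA-Py plugin interface, for which the Redis and Kafka plugin repositories are the reference), a queue implementation reduces to an async put/get pair that a plugin can back with any store:

import abc
import asyncio

class BaseMessageQueue(abc.ABC):
    """Hypothetical stand-in for the pluggable queue interface."""

    @abc.abstractmethod
    async def put(self, payload: bytes) -> None:
        """Accept a message for later delivery."""

    @abc.abstractmethod
    async def get(self) -> bytes:
        """Return the next message to process."""

class InMemoryQueue(BaseMessageQueue):
    """Mirrors the default behaviour: messages live only in process memory."""

    def __init__(self) -> None:
        self._queue: "asyncio.Queue[bytes]" = asyncio.Queue()

    async def put(self, payload: bytes) -> None:
        await self._queue.put(payload)

    async def get(self) -> bytes:
        return await self._queue.get()

A persistent plugin (Redis, Kafka, or anything else) supplies an implementation that writes the payload to durable storage instead of an in-memory queue.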

Several new ways to control ACA-Py configurations were added, including new startup parameters, Admin API parameters to control instances of protocols, and additional web hook notifications.

A number of fixes were made to the Credential Exchange protocols, both for V1 and V2, and for both AnonCreds and W3C format VCs. Nothing new was added, and there were no changes to the APIs.

As well, there were a number of internal fixes, dependency updates, documentation and demo changes, and developer tools and release management updates. All the usual stuff needed for a healthy, growing codebase.

"},{"location":"CHANGELOG/#categorized-list-of-pull-requests_4","title":"Categorized List of Pull Requests","text":"
  • Hyperledger Indy Endorser related updates:

    • Fix order of operations connecting faber to endorser #1716 (ianco)
    • Endorser support for updating DID endpoints on ledger #1696 (frostyfrog)
    • Add \"sent\" key to both Schema and Cred Defs when using Endorsers #1663 (frostyfrog)
    • Add cred_def_id to metadata when using an Endorser #1655 (frostyfrog)
    • Update Endorser documentation #1646 (chumbert)
    • Auto-promote author did to public after endorsing #1607 (ianco)
    • DID updates for endorser #1601 (ianco)
    • Qualify did exch connection lookup by role #1670 (ianco)
    • Use provided connection_id if provided #1726 (ianco)
  • Additions to the startup parameters, Admin API and Web Hooks

    • Improve typing of settings and add plugin settings object #1833 (dbluhm)
    • feat: accept taa using startup parameter --accept-taa #1643 (TimoGlastra)
    • Add auto_verify flag in present-proof protocol #1702 (DaevMithran)
    • feat: query connections by their_public_did #1637 (TimoGlastra)
    • feat: enable webhook events for mediation records #1614 (TimoGlastra)
    • Feature/undelivered events #1694 (mepeltier)
    • Allow use of SEED when creating local wallet DID Issue-1682 #1705 (DaevMithran)
    • Feature: Add the ability to deny specific plugins from loading #1737 (frostyfrog)
    • feat: Add filter param to connection list for invitations #1797 (frostyfrog)
    • Fix missing webhook handler #1816 (ianco)
  • Persistent Queues

    • Redis PQ Cleanup in preparation for enabling the use of plugin PQ implementations [Issue#1659] #1659 (shaangill025)
  • Credential Revocation and Tails File Handling

    • Fix handling of non-revocable credential when timestamp is specified \\(askar/credx\\) #1847 (andrewwhitehead)
    • Additional endpoints to get revocation details and fix \"published\" status #1783 (ianco)
    • Fix IssuerCredRevRecord state update on revocation publish #1827 (andrewwhitehead)
    • Fix put_file when the server returns a redirect #1808 (andrewwhitehead)
    • Adjust revocation registry update procedure to shorten transactions #1804 (andrewwhitehead)
    • fix: Resolve Revocation Notification environment variable name collision #1751 (frostyfrog)
    • fix: always notify if revocation notification record exists #1665 (TimoGlastra)
    • Fix for AnonCreds non-revoc proof with no timestamp #1628 (ianco)
    • Fixes for v7.3.0 - Issue #1597 #1711 (shaangill025)
    • Fixes Issue 1 from #1597: Tails file upload fails when a credDef is created and multi ledger support is enabled
    • Fix tails server upload multi-ledger mode #1785 (ianco)
    • Feat/revocation notification v2 #1734 (frostyfrog)
  • Issue Credential, Present Proof updates/fixes

    • Fix: Present Proof v2 - check_proof_vs_proposal update to support proof request with restrictions #1820 (shaangill025)
    • Fix: present-proof v1 send-proposal flow #1811 (shaangill025)
    • Prover - verification outcome from presentation ack message #1757 (shaangill025)
    • feat: support connectionless exchange #1710 (TimoGlastra)
    • Fix: DIF proof proposal when creating bound presentation request [Issue#1687] #1690 (shaangill025)
    • Fix DIF PresExch and OOB request_attach delete unused connection #1676 (shaangill025)
    • Fix DIFPresFormatHandler returning invalid V20PresExRecord on presentation verification #1645 (rmnre)
    • Update aries-askar patch version to at least 0.2.4 as 0.2.3 does not include backward compatibility #1603 (acuderman)
    • Fixes for credential details in issue-credential webhook responses #1668 (andrewwhitehead)
    • Fix: present-proof v2 send-proposal issue#1474 #1667 (shaangill025)
    • Fixes Issue 3b from #1597: V2 Credential exchange ignores the auto-respond-credential-request
    • Revert change to send_credential_ack return value #1660 (andrewwhitehead)
    • Fix usage of send_credential_ack #1653 (andrewwhitehead)
    • Replace blank credential/presentation exchange states with abandoned state #1605 (andrewwhitehead)
    • Fixes Issue 4 from #1597: Wallet type askar has issues when receiving V1 credentials
    • Fixes and cleanups for issue-credential 1.0 #1619 (andrewwhitehead)
    • Fix: Duplicated schema and cred_def - Askar and Postgres #1800 (shaangill025)
  • Mediator updates and fixes

    • feat: allow querying default mediator from base wallet #1729 (dbluhm)
    • Added async with for mediator record delete #1749 (dejsenlitro)
  • Multitenancy updates and fixes

    • feat: create new JWT tokens and invalidate older for multitenancy #1725 (TimoGlastra)
    • Multi-tenancy stale wallet clean up #1692 (dbluhm)
  • Dependencies and internal code updates/fixes

    • Update pyjwt to 2.4 #1829 (andrewwhitehead)
    • Fix external Outbound Transport loading code #1812 (frostyfrog)
    • Fix iteration over key list, update Askar to 0.2.5 #1740 (andrewwhitehead)
    • Fix: update IndyLedgerRequestsExecutor logic - multitenancy and basic base wallet type #1700 (shaangill025)
    • Move database operations inside the session context #1633 (acuderman)
    • Upgrade ConfigArgParse to version 1.5.3 #1627 (WadeBarnes)
    • Update aiohttp dependency #1606 (acuderman)
    • did-exchange implicit request pthid update & invitation key verification #1599 (shaangill025)
    • Fix auto connection response not being properly mediated #1638 (dbluhm)
    • platform target in run tests. #1697 (burdettadam)
    • Add an integration test for mixed proof with a revocable cred and a n\u2026 #1672 (ianco)
    • Fix: Inbound Transport is_external attribute #1802 (shaangill025)
    • fix: add a close statement to ensure session is closed on error #1777 (reflectivedevelopment)
    • Adds transport_id variable assignment back to outbound enqueue method #1776 (amanji)
    • Replace async workaround within document loader #1774 (frostyfrog)
  • Documentation and Demo Updates

    • Use default wallet type askar for alice/faber demo and bdd tests #1761 (ianco)
    • Update the Supported RFCs document for 0.7.4 release #1846 (swcurran)
    • Fix a typo in DevReadMe.md #1844 (feknall)
    • Add troubleshooting document, include initial examples - ledger connection, out-of-sync RevReg #1818 (swcurran)
    • Update POST /present-proof/send-request to POST /present-proof-2.0/send-request #1824 (lineko)
    • Fetch from --genesis-url likely to fail in composed container #1746 (tdiesler)
    • Fixes logic for web hook formatter in Faber demo #1739 (amanji)
    • Multitenancy Docs Update #1706 (MonolithicMonk)
    • #1674 Add basic DOCKER_ENV logging for run_demo #1675 (tdiesler)
    • Performance demo updates #1647 (ianco)
    • docs: supported features attribution #1654 (TimoGlastra)
    • Documentation on existing language wrappers for aca-py #1738 (etschelp)
    • Document impact of multi-ledger on TAA acceptance #1778 (ianco)
  • Code management and contributor/developer support updates

    • Set prefix for integration test demo agents; some code cleanup #1840 (andrewwhitehead)
    • Pin markupsafe at version 2.0.1 #1642 (andrewwhitehead)
    • style: format with stable black release #1615 (TimoGlastra)
    • Remove references to play with von #1688 (ianco)
    • Add pre-commit as optional developer tool #1671 (dbluhm)
    • run_docker start - pass environment variables #1715 (shaangill025)
    • Use local deps only #1834 (ryjones)
    • Enable pip-audit #1831 (ryjones)
    • Only run pip-audit on main repo #1845 (ryjones)
  • Release management pull requests

    • 0.7.4 Release Changelog and version update #1849 (swcurran)
    • 0.7.4-rc5 changelog, version and ReadTheDocs updates #1838 (swcurran)
    • Update changelog and version for 0.7.4-rc4 #1830 (swcurran)
    • Changelog, version and ReadTheDocs updates for 0.7.4-rc3 release #1817 (swcurran)
    • 0.7.4-rc2 update #1771 (swcurran)
    • Some ReadTheDocs File updates #1770 (swcurran)
    • 0.7.4-RC1 Changelog intro paragraph - fix copy/paste error #1753 (swcurran)
    • Fixing the intro paragraph and heading in the changelog of this 0.7.4RC1 #1752 (swcurran)
    • Updates to Changelog for 0.7.4. RC1 release #1747 (swcurran)
    • Prep for adding the 0.7.4-rc0 tag #1722 (swcurran)
    • Added missed new module -- upgrade -- to the RTD generated docs #1593 (swcurran)
    • Doh....update the date in the Changelog for 0.7.3 #1592 (swcurran)
"},{"location":"CHANGELOG/#073","title":"0.7.3","text":""},{"location":"CHANGELOG/#january-10-2022","title":"January 10, 2022","text":"

This release includes some new AIP 2.0 features out (Revocation Notification and Discover Features 2.0), a major new feature for those using Indy ledger (multi-ledger support), a new \"version upgrade\" process that automates updating data in secure storage required after a new release, and a fix for a critical bug in some mediator scenarios. The release also includes several new pieces of documentation (upgrade processing, storage database information and logging) and some other documentation updates that make the ACA-Py Read The Docs site useful again. And of course, some recent bug fixes and cleanups are included.

There is a BREAKING CHANGE for those deploying ACA-Py with an external outbound queue implementation (see PR #1501). As far as we know, there is only one organization that has such an implementation and they were involved in the creation of this PR, so we are not making this release a minor or major update. However, anyone else using an external queue should be aware of the impact of this PR that is included in the release.

For those that have an existing deployment of ACA-Py with long-lasting connection records, an upgrade is needed to use RFC 434 Out of Band and the \"reuse connection\" capability as the invitee. In PR #1453 (details below) a performance improvement was made when finding a connection for reuse. The new approach (adding a tag to the connection to enable searching) applies only to connections made using this ACA-Py release and later; connections made using earlier releases of ACA-Py will not, as-is, be found as reuse candidates. A new \"Upgrade deployment\" capability (#1557, described below) must be executed to update your deployment to add tags for all existing connections.

The Supported RFCs document has been updated to reflect the addition of the AIP 2.0 RFCs for which support was added.

The following is an annotated list of PRs in the release, including a link to each PR.

  • AIP 2.0 Features
    • Discover Features Protocol: v1_0 refactoring and v2_0 implementation #1500
    • Updates the Discover Features 1.0 (AIP 1.0) implementation and implements the new 2.0 version. In doing so, adds generalized support for goal codes to ACA-Py.
    • fix DiscoveryExchangeRecord RECORD_TOPIC typo fix #1566
    • Implement Revocation Notification v1.0 #1464
    • Fix integration tests (revocation notifications) #1528
    • Add Revocation notification support to alice/faber #1527
  • Other New Features
    • Multiple Indy Ledger support and State Proof verification #1425
    • Remove required dependencies from multi-ledger code that was requiring the import of Aries Askar even when not being used #1550
    • Fixed IndyDID resolver bug after Tag 0.7.3rc0 created #1569
    • Typo vdr service name #1563
    • Fixes and cleanup for multiple ledger support with Askar #1583
    • Outbound Queue - more usability improvements #1501
    • Display QR code when generating/displaying invites on startup #1526
    • Enable WS Pings for WS Inbound Transport #1530
    • Faster detection of lost Web Socket connections; implementation verified with an existing mediator.
    • Performance Improvement when using connection reuse in OOB and there are many DID connections. ConnRecord tags - their_public_did and invitation_msg_id #1543
    • In previous releases, \"their_public_did\" was not a tag, so to determine whether a connection could be reused, all connections were retrieved from the database and checked for a matching public DID. Now, connections created after deploying this release will have a tag on the connection such that an indexed query can be used. See the \"Breaking Change\" note above and the \"Upgrade Deployment\" feature below.
    • Follow up to #1543 - Adding invitation_msg_id and their_public_did back to record_value #1553
    • A generic \"Upgrade Deployment\" capability was added to ACA-Py that operates like a database migration capability in relational databases. When executed (via a command line option), a current version of the deployment is detected and if any storage updates need be applied to be consistent with the new version, they are, and the stored \"current version\"is updated to the new version. An instance of this capability can be used to address the new feature #1543 documented above. #1557
    • Adds a \"credential_revoked\" state to the Issue Credential protocol state object. When the protocol state object is retained past the completion of the protocol, it is updated when the credential is revoked. #1545
    • Updated a missing dependency that recently caused an error when using the --version command line option #1589
  • Critical Fixes
    • Fix connection record response for mobile #1469
  • Documentation Additions and Updates
    • added documentation for wallet storage databases #1523
    • added logging documentation #1519
    • Fix warnings when generating ReadTheDocs #1509
    • Remove Streetcred references #1504
    • Add RTD configs to get generator working #1496
    • The Alice/Faber demo was updated to allow connections based on Public DIDs to be established, including reusing a connection if there is an existing connection. #1574
  • Other Fixes
    • Connection Handling / Out of Band Invitations Fixes
    • OOB: Fixes issues with multiple public explicit invitation and unused 0160 connection #1525
    • OOB added webhooks to notify the controller when a connection reuse message is used in response to an invitation #1581
    • Delete unused ConnRecord generated - OOB invitation (use_existing_connection) #1521
    • When an invitee responded with a \"reuse\" message, the connection record associated with the invitation was not being deleted. Now it is.
    • Await asyncio.sleeps to cleanup warnings in Python 3.8/3.9 #1558
    • Add alias field to didexchange invitation UI #1561
    • fix: use invitation key for connection query #1570
    • Fix the inconsistency of invitation_msg_id between invitation and response #1564
    • chore: update pydid to ^0.3.3 #1562
    • DIF Presentation Exchange Cleanups
    • Fix DIF Presentation Request Input Validation #1517
    • Some validation checking of a DIF presentation request to prevent uncaught errors later in the process.
    • DIF PresExch - ProblemReport and \"is_holder\" #1493
    • Cleanups related to when \"is_holder\" is or is not required. Related to Issue #1486
    • Indy SDK Related Fixes
    • Fix AttributeError when writing an Indy Cred Def record #1516
    • Fix TypeError when calling credential_definitions_fix_cred_def_wallet\u2026 #1515
    • Fix TypeError when writing a Schema record #1494
    • Fix validation for range checks #1538
    • Back out some of the validation checking for proof requests with predicates as they were preventing valid proof requests from being processed.
    • Aries Askar Related Fixes:
    • Fix bug when getting credentials on askar-profile #1510
    • Fix error when removing a wallet on askar-profile #1518
    • Fix error when connection request is received (askar, public invitation) #1508
    • Fix error when an error occurs while issuing a revocable credential #1591
    • Docker fixes:
    • Update docker scripts to use new & improved docker IP detection #1565
    • Release Administration:
    • Changelog and RTD updates for the pending 0.7.3 release #1553
"},{"location":"CHANGELOG/#072","title":"0.7.2","text":""},{"location":"CHANGELOG/#november-15-2021","title":"November 15, 2021","text":"

A mostly maintenance release with some key updates and cleanups based on community deployments and discovery. With usage in the field increasing, we're cleaning up edge cases and issues related to volume deployments.

The most significant new feature for users of Indy ledgers is a simplified approach for transaction authors getting their transactions signed by an endorser. Transaction author controllers now do almost nothing other than configuring their instance to use an Endorser, and ACA-Py takes care of the rest. Documentation of that feature is here.
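As a rough sketch of how little the author's controller now does, the setup can be as simple as two Admin API calls against the author-to-endorser connection. The URL, connection ID, and DID below are placeholder assumptions, and the endpoint paths should be confirmed against the Swagger UI of your deployment:

import requests

ADMIN_URL = "http://localhost:8021"  # assumption: the author agent's admin API
CONN_ID = "11111111-1111-1111-1111-111111111111"  # assumption: connection to the endorser
ENDORSER_DID = "V4SGRU86Z58d6TV7PBUe6f"  # assumption: the endorser's public DID

# Declare this agent's role on the connection as the transaction author
requests.post(
    f"{ADMIN_URL}/transactions/{CONN_ID}/set-endorser-role",
    params={"transaction_my_job": "TRANSACTION_AUTHOR"},
).raise_for_status()

# Record which DID will endorse this author's ledger transactions
requests.post(
    f"{ADMIN_URL}/transactions/{CONN_ID}/set-endorser-info",
    params={"endorser_did": ENDORSER_DID},
).raise_for_status()

With that in place, and the endorser-related startup parameters enabled, ACA-Py handles the request/endorse/write round trip itself.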

  • Improve cloud native deployments/scaling
    • unprotect liveness and readiness endpoints #1416
    • Open askar sessions only on demand - Connections #1424
    • Fixed potential deadlocks by opening sessions only on demand (Wallet endpoints) #1472
    • Fixed potential deadlocks by opening sessions only on demand #1439
    • Make mediation invitation parameter idempotent #1413
  • Indy Transaction Endorser Support Added
    • Endorser protocol configuration, automation and demo integration #1422
    • Auto connect from author to endorser on startup #1461
    • Startup and shutdown events (prep for endorser updates) #1459
    • Endorser protocol askar fixes #1450
    • Endorser protocol updates - refactor to use event bus #1448
  • Indy verifiable credential/presentation fixes and updates
    • Update credential and proof mappings to allow negative encoded values #1475
    • Add credential validation to offer issuance step #1446
    • Fix error removing proof req entries by timestamp #1465
    • Fix issue with cred limit on presentation endpoint #1437
    • Add support for custom offers from the proposal #1426
    • Make requested attributes and predicates required on indy proof request #1411
    • Remove connection check on proof verify #1383
  • General cleanups and improvements to existing features
    • Fixes failing integration test -- JSON-LD context URL not loading because of external issue #1491
    • Update base record time-stamp to standard ISO format #1453
    • Encode DIDComm messages before sent to the queue #1408
    • Add Event bus Metadata #1429
    • Allow base wallet to connect to a mediator after startup #1463
    • Log warning when unsupported problem report code is received #1409
    • feature/inbound-transport-profile #1407
    • Import cleanups #1393
    • Add no-op handler for generic ack message (RFC 0015) #1390
    • Align OutOfBandManager.receive_invitation with other connection managers #1382
  • Bug fixes
    • fix: fixes error in use of a default mediator in connections/out of band -- mediation ID was being saved as None instead of the retrieved default mediator value #1490
    • fix: help text for open-mediation flag #1445
    • fix: incorrect return type #1438
    • Add missing param to ws protocol #1442
    • fix: create static doc use empty endpoint if None #1483
    • fix: use named tuple instead of dataclass in mediation invite store #1476
    • When fetching the admin config, don't overwrite webhook settings #1420
    • fix: return type of inject #1392
    • fix: typo in connection static result schema #1389
    • fix: don't require push on outbound queue implementations #1387
  • Updates/Fixes to the Alice/Faber demo and integration tests
    • Clarify instructions in the Acme Controller Demo #1484
    • Fix aip 20 behaviour and other cleanup #1406
    • Fix issue with startup sequence for faber agent #1415
    • Connectionless proof demo #1395
    • Typos in the demo's README.md #1405
    • Run integration tests using external ledger and tails server #1400
  • Chores
    • Update CONTRIBUTING.md #1428
    • Update to ReadMe and Supported RFCs for 0.7.2 #1489
    • Updating the RTDs code for Release 0.7.2 - Try 2 #1488
"},{"location":"CHANGELOG/#071","title":"0.7.1","text":""},{"location":"CHANGELOG/#august-31-2021","title":"August 31, 2021","text":"

A relatively minor maintenance release to address issues found since the 0.7.0 release. Includes some cleanups of JSON-LD Verifiable Credentials and Verifiable Presentations.

  • W3C Verifiable Credential cleanups
    • Timezone inclusion [ISO 8601] for W3C VC and Proofs (#1373)
    • W3C VC handling where attachment is JSON and not Base64 encoded (#1352)
  • Refactor outbound queue interface (#1348)
  • Command line parameter handling for arbitrary plugins (#1347)
  • Add an optional parameter '--ledger-socks-proxy' (#1342)
  • OOB Protocol - CredentialOffer Support (#1316), (#1216)
  • Updated IndyCredPrecisSchema - pres_referents renamed to presentation_referents (#1334)
  • Handle unpadded protected header in PackWireFormat::get_recipient_keys (#1324)
  • Initial cut of OpenAPI Code Generation guidelines (#1339)
  • Correct revocation API in credential revocation documentation (#612)
  • Documentation updates for Read-The-Docs (#1359, #1366, #1371)
  • Add inject_or method to dynamic injection framework to resolve typing ambiguity (#1376)
  • Other fixes:
    • Indy Proof processing fix, error not raised in predicate timestamp check (#1364)
    • Problem Report handler for connection specific problems (#1356)
    • fix: error on deserializing conn record with protocol (#1325)
    • fix: failure to verify jsonld on non-conformant doc but valid vmethod (#1301)
    • fix: allow underscore in endpoints (#1378)
"},{"location":"CHANGELOG/#070","title":"0.7.0","text":""},{"location":"CHANGELOG/#july-14-2021","title":"July 14, 2021","text":"

Another significant release, this version adds support for multiple new protocols, credential formats, and extension methods.

  • Support for W3C Standard Verifiable Credentials based on JSON-LD using LD-Signatures and BBS+ Signatures, contributed by Animo Solutions - #1061
  • Present Proof V2 including support for DIF Presentation Exchange - #1125
  • Pluggable DID Resolver (with a did:web resolver) with fallback to an external DID universal resolver, contributed by Indicio - #1070
  • Updates and extensions to ledger transaction endorsement via the Sign Attachment Protocol, contributed by AyanWorks - #1134, #1200
  • Upgrades to Demos to add support for Credential Exchange 2.0 and W3C Verifiable Credentials #1235
  • Alpha support for the Indy/Aries Shared Components (indy-vdr, indy-credx and aries-askar), which enable running ACA-Py without using Indy-SDK, while still supporting the use of Indy as a ledger, and Indy AnonCreds verifiable credentials #1267
  • A new event bus for distributing internally generated ACA-Py events to controllers and other listeners, contributed by Indicio - #1063
  • Enable operation without Indy ledger support if not needed
  • Performance fix for deployments with large numbers of DIDs/connections #1249
  • Simplify the creation/handling of plugin protocols #1086, #1133, #1226
  • DID Exchange implicit invitation handling #1174
  • Add support for Indy 1.16 predicates (restrictions on predicates based on attribute name and value) #1213
  • BDD Tests run via GitHub Actions #1046
"},{"location":"CHANGELOG/#060","title":"0.6.0","text":""},{"location":"CHANGELOG/#february-25-2021","title":"February 25, 2021","text":"

This is a significant release of ACA-Py with several new features, as well as changes to the internal architecture in order to set the groundwork for using the new shared component libraries: indy-vdr, indy-credx, and aries-askar.

"},{"location":"CHANGELOG/#mediator-support","title":"Mediator support","text":"

While ACA-Py had previous support for a basic routing protocol, this was never fully developed or used in practice. Starting with this release, inbound and outbound connections can be established through a mediator agent using the Aries Mediator Coordination Protocol. This work was initially contributed by Adam Burdett and Daniel Bluhm of Indicio on behalf of SICPA. Read more about mediation support.

"},{"location":"CHANGELOG/#multi-tenancy-support","title":"Multi-Tenancy support","text":"

Started by BMW and completed by Animo Solutions and Anon Solutions on behalf of SICPA, this feature allows for a single ACA-Py instance to host multiple wallet instances. This can greatly reduce the resources required when many identities are being handled. Read more about multi-tenancy support.

"},{"location":"CHANGELOG/#new-connection-protocols","title":"New connection protocol(s)","text":"

In addition to the Aries 0160 Connections RFC, ACA-Py now supports the Aries DID Exchange Protocol for connection establishment and reuse, as well as the Aries Out-of-Band Protocol for representing connection invitations and other pre-connection requests.

"},{"location":"CHANGELOG/#issue-credential-v2","title":"Issue-Credential v2","text":"

This release includes an initial implementation of the Aries Issue Credential v2 protocol.

"},{"location":"CHANGELOG/#notable-changes-for-administrators","title":"Notable changes for administrators","text":"
  • There are several new endpoints available for controllers as well as new startup parameters related to the multi-tenancy and mediator features, see the feature description pages above in order to make use of these features. Additional admin endpoints are introduced for the DID Exchange, Issue Credential v2, and Out-of-Band protocols.

  • When running aca-py start, a new wallet will no longer be created unless the --auto-provision argument is provided. It is recommended to always use aca-py provision to initialize the wallet rather than relying on automatic behaviour, as this removes the need for repeatedly providing the wallet seed value (if any). This is a breaking change from previous versions.

  • When running aca-py provision, an existing wallet will not be removed and re-created unless the --recreate-wallet argument is provided. This is a breaking change from previous versions.

  • The logic around revocation intervals has been tightened up in accordance with Present Proof Best Practices.

"},{"location":"CHANGELOG/#notable-changes-for-plugin-writers","title":"Notable changes for plugin writers","text":"

The following are breaking changes to the internal APIs which may impact Python code extensions.

  • Manager classes generally accept a Profile instance, where previously they accepted a RequestContext.

  • Admin request handlers now receive an AdminRequestContext as app[\"context\"]. The current profile is available as app[\"context\"].profile. The admin server now generates a unique context instance per request in order to facilitate multi-tenancy, rather than reusing the same instance for each handler.

  • In order to inject the BaseStorage or BaseWallet interfaces, a ProfileSession must be used. Other interfaces can be injected at the Profile or ProfileSession level. This is obtained by awaiting profile.session() for the current Profile instance, or (preferably) using it as an async context manager:

async with profile.session() as session:
    storage = session.inject(BaseStorage)

  • The inject method of a context is no longer async.
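Putting those notes together, a plugin's admin route handler under the new model looks roughly like the following sketch; the route body and record type are illustrative, while the context access and injection pattern follow the notes above:

from aiohttp import web

from aries_cloudagent.admin.request_context import AdminRequestContext
from aries_cloudagent.storage.base import BaseStorage

async def my_plugin_route(request: web.Request):
    # Each admin request now gets its own context instance
    context: AdminRequestContext = request.app["context"]
    profile = context.profile
    # BaseStorage must be injected from a session, not from the profile
    async with profile.session() as session:
        storage = session.inject(BaseStorage)  # inject() is no longer async
        records = await storage.find_all_records("my_plugin_record", {})
    return web.json_response({"count": len(records)})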
"},{"location":"CHANGELOG/#056","title":"0.5.6","text":""},{"location":"CHANGELOG/#october-19-2020","title":"October 19, 2020","text":"
  • Fix an attempt to update the agent endpoint when configured with a read-only ledger #758
"},{"location":"CHANGELOG/#055","title":"0.5.5","text":""},{"location":"CHANGELOG/#october-9-2020","title":"October 9, 2020","text":"
  • Support interactions using the new https://didcomm.org message type prefix (currently opt-in via the --emit-new-didcomm-prefix flag) #705, #713
  • Updates to application startup arguments, adding support for YAML configuration #739, #746, #748
  • Add a new endpoint to check the revocation status of a stored credential #735
  • Clean up API documentation and OpenAPI definition, minor API adjustments #712, #726, #732, #734, #738, #741, #747
  • Add configurable support for unencrypted record tags #723
  • Retain more limited records on issued credentials #718
  • Fix handling of custom endpoint in connections accept-request API method #715, #716
  • Add restrictions around revocation registry sizes #727
  • Allow the state for revocation registry records to be set manually #708
  • Handle multiple matching credentials when satisfying a presentation request using names #706
  • Additional handling for a missing local tails file, tails file rollover process #702, #717
  • Handle unknown credential ID in create-proof API method #700
  • Improvements to revocation interval handling in presentation requests #699, #703
  • Clean up warnings on API redirects #692
  • Extensions to DID publicity status #691
  • Support Unicode text in JSON-LD credential handling #687
"},{"location":"CHANGELOG/#054","title":"0.5.4","text":""},{"location":"CHANGELOG/#august-24-2020","title":"August 24, 2020","text":"
  • Improvements to schema, cred def registration procedure #682, #683
  • Updates to align admin API output with documented interface #674, #681
  • Fix provisioning issue when ledger is configured as read-only #673
  • Add get-nym-role action #671
  • Basic support for w3c profile endpoint #667, #669
  • Improve handling of non-revocation interval #648, #680
  • Update revocation demo after changes to tails file handling #644
  • Improve handling of fatal ledger errors #643, #659
  • Improve did:key: handling in out-of-band protocol support #639
  • Fix crash when no public DID is configured #637
  • Fix high CPU usage when only messages pending retry are in the outbound queue #636
  • Additional unit tests for config, messaging, revocation, startup, transports #633, #641, #658, #661, #666
  • Allow forwarded messages to use existing connections and the outbound queue #631
"},{"location":"CHANGELOG/#053","title":"0.5.3","text":""},{"location":"CHANGELOG/#july-23-2020","title":"July 23, 2020","text":"
  • Store endpoint on provisioned DID records #610
  • More reliable delivery of outbound messages and webhooks #615
  • Improvements for OpenShift pod handling #614
  • Remove support for 'on-demand' revocation registries #605
  • Sort tags in generated swagger JSON for better consistency #602
  • Improve support for multi-credential proofs #601
  • Adjust default settings for tracing and add documentation #598, #597
  • Fix reliance on local copy of revocation tails file #590
  • Improved handling of problem reports #595
  • Remove credential preview parameter from credential issue endpoint #596
  • Looser format restrictions on dates #586
  • Support names and attribute-value specifications in present-proof protocol #587
  • Misc documentation updates and unit test coverage
"},{"location":"CHANGELOG/#052","title":"0.5.2","text":""},{"location":"CHANGELOG/#june-26-2020","title":"June 26, 2020","text":"
  • Initial out-of-band protocol support #576
  • Support provisioning a new local-only DID in the wallet, updating a DID endpoint #559, #573
  • Support pagination for holder search operation #558
  • Add raw JSON credential signing and verification admin endpoints #540
  • Catch fatal errors in admin and protocol request handlers #527, #533, #534, #539, #543, #554, #555
  • Add wallet and DID key rotation operations #525
  • Admin API documentation and usability improvements #504, #516, #570
  • Adjust the maximum number of attempts for outbound messages #501
  • Add demo support for tails server #499
  • Various credential and presentation protocol fixes and improvements #491, #494, #498, #526, #561, #563, #564, #577, #579
  • Fixes for multiple agent endpoints #495, #497
  • Additional test coverage #482, #485, #486, #487, #490, #493, #509, #553
  • Update marshmallow dependency #479
"},{"location":"CHANGELOG/#051","title":"0.5.1","text":""},{"location":"CHANGELOG/#april-23-2020","title":"April 23, 2020","text":"
  • Restore previous response format for the /credential/{id} admin route #474
"},{"location":"CHANGELOG/#050","title":"0.5.0","text":""},{"location":"CHANGELOG/#april-21-2020","title":"April 21, 2020","text":"
  • Add support for credential revocation and revocation registry handling, with thanks to Medici Ventures #306, #417, #425, #429, #432, #435, #441, #455
  • Breaking change Remove previous credential and presentation protocols (0.1 versions) #416
  • Add support for major/minor protocol version routing #443
  • Event tracing and trace reports for message exchanges #440
  • Support additional Indy restriction operators (>, <, <= in addition to >=) #457
  • Support signed attachments according to the updated Aries RFC 0017 #456
  • Increased test coverage #442, #453
  • Updates to demo agents and documentation #402, #403, #411, #415, #422, #423, #449, #450, #452
  • Use Indy generate_nonce method to create proof request nonces #431
  • Make request context available in the outbound transport handler #408
  • Contain indy-anoncreds usage in IndyIssuer, IndyHolder, IndyProver classes #406, #463
  • Fix issue with validation of proof with predicates and revocation support #400
"},{"location":"CHANGELOG/#045","title":"0.4.5","text":""},{"location":"CHANGELOG/#march-3-2020","title":"March 3, 2020","text":"
  • Added NOTICES file with license information for dependencies #398
  • Updated documentation for administration API demo #397
  • Accept self-attested attributes in presentation verification, only when no restrictions are present on the requested attribute #394, #396
"},{"location":"CHANGELOG/#044","title":"0.4.4","text":""},{"location":"CHANGELOG/#february-28-2020","title":"February 28, 2020","text":"
  • Update docker image used in demo and test containers #391
  • Fix pre-verify check on received presentations #390
  • Do not canonicalize attribute names in credential previews #389
"},{"location":"CHANGELOG/#043","title":"0.4.3","text":""},{"location":"CHANGELOG/#february-26-2020","title":"February 26, 2020","text":"
  • Fix the application of transaction author agreement acceptance to signed ledger requests #385
  • Add a command line argument to preserve connection exchange records #355
  • Allow custom credential IDs to be specified by the controller in the issue-credential protocol #384
  • Handle send timeouts in the admin server websocket implementation #377
  • Aries RFC 0348: Support the 'didcomm.org' message type prefix for incoming messages #379
  • Add support for additional postgres wallet schemes such as \"MultiWalletDatabase\" #378
  • Updates to the demo agents and documentation to support demos using the OpenAPI interface #371, #375, #376, #382, #383
  • Add a new flag for preventing writes to the ledger #364
"},{"location":"CHANGELOG/#042","title":"0.4.2","text":""},{"location":"CHANGELOG/#february-8-2020","title":"February 8, 2020","text":"
  • Adjust logging on HTTP request retries #363
  • Tweaks to run_docker/run_demo scripts for Windows #357
  • Avoid throwing exceptions on invalid or incomplete received presentations #359
  • Restore the present-proof/create-request admin endpoint for creating connectionless presentation requests #356
  • Activate the connections/create-static admin endpoint for creating static connections #354
"},{"location":"CHANGELOG/#041","title":"0.4.1","text":""},{"location":"CHANGELOG/#january-31-2020","title":"January 31, 2020","text":"
  • Update Forward messages and handlers to align with RFC 0094 for compatibility with libvcx and Streetcred #240, #349
  • Verify encoded attributes match raw attributes on proof presentation #344
  • Improve checks for existing credential definitions in the wallet and on ledger when publishing #333, #346
  • Accommodate referents in presentation proposal preview attribute specifications #333
  • Make credential proposal optional in issue-credential protocol #336
  • Handle proofs with repeated credential definition IDs #330
  • Allow side-loading of alternative inbound transports #322
  • Various fixes to documentation and message schemas, and improved unit test coverage
"},{"location":"CHANGELOG/#040","title":"0.4.0","text":""},{"location":"CHANGELOG/#december-10-2019","title":"December 10, 2019","text":"
  • Improved unit test coverage (actionmenu, basicmessage, connections, introduction, issue-credential, present-proof, routing protocols)
  • Various documentation and bug fixes
  • Add admin routes for fetching and accepting the ledger transaction author agreement #144
  • Add support for receiving connection-less proof presentations #296
  • Set attachment id explicitly in unbound proof request #289
  • Add create-proposal admin endpoint to the present-proof protocol #288
  • Remove old anon/authcrypt support #282
  • Allow additional endpoints to be specified #276
  • Allow timestamp without trailing 'Z' #275, #277
  • Display agent label and version on CLI and SwaggerUI #274
  • Remove connection activity tracking and add ping webhooks (with --monitor-ping) #271
  • Refactor message transport to track all async tasks, active message handlers #269, #287
  • Add invitation mode \"static\" for static connections #260
  • Allow for cred proposal underspecification of cred def id, only lock down cred def id at issuer on offer. Sync up api requests to Aries RFC-36 verbiage #259
  • Disable cookies on outbound requests (avoid session affinity) #258
  • Add plugin registry for managing all loaded protocol plugins, streamline ClassLoader #257, #261
  • Add support for locking a cache key to avoid repeating expensive operations #256
  • Add optional support for uvloop #255
  • Output timing information when --timing-log argument is provided #254
  • General refactoring - modules moved from messaging into new core, protocols, and utils sub-packages #250, #301
  • Switch performance demo to the newer issue-credential protocol #243
"},{"location":"CHANGELOG/#035","title":"0.3.5","text":""},{"location":"CHANGELOG/#november-1-2019","title":"November 1, 2019","text":"
  • Switch performance demo to the newer issue-credential protocol #243
  • Remove old method for reusing credential requests and replace with local caching for credential offers and requests #238, #242
  • Add statistics on HTTP requests to timing output #237
  • Reduce the number of tags on non-secrets records to reduce storage requirements and improve performance #235
"},{"location":"CHANGELOG/#034","title":"0.3.4","text":""},{"location":"CHANGELOG/#october-23-2019","title":"October 23, 2019","text":"
  • Clean up base64 handling in wallet utils and add tests #224
  • Support schema sequence numbers for lookups and caching and allow credential definition tag override via admin API #223
  • Support multiple proof referents in the present-proof protocol #222
  • Group protocol command line arguments appropriately #217
  • Don't require a signature for get_txn_request in credential_definition_id2schema_id and reduce public DID lookups #215
  • Add a role property to credential exchange and presentation exchange records #214, #218
  • Improve attachment decorator handling #210
  • Expand and correct documentation of the OpenAPI interface #208, #212
"},{"location":"CHANGELOG/#033","title":"0.3.3","text":""},{"location":"CHANGELOG/#september-27-2019","title":"September 27, 2019","text":"
  • Clean up LGTM errors and warnings and fix a message dispatch error #203
  • Avoid wrapping messages with Forward wrappers when returning them directly #199
  • Add a CLI parameter to override the base URL used in URL-formatted connection invitations #197
  • Update the feature discovery protocol to match the RFC and rename the admin API endpoint #193
  • Add CLI parameters for specifying additional properties of the printed connection invitation #192
  • Add support for explicitly setting the wallet credential ID on storage #188
  • Additional performance tracking and storage reductions #187
  • Handle connection invitations in base64 or URL format in the Alice demo agent #186
  • Add admin API methods to get and set the credential tagging policy for a credential definition ID #185
  • Allow querying of credentials for proof requests with multiple referents #181
  • Allow self-connected agents to issue credentials, present proofs #179
  • Add admin API endpoints to register a ledger nym, fetch a ledger DID verkey, or fetch a ledger DID endpoint #178
"},{"location":"CHANGELOG/#032","title":"0.3.2","text":""},{"location":"CHANGELOG/#september-3-2019","title":"September 3, 2019","text":"
  • Merge support for Aries #36 (issue-credential) and Aries #37 (present-proof) protocols #164, #167
  • Add initiator to connection record queries to ensure uniqueness in the case of a self-connection #161
  • Add connection aliases #149
  • Misc documentation updates
"},{"location":"CHANGELOG/#031","title":"0.3.1","text":""},{"location":"CHANGELOG/#august-15-2019","title":"August 15, 2019","text":"
  • Do not fail with an error when no ledger is configured #145
  • Switch to PyNaCl instead of pysodium; update dependencies #143
  • Support reusable connection invitations #142
  • Fix --version option and optimize Docker builds #136
  • Add connection_id to basicmessage webhooks #134
  • Fixes for transaction author agreements #133
"},{"location":"CHANGELOG/#030","title":"0.3.0","text":""},{"location":"CHANGELOG/#august-9-2019","title":"August 9, 2019","text":"
  • Ledger and wallet config updates; add support for transaction author agreements #127
  • Handle duplicate schema in send_schema by always fetching first #126
  • More flexible timeout support in detect_process #125
  • Add start command to run_docker invocations #119
  • Add issuer stored state #114
  • Add admin route to create a presentation request without sending it #112
  • Add -v option to aca-py executable to print version #110
  • Fix demo presentation request, optimize credential retrieval #108
  • Add pypi badge to README and make document link URLs absolute #103
  • Add admin routes for creating and listing wallet DIDs, adjusting the public DID #102
  • Update the running locally instructions based on feedback from Sam Smith #101
  • Add support for multiple invocation commands, implement start/provision/help commands #99
  • Add admin endpoint to send problem report #98
  • Add credential received state transition #97
  • Adding documentation for the routing version of the performance example #94
  • Document listing the Aries RFCs supported by ACA-Py and reference to the list in the README #89
  • Further updates to the running locally section of the demo README #86
  • Don't extract decorators with names matching the 'data_key' of defined schema fields #85
  • Allow demo scripts to run outside of Docker; add command line parsing #84
  • Connection invitation fixes and improvements; support DID-based invitations #82
"},{"location":"CHANGELOG/#021","title":"0.2.1","text":""},{"location":"CHANGELOG/#july-16-2019","title":"July 16, 2019","text":"
  • Add missing MANIFEST file #78
"},{"location":"CHANGELOG/#020","title":"0.2.0","text":""},{"location":"CHANGELOG/#july-16-2019_1","title":"July 16, 2019","text":"

This is the first PyPI release. The history begins with the transfer of aca-py from bcgov to hyperledger.

  • Prepare for version 0.2.0 release #77
  • Update von-network related references. #74
  • Fixed log_level arg, added validation error logging #73
  • fix shell inconsistency #72
  • further cleanup to the OpenAPI demo script #71
  • Updates to invitation handling and performance test #68
  • Api security #67
  • Fix line endings on Windows #66
  • Fix repository name in badge links #65
  • Connection record is_ready refactor #64
  • Fix API instructions for cred def id #58
  • Updated API demo docs to use alice/faber scripts #54
  • Updates to the readme for the demo to add PWD support #53
  • Swallow empty input in demo scripts #51
  • Set credential_exchange state when created from a cached credential request #49
  • Check for readiness instead of activeness in credential admin routes #46
  • Demo updates #43
  • Misc fixes #42
  • Readme updates #41
  • Change installed \"binary\" name to aca-py #40
  • Tweak in script to work under Linux; updates to readme for demo #33
  • New routing example document, typo corrections #31
  • More bad links #30
  • Links cleanup for the documentation #29
  • Alice-Faber demo update #28
  • Deployment Model document #27
  • Plantuml source and images for documentation; w/image generator script #26
  • Move generated documentation. #25
  • Update generated documents #24
  • Split application configuration into separate modules and add tests #23
  • Updates to the RTD configuration file #22
  • Merge DIDDoc support from von_anchor #21
  • Adding Prov of BC, Gov of Canada copyright #19
  • Update test configuration #18
  • CI updates #17
  • Transport updates #15
"},{"location":"CODE_OF_CONDUCT/","title":"Hyperledger Code of Conduct","text":"

Hyperledger is a collaborative project at The Linux Foundation. It is an open-source and open community project where participants choose to work together, and in that process experience differences in language, location, nationality, and experience. In such a diverse environment, misunderstandings and disagreements happen, which in most cases can be resolved informally. In rare cases, however, behavior can intimidate, harass, or otherwise disrupt one or more people in the community, which Hyperledger will not tolerate.

A Code of Conduct is useful to define accepted and acceptable behaviors and to promote high standards of professional practice. It also provides a benchmark for self evaluation and acts as a vehicle for better identity of the organization.

This code (CoC) applies to any member of the Hyperledger community \u2013 developers, participants in meetings, teleconferences, mailing lists, conferences or functions, etc. Note that this code complements rather than replaces legal rights and obligations pertaining to any particular situation.

"},{"location":"CODE_OF_CONDUCT/#statement-of-intent","title":"Statement of Intent","text":"

Hyperledger is committed to maintain a positive work environment. This commitment calls for a workplace where participants at all levels behave according to the rules of the following code. A foundational concept of this code is that we all share responsibility for our work environment.

"},{"location":"CODE_OF_CONDUCT/#code","title":"Code","text":"
  1. Treat each other with respect, professionalism, fairness, and sensitivity to our many differences and strengths, including in situations of high pressure and urgency.

  2. Never harass or bully anyone verbally, physically or sexually.

  3. Never discriminate on the basis of personal characteristics or group membership.

  4. Communicate constructively and avoid demeaning or insulting behavior or language.

  5. Seek, accept, and offer objective work criticism, and acknowledge properly the contributions of others.

  6. Be honest about your own qualifications, and about any circumstances that might lead to conflicts of interest.

  7. Respect the privacy of others and the confidentiality of data you access.

  8. With respect to cultural differences, be conservative in what you do and liberal in what you accept from others, but not to the point of accepting disrespectful, unprofessional or unfair or unwelcome behavior or advances.

  9. Promote the rules of this Code and take action (especially if you are in a leadership position) to bring the discussion back to a more civil level whenever inappropriate behaviors are observed.

  10. Stay on topic: Make sure that you are posting to the correct channel and avoid off-topic discussions. Remember when you update an issue or respond to an email you are potentially sending to a large number of people.

  11. Step down considerately: Members of every project come and go, and the Hyperledger is no different. When you leave or disengage from the project, in whole or in part, we ask that you do so in a way that minimizes disruption to the project. This means you should tell people you are leaving and take the proper steps to ensure that others can pick up where you left off.

"},{"location":"CODE_OF_CONDUCT/#glossary","title":"Glossary","text":""},{"location":"CODE_OF_CONDUCT/#demeaning-behavior","title":"Demeaning Behavior","text":"

is acting in a way that reduces another person's dignity, sense of self-worth or respect within the community.

"},{"location":"CODE_OF_CONDUCT/#discrimination","title":"Discrimination","text":"

is the prejudicial treatment of an individual based on criteria such as: physical appearance, race, ethnic origin, genetic differences, national or social origin, name, religion, gender, sexual orientation, family or health situation, pregnancy, disability, age, education, wealth, domicile, political view, morals, employment, or union activity.

"},{"location":"CODE_OF_CONDUCT/#insulting-behavior","title":"Insulting Behavior","text":"

is treating another person with scorn or disrespect.

"},{"location":"CODE_OF_CONDUCT/#acknowledgement","title":"Acknowledgement","text":"

is a record of the origin(s) and author(s) of a contribution.

"},{"location":"CODE_OF_CONDUCT/#harassment","title":"Harassment","text":"

is any conduct, verbal or physical, that has the intent or effect of interfering with an individual, or that creates an intimidating, hostile, or offensive environment.

"},{"location":"CODE_OF_CONDUCT/#leadership-position","title":"Leadership Position","text":"

includes group Chairs, project maintainers, staff members, and Board members.

"},{"location":"CODE_OF_CONDUCT/#participant","title":"Participant","text":"

includes the following persons:

  • Developers
  • Member representatives
  • Staff members
  • Anyone from the Public partaking in the Hyperledger work environment (e.g. contribute code, comment on our code or specs, email us, attend our conferences, functions, etc)
"},{"location":"CODE_OF_CONDUCT/#respect","title":"Respect","text":"

is the genuine consideration you have for someone (if only because of their status as participant in Hyperledger, like yourself), and that you show by treating them in a polite and kind way.

"},{"location":"CODE_OF_CONDUCT/#sexual-harassment","title":"Sexual Harassment","text":"

includes visual displays of degrading sexual images, sexually suggestive conduct, offensive remarks of a sexual nature, requests for sexual favors, unwelcome physical contact, and sexual assault.

"},{"location":"CODE_OF_CONDUCT/#unwelcome-behavior","title":"Unwelcome Behavior","text":"

Hard to define? Some questions to ask yourself are:

  • how would I feel if I were in the position of the recipient?
  • would my spouse, parent, child, sibling or friend like to be treated this way?
  • would I like an account of my behavior published in the organization's newsletter?
  • could my behavior offend or hurt other members of the work group?
  • could someone misinterpret my behavior as intentionally harmful or harassing?
  • would I treat my boss or a person I admire at work like that?
  • Summary: if you are unsure whether something might be welcome or unwelcome, don't do it.
"},{"location":"CODE_OF_CONDUCT/#unwelcome-sexual-advance","title":"Unwelcome Sexual Advance","text":"

includes requests for sexual favors, and other verbal or physical conduct of a sexual nature, where:

  • submission to such conduct is made either explicitly or implicitly a term or condition of an individual's employment,
  • submission to or rejection of such conduct by an individual is used as a basis for employment decisions affecting the individual,
  • such conduct has the purpose or effect of unreasonably interfering with an individual's work performance or creating an intimidating hostile or offensive working environment.
"},{"location":"CODE_OF_CONDUCT/#workplace-bullying","title":"Workplace Bullying","text":"

is a tendency of individuals or groups to use persistent aggressive or unreasonable behavior (e.g. verbal or written abuse, offensive conduct or any interference which undermines or impedes work) against a co-worker or any professional relations.

"},{"location":"CODE_OF_CONDUCT/#work-environment","title":"Work Environment","text":"

is the set of all available means of collaboration, including, but not limited to messages to mailing lists, private correspondence, Web pages, chat channels, phone and video teleconferences, and any kind of face-to-face meetings or discussions.

"},{"location":"CODE_OF_CONDUCT/#incident-procedure","title":"Incident Procedure","text":"

To report incidents or to appeal reports of incidents, send email to Mike Dolan (mdolan@linuxfoundation.org) or Angela Brown (angela@linuxfoundation.org). Please include any available relevant information, including links to any publicly accessible material relating to the matter. Every effort will be taken to ensure a safe and collegial environment in which to collaborate on matters relating to the Project. In order to protect the community, the Project reserves the right to take appropriate action, potentially including the removal of an individual from any and all participation in the project. The Project will work towards an equitable resolution in the event of a misunderstanding.

"},{"location":"CODE_OF_CONDUCT/#credits","title":"Credits","text":"

This code is based on the W3C\u2019s Code of Ethics and Professional Conduct with some additions from the Cloud Foundry\u2019s Code of Conduct.

"},{"location":"CONTRIBUTING/","title":"How to contribute","text":"

You are encouraged to contribute to the repository by forking and submitting a pull request.

For significant changes, please open an issue first to discuss the proposed changes to avoid re-work.

(If you are new to GitHub, you might start with a basic tutorial and check out a more detailed guide to pull requests.)

Pull requests will be evaluated by the repository guardians on a schedule and if deemed beneficial will be committed to the main branch. Pull requests should have a descriptive name, include a summary of all changes made in the pull request description, and include unit tests that provide good coverage of the feature or fix. A Continuous Integration (CI) pipeline is executed on all PRs before review and contributors are expected to address all CI issues identified. Where appropriate, PRs that impact the end-user and developer demos in the repo should include updates or extensions to those demos to cover the new capabilities.

Contributions are made pursuant to the Developer's Certificate of Origin, available at https://developercertificate.org, and licensed under the Apache License, version 2.0 (Apache-2.0).

"},{"location":"CONTRIBUTING/#development-tools","title":"Development Tools","text":""},{"location":"CONTRIBUTING/#pre-commit","title":"Pre-commit","text":"

A configuration for pre-commit is included in this repository. This is an optional tool to help contributors commit code that follows the formatting requirements enforced by the CI pipeline. Additionally, it can be used to help contributors write descriptive commit messages that can be parsed by changelog generators.

On each commit, pre-commit hooks will run that verify the committed code complies with ruff and is formatted with black. To install the ruff and black checks:

pre-commit install\n

To install the commit message linter:

pre-commit install --hook-type commit-msg\n
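
The commit message linter generally expects Conventional Commits-style messages (an assumption based on common changelog generators); for example, \"fix: correct a typo in the mediation docs\" or \"feat: add a revocation demo option\" (illustrative messages, not from this repo's history).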
"},{"location":"MAINTAINERS/","title":"Maintainers","text":""},{"location":"MAINTAINERS/#maintainer-scopes-github-roles-and-github-teams","title":"Maintainer Scopes, GitHub Roles and GitHub Teams","text":"

Maintainers are assigned the following scopes in this repository:

| Scope | Definition | GitHub Role | GitHub Team |
| --- | --- | --- | --- |
| Admin | | Admin | aries-admins |
| Maintainer | The GitHub Maintain role | Maintain | aries-cloudagent-python committers |
| Triage | The GitHub Triage role | Triage | aries triage |
| Read | The GitHub Read role | Read | Aries Contributors |
| Read | The GitHub Read role | Read | TOC |
| Read | The GitHub Read role | Read | aries-framework-go-ext committers |
"},{"location":"MAINTAINERS/#active-maintainers","title":"Active Maintainers","text":"
| GitHub ID | Name | Scope | LFID | Discord ID | Email | Company Affiliation |
| --- | --- | --- | --- | --- | --- | --- |
| andrewwhitehead | Andrew Whitehead | Admin | | | cywolf@gmail.com | BC Gov |
| dbluhm | Daniel Bluhm | Admin | | | daniel@indicio.tech | Indicio PBC |
| dhh1128 | Daniel Hardman | Admin | | | daniel.hardman@gmail.com | Provident |
| shaangill025 | Shaanjot Gill | Maintainer | | | gill.shaanjots@gmail.com | BC Gov |
| swcurran | Stephen Curran | Admin | | | swcurran@cloudcompass.ca | BC Gov |
| TelegramSam | Sam Curren | Maintainer | | | telegramsam@gmail.com | Indicio PBC |
| TimoGlastra | Timo Glastra | Admin | | | timo@animo.id | Animo Solutions |
| WadeBarnes | Wade Barnes | Admin | | | wade@neoterictech.ca | BC Gov |
| usingtechnology | Jason Sherman | Maintainer | | | tools@usingtechnolo.gy | BC Gov |
"},{"location":"MAINTAINERS/#emeritus-maintainers","title":"Emeritus Maintainers","text":"
| Name | GitHub ID | Scope | LFID | Discord ID | Email | Company Affiliation |
| --- | --- | --- | --- | --- | --- | --- |
"},{"location":"MAINTAINERS/#the-duties-of-a-maintainer","title":"The Duties of a Maintainer","text":"

Maintainers are expected to perform the following duties for this repository. The duties are listed roughly in priority order:

  • Review, respond, and act on any security vulnerabilities reported against the repository.
  • Review, provide feedback on, and merge or reject GitHub Pull Requests from Contributors.
  • Review, triage, comment on, and close GitHub Issues submitted by Contributors.
  • When appropriate, lead/facilitate architectural discussions in the community.
  • When appropriate, lead/facilitate the creation of a product roadmap.
  • Create, clarify, and label issues to be worked on by Contributors.
  • Ensure that there is a well defined (and ideally automated) product test and release pipeline, including the publication of release artifacts.
  • When appropriate, execute the product release process.
  • Maintain the repository CONTRIBUTING.md file and getting started documents to give guidance and encouragement to those wanting to contribute to the product, and those wanting to become maintainers.
  • Contribute to the product via GitHub Pull Requests.
  • Monitor requests from the Hyperledger Technical Oversight Committee about the contents and management of Hyperledger repositories, such as branch handling, required files in repositories and so on.
  • Contribute to the Hyperledger Project's Quarterly Report.
"},{"location":"MAINTAINERS/#becoming-a-maintainer","title":"Becoming a Maintainer","text":"

This community welcomes contributions. Interested contributors are encouraged to progress to become maintainers. To become a maintainer, the following steps occur, roughly in order.

  • The proposed maintainer establishes their reputation in the community, including authoring five (5) significant merged pull requests, and expresses an interest in becoming a maintainer for the repository.
  • A PR is created to update this file to add the proposed maintainer to the list of active maintainers.
  • The PR is authored by an existing maintainer or has a comment on the PR from an existing maintainer supporting the proposal.
  • The PR is authored by the proposed maintainer or has a comment on the PR from the proposed maintainer confirming their interest in being a maintainer.
  • The PR or comment from the proposed maintainer must include their willingness to be a long-term (more than 6 month) maintainer.
  • Once the PR and necessary comments have been received, an approval timeframe begins.
  • The PR MUST be communicated on all appropriate communication channels, including relevant community calls, chat channels and mailing lists. Comments of support from the community are welcome.
  • The PR is merged and the proposed maintainer becomes a maintainer if either:
  • Two weeks have passed since at least three (3) Maintainer PR approvals have been recorded, OR
  • An absolute majority of maintainers have approved the PR.
  • If the PR does not get the requisite PR approvals, it may be closed.
  • Once the add maintainer PR has been merged, any necessary updates to the GitHub Teams are made.
"},{"location":"MAINTAINERS/#removing-maintainers","title":"Removing Maintainers","text":"

Being a maintainer is not a status symbol or a title to be carried indefinitely. It will occasionally be necessary and appropriate to move a maintainer to emeritus status. This can occur in the following situations:

  • Resignation of a maintainer.
  • Violation of the Code of Conduct warranting removal.
  • Inactivity.
  • A general measure of inactivity will be no commits or code review comments for one reporting quarter. This will not be strictly enforced if the maintainer expresses a reasonable intent to continue contributing.
  • Reasonable exceptions to inactivity will be granted for known long term leave such as parental leave and medical leave.
  • Other circumstances at the discretion of the other Maintainers.

The process to move a maintainer from active to emeritus status is comparable to the process for adding a maintainer, outlined above. In the case of voluntary resignation, the Pull Request can be merged following a maintainer PR approval. If the removal is for any other reason, the following steps SHOULD be followed:

  • A PR is created to update this file to move the maintainer to the list of emeritus maintainers.
  • The PR is authored by, or has a comment supporting the proposal from, an existing maintainer or Hyperledger GitHub organization administrator.
  • Once the PR and necessary comments have been received, the approval timeframe begins.
  • The PR MAY be communicated on appropriate communication channels, including relevant community calls, chat channels and mailing lists.
  • The PR is merged and the maintainer transitions to maintainer emeritus if:
  • The PR is approved by the maintainer to be transitioned, OR
  • Two weeks have passed since at least three (3) Maintainer PR approvals have been recorded, OR
  • An absolute majority of maintainers have approved the PR.
  • If the PR does not get the requisite PR approvals, it may be closed.

Returning to active status from emeritus status uses the same steps as adding a new maintainer. Note that the emeritus maintainer already has the five (5) required significant merged pull requests, as there is no contribution time horizon for those.

"},{"location":"PUBLISHING/","title":"How to Publish a New Version","text":"

The code to be published should be in the main branch. Make sure that all the PRs to go into the release are merged, and decide on the release tag: should it be a release candidate or the final tag, and should it be a major, minor, or patch release, per semver rules?

Once ready to do a release, create a local branch that includes the following updates:

  1. Create a PR branch from an updated main branch.

  2. Update the CHANGELOG.md to add the new release. Only create a new section when working on the first release candidate for a new release. When transitioning from one release candidate to the next, or to an official release, just update the title and date of the change log section.

  3. Include details of the merged PRs included in this release. General process to follow:

  • Gather the set of PRs since the last release and put them into a list. A good tool to use for this is the github-changelog-generator. Steps:

    • Create a read-only GitHub token for your account on this page: https://github.com/settings/tokens with a scope of repo / public_repo.
    • Use a command like the following, adjusting the tag parameters as appropriate. docker run -it --rm -v \"$(pwd)\":/usr/local/src/your-app githubchangeloggenerator/github-changelog-generator --user hyperledger --project aries-cloudagent-python --output 0.11.0rc2.md --since-tag 0.10.4 --future-release 0.11.1rc2 --release-branch main --token <your-token>
    • In the generated file, use only the PR list -- we don't include the list of closed issues in the Change Log.

In some cases, the approach above fails because of too many API calls. An alternate approach to getting the list of PRs in the right format is to use OpenAI ChatGPT.

Prepare the following ChatGPT request. Don't hit enter yet--you have to add the data.

Generate from this the github pull request number, the github id of the author and the title of the pull request in a tab-delimited list

Get a list of the merged PRs since the last release by displaying the PR list in the GitHub UI, highlighting/copying the PRs and pasting them below the ChatGPT request, one page after another. Hit <Enter>, let the AI magic work, and you should have a list of the PRs in a nice table with a Copy link that you should click.

Once you have that, open this Google Sheet and highlight the A1 cell and paste in the ChatGPT data. A formula in column E will have the properly formatted changelog entries. Double check the list with the GitHub UI to make sure that ChatGPT isn't messing with you and you have the needed data.

If using ChatGPT doesn't appeal to you, try this scary sed/command line approach:

  • Put the following commands into a file called changelog.sed
/Approved/d\n/updated /d\n/^$/d\n/^ [0-9]/d\ns/was merged.*//\n/^@/d\ns# by \\(.*\\) # [\\1](https://github.com/\\1)#\ns/^ //\ns#  \\#\\([0-9]*\\)# [\\#\\1](https://github.com/hyperledger/aries-cloudagent-python/pull/\\1) #\ns/  / /g\n/^Version/d\n/tasks done/d\ns/^/- /\n
  • Navigate in your browser to the paged list of PRs merged since the last release (using in the GitHub UI a filter such as is:pr is:merged sort:updated merged:>2022-04-07) and for each page, highlight, and copy the text of only the list of PRs on the page to use in the following step.
  • For each page, run the command sed -e :a -e '$!N;s/\\n#/ #/;ta' -e 'P;D' <<EOF | sed -f changelog.sed, paste in the copied text and then type EOF. Redirect the output to a file, appending each page of output to the file.
  • The first sed command in the pipeline merges the PR title and PR number plus author lines onto a single line. The commands in the changelog.sed file just clean up the data, removing unwanted lines, etc.
  • At the end of that process, you should have a list of all of the PRs in a form you can use in the CHANGELOG.md file.
  • To verify you have the right number of PRs, you can do a wc of the file; there should be one line per PR. You should scan the file as well, looking for anomalies, such as missing \s before # characters. It's a pretty ugly process.
  • Using a curl command and the GitHub API is probably a much better and more robust way to do this, but this was quick and dirty... (a sketch of the API approach follows)
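
As a concrete illustration of that API idea, here is a minimal Python sketch (not part of the repo; the cutoff date and output format are assumptions) that lists merged PRs via the GitHub API in the changelog's link style:

import requests

REPO = "hyperledger/aries-cloudagent-python"
SINCE = "2022-04-07T00:00:00Z"  # assumed: the date of the previous release

resp = requests.get(
    f"https://api.github.com/repos/{REPO}/pulls",
    params={"state": "closed", "sort": "updated", "direction": "desc", "per_page": 100},
    timeout=30,
)
for pr in resp.json():
    # closed-but-unmerged PRs have merged_at == None; skip them
    # (pagination is omitted for brevity; follow the Link header for more pages)
    if pr.get("merged_at") and pr["merged_at"] > SINCE:
        print(
            f"- {pr['title']} "
            f"[#{pr['number']}](https://github.com/{REPO}/pull/{pr['number']}) "
            f"[{pr['user']['login']}](https://github.com/{pr['user']['login']})"
        )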

Once you have the list of PRs:

  • Organize the list into suitable categories, update (if necessary) the PR description and add notes to clarify the changes. See previous release entries to understand the style -- a format that should help developers.
  • Add a narrative about the release above the PR that highlights what has gone into the release.

  • Check to see if there are any other PRs that should be included in the release.

  • Update the ReadTheDocs in the /docs folder by following the instructions in the ./UpdateRTD.md file. That will likely add a number of new and modified files to the PR. Eliminate all of the errors in the generation process, either by mocking external dependencies or by fixing ACA-Py code. If necessary, create an issue with the errors and assign it to the appropriate developer. Experience has demonstrated to us that documentation generation errors should be fixed in the code.

  • Search across the repository for the previous version number and update it everywhere that makes sense. The CHANGELOG.md entry for the previous release is a likely exception, and the pyproject.toml in the root MUST be updated. You can skip updating the files in the open-api folder (although it won't hurt), as they will be automagically updated by the next step in publishing. The incremented version number MUST adhere to the Semantic Versioning Specification based on the changes since the last published release. For Release Candidates, the form of the tag is \"0.11.0rc2\". As of release 0.11.0 we have dropped the previously used - in the release candidate version string to better follow the semver rules.

  • Regenerate openapi.json and swagger.json by running ../scripts/generate-open-api-spec from within the aries_cloudagent folder.

Command: cd aries_cloudagent;../scripts/generate-open-api-spec;cd ..

  4. Double check all of these steps above, and then submit a PR from the branch. Add this new PR to CHANGELOG.md so that all the PRs are included. If there are still further changes to be merged, mark the PR as \"Draft\", repeat ALL of the steps again, and then mark this PR as ready and then wait until it is merged. It's embarrassing when you have to do a whole new release just because you missed something silly...I know!

  5. Immediately after it is merged, create a new GitHub tag representing the version. The tag name and title of the release should be the same as the version in pyproject.toml. Use the \"Generate Release Notes\" capability to get a sequential listing of the PRs in the release, to complement the manually curated Changelog. Verify on PyPi that the version is published.

  6. New images for the release are automatically published by the GitHub Actions workflows: publish.yml and publish-indy.yml. The actions are triggered when a release is tagged, so no manual action is needed. The images are published in the Hyperledger Package Repository under aries-cloudagent-python and a link to the packages is added to the repository's main page (under \"Packages\").

Additional information about the container image publication process can be found in the document Container Images and Github Actions.

  7. Update the ACA-Py Read The Docs site by building the new \"latest\" (main branch) and activating and building the new release. Appropriate permissions are required to publish the new documentation version.

  8. Update the https://aca-py.org website with the latest documentation by creating a PR and tag of the latest documentation from this site. Details are provided in the aries-acapy-docs repository.

"},{"location":"SECURITY/","title":"Hyperledger Security Policy","text":""},{"location":"SECURITY/#reporting-a-security-bug","title":"Reporting a Security Bug","text":"

If you think you have discovered a security issue in any of the Hyperledger projects, we'd love to hear from you. We take all security bugs seriously; if confirmed upon investigation, we will patch the issue within a reasonable amount of time and release a public security bulletin discussing the impact and crediting the discoverer.

There are two ways to report a security bug. The easiest is to email a description of the flaw and any related information (e.g. reproduction steps, version) to security at hyperledger dot org.

The other way is to file a confidential security bug in our JIRA bug tracking system. Be sure to set the \u201cSecurity Level\u201d to \u201cSecurity issue\u201d.

The process by which the Hyperledger Security Team handles security bugs is documented further in our Defect Response page on our wiki.

"},{"location":"UpdateRTD/","title":"Managing Aries Cloud Agent Python Read The Docs Documentation","text":"

This document describes how to maintain the Read The Docs documentation that is generated from the ACA-Py code base. As the structure of the ACA-Py code evolves, the RTD files need to be regenerated and possibly updated, as described here.

"},{"location":"UpdateRTD/#generating-aca-py-read-the-docs-rtd-documentation","title":"Generating ACA-Py Read The Docs (RTD) documentation","text":""},{"location":"UpdateRTD/#before-you-start","title":"Before you start","text":"

To generate and view the RTD documentation locally for testing, you must install Sphinx and the Sphinx RTD theme. Follow the instructions on the respective pages to install each and verify the installation on your system.

"},{"location":"UpdateRTD/#generate-module-files","title":"Generate Module Files","text":"

To rebuild the project and settings from scratch (you'll need to move the generated index file up a level):

rm -rf generated; sphinx-apidoc -f -M -o ./generated ../aries_cloudagent/ $(find ../aries_cloudagent/ -name '*tests*')

Note that the find command is used to exclude any of the test Python files from the RTD documentation.

Check the git status in your repo to see if the generator updates, adds or removes any existing RTD modules.

"},{"location":"UpdateRTD/#reviewing-the-files-locally","title":"Reviewing the files locally","text":"

To auto-generate the module documentation locally run:

sphinx-build -b html -a -E -c ./ ./ ./_build\n

Once generated, go into the _build folder and open index.html in a browser. Note that the _build is .gitignore'd and so will not be part of a git push.

"},{"location":"UpdateRTD/#look-for-errors","title":"Look for Errors","text":"

This is the hard part: looking for errors in docstrings added by devs. Some tips:

  • Missing imports (No module named 'async_timeout') can be solved by adding the module to the list of autodoc_mock_imports in the conf.py file in the ACA-Py docs folder, as sketched below this list
  • Ignore any errors in .md files
  • Ignore the warnings about including docs/README.md
  • Ignore any dist-package errors
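
A minimal sketch of that conf.py entry (the module names here are examples, not the repo's actual list):

# conf.py, in the ACA-Py docs folder
autodoc_mock_imports = [
    "async_timeout",  # any module Sphinx cannot import at build time
    "indy",
]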

Other than that, please investigate and fix things that you find. Fixes are usually a matter of adhering to the rules around processing docstrings, especially around JSON samples.

"},{"location":"UpdateRTD/#checking-for-missing-modules","title":"Checking for missing modules","text":"

The file index.rst in the ACA-Py docs folder drives the RTD generation. It picks up all the modules in the source code, starting from the root ../aries_cloudagent folder. However, some modules are not picked up automatically from the root and have to be manually added to index.rst. To do that:

  • Get a list of all generated modules by running: ls generated | grep \"aries_cloudagent.[a-z]*.rst\"
  • Compare that list with the modules listed in the \"Subpackages\" section of the left side menu in your browser, including any listed below the \"Submodules\".

If any are missing, you likely need to add them to the index.rst file in the toctree section of the file. You will see there are already several instances of that, notably \"connections\" and \"protocols\".

"},{"location":"UpdateRTD/#updating-the-readthedocsorg-site","title":"Updating the readthedocs.org site","text":"

The RTD documentation is not currently auto-generated, so a manual re-generation of the documentation is still required.

TODO: Automate this when new tags are applied to the repository.

"},{"location":"aca-py.org/","title":"Welcome!","text":"

Welcome to the Aries Cloud Agent Python documentation site. On this site you will find documentation for recent releases of ACA-Py. You'll find a few of the older versions of ACA-Py (pre-0.8.0), all versions since 0.8.0, and the main branch, which is the latest and greatest.

All of the documentation here is extracted from the Aries Cloud Agent Python repository. If you want to contribute to the documentation, please start there.

Ready to go? Scan the tabs in the page header to find the documentation you need now!

"},{"location":"aca-py.org/#code-internals-documentation","title":"Code Internals Documentation","text":"

In addition to this documentation site, the ACA-Py community also maintains an ACA-Py internals documentation site. The internals documentation consists of the docstrings extracted from the ACA-Py Python code and covers all of the (non-test) modules in the codebase. Check it out on the Aries Cloud Agent-Python ReadTheDocs site. As with this site, the ReadTheDocs documentation is version specific.

Got questions?

  • Join us on the Hyperledger Discord Server, in the #aries-cloudagent-python channel.
  • Add an issue in the Aries Cloud Agent Python repository.
"},{"location":"assets/","title":"Assets Folder for Documentation","text":"

Put any assets (images, source for images, videos, etc.) in this folder to be referenced in the various documents for this repo.

"},{"location":"assets/#plantuml-source-and-images","title":"Plantuml Source and Images","text":"

Plantuml diagrams are stored in this folder in source form in files ending in .puml and are generated manually using the ./genPlantuml script. The script uses a docker image from Docker Hub and can be run without downloading any dependencies.

If you don't want to use the script, download plantuml and a command line utility and use that for the plantuml generation. I preferred not to add any dependencies (other than Docker) and couldn't find a nice way to run plantuml headless from the command line.

"},{"location":"assets/#to-do","title":"To Do","text":"

It would be better to use a local Dockerfile vs. one found on Docker Hub. The one I did find was simple and straightforward.

I couldn't tell if the SVG generation was working, so I just went with PNG. Not sure which would be better.

"},{"location":"demo/","title":"Aries Cloud Agent Python (ACA-Py) Demos","text":"

There are several demos available for ACA-Py, mostly (but not only) aimed at developers learning how to deploy an instance of the agent and an ACA-Py controller to implement an application.

"},{"location":"demo/#table-of-contents","title":"Table of Contents","text":"
  • The Alice/Faber Python demo
  • Running in a Browser
  • Running in Docker
  • Running Locally
    • Installing Prerequisites
    • Start a local Indy ledger
    • Genesis File handling
    • Run a local Postgres instance
    • Optional: Run a von-network ledger browser
    • Run the Alice and Faber Controllers/Agents
  • Follow The Script
    • Exchanging Messages
    • Issuing and Proving Credentials
  • Additional Options in the Alice/Faber demo
  • Revocation
  • DID Exchange
  • Endorser
  • Run Indy-SDK Backend
  • Mediation
  • Multi-ledger
  • Multi-tenancy
  • Multi-tenancy with Mediation!!!
  • Other Environment Settings
  • Learning about the Alice/Faber code
  • OpenAPI (Swagger) Demo
  • Performance Demo
  • Coding Challenge: Adding ACME
"},{"location":"demo/#the-alicefaber-python-demo","title":"The Alice/Faber Python demo","text":"

The Alice/Faber demo is the (in)famous first verifiable credentials demo. Alice, a former student of Faber College (\"Knowledge is Good\"), connects with the College, is issued a credential about her degree, and then is asked by the College for a proof. There are a variety of ways of running the demo. The easiest is in your browser using a site (\"Play with VON\") that lets you run docker containers without installing anything. Alternatively, you can run locally on docker (our recommendation), or using Python on your local machine. Each approach is covered below.

"},{"location":"demo/#running-in-a-browser","title":"Running in a Browser","text":"

In your browser, go to the docker playground service Play with Docker. On the title screen, click \"Start\". On the next screen, click (in the left menu) \"+Add a new instance\". That will start up a terminal in your browser. Run the following commands to start the Faber agent:

git clone https://github.com/hyperledger/aries-cloudagent-python\ncd aries-cloudagent-python/demo\nLEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber\n

Now to start Alice's agent. Click the \"+Add a new instance\" button again to open another terminal session. Run the following commands to start Alice's agent:

git clone https://github.com/hyperledger/aries-cloudagent-python\ncd aries-cloudagent-python/demo\nLEDGER_URL=http://test.bcovrin.vonx.io ./run_demo alice\n

Alice's agent is now running.

Jump to the Follow the Script section below for further instructions.

"},{"location":"demo/#running-in-docker","title":"Running in Docker","text":"

Running the demo in docker requires having a von-network (a Hyperledger Indy public ledger sandbox) instance running in docker locally. See the VON Network Tutorial for guidance on starting and stopping your own local Hyperledger Indy instance.

Open three bash shells. For Windows users, git-bash is highly recommended. bash is the default shell in Linux and Mac terminal sessions. For Mac users on the newer M1/2/3 Apple Silicon devices, make sure that you install Apple's Rosetta 2 software, using these installation instructions from Apple, and this even more useful guidance on how to install Rosetta 2 from the command line, which amounts to running this MacOS command: softwareupdate --install-rosetta.

In the first terminal window, start von-network by following the Building and Starting instructions.

In the second terminal, change directory into the demo directory of your clone of the Aries Cloud Agent Python repository. Start the faber agent by issuing the following command:

  ./run_demo faber\n

In the third terminal, change directory into the demo directory of your clone of the Aries Cloud Agent Python repository. Start the alice agent by issuing the following command:

  ./run_demo alice\n

Jump to the Follow the Script section below for further instructions.

"},{"location":"demo/#running-locally","title":"Running Locally","text":"

The following is an approach to running the Alice and Faber demo using Python3 running on a bare machine. There are other ways to run the components, but this covers the general approach.

We don't recommend this approach if you are just trying this demo, as you will likely run into issues with the specific setup of your machine.

"},{"location":"demo/#installing-prerequisites","title":"Installing Prerequisites","text":"

We assume you have a running Python 3 environment. To install the prerequisites specific to running the agent/controller examples in your Python environment, run the following command from this repo's demo folder. The precise command to run may vary based on your Python environment setup.

pip3 install -r demo/requirements.txt\n

While that process will include the installation of the Indy python prerequisite, you still have to build and install the libindy code for your platform. Follow the installation instructions in the indy-sdk repo for your platform.

"},{"location":"demo/#start-a-local-indy-ledger","title":"Start a local Indy ledger","text":"

Start a local von-network Hyperledger Indy network running in Docker by following the VON Network Building and Starting instructions.

We strongly recommend you use Docker for the local Indy network until you really, really need to know the details of running an Indy Node instance on a bare machine.

"},{"location":"demo/#genesis-file-handling","title":"Genesis File handling","text":"

Assuming you followed our advice and are using a VON Network instance of Hyperledger Indy, you can ignore this section. If you started the Indy ledger without using VON Network, this information might be helpful.

An Aries agent (or other client) connecting to an Indy ledger must know the contents of the genesis file for the ledger. The genesis file lets the agent/client know the IP addresses of the initial nodes of the ledger, and the agent/client sends ledger requests to those IP addresses. When using the indy-sdk ledger, look for the instructions in that repo for how to find/update the ledger genesis file, and note the path to that file on your local system.

The environment variable GENESIS_FILE is used to let the Aries demo agents know the location of the genesis file. Use the path to that file as the value of the GENESIS_FILE environment variable in the instructions below. You might want to copy that file to be local to the demo so the path is shorter.

"},{"location":"demo/#run-a-local-postgres-instance","title":"Run a local Postgres instance","text":"

The demo uses a postgres database for wallet persistence. Use the Docker Hub certified postgres image to start up a postgres instance to be used for the wallet storage:

docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d -p 5432:5432 postgres -c 'log_statement=all' -c 'logging_collector=on' -c 'log_destination=stderr'\n
"},{"location":"demo/#optional-run-a-von-network-ledger-browser","title":"Optional: Run a von-network ledger browser","text":"

If you followed our advice and are using a VON Network instance of Hyperledger Indy, you can ignore this section, as you already have a Ledger browser running, accessible on http://localhost:9000.

If you started the Indy ledger without using VON Network, and you want to be able to browse your local ledger as you run the demo, clone the von-network repo, go into the root of the cloned instance and run the following command, replacing the /path/to/local-genesis.txt with a path to the same genesis file as was used in starting the ledger.

GENESIS_FILE=/path/to/local-genesis.txt PORT=9000 REGISTER_NEW_DIDS=true python -m server.server\n
"},{"location":"demo/#run-the-alice-and-faber-controllersagents","title":"Run the Alice and Faber Controllers/Agents","text":"

With the rest of the pieces running, you can run the Alice and Faber controllers and agents. To do so, cd into the demo folder of your clone of this repo in two terminal windows.

If you are using a VON Network instance of Hyperledger Indy, run the following commands:

DEFAULT_POSTGRES=true python3 -m runners.faber --port 8020\n
DEFAULT_POSTGRES=true python3 -m runners.alice --port 8030\n

If you started the Indy ledger without using VON Network, use the following commands, replacing the /path/to/local-genesis.txt with the one for your configuration.

GENESIS_FILE=/path/to/local-genesis.txt DEFAULT_POSTGRES=true python3 -m runners.faber --port 8020\n
GENESIS_FILE=/path/to/local-genesis.txt DEFAULT_POSTGRES=true python3 -m runners.alice --port 8030\n

Note that Alice and Faber will each use 5 ports, e.g., using the parameter ... --port 8020 actually uses ports 8020 through 8024. Feel free to use different ports if you want.

Everything running? See the Follow the Script section below for further instructions.

If the demo fails with an error that references the genesis file, a timeout connecting to the Indy Pool, or an Indy 307 error, it's likely a problem with the genesis file handling. Things to check:

  • Review the instructions for running the ledger with indy-sdk. Is it running properly?
  • Is the /path/to/local-genesis.txt file correct in your start commands?
  • Look at the IP addresses in the genesis file you are using, and make sure that those IP addresses are accessible from the location you are running the Aries demo (a scripted connectivity check is sketched after this list)
  • Check to make sure that all of the nodes of the ledger started. We've seen examples of only some of the nodes starting up, triggering an Indy 307 error.
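
If you want to script that IP accessibility check, here is a hedged sketch (the genesis field names follow common Indy genesis transactions; adjust the file path for your setup):

import json
import socket

with open("/path/to/local-genesis.txt") as f:
    for line in f:
        line = line.strip()
        if not line:
            continue
        node = json.loads(line)["txn"]["data"]["data"]
        ip, port = node["client_ip"], int(node["client_port"])
        try:
            # agents/clients send ledger requests to the nodes' client endpoints
            socket.create_connection((ip, port), timeout=3).close()
            print(f"OK   {node.get('alias')} {ip}:{port}")
        except OSError as err:
            print(f"FAIL {node.get('alias')} {ip}:{port} ({err})")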
"},{"location":"demo/#follow-the-script","title":"Follow The Script","text":"

With both the Alice and Faber agents started, go to the Faber terminal window. The Faber agent has created and displayed an invitation. Copy this invitation and paste it at the Alice prompt. The agents will connect and then show a menu of options:

Faber:

    (1) Issue Credential\n    (2) Send Proof Request\n    (3) Send Message\n    (4) Create New Invitation\n    (T) Toggle tracing on credential/proof exchange\n    (X) Exit?\n

Alice:

    (3) Send Message\n    (4) Input New Invitation\n    (X) Exit?\n
"},{"location":"demo/#exchanging-messages","title":"Exchanging Messages","text":"

Feel free to use the \"3\" option to send messages back and forth between the agents. Fun, eh? Those are secure, end-to-end encrypted messages.
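
Under the hood, the controller's \"send message\" option posts to ACA-Py's basic message admin endpoint. A minimal sketch of doing the same directly (the admin port and connection id are placeholders for this demo's values):

import requests

ADMIN_URL = "http://localhost:8021"  # assumed: Faber's admin port in this demo
CONNECTION_ID = "<the alice/faber connection id>"

requests.post(
    f"{ADMIN_URL}/connections/{CONNECTION_ID}/send-message",
    json={"content": "Hello from a controller!"},
    timeout=30,
)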

"},{"location":"demo/#issuing-and-proving-credentials","title":"Issuing and Proving Credentials","text":"

When ready to test the credentials exchange protocols, go to the Faber prompt, enter \"1\" to send a credential, and then \"2\" to request a proof.

You don't need to do anything with Alice's agent - her agent is implemented to automatically receive credentials and respond to proof requests.

Note there is an option \"2a\" to initiate a connectionless proof - you can execute this option but it will only work end-to-end when connecting to Faber from a mobile agent.

"},{"location":"demo/#additional-options-in-the-alicefaber-demo","title":"Additional Options in the Alice/Faber demo","text":"

You can enable support for various ACA-Py features by providing additional command-line arguments when starting up alice or faber.

Note that when the controller starts up the agent, it prints out the ACA-Py startup command with all parameters - you can inspect this command to see what parameters are provided in each case. For more details on the parameters, just start ACA-Py with the --help parameter, for example:

./scripts/run_docker start --help\n
"},{"location":"demo/#revocation","title":"Revocation","text":"

To enable support for revoking credentials, run the faber demo with the --revocation option:

./run_demo faber --revocation\n

Note that you don't specify this option with alice because it's only applicable for the credential issuer (who has to enable revocation when creating a credential definition, and explicitly revoke credentials as appropriate; alice doesn't have to do anything special when revocation is enabled).

You need to run an AnonCreds revocation registry tails server in order to support revocation - the details are described in the Alice gets a Phone demo instructions.

Faber will set up support for revocation automatically, and you will see an extra option in faber's menu to revoke a credential:

    (1) Issue Credential\n    (2) Send Proof Request\n    (3) Send Message\n    (4) Create New Invitation\n    (5) Revoke Credential\n    (6) Publish Revocations\n    (7) Rotate Revocation Registry\n    (8) List Revocation Registries\n    (T) Toggle tracing on credential/proof exchange\n    (X) Exit?\n

When you issue a credential, make a note of the Revocation registry ID and Credential revocation ID:

Faber | Revocation registry ID: WGmUNAdH2ZfeGvacFoMVVP:4:WGmUNAdH2ZfeGvacFoMVVP:3:CL:38:Faber.Agent.degree_schema:CL_ACCUM:15ca49ed-1250-4608-9e8f-c0d52d7260c3\nFaber | Credential revocation ID: 1\n

When you revoke a credential you will need to provide those values:

[1/2/3/4/5/6/7/8/T/X] 5\n\nEnter revocation registry ID: WGmUNAdH2ZfeGvacFoMVVP:4:WGmUNAdH2ZfeGvacFoMVVP:3:CL:38:Faber.Agent.degree_schema:CL_ACCUM:15ca49ed-1250-4608-9e8f-c0d52d7260c3\nEnter credential revocation ID: 1\nPublish now? [Y/N]: y\n
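
For controller authors: menu option 5 maps to ACA-Py's revocation admin endpoint. A minimal sketch of calling it directly (the admin port is an assumption; the ids come from the output above):

import requests

ADMIN_URL = "http://localhost:8021"  # assumed: Faber's admin port

requests.post(
    f"{ADMIN_URL}/revocation/revoke",
    json={
        "rev_reg_id": "<Revocation registry ID from the issue step>",
        "cred_rev_id": "1",
        "publish": True,  # False defers to a later "Publish Revocations"
    },
    timeout=30,
)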

Note that you need to Publish the revocation information to the ledger. Once you've revoked a credential, any proof which uses this credential will fail to verify.

Rotating the revocation registry will decommission any \"ready\" registry records and create 2 new registry records. You can watch in the logs as the records are created and transition to 'active'. There should always be 2 'active' revocation registries - one working and one for hot-swap. Note that revocation information can still be published from decommissioned registries.

You can also list the created registries, filtering by current state: 'init', 'generated', 'posted', 'active', 'full', 'decommissioned'.

"},{"location":"demo/#did-exchange","title":"DID Exchange","text":"

You can enable DID Exchange using the --did-exchange parameter for the alice and faber demos.

This will use the new DID Exchange protocol when establishing connections between the agents, rather than the older Connection protocol. There is no other effect on the operation of the agents.

With DID Exchange, you can also enable use of the inviter's public DID for invitations, multi-use invitations, connection re-use, and use of qualified DIDs:

  • --public-did-connections - use the inviter's public DID in invitations, and allow use of implicit invitations
  • --reuse-connections - support connection re-use (the invitee will reuse an existing connection if it uses the same DID as in the new invitation)
  • --multi-use-invitations - the inviter will issue multi-use invitations
  • --emit-did-peer-4 - participants will prefer use of did:peer:4 for their pairwise connection DIDs
  • --emit-did-peer-2 - participants will prefer use of did:peer:2 for their pairwise connection DIDs
"},{"location":"demo/#endorser","title":"Endorser","text":"

This is described in Endorser.md.

"},{"location":"demo/#run-indy-sdk-backend","title":"Run Indy-SDK Backend","text":"

This runs using the older (and not recommended) indy-sdk libraries instead of Aries Askar:

./run_demo faber --wallet-type indy\n

"},{"location":"demo/#mediation","title":"Mediation","text":"

To enable mediation, run the alice or faber demo with the --mediation option:

./run_demo faber --mediation\n

This will start up a \"mediator\" agent with Alice or Faber and automatically set the alice/faber connection to use the mediator.

"},{"location":"demo/#multi-ledger","title":"Multi-ledger","text":"

To enable multiple ledger mode, run the alice or faber demo with the --multi-ledger option:

./run_demo faber --multi-ledger\n

The configuration file for setting up multiple ledgers (for the demo) can be found at ./demo/multiple_ledger_config.yml.

"},{"location":"demo/#multi-tenancy","title":"Multi-tenancy","text":"

To enable support for multi-tenancy, run the alice or faber demo with the --multitenant option:

./run_demo faber --multitenant\n

(This option can be used with either or both of alice and faber.)

You will see an additional menu option to create new sub-wallets (or they can be considered to be \"virtual agents\").

Faber:

    (1) Issue Credential\n    (2) Send Proof Request\n    (3) Send Message\n    (4) Create New Invitation\n    (W) Create and/or Enable Wallet\n    (T) Toggle tracing on credential/proof exchange\n    (X) Exit?\n

Alice:

    (3) Send Message\n    (4) Input New Invitation\n    (W) Create and/or Enable Wallet\n    (X) Exit?\n

When you create a new wallet, you just need to provide the wallet name. (If you provide the name of an existing wallet then the controller will \"activate\" that wallet and make it the current wallet.)

[1/2/3/4/W/T/X] w\n\nEnter wallet name: new_wallet_12\n\nFaber      | Register or switch to wallet new_wallet_12\nFaber      | Created new profile\nFaber      | Profile backend: indy\nFaber      | Profile name: new_wallet_12\nFaber      | No public DID\n... etc\n

Note that faber will create a public DID for this wallet, and will create a schema and credential definition.

Once you have created a new wallet, you must establish a connection between alice and faber (remember that this is a new \"virtual agent\" and doesn't know anything about connections established for other \"agents\").

In faber, create a new invitation:

[1/2/3/4/W/T/X] 4\n\n(... creates a new invitation ...)\n

In alice, accept the invitation:

[1/2/3/4/W/T/X] 4\n\n(... enter the new invitation string ...)\n

You can inspect the additional multi-tenancy admin APIs (i.e. the \"agency API\") by opening either agent's swagger page in your browser.

Note that with multi-tenancy enabled:

  • The \"base\" wallet will have access to this new \"agency API\" - the agent's admin key, if enabled, must be provided in a header
  • \"Base wallet\" API calls are handled here
  • The \"sub-wallets\" will have access to the \"normal\" ACA-Py admin API - to identify the sub-wallet, a JWT token must be provided, this token is created upon creation of the new wallet (see: this code here)
  • \"Sub-wallet\" API calls are handled here

Documentation on ACA-Py's multi-tenancy support can be found here.
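
To make the token flow concrete, here is a hedged sketch of creating a sub-wallet through the \"agency\" API and then calling the normal admin API as that sub-wallet (the port, key, and wallet values are assumptions):

import requests

ADMIN_URL = "http://localhost:8021"     # assumed: the base wallet's admin port
HEADERS = {"x-api-key": "adminApiKey"}  # only needed if an admin API key is enabled

# Base ("agency") API call: create a sub-wallet; the response includes its JWT.
resp = requests.post(
    f"{ADMIN_URL}/multitenancy/wallet",
    headers=HEADERS,
    json={"wallet_name": "new_wallet_12", "wallet_key": "secret", "wallet_type": "askar"},
    timeout=30,
)
token = resp.json()["token"]

# Sub-wallet API call: the JWT identifies which sub-wallet the request is for.
conns = requests.get(
    f"{ADMIN_URL}/connections",
    headers={**HEADERS, "Authorization": f"Bearer {token}"},
    timeout=30,
)
print(conns.json())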

"},{"location":"demo/#multi-tenancy-with-mediation","title":"Multi-tenancy with Mediation!!!","text":"

There are two options for configuring mediation with multi-tenancy, documented here.

This demo implements option #2 - each sub-wallet is configured with a separate connection to the mediator.

Run the demo (Alice or Faber) specifying both options:

./run_demo faber --multitenant --mediation\n

This works exactly like vanilla multi-tenancy, except that all connections are mediated.

"},{"location":"demo/#other-environment-settings","title":"Other Environment Settings","text":"

The agents run on a pre-defined set of ports; however, occasionally your local system may already be using one of these ports. (For example, MacOS recently decided to use 8021 for the ftp proxy service.)

To override the default port settings:

AGENT_PORT_OVERRIDE=8010 ./run_demo faber\n

(The agent requires up to 10 available ports.)

To pass extra arguments to the agent (for example):

DEMO_EXTRA_AGENT_ARGS=\"[\\\"--emit-did-peer-2\\\"]\" ./run_demo faber --did-exchange --reuse-connections\n

Additionally, separating the build and run functionalities in the script allows for smoother development and debugging processes. With the mounting of volumes from the host into the Docker container, code changes can be automatically reloaded without the need to repeatedly build the demo.

Build Command:

./demo/run_demo build alice --wallet-type askar-anoncreds --events\n

Run Command:

./demo/run_demo run alice --wallet-type askar-anoncreds --events\n

"},{"location":"demo/#learning-about-the-alicefaber-code","title":"Learning about the Alice/Faber code","text":"

These Alice and Faber scripts (in the demo/runners folder) implement the controller and run the agent as a sub-process (see the documentation for aca-py). The controller publishes a REST service to receive web hook callbacks from their agent. Note that this architecture, running the agent as a sub-process, is a variation on the documented architecture of running the controller and agent as separate processes/containers.

The controllers for this demo can be found in the alice.py and faber.py files. Alice and Faber are instances of the agent class found in agent.py.

"},{"location":"demo/#openapi-swagger-demo","title":"OpenAPI (Swagger) Demo","text":"

Developing an ACA-Py controller is much like developing a web app that uses a REST API. As you develop, you will want an easy way to test out the behaviour of the API. That's where the industry-standard OpenAPI (aka Swagger) UI comes in. ACA-Py (optionally) exposes an OpenAPI UI that you can use to learn the ins and outs of the API. This Aries OpenAPI demo shows how you can use the OpenAPI UI with an ACA-Py agent by walking through connecting, issuing a credential, and presenting a proof.

"},{"location":"demo/#performance-demo","title":"Performance Demo","text":"

Another example in the demo/runners folder is performance.py, which is used to test the performance of interacting agents. The script starts up agents for Alice and Faber, initializes them, and then runs through an interaction some number of times. In this case, Faber issues a credential to Alice 300 times.

To run the demo, make sure that you shut down any running Alice/Faber agents. Then, follow the same steps to start the Alice/Faber demo, but:

  • When starting the first agent, replace the agent name (e.g. faber) with performance.
  • Don't start the second agent (alice) at all.

The script starts both agents, runs the performance test, spits out performance results and shuts down the agents. Note that this is just one demonstration of how performance metrics tracking can be done with ACA-Py.

A second version of the performance test can be run by adding the parameter --routing to the invocation above. The parameter triggers the example to run with Alice using a routing agent such that all messages pass through the routing agent between Alice and Faber. This is a good, simple example of how routing can be implemented with DIDComm agents.

You can also run the demo against a postgres database using the following:

./run_demo performance --arg-file demo/postgres-indy-args.yml\n

(Obviously you need to be running a postgres database - the command to start postgres is in the yml file provided above.)

You can tweak the number of credentials issued using the --count and --batch parameters, and you can run against an Askar database using the --wallet-type askar option (or run using indy-sdk using --wallet-type indy).

An example full set of options is:

./run_demo performance --arg-file demo/postgres-indy-args.yml -c 10000 -b 10 --wallet-type askar\n

Or:

./run_demo performance --arg-file demo/postgres-indy-args.yml -c 10000 -b 10 --wallet-type indy\n
"},{"location":"demo/#coding-challenge-adding-acme","title":"Coding Challenge: Adding ACME","text":"

Now that you have a solid foundation in using ACA-Py, it's time for a coding challenge. In this challenge, we extend the Alice-Faber command line demo by adding ACME Corp, a place where Alice wants to work. The demo adds:

  • ACME inviting Alice to connect
  • ACME requesting a proof of her College degree
  • ACME issuing Alice a credential after she is hired.

The framework for the code is in the acme.py file, but the code is incomplete. Using the knowledge you gained from running the demo and viewing the alice.py and faber.py code, fill in the blanks for the code. When you are ready to test your work:

  • Use the instructions above to start the Alice/Faber demo.
  • Start another terminal session and run the same commands as for \"Alice\", but replace \"alice\" with \"acme\".

All done? Check out how we added the missing code segments here.

"},{"location":"demo/AcmeDemoWorkshop/","title":"Acme Controller Workshop","text":"

In this workshop we will add some functionality to a third participant in the Alice/Faber drama - namely, Acme Inc. After completing her education at Faber College, Alice is going to apply for a job at Acme Inc. To do this she must provide proof of education (once she has completed the interview and other non-Indy tasks), and then Acme will issue her an employment credential.

Note that an updated Acme controller is available here: https://github.com/ianco/aries-cloudagent-python/tree/acme_workshop/demo if you just want to skip ahead ... There is also an alternate solution with some additional functionality available here: https://github.com/ianco/aries-cloudagent-python/tree/agent_workshop/demo

"},{"location":"demo/AcmeDemoWorkshop/#preview-of-the-acme-controller","title":"Preview of the Acme Controller","text":"

There is already a skeleton of the Acme controller in place; you can run it as follows. (Note that beyond establishing a connection it doesn't actually do anything yet.)

To run the Acme controller template, first run Alice and Faber so that Alice can prove her education experience:

Open 2 bash shells, and in each run:

git clone https://github.com/hyperledger/aries-cloudagent-python.git\ncd aries-cloudagent-python/demo\n

In one shell run Faber:

LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber\n

... and in the second shell run Alice:

LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo alice\n

When Faber has produced an invitation, copy it over to Alice.

Then, in the Faber shell, select option 1 to issue a credential to Alice. (You can select option 2 if you like, to confirm via proof.)

Then, in the Faber shell, enter X to exit the controller, and then run the Acme controller:

LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo acme\n

In the Alice shell, select option 4 (to enter a new invitation) and then copy over Acme's invitation once it's available.

Then, in the Acme shell, you can select option 2 and then option 1, which don't do anything ... yet!!!

"},{"location":"demo/AcmeDemoWorkshop/#asking-alice-for-a-proof-of-education","title":"Asking Alice for a Proof of Education","text":"

In the Acme code acme.py we are going to add code to issue a proof request to Alice, and then validate the received proof.

First, add the following import statements and constants that we will need near the top of acme.py:

import os\nimport random\n\nfrom datetime import date\nfrom uuid import uuid4\n
TAILS_FILE_COUNT = int(os.getenv(\"TAILS_FILE_COUNT\", 100))\nCRED_PREVIEW_TYPE = \"https://didcomm.org/issue-credential/2.0/credential-preview\"\n

Next locate the code that is triggered by option 2:

            elif option == \"2\":\n                log_status(\"#20 Request proof of degree from alice\")\n                # TODO presentation requests\n

Replace the # TODO comment with the following code:

                req_attrs = [\n                    {\n                        \"name\": \"name\",\n                        \"restrictions\": [{\"schema_name\": \"degree schema\"}]\n                    },\n                    {\n                        \"name\": \"date\",\n                        \"restrictions\": [{\"schema_name\": \"degree schema\"}]\n                    },\n                    {\n                        \"name\": \"degree\",\n                        \"restrictions\": [{\"schema_name\": \"degree schema\"}]\n                    }\n                ]\n                req_preds = []\n                indy_proof_request = {\n                    \"name\": \"Proof of Education\",\n                    \"version\": \"1.0\",\n                    \"nonce\": str(uuid4().int),\n                    \"requested_attributes\": {\n                        f\"0_{req_attr['name']}_uuid\": req_attr\n                        for req_attr in req_attrs\n                    },\n                    \"requested_predicates\": {}\n                }\n                proof_request_web_request = {\n                    \"connection_id\": agent.connection_id,\n                    \"presentation_request\": {\"indy\": indy_proof_request},\n                }\n                # this sends the request to our agent, which forwards it to Alice\n                # (based on the connection_id)\n                await agent.admin_POST(\n                    \"/present-proof-2.0/send-request\",\n                    proof_request_web_request\n                )\n

Now we need to handle receipt of the proof. Locate the code that handles received proofs (this is in a webhook callback):

        if state == \"presentation-received\":\n            # TODO handle received presentations\n            pass\n

then replace the # TODO comment and the pass statement:

            log_status(\"#27 Process the proof provided by X\")\n            log_status(\"#28 Check if proof is valid\")\n            proof = await self.admin_POST(\n                f\"/present-proof-2.0/records/{pres_ex_id}/verify-presentation\"\n            )\n            self.log(\"Proof = \", proof[\"verified\"])\n\n            # if presentation is a degree schema (proof of education),\n            # check values received\n            pres_req = message[\"by_format\"][\"pres_request\"][\"indy\"]\n            pres = message[\"by_format\"][\"pres\"][\"indy\"]\n            is_proof_of_education = (\n                pres_req[\"name\"] == \"Proof of Education\"\n            )\n            if is_proof_of_education:\n                log_status(\"#28.1 Received proof of education, check claims\")\n                for (referent, attr_spec) in pres_req[\"requested_attributes\"].items():\n                    if referent in pres['requested_proof']['revealed_attrs']:\n                        self.log(\n                            f\"{attr_spec['name']}: \"\n                            f\"{pres['requested_proof']['revealed_attrs'][referent]['raw']}\"\n                        )\n                    else:\n                        self.log(\n                            f\"{attr_spec['name']}: \"\n                            \"(attribute not revealed)\"\n                        )\n                for id_spec in pres[\"identifiers\"]:\n                    # just print out the schema/cred def id's of presented claims\n                    self.log(f\"schema_id: {id_spec['schema_id']}\")\n                    self.log(f\"cred_def_id {id_spec['cred_def_id']}\")\n                # TODO placeholder for the next step\n            else:\n                # in case there are any other kinds of proofs received\n                self.log(\"#28.1 Received \", pres_req[\"name\"])\n

Right now this just verifies the proof received and prints out the attributes it reveals, but in \"real life\" your application could do something useful with this information.

Now you can run the Faber/Alice/Acme script from the \"Preview of the Acme Controller\" section above, and you should see Acme receive a proof from Alice!

"},{"location":"demo/AcmeDemoWorkshop/#issuing-alice-a-work-credential","title":"Issuing Alice a Work Credential","text":"

Now we can issue a work credential to Alice!

There are two options for this. We can (a) add code under option 1 to issue the credential, or (b) we can automatically issue this credential on receipt of the education proof.

We're going to do option (a), but you can try to implement option (b) as homework. You have most of the information you need from the proof response!

First, though, we need to register a schema and credential definition. Find this code:

        # acme_schema_name = \"employee id schema\"\n        # acme_schema_attrs = [\"employee_id\", \"name\", \"date\", \"position\"]\n        await acme_agent.initialize(\n            the_agent=agent,\n            # schema_name=acme_schema_name,\n            # schema_attrs=acme_schema_attrs,\n        )\n\n        # TODO publish schema and cred def\n

... and uncomment the code lines. Replace the # TODO comment with the following code:

        with log_timer(\"Publish schema and cred def duration:\"):\n            # define schema\n            version = format(\n                \"%d.%d.%d\"\n                % (\n                    random.randint(1, 101),\n                    random.randint(1, 101),\n                    random.randint(1, 101),\n                )\n            )\n            # register schema and cred def\n            (schema_id, cred_def_id) = await agent.register_schema_and_creddef(\n                \"employee id schema\",\n                version,\n                [\"employee_id\", \"name\", \"date\", \"position\"],\n                support_revocation=False,\n                revocation_registry_size=TAILS_FILE_COUNT,\n            )\n

For option (1) we want to replace the # TODO comment here:

            elif option == \"1\":\n                log_status(\"#13 Issue credential offer to X\")\n                # TODO credential offers\n

with the following code:

                agent.cred_attrs[cred_def_id] = {\n                    \"employee_id\": \"ACME0009\",\n                    \"name\": \"Alice Smith\",\n                    \"date\": date.isoformat(date.today()),\n                    \"position\": \"CEO\"\n                }\n                cred_preview = {\n                    \"@type\": CRED_PREVIEW_TYPE,\n                    \"attributes\": [\n                        {\"name\": n, \"value\": v}\n                        for (n, v) in agent.cred_attrs[cred_def_id].items()\n                    ],\n                }\n                offer_request = {\n                    \"connection_id\": agent.connection_id,\n                    \"comment\": f\"Offer on cred def id {cred_def_id}\",\n                    \"credential_preview\": cred_preview,\n                    \"filter\": {\"indy\": {\"cred_def_id\": cred_def_id}},\n                }\n                await agent.admin_POST(\n                    \"/issue-credential-2.0/send-offer\", offer_request\n                )\n

... and then locate the code that handles the credential request callback:

        if state == \"request-received\":\n            # TODO issue credentials based on offer preview in cred ex record\n            pass\n

... and replace the # TODO comment and pass statement with the following code to issue the credential as Acme offered it:

            # issue credentials based on offer preview in cred ex record\n            if not message.get(\"auto_issue\"):\n                await self.admin_POST(\n                    f\"/issue-credential-2.0/records/{cred_ex_id}/issue\",\n                    {\"comment\": f\"Issuing credential, exchange {cred_ex_id}\"},\n                )\n

Now you can run the Faber/Alice/Acme steps again. You should be able to receive a proof and then issue a credential to Alice.
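
As for the option (b) homework mentioned earlier, one possible approach is to send the credential offer directly from the proof handler, replacing the # TODO placeholder for the next step. This is a sketch only, not the demo's actual code: it assumes the cred_def_id from startup is stored on the agent (e.g. as self.cred_def_id), that CRED_PREVIEW_TYPE is in scope, and that the verify-presentation response reports "verified" as a string, as the log output above suggests.

            # option (b) sketch: auto-issue the work credential once the\n            # education proof verifies; all names here follow the option (a)\n            # code above and are assumptions about your setup\n            if proof[\"verified\"] == \"true\":\n                self.cred_attrs[self.cred_def_id] = {\n                    \"employee_id\": \"ACME0009\",\n                    \"name\": \"Alice Smith\",\n                    \"date\": date.isoformat(date.today()),\n                    \"position\": \"CEO\",\n                }\n                cred_preview = {\n                    \"@type\": CRED_PREVIEW_TYPE,\n                    \"attributes\": [\n                        {\"name\": n, \"value\": v}\n                        for (n, v) in self.cred_attrs[self.cred_def_id].items()\n                    ],\n                }\n                offer_request = {\n                    \"connection_id\": self.connection_id,\n                    \"comment\": f\"Offer on cred def id {self.cred_def_id}\",\n                    \"credential_preview\": cred_preview,\n                    \"filter\": {\"indy\": {\"cred_def_id\": self.cred_def_id}},\n                }\n                await self.admin_POST(\n                    \"/issue-credential-2.0/send-offer\", offer_request\n                )\n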

"},{"location":"demo/AliceGetsAPhone/","title":"Alice Gets a Mobile Agent!","text":"

In this demo, we'll again use our familiar Faber ACA-Py agent to issue credentials to Alice, but this time Alice will use a mobile wallet. To do this we need to run the Faber agent on a publicly accessible port, and Alice will need a compatible mobile wallet. We'll provide pointers to where you can get them.

This demo also introduces revocation of credentials.

"},{"location":"demo/AliceGetsAPhone/#contents","title":"Contents","text":"
  • Getting Started
  • Get a mobile agent
  • Running Locally in Docker
    • Install ngrok and jq
    • Expose services publicly using ngrok
  • Running in Play With Docker
  • Run an instance of indy-tails-server
    • Running locally in a bash shell?
    • Running in Play with Docker?
  • Run faber With Extra Parameters
    • Running locally in a bash shell?
    • Running in Play with Docker?
    • Waiting for the Faber agent to start ...
  • Accept the Invitation
  • Issue a Credential
  • Accept the Credential
  • Issue a Presentation Request
  • Present the Proof
  • Review the Proof
  • Revoke the Credential and Send Another Proof Request
  • Send a Connectionless Proof Request
  • Conclusion
"},{"location":"demo/AliceGetsAPhone/#getting-started","title":"Getting Started","text":"

This demo can be run on your local machine or on Play with Docker (PWD), and will demonstrate credential exchange and proof exchange as well as revocation with a mobile agent. Both approaches (running locally and on PWD) will be described; for the most part the commands are the same, but there are a couple of different parameters you need to provide when starting up.

If you are not familiar with how revocation is currently implemented in Hyperledger Indy, this article provides a good background on the technique. A challenge with revocation as it is currently implemented in Hyperledger Indy is the need for the prover (the agent creating the proof) to download tails files associated with the credentials it holds.

"},{"location":"demo/AliceGetsAPhone/#get-a-mobile-agent","title":"Get a mobile agent","text":"

Of course, for this you need to have a mobile agent. To find, install and set up a compatible mobile agent, follow the instructions here.

"},{"location":"demo/AliceGetsAPhone/#running-locally-in-docker","title":"Running Locally in Docker","text":"

Open a new bash shell and in a project directory run the following:

git clone https://github.com/hyperledger/aries-cloudagent-python.git\ncd aries-cloudagent-python/demo\n

We'll come back to this in a minute, when we start the faber agent!

There are a couple of extra steps you need to take to prepare to run the Faber agent locally:

"},{"location":"demo/AliceGetsAPhone/#install-ngrok-and-jq","title":"Install ngrok and jq","text":"

ngrok is used to expose public endpoints for services running locally on your computer.

jq is a json parser that is used to automatically detect the endpoints exposed by ngrok.

You can install ngrok from here

You can download jq releases here
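
(If you are curious how that endpoint detection works: ngrok exposes a local inspection API at http://localhost:4040/api/tunnels that lists the active tunnels. The demo scripts query it with curl and jq; the following Python sketch is roughly equivalent, and assumes the requests package is installed.)

import requests  # assumption: the requests package is installed\n\n# ngrok's local inspection API lists the active tunnels\ntunnels = requests.get(\"http://localhost:4040/api/tunnels\").json()[\"tunnels\"]\n\n# pick out the https public url for the agent endpoint\nhttps_url = next(\n    t[\"public_url\"] for t in tunnels if t[\"public_url\"].startswith(\"https\")\n)\nprint(https_url)\n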

"},{"location":"demo/AliceGetsAPhone/#expose-services-publicly-using-ngrok","title":"Expose services publicly using ngrok","text":"

Note that this is only required when running docker on your local machine. When you run on PWD a public endpoint for your agent is exposed automatically.

Since the mobile agent will need some way to communicate with the agent running on your local machine in docker, we will need to create a publicly accessible url for some services on your machine. The easiest way to do this is with ngrok. Once ngrok is installed, create a tunnel to your local machine:

ngrok http 8020\n

This service is used for your local aca-py agent - it is the endpoint that is advertised for other Aries agents to connect to.

You will see something like this:

Forwarding                    http://abc123.ngrok.io -> http://localhost:8020\nForwarding                    https://abc123.ngrok.io -> http://localhost:8020\n

This creates a public url for port 8020 on your local machine.

Note that an ngrok process is created automatically for your tails server.

Keep this process running as we'll come back to it in a moment.

"},{"location":"demo/AliceGetsAPhone/#running-in-play-with-docker","title":"Running in Play With Docker","text":"

To run the necessary terminal sessions in your browser, go to the Docker playground service Play with Docker. Don't know about Play with Docker? Check this out to learn more.

Open a new bash shell and in a project directory run the following:

git clone https://github.com/hyperledger/aries-cloudagent-python.git\ncd aries-cloudagent-python/demo\n

We'll come back to this in a minute, when we start the faber agent!

"},{"location":"demo/AliceGetsAPhone/#run-an-instance-of-indy-tails-server","title":"Run an instance of indy-tails-server","text":"

For revocation to function, we need another component running that is used to store what are called tails files.

If you are not running with revocation enabled you can skip this step.

"},{"location":"demo/AliceGetsAPhone/#running-locally-in-a-bash-shell","title":"Running locally in a bash shell?","text":"

Open a new bash shell, and in a project directory, run:

git clone https://github.com/bcgov/indy-tails-server.git\ncd indy-tails-server/docker\n./manage build\n./manage start\n

This will run the required components for the tails server to function and make a tails server available on port 6543.

This will also automatically start an ngrok server that will expose a public url for your tails server - this is required to support mobile agents. The docker output will look something like this:

ngrok-tails-server_1  | t=2020-05-13T22:51:14+0000 lvl=info msg=\"started tunnel\" obj=tunnels name=\"command_line (http)\" addr=http://tails-server:6543 url=http://c5789aa0.ngrok.io\nngrok-tails-server_1  | t=2020-05-13T22:51:14+0000 lvl=info msg=\"started tunnel\" obj=tunnels name=command_line addr=http://tails-server:6543 url=https://c5789aa0.ngrok.io\n

Note the server name in the url=https://c5789aa0.ngrok.io parameter (https://c5789aa0.ngrok.io) - this is the external url for your tails server. Make sure you use the https url!

"},{"location":"demo/AliceGetsAPhone/#running-in-play-with-docker_1","title":"Running in Play with Docker?","text":"

Run the same steps on PWD as you would run locally (see above). Open a new shell (click on \"ADD NEW INSTANCE\") to run the tails server.

Note that with Play with Docker it can be challenging to capture the information you need from the log file as it scrolls by; you can try leaving off the --events option when you run the Faber agent to reduce the quantity of information logged to the screen.

"},{"location":"demo/AliceGetsAPhone/#run-faber-with-extra-parameters","title":"Run faber With Extra Parameters","text":""},{"location":"demo/AliceGetsAPhone/#running-locally-in-a-bash-shell_1","title":"Running locally in a bash shell?","text":"

If you are running in a local bash shell, navigate to the demo directory in your fork/clone of the Aries Cloud Agent Python repository and run:

TAILS_NETWORK=docker_tails-server LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber --aip 10 --revocation --events\n

(Note that we have to start faber with --aip 10 for compatibility with mobile clients.)

The TAILS_NETWORK parameter lets the demo script know how to connect to the tails server (which should be running in a separate shell on the same machine).

"},{"location":"demo/AliceGetsAPhone/#running-in-play-with-docker_2","title":"Running in Play with Docker?","text":"

If you are running in Play with Docker, navigate to the demo folder in the clone of Aries Cloud Agent Python and run the following:

PUBLIC_TAILS_URL=https://c4f7fbb85911.ngrok.io LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber --aip 10 --revocation --events\n

The PUBLIC_TAILS_URL parameter lets the demo script know how to connect to the tails server. This can be running in another PWD session, or even on your local machine - the ngrok endpoint is public and will map to the correct location.

Use the ngrok url for the tails server that you noted earlier.

Note that you must use the https url for the tails server endpoint.

Note: you may want to leave off the --events option when you run the Faber agent if you find you are getting too much logging output.

"},{"location":"demo/AliceGetsAPhone/#waiting-for-the-faber-agent-to-start","title":"Waiting for the Faber agent to start ...","text":"

The Preparing agent image... step on the first run takes a bit of time, so while we wait, let's look at the details of the commands. Running Faber is similar to the instructions in the Aries OpenAPI Demo \"Play with Docker\" section, except:

  • We are using the BCovrin Test network because that is a network that the mobile agents can be configured to use.
  • We are running in \"auto\" mode, so we will make no manual acknowledgements.
  • The revocation-related changes:
  • The TAILS_NETWORK parameter tells the ./run_demo script how to connect to the tails server and determine the public ngrok endpoint.
  • The PUBLIC_TAILS_URL environment variable is the address of your tails server (must be https).
  • The --revocation parameter to the ./run_demo script activates the ACA-Py revocation issuance.

As part of its startup process, the agent will publish a revocation registry to the ledger.

Click here to view screenshot of the revocation registry on the ledger"},{"location":"demo/AliceGetsAPhone/#accept-the-invitation","title":"Accept the Invitation","text":"

When the Faber agent starts up it automatically creates an invitation and generates a QR code on the screen. On your mobile app, select \"SCAN CODE\" (or equivalent) and point your camera at the generated QR code. The mobile agent should automatically capture the code and ask you to confirm the connection. Confirm it.

Click here to view screenshot

The mobile agent will give you feedback on the connection process, something like \"A connection was added to your wallet\".

Click here to view screenshot Click here to view screenshot

Switch your browser back to Play with Docker. You should see that the connection has been established, and there is a prompt for what actions you want to take, e.g. \"Issue Credential\", \"Send Proof Request\" and so on.

Tip: If your screen is too small to display the QR code (this can happen in Play With Docker because the shell is only given a small portion of the browser), you can copy the invitation url to a site like https://www.the-qrcode-generator.com/ to convert it into a QR code that you can scan. Make sure you select the URL option, and copy the invitation_url, which will look something like:

https://abfde260.ngrok.io?c_i=eyJAdHlwZSI6ICJkaWQ6c292OkJ6Q2JzTlloTXJqSGlxWkRUVUFTSGc7c3BlYy9jb25uZWN0aW9ucy8xLjAvaW52aXRhdGlvbiIsICJAaWQiOiAiZjI2ZjA2YTItNWU1Mi00YTA5LWEwMDctOTNkODBiZTYyNGJlIiwgInJlY2lwaWVudEtleXMiOiBbIjlQRFE2alNXMWZwZkM5UllRWGhCc3ZBaVJrQmVKRlVhVmI0QnRQSFdWbTFXIl0sICJsYWJlbCI6ICJGYWJlci5BZ2VudCIsICJzZXJ2aWNlRW5kcG9pbnQiOiAiaHR0cHM6Ly9hYmZkZTI2MC5uZ3Jvay5pbyJ9\n

Or this:

http://ip10-0-121-4-bquqo816b480a4bfn3kg-8020.direct.play-with-docker.com?c_i=eyJAdHlwZSI6ICJkaWQ6c292OkJ6Q2JzTlloTXJqSGlxWkRUVUFTSGc7c3BlYy9jb25uZWN0aW9ucy8xLjAvaW52aXRhdGlvbiIsICJAaWQiOiAiZWI2MTI4NDUtYmU1OC00YTNiLTk2MGUtZmE3NDUzMGEwNzkyIiwgInJlY2lwaWVudEtleXMiOiBbIkFacEdoMlpIOTJVNnRFRTlmYk13Z3BqQkp3TEUzRFJIY1dCbmg4Y2FqdzNiIl0sICJzZXJ2aWNlRW5kcG9pbnQiOiAiaHR0cDovL2lwMTAtMC0xMjEtNC1icXVxbzgxNmI0ODBhNGJmbjNrZy04MDIwLmRpcmVjdC5wbGF5LXdpdGgtdm9uLnZvbnguaW8iLCAibGFiZWwiOiAiRmFiZXIuQWdlbnQifQ==\n

Note that this will use the ngrok endpoint if you are running locally, or your PWD endpoint if you are running on PWD.

"},{"location":"demo/AliceGetsAPhone/#issue-a-credential","title":"Issue a Credential","text":"

We will use the Faber console to issue a credential. This could be done using the Swagger API as we have done in the connection process; we'll leave that as an exercise for the user.

In the Faber console, select option 1 to send a credential to the mobile agent.

Click here to view screenshot

The Faber agent outputs details to the console; e.g.,

Faber      | Credential: state = credential-issued, cred_ex_id = ba3089d6-92da-4cb7-9062-7f24066b2a2a\nFaber      | Revocation registry ID: CMqNjZ8e59jDuBYcquce4D:4:CMqNjZ8e59jDuBYcquce4D:3:CL:50:faber.agent.degree_schema:CL_ACCUM:4f4fb2e4-3a59-45b1-8921-578d005a7ff6\nFaber      | Credential revocation ID: 1\nFaber      | Credential: state = done, cred_ex_id = ba3089d6-92da-4cb7-9062-7f24066b2a2a\n

The revocation registry id and credential revocation id only appear if revocation is active. If you are doing revocation, you will need the revocation registry id later, so we recommend that you copy it now and paste it into a text file or some other place that you can access later. If you don't write it down, you can get the id from the Admin API using the GET /revocation/active-registry/{cred_def_id} endpoint, passing in the credential definition id (which you can get from the GET /credential-definitions/created endpoint).
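
If you prefer to script that lookup instead of using the Swagger page, a minimal sketch is below. It assumes the requests package, the demo's default Faber admin port of 8021, and the response field names shown in the ACA-Py Swagger page; verify those against your own instance.

import requests\n\n# assumption: Faber's admin API is on localhost:8021 (the demo default)\nADMIN_URL = \"http://localhost:8021\"\n\n# list the credential definitions this agent has created\ncred_defs = requests.get(f\"{ADMIN_URL}/credential-definitions/created\").json()\ncred_def_id = cred_defs[\"credential_definition_ids\"][0]\n\n# look up the active revocation registry for that cred def\nactive = requests.get(\n    f\"{ADMIN_URL}/revocation/active-registry/{cred_def_id}\"\n).json()\nprint(active[\"result\"][\"revoc_reg_id\"])\n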

"},{"location":"demo/AliceGetsAPhone/#accept-the-credential","title":"Accept the Credential","text":"

The credential offer should automatically show up in the mobile agent. Accept the offered credential following the instructions provided by the mobile agent. That will look something like this:

Click here to view screenshot Click here to view screenshot Click here to view screenshot"},{"location":"demo/AliceGetsAPhone/#issue-a-presentation-request","title":"Issue a Presentation Request","text":"

We will use the Faber console to ask the mobile agent for a proof. This could be done using the Swagger API, but we'll leave that as an exercise for the user.

In the Faber console, select option 2 to send a proof request to the mobile agent.

Click here to view screenshot"},{"location":"demo/AliceGetsAPhone/#present-the-proof","title":"Present the Proof","text":"

The presentation (proof) request should automatically show up in the mobile agent. Follow the instructions provided by the mobile agent to prepare and send the proof back to Faber. That will look something like this:

Click here to view screenshot Click here to view screenshot Click here to view screenshot

If the mobile agent is able to successfully prepare and send the proof, you can go back to the Play with Docker terminal to see the status of the proof.

The process should \"just work\" for the non-revocation use case. If you are using revocation, your results may vary. As of writing this, we get failures on the wallet side with some mobile wallets, and on the Faber side with others (an error in the Indy SDK). As the results improve, we'll update this. Please let us know through GitHub issues if you have any problems running this.

"},{"location":"demo/AliceGetsAPhone/#review-the-proof","title":"Review the Proof","text":"

In the Faber console window, the proof should be received as validated.

Click here to view screenshot"},{"location":"demo/AliceGetsAPhone/#revoke-the-credential-and-send-another-proof-request","title":"Revoke the Credential and Send Another Proof Request","text":"

If you have enabled revocation, you can try revoking the credential and publishing its pending revoked status (faber options 5 and 6). For the revocation step, you will need the revocation registry identifier and the credential revocation identifier (which is 1 for the first credential you issued), which the Faber agent logged to the console at credential issue time.

Once that is done, try sending another proof request and see what happens! Experiment with immediate and pending publication. Note that immediate publication also publishes any pending revocations on its revocation registry.
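
If you want to drive the same steps from the Admin API rather than the Faber menu, a hedged sketch using the /revocation/revoke and /revocation/publish-revocations endpoints follows. The admin port and request field names are assumptions based on the demo defaults and the ACA-Py admin API; confirm them in your Swagger page.

import requests\n\n# assumption: Faber's admin API is on localhost:8021 (the demo default)\nADMIN_URL = \"http://localhost:8021\"\nrev_reg_id = \"<revocation registry id noted at issue time>\"\ncred_rev_id = \"1\"  # the first credential issued from the registry\n\n# mark the credential revoked, holding publication as pending\nrequests.post(\n    f\"{ADMIN_URL}/revocation/revoke\",\n    json={\"rev_reg_id\": rev_reg_id, \"cred_rev_id\": cred_rev_id, \"publish\": False},\n)\n\n# later, publish the pending revocations for this registry\nrequests.post(\n    f\"{ADMIN_URL}/revocation/publish-revocations\",\n    json={\"rrid2crid\": {rev_reg_id: [cred_rev_id]}},\n)\n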

Click here to view screenshot"},{"location":"demo/AliceGetsAPhone/#send-a-connectionless-proof-request","title":"Send a Connectionless Proof Request","text":"

A connectionless proof request works the same way as a regular proof request; however, it does not require a connection to be established between the Verifier and Holder/Prover.

This is supported in the Faber demo; however, note that it will only work when running Faber on the Docker playground service Play with Docker. (This is because the Faber agent and controller both need to be exposed to the mobile agent.)

If you have gone through the above steps, you can delete the Faber connection in your mobile agent (however do not delete the credential that Faber issued to you).

Then in the faber demo, select option 2a - Faber will display a QR code which you can scan with your mobile agent. You will see the same proof request displayed in your mobile agent, which you can respond to.

Behind the scenes, the Faber controller delivers the proof request information (linked from the url encoded in the QR code) directly to your mobile agent, without establishing an agent-to-agent connection first. If you are interested in the underlying mechanics, you can review the faber.py code in the repository.
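
As a rough sketch of the verifier side (not the demo's actual code; it assumes the /present-proof-2.0/create-request admin endpoint, the demo's default admin port, and an illustrative proof request), the controller creates a presentation exchange without a connection and then publishes a URL that resolves to it:

import requests\n\n# assumption: Faber's admin API is on localhost:8021 (the demo default)\nADMIN_URL = \"http://localhost:8021\"\n\nproof_request = {\n    \"name\": \"Proof of Education\",\n    \"version\": \"1.0\",\n    \"requested_attributes\": {\"0_name_uuid\": {\"name\": \"name\"}},\n    \"requested_predicates\": {},\n}\n\n# create a presentation exchange record with no connection_id\nresp = requests.post(\n    f\"{ADMIN_URL}/present-proof-2.0/create-request\",\n    json={\"presentation_request\": {\"indy\": proof_request}},\n).json()\n\n# the controller embeds this request in the url encoded in the QR code,\n# so the mobile agent can fetch it without first forming a connection\nprint(resp[\"pres_ex_id\"])\n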

"},{"location":"demo/AliceGetsAPhone/#conclusion","title":"Conclusion","text":"

That\u2019s the Faber-Mobile Alice demo. Feel free to play with the Swagger API and experiment further and figure out what an instance of a controller has to do to make things work.

"},{"location":"demo/AliceWantsAJsonCredential/","title":"How to Issue JSON-LD Credentials using ACA-Py","text":"

ACA-Py has the capability to issue and verify both Indy and JSON-LD (W3C compliant) credentials.

The JSON-LD support is documented here - this document will provide some additional detail on how to use the demo and admin api to issue and prove JSON-LD credentials.

"},{"location":"demo/AliceWantsAJsonCredential/#setup-agents-to-issue-json-ld-credentials","title":"Setup Agents to Issue JSON-LD Credentials","text":"

Clone this repository to a directory on your local machine:

git clone https://github.com/hyperledger/aries-cloudagent-python.git\ncd aries-cloudagent-python/demo\n

Open up a second shell (so you have 2 shells open in the demo directory) and in one shell:

LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber --did-exchange --aip 20 --cred-type json-ld\n

... and in the other:

LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo alice\n

Note that you start the faber agent with AIP2.0 options. (When you specify --cred-type json-ld faber will set aip to 20 automatically, so the --aip option is not strictly required). Note as well the use of the LEDGER_URL. Technically, that should not be needed if we aren't doing anything with Indy ledger-based credentials. However, there must be something in the way that the Faber and Alice controllers are starting up that requires access to a ledger.

Also note that the above will only work with the /issue-credential-2.0/create-offer endpoint. If you want to use the /issue-credential-2.0/send endpoint - which automates each step of the credential exchange - you will need to include the --no-auto option when starting each of the alice and faber agents (since the alice and faber controllers also automatically respond to each step in the credential exchange).

(Alternatively, you can run the Alice and Faber agents locally; see the ./faber-local.sh and ./alice-local.sh scripts in the demo directory.)

Copy the \"invitation\" json text from the Faber shell and paste into the Alice shell to establish a connection between the two agents.

(If you are running with --no-auto you will also need to call the /connections/{conn_id}/accept-invitation endpoint in alice's admin api swagger page.)

Now open up two browser windows to the Faber and Alice admin api swagger pages.

Using the Faber admin api, you have to create a DID with the appropriate:

  • DID method (\"key\" or \"sov\")
  • key type \"ed25519\" or \"bls12381g2\" (corresponding to signature types \"Ed25519Signature2018\" or \"BbsBlsSignature2020\")
  • if you use DID method \"sov\" you must use key type \"ed25519\"

Note that \"did:sov\" must be a public DID (i.e. registered on the ledger) but \"did:key\" is not.

For example, in Faber's swagger page call the /wallet/did/create endpoint with the following payload:

{\n  \"method\": \"key\",\n  \"options\": {\n    \"key_type\": \"bls12381g2\" // or ed25519\n  }\n}\n

This will return something like:

{\n  \"result\": {\n    \"did\": \"did:key:zUC71KdwBhq1FioWh53VXmyFiGpewNcg8Ld42WrSChpMzzskRWwHZfG9TJ7hPj8wzmKNrek3rW4ZkXNiHAjVchSmTr9aNUQaArK3KSkTySzjEM73FuDV62bjdAHF7EMnZ27poCE\",\n    \"verkey\": \"mV6482Amu6wJH8NeMqH3QyTjh6JU6N58A8GcirMZG7Wx1uyerzrzerA2EjnhUTmjiSLAp6CkNdpkLJ1NTS73dtcra8WUDDBZ3o455EMrkPyAtzst16RdTMsGe3ctyTxxJav\",\n    \"posture\": \"wallet_only\",\n    \"key_type\": \"bls12381g2\",\n    \"method\": \"key\"\n  }\n}\n

You do not create a schema or cred def for a JSON-LD credential (these are only required for \"indy\" credentials).

You will need to create a DID as above for Alice as well (/wallet/did/create etc ...).

Congratulations, you are now ready to start issuing JSON-LD credentials!

  • You have two agents with a connection established between the agents - you will need to copy Faber's connection_id into the examples below.
  • You have created a (non-public) DID for Faber to use to sign/issue the credentials - you will need to copy the DID that you created above into the examples below (as issuer).
  • You have created a (non-public) DID for Alice to use as her credentialSubject.id - this is what allows Alice to sign the proof (strictly speaking, the credentialSubject.id is not required, but without it the provided presentation can't be verified).

To issue a credential, use the /issue-credential-2.0/send-offer endpoint. (You can also use the /issue-credential-2.0/send endpoint if, as mentioned above, you have included the --no-auto option when starting both of the agents.)

You can test with this example payload (just replace the \"connection_id\", \"issuer\" key, \"credentialSubject.id\" and \"proofType\" with appropriate values):

{\n  \"connection_id\": \"4fba2ce5-b411-4ecf-aa1b-ec66f3f6c903\",\n  \"filter\": {\n    \"ld_proof\": {\n      \"credential\": {\n        \"@context\": [\n          \"https://www.w3.org/2018/credentials/v1\",\n          \"https://www.w3.org/2018/credentials/examples/v1\"\n        ],\n        \"type\": [\"VerifiableCredential\", \"UniversityDegreeCredential\"],\n        \"issuer\": \"did:key:zUC71KdwBhq1FioWh53VXmyFiGpewNcg8Ld42WrSChpMzzskRWwHZfG9TJ7hPj8wzmKNrek3rW4ZkXNiHAjVchSmTr9aNUQaArK3KSkTySzjEM73FuDV62bjdAHF7EMnZ27poCE\",\n        \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n        \"credentialSubject\": {\n          \"id\": \"did:key:aksdkajshdkajhsdkjahsdkjahsdj\",\n          \"givenName\": \"Sally\",\n          \"familyName\": \"Student\",\n          \"degree\": {\n            \"type\": \"BachelorDegree\",\n            \"degreeType\": \"Undergraduate\",\n            \"name\": \"Bachelor of Science and Arts\"\n          },\n          \"college\": \"Faber College\"\n        }\n      },\n      \"options\": {\n        \"proofType\": \"BbsBlsSignature2020\"\n      }\n    }\n  }\n}\n

Note that if you have the \"auto\" settings on, this is all you need to do. Otherwise you need to call the remaining endpoints (/send-request, /store, etc.) to complete the protocol.

To see the issued credential, call the /credentials/w3c endpoint on Alice's admin api - this will return something like:

{\n  \"results\": [\n    {\n      \"contexts\": [\n        \"https://w3id.org/security/bbs/v1\",\n        \"https://www.w3.org/2018/credentials/examples/v1\",\n        \"https://www.w3.org/2018/credentials/v1\"\n      ],\n      \"types\": [\n        \"UniversityDegreeCredential\",\n        \"VerifiableCredential\"\n      ],\n      \"schema_ids\": [],\n      \"issuer_id\": \"did:key:zUC71KdwBhq1FioWh53VXmyFiGpewNcg8Ld42WrSChpMzzskRWwHZfG9TJ7hPj8wzmKNrek3rW4ZkXNiHAjVchSmTr9aNUQaArK3KSkTySzjEM73FuDV62bjdAHF7EMnZ27poCE\",\n      \"subject_ids\": [],\n      \"proof_types\": [\n        \"BbsBlsSignature2020\"\n      ],\n      \"cred_value\": {\n        \"@context\": [\n          \"https://www.w3.org/2018/credentials/v1\",\n          \"https://www.w3.org/2018/credentials/examples/v1\",\n          \"https://w3id.org/security/bbs/v1\"\n        ],\n        \"type\": [\n          \"VerifiableCredential\",\n          \"UniversityDegreeCredential\"\n        ],\n        \"issuer\": \"did:key:zUC71Kd...poCE\",\n        \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n        \"credentialSubject\": {\n          \"id\": \"did:key:aksdkajshdkajhsdkjahsdkjahsdj\",\n          \"givenName\": \"Sally\",\n          \"familyName\": \"Student\",\n          \"degree\": {\n            \"type\": \"BachelorDegree\",\n            \"degreeType\": \"Undergraduate\",\n            \"name\": \"Bachelor of Science and Arts\"\n          },\n          \"college\": \"Faber College\"\n        },\n        \"proof\": {\n          \"type\": \"BbsBlsSignature2020\",\n          \"proofPurpose\": \"assertionMethod\",\n          \"verificationMethod\": \"did:key:zUC71Kd...poCE#zUC71Kd...poCE\",\n          \"created\": \"2021-05-19T16:19:44.458170\",\n          \"proofValue\": \"g0weLyw2Q+niQ4pGfiXB...tL9C9ORhy9Q==\"\n        }\n      },\n      \"cred_tags\": {},\n      \"record_id\": \"365ab87b12f74b2db784fdd4db8419f5\"\n    }\n  ]\n}\n

If you don't see the credential in your wallet, look up the credential exchange record (in alice's admin api - /issue-credential-2.0/records) and check the state. If the state is credential-received, then the credential has been received but not stored; in this case, just call the /store endpoint for this credential exchange.
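
A small scripted version of that check is sketched below. It assumes the requests package, the demo's default Alice admin port of 8031, and the v2.0 record shape shown in the Swagger page; adjust for your setup.

import requests\n\n# assumption: Alice's admin API is on localhost:8031 (the demo default)\nALICE_ADMIN_URL = \"http://localhost:8031\"\n\n# find credential exchanges that are received but not yet stored\nrecords = requests.get(\n    f\"{ALICE_ADMIN_URL}/issue-credential-2.0/records\"\n).json()\nfor rec in records[\"results\"]:\n    cred_ex = rec[\"cred_ex_record\"]\n    if cred_ex[\"state\"] == \"credential-received\":\n        # store the received credential in the wallet\n        requests.post(\n            f\"{ALICE_ADMIN_URL}/issue-credential-2.0/records/\"\n            f\"{cred_ex['cred_ex_id']}/store\",\n            json={},\n        )\n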

"},{"location":"demo/AliceWantsAJsonCredential/#building-more-realistic-json-ld-credentials","title":"Building More Realistic JSON-LD Credentials","text":"

The above example uses the https://www.w3.org/2018/credentials/examples/v1 context, which should never be used in a real application.

To build credentials in real life, you first determine which attributes you need and then include the appropriate contexts.

"},{"location":"demo/AliceWantsAJsonCredential/#context-schemaorg","title":"Context schema.org","text":"

You can use attributes defined on schema.org, although this is NOT RECOMMENDED (it is included here for illustrative purposes only), because individual attributes can't be validated (see the comment later on).

You first include https://schema.org in the @context block of the credential as follows:

\"@context\": [\n  \"https://www.w3.org/2018/credentials/v1\",\n  \"https://schema.org\"\n],\n

Then you review the attributes and objects defined by https://schema.org and decide what you need to include in your credential.

For example, to issue a credential with givenName, familyName and alumniOf attributes, submit the following:

{\n  \"connection_id\": \"ad35a4d8-c84b-4a4f-a83f-1afbf134b8b9\",\n  \"filter\": {\n    \"ld_proof\": {\n      \"credential\": {\n        \"@context\": [\n          \"https://www.w3.org/2018/credentials/v1\",\n          \"https://schema.org\"\n        ],\n        \"type\": [\"VerifiableCredential\", \"Person\"],\n        \"issuer\": \"did:key:zUC71pj2gpDLfcZ9DE1bMtjZGWCSLhkQsUCaKjqXtCftGkz27894pEX9VvGNiFsaV67gqv2TEPQ2aDaDDdTDNp42LfDdK1LaWSBCfzsQEyaiR1zjZm1RtoRu1ZM6v6vz4TiqDgU\",\n        \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n        \"credentialSubject\": {\n          \"id\": \"did:key:aksdkajshdkajhsdkjahsdkjahsdj\",\n          \"givenName\": \"Sally\",\n          \"familyName\": \"Student\",\n          \"alumniOf\": \"Example University\"\n        }\n      },\n      \"options\": {\n        \"proofType\": \"BbsBlsSignature2020\"\n      }\n    }\n  }\n}\n

Note that with https://schema.org, if you include attributes that aren't defined by any context, you will not get an error. For example you can try replacing the credentialSubject in the above with:

\"credentialSubject\": {\n  \"id\": \"did:key:aksdkajshdkajhsdkjahsdkjahsdj\",\n  \"givenName\": \"Sally\",\n  \"familyName\": \"Student\",\n  \"alumniOf\": \"Example University\",\n  \"someUndefinedAttribute\": \"the value of the attribute\"\n}\n

... and you might expect the credential issuance to fail; however, https://schema.org defines a @vocab from which, by default, all terms derive (see here), so the issuance succeeds anyway.

You can include more complex schemas, for example to use the schema.org Person schema (which includes givenName and familyName):

{\n  \"connection_id\": \"ad35a4d8-c84b-4a4f-a83f-1afbf134b8b9\",\n  \"filter\": {\n    \"ld_proof\": {\n      \"credential\": {\n        \"@context\": [\n          \"https://www.w3.org/2018/credentials/v1\",\n          \"https://schema.org\"\n        ],\n        \"type\": [\"VerifiableCredential\", \"Person\"],\n        \"issuer\": \"did:key:zUC71pj2gpDLfcZ9DE1bMtjZGWCSLhkQsUCaKjqXtCftGkz27894pEX9VvGNiFsaV67gqv2TEPQ2aDaDDdTDNp42LfDdK1LaWSBCfzsQEyaiR1zjZm1RtoRu1ZM6v6vz4TiqDgU\",\n        \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n        \"credentialSubject\": {\n          \"id\": \"did:key:aksdkajshdkajhsdkjahsdkjahsdj\",\n          \"student\": {\n            \"type\": \"Person\",\n            \"givenName\": \"Sally\",\n            \"familyName\": \"Student\",\n            \"alumniOf\": \"Example University\"\n          }\n        }\n      },\n      \"options\": {\n        \"proofType\": \"BbsBlsSignature2020\"\n      }\n    }\n  }\n}\n
"},{"location":"demo/AliceWantsAJsonCredential/#credential-specific-contexts","title":"Credential-Specific Contexts","text":"

The recommended approach to defining credentials is to define a credential-specific vocabulary (or make use of existing ones). (Note that these can include references to https://schema.org; you just shouldn't use it directly in your credential.)

"},{"location":"demo/AliceWantsAJsonCredential/#credential-issue-example","title":"Credential Issue Example","text":"

The following example uses the W3C citizenship context to issue a PermanentResident credential (replace the connection_id, issuer and credentialSubject.id with your local values):

{\n    \"connection_id\": \"41acd909-9f45-4c69-8641-8146e0444a57\",\n    \"filter\": {\n        \"ld_proof\": {\n            \"credential\": {\n                \"@context\": [\n                    \"https://www.w3.org/2018/credentials/v1\",\n                    \"https://w3id.org/citizenship/v1\"\n                ],\n                \"type\": [\n                    \"VerifiableCredential\",\n                    \"PermanentResident\"\n                ],\n                \"id\": \"https://credential.example.com/residents/1234567890\",\n                \"issuer\": \"did:key:zUC7Dus47jW5Avcne8LLsUvJSdwspmErgehxMWqZZy8eSSNoHZ4x8wgs77sAmQtCADED5RQP1WWhvt7KFNm6GGMxdSGpKu3PX6R9a61G9VoVsiFoRf1yoK6pzhq9jtFP3e2SmU9\",\n                \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n                \"credentialSubject\": {\n                    \"type\": [\n                        \"PermanentResident\"\n                    ],\n                    \"id\": \"did:key:zUC7CXi82AXbkv4SvhxDxoufrLwQSAo79qbKiw7omCQ3c4TyciDdb9s3GTCbMvsDruSLZX6HNsjGxAr2SMLCNCCBRN5scukiZ4JV9FDPg5gccdqE9nfCU2zUcdyqRiUVnn9ZH83\",\n                    \"givenName\": \"ALICE\",\n                    \"familyName\": \"SMITH\",\n                    \"gender\": \"Female\",\n                    \"birthCountry\": \"Bahamas\",\n                    \"birthDate\": \"1958-07-17\"\n                }\n            },\n            \"options\": {\n                \"proofType\": \"BbsBlsSignature2020\"\n            }\n        }\n    }\n}\n

Copy and paste this content into Faber's /issue-credential-2.0/send-offer endpoint, and it will kick off the exchange process to issue a W3C credential to Alice.

In Alice's swagger page, submit the /credentials/records/w3c endpoint to see the issued credential.

"},{"location":"demo/AliceWantsAJsonCredential/#request-presentation-example","title":"Request Presentation Example","text":"

To request a proof, submit the following (with appropriate connection_id) to Faber's /present-proof-2.0/send-request endpoint:

{\n    \"comment\": \"string\",\n    \"connection_id\": \"41acd909-9f45-4c69-8641-8146e0444a57\",\n    \"presentation_request\": {\n        \"dif\": {\n            \"options\": {\n                \"challenge\": \"3fa85f64-5717-4562-b3fc-2c963f66afa7\",\n                \"domain\": \"4jt78h47fh47\"\n            },\n            \"presentation_definition\": {\n                \"id\": \"32f54163-7166-48f1-93d8-ff217bdb0654\",\n                \"format\": {\n                    \"ldp_vp\": {\n                        \"proof_type\": [\n                            \"BbsBlsSignature2020\"\n                        ]\n                    }\n                },\n                \"input_descriptors\": [\n                    {\n                        \"id\": \"citizenship_input_1\",\n                        \"name\": \"EU Driver's License\",\n                        \"schema\": [\n                            {\n                                \"uri\": \"https://www.w3.org/2018/credentials#VerifiableCredential\"\n                            },\n                            {\n                                \"uri\": \"https://w3id.org/citizenship#PermanentResident\"\n                            }\n                        ],\n                        \"constraints\": {\n                            \"limit_disclosure\": \"required\",\n                            \"is_holder\": [\n                                {\n                                    \"directive\": \"required\",\n                                    \"field_id\": [\n                                        \"1f44d55f-f161-4938-a659-f8026467f126\"\n                                    ]\n                                }\n                            ],\n                            \"fields\": [\n                                {\n                                    \"id\": \"1f44d55f-f161-4938-a659-f8026467f126\",\n                                    \"path\": [\n                                        \"$.credentialSubject.familyName\"\n                                    ],\n                                    \"purpose\": \"The claim must be from one of the specified issuers\",\n                                    \"filter\": {\n                                        \"const\": \"SMITH\"\n                                    }\n                                },\n                                {\n                                    \"path\": [\n                                        \"$.credentialSubject.givenName\"\n                                    ],\n                                    \"purpose\": \"The claim must be from one of the specified issuers\"\n                                }\n                            ]\n                        }\n                    }\n                ]\n            }\n        }\n    }\n}\n

Note that the is_holder property can be used by Faber to verify that the holder of the credential is the same as the subject of the attribute (familyName). Later on, the received presentation will be signed and verifiable only if is_holder with \"directive\": \"required\" is included in the presentation request.

There are several ways that Alice can respond with a presentation. The simplest will just tell ACA-Py to put the presentation together and send it to Faber - submit the following to Alice's /present-proof-2.0/records/{pres_ex_id}/send-presentation:

{\n  \"dif\": {\n  }\n}\n

There are two ways that Alice can provide some constraints to tell ACA-Py which credential(s) to include in the presentation.

Firstly, Alice can include the received presentation request in the body to the /send-presentation endpoint, and can include additional constraints on the fields:

{\n  \"dif\": {\n    \"issuer_id\": \"did:key:zUC7Dus47jW5Avcne8LLsUvJSdwspmErgehxMWqZZy8eSSNoHZ4x8wgs77sAmQtCADED5RQP1WWhvt7KFNm6GGMxdSGpKu3PX6R9a61G9VoVsiFoRf1yoK6pzhq9jtFP3e2SmU9\",\n    \"presentation_definition\": {\n      \"format\": {\n        \"ldp_vp\": {\n          \"proof_type\": [\n            \"BbsBlsSignature2020\"\n          ]\n        }\n      },\n      \"id\": \"32f54163-7166-48f1-93d8-ff217bdb0654\",\n      \"input_descriptors\": [\n        {\n          \"id\": \"citizenship_input_1\",\n          \"name\": \"Some kind of citizenship check\",\n          \"schema\": [\n            {\n              \"uri\": \"https://www.w3.org/2018/credentials#VerifiableCredential\"\n            },\n            {\n              \"uri\": \"https://w3id.org/citizenship#PermanentResident\"\n            }\n          ],\n          \"constraints\": {\n            \"limit_disclosure\": \"required\",\n            \"is_holder\": [\n                {\n                    \"directive\": \"required\",\n                    \"field_id\": [\n                        \"1f44d55f-f161-4938-a659-f8026467f126\",\n                        \"332be361-823a-4863-b18b-c3b930c5623e\"\n                    ],\n                }\n            ],\n            \"fields\": [\n              {\n                \"id\": \"1f44d55f-f161-4938-a659-f8026467f126\",\n                \"path\": [\n                  \"$.credentialSubject.familyName\"\n                ],\n                \"purpose\": \"The claim must be from one of the specified issuers\",\n                \"filter\": {\n                  \"const\": \"SMITH\"\n                }\n              },\n              {\n                  \"id\": \"332be361-823a-4863-b18b-c3b930c5623e\",\n                  \"path\": [\n                      \"$.id\"\n                  ],\n                  \"purpose\": \"Specify the id of the credential to present\",\n                  \"filter\": {\n                      \"const\": \"https://credential.example.com/residents/1234567890\"\n                  }\n              }\n            ]\n          }\n        }\n      ]\n    }\n  }\n}\n

Note the additional constraint on \"path\": [ \"$.id\" ] - this restricts the presented credential to the one with the matching credential.id. Any credential attribute can be used; however, this presumes that the issued credentials contain a uniquely identifying attribute.

Another option is for Alice to specify the credential record_id - this is an internal value within ACA-Py:

{\n  \"dif\": {\n    \"issuer_id\": \"did:key:zUC7Dus47jW5Avcne8LLsUvJSdwspmErgehxMWqZZy8eSSNoHZ4x8wgs77sAmQtCADED5RQP1WWhvt7KFNm6GGMxdSGpKu3PX6R9a61G9VoVsiFoRf1yoK6pzhq9jtFP3e2SmU9\",\n    \"presentation_definition\": {\n      \"format\": {\n        \"ldp_vp\": {\n          \"proof_type\": [\n            \"BbsBlsSignature2020\"\n          ]\n        }\n      },\n      \"id\": \"32f54163-7166-48f1-93d8-ff217bdb0654\",\n      \"input_descriptors\": [\n        {\n          \"id\": \"citizenship_input_1\",\n          \"name\": \"Some kind of citizenship check\",\n          \"schema\": [\n            {\n              \"uri\": \"https://www.w3.org/2018/credentials#VerifiableCredential\"\n            },\n            {\n              \"uri\": \"https://w3id.org/citizenship#PermanentResident\"\n            }\n          ],\n          \"constraints\": {\n            \"limit_disclosure\": \"required\",\n            \"fields\": [\n              {\n                \"path\": [\n                  \"$.credentialSubject.familyName\"\n                ],\n                \"purpose\": \"The claim must be from one of the specified issuers\",\n                \"filter\": {\n                  \"const\": \"SMITH\"\n                }\n              }\n            ]\n          }\n        }\n      ]\n    },\n    \"record_ids\": {\n      \"citizenship_input_1\": [ \"1496316f972e40cf9b46b35971182337\" ]\n    }\n  }\n}\n
"},{"location":"demo/AliceWantsAJsonCredential/#another-credential-issue-example","title":"Another Credential Issue Example","text":"

TBD the following credential is based on the W3C Vaccination schema:

{\n  \"connection_id\": \"ad35a4d8-c84b-4a4f-a83f-1afbf134b8b9\",\n  \"filter\": {\n    \"ld_proof\": {\n      \"credential\": {\n        \"@context\": [\n          \"https://www.w3.org/2018/credentials/v1\",\n          \"https://w3id.org/vaccination/v1\"\n        ],\n        \"type\": [\"VerifiableCredential\", \"VaccinationCertificate\"],\n        \"issuer\": \"did:key:zUC71pj2gpDLfcZ9DE1bMtjZGWCSLhkQsUCaKjqXtCftGkz27894pEX9VvGNiFsaV67gqv2TEPQ2aDaDDdTDNp42LfDdK1LaWSBCfzsQEyaiR1zjZm1RtoRu1ZM6v6vz4TiqDgU\",\n        \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n        \"credentialSubject\": {\n            \"id\": \"did:key:aksdkajshdkajhsdkjahsdkjahsdj\",\n            \"type\": \"VaccinationEvent\",\n            \"batchNumber\": \"1183738569\",\n            \"administeringCentre\": \"MoH\",\n            \"healthProfessional\": \"MoH\",\n            \"countryOfVaccination\": \"NZ\",\n            \"recipient\": {\n              \"type\": \"VaccineRecipient\",\n              \"givenName\": \"JOHN\",\n              \"familyName\": \"SMITH\",\n              \"gender\": \"Male\",\n              \"birthDate\": \"1958-07-17\"\n            },\n            \"vaccine\": {\n              \"type\": \"Vaccine\",\n              \"disease\": \"COVID-19\",\n              \"atcCode\": \"J07BX03\",\n              \"medicinalProductName\": \"COVID-19 Vaccine Moderna\",\n              \"marketingAuthorizationHolder\": \"Moderna Biotech\"\n            }\n        }\n      },\n      \"options\": {\n        \"proofType\": \"BbsBlsSignature2020\"\n      }\n    }\n  }\n}\n
"},{"location":"demo/Aries-Workshop/","title":"A Hyperledger Aries/AnonCreds Workshop Using Traction Sandbox","text":""},{"location":"demo/Aries-Workshop/#introduction","title":"Introduction","text":"

Welcome! This workshop contains a sequence of four labs that gets you from nothing to issuing, receiving, holding, requesting, presenting, and verifying AnonCreds Verifiable Credentials--no technical experience required! If you just walk through the steps exactly as laid out, it only takes about 20 minutes to complete the whole process. Of course, we hope you get curious, experiment, and learn a lot more about the information provided in the labs.

To run the labs, you\u2019ll need a Hyperledger Aries agent to be able to issue and verify verifiable credentials. For that, we're providing you with your very own tenant in a BC Gov \"sandbox\" deployment of an open source tool called Traction, a managed, production-ready, multi-tenant Aries agent built on Hyperledger Aries Cloud Agent Python (ACA-Py). Sandbox in this context means that you can do whatever you want with your tenant agent, but we make no promises about the stability of the environment (but it\u2019s pretty robust, so chances are, things will work...), and on the 1st and 15th of each month, we\u2019ll reset the entire sandbox and all your work will be gone \u2014 poof! Keep that in mind as you use the Traction sandbox. We recommend you keep a notebook at your side, tracking the important learnings you want to remember. As you create code that uses your sandbox agent, make sure you create simple-to-update configurations so that after a reset, you can create a new tenant agent, recreate the objects you need (each of which will have new identifiers), update your configuration, and off you go.
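
For example (purely illustrative; the file name and fields are up to you), a small script that keeps the reset-sensitive identifiers in one place makes recovery after a sandbox reset painless:

import json  # sketch: keep the reset-sensitive identifiers in one file\n\nconfig = {\n    # everything below gets new values after a sandbox reset;\n    # keep your Wallet Key in a password manager, not in this file\n    \"wallet_id\": \"<your tenant wallet id>\",\n    \"schema_id\": \"<your schema id>\",\n    \"cred_def_id\": \"<your cred def id>\",\n}\n\nwith open(\"traction_tenant.json\", \"w\") as f:\n    json.dump(config, f, indent=2)\n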

The four labs in this workshop are laid out as follows:

  • Lab 1: Getting a Traction Tenant Agent and Mobile Wallet
  • Lab 2: Getting Ready To Be An Issuer
  • Lab 3: Issuing Credentials to a Mobile Wallet
  • Lab 4: Requesting and Sending Presentations

Once you are done with the labs, there are suggestions for next steps for developers, such as experimenting with the Traction/ACA-Py OpenAPI.

Jump in!

"},{"location":"demo/Aries-Workshop/#lab-1-getting-a-traction-tenant-agent-and-mobile-wallet","title":"Lab 1: Getting a Traction Tenant Agent and Mobile Wallet","text":"

Let\u2019s start by getting your two agents \u2014 an Aries Mobile Wallet and an Aries Issuer/Verifier agent.

"},{"location":"demo/Aries-Workshop/#lab-1-steps-to-follow","title":"Lab 1: Steps to Follow","text":"
  1. Get a compatible Aries Mobile Wallet to use with your Aries Traction tenant. There are a number to choose from. We suggest that you use one of these:
    1. BC Wallet from the Government of British Columbia
    2. Orbit Wallet from Northern Block
  2. Click this Traction Sandbox link to go to the Sandbox login page to create your own Traction Tenant Aries agent. Once there, do the following:
    1. Click \"Create Request!\", fill in at least the required form fields, and click \"Submit\".
    2. Your new Traction Tenant's Wallet ID and Wallet Key will be displayed. SAVE THOSE IMMEDIATELY SO THAT YOU HAVE THEM TO ACCESS YOUR TENANT. You only get to see/save them once!
      1. You will need those each time you open your Traction Tenant agent. Putting them into a Password Manager is a great idea!
      2. We can't recover your Wallet ID and Wallet Key, so if you lose them you have to start the entire process again.
  3. Go back to the Traction Sandbox login and this time, use your Wallet ID/Key to log in to your brand new Traction Tenant agent. You might want to bookmark the site.
  4. Make your new Traction Tenant a verifiable credential issuer by:
    1. Clicking on the \"User\" (folder icon) menu (top right), and choosing \"Profile\"
    2. Clicking the \u201cBCovrin Test\u201d Action in the Endorser section.
      1. When done, you will have your own public DID (displayed on the page) that has been published on the BCovrin Test Ledger (can you find it?). Your DID will be used to publish other AnonCreds transactions so you can issue verifiable credentials.
  5. Connect from your Traction Tenant to your mobile Wallet app by:
    1. Selecting on the left menu \"Connections\" and then \"Invitations\"
    2. Click the \"Single Use Connection\" button, give the connection an alias (maybe \"My Wallet\"), and click \"Submit.\"
    3. Scan the resulting QR code with your initialized mobile Wallet and follow the prompts. Once you connect, type a quick \"Hi!\" message to the Traction Agent and you should get an automated message back.
    4. Check the Traction Tenant menu item \"Connections\u2192Connections\" to see the status of your connection \u2013 it should be active.
    5. If anything didn't work in the sequence, here are some things to try:
    6. If the Traction Tenant connection is not active, it's possible that your wallet was not able to message back to your Traction Tenant. Check your wallet internet connection.
    7. We've created a Traction Sandbox Workshop FAQ and Questions GitHub issue that you can check to see if your question is already answered, and if not, you can add your question as comment on the issue, and we'll get back to you.

That's it--you should be ready to start issuing and receiving verifiable credentials.

"},{"location":"demo/Aries-Workshop/#lab-2-getting-ready-to-be-an-issuer","title":"Lab 2: Getting Ready To Be An Issuer","text":"

::: todo To Do: Update lab to use this schema: H7W22uhD4ueQdGaGeiCgaM:2:student id:1.0.0 :::

In this lab we will use our Traction Tenant agent to create and publish an AnonCreds Schema object (or two), and then use that Schema to create and publish a Credential Definition. All of the AnonCreds objects will be published on the BCovrin (pronounced \u201cBe Sovereign\u201d) Test network. For those new to AnonCreds:

  • A Schema defines the list of attributes (claims) in a credential. An issuer often publishes their own schema, but they may also use one published by someone else. For example, a group of universities all might use the schema published by the \"Association of Universities and Colleges\" to which they belong.
  • A Credential Definition (CredDef) is published by the issuer, linking together Issuer's DID with the schema upon which the credentials will be issued, and containing the public key material needed to verify presentations of the credential. Revocation Registries are also linked to the Credential Definition, enabling an issuer to revoke credentials when necessary.
"},{"location":"demo/Aries-Workshop/#lab-2-steps-to-follow","title":"Lab 2: Steps to Follow","text":"
  1. Log into your Traction Sandbox. You did record your Wallet ID and Key, right?
    1. If not \u2014 jump back to Lab 1 to create a new Traction Tenant, and to make a connection to your mobile Wallet.
  2. Create a Schema:
    1. Click the menu item \u201cConfiguration\u201d and then \u201cSchema Storage\u201d.
    2. Click \u201cAdd Schema From Ledger\u201d and fill in the Schema Id with the value H7W22uhD4ueQdGaGeiCgaM:2:student id:1.0.0.
      1. By doing this, you (as the issuer) will be using a previously published schema. Click here to see the schema on the ledger.
    3. To see the details about your schema, hit the Expand (>) link, and then the subsequent > to \u201cView Raw Content.\"
  3. With the schema in place, it's time to become an issuer. To do that, you have to create a Credential Definition. Click on the \u201cCredential\u201d icon in the \u201cCredential Definition\u201d column of your schema to create the Credential Definition (CredDef) for the Schema. The \u201cTag\u201d can be any value you want \u2014 it is an issuer-defined part of the identifier for the Credential Definition. Wait for the operation to complete. Click the \u201cRefresh\u201d button if needed to see that the Create icon has been replaced with the identifier for your CredDef.
  4. Move to the menu item \"Configuration \u2192 Credential Definition Storage\" to see the CredDef you created. If you want, expand it to view the raw data. In this case, the raw data does not show the actual CredDef, but rather the Traction data about the CredDef. You can again use the BCovrin Test ledger browser to see your new, published CredDef.

Completed all the steps? Great! Feel free to create a second Schema and Cred Def, ideally one related to your first. That way you can try out a presentation request that pulls data from both credentials! When you create the second schema, use the \"Create Schema\" button, and add the claims you want to have in your new type of credential.

"},{"location":"demo/Aries-Workshop/#lab-3-issuing-credentials-to-a-mobile-wallet","title":"Lab 3: Issuing Credentials to a Mobile Wallet","text":"

In this lab we will use our Traction Tenant agent to issue instances of the credentials we created in Lab 2 to the Mobile Wallet we downloaded in Lab 1.

"},{"location":"demo/Aries-Workshop/#lab-3-steps-to-follow","title":"Lab 3: Steps to Follow","text":"
  1. If necessary, log into your Traction Sandbox with your Wallet ID and Key.
  2. Issue a Credential:
    1. Click the menu item \u201cIssuance\u201d and then \u201cOffer a Credential\u201d.
    2. Select the Credential Definition of the credential you want to issue.
    3. Select the Contact Name to whom you are issuing the credential\u2014the alias of the connection you made to your mobile Wallet.
    4. Click the \u201cEnter Credential Value\u201d to popup a data entry form for the attributes to populate.
      1. When you enter the date values that you want to use in predicates (e.g., \u201cOlder than 19\u201d), put the date into the following format: YYYYMMDD, e.g., 20231001. You cannot use a string date format, such as \u201cYYYY-MM-DD\u201d, if you want to use the attribute for predicate checking -- the value must be an integer (see the snippet after this list for one way to compute such values).
      2. We suggest you use realistic dates for Date of Birth (DOB) (e.g., 20-ish years in the past) and expiry (e.g., 3 years in the future) to make using them in predicates easier.
    5. Click \u201cSave\u201d when you are finished entering the attributes and review the information you have entered.
    6. When you are ready, click \u201cSend Offer\u201d to initiate the issuance of the credential.
  3. Receive the Credential:
    1. Open up your mobile Wallet and look for a notification about the credential offer. Where that appears may vary based on the Wallet you are using.
    2. Review the offer and then click the \u201cAccept\u201d button.
    3. Your new credential should be saved to your wallet.
  4. Review the Issuance Data:
    1. Back in your Traction Tenant, refresh the list to see the updated status of the issuance you just completed (should be \u201ccredential_issued\u201d or \u201ccredential_acked\u201d, depending on the Wallet you are using).
    2. Expand the issuance and again expand to \u201cView Raw Content\u201d to see the data that was exchanged between the Traction Issuer and the Wallet.
  5. If you want, repeat the process for other credential types your Traction Tenant is capable of issuing.
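
As promised in the note about date formats above, here is one way (a sketch using only Python's standard library) to compute integer YYYYMMDD values for realistic date of birth and expiry dates:

from datetime import date, timedelta\n\n# a date of birth roughly 20 years in the past, as an integer like 20031001\ndob_dateint = int((date.today() - timedelta(days=20 * 365)).strftime(\"%Y%m%d\"))\n\n# an expiry date about 3 years in the future\nexpiry_dateint = int((date.today() + timedelta(days=3 * 365)).strftime(\"%Y%m%d\"))\n\nprint(dob_dateint, expiry_dateint)\n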

That\u2019s it! Pretty easy, eh? Of course, in a real issuer, the data would (very, very) likely not be hand-entered, but instead come from a backend system. Traction has an HTTP API (protected by the same Wallet ID and Key) that can be used from an application to do things like this automatically. The Traction API embeds the ACA-Py API, so everything you can do in \u201cplain ACA-Py\u201d can also be done in Traction.

"},{"location":"demo/Aries-Workshop/#lab-4-requesting-and-sending-presentations","title":"Lab 4: Requesting and Sending Presentations","text":"

In this lab we will use our Traction Tenant agent as a verifier, requesting presentations, and your mobile Wallet as the holder responding with presentations that satisfy the requests. The user interface is a little rougher for this lab (you\u2019ll be dealing with JSON), but it should still be easy enough to do.

"},{"location":"demo/Aries-Workshop/#lab-4-steps-to-follow","title":"Lab 4: Steps to Follow","text":"
  1. If necessary, log into your Traction Sandbox with your Wallet ID and Key.
  2. Create and send a presentation request:
    1. Click the menu item \u201cVerification\u201d and then the button \u201cCreate Presentation Request\u201d.
    2. Select the Connection to whom you are sending the request\u2014the alias of the connection you made to your mobile Wallet.
    3. Update the example Presentation Request to match the credential that you want to request. Keep it simple for your first request\u2014it\u2019s easy to iterate in Traction to make your request more complicated. If you used the schema we suggested in Lab 2, just use the default presentation request. It should just work! If not, start from it, and:
      1. Update the value of \u201cschema_name\u201d to the name(s) of the schema for the credential(s) you issued.
      2. Update the group name(s) to something that makes sense for your credential(s) and make sure the attributes listed match your credential(s).
      3. Update (or perhaps remove) the \u201crequest_predicates\u201d JSON item, if it is not applicable to your credential.
    4. Update the optional fields (\u201cAuto Verify\u201d and \u201cOptional Comment\u201d) as you see fit. The \u201cOptional Comment\u201d goes into the list of Verifications so you can keep track of the different presentation requests you create.
    5. Click \u201cSubmit\u201d when your presentation request is ready.
  3. Respond to the Presentation Request:
    1. Open up your mobile Wallet and look for a notification about receiving a presentation request. Where that appears may vary based on the Wallet you are using.
    2. Review the information you are being asked to share, and then click the \u201cShare\u201d button to send the presentation.
  4. Review the Presentation Request Result:
    1. Back in your Traction Tenant, refresh the Verifications list to see the updated status of the presentation request you just completed. It should be something positive, like \u201cpresentation_received\u201d if all went well. It may be different depending on the Wallet you are using.
    2. If you want, expand the presentation request and \u201cView Raw Content.\u201d to see the presentation request, and presentation data exchanged between the Traction Verifier and the Wallet.
  5. Repeat the process, making the presentation request more complicated:
    1. From the list of presentations, use the arrow icon action to copy an existing presentation request and just re-run it, or evolve it.
    2. Ideas:
    3. Add predicates using date of birth (\u201colder than\u201d) and expiry (\u201cnot expired today\u201d).
      1. The p_value should be a relevant date \u2014 e.g., 19 (or whatever) years ago today for \u201colder than\u201d, and today for \u201cnot expired\u201d, both in the YYYYMMDD format (the integer form of the date).
      2. The p_type should be >= for \u201colder than\u201d, and <= for \u201cnot expired\u201d. See the table below for the form of the expression, and the sketch after it for computing the dates.
    4. Add a second credential group with a restriction for a different credential to the request, so the presentation is derived from two source credentials.
| p_value  | p_type | credential_data |
| -------- | ------ | --------------- |
| 20230527 | <=     | expiry_dateint  |
| 20030527 | >=     | dob_dateint     |
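
If you would rather compute the p_value dates than work them out by hand, here is a small sketch of the calculation (Python shown; any language works):

from datetime import date\n\ndef date_int(d):\n    # AnonCreds predicate dates use the integer form YYYYMMDD\n    return int(d.strftime(\"%Y%m%d\"))\n\ntoday = date.today()\n\n# \"not expired\": expiry_dateint must be today or later\nnot_expired_p_value = date_int(today)\n\n# \"older than 19\": DOB must be at least 19 years before today\n# (naive year subtraction; ignores the Feb 29 edge case)\nolder_than_p_value = date_int(today.replace(year=today.year - 19))\n\nprint(not_expired_p_value, older_than_p_value)\n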

That completes this lab \u2014 although feel free to continue to play with all of the steps (setup, issuing and presenting). You should have a pretty solid handle on exactly what you can and can\u2019t do with AnonCreds!

"},{"location":"demo/Aries-Workshop/#whats-next","title":"What's Next","text":"

The following are a couple of things that you might want to do next, if you are a developer. Unlike the labs you have just completed, these "next steps" are geared towards developers, providing details about building verifiable credential capabilities (issuing, verifying) into your own application.

Want to use Traction in your own environment? Feel free! It's open source, and comes with Helm Charts for easy deployment in container-orchestrated environments. Contributions back to the project are always welcome!

"},{"location":"demo/Aries-Workshop/#whats-next-the-aca-py-openapi","title":"What\u2019s Next: The ACA-Py OpenAPI","text":"

Are you going to build an app that uses Traction or an instance of the Aries Cloud Agent Python (ACA-Py)? If so, your next step is to try out the ACA-Py OpenAPI (aka Swagger)\u2014by hand at first, and then from your application. This is a VERY high level overview, assuming a developer is following this and knows a bunch about Aries protocols, HTTP APIs, and OpenAPI interfaces.

To access and use your Tenant's OpenAPI (aka Swagger) interface:

  • In your Traction Tenant, click the User icon (top right) and choose \u201cDeveloper\u201d
  • Scroll to the bottom and expand the \u201cEncoded JWT\u201d, and click the \u201cCopy\u201d icon to the right to get the JWT into your clipboard.
  • By using the \u201ccopy\u201d icon, the JWT is prefixed with \u201cBearer \u201d, which is needed in the OpenAPI authorization. If you just highlight and copy the JWT, you don\u2019t get the prefix.
  • Click on \u201cAbout\u201d from the left menu and then click \u201cTraction.\u201d
  • Click on the link with the \u201cSwagger URL\u201d label to open up the OpenAPI (Swagger) API.
  • The URL is just the normal Traction Tenant API with \u201capi/doc\u201d added to it.
  • Click Authorize in the top right, click in the second box \u201cAuthorizationHeader (apiKey)\u201d and paste in your previously copied encoded JWT.
  • Close the authorization window and try out an Endpoint. For example, scroll down to the \u201cGET /connections\u201d endpoint, \u201cTry It Out\u201d and \u201cExecute\u201d. You should get back a list of the connections you have established in your Tenant.

The ACA-Py/Traction API is pretty large, but it is reasonably well organized, and you should recognize from the Traction API a lot of the items. Try some of the \u201cGET\u201d endpoints to see if you recognize the items.
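
When you are ready to script those calls instead of clicking through the Swagger page, a request looks like this minimal sketch (Python, requests library); the base URL is a hypothetical placeholder for your Tenant's API, and the JWT is the one copied from the Developer page (including the \u201cBearer \u201d prefix):

import requests\n\nAPI_BASE = \"https://your-traction-tenant.example.com\"  # hypothetical placeholder\nJWT = \"Bearer eyJ...\"  # the encoded JWT copied from the Developer page\n\nresp = requests.get(f\"{API_BASE}/connections\", headers={\"Authorization\": JWT})\nresp.raise_for_status()\nfor conn in resp.json()[\"results\"]:\n    print(conn[\"connection_id\"], conn.get(\"alias\"), conn[\"state\"])\n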

We\u2019re still working on a good demo for the OpenAPI from Traction, but this one from ACA-Py is a good outline of the process. It doesn't use your Traction Tenant, but you should get the idea about the sequence of calls to make to accomplish Aries-type activities. For example, see if you can carry out the steps to do the Lab 4 with your mobile agent by invoking the right sequence of OpenAPI calls.

"},{"location":"demo/Aries-Workshop/#whats-next-experiment-with-an-issuer-web-app","title":"What's Next: Experiment With an Issuer Web App","text":"

If you are challenged to use Traction or Aries Cloud Agent Python to become an issuer, you will likely be building API calls into your Line of Business web application. To get an idea of what that will entail, we're delighted to direct you to a very simple Web App that one of your predecessors on this same journey created (and contributed!) while learning to use the Traction OpenAPI. Check out this Traction Issuance Demo and try it out yourself, with your Sandbox tenant. Once you review the code, you should have an excellent idea of how you can add these same capabilities to your own line of business application.

"},{"location":"demo/AriesOpenAPIDemo/","title":"Aries OpenAPI Demo","text":"

What better way to learn about controllers than by actually being one yourself! In this demo, that\u2019s just what happens\u2014you are the controller. You have access to the full set of API endpoints exposed by an ACA-Py instance, and you will see the events coming from ACA-Py as they happen. Using that information, you'll help Alice's and Faber's agents connect, Faber's agent issue an education credential to Alice, and then ask Alice to prove she possesses the credential. Who knows why Faber needs to get the proof, but it lets us show off more protocols.

"},{"location":"demo/AriesOpenAPIDemo/#contents","title":"Contents","text":"
  • Getting Started
  • Running in a Browser
  • Start the Faber Agent
  • Start the Alice Agent
  • Running in Docker
  • Start the Faber Agent
  • Start the Alice Agent
  • Restarting the Docker Containers
  • Using the OpenAPI/Swagger User Interface
  • Establishing a Connection
  • Use the Faber Agent to Create an Invitation
  • Copy the Invitation created by the Faber Agent
  • Use the Alice Agent to Receive Faber's Invitation
  • Tell Alice's Agent to Accept the Invitation
  • The Faber Agent Gets the Request
  • The Faber Agent Completes the Connection
  • Review the Connection Status in Alice's Agent
  • Review the Connection Status in Faber's Agent
  • Basic Messaging Between Agents
  • Sending a message from Alice to Faber
  • Receiving a Basic Message (Faber)
  • Alice's Agent Verifies that Faber has Received the Message
  • Preparing to Issue a Credential
  • Confirming your Schema and Credential Definition
  • Notes
  • Issuing a Credential
  • Faber - Preparing to Issue a Credential
  • Faber - Issuing the Credential
  • Alice Receives Credential
  • Alice Stores Credential in her Wallet
  • Faber Receives Acknowledgment that the Credential was Received
  • Issue Credential Notes
  • Bonus Points
  • Requesting/Presenting a Proof
  • Faber sends a Proof Request
  • Alice - Responding to the Proof Request
  • Faber - Verifying the Proof
  • Present Proof Notes
  • Bonus Points
  • Conclusion
"},{"location":"demo/AriesOpenAPIDemo/#getting-started","title":"Getting Started","text":"

We will get started by opening three browser tabs that will be used throughout the lab. Two will be Swagger UIs for the Faber and Alice agents, and one will be for the public ledger (showing the Hyperledger Indy ledger). We'll also keep the terminal sessions where we started the demos handy, as we'll be grabbing information from them too.

Let's start with the ledger browser. For this demo, we're going to use an open public ledger operated by the BC Government's VON Team. In your first browser tab, go to: http://test.bcovrin.vonx.io. This will be called the \"ledger tab\" in the instructions below.

For the rest of the setup, you can choose to run the terminal sessions in your browser (no local resources needed), or you can run them in Docker on your local system. Your choice; each is covered in the next two sections.

Note: In the following, when we start the agents we use several special demo settings. The command we use is this: LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber --events --no-auto --bg. In that:

  • The LEDGER_URL environment variable informs the agent what ledger to use.
  • The --events option indicates that we want the controller to display the webhook events from ACA-Py in the log displayed on the terminal.
  • The --no-auto option indicates that we don't want the ACA-Py agent to automatically handle some events such as connecting. We want the controller (you!) to handle each step of the protocol.
  • The --bg option indicates that the docker container will run in the background, so accidentally hitting Ctrl-C won't stop the process.
"},{"location":"demo/AriesOpenAPIDemo/#running-in-a-browser","title":"Running in a Browser","text":"

To run the necessary terminal sessions in your browser, go to the Docker playground service Play with Docker. Don't know about Play with Docker? Check this out to learn more.

"},{"location":"demo/AriesOpenAPIDemo/#start-the-faber-agent","title":"Start the Faber Agent","text":"

In a browser, go to the Play with Docker home page, log in (if necessary) and click "Start." On the next screen, click (in the left menu) "+Add a new instance." That will start up a terminal in your browser. Run the following commands to start the Faber agent.

git clone https://github.com/hyperledger/aries-cloudagent-python\ncd aries-cloudagent-python/demo\nLEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber --events --no-auto --bg\n

Once you are back at the command prompt, we'll use docker's logging capability to see what's being written to the terminal:

docker logs -f faber\n

Once the Faber agent has started up (with the invite displayed), click the link near the top of the screen 8021. That will start an instance of the OpenAPI/Swagger user interface connected to the Faber instance. Note that the URL on the OpenAPI/Swagger instance is: http://ip....8021.direct....

Remember that the OpenAPI/Swagger browser tab with an address containing 8021 is the Faber agent.

NOTE: Hit \"Ctrl-C\" at any time to get back to the command line. When you are done with the command line, you can return to seeing the logs from the Faber agent by running docker logs -f faber

Show me a screenshot!"},{"location":"demo/AriesOpenAPIDemo/#start-the-alice-agent","title":"Start the Alice Agent","text":"

Now to start Alice's agent. Click the \"+Add a new instance\" button again to open another terminal session. Run the following commands to start Alice's agent:

git clone https://github.com/hyperledger/aries-cloudagent-python\ncd aries-cloudagent-python/demo\nLEDGER_URL=http://test.bcovrin.vonx.io ./run_demo alice --events --no-auto --bg\n

Once you are back at the command prompt, we'll use docker's logging capability to see what's being written to the terminal:

docker logs -f alice\n

You can ignore a message like WARNING: your terminal doesn't support cursor position requests (CPR).

Once the Alice agent has started up (with the invite: prompt displayed), click the link near the top of the screen 8031. That will start an instance of the OpenAPI/Swagger User Interface connected to the Alice instance. Note that the URL on the OpenAPI/Swagger instance is: http://ip....8031.direct....

NOTE: Hit \"Ctrl-C\" to get back to the command line. When you are done with the command line, you can return to seeing the logs from the Faber agent by running docker logs -f faber

Remember that the OpenAPI/Swagger browser tab with an address containing 8031 is Alice's agent.

Show me a screenshot!

You are ready to go. Skip down to the Using the OpenAPI/Swagger User Interface section.

"},{"location":"demo/AriesOpenAPIDemo/#running-in-docker","title":"Running in Docker","text":"

To run the demo on your local system, you must have git, a running Docker installation, and terminal windows running bash. Need more information about getting set up? Click here to learn more.

"},{"location":"demo/AriesOpenAPIDemo/#start-the-faber-agent_1","title":"Start the Faber Agent","text":"

To begin running the demo in Docker, open up two terminal windows, one each for Faber\u2019s and Alice\u2019s agent.

In the first terminal window, clone the ACA-Py repo, change into the demo folder and start the Faber agent:

git clone https://github.com/hyperledger/aries-cloudagent-python\ncd aries-cloudagent-python/demo\nLEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber --events --no-auto --bg\n

Once you are back at the command prompt, we'll use docker's logging capability to see what's being written to the terminal:

docker logs -f faber\n

If all goes well, the agent will show a message indicating it is running. Use the second browser tab to navigate to http://localhost:8021. You should see an OpenAPI/Swagger user interface with a (long-ish) list of API endpoints. These are the endpoints exposed by the Faber agent.

NOTE: Hit \"Ctrl-C\" to get back to the command line. When you are done with the command line, you can return to seeing the logs from the Faber agent by running docker logs -f faber

Remember that the OpenAPI/Swagger browser tab with an address containing 8021 is the Faber agent.

Show me a screenshot!"},{"location":"demo/AriesOpenAPIDemo/#start-the-alice-agent_1","title":"Start the Alice Agent","text":"

To start Alice's agent, open up a second terminal window and in it, change to the same demo directory as where Faber's agent was started above. Once there, start Alice's agent:

LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo alice --events --no-auto --bg\n

Once you are back at the command prompt, we'll use docker's logging capability to see what's being written to the terminal:

docker logs -f alice\n

You can ignore a message like WARNING: your terminal doesn't support cursor position requests (CPR) that may appear.

If all goes well, the agent will show a message indicating it is running. Open a third browser tab and navigate to http://localhost:8031. Again, you should see the OpenAPI/Swagger user interface with a list of API endpoints, this time the endpoints for Alice\u2019s agent.

NOTE: Hit \"Ctrl-C\" to get back to the command line. When you are done with the command line, you can return to seeing the logs from the Alice agent by running docker logs -f alice

Remember that the OpenAPI/Swagger browser tab with an address containing 8031 is Alice's agent.

Show me a screenshot!"},{"location":"demo/AriesOpenAPIDemo/#restarting-the-docker-containers","title":"Restarting the Docker Containers","text":"

When you complete the entire demo (not now!!), you will need to stop the two agents. To do that, get to the command line by hitting Ctrl-C and running:

docker stop faber\ndocker stop alice\n
"},{"location":"demo/AriesOpenAPIDemo/#using-the-openapiswagger-user-interface","title":"Using the OpenAPI/Swagger User Interface","text":"

Try to organize what you see on your screen to include both the Alice and Faber OpenAPI/Swagger tabs, and both (Alice and Faber) terminal sessions, all at the same time. After you execute an API call in one of the browser tabs, you will see a webhook event from the ACA-Py instance in the terminal window of the other agent. That's a controller's life. See an event, process it, send a response.

From time to time you will want to see what's happening on the ledger, so keep that tab handy as well. Also, if you make an error with one of the commands (e.g. bad data, improperly structured JSON), you will see the errors in the terminals.

In the instructions that follow, we\u2019ll let you know if you need to be in the Faber, Alice or Indy browser tab. We\u2019ll leave it to you to track which is which.

Using the OpenAPI/Swagger user interface is pretty simple. In the steps below, we\u2019ll indicate what API endpoint you need to use, such as POST /connections/create-invitation. That means you must:

  1. scroll to and find that endpoint;
  2. click on the endpoint name to expand its section of the UI;
  3. click on the Try it out button;
  4. fill in any data necessary to run the command;
  5. click Execute;
  6. check the response to see if the request worked.

So, the mechanical steps are easy. It\u2019s the fourth step from the list above that can be tricky: supplying the right data and, where JSON is involved, getting the syntax correct - braces and quotes can be a pain. When steps don\u2019t work, start your debugging by looking at your JSON.

Enough with the preliminaries, let\u2019s get started!

"},{"location":"demo/AriesOpenAPIDemo/#establishing-a-connection","title":"Establishing a Connection","text":"

We\u2019ll start the demo by establishing a connection between the Alice and Faber agents. We\u2019re starting there to demonstrate that you can use agents without having a ledger. We won\u2019t be using the Indy public ledger at all for this step. Since the agents communicate using DIDComm messaging and connect by exchanging pairwise DIDs and DIDDocs based on (an early version of) the did:peer DID method, a public ledger is not needed.

"},{"location":"demo/AriesOpenAPIDemo/#use-the-faber-agent-to-create-an-invitation","title":"Use the Faber Agent to Create an Invitation","text":"

In the Faber browser tab, navigate to the POST /connections/create-invitation endpoint. Replace the sample body with an empty JSON object ({}) and execute the call. If successful, you should see a connection id, an invitation, and the invitation URL. The connection ids will be different on each run.

Hint: set an Alias on the Invitation; this makes it easier to find the Connection later on. A scripted version of this call is sketched below.
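
For reference, the same call made from a script rather than the Swagger UI might look like this minimal sketch (Python, requests library), assuming the agent's admin API is at the localhost port used in this demo:

import requests\n\nFABER_ADMIN = \"http://localhost:8021\"  # or the 8021 \"direct\" URL on Play with Docker\n\n# Same call as the Swagger UI: empty JSON body, with an optional alias\n# query parameter so the connection is easy to find later\nresp = requests.post(f\"{FABER_ADMIN}/connections/create-invitation\", params={\"alias\": \"alice\"}, json={})\nresp.raise_for_status()\nresult = resp.json()\nprint(result[\"connection_id\"])\ninvitation = result[\"invitation\"]  # the object to give to Alice\n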

Show me a screenshot - Create Invitation Request Show me a screenshot - Create Invitation Response"},{"location":"demo/AriesOpenAPIDemo/#copy-the-invitation-created-by-the-faber-agent","title":"Copy the Invitation created by the Faber Agent","text":"

Copy the entire block of the invitation object, from the opening curly bracket { to the closing }, excluding the trailing comma.

Show me a screenshot - Create Invitation Response

Before switching over to the Alice browser tab, scroll to and execute the GET /connections endpoint to see the list of Faber's connections. You should see a connection whose connection_id matches the one in the invitation response you just created, and whose state is invitation.

Show me a screenshot - Faber Connection Status"},{"location":"demo/AriesOpenAPIDemo/#use-the-alice-agent-to-receive-fabers-invitation","title":"Use the Alice Agent to Receive Faber's Invitation","text":"

Switch to the Alice browser tab and get ready to execute the POST /connections/receive-invitation endpoint. Select all of the pre-populated text and replace it with the invitation object from the Faber tab. When you click Execute you should get back a connection response with a connection Id, an invitation key, and the state of the connection, which should be invitation.

Hint: set an Alias on the Invitation; this makes it easier to find the Connection later on

Show me a screenshot - Receive Invitation Request Show me a screenshot - Receive Invitation Response

A key observation to make here: the \"copy and paste\" we are doing from Faber's agent to Alice's agent is what is called an \"out of band\" message. Because we don't yet have a DIDComm connection between the two agents, we have to convey the invitation in plaintext (we can't encrypt it - no channel) using some mechanism other than DIDComm. With mobile agents, that's where QR codes often come in. Once we have the invitation in the receiver's agent, we can get back to using DIDComm.

"},{"location":"demo/AriesOpenAPIDemo/#tell-alices-agent-to-accept-the-invitation","title":"Tell Alice's Agent to Accept the Invitation","text":"

At this point Alice has simply stored the invitation in her wallet. You can see the status using the GET /connections endpoint.

Show me a screenshot

To complete a connection with Faber, she must accept the invitation and send a corresponding connection request to Faber. Find the connection_id in the connection response from the previous POST /connections/receive-invitation endpoint call. You may note that the same data was sent to the controller as an event from ACA-Py and is visible in the terminal. Scroll to the POST /connections/{conn_id}/accept-invitation endpoint and paste the connection_id in the id parameter field (you will have to click the Try it out button to see the available URL parameters). The response from clicking Execute should show that the connection has a state of request.

Show me a screenshot - Accept Invitation Request Show me a screenshot - Accept Invitation Response"},{"location":"demo/AriesOpenAPIDemo/#the-faber-agent-gets-the-request","title":"The Faber Agent Gets the Request","text":"

In the Faber terminal session, an event (a web service callback from ACA-Py to the controller) has been received about the request from Alice. Copy the connection_id from the event for the next step.

Show me the event

Note that the connection ID held by Alice is different from the one held by Faber. That makes sense, as both independently created connection objects, each with a unique, self-generated GUID.

"},{"location":"demo/AriesOpenAPIDemo/#the-faber-agent-completes-the-connection","title":"The Faber Agent Completes the Connection","text":"

To complete the connection process, Faber will respond to the connection request from Alice. Scroll to the POST /connections/{conn_id}/accept-request endpoint and paste the connection_id you previously copied into the id parameter field (you will have to click the Try it out button to see the available URL parameters). The response from clicking the Execute button should show that the connection has a state of response, which indicates that Faber has accepted Alice's connection request.
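
If you were scripting your controller instead of clicking through Swagger, the three calls that establish the connection look roughly like this sketch (Python, requests library), assuming invitation holds the object copied from Faber's create-invitation response:

import requests\n\nALICE_ADMIN = \"http://localhost:8031\"\nFABER_ADMIN = \"http://localhost:8021\"\n\n# Alice receives and accepts the invitation (invitation is the object\n# copied from Faber's create-invitation response)\nconn = requests.post(f\"{ALICE_ADMIN}/connections/receive-invitation\", json=invitation).json()\nrequests.post(f\"{ALICE_ADMIN}/connections/{conn['connection_id']}/accept-invitation\")\n\n# Faber accepts the resulting connection request; this connection_id is\n# Faber's own, taken from the webhook event (it differs from Alice's)\nfaber_conn_id = \"...\"  # copy from the event in the Faber terminal\nrequests.post(f\"{FABER_ADMIN}/connections/{faber_conn_id}/accept-request\")\n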

Show me a screenshot - Accept Connection Request Show me a screenshot - Accept Connection Request"},{"location":"demo/AriesOpenAPIDemo/#review-the-connection-status-in-alices-agent","title":"Review the Connection Status in Alice's Agent","text":"

Switch over to the Alice browser tab.

Scroll to and execute GET /connections to see a list of Alice's connections, and the information tracked about each connection. You should see the one connection Alice\u2019s agent has, that it is with the Faber agent, and that its state is active.

Show me a screenshot - Alice Connection Status

As with Faber's side of the connection, Alice received a notification that Faber had accepted her connection request.

Show me the event"},{"location":"demo/AriesOpenAPIDemo/#review-the-connection-status-in-fabers-agent","title":"Review the Connection Status in Faber's Agent","text":"

You are connected! Switch to the Faber browser tab and run the same GET /connections endpoint to see Faber's view of the connection. Its state is also active. Note the connection_id, you\u2019ll need it later in the tutorial.

Show me a screenshot - Faber Connection Status"},{"location":"demo/AriesOpenAPIDemo/#basic-messaging-between-agents","title":"Basic Messaging Between Agents","text":"

Once you have a connection between two agents, you have a channel to exchange secure, encrypted messages. In fact these underlying encrypted messages (similar to envelopes in a postal system) enable the delivery of messages that form the higher level protocols, such as issuing Credentials and providing Proofs. So, let's send a couple of messages that contain the simplest of content\u2014text. For this we will use the Basic Message protocol, Aries RFC 0095.

"},{"location":"demo/AriesOpenAPIDemo/#sending-a-message-from-alice-to-faber","title":"Sending a message from Alice to Faber","text":"

On Alice's swagger page, scroll to the POST /connections/{conn_id}/send-message endpoint. Click on Try it Out and enter a message in the body provided (for example {\"content\": \"Hello Faber\"}). Enter the connection id of Alice's connection in the field provided. Then click on Execute.
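
The same message sent from a script is just one call; a minimal sketch (Python, requests library), assuming alice_conn_id holds Alice's connection id from the previous section:

import requests\n\nALICE_ADMIN = \"http://localhost:8031\"\nalice_conn_id = \"...\"  # Alice's connection_id for the Faber connection\n\nresp = requests.post(f\"{ALICE_ADMIN}/connections/{alice_conn_id}/send-message\", json={\"content\": \"Hello Faber\"})\nresp.raise_for_status()\n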

Show me a screenshot"},{"location":"demo/AriesOpenAPIDemo/#receiving-a-basic-message-faber","title":"Receiving a Basic Message (Faber)","text":"

How does Faber know that a message was sent? If you take a look at Faber's console window, you can see that Faber's agent has raised an Event that the message was received:

Show me a screenshot

Faber's controller application can take whatever action is necessary to process this message. It could trigger some application code, or it might just be something the Faber application needs to display to its user (for example a reminder about some action the user needs to take).

"},{"location":"demo/AriesOpenAPIDemo/#alices-agent-verifies-that-faber-has-received-the-message","title":"Alice's Agent Verifies that Faber has Received the Message","text":"

How does Alice get feedback that Faber has received the message? The same way - when Faber's agent acknowledges receipt of the message, Alice's agent raises an Event to let the Alice controller know:

Show me a screenshot

Again, Alice's agent can take whatever action is necessary, possibly just flagging the message as having been received.

"},{"location":"demo/AriesOpenAPIDemo/#preparing-to-issue-a-credential","title":"Preparing to Issue a Credential","text":"

The next thing we want to do in the demo is have the Faber agent issue a credential to Alice\u2019s agent. To this point, we have not used the Indy ledger at all. Establishing the connection and messaging has been done with pairwise DIDs based on the did:peer method. Verifiable credentials must be rooted in a public DID ledger to enable the presentation of proofs.

Before the Faber agent can issue a credential, it must register a DID on the Indy public ledger, publish a schema, and create a credential definition. In the \u201creal world\u201d, the Faber agent would do this before connecting with any other agents. And, since we are using the handy \"./run_demo faber\" (and \"./run_demo alice\") scripts to start up our agents, the Faber version of the script has already:

  1. registered a public DID and stored it on the ledger;
  2. created a schema and registered it on the ledger;
  3. created a credential definition and registered it on the ledger.

The schema and credential definition could also be created through this swagger interface.

We don't cover the details of those actions in this tutorial, but there are other materials available that go through these details.

To Do: Add a link to directions for doing this manually, and to where in the controller Python code this is done.

"},{"location":"demo/AriesOpenAPIDemo/#confirming-your-schema-and-credential-definition","title":"Confirming your Schema and Credential Definition","text":"

You can confirm the schema and credential definition were published by going back to the Indy ledger browser tab and using Faber's public DID. You may have saved that from a previous step; if not, here is an API call you can make to get that information. On Faber's swagger page, scroll to the GET /wallet/did/public endpoint, click Try it Out and Execute, and you will see Faber's public DID.

Show me a screenshot

On the ledger browser of the BCovrin ledger, click the Domain page, refresh, and paste the Faber public DID into the Filter: field:

Show me a screenshot

The ledger browser should refresh and display the four (4) transactions on the ledger related to this DID:

  • the initial DID registration
  • registration of the DID endpoint (Faber is an issuer so it has a public endpoint)
  • the registered schema
  • the registered credential definition
Show me the ledger transactions

You can also look up the Schema and Credential Definition information using Faber's swagger page. Use the GET /schemas/created endpoint to get a list of schemas, including the one schema_id that the Faber agent has defined. Keep this section of the Swagger page expanded as we'll need to copy the Id as part of starting the issue credential protocol coming next.

Show me a screenshot

Likewise use the GET /credential-definitions/created endpoint to get the list of the one (in this case) credential definition id created by Faber. Keep this section of the Swagger page expanded as we'll also need to copy the Id as part of starting the issue credential protocol coming next.

Show me a screenshot

Hint: Remember how the schema and credential definitions were created for you as Faber started up? To do it yourself, use the POST versions of these endpoints. Now you know!
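
If you prefer to script these lookups, the three GET endpoints above can be collected in a few lines; a minimal sketch (Python, requests library):

import requests\n\nFABER_ADMIN = \"http://localhost:8021\"\n\n# The same three lookups as above, scripted\npublic_did = requests.get(f\"{FABER_ADMIN}/wallet/did/public\").json()[\"result\"][\"did\"]\nschema_id = requests.get(f\"{FABER_ADMIN}/schemas/created\").json()[\"schema_ids\"][0]\ncred_def_id = requests.get(f\"{FABER_ADMIN}/credential-definitions/created\").json()[\"credential_definition_ids\"][0]\nprint(public_did, schema_id, cred_def_id)\n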

"},{"location":"demo/AriesOpenAPIDemo/#notes","title":"Notes","text":"

The one time setup work for issuing a credential is complete\u2014creating a DID, schema and credential definition. We can now issue 1 or 1 million credentials without having to do those steps again. Astute readers might note that we did not set up a revocation registry, so we cannot revoke the credentials we issue with that credential definition. You can\u2019t have everything in an \"easy\" tutorial!

"},{"location":"demo/AriesOpenAPIDemo/#issuing-a-credential","title":"Issuing a Credential","text":"

Triggering the issuance of a credential from the Faber agent to Alice\u2019s agent is done with another API call. In the Faber browser tab, scroll down to the POST /issue-credential-2.0/send endpoint and get ready to (but don\u2019t yet) execute the request. Before executing, you need to update most of the data elements in the JSON. We now cover how to update all the fields.

"},{"location":"demo/AriesOpenAPIDemo/#faber-preparing-to-issue-a-credential","title":"Faber - Preparing to Issue a Credential","text":"

First, get the connection Id for Faber's connection with Alice. You can copy that from the Faber terminal (the last received event includes it), or scroll up on the Faber swagger tab to the GET /connections API endpoint, execute, copy it and paste the connection_id value into the same field in the issue credential JSON.

Click here to see a screenshot

For the following fields, scroll on Faber's Swagger page to the listed endpoint, execute (if necessary), copy the response value and paste as the values of the following JSON items:

  • issuer_did the Faber public DID (use GET /wallet/did/public),
  • schema_id the Id of the schema Faber created (use GET /schemas/created) and,
  • cred_def_id the Id of the credential definition Faber created (use GET /credential-definitions/created)

into the filter section's indy subsection. Remove the \"dif\" subsection of the filter section within the JSON, and specify the remaining indy filter criteria as follows:

  • schema_version: set to the last segment of the schema_id, a three part version number that was randomly generated on startup of the Faber agent. Segments of the schema_id are separated by \":\"s.
  • schema_issuer_did: set to the same value as in issuer_did,
  • schema_name: set to the second last segment of the schema_id, in this case degree schema

Finally, set the remaining values as follows:

  • auto_remove: set to true (no quotes), see note below
  • comment: set to any string. It's intended to let Alice know something about the credential being offered.
  • trace: set to false (no quotes). It's for troubleshooting, performance profiling, and/or diagnostics.

By setting auto_remove to true, ACA-Py will automatically remove the credential exchange record after the protocol completes. When implementing a controller, this is the likely setting to use to reduce agent storage usage, but implies if a record of the issuance of the credential is needed, the controller must save it somewhere else. For example, Faber College might extend their Student Information System, where they track all their students, to record when credentials are issued to students, and the Ids of the issued credentials.

"},{"location":"demo/AriesOpenAPIDemo/#faber-issuing-the-credential","title":"Faber - Issuing the Credential","text":"

Finally, we need to put into the JSON the data values for the credential_preview section. Copy the following and paste it between the square brackets of the attributes item, replacing what is there. Feel free to change the attribute value items, but don't change the labels or names:

      {\n        \"name\": \"name\",\n        \"value\": \"Alice Smith\"\n      },\n      {\n        \"name\": \"timestamp\",\n        \"value\": \"1234567890\"\n      },\n      {\n        \"name\": \"date\",\n        \"value\": \"2018-05-28\"\n      },\n      {\n        \"name\": \"degree\",\n        \"value\": \"Maths\"\n      },\n      {\n        \"name\": \"birthdate_dateint\",\n        \"value\": \"19640101\"\n      }\n

(Note that the birthdate above is used to present later on to pass an \"age proof\".)

OK, finally, you are ready to click Execute. The request should work, but if it doesn\u2019t - check your JSON! Did you get all the quotes and commas right?

Show me a screenshot - credential offer

To confirm the issuance worked, scroll up on the Faber Swagger page to the issue-credential v2.0 section and execute the GET /issue-credential-2.0/records endpoint. You should see a lot of information about the exchange just initiated.
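
For controller authors, here is a minimal sketch (Python, requests library) of the same POST /issue-credential-2.0/send call, assuming the connection id and the IDs gathered above; the \"...\" values are placeholders you must fill in:

import requests\n\nFABER_ADMIN = \"http://localhost:8021\"\n\nconn_id = \"...\"  # Faber's connection_id for Alice\nissuer_did = \"...\"  # Faber's public DID (GET /wallet/did/public)\nschema_id = \"...\"  # from GET /schemas/created, form did:2:name:version\ncred_def_id = \"...\"  # from GET /credential-definitions/created\n\n# schema_id segments are separated by \":\"s: did, 2, name, version\nschema_parts = schema_id.split(\":\")\nschema_name, schema_version = schema_parts[2], schema_parts[3]\n\nbody = {\n    \"connection_id\": conn_id,\n    \"auto_remove\": True,\n    \"comment\": \"Your degree credential\",\n    \"trace\": False,\n    \"credential_preview\": {\n        \"@type\": \"issue-credential/2.0/credential-preview\",\n        \"attributes\": [\n            {\"name\": \"name\", \"value\": \"Alice Smith\"},\n            {\"name\": \"timestamp\", \"value\": \"1234567890\"},\n            {\"name\": \"date\", \"value\": \"2018-05-28\"},\n            {\"name\": \"degree\", \"value\": \"Maths\"},\n            {\"name\": \"birthdate_dateint\", \"value\": \"19640101\"},\n        ],\n    },\n    \"filter\": {\n        \"indy\": {\n            \"issuer_did\": issuer_did,\n            \"schema_id\": schema_id,\n            \"schema_issuer_did\": issuer_did,\n            \"schema_name\": schema_name,\n            \"schema_version\": schema_version,\n            \"cred_def_id\": cred_def_id,\n        }\n    },\n}\nresp = requests.post(f\"{FABER_ADMIN}/issue-credential-2.0/send\", json=body)\nresp.raise_for_status()\n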

"},{"location":"demo/AriesOpenAPIDemo/#alice-receives-credential","title":"Alice Receives Credential","text":"

Let\u2019s look at it from Alice\u2019s side. Alice's agent source code automatically handles credential offers by immediately responding with a credential request. Scroll back in the Alice terminal to where the credential issuance started. If you've followed the full script, that is just after where we used the basic message protocol to send text messages between Alice and Faber.

Alice's agent first received a notification of a Credential Offer, to which it responded with a Credential Request. Faber received the Credential Request and responded in turn with an Issue Credential message. Scroll down through the events from ACA-Py to the controller to see the notifications of those messages. Make sure you scroll all the way to the bottom of the terminal so you can continue with the process.

Show me a screenshot - issue credential"},{"location":"demo/AriesOpenAPIDemo/#alice-stores-credential-in-her-wallet","title":"Alice Stores Credential in her Wallet","text":"

We can check (via Alice's Swagger interface) the issue credential status by hitting the GET /issue-credential-2.0/records endpoint. Note that within the results, the cred_ex_record just received has a state of credential-received, but not yet done. Let's address that.

Show me a screenshot - check credential exchange status

First, we need the cred_ex_id from the API call response above, or from the event in the terminal; use the endpoint POST /issue-credential-2.0/records/{cred_ex_id}/store to tell Alice's ACA-Py instance to store the credential in agent storage (aka the Indy Wallet). Note that in the JSON for that endpoint we can provide a credential Id to store in the wallet by setting a value in the credential_id string. A real controller might use the cred_ex_id for that, or use something else that makes sense in the agent's business scenario (but the agent generates a random credential identifier by default).
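 
A scripted version of that store call is a minimal sketch like this (Python, requests library):

import requests\n\nALICE_ADMIN = \"http://localhost:8031\"\ncred_ex_id = \"...\"  # from GET /issue-credential-2.0/records or the webhook event\n\n# Optionally pass {\"credential_id\": \"my-faber-degree\"} to pick the wallet id\nresp = requests.post(f\"{ALICE_ADMIN}/issue-credential-2.0/records/{cred_ex_id}/store\", json={})\nresp.raise_for_status()\n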

Show me a screenshot - store credential

Now, in Alice\u2019s swagger browser tab, find the credentials section and within that, execute the GET /credentials endpoint. There should be a list of credentials held by Alice, with just a single entry, the credential issued from the Faber agent. Note that the element referent is the value of the credential_id element used in other calls. referent is the name returned in the indy-sdk call to get the set of credentials for the wallet and ACA-Py code does not change it in the response.

"},{"location":"demo/AriesOpenAPIDemo/#faber-receives-acknowledgment-that-the-credential-was-received","title":"Faber Receives Acknowledgment that the Credential was Received","text":"

On the Faber side, we can see by scanning back in the terminal that it received events notifying it that the credential was issued and accepted.

Show me Faber's event activity

Note that once the credential processing completed, Faber's agent deleted the credential exchange record from its wallet. This can be confirmed by executing the endpoint GET /issue-credential-2.0/records.

Show me a screenshot

You\u2019ve done it, issued a credential! w00t!

"},{"location":"demo/AriesOpenAPIDemo/#issue-credential-notes","title":"Issue Credential Notes","text":"

Those that know something about the Indy process for issuing a credential and the DIDComm Issue Credential protocol know that there are multiple steps to issuing credentials: a back and forth between the issuer and the holder to (at least) offer, request and issue the credential. All of those messages happened, but the two agents took care of those details rather than bothering the controller (you, in this case) with managing the back and forth.

  • On the Faber agent side, this is because we used the POST /issue-credential-2.0/send administrative message, which handles the back and forth for the issuer automatically. We could have used the other /issue-credential-2.0/ endpoints to allow the controller to handle each step of the protocol.
  • On Alice's agent side, this is because the handler for the issue_credential_v2_0 event always responds to credential offers with corresponding credential requests.
"},{"location":"demo/AriesOpenAPIDemo/#bonus-points","title":"Bonus Points","text":"

If you would like to perform all of the issuance steps manually on the Faber agent side, use a sequence of the other /issue-credential-2.0/ messages. Use the GET /issue-credential-2.0/records to both check the credential exchange state as you progress through the protocol and to find some of the data you\u2019ll need in executing the sequence of requests.

The following table lists the endpoints that you need to call (\"REST service\") and the callbacks that your agent will receive (\"callback\") that you need to respond to. See the detailed API docs.

| Protocol Step | Faber (Issuer) | Alice (Holder) | Notes |
| --- | --- | --- | --- |
| Send Credential Offer | POST /issue-credential-2.0/send-offer | | REST service |
| Receive Offer | | /issue_credential_v2_0/ | callback |
| Send Credential Request | | POST /issue-credential-2.0/records/{cred_ex_id}/send-request | REST service |
| Receive Request | /issue_credential_v2_0/ | | callback |
| Issue Credential | POST /issue-credential-2.0/records/{cred_ex_id}/issue | | REST service |
| Receive Credential | | /issue_credential_v2_0/ | callback |
| Store Credential | | POST /issue-credential-2.0/records/{cred_ex_id}/store | REST service |
| Receive Acknowledgement | /issue_credential_v2_0/ | | callback |
| Store Credential Id | | | application function |
"},{"location":"demo/AriesOpenAPIDemo/#requestingpresenting-a-proof","title":"Requesting/Presenting a Proof","text":"

Alice now has her Faber credential. Let\u2019s have the Faber agent send a request for a presentation (a proof) using that credential. This should be pretty easy for you at this point.

"},{"location":"demo/AriesOpenAPIDemo/#faber-sends-a-proof-request","title":"Faber sends a Proof Request","text":"

From the Faber browser tab, get ready to execute the POST /present-proof-2.0/send-request endpoint. After hitting Try it Out, erase the data in the block labelled \"Edit Value Model\", replacing it with the text below. Once that is done, replace in the JSON each instance of cred_def_id (there are four instances) and connection_id with the values found using the same techniques we've used earlier in this tutorial. Both can be found by scrolling back a little in the Faber terminal, or you can execute API endpoints we've already covered. You can also change the value of the comment item to whatever you want.

{\n  \"comment\": \"This is a comment about the reason for the proof\",\n  \"connection_id\": \"e469e0f3-2b4d-4b12-9ac7-293f23e8a816\",\n  \"presentation_request\": {\n    \"indy\": {\n      \"name\": \"Proof of Education\",\n      \"version\": \"1.0\",\n      \"requested_attributes\": {\n        \"0_name_uuid\": {\n          \"name\": \"name\",\n          \"restrictions\": [\n            {\n              \"cred_def_id\": \"SsX9siFWXJyCAmXnHY514N:3:CL:8:faber.agent.degree_schema\"\n            }\n          ]\n        },\n        \"0_date_uuid\": {\n          \"name\": \"date\",\n          \"restrictions\": [\n            {\n              \"cred_def_id\": \"SsX9siFWXJyCAmXnHY514N:3:CL:8:faber.agent.degree_schema\"\n            }\n          ]\n        },\n        \"0_degree_uuid\": {\n          \"name\": \"degree\",\n          \"restrictions\": [\n            {\n              \"cred_def_id\": \"SsX9siFWXJyCAmXnHY514N:3:CL:8:faber.agent.degree_schema\"\n            }\n          ]\n        },\n        \"0_self_attested_thing_uuid\": {\n          \"name\": \"self_attested_thing\"\n        }\n      },\n      \"requested_predicates\": {\n        \"0_age_GE_uuid\": {\n          \"name\": \"birthdate_dateint\",\n          \"p_type\": \"<=\",\n          \"p_value\": 20030101,\n          \"restrictions\": [\n            {\n              \"cred_def_id\": \"SsX9siFWXJyCAmXnHY514N:3:CL:8:faber.agent.degree_schema\"\n            }\n          ]\n        }\n      }\n    }\n  }\n}\n

(Note that the birthdate requested above is used as an \"age proof\", the calculation is something like now() - years(18), and the presented birthdate must be on or before this date. You can see the calculation in action in the faber.py demo code.)

Notice that the proof request is using a predicate to check if Alice is older than 18 without asking for her age. Not sure what this has to do with her education level! Click Execute and cross your fingers. If the request fails check your JSON!

Show me a screenshot - send proof request"},{"location":"demo/AriesOpenAPIDemo/#alice-responding-to-the-proof-request","title":"Alice - Responding to the Proof Request","text":"

As before, Alice receives a webhook event from her agent telling her she has received a Proof Request. In our scenario, the ACA-Py instance automatically selects a matching credential and responds with a Proof.

Show me Alice's event activity

In a real scenario, for example if Alice had a mobile agent on her smartphone, the agent would prompt Alice whether she wanted to respond or not.

"},{"location":"demo/AriesOpenAPIDemo/#faber-verifying-the-proof","title":"Faber - Verifying the Proof","text":"

Note that in the response, the state is request-sent. That is because when the HTTP response was generated (immediately after sending the request), Alice's agent had not yet responded to the request. We\u2019ll have to do another request to verify the presentation worked. Copy the value of the pres_ex_id field from the event in the Faber terminal and use it in executing the GET /present-proof-2.0/records/{pres_ex_id} endpoint. That should return a result showing the state as done and verified as true. Proof positive!
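
In a controller, that \"do another request\" step is typically handled in a webhook handler; if you are polling instead, a minimal sketch (Python, requests library) looks like this:

import time\nimport requests\n\nFABER_ADMIN = \"http://localhost:8021\"\npres_ex_id = \"...\"  # from the event in the Faber terminal\n\n# Poll until Alice's presentation arrives and is verified\nwhile True:\n    record = requests.get(f\"{FABER_ADMIN}/present-proof-2.0/records/{pres_ex_id}\").json()\n    if record[\"state\"] == \"done\":\n        print(\"verified:\", record[\"verified\"])  # expect \"true\"\n        break\n    time.sleep(1)\n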

You can see some of Faber's activity below:

Show me Faber's event activity"},{"location":"demo/AriesOpenAPIDemo/#present-proof-notes","title":"Present Proof Notes","text":"

As with the issue credential process, the agents handled some of the presentation steps without bothering the controller. In this case, Alice's agent processed the presentation request automatically through its handler for the present_proof_v2_0 event, and her wallet contained exactly one credential that satisfied the presentation request from the Faber agent. Similarly, the Faber agent's handler for the event responds automatically, so on receipt of the presentation it verifies the presentation and updates the status accordingly.

"},{"location":"demo/AriesOpenAPIDemo/#bonus-points_1","title":"Bonus Points","text":"

If you would like to perform all of the proof request/response steps manually, you can call all of the individual /present-proof-2.0 messages.

The following table lists endpoints that you need to call (\"REST service\") and callbacks that your agent will receive (\"callback\") that you need to respond to. See the detailed API docs.

| Protocol Step | Faber (Verifier) | Alice (Holder/Prover) | Notes |
| --- | --- | --- | --- |
| Send Proof Request | POST /present-proof-2.0/send-request | | REST service |
| Receive Proof Request | | /present_proof_v2_0 | callback (webhook) |
| Find Credentials | | GET /present-proof-2.0/records/{pres_ex_id}/credentials | REST service |
| Select Credentials | | | application or user function |
| Send Proof | | POST /present-proof-2.0/records/{pres_ex_id}/send-presentation | REST service |
| Receive Proof | /present_proof_v2_0 | | callback (webhook) |
| Validate Proof | POST /present-proof-2.0/records/{pres_ex_id}/verify-presentation | | REST service |
| Save Proof | | | application data |
"},{"location":"demo/AriesOpenAPIDemo/#conclusion","title":"Conclusion","text":"

That\u2019s the OpenAPI-based tutorial. Feel free to play with the API and learn how it works. More importantly, as you implement a controller, use the OpenAPI user interface to test out the calls you will be using as you go. The list of API calls is grouped by protocol and if you are familiar with the protocols (Aries RFCs) the API call names should be pretty obvious.

One limitation of you being the controller is that you don't see the events from the agent that a controller program sees. For example, you, as Alice's agent, are not notified when Faber initiates the sending of a Credential. Some of those things show up in the terminal as messages, but others you just have to know have happened based on a successful API call.

"},{"location":"demo/AriesPostmanDemo/","title":"Aries Postman Demo","text":"

In these demos we will use Postman as our controller client.

"},{"location":"demo/AriesPostmanDemo/#contents","title":"Contents","text":"
  • Getting Started
  • Installing Postman
  • Creating a workspace
  • Importing the environment
  • Importing the collections
  • Postman basics
  • Experimenting with the vc-api endpoints
  • Register new dids
  • Issue credentials
  • Store and retrieve credentials
  • Verify credentials
  • Prove a presentation
  • Verify a presentation
"},{"location":"demo/AriesPostmanDemo/#getting-started","title":"Getting Started","text":"

Welcome to the Postman demo. This is an addition to the available OpenAPI demo, providing a set of collections to test and demonstrate various ACA-Py functionalities.

"},{"location":"demo/AriesPostmanDemo/#installing-postman","title":"Installing Postman","text":"

Download, install and launch Postman.

"},{"location":"demo/AriesPostmanDemo/#creating-a-workspace","title":"Creating a workspace","text":"

Create a new Postman workspace labeled \"acapy-demo\".

"},{"location":"demo/AriesPostmanDemo/#importing-the-environment","title":"Importing the environment","text":"

In the environment tab on the left, click the import button. You can paste this link, which points to the environment file in the ACA-Py repository.

Make sure you have the environment set as your active environment.

"},{"location":"demo/AriesPostmanDemo/#importing-the-collections","title":"Importing the collections","text":"

In the collections tab from the left, click the import button.

The following collections are available:

  • vc-api
"},{"location":"demo/AriesPostmanDemo/#postman-basics","title":"Postman basics","text":"

Once you are set up, you will be ready to run Postman requests. The order of the requests is important, since some values are saved dynamically as environment variables for subsequent calls.

You have your environment where you define variables to be accessed by your collections.

Each collection consists of a series of requests which can be configured independently.

"},{"location":"demo/AriesPostmanDemo/#experimenting-with-the-vc-api-endpoints","title":"Experimenting with the vc-api endpoints","text":"

Make sure you have a demo agent available. You can use the following command to deploy one:

LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber --bg\n

When running for the first time, please allow some time for the images to build.

"},{"location":"demo/AriesPostmanDemo/#register-new-dids","title":"Register new dids","text":"

The first 2 requests for this collection will create 2 did:keys. We will use those in subsequent calls to issue Ed25519Signature2020 and BbsBlsSignature2020 credentials. Run the 2 did creation requests. These requests will use the /wallet/did/create endpoint.

"},{"location":"demo/AriesPostmanDemo/#issue-credentials","title":"Issue credentials","text":"

For issuing, you must input a w3c compliant json-ld credential and issuance options in your request body. The issuer field must be a registered did from the agent's wallet. The suite will be derived from the did method.

{\n    \"credential\":   { \n        \"@context\": [\n            \"https://www.w3.org/2018/credentials/v1\"\n        ],\n        \"type\": [\n            \"VerifiableCredential\"\n        ],\n        \"issuer\": \"did:example:123\",\n        \"issuanceDate\": \"2022-05-01T00:00:00Z\",\n        \"credentialSubject\": {\n            \"id\": \"did:example:123\"\n        }\n    },\n    \"options\": {}\n}\n

Some examples have been pre-configured in the collection. Run the requests and inspect the results. Experiment with different credentials.
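
Outside of Postman, the same issuance request can be made from any HTTP client; a minimal sketch (Python, requests library), assuming the demo agent's admin API on localhost:8021 and the vc-api route used by the collection:

import requests\n\nAGENT_ADMIN = \"http://localhost:8021\"  # the demo Faber agent's admin API\nissuer_did = \"did:key:...\"  # one of the dids created by the first two requests\n\ncredential = {\n    \"@context\": [\"https://www.w3.org/2018/credentials/v1\"],\n    \"type\": [\"VerifiableCredential\"],\n    \"issuer\": issuer_did,\n    \"issuanceDate\": \"2022-05-01T00:00:00Z\",\n    \"credentialSubject\": {\"id\": issuer_did},\n}\nresp = requests.post(f\"{AGENT_ADMIN}/vc/credentials/issue\", json={\"credential\": credential, \"options\": {}})\nprint(resp.json())\n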

"},{"location":"demo/AriesPostmanDemo/#store-and-retrieve-credentials","title":"Store and retrieve credentials","text":"

Your last issued credential will be stored as an environment variable for subsequent calls, such as storing, verifying and including in a presentation.

Try running the store credential request, then retrieve the credential with the list and fetch requests. Try going back and forth between the issuance endpoints and the storage endpoints to store multiple different credentials.

"},{"location":"demo/AriesPostmanDemo/#verify-credentials","title":"Verify credentials","text":"

You can use this endpoint to verify your last issued credential, or any issued credential you provide to it.

"},{"location":"demo/AriesPostmanDemo/#prove-a-presentation","title":"Prove a presentation","text":"

Proving a presentation is an action where a holder will prove ownership of a credential by signing or demonstrating authority over the document.

"},{"location":"demo/AriesPostmanDemo/#verify-a-presentation","title":"Verify a presentation","text":"

The final request is to verify a presentation.

"},{"location":"demo/Endorser/","title":"Endorser Demo","text":"

There are two ways to run the alice/faber demo with endorser support enabled.

"},{"location":"demo/Endorser/#run-faber-as-an-author-with-a-dedicated-endorser-agent","title":"Run Faber as an Author, with a dedicated Endorser agent","text":"

This approach runs Faber as an un-privileged agent, and starts a dedicated Endorser Agent in a sub-process (an instance of ACA-Py) to endorse Faber's transactions.

Start a VON Network instance and a Tails server:

  • Follow the Building and Starting section of the VON Network Tutorial to get the ledger started. You can leave off the --logs option if you want to use the same terminal for running both VON Network and the Tails server. When you are finished with VON Network, follow the Stopping And Removing a VON Network instructions.
  • Run an AnonCreds revocation registry tails server in order to support revocation by following the instructions in the Alice gets a Phone demo.

Start up Faber as Author (note the tails file size override, to allow testing of the revocation registry roll-over):

TAILS_FILE_COUNT=5 ./run_demo faber --endorser-role author --revocation\n

Start up Alice as normal:

./run_demo alice\n

You can run all of Faber's functions as normal - if you watch the console you will see that all ledger operations go through the endorser workflow.

If you issue more than 5 credentials, you will see Faber creating a new revocation registry (including endorser operations).

"},{"location":"demo/Endorser/#run-alice-as-an-author-and-faber-as-an-endorser","title":"Run Alice as an Author and Faber as an Endorser","text":"

This approach sets up the endorser roles to allow manual testing using the agents' swagger pages:

  • Faber runs as an Endorser (all of Faber's functions - issue credential, request proof, etc.) run normally, since Faber has ledger write access
  • Alice starts up with a DID with Author privileges (no ledger write access) and Faber is setup as Alice's Endorser

Start a VON Network and a Tails server using the instructions above.

Start up Faber as Endorser:

TAILS_FILE_COUNT=5 ./run_demo faber --endorser-role endorser --revocation\n

Start up Alice as Author:

TAILS_FILE_COUNT=5 ./run_demo alice --endorser-role author --revocation\n

Copy the invitation from Faber to Alice to complete the connection.

Then in the Alice shell, select option \"D\" and copy Faber's DID (it is the DID displayed on faber agent startup).

This starts up the ACA-Py agents with the endorser role set (via the new command-line args) and sets up the connection between the 2 agents with appropriate configuration.

Then, in the Alice swagger page you can create a schema and cred def, and all the endorser steps will happen automatically. You don't need to specify a connection id or explicitly request endorsement (ACA-Py does it all automatically based on the startup args).
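
For example, creating a schema from Alice's side is just the normal POST /schemas call; a minimal sketch (Python, requests library), with an illustrative schema body:

import requests\n\nALICE_ADMIN = \"http://localhost:8031\"  # Alice, running as Author\n\n# The normal schema call; with the endorser startup args in place, ACA-Py\n# routes the transaction through Faber (the Endorser) automatically\nresp = requests.post(f\"{ALICE_ADMIN}/schemas\", json={\"schema_name\": \"prefs\", \"schema_version\": \"1.0\", \"attributes\": [\"score\"]})\nprint(resp.json())\n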

If you check the endorser transaction records in either Alice or Faber you can see that the endorser protocol executes automatically and the appropriate endorsements were applied before the transactions were written to the ledger.

"},{"location":"demo/ReusingAConnection/","title":"Reusing a Connection","text":"

The Aries RFC 0434 Out of Band protocol enables the concept of reusing a connection such that when using RFC 0023 DID Exchange to establish a connection with an agent with which you already have a connection, you can reuse the existing connection instead of creating a new one. This is something you couldn't do with the older RFC 0160 Connection Protocol that we used in the early days of Aries. It was a pain, and made for a lousy user experience, as on every visit to an existing contact, the invitee got a new connection.

The requirements on your invitations (such as in the example below) are:

  • The invitation services item MUST be a resolvable DID.
  • Or alternatively, the invitation services item MUST NOT be an inline service.
  • The DID in the invitation services item is the same one in every invitation.

Example invitation:

{\n    \"@type\": \"https://didcomm.org/out-of-band/1.1/invitation\",\n    \"@id\": \"77489d63-caff-41fe-a4c1-ec7e2ff00695\",\n    \"label\": \"faber.agent\",\n    \"handshake_protocols\": [\n        \"https://didcomm.org/didexchange/1.0\"\n    ],\n    \"services\": [\n        \"did:sov:4JiUsoK85pVkkB1bAPzFaP\"\n    ]\n}\n

Here's the flow that demonstrates where reuse helps. For simplicity, we'll use the terms \"Issuer\" and \"Wallet\" in this example, but it applies to any connection between any two agents (the inviter and the invitee) that establish connections with one another.

  • The Wallet user is using a browser on the Issuers website and gets to the point where they are going to be offered a credential. As part of that flow, they are presented with a QR code that they scan with their wallet app.
  • The QR contains an RFC 0434 Out of Band invitation to connect that the Wallet processes as the invitee.
  • The Wallet uses the information in the invitation to send an RFC 0023 DID Exchange request DIDComm message back to the Issuer to initiate establishing a connection.
  • The Issuer responds back to the request with a response message, and the connection is established.
  • Later, the Wallet user returns to the Issuer's website, and does something (perhaps starts the process to get another credential) that results in the same QR code being displayed, and again the users scans the QR code with their Wallet app.
  • The Wallet recognizes (based on the DID in the services item in the invitation -- see example below) that it already has a connection to the Issuer, so instead of sending a DID Exchange request message back to the Issuer, they send an RFC 0434 Out of Band reuse DIDComm message, and both parties know to use the existing connection.
  • Had the Wallet used the DID Exchange request message, a new connection would have been established.

The RFC 0434 Out of Band protocol requirement that enables the reuse message to be used by the invitee (the Wallet in the flow above) is that the service in the invitation MUST be a resolvable DID that is the same in all of the invitations. In the example invitation above, the DID is a did:sov DID that is resolvable on a public Hyperledger Indy network. The DID could also be a Peer DID of type 2 or 4, which encode the entire DIDDoc contents into the DID identifier (thus they are \"resolvable DIDs\"). What cannot be used are the old \"unqualified\" DIDs that were commonly used in Aries prior to 2024, and Peer DID type 1. Both of those DID types include both an identifier and a DIDDoc in the services item of the Out of Band invitation. As noted in the Out of Band specification, reuse cannot be used with such DID types even if the contents are the same.
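To make the reuse decision concrete, here is a minimal, self-contained sketch of the check an invitee might perform when processing an invitation. The field name their_public_did matches ACA-Py's connection record field for the other agent's public DID, but the check itself is a hypothetical sketch, not an ACA-Py API:

def find_connection_by_did(connections, did):\n    \"\"\"Hypothetical helper: find an existing connection whose inviter's\n    public DID matches the DID in the invitation's services item.\"\"\"\n    return next((c for c in connections if c.get(\"their_public_did\") == did), None)\n\n\ndef can_reuse(invitation, connections):\n    \"\"\"True when an RFC 0434 reuse message can be sent instead of a new\n    RFC 0023 DID Exchange request.\"\"\"\n    service = invitation[\"services\"][0]\n    # Reuse requires the services item to be a resolvable DID (a plain\n    # string), not an inline service definition (a dict).\n    return isinstance(service, str) and find_connection_by_did(connections, service) is not None\n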

The use of connection reuse can be demonstrated with the Alice / Faber demos as follows. We assume you are already somewhat familiar with your options for running the Alice Faber Demo (e.g., locally or in a browser). Follow those instructions up to the point where you are about to start the Faber and Alice agents.

  1. On a command line, run Faber with these parameters: ./run_demo faber --reuse-connections --public-did-connections --events.
  2. On a second command line, run Alice as normal, perhaps with the events option: ./run_demo alice --reuse-connections --events
  3. Copy the invitation from the Faber terminal and paste it into the Alice terminal at the prompt.
  4. Verify that the connection was established.
  5. If you want, go to the Alice OpenAPI screen (port 8031, path api/docs), and then use the GET /connections endpoint to see that Alice has one connection to Faber.
  6. In the Faber terminal, type 4 to get a prompt for a new connection. This will generate a new invitation with the same public DID.
  7. In the Alice terminal, type 4 to get a prompt for a new connection, and paste the new invitation.
  8. Note from the webhook events in the Faber terminal that the reuse message is received from Alice, and as a result, no new connection was created.
  9. Execute the GET /connections endpoint on the Alice OpenAPI screen again to confirm that there is still just one established connection (a scripted version of this check is sketched after this list).
  10. Try running the demo again without the --reuse-connections parameter and compare the services value in the new invitation vs. what was generated in Steps 3 and 7. It is not a DID, but rather a one-time-use, inline DIDDoc item.
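The connection count checks in steps 5 and 9 can also be scripted. A minimal sketch, assuming the demo's default Alice admin port of 8031 and no admin API key:

import requests\n\n# The OpenAPI screen used in steps 5 and 9 calls this same endpoint.\nresp = requests.get(\"http://localhost:8031/connections\")\nresp.raise_for_status()\nresults = resp.json()[\"results\"]\nprint(f\"Alice has {len(results)} connection(s)\")\n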

While in the demo Faber uses in its invitations the same DID it publishes as an issuer (and uses in creating the schema and Cred Def for the demo), Faber could use any resolvable (not inline) DID, including DID Peer type 2 or type 4 DIDs, as long as the DID is the same in every invitation. It is the fact that the DID is always the same that tells the invitee that they can reuse an existing connection.

For example, to run faber with connection reuse using a non-public DID:

./run_demo faber --reuse-connections --events\n

To run faber using a did:peer and reusable connections:

./run_demo faber --reuse-connections --emit-did-peer-2 --events\n

To run this demo using a multi-use invitation (from Faber):

./run_demo faber --reuse-connections --emit-did-peer-2 --multi-use-invitations --events\n
"},{"location":"deploying/AnonCredsWalletType/","title":"AnonCreds-RS Support","text":"

A new wallet type has been added to ACA-Py to support the new anoncreds-rs library:

--wallet-type askar-anoncreds\n

When ACA-Py is run with this wallet type, it will run with an Askar format wallet (and Askar libraries) but will use anoncreds-rs instead of credx.

There is a new package under aries_cloudagent/anoncreds with code that supports the new library.

There are new endpoints (under /anoncreds) for creating a Schema and Credential Definition. However, the new anoncreds code is integrated into the existing Credential and Presentation endpoints (V2.0 endpoints only).

Within the protocols, there are new handler libraries to support the new anoncreds format (these are in parallel to the existing indy libraries).

The existing indy code is in:

aries_cloudagent/protocols/issue_credential/v2_0/formats/indy/handler.py\naries_cloudagent/protocols/indy/anoncreds/pres_exch_handler.py\naries_cloudagent/protocols/present_proof/v2_0/formats/indy/handler.py\n

The new anoncreds code is in:

aries_cloudagent/protocols/issue_credential/v2_0/formats/anoncreds/handler.py\naries_cloudagent/protocols/present_proof/anoncreds/pres_exch_handler.py\naries_cloudagent/protocols/present_proof/v2_0/formats/anoncreds/handler.py\n

The Indy handler checks to see if the wallet type is askar-anoncreds and if so delegates the calls to the anoncreds handler, for example:

        # Temporary shim while the new anoncreds library integration is in progress\n        wallet_type = profile.settings.get_value(\"wallet.type\")\n        if wallet_type == \"askar-anoncreds\":\n            self.anoncreds_handler = AnonCredsPresExchangeHandler(profile)\n

... and then:

        # Temporary shim while the new anoncreds library integration is in progress\n        if self.anoncreds_handler:\n            return self.anoncreds_handler.get_format_identifier(message_type)\n

To run the alice/faber demo using the new anoncreds library, start the demo with:

--wallet-type askar-anoncreds\n

There are no anoncreds-specific integration tests; to exercise the new anoncreds functionality, the agents within the integration tests are started with:

--wallet-type askar-anoncreds\n

Everything should just work!

Theoretically, the Aries Agent Test Harness (ATH) should work with anoncreds as well, by setting the wallet type (see https://github.com/hyperledger/aries-agent-test-harness#extra-backchannel-specific-parameters).

"},{"location":"deploying/AnonCredsWalletType/#revocation-new-in-anoncreds","title":"Revocation (new in anoncreds)","text":"

The changes are significant. Notably:

  • The old way: from Indy you got the timestamp of the RevRegEntry used, the accumulator, and the \"deltas\" -- a list of revoked and a list of unrevoked credentials for a given range. I'm not exactly sure what was passed to the AnonCreds library code for building the presentation.
  • The new way: the AnonCreds library expects the identifier for the RevRegEntry used (aka the timestamp), the accumulator, and the full state (0s and 1s) of the revocation status of all credentials in the registry.
  • The conversion from delta to full state must be handled in the Indy resolver -- not in the \"generic\" ACA-Py code -- since the other ledgers automagically provide the full state. In fact, we're likely to update Indy VDR to always provide the full state. The \"common\" (post-resolver) code should get back the full state from the resolver. (A sketch of the conversion follows this list.)
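The following is a minimal sketch of that delta-to-full-state conversion (illustrative only, not the actual resolver code; indexes are 0-based here for simplicity):

def delta_to_full_state(registry_size, issued, revoked):\n    \"\"\"Expand an Indy-style delta (lists of issued/revoked credential\n    indexes) into the full revocation state the AnonCreds library expects:\n    one flag per credential, 0 = not revoked, 1 = revoked.\"\"\"\n    state = [0] * registry_size\n    for idx in revoked:\n        state[idx] = 1\n    for idx in issued:  # credentials issued (or un-revoked) in the delta\n        state[idx] = 0\n    return state\n\n\n# Example: a registry of 8 credentials where the delta revokes 2 and 5.\nprint(delta_to_full_state(8, issued=[], revoked=[2, 5]))  # [0, 0, 1, 0, 0, 1, 0, 0]\n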

The Tails File changes are minimal -- nothing about the file itself changed. What changed:

  • the tails-file-server can be published to WITHOUT knowing the ID of the RevRegEntry, since that is not known when the tails file is generated/published. See: https://github.com/bcgov/indy-tails-server/pull/53 -- basically, by publishing based on the hash.
  • The tails-file is not needed by the issuer after generation. It used to be needed for issuing and revoking credentials. Those are now done without the tails file. See: https://github.com/hyperledger/aries-cloudagent-python/pull/2302/files. That code is already in Main, so you should have it.
"},{"location":"deploying/AnonCredsWalletType/#outstanding-work","title":"Outstanding work","text":"
  • revocation notifications (not sure if they're included in the anoncreds-rs updates; haven't tested them ...)
  • revocation support - complete the revocation implementation (support for unhappy path scenarios)
  • testing - various scenarios like mediation, multitenancy, etc.
  • unit tests (in the new anoncreds package) (see https://github.com/hyperledger/aries-cloudagent-python/pull/2596/commits/229ffbba209aff0ea7def5bad6556d93057f3c2a)
  • unit tests (review and possibly update unit tests for the credential and presentation integration)
  • endorsement (not implemented with the new anoncreds code)
  • wallet upgrade (askar to askar-anoncreds)
  • update the V1.0 versions of the Credential and Presentation endpoints to use anoncreds
  • any other anoncreds issues - https://github.com/hyperledger/aries-cloudagent-python/issues?q=is%3Aopen+is%3Aissue+label%3AAnonCreds
"},{"location":"deploying/AnonCredsWalletType/#retiring-old-indy-and-askar-credx-code","title":"Retiring old Indy and Askar (credx) Code","text":"

The main changes for the Credential and Presentation support are in the following two files:

aries_cloudagent/protocols/issue_credential/v2_0/messages/cred_format.py\naries_cloudagent/protocols/present_proof/v2_0/messages/pres_format.py\n

The INDY handler just needs to be re-pointed to the new anoncreds handler, and then all the old Indy code can be retired.

The new code is already in place (in comments). For example, for the Credential handler:

        To make the switch from indy to anoncreds replace the above with the following\n        INDY = FormatSpec(\n            \"hlindy/\",\n            DeferLoad(\n                \"aries_cloudagent.protocols.present_proof.v2_0\"\n                \".formats.anoncreds.handler.AnonCredsPresExchangeHandler\"\n            ),\n        )\n

There is a bunch of duplicated code; i.e., the new anoncreds code was added either as new classes (as above) or as new methods within an existing class.

Some new methods were added within the Ledger class.

New unit tests were added - in some cases as methods within existing test classes, and in some cases as new classes (whichever was easiest at the time).

"},{"location":"deploying/ContainerImagesAndGithubActions/","title":"Container Images and Github Actions","text":"

Aries Cloud Agent - Python is most frequently deployed using containers. From the first release of ACA-Py up through 0.7.4, much of the community has built their Aries stack using the container images graciously provided by BC Gov and hosted through their bcgovimages docker hub account. These images have been critical to the adoption of not only ACA-Py but also Hyperledger Aries and SSI more generally.

Recognizing how critical these images are to the success of ACA-Py and consistent with Hyperledger's commitment to open collaboration, container images are now built and published directly from the Aries Cloud Agent - Python project repository and made available through the Github Packages Container Registry.

"},{"location":"deploying/ContainerImagesAndGithubActions/#image","title":"Image","text":"

This project builds and publishes the ghcr.io/hyperledger/aries-cloudagent-python image. Multiple variants are available; see Tags.

"},{"location":"deploying/ContainerImagesAndGithubActions/#tags","title":"Tags","text":"

ACA-Py is a foundation for building decentralized identity applications; to this end, there are multiple variants of ACA-Py built to suit the needs of a variety of environments and workflows. The following variants exist:

  • \"Standard\" - The default configuration of ACA-Py, including:
    • Aries Askar for secure storage
    • Indy VDR for Indy ledger communication
    • Indy Shared Libraries for AnonCreds

In the past, two image variants were published. These two variants were largely distinguished by their providers for Indy network and AnonCreds support. The Standard variant is recommended for new projects. Migration from an Indy-based image (whether the new Indy image variant or the original BC Gov images) to the Standard image is outside the scope of this document.

The ACA-Py images built by this project are tagged to indicate which of the above variants each is. Other tags may also be generated for use by developers.

Below is a table of all generated images and their tags:

| Tag | Variant | Example | Description |
| --- | --- | --- | --- |
| py3.9-X.Y.Z | Standard | py3.9-0.7.4 | Standard image variant built on Python 3.9 for ACA-Py version X.Y.Z |
| py3.10-X.Y.Z | Standard | py3.10-0.7.4 | Standard image variant built on Python 3.10 for ACA-Py version X.Y.Z |"},{"location":"deploying/ContainerImagesAndGithubActions/#image-comparison","title":"Image Comparison","text":"

There are several key differences that should be noted between the two image variants and the original BC Gov ACA-Py images.

  • Standard Image
    • Based on slim variant of Debian
    • Does NOT include libindy
    • Default user is aries
    • Uses container's system python environment rather than pyenv
    • Askar and Indy Shared libraries are installed as dependencies of ACA-Py through pip from pre-compiled binaries included in the python wrappers
    • Built from repo contents
  • Indy Image (no longer produced but included here for clarity)
    • Based on slim variant of Debian
    • Built from multi-stage build step (indy-base in the Dockerfile) which includes Indy dependencies; this could be replaced with an explicit indy-python image from the Indy SDK repo
    • Includes libindy but does NOT include the Indy CLI
    • Default user is indy
    • Uses container's system python environment rather than pyenv
    • Askar and Indy Shared libraries are installed as dependencies of ACA-Py through pip from pre-compiled binaries included in the python wrappers
    • Built from repo contents
    • Includes Indy postgres storage plugin
  • bcgovimages/aries-cloudagent
    • (Usually) based on Ubuntu
    • Based on von-image
    • Default user is indy
    • Includes libindy and Indy CLI
    • Uses pyenv
    • Askar and Indy Shared libraries built from source
    • Built from ACA-Py python package uploaded to PyPI
    • Includes Indy postgres storage plugin
"},{"location":"deploying/ContainerImagesAndGithubActions/#github-actions","title":"Github Actions","text":"
  • Tests (.github/workflows/tests.yml) - A reusable workflow that runs tests for the Standard ACA-Py variant for a given python version.
  • PR Tests (.github/workflows/pr-tests.yml) - Run on pull requests; runs tests for the Standard ACA-Py variant for a \"default\" python version. Check this workflow for the current default python version in use.
  • Nightly Tests (.github/workflows/nightly-tests.yml) - Run nightly; runs tests for the Standard ACA-Py variant for all currently supported python versions. Check this workflow for the set of currently supported versions in use.
  • Publish (.github/workflows/publish.yml) - Run on new release published or when manually triggered; builds and pushes the Standard ACA-Py variant to the Github Container Registry.
  • Integration Tests (.github/workflows/integrationtests.yml) - Run on pull requests (to the hyperledger fork only); runs BDD integration tests.
  • Black Format (.github/workflows/blackformat.yml) - Run on pull requests; checks formatting of files modified by the PR.
  • CodeQL (.github/workflows/codeql.yml) - Run on pull requests; performs CodeQL analysis.
  • Python Publish (.github/workflows/pythonpublish.yml) - Run on release created; publishes ACA-Py python package to PyPI.
  • PIP Audit (.github/workflows/pipaudit.yml) - Run when manually triggered; performs pip audit.
"},{"location":"deploying/Databases/","title":"Databases","text":"

Your wallet stores secret keys, connections, and other information. You have different choices for storing this information: the wallet supports two different databases, SQLite and PostgreSQL.

"},{"location":"deploying/Databases/#sqlite","title":"SQLite","text":"

If the wallet is configured the default way (e.g., as in demo-args.yaml, without explicit wallet-storage), a SQLite database file is used.

# demo-args.yaml\nwallet-type: indy\nwallet-name: wallet\nwallet-key: wallet-password\n

For this configuration, a folder called wallet will be created which contains a file called sqlite.db.

"},{"location":"deploying/Databases/#postgresql","title":"PostgreSQL","text":"

The wallet can be configured to use PostgreSQL as storage.

# demo-args.yaml\nwallet-type: indy\nwallet-name: wallet\nwallet-key: wallet-password\n\nwallet-storage-type: postgres_storage\nwallet-storage-config: \"{\\\"url\\\":\\\"db:5432\\\",\\\"wallet_scheme\\\":\\\"DatabasePerWallet\\\"}\"\nwallet-storage-creds: \"{\\\"account\\\":\\\"postgres\\\",\\\"password\\\":\\\"mysecretpassword\\\",\\\"admin_account\\\":\\\"postgres\\\",\\\"admin_password\\\":\\\"mysecretpassword\\\"}\"\n

In this case the hostname for the database is db on port 5432.

A docker-compose file could look like this:

# docker-compose.yml\nversion: '3'\nservices:\n  # acapy ...\n  # database\n  db:\n    image: postgres:10\n    environment:\n      POSTGRES_PASSWORD: mysecretpassword\n      POSTGRES_USER: postgres\n      POSTGRES_DB: postgres\n    ports:\n      - \"5432:5432\"\n
"},{"location":"deploying/IndySDKtoAskarMigration/","title":"Migrating from Indy SDK to Askar","text":"

This document summarizes why the Indy SDK is being deprecated, its replacement (Aries Askar and the \"shared components\"), how to use Aries Askar in a new ACA-Py deployment, and the migration process for an ACA-Py instance that is already deployed using the Indy SDK.

"},{"location":"deploying/IndySDKtoAskarMigration/#the-time-has-come-archiving-indy-sdk","title":"The Time Has Come! Archiving Indy SDK","text":"

Yes, it\u2019s time. Indy SDK needs to be archived! In this article we\u2019ll explain why this change is needed, why Aries Askar is a faster, better replacement, and how to transition your Indy SDK-based ACA-Py deployment to Askar as soon as possible.

"},{"location":"deploying/IndySDKtoAskarMigration/#history-of-indy-sdk","title":"History of Indy SDK","text":"

Indy SDK has been the basis of Hyperledger Indy and Hyperledger Aries clients accessing Indy networks for a long time. It has done an excellent job at exactly what you might imagine: being the SDK that enables clients to leverage the capabilities of a Hyperledger Indy ledger.

Its continued use has been all the more remarkable given that the last published release of the Indy SDK was in 2020. This speaks to the quality of the implementation \u2014 it just kept getting used, doing what it was supposed to do, and without major bugs, vulnerabilities or demands for new features.

However, the architecture of Indy SDK has critical bottlenecks. Most notably, as load increases, Indy SDK performance drops. And with Indy-based ecosystems flourishing and loads exponentially increasing, this means the Aries/Indy community needed to make a change.

"},{"location":"deploying/IndySDKtoAskarMigration/#aries-askar-and-the-shared-components","title":"Aries Askar and the Shared Components","text":"

The replacement for the Indy SDK is a set of four components, each replacing a part of Indy SDK. (In retrospect, Indy SDK ought to have been split up this way from the start.)

The components are:

  1. Aries Askar: the replacement for the \u201cindy-wallet\u201d part of Indy SDK. Askar is a key management service, handling the creation and use of private keys managed by Aries agents. It\u2019s also the secure storage for DIDs, verifiable credentials, and data used by issuers of verifiable credentials for signing. As the Aries moniker indicates, Askar is suitable for use with any Aries agent, and for managing any keys, whether for use with Indy or any other Verifiable Data Registry (VDR).
  2. Indy VDR: the interface to publishing to and retrieving data from Hyperledger Indy networks. Indy VDR is scoped at the appropriate level for any client application using Hyperledger Indy networks.
  3. CredX: a Rust implementation of AnonCreds that evolved from the Indy SDK implementation. CredX is within the indy-shared-rs repository. It has significant performance enhancements over the version in the Indy SDK, particularly for Issuers.
  4. Hyperledger AnonCreds: a newer implementation of AnonCreds that is \u201cledger-agnostic\u201d \u2014 it can be used with Hyperledger Indy and any other suitable verifiable data registry.

In ACA-Py, we are currently using CredX, but will be moving to Hyperledger AnonCreds soon.

If you\u2019re involved in the community, you\u2019ll know we\u2019ve been planning this replacement for almost three years. The first release of the Aries Askar and related components was in 2021. At the end of 2022 there was a concerted effort to eliminate the Indy SDK by creating migration scripts, and removing the Indy SDK from various tools in the community (the Indy CLI, the Indy Test Automation pipeline, and so on). This step is to finish the task.

"},{"location":"deploying/IndySDKtoAskarMigration/#performance","title":"Performance","text":"

What\u2019s the performance and stability of the replacement? In short, it\u2019s dramatically better. Overall Aries Askar performance is faster, and as the load increases the performance remains constant. Combined with added flexibility and modularization, the community is very positive about the change.

"},{"location":"deploying/IndySDKtoAskarMigration/#new-aca-py-deployments","title":"New ACA-Py Deployments","text":"

If you are new to ACA-Py, the instructions are easy. Use Aries Askar and the shared components from the start. To do that, simply make sure that you are using the --wallet-type askar configuration parameter. You will automatically be using all of the shared components.

As of release 0.9.0, you will get a deprecation warning when you start ACA-Py with the Indy SDK. Switch to Aries Askar to eliminate that warning.

"},{"location":"deploying/IndySDKtoAskarMigration/#migrating-existing-indy-sdk-aca-py-deployments-to-askar","title":"Migrating Existing Indy SDK ACA-Py Deployments to Askar","text":"

If you have an existing deployment, then in addition to changing the --wallet-type configuration setting, your database must be migrated from the Indy SDK format to the Aries Askar format. To facilitate the migration, an Indy SDK to Askar migration script has been published in the aries-acapy-tools repository. There is lots of information in that repository about the migration tool and how to use it. The following is a summary of the steps you will have to perform. Of course, all deployments are a little (or a lot!) different, and your exact steps will depend on where and how you have deployed ACA-Py.

Note that in these steps you will have to take your ACA-Py instance offline, so scheduling the maintenance must be a part of your migration plan. You will also want to script the entire process so that downtime and the risk of manual mistakes are minimized (a sketch of such a script follows the steps below).

We hope that you have one or two test environments (e.g., Dev and Test) to run through these steps before upgrading your production deployment. As well, it is good if you can make a copy of your production database and test the migration on the real (copy) database before the actual upgrade.

  • Prepare a way to run the Askar Upgrade script from the aries-acapy-tools repository. For example, you might want to prepare a container that you can run in the same environment that you run ACA-Py (e.g., within Kubernetes or OpenShift).
  • Shutdown your ACA-Py instance.
  • Backup the existing wallet using the usual tools you have for backing up the database.
  • If you are running in a cloud native environment such as Kubernetes, deploy the Askar Upgrade container and, as needed, update the network policies to allow the Askar Upgrade container to connect with the wallet database.
  • Run the askar-upgrade script. For example:
askar-upgrade \\\n  --strategy dbpw \\\n  --uri postgres://<username>:<password>@<hostname>:<port>/<dbname> \\\n  --wallet-name <wallet name> \\\n  --wallet-key <wallet key>\n
  • Switch the ACA-Py instance's --wallet-type configuration setting to askar
  • Start up the ACA-Py instances.
  • Trouble? Restore the initial database and revert the --wallet-type change to roll back to the pre-migration state.
  • Check the data.
  • Test the deployment.
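The following is a minimal sketch of such a script, assuming a PostgreSQL wallet; the connection details, wallet name, and backup command are placeholders to adapt to your own environment and tooling:

import subprocess\n\nDB_URI = \"postgres://username:password@db-host:5432/wallet-db\"  # placeholder\n\n# 1. Back up the wallet database (placeholder command and file name).\nsubprocess.run([\"pg_dump\", \"--dbname\", DB_URI, \"--file\", \"wallet-backup.sql\"], check=True)\n\n# 2. Run the Askar Upgrade script from aries-acapy-tools.\nsubprocess.run(\n    [\n        \"askar-upgrade\",\n        \"--strategy\", \"dbpw\",\n        \"--uri\", DB_URI,\n        \"--wallet-name\", \"my-wallet\",  # placeholder\n        \"--wallet-key\", \"my-wallet-key\",  # placeholder\n    ],\n    check=True,\n)\n\n# 3. Redeploy ACA-Py with --wallet-type askar (deployment specific).\n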

It is very important that the Askar Upgrade script has direct access to the database. In our very first upgrade attempt, we ran the Askar Upgrade script from a container running outside of our container orchestration platform (OpenShift) using port forwarding. The script ran EXTREMELY slowly, taking literally hours before we finally stopped it. Once we ran the script inside the OpenShift environment, it ran (for the same database) in about 7 minutes. The entire app downtime was less than 20 minutes.

"},{"location":"deploying/IndySDKtoAskarMigration/#questions","title":"Questions?","text":"

If you have questions, comments, or suggestions about the upgrade process, please use the Aries Cloud Agent Python channel on Hyperledger Discord, or submit a GitHub issue to the ACA-Py repository.

"},{"location":"deploying/Poetry/","title":"Poetry Cheat Sheet for Developers","text":""},{"location":"deploying/Poetry/#introduction-to-poetry","title":"Introduction to Poetry","text":"

Poetry is a dependency management and packaging tool for Python that aims to simplify and enhance the development process. It offers features for managing dependencies, virtual environments, and building and publishing Python packages.

"},{"location":"deploying/Poetry/#virtual-environments-with-poetry","title":"Virtual Environments with Poetry","text":"

Poetry manages virtual environments for your projects to ensure clean and isolated development environments.

"},{"location":"deploying/Poetry/#creating-a-virtual-environment","title":"Creating a Virtual Environment","text":"
poetry install\n
"},{"location":"deploying/Poetry/#activating-the-virtual-environment","title":"Activating the Virtual Environment","text":"
poetry shell\n

Alternatively, you can source the environment settings into the current shell:

source $(poetry env info --path)/bin/activate\n

For PowerShell users, this would be:

& ((poetry env info --path) + \"\\Scripts\\activate.ps1\")\n
"},{"location":"deploying/Poetry/#deactivating-the-virtual-environment","title":"Deactivating the Virtual Environment","text":"

When using poetry shell

exit\n

When using the activate script

deactivate\n
"},{"location":"deploying/Poetry/#dependency-management","title":"Dependency Management","text":"

Poetry uses the pyproject.toml file to manage dependencies. Add new dependencies to this file and update existing ones as needed.

"},{"location":"deploying/Poetry/#adding-a-dependency","title":"Adding a Dependency","text":"
poetry add package-name\n
"},{"location":"deploying/Poetry/#adding-a-development-dependency","title":"Adding a Development Dependency","text":"
poetry add --dev package-name\n
"},{"location":"deploying/Poetry/#removing-a-dependency","title":"Removing a Dependency","text":"
poetry remove package-name\n
"},{"location":"deploying/Poetry/#updating-dependencies","title":"Updating Dependencies","text":"
poetry update\n
"},{"location":"deploying/Poetry/#running-tasks-with-poetry","title":"Running Tasks with Poetry","text":"

Poetry provides a way to run scripts and commands without activating the virtual environment explicitly.

"},{"location":"deploying/Poetry/#running-a-command","title":"Running a Command","text":"
poetry run command-name\n
"},{"location":"deploying/Poetry/#running-a-script","title":"Running a Script","text":"
poetry run python script.py\n
"},{"location":"deploying/Poetry/#building-and-publishing-with-poetry","title":"Building and Publishing with Poetry","text":"

Poetry streamlines the process of building and publishing Python packages.

"},{"location":"deploying/Poetry/#building-the-package","title":"Building the Package","text":"
poetry build\n
"},{"location":"deploying/Poetry/#publishing-the-package","title":"Publishing the Package","text":"
poetry publish\n
"},{"location":"deploying/Poetry/#using-extras","title":"Using Extras","text":"

Extras allow you to specify additional dependencies based on project requirements.

"},{"location":"deploying/Poetry/#installing-with-extras","title":"Installing with Extras","text":"
poetry install -E extras-name\n

for example

poetry install -E \"askar bbs indy\"\n
"},{"location":"deploying/Poetry/#managing-development-dependencies","title":"Managing Development Dependencies","text":"

Development dependencies are useful for tasks like testing, linting, and documentation generation.

"},{"location":"deploying/Poetry/#installing-development-dependencies","title":"Installing Development Dependencies","text":"
poetry install --dev\n
"},{"location":"deploying/Poetry/#additional-resources","title":"Additional Resources","text":"
  • Poetry Documentation
  • PyPI: The Python Package Index
"},{"location":"deploying/RedisPlugins/","title":"ACA-Py Redis Plugins","text":""},{"location":"deploying/RedisPlugins/#aries-acapy-plugin-redis-events-redis_queue","title":"aries-acapy-plugin-redis-events redis_queue","text":"

It provides a mechanism to persist both inbound and outbound messages using Redis, deliver messages and webhooks, and dispatch events.

More details can be found here.

"},{"location":"deploying/RedisPlugins/#redis-queue-configuration-yaml","title":"Redis Queue configuration yaml","text":"
redis_queue:\n  connection: \n    connection_url: \"redis://default:test1234@172.28.0.103:6379\"\n\n  ### For Inbound ###\n  inbound:\n    acapy_inbound_topic: \"acapy_inbound\"\n    acapy_direct_resp_topic: \"acapy_inbound_direct_resp\"\n\n  ### For Outbound ###\n  outbound:\n    acapy_outbound_topic: \"acapy_outbound\"\n    mediator_mode: false\n\n  ### For Event ###\n  event:\n    event_topic_maps:\n      ^acapy::webhook::(.*)$: acapy-webhook-$wallet_id\n      ^acapy::record::([^:]*)::([^:]*)$: acapy-record-with-state-$wallet_id\n      ^acapy::record::([^:])?: acapy-record-$wallet_id\n      acapy::basicmessage::received: acapy-basicmessage-received\n      acapy::problem_report: acapy-problem_report\n      acapy::ping::received: acapy-ping-received\n      acapy::ping::response_received: acapy-ping-response_received\n      acapy::actionmenu::received: acapy-actionmenu-received\n      acapy::actionmenu::get-active-menu: acapy-actionmenu-get-active-menu\n      acapy::actionmenu::perform-menu-action: acapy-actionmenu-perform-menu-action\n      acapy::keylist::updated: acapy-keylist-updated\n      acapy::revocation-notification::received: acapy-revocation-notification-received\n      acapy::revocation-notification-v2::received: acapy-revocation-notification-v2-received\n      acapy::forward::received: acapy-forward-received\n    event_webhook_topic_maps:\n      acapy::basicmessage::received: basicmessages\n      acapy::problem_report: problem_report\n      acapy::ping::received: ping\n      acapy::ping::response_received: ping\n      acapy::actionmenu::received: actionmenu\n      acapy::actionmenu::get-active-menu: get-active-menu\n      acapy::actionmenu::perform-menu-action: perform-menu-action\n      acapy::keylist::updated: keylist\n    deliver_webhook: true\n
  • redis_queue.connection.connection_url: Required; expected in redis://{username}:{password}@{host}:{port} format.
  • redis_queue.inbound.acapy_inbound_topic: The topic prefix for the inbound message queues. The recipient key of the message is also included in the complete topic name; the final topic will be in the format acapy_inbound_{recip_key}.
  • redis_queue.inbound.acapy_direct_resp_topic: Queue topic name for direct responses to inbound messages.
  • redis_queue.outbound.acapy_outbound_topic: Queue topic name for the outbound messages. Used by the Deliverer service to deliver the payloads to the specified endpoint.
  • redis_queue.outbound.mediator_mode: Set to true if using Redis as an HTTP bridge when setting up a mediator agent. By default, it is set to false.
  • event.event_topic_maps: Map of event topic patterns (regular expressions) to queue topics (see the sketch after this list).
  • event.event_webhook_topic_maps: Map of event topics to webhook topics.
  • event.deliver_webhook: When set to true, webhooks will be delivered to the endpoints specified in admin.webhook_urls. By default, set to true.
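To illustrate how the event topic maps are applied, here is a minimal sketch of the pattern matching (not the plugin's actual implementation); the event topic and wallet id are illustrative:

import re\n\nEVENT_TOPIC_MAPS = {\n    r\"^acapy::webhook::(.*)$\": \"acapy-webhook-$wallet_id\",\n    r\"^acapy::record::([^:]*)::([^:]*)$\": \"acapy-record-with-state-$wallet_id\",\n    r\"acapy::basicmessage::received\": \"acapy-basicmessage-received\",\n}\n\n\ndef map_event_topic(event_topic, wallet_id):\n    \"\"\"Return the queue topic for the first matching pattern, if any.\"\"\"\n    for pattern, queue_topic in EVENT_TOPIC_MAPS.items():\n        if re.match(pattern, event_topic):\n            return queue_topic.replace(\"$wallet_id\", wallet_id)\n    return None\n\n\nprint(map_event_topic(\"acapy::webhook::connections\", \"alice\"))  # acapy-webhook-alice\n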
"},{"location":"deploying/RedisPlugins/#redis-plugin-usage","title":"Redis Plugin Usage","text":""},{"location":"deploying/RedisPlugins/#redis-plugin-with-docker","title":"Redis Plugin With Docker","text":"

Running the plugin with docker is simple. An example docker-compose.yml file is available which launches both ACA-Py (with the redis plugin) and an accompanying Redis cluster.

docker-compose up --build -d\n

More details can be found here.

"},{"location":"deploying/RedisPlugins/#without-docker","title":"Without Docker","text":"

Installation

pip install git+https://github.com/bcgov/aries-acapy-plugin-redis-events.git\n

Start up ACA-Py with the redis_queue plugin loaded

docker network create --subnet=172.28.0.0/24 `network_name`\nexport REDIS_PASSWORD=\" ... As specified in redis_cluster.conf ... \"\nexport NETWORK_NAME=\"`network_name`\"\naca-py start \\\n    --plugin redis_queue.v1_0.events \\\n    --plugin-config plugins-config.yaml \\\n    -it redis_queue.v1_0.inbound redis 0 -ot redis_queue.v1_0.outbound\n    # ... the remainder of your startup arguments\n

Regardless of the options above, you will need to start up the deliverer and a relay/mediator service as a bridge to receive inbound messages. Consider the following when building your docker-compose file, which should also start up your Redis cluster:

  • Relay + Deliverer

    relay:\n    image: redis-relay\n    build:\n        context: ..\n        dockerfile: redis_relay/Dockerfile\n    ports:\n        - 7001:7001\n        - 80:80\n    environment:\n        - REDIS_SERVER_URL=redis://default:test1234@172.28.0.103:6379\n        - TOPIC_PREFIX=acapy\n        - STATUS_ENDPOINT_HOST=0.0.0.0\n        - STATUS_ENDPOINT_PORT=7001\n        - STATUS_ENDPOINT_API_KEY=test_api_key_1\n        - INBOUND_TRANSPORT_CONFIG=[[\"http\", \"0.0.0.0\", \"80\"]]\n        - TUNNEL_ENDPOINT=http://relay-tunnel:4040\n        - WAIT_BEFORE_HOSTS=15\n        - WAIT_HOSTS=redis-node-3:6379\n        - WAIT_HOSTS_TIMEOUT=120\n        - WAIT_SLEEP_INTERVAL=1\n        - WAIT_HOST_CONNECT_TIMEOUT=60\n    depends_on:\n        - redis-cluster\n        - relay-tunnel\n    networks:\n        - acapy_default\ndeliverer:\n    image: redis-deliverer\n    build:\n        context: ..\n        dockerfile: redis_deliverer/Dockerfile\n    ports:\n        - 7002:7002\n    environment:\n        - REDIS_SERVER_URL=redis://default:test1234@172.28.0.103:6379\n        - TOPIC_PREFIX=acapy\n        - STATUS_ENDPOINT_HOST=0.0.0.0\n        - STATUS_ENDPOINT_PORT=7002\n        - STATUS_ENDPOINT_API_KEY=test_api_key_2\n        - WAIT_BEFORE_HOSTS=15\n        - WAIT_HOSTS=redis-node-3:6379\n        - WAIT_HOSTS_TIMEOUT=120\n        - WAIT_SLEEP_INTERVAL=1\n        - WAIT_HOST_CONNECT_TIMEOUT=60\n    depends_on:\n        - redis-cluster\n    networks:\n        - acapy_default\n
  • Mediator + Deliverer

    mediator:\n    image: acapy-redis-queue\n    build:\n        context: ..\n        dockerfile: docker/Dockerfile\n    ports:\n        - 3002:3001\n    depends_on:\n        - deliverer\n    volumes:\n        - ./configs:/home/indy/configs:z\n        - ./acapy-endpoint.sh:/home/indy/acapy-endpoint.sh:z\n    environment:\n        - WAIT_BEFORE_HOSTS=15\n        - WAIT_HOSTS=redis-node-3:6379\n        - WAIT_HOSTS_TIMEOUT=120\n        - WAIT_SLEEP_INTERVAL=1\n        - WAIT_HOST_CONNECT_TIMEOUT=60\n        - TUNNEL_ENDPOINT=http://mediator-tunnel:4040\n    networks:\n        - acapy_default\n    entrypoint: /bin/sh -c '/wait && ./acapy-endpoint.sh poetry run aca-py \"$$@\"' --\n    command: start --arg-file ./configs/mediator.yml\n\ndeliverer:\n    image: redis-deliverer\n    build:\n        context: ..\n        dockerfile: redis_deliverer/Dockerfile\n    depends_on:\n        - redis-cluster\n    ports:\n        - 7002:7002\n    environment:\n        - REDIS_SERVER_URL=redis://default:test1234@172.28.0.103:6379\n        - TOPIC_PREFIX=acapy\n        - STATUS_ENDPOINT_HOST=0.0.0.0\n        - STATUS_ENDPOINT_PORT=7002\n        - STATUS_ENDPOINT_API_KEY=test_api_key_2\n        - WAIT_BEFORE_HOSTS=15\n        - WAIT_HOSTS=redis-node-3:6379\n        - WAIT_HOSTS_TIMEOUT=120\n        - WAIT_SLEEP_INTERVAL=1\n        - WAIT_HOST_CONNECT_TIMEOUT=60\n    networks:\n        - acapy_default\n

Both relay and mediator demos are also available.

"},{"location":"deploying/RedisPlugins/#aries-acapy-cache-redis-redis_cache","title":"aries-acapy-cache-redis redis_cache","text":"

ACA-Py uses a modular cache layer to store key-value pairs of data. The purpose of this plugin is to allow ACA-Py to use Redis as the storage medium for its caching needs.

More details can be found here.

"},{"location":"deploying/RedisPlugins/#redis-cache-plugin-configuration-yaml","title":"Redis Cache Plugin configuration yaml","text":"
redis_cache:\n  connection: \"redis://default:test1234@172.28.0.103:6379\"\n  max_connection: 50\n  credentials:\n    username: \"default\"\n    password: \"test1234\"\n  ssl:\n    cacerts: ./ca.crt\n
  • redis_cache.connection: This is required and is expected in redis://{username}:{password}@{host}:{port} format.
  • redis_cache.max_connection: Maximum number of redis pool connections. Default: 50
  • redis_cache.credentials.username: Redis instance username
  • redis_cache.credentials.password: Redis instance password
  • redis_cache.ssl.cacerts
"},{"location":"deploying/RedisPlugins/#redis-cache-usage","title":"Redis Cache Usage","text":""},{"location":"deploying/RedisPlugins/#redis-cache-using-docker","title":"Redis Cache Using Docker","text":"
  • Running the plugin with docker is straightforward. There is an example docker-compose.yml file in the root of the project that launches both ACA-Py and an accompanying Redis instance. Running it is as simple as:

    docker-compose up --build -d\n
  • To launch ACA-Py with an accompanying redis cluster of 6 nodes (3 primaries and 3 replicas), please refer to example docker-compose.cluster.yml and run the following:

    Note: Cluster requires external docker network with specified subnet

    docker network create --subnet=172.28.0.0/24 `network_name`\nexport REDIS_PASSWORD=\" ... As specified in redis_cluster.conf ... \"\nexport NETWORK_NAME=\"`network_name`\"\ndocker-compose -f docker-compose.cluster.yml up --build -d\n
"},{"location":"deploying/RedisPlugins/#redis-cache-without-docker","title":"Redis Cache Without Docker","text":"

Installation

pip install git+https://github.com/Indicio-tech/aries-acapy-cache-redis.git\n

Start up ACA-Py with the redis_cache plugin loaded

aca-py start \\\n    --plugin acapy_cache_redis.v0_1 \\\n    --plugin-config plugins-config.yaml \\\n    # ... the remainder of your startup arguments\n

or

aca-py start \\\n    --plugin acapy_cache_redis.v0_1 \\\n    --plugin-config-value \"redis_cache.connection=redis://redis-host:6379/0\" \\\n    --plugin-config-value \"redis_cache.max_connections=90\" \\\n    --plugin-config-value \"redis_cache.credentials.username=username\" \\\n    --plugin-config-value \"redis_cache.credentials.password=password\" \\\n    # ... the remainder of your startup arguments\n
"},{"location":"deploying/RedisPlugins/#redis-cluster","title":"Redis Cluster","text":"

If you start up a Redis cluster and an ACA-Py agent loaded with the redis_queue plugin, the redis_cache plugin, or both, then during plugin initialization an instance of redis.asyncio.RedisCluster will be bound onto the root_profile. Other plugins will have access to this Redis client for their functioning. This is done for efficiency and to avoid duplication of resources.

"},{"location":"deploying/UpgradingACA-Py/","title":"Upgrading ACA-Py Data","text":"

Some releases of ACA-Py may be improved by, or even require, an upgrade when moving to a new version. Such changes are documented in the CHANGELOG.md, and those with ACA-Py deployments should take note of those upgrades. This document summarizes the upgrade system in ACA-Py.

"},{"location":"deploying/UpgradingACA-Py/#version-information-and-automatic-upgrades","title":"Version Information and Automatic Upgrades","text":"

The file version.py contains the current version of a running instance of ACA-Py. In addition, a record is made in the ACA-Py secure storage (database) about the \"most recently upgraded\" version. When deploying a new version of ACA-Py, the version.py value will be higher than the version in secure storage. When that happens, an upgrade is executed, and on successful completion, the version is updated in secure storage to match what is in version.py.

Upgrades are defined in the Upgrade Definition YML file. For a given version listed in the file, the corresponding entry defines the actions required when upgrading from a previous version. If a version is not listed in the file, there is no upgrade defined for that version from its immediate predecessor version.

Once an upgrade is identified as needed, the process is as follows (a code sketch follows the list):

  • Collect (if any) the actions to be taken to get from the version recorded in secure storage to the current version.py
  • Execute the actions from oldest to newest.
  • If the same action is collected more than once (e.g., \"Resave the Connection Records\" is defined for two different versions), perform the action only once.
  • Store the current ACA-Py version (from version.py) in the secure storage database.
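A minimal sketch of that collection logic (the version keys and action names are illustrative, not the actual contents of the Upgrade Definition YML file):

def collect_upgrade_actions(definitions, from_version, to_version):\n    \"\"\"Gather the actions for every version after from_version up to and\n    including to_version, oldest first, keeping each distinct action once.\"\"\"\n    actions = []\n    for version in sorted(definitions):  # naive sort; fine for these illustrative keys\n        if from_version < version <= to_version:\n            for action in definitions[version]:\n                if action not in actions:  # perform a repeated action only once\n                    actions.append(action)\n    return actions\n\n\ndefinitions = {\n    \"0.7.2\": [\"resave_connection_records\"],\n    \"0.8.1\": [\"resave_connection_records\", \"populate_version_record\"],\n}\nprint(collect_upgrade_actions(definitions, \"0.7.1\", \"0.8.1\"))\n# ['resave_connection_records', 'populate_version_record']\n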
"},{"location":"deploying/UpgradingACA-Py/#forced-offline-upgrades","title":"Forced Offline Upgrades","text":"

In some cases, it may be necessary to do an offline upgrade, where ACA-Py is taken off line temporarily, the database upgraded explicitly, and then ACA-Py re-deployed as normal. As yet, we do not have any use cases for this, but those deploying ACA-Py should be aware of this possibility. For example, we may at some point need an upgrade that MUST NOT be executed by more than one ACA-Py instance. In that case, a \"normal\" upgrade could be dangerous for deployments on container orchestration platforms like Kubernetes.

If the Maintainers of ACA-Py recognize a case where ACA-Py must be upgraded while offline, a new Upgrade feature will be added that will prevent the \"auto upgrade\" process from executing. See Issue 2201 and Pull Request 2204 for the status of that feature.

Those deploying ACA-Py upgrades for production installations (forced offline or not) should check in each CHANGELOG.md release entry about what upgrades (if any) will be run when upgrading to that version, and consider how they want those upgrades to run in their ACA-Py installation. In most cases, simply deploying the new version should be OK. If the number of records to be upgraded is high (such as a \"resave connections\" upgrade to a deployment with many, many connections), you may want to do a test upgrade offline first, to see if there is likely to be a service disruption during the upgrade. Plan accordingly!

"},{"location":"deploying/UpgradingACA-Py/#tagged-upgrades","title":"Tagged upgrades","text":"

Upgrades are defined in the Upgrade Definition YML file; in addition to specifying upgrade actions by version, they can also be specified by named tags. Unlike version-based upgrades, where all applicable version-based actions will be performed in sorted order of versions, with named tags only the actions corresponding to the provided tags will be performed. Note: --force-upgrade is required when running a named-tag based upgrade (i.e., when providing --named-tag).

Tags are specified in YML file as below:

fix_issue_rev_reg:\n  fix_issue_rev_reg_records: true\n

Example:

 ./scripts/run_docker upgrade --force-upgrade --named-tag fix_issue_rev_reg\n\n# In case, running multiple tags [say test1 & test2]:\n ./scripts/run_docker upgrade --force-upgrade --named-tag test1 --named-tag test2\n
"},{"location":"deploying/UpgradingACA-Py/#subwallet-upgrades","title":"Subwallet upgrades","text":"

With multitenant enabled, there is a subwallet associated with each tenant profile, so those subwallets need to be upgraded in addition to the base wallet associated with the root profile.

There are 2 options to perform such upgrades:

  • --upgrade-all-subwallets

This will apply the upgrade steps to all subwallets (tenant profiles) and the base wallet (root profile).

  • --upgrade-subwallet

This will apply the upgrade steps to the specified subwallets (identified by wallet id) and the base wallet.

Note: multiple --upgrade-subwallet options may be provided.

"},{"location":"deploying/UpgradingACA-Py/#exceptions","title":"Exceptions","text":"

There are a couple of upgrade exception conditions to consider, as outlined in the following sections.

"},{"location":"deploying/UpgradingACA-Py/#no-version-in-secure-storage","title":"No version in secure storage","text":"

Versions prior to ACA-Py 0.8.1 did not automatically populate the secure storage \"version\" record. That only occurred if an upgrade was explicitly executed. As of ACA-Py 0.8.1, the version record is added immediately after the secure storage database is created. If you are upgrading to ACA-Py 0.8.1 or later, and there is no version record in the secure storage, ACA-Py will assume you are running version 0.7.5, and execute the upgrades from version 0.7.5 to the current version. The choice of 0.7.5 as the default is safe because the same upgrades will be run on any version of ACA-Py up to and including 0.7.5, as can be seen in the Upgrade Definition YML file. Thus, even if you are really upgrading from (for example) 0.6.2, the same upgrades are needed as from 0.7.5 to a post-0.8.1 version.

"},{"location":"deploying/UpgradingACA-Py/#forcing-an-upgrade","title":"Forcing an upgrade","text":"

If you need to force an upgrade from a given version of ACA-Py, a pair of configuration options can be used together. If you specify \"--from-version <ver>\" and \"--force-upgrade\", the --from-version version will override what is found (or not) in secure storage, and the upgrade will be from that version to the current one. For example, if you have \"0.8.1\" in your \"secure storage\" version, and you know that the upgrade for version 0.8.1 has not been executed, you can use the parameters --from-version v0.7.5 --force-upgrade to force the upgrade on next starting an ACA-Py instance. However, given the few upgrades defined prior to version 0.8.1, and the \"no version in secure storage\" handling, it is unlikely this capability will ever be needed. We expect to deprecate and remove these options in future (post-0.8.1) ACA-Py versions.

"},{"location":"deploying/deploymentModel/","title":"Deployment Model","text":""},{"location":"deploying/deploymentModel/#aries-cloud-agent-python-aca-py-deployment-model","title":"Aries Cloud Agent-Python (ACA-Py) - Deployment Model","text":"

This document is a \"concept of operations\" for an instance of an Aries cloud agent deployed from the primary artifact (a PyPi package) produced by this repo. In such a deployment there are always two components - a configured agent itself, and a controller that injects into that agent the business rules for the particular agent instance (see diagram).

The deployed agent messages with other agents via DIDComm protocols and, as events associated with those messages occur, sends webhook HTTP notifications to the controller. The agent also exposes, for the controller's exclusive use, an HTTP API covering all of the administrative handlers for those events. The controller receives the notifications from the agent, decides (with business rules - possibly by asking a person using a UI) how to respond to the event, and calls back to the agent via the HTTP API. Of course, the controller may also initiate events (e.g., messaging another agent) by calling that same API.

The following is an example of the interactions involved in creating a connection using the DIDComm \"Establish Connection\" protocol. The controller requests a connection invitation from the agent (via the administrative API) and receives one back. The controller provides it to another agent (perhaps by displaying it in a QR code). Shortly after, the agent receives a DIDComm \"Connection Request\" message, which it sends to the controller. The controller decides to accept the connection and calls the API with instructions to the agent to send a \"Connection Response\" message to the other agent. Since the controller always wants to know with whom a connection has been created, the controller also sends instructions to the agent (via the API, of course) to send a request presentation message to the new connection. And so on... During the interactions, the agent tracks the state of the connections and the state of the protocol instances (threads). Likewise, the controller may also retain state - after all, it's an application that could do anything.
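A minimal controller sketch of that loop, using aiohttp. It assumes the agent's admin API is at localhost:8031 with no admin API key, and that ACA-Py was started with a webhook URL pointing at this app (ACA-Py posts events to <webhook-url>/topic/<topic>/); the \"always accept\" rule stands in for real business logic:

from aiohttp import ClientSession, web\n\nADMIN_URL = \"http://localhost:8031\"  # assumed admin API location\n\n\nasync def connections_webhook(request):\n    \"\"\"Business rule for connection events: accept every request.\"\"\"\n    event = await request.json()\n    if event.get(\"state\") == \"request\":\n        async with ClientSession() as session:\n            await session.post(\n                f\"{ADMIN_URL}/didexchange/{event['connection_id']}/accept-request\"\n            )\n    return web.Response()\n\n\napp = web.Application()\napp.add_routes([web.post(\"/topic/connections/\", connections_webhook)])\n\nif __name__ == \"__main__\":\n    web.run_app(app, port=8022)  # illustrative controller port\n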

Most developers will configure a \"black box\" instance of ACA-Py. They need to know how it works, the DIDComm protocols it supports, the events it will generate and the administrative API it exposes. However, they don't need to drill into and maintain the ACA-Py code. Such developers will build controller applications (basically, traditional web apps) that, at their simplest, use an HTTP interface to receive notifications and send HTTP requests to the agent. It's the business logic implemented in, or accessed by, the controller that gives the deployment its personality and role.

Note: the ACA-Py agent is designed to be stateless, persisting connection and protocol state to storage (such as a Postgres database). As such, agents can be deployed to support horizontal scaling as necessary. Controllers can also be implemented to support horizontal scaling.

The sections below detail the internals of ACA-Py and its configurable elements, and the conceptual elements of a controller. There is no \"Aries controller\" repo to fork, as a controller is essentially just a web app. There are demos of using the elements in this repo, and several sample applications that you can use to get started on your own controller.

"},{"location":"deploying/deploymentModel/#aries-cloud-agent","title":"Aries Cloud Agent","text":"

The Aries cloud agent implements services to manage the execution of DIDComm messaging protocols for interacting with other DIDComm agents, and exposes an administrative HTTP API that allows a controller to direct how the agent should respond to messaging events. The agent relies on the controller to provide the business rules for handling the messaging events, and to initiate the execution of new DIDComm protocol instances. The internals of an ACA-Py instance are diagrammed below.

Instances of the Aries cloud agents are configured with the following sub-components:

  • Transport Plugins - pluggable transport-specific message sender/receiver modules that interact with other agents. Messages outside the plugins are transport-agnostic JSON structures. Current modules include HTTP and WebSockets. In the future, we might add ZMQ, SMTP and so on.
  • Conductor receives inbound messages from, and sends outbound messages to, the transport plugins. After internal processing, the conductor passes inbound messages to, and receives outbound messages from, the Dispatcher. In processing the messages, the conductor manages the message\u2019s protocol instance thread state, retrieving the state on inbound messages and saving the state on outbound messages. The conductor handles generic decorators in messages such as verifying and generating signatures on message data elements, internationalization and so on.
  • Dispatcher handles the distribution of messages to the DIDComm protocol message handlers and the responses received. The dispatcher passes to the conductor the thread state to be persisted and any message data to be sent out from the Aries cloud agent instance.
  • DIDComm Protocols - implement the DIDComm protocols supported by the agent instance, including the state object for the protocol, the DIDComm message handlers and the admin message handlers. Protocols are bundled as Python modules and loaded during agent deployment. Each protocol contributes the admin messages for the protocol to the controller REST interface. The protocols emit a number of events that invoke the controller via webhooks so that the controller's business logic can respond to them.
  • Controller REST API - a dynamically generated REST API (with a Swagger/OpenAPI user interface) based on the set of DIDComm protocols included in the agent deployment. The controller, activated via the webhooks from the protocol DIDComm message handlers, controls the Aries cloud agent by calling the REST API endpoints that invoke the protocol admin message handlers.
  • Handler API - provides abstract interfaces to various handlers needed by the protocols and core Aries cloud agent components for accessing the secure storage (wallet), other storage, the ledger and so on. The API calls the handler implementations configured into the agent deployment.
  • Handler Plugins - are the handler implementations called from the Handler API. The plugins may be internal to the Agent (in the same process space) or could be external (for example, in other processes/containers).
  • Secure Storage Plugin - the Indy SDK is embedded in the Aries cloud agent and implements the default secure storage. An Aries cloud agent can be configured to use one of a number of indy-sdk storage implementations - in-memory, SQLite and Postgres at this time.
  • Ledger Interface Plugin - In the current Aries cloud agent implementation, the Indy SDK provides an interface to an Indy-based public ledger for verifiable credential protocols. In future, ledger implementations (including those other than Indy) might be moved into the DIDComm protocol modules to be included as needed within a configured Aries cloud agent instance based on the DIDComm protocols used by the agent.
"},{"location":"deploying/deploymentModel/#controller","title":"Controller","text":"

A controller provides the personality of an Aries cloud agent instance - the business logic (human, machine or rules driven) that drives the behaviour of the agent. The controller's \u201cBusiness Logic\u201d in a cloud agent could be built into the controller app, could be an integration back to an enterprise system, or could even be a user interface for an individual. In all cases, the business logic provides responses to agent events or initiates agent actions. A deployed controller talks to a single Aries cloud agent deployment and manages the configuration of that agent. Both can be configured and deployed to support horizontal scaling.

Generically, a controller is a web app invoked by HTTP webhook calls from its corresponding Aries cloud agent and invoking the DIDComm administration capabilities of the Aries cloud agent by calling the REST API exposed by that cloud agent. As well as responding to Aries cloud agent events, the controller initiates DIDComm protocol instances using the same REST API.

The controller and Aries cloud agent deployment MUST secure the HTTP interface between the two components. The interface provides the same HTTP integration between services as modern apps found in any enterprise today, and must be correspondingly secured.

A controller implements the following capabilities.

  • Initiator - provides a mechanism to initiate new DIDComm protocol instances. The initiator invokes the REST API exposed by the Aries cloud agent to initiate the creation of a DIDComm protocol instance. For example, a permit-issuing service uses this mechanism to issue a Verifiable Credential associated with the issuance of a new permit.
  • Responder - subscribes to and responds to events from the Aries cloud agent protocol message handlers, providing business-driven responses. The responder might respond immediately, or the event might cause a delay while the decision is determined, perhaps by sending the request to a person to decide. The controller may persist the event response state if the event is asynchronous - for example, when the event is passed to a person who may respond when they next use the web app.
  • Configuration - manages the controller configuration data and the configuration of the Aries cloud agent. Configuration in this context includes things like:
    • Credentials and Proof Requests to be Issued/Verified (respectively) by the Aries cloud agent.
    • The configuration of the webhook handler to which the responder subscribes.

While there are several examples of controllers, there is no \u201ccookie cutter\u201d repository to fork and customize. A controller is just a web service that receives HTTP requests (webhooks) and sends HTTP messages to the Aries cloud agent it controls via the REST API exposed by that agent.

"},{"location":"deploying/deploymentModel/#deployment","title":"Deployment","text":"

The Aries cloud agent CI pipeline configured into the repository generates a PyPI package as an artifact. Implementers will generally have a controller repository, possibly copied from an existing controller instance, that has the code (business logic) for the controller and the configuration (transports, handlers, DIDComm protocols, etc.) for the Aries cloud agent instance. In the most common scenario, the Aries cloud agent and controller instances will be deployed based on the artifacts (e.g. container images) generated from that controller repository. With the simple HTTP-based interface between the controller and Aries cloud agent, both components can be horizontally scaled as needed, with a load balancer between the components. Configuring the Aries cloud agent to use the Postgres wallet supports enterprise-scale agent deployments.

Current examples of deployed instances of Aries cloud agent and controllers include:

  • indy-email-verification - a web app Controller that sends an email to a given email address with an embedded DIDComm invitation and on establishment of a connection, offers and provides the connected agent with an email control verifiable credential.
  • iiwbook - a web app Controller that on creation of a DIDComm connection, requests a proof of email control, and then sends to the connection a verifiable credential proving attendance at IIW. In between the proof and issuance is a human approval step using a simple web-based UI that implements a request queue.
"},{"location":"design/AnoncredsW3CCompatibility/","title":"Supporting AnonCreds in W3C VC/VP Formats in Aries Cloud Agent Python","text":"

This design proposes to extend the Aries Cloud Agent Python (ACA-Py) to support Hyperledger AnonCreds credentials and presentations in the W3C Verifiable Credentials (VC) and Verifiable Presentations (VP) formats. The aim is to transition from the legacy AnonCreds format specified in the Aries-Legacy-Method to the W3C VC format.

"},{"location":"design/AnoncredsW3CCompatibility/#overview","title":"Overview","text":"

The pre-requisites for the work are:

  • The availability of the AnonCreds RS library supporting the generation and processing of AnonCreds VCs in W3C VC format.
  • The availability of the AnonCreds RS library supporting the generation and verification of AnonCreds VPs in W3C VP format.
  • The availability of support in the AnonCreds RS Python Wrapper for the W3C VC/VP capabilities in AnonCreds RS.
  • Agreement on the Aries Issue Credential v2.0 and Present Proof v2.0 protocol attachment formats to use when issuing AnonCreds W3C VC format credentials, and when presenting AnonCreds W3C VP format presentations.
  • For issuing, use the (proposed) RFC 0809 VC-DI Attachments
  • For presenting, use the RFC 0510 DIF Presentation Exchange Attachments

As of 2024-01-15, these pre-requisites have been met.

"},{"location":"design/AnoncredsW3CCompatibility/#impacts-on-aca-py","title":"Impacts on ACA-Py","text":""},{"location":"design/AnoncredsW3CCompatibility/#issuer","title":"Issuer","text":"

Issuer support needs to be added for using the RFC 0809 VC-DI attachment format when sending Issue Credential v2.0 protocol offer and issue messages, and when receiving request messages.

Related notes:

  • The Issue Credential v1.0 protocol will not be updated to support AnonCreds W3C VC format credentials.
  • Once an instance of the Issue Credential v2.0 protocol is started using RFC 0809 VC-DI format attachments, subsequent messages in the protocol MUST use RFC 0809 VC-DI attachments.
  • The ACA-Py maintainers are discussing the possibility of making pluggable the Issue Credential v2.0 and Present Proof v2.0 attachment formats, to simplify supporting additional formats, including RFC 0809 VC-DI.

A mechanism must be defined such that an Issuer controller can use the ACA-Py Admin API to initiate the sending of an AnonCreds credential Offer using the RFC 0809 VC-DI attachment format.

A credential's encoded attributes are not included in the issued AnonCreds W3C VC format credential. It is to be determined how that impacts the issuing process.

"},{"location":"design/AnoncredsW3CCompatibility/#verifier","title":"Verifier","text":"

A verifier wanting a W3C VP Format presentation will send the Present Proof v2.0 request message with an RFC 0510 DIF Presentation Exchange format attachment.

If needed, the RFC 0510 DIF Presentation Exchange document will be clarified and possibly updated to enable its use for handling AnonCreds W3C VP format presentations.

An AnonCreds W3C VP format presentation does not include the encoded revealed attributes, so the encoded values must be calculated as needed. It is to be determined where those values will be needed.
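
For reference, a simplified sketch of the conventional legacy AnonCreds attribute encoding (32-bit integers pass through unchanged; any other value is hashed), which a holder or verifier could compute on the fly:

import hashlib\n\ndef legacy_encode(value) -> str:\n    \"\"\"Simplified sketch of the conventional legacy AnonCreds encoding:\n    a 32-bit integer is encoded as itself; any other value is the\n    SHA-256 hash of its string form, expressed as a decimal integer.\"\"\"\n    if isinstance(value, int) and -(2**31) <= value < 2**31:\n        return str(value)\n    digest = hashlib.sha256(str(value).encode(\"utf-8\")).digest()\n    return str(int.from_bytes(digest, \"big\"))\n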

"},{"location":"design/AnoncredsW3CCompatibility/#holder","title":"Holder","text":"

A holder must support RFC 0809 VC-DI attachments when receiving Issue Credential v2.0 offer and issue messages, and when sending request messages.

On receiving an Issue Credential v2.0 offer message with an RFC 0809 VC-DI attachment, the holder MUST respond using RFC 0809 VC-DI attachments in the subsequent request message.

On receiving a credential from an issuer in an RFC 0809 VC-DI attachment, the holder must process and store the credential for subsequent use in presentations.

  • The AnonCreds verifiable credential MUST support being used in both legacy AnonCreds and W3C VP format (DIF Presentation Exchange) presentations.

On receiving an RFC 0510 DIF Presentation Exchange request message, a holder must include AnonCreds verifiable credentials in the search for credentials satisfying the request, and if found and selected for use, must construct the presentation using the RFC 0510 DIF Presentation Exchange presentation format, with an embedded AnonCreds W3C VP format presentation.

"},{"location":"design/AnoncredsW3CCompatibility/#issues-to-consider","title":"Issues to consider","text":"
  • If and how the W3C VC Format attachments for the Issue Credential V2.0 and Present Proof V2 Aries DIDComm Protocols should be used when using AnonCreds W3C VC Format credentials. Anticipated triggers:
  • An Issuer Controller invokes the Admin API to trigger an Issue Credential v2.0 protocol instance such that the RFC 0809 VC-DI will be used.
  • A Holder receives an Issue Credential v2.0 offer message with an RFC 0809 VC-DI attachment.
  • A Verifier initiates a Present Proof v2.0 protocol instance with an RFC 0510 DIF Presentation Exchange that can be satisfied by AnonCreds VCs held by the holder.
  • A Holder receives a present proof request message with an RFC 0510 DIF Presentation Exchange format attachment that can be satisfied with AnonCreds credentials held by the holder.
    • How are the restrictions and revocation data elements conveyed?
  • How AnonCreds W3C VC Format verifiable credentials are stored by the holder such that they will be discoverable when needed for creating verifiable presentations.
  • How and when multiple signatures can/should be added to a W3C VC Format credential, enabling both AnonCreds and non-AnonCreds signatures on a single credential and their use in presentations. Completing a multi-signature controller is out of scope; however, we want to ensure the design is fundamentally compatible with multi-sig credentials.
"},{"location":"design/AnoncredsW3CCompatibility/#flow-chart","title":"Flow Chart","text":""},{"location":"design/AnoncredsW3CCompatibility/#key-questions","title":"Key Questions","text":""},{"location":"design/AnoncredsW3CCompatibility/#what-is-the-roadmap-for-delivery-what-will-we-build-first-then-second","title":"What is the roadmap for delivery? What will we build first, then second?","text":"

It appears that the issue and presentation sides can be approached independently, assuming that any stored AnonCreds VC can be used in an AnonCreds W3C VP format presentation.

"},{"location":"design/AnoncredsW3CCompatibility/#issue-credential","title":"Issue Credential","text":"
  1. Update Admin API endpoints to initiate an Issue Credential v2.0 protocol to issue an AnonCreds credential in W3C VC format using RFC 0809 VC-DI format attachments.
  2. Add support for the RFC 0809 VC-DI message attachment formats.
  3. Should the attachment format be made pluggable as part of this? From the maintainers: If we did make it pluggable, this would be the point where that would take place. Since these values are hard coded, it is not currently pluggable, as noted. I've been dissatisfied with how this particular piece works for a while. I think making it pluggable, if done right, could help clean it up nicely. A plugin would then define its own implementation of V20CredFormatHandler. (@dbluhm)
  4. Update the v2.0 Issue Credential protocol handler to support a \"RFC 0809 VC-DI mode\" such that when a protocol instance starts with that format, it continues with it until completion, supporting issuing AnonCreds credentials in the process. This includes both the sending and receiving of all protocol message types.
"},{"location":"design/AnoncredsW3CCompatibility/#present-proof","title":"Present Proof","text":"
  1. Adjust as needed the sending of a Present Proof request using the RFC 0510 DIF Presentation Exchange with support (to be defined) for requesting AnonCreds VCs.
  2. Adjust as needed the processing of a Present Proof request message with an RFC 0510 DIF Presentation Exchange attachment so that AnonCreds VCs can be found and used in the subsequent response.
  3. AnonCreds VCs issued as legacy or W3C VC format credentials should be usable in AnonCreds W3C VP format presentations.
  4. Update the creation of an RFC 0510 DIF Presentation Exchange presentation submission to support the use of AnonCreds VCs as the source of the VPs.
  5. Update the verifier receipt of a Present Proof v2.0 presentation message with an RFC 0510 DIF Presentation Exchange containing AnonCreds W3C VP(s) derived from AnonCreds source VCs.
"},{"location":"design/AnoncredsW3CCompatibility/#what-are-the-functions-we-are-going-to-wrap","title":"What are the functions we are going to wrap?","text":"

After thoroughly reviewing the upcoming changes in anoncreds-rs PR273, the classes (subclasses of AnoncredsObject) impacted by the changes are as follows:

W3CCredential

  • class methods (create, load)
  • instance methods (process, to_legacy, add_non_anoncreds_integrity_proof, set_id, set_subject_id, add_context, add_type)
  • class properties (schema_id, cred_def_id, rev_reg_id, rev_reg_index)
  • bindings functions (create_w3c_credential, process_w3c_credential, _object_from_json, _object_get_attribute, w3c_credential_add_non_anoncreds_integrity_proof, w3c_credential_set_id, w3c_credential_set_subject_id, w3c_credential_add_context, w3c_credential_add_type)

W3CPresentation

  • class methods (create, load)
  • instance methods (verify)
  • bindings functions (create_w3c_presentation, _object_from_json, verify_w3c_presentation)

They will be added to __init__.py as additional exports of AnoncredsObject.

We also have to consider which existing classes (AnonCreds objects) have been modified.

The classes modified according to the same PR mentioned above are:

Credential

  • added class methods (from_w3c)
  • added instance methods (to_w3c)
  • added bindings functions (credential_from_w3c, credential_to_w3c)

PresentCredential

  • modified instance methods (_get_entry, add_attributes, add_predicates)
"},{"location":"design/AnoncredsW3CCompatibility/#creating-a-w3c-vc-credential-from-credential-definition-and-issuing-and-presenting-it-as-is","title":"Creating a W3C VC credential from credential definition, and issuing and presenting it as is","text":"

The issuance, presentation and verification of legacy AnonCreds credentials are implemented in this ./aries_cloudagent/anoncreds directory, so we will start from there.

Let us walk through these implementation examples for each of the agents involved - Issuer and Holder - as described in https://github.com/hyperledger/anoncreds-rs/blob/main/README.md. We will proceed through the following processes, comparing them with the legacy AnonCreds implementations and watching for signature differences between the two. Looking at the /anoncreds/issuer.py file, from the AnonCredsIssuer class:

Create VC_DI Credential Offer

According to this DI credential offer attachment format - didcomm/w3c-di-vc-offer@v0.1,

  • binding_required
  • binding_method
  • credential_definition

could be the parameters for the create_offer method.
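
As an illustrative sketch only (the field values below are placeholders, not taken from the attachment format specification), the attachment body handled by such a create_offer method might look like:

# Hypothetical didcomm/w3c-di-vc-offer@v0.1 attachment body; the\n# binding_method value and identifier are illustrative placeholders.\noffer_attachment = {\n    \"binding_required\": True,\n    \"binding_method\": \"anoncreds_link_secret\",\n    \"credential_definition\": \"<cred_def_id>\",\n}\n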

Create VC_DI Credential

NOTE: There have been some changes to the encoding of attribute values when creating a credential, so we will have to adjust to them.

async def create_credential(\n        self,\n        credential_offer: dict,\n        credential_request: dict,\n        credential_values: dict,\n    ) -> str:\n...\n...\n  try:\n    credential = await asyncio.get_event_loop().run_in_executor(\n        None,\n        lambda: W3CCredential.create(\n            cred_def.raw_value,\n            cred_def_private.raw_value,\n            credential_offer,\n            credential_request,\n            raw_values,\n            None,\n            None,\n            None,\n            None,\n        ),\n    )\n...\n

Create VC_DI Credential Request

async def create_vc_di_credential_request(\n        self, credential_offer: dict, credential_definition: CredDef, holder_did: str\n    ) -> Tuple[str, str]:\n...\n...\ntry:\n  secret = await self.get_master_secret()\n  (\n      cred_req,\n      cred_req_metadata,\n  ) = await asyncio.get_event_loop().run_in_executor(\n      None,\n      W3CCredentialRequest.create,\n      None,\n      holder_did,\n      credential_definition.to_native(),\n      secret,\n      AnonCredsHolder.MASTER_SECRET_ID,\n      credential_offer,\n  )\n...\n

Create VC_DI Credential Presentation

async def create_vc_di_presentation(\n        self,\n        presentation_request: dict,\n        requested_credentials: dict,\n        schemas: Dict[str, AnonCredsSchema],\n        credential_definitions: Dict[str, CredDef],\n        rev_states: dict = None,\n    ) -> str:\n...\n...\n  try:\n    secret = await self.get_master_secret()\n    presentation = await asyncio.get_event_loop().run_in_executor(\n        None,\n        Presentation.create,\n        presentation_request,\n        present_creds,\n        self_attest,\n        secret,\n        {\n            schema_id: schema.to_native()\n            for schema_id, schema in schemas.items()\n        },\n        {\n            cred_def_id: cred_def.to_native()\n            for cred_def_id, cred_def in credential_definitions.items()\n        },\n    )\n...\n
"},{"location":"design/AnoncredsW3CCompatibility/#converting-an-already-issued-legacy-anoncreds-to-vc_di-formatvice-versa","title":"Converting an already issued legacy anoncreds to VC_DI format(vice versa)","text":"

In this case, we can use the to_w3c method of the Credential class to convert from legacy to W3C format, and the to_legacy method of the W3CCredential class to convert from W3C format back to legacy.

We could call the to_w3c method on a legacy credential instance like this:

vc_di_cred = credential.to_w3c(cred_def)  # credential: a legacy Credential instance\n

and for to_legacy:

legacy_cred = w3c_cred.to_legacy()  # w3c_cred: a W3CCredential instance\n

We don't need to pass any parameters to it, as it calls the Credential.from_w3c() method under the hood.

"},{"location":"design/AnoncredsW3CCompatibility/#format-handler-for-issue_credential-v2_0-protocol","title":"Format Handler for Issue_credential V2_0 Protocol","text":"

Keeping in mind that we are trying to create anoncreds (not another type of VC) in W3C format, what if we add protocol-level vc_di format support by adding a new format VC_DI in ./protocols/issue_credential/v2_0/messages/cred_format.py -

# /protocols/issue_credential/v2_0/messages/cred_format.py\n\nclass Format(Enum):\n    \"\"\"Attachment Format.\"\"\"\n\n    INDY = FormatSpec(...)\n    LD_PROOF = FormatSpec(...)\n    VC_DI = FormatSpec(\n        \"vc_di/\",\n        CredExRecordVCDI,\n        DeferLoad(\n            \"aries_cloudagent.protocols.issue_credential.v2_0\"\n            \".formats.vc_di.handler.AnonCredsW3CFormatHandler\"\n        ),\n    )\n

And create a new CredExRecordVCDI modeled on V20CredExRecordLDProof:

# /protocols/issue_credential/v2_0/models/detail/w3c.py\n\nclass CredExRecordW3C(BaseRecord):\n    \"\"\"Credential exchange W3C detail record.\"\"\"\n\n    class Meta:\n        \"\"\"CredExRecordW3C metadata.\"\"\"\n\n        schema_class = \"CredExRecordW3CSchema\"\n\n    RECORD_ID_NAME = \"cred_ex_w3c_id\"\n    RECORD_TYPE = \"w3c_cred_ex_v20\"\n    TAG_NAMES = {\"~cred_ex_id\"} if UNENCRYPTED_TAGS else {\"cred_ex_id\"}\n    RECORD_TOPIC = \"issue_credential_v2_0_w3c\"\n

Based on the proposed credential attachment format with the new Data Integrity proof in aries-rfcs 809 -

{\n  \"@id\": \"284d3996-ba85-45d9-964b-9fd5805517b6\",\n  \"@type\": \"https://didcomm.org/issue-credential/2.0/issue-credential\",\n  \"comment\": \"<some comment>\",\n  \"formats\": [\n    {\n      \"attach_id\": \"5b38af88-d36f-4f77-bb7a-2f04ab806eb8\",\n      \"format\": \"didcomm/w3c-di-vc@v0.1\"\n    }\n  ],\n  \"credentials~attach\": [\n    {\n      \"@id\": \"5b38af88-d36f-4f77-bb7a-2f04ab806eb8\",\n      \"mime-type\": \"application/ld+json\",\n      \"data\": {\n        \"base64\": \"ewogICAgICAgICAgIkBjb250ZXogWwogICAgICAg...(clipped)...RNVmR0SXFXZhWXgySkJBIgAgfQogICAgICAgIH0=\"\n      }\n    }\n  ]\n}\n

Assuming VCDIDetail and VCDIOptions are already in place, VCDIDetailSchema can be created like so:

# /protocols/issue_credential/v2_0/formats/vc_di/models/cred_detail.py\n\nclass VCDIDetailSchema(BaseModelSchema):\n    \"\"\"VC_DI verifiable credential detail schema.\"\"\"\n\n    class Meta:\n        \"\"\"Accept parameter overload.\"\"\"\n\n        unknown = INCLUDE\n        model_class = VCDIDetail\n\n    credential = fields.Nested(\n        CredentialSchema(),\n        required=True,\n        metadata={\n            \"description\": \"Detail of the VC_DI Credential to be issued\",\n            \"example\": {\n                \"@id\": \"284d3996-ba85-45d9-964b-9fd5805517b6\",\n                \"@type\": \"https://didcomm.org/issue-credential/2.0/issue-credential\",\n                \"comment\": \"<some comment>\",\n                \"formats\": [\n                    {\n                        \"attach_id\": \"5b38af88-d36f-4f77-bb7a-2f04ab806eb8\",\n                        \"format\": \"didcomm/w3c-di-vc@v0.1\"\n                    }\n                ],\n                \"credentials~attach\": [\n                    {\n                        \"@id\": \"5b38af88-d36f-4f77-bb7a-2f04ab806eb8\",\n                        \"mime-type\": \"application/ld+json\",\n                        \"data\": {\n                            \"base64\": \"ewogICAgICAgICAgIkBjb250ZXogWwogICAgICAg...(clipped)...RNVmR0SXFXZhWXgySkJBIgAgfQogICAgICAgIH0=\"\n                        }\n                    }\n                ]\n            }\n        },\n    )\n

Then create a w3c format handler with a mapping like so:

# /protocols/issue_credential/v2_0/formats/w3c/handler.py\n\nmapping = {\n            CRED_20_PROPOSAL: VCDIDetailSchema,\n            CRED_20_OFFER: VCDIDetailSchema,\n            CRED_20_REQUEST: VCDIDetailSchema,\n            CRED_20_ISSUE: VerifiableCredentialSchema,\n        }\n

Doing so would allow us to be more independent in defining a schema suited to anoncreds in W3C format. Once the proposal protocol can handle the W3C format, the rest of the flow can probably be implemented easily by adding a vc_di flag to the corresponding routes.

"},{"location":"design/AnoncredsW3CCompatibility/#admin-api-attachments","title":"Admin API Attachments","text":"

To make sure that, once an endpoint has been called to trigger the Issue Credential flow with RFC 0809 VC-DI attachment formats, the subsequent endpoints also follow this format, we can extend the ATTACHMENT_FORMAT dictionary with the proposed VC_DI format.

# Format specifications\nATTACHMENT_FORMAT = {\n    CRED_20_PROPOSAL: {\n        V20CredFormat.Format.INDY.api: \"hlindy/cred-filter@v2.0\",\n        V20CredFormat.Format.LD_PROOF.api: \"aries/ld-proof-vc-detail@v1.0\",\n        V20CredFormat.Format.VC_DI.api: \"aries/vc-di-detail@v2.0\",\n    },\n    CRED_20_OFFER: {\n        V20CredFormat.Format.INDY.api: \"hlindy/cred-abstract@v2.0\",\n        V20CredFormat.Format.LD_PROOF.api: \"aries/ld-proof-vc-detail@v1.0\",\n        V20CredFormat.Format.VC_DI.api: \"aries/vc-di-detail@v2.0\",\n    },\n    CRED_20_REQUEST: {\n        V20CredFormat.Format.INDY.api: \"hlindy/cred-req@v2.0\",\n        V20CredFormat.Format.LD_PROOF.api: \"aries/ld-proof-vc-detail@v1.0\",\n        V20CredFormat.Format.VC_DI.api: \"aries/vc-di-detail@v2.0\",\n    },\n    CRED_20_ISSUE: {\n        V20CredFormat.Format.INDY.api: \"hlindy/cred@v2.0\",\n        V20CredFormat.Format.LD_PROOF.api: \"aries/ld-proof-vc@v1.0\",\n        V20CredFormat.Format.VC_DI.api: \"aries/vc-di@v2.0\",\n    },\n}\n

This _formats_filter function takes care of keeping the attachment formats uniform across each step of the flow. We can see this function gets called in:

  • _create_free_offer function that gets called in the handler function of /issue-credential-2.0/send-offer route (in addition to other offer routes)
  • credential_exchange_send_free_request handler function of /issue-credential-2.0/send-request route
  • credential_exchange_create handler function of /issue-credential-2.0/create route
  • credential_exchange_send handler function of /issue-credential-2.0/send route

The same goes for the ATTACHMENT_FORMAT dictionary of the Present Proof flow. In this case, the DIF Presentation Exchange formats in these test vectors, which are influenced by RFC 0510 DIF Presentation Exchange, will be implemented. Here, the _formats_attach function serves the same purpose. It gets called in:

  • present_proof_send_proposal handler function of /present-proof-2.0/send-proposal route
  • present_proof_create_request handler function of /present-proof-2.0/create-request route
  • present_proof_send_free_request handler function of /present-proof-2.0/send-request route
"},{"location":"design/AnoncredsW3CCompatibility/#credential-exchange-admin-routes","title":"Credential Exchange Admin Routes","text":"
  • /issue-credential-2.0/create-offer

This route indirectly calls the _formats_filter function to create a credential proposal, which is in turn used to create a credential offer in the filter format. The request body for this route might look like this:

{\n    \"filter\": [\"vc_di\"],\n    \"comment: <some_comment>,\n    \"auto-issue\": true,\n    \"auto-remove\": true,\n    \"replacement_id\": <replacement_id>,\n    \"credential_preview\": {\n        \"@type\": \"issue-credential/2.0/credential-preview\",\n        \"attributes\": {\n            ...\n            ...\n        }\n    }\n}\n
  • /issue-credential-2.0/create

This route indirectly calls the _format_result_with_details function to generate a cred_ex_record in the specified format, which is then returned. The request body for this route might look like this:

{\n    \"filter\": [\"vc_di\"],\n    \"comment: <some_comment>,\n    \"auto-remove\": true,\n    \"credential_preview\": {\n        \"@type\": \"issue-credential/2.0/credential-preview\",\n        \"attributes\": {\n           ...\n           ...\n        }\n    }\n}\n
  • /issue-credential-2.0/send

The request body for this route might look like this:

{\n    \"connection_id\": <connection_id>,\n    \"filter\": [\"vc_di\"],\n    \"comment: <some_comment>,\n    \"auto-remove\": true,\n    \"replacement_id\": <replacement_id>,\n    \"credential_preview\": {\n        \"@type\": \"issue-credential/2.0/credential-preview\",\n        \"attributes\": {\n           ...\n           ...\n        }\n    }\n}\n
  • /issue-credential-2.0/send-offer

The request body for this route might look like this:

{\n    \"connection_id\": <connection_id>,\n    \"filter\": [\"vc_di\"],\n    \"comment: <some_comment>,\n    \"auto-issue\": true,\n    \"auto-remove\": true,\n    \"replacement_id\": <replacement_id>,\n    \"holder_did\": <holder_did>,\n    \"credential_preview\": {\n        \"@type\": \"issue-credential/2.0/credential-preview\",\n        \"attributes\": {\n           ...\n           ...\n        }\n    }\n}\n
  • /issue-credential-2.0/send-request

The request body for this route might look like this:

{\n    \"connection_id\": <connection_id>,\n    \"filter\": [\"vc_di\"],\n    \"comment: <some_comment>,\n    \"auto-remove\": true,\n    \"replacement_id\": <replacement_id>,\n    \"holder_did\": <holder_did>,\n    \"credential_preview\": {\n        \"@type\": \"issue-credential/2.0/credential-preview\",\n        \"attributes\": {\n           ...\n           ...\n        }\n    }\n}\n
"},{"location":"design/AnoncredsW3CCompatibility/#presentation-admin-routes","title":"Presentation Admin Routes","text":"
  • /present-proof-2.0/send-proposal

The request body for this route might look like this:

{\n    ...\n    ...\n    \"connection_id\": \"<connection_id>\",\n    \"presentation_proposal\": [\"vc_di\"],\n    \"comment\": \"<some_comment>\",\n    \"auto-present\": true,\n    \"auto-remove\": true,\n    \"trace\": false\n}\n
  • /present-proof-2.0/create-request

The request body for this route might look like this:

{\n    ...\n    ...\n    \"connection_id\": \"<connection_id>\",\n    \"presentation_proposal\": [\"vc_di\"],\n    \"comment\": \"<some_comment>\",\n    \"auto-verify\": true,\n    \"auto-remove\": true,\n    \"trace\": false\n}\n
  • /present-proof-2.0/send-request

The request body for this route might look like this:

{\n    ...\n    ...\n    \"connection_id\": \"<connection_id>\",\n    \"presentation_proposal\": [\"vc_di\"],\n    \"comment\": \"<some_comment>\",\n    \"auto-verify\": true,\n    \"auto-remove\": true,\n    \"trace\": false\n}\n
  • /present-proof-2.0/records/{pres_ex_id}/send-presentation

The request body for this route might look like this:

{\n    \"presentation_definition\": <presentation_definition_schema>,\n    \"auto_remove\": true,\n    \"dif\": {\n        issuer_id: \"<issuer_id>\",\n        record_ids: {\n            \"<input descriptor id_1>\": [\"<record id_1>\", \"<record id_2>\"],\n            \"<input descriptor id_2>\": [\"<record id>\"],\n        }\n    },\n    \"reveal_doc\": {\n        // vc_di dict\n    }\n\n}\n
"},{"location":"design/AnoncredsW3CCompatibility/#how-a-w3c-credential-is-stored-in-the-wallet","title":"How a W3C credential is stored in the wallet","text":"

Storing a credential in the wallet is somewhat dependent on the kinds of metadata that are relevant. The metadata mapping between the W3C credential and an AnonCreds credential is not fully clear yet.

One of the questions we need to answer is whether the preferred approach is to modify the existing store credential function so that any credential type is a valid input, or whether there should be a special function just for storing W3C credentials.

We will duplicate this store_credential function and modify it:

async def store_w3c_credential(...):\n    ...\n    try:\n        cred = W3CCredential.load(credential_data)\n    ...\n    ...\n

Question: Would it also be possible to generate the credentials on the fly to eliminate the need for storage?

Answer: I don't think it is possible to eliminate the need for storage, and notably the secure storage (encrypted at rest) supported in Askar.

"},{"location":"design/AnoncredsW3CCompatibility/#how-can-we-handle-multiple-signatures-on-a-w3c-vc-format-credential","title":"How can we handle multiple signatures on a W3C VC Format credential?","text":"

Only one of the signature types (CL) is allowed in the AnonCreds format, so if a W3C VC is converted by to_legacy(), all signature types that can't be turned into a CL signature will be dropped, making the conversion lossy. Similarly, an AnonCreds credential carries only the CL signature, limiting the output of to_w3c() to signature types that can be derived from the source CL signature. A possible future enhancement would be to add an extra field to the AnonCreds data structure in which additional signatures could be stored, even if they are not used. This could eliminate the lossiness, but it adds extra complexity and may not be worth doing.

  • Unlike a \"typical\" non-AnonCreds W3C VC, an AnonCreds VC is never directly presented to a verifier. Rather, a derivation of the credential is generated, and it is the derivation that is shared with the verifier as a presentation. The derivation:
  • Generates presentation-specific signatures to be verified.
  • Selectively reveals attributes.
  • Generates proofs of the requested predicates.
  • Generates a proof of knowledge of the link secret blinded in the verifiable credential.
"},{"location":"design/AnoncredsW3CCompatibility/#compatibility-with-afj-how-can-we-make-sure-that-we-are-compatible","title":"Compatibility with AFJ: how can we make sure that we are compatible?","text":"

We will write a test for the Aries Agent Test Framework that issues a W3C VC instead of an AnonCreds credential, and then run that test where one of the agents is ACA-Py and the other is based on AFJ -- and vice versa. We will also write a test where a W3C VC is presented after an AnonCreds issuance, and run it with the two roles played by the two different agents. This is a simple approach, but if the tests pass, it should eliminate almost all risk of incompatibility.

"},{"location":"design/AnoncredsW3CCompatibility/#will-we-introduce-new-dependencies-and-what-is-risky-or-easy","title":"Will we introduce new dependencies, and what is risky or easy?","text":"

Any significant bugs in the Rust implementation may prevent our wrappers from working, which would also prevent progress (or at least confirmed test results) on the higher-level code.

If AFJ lags behind in delivering equivalent functionality, we may not be able to demonstrate compatibility with the test harness.

"},{"location":"design/AnoncredsW3CCompatibility/#where-should-the-new-issuance-code-go","title":"Where should the new issuance code go?","text":"

The vc directory contains code to verify VCs; is this a logical place to add the code for issuance?

"},{"location":"design/AnoncredsW3CCompatibility/#what-do-we-call-the-new-things-flexcreds-or-just-w3c_xxx","title":"What do we call the new things? Flexcreds? or just W3C_xxx","text":"

Are we defining a concept called Flexcreds - a credential with a proof array from which you can generate more specific or limited credentials? If so, should this be included in the naming?

  • I don't think naming comes into play. We are creating and deriving presentations from VC Data Integrity Proofs using an AnonCreds cryptosuite. As such, these are \"stock\" W3C verifiable credentials.
"},{"location":"design/AnoncredsW3CCompatibility/#how-can-a-wallet-retain-the-capability-to-present-only-an-anoncred-credential","title":"How can a wallet retain the capability to present ONLY an anoncred credential?","text":"

If the wallet receives a \"Flexcred\" credential object with an array of proofs, the wallet may wish to present ONLY the more zero-knowledge anoncreds proof.

How will wallets support that in a way that is developer-friendly to wallet devs?

  • The trigger for wallets to generate a W3C VP Format presentation is that they have received an RFC 0510 DIF Presentation Exchange request that can be satisfied with an AnonCreds verifiable credential in their storage. Once we decide to use one or more AnonCreds VCs to satisfy a presentation, we'll derive such a presentation and send it using the RFC 0510 DIF Presentation Exchange format for the presentation message of the Present Proof v2.0 protocol.
"},{"location":"design/UpgradeViaApi/","title":"Upgrade via API Design","text":""},{"location":"design/UpgradeViaApi/#to-isolate-an-upgrade-process-and-trigger-it-via-api-the-following-pattern-was-designed-to-handle-multitenant-scenarios-it-includes-an-is_upgrading-record-in-the-walletdb-and-a-middleware-to-prevent-requests-during-the-upgrade-process","title":"To isolate an upgrade process and trigger it via API the following pattern was designed to handle multitenant scenarios. It includes an is_upgrading record in the wallet(DB) and a middleware to prevent requests during the upgrade process.","text":""},{"location":"design/UpgradeViaApi/#the-diagam-below-descripes-the-sequence-of-events-for-the-anoncreds-upgrade-process-which-it-was-designed-for-but-the-architecture-can-be-used-for-any-upgrade-process","title":"The diagam below descripes the sequence of events for the anoncreds upgrade process which it was designed for, but the architecture can be used for any upgrade process.","text":"
sequenceDiagram\n    participant A1 as Agent 1\n    participant M1 as Middleware\n    participant IAS1 as IsAnoncredsSingleton Set\n    participant UIPS1 as UpgradeInProgressSingleton Set\n    participant W as Wallet (DB)\n    participant UIPS2 as UpgradeInProgressSingleton Set\n    participant IAS2 as IsAnoncredsSingleton Set\n    participant M2 as Middleware\n    participant A2 as Agent 2\n\n    Note over A1,A2: Start upgrade for non-anoncreds wallet\n    A1->>M1: POST /anoncreds/wallet/upgrade\n    M1-->>IAS1: check if wallet is in set\n    IAS1-->>M1: wallet is not in set\n    M1-->>UIPS1: check if wallet is in set\n    UIPS1-->>M1: wallet is not in set\n    M1->>A1: OK\n    A1-->>W: Add is_upgrading = anoncreds_in_progress record\n    A1->>A1: Upgrade wallet\n    A1-->>UIPS1: Add wallet to set\n\n    Note over A1,A2: Attempted Requests During Upgrade\n\n    Note over A1: Attempted Request\n    A1->>M1: GET /any-endpoint\n    M1-->>IAS1: check if wallet is in set\n    IAS1-->>M1: wallet is not in set\n    M1-->>UIPS1: check if wallet is in set\n    UIPS1-->>M1: wallet is in set\n    M1->>A1: 503 Service Unavailable\n\n    Note over A2: Attempted Request\n    A2->>M2: GET /any-endpoint\n    M2-->>IAS2: check if wallet is in set\n    IAS2-->>M2: wallet is not in set\n    M2-->>UIPS2: check if wallet is in set\n    UIPS2-->>M2: wallet is not in set\n    A2-->>W: Query is_upgrading = anoncreds_in_progress record\n    W-->>A2: record = anoncreds_in_progress\n    A2->>A2: Loop until upgrade is finished in separate process\n    A2-->>UIPS2: Add wallet to set\n    M2->>A2: 503 Service Unavailable\n\n    Note over A1,A2: Agent Restart During Upgrade\n    A1-->>W: Get is_upgrading record for wallet or all subwallets\n    W-->>A1: \n    A1->>A1: Resume upgrade if in progress\n    A1-->>UIPS1: Add wallet to set\n\n    Note over A2: Same as Agent 1\n\n    Note over A1,A2: Upgrade Completes\n\n    Note over A1: Finish Upgrade\n    A1-->>W: set is_upgrading = anoncreds_finished\n    A1-->>UIPS1: Remove wallet from set\n    A1-->>IAS1: Add wallet to set\n    A1->>A1: update subwallet or restart\n\n    Note over A2: Detect Upgrade Complete\n    A2-->>W: Check is_upgrading = anoncreds_finished\n    W-->>A2: record = anoncreds_in_progress\n    A2->>A2: Wait 1 second\n    A2-->>W: Check is_upgrading = anoncreds_finished\n    W-->>A2: record = anoncreds_finished\n    A2-->>UIPS2: Remove wallet from set\n    A2-->>IAS2: Add wallet to set\n    A2->>A2: update subwallet or restart\n\n    Note over A1,A2: Restarted Agents After Upgrade\n\n    A1-->>W: Get is_upgrading record for wallet or all subwallets\n    W-->>A1: \n    A1->>IAS1: Add wallet to set if record = anoncreds_finished\n\n    Note over A2: Same as Agent 1\n\n    Note over A1,A2: Attempted Requests After Upgrade\n\n    Note over A1: Attempted Request\n    A1->>M1: GET /any-endpoint\n    M1-->>IAS1: check if wallet is in set\n    IAS1-->>M1: wallet is in set\n    M1-->>A1: OK\n\n    Note over A2: Same as Agent 1
"},{"location":"design/UpgradeViaApi/#an-example-of-the-implementation-can-be-found-via-the-anoncreds-upgrade-components","title":"An example of the implementation can be found via the anoncreds upgrade components.","text":"
- `aries_cloudagent/wallet/routes.py` in the `upgrade_anoncreds` controller \n- the upgrade code in `wallet/anoncreds_upgrade.py`\n- the middleware in `admin/server.py` in the `upgrade_middleware` function\n- the singleton sets in `wallet/singletons.py`\n- the startup process in `core/conductor.py` in the `check_for_wallet_upgrades_in_progress` function\n
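
A minimal sketch of the middleware idea (aiohttp, with plain in-memory sets standing in for the singleton sets; the wallet identification detail is an assumption and differs from the actual implementation):

from aiohttp import web\n\n# Hypothetical in-memory stand-ins for the singleton sets described above.\nIS_ANONCREDS: set = set()\nUPGRADE_IN_PROGRESS: set = set()\n\n@web.middleware\nasync def upgrade_middleware(request: web.Request, handler):\n    \"\"\"Return 503 while the requesting wallet's upgrade is in progress.\"\"\"\n    wallet_id = request.headers.get(\"x-wallet-id\", \"base\")  # assumption\n    if wallet_id not in IS_ANONCREDS and wallet_id in UPGRADE_IN_PROGRESS:\n        raise web.HTTPServiceUnavailable(reason=\"Wallet upgrade in progress\")\n    return await handler(request)\n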
"},{"location":"features/AdminAPI/","title":"ACA-Py Administration API","text":""},{"location":"features/AdminAPI/#using-the-openapi-swagger-interface","title":"Using the OpenAPI (Swagger) Interface","text":"

ACA-Py provides an OpenAPI-documented REST interface for administering the agent's internal state and initiating communication with connected agents.

To see the specifics of the supported endpoints, as well as the expected request and response formats, it is recommended to run the aca-py agent with the --admin {HOST} {PORT} and --admin-insecure-mode command line parameters. This exposes the OpenAPI UI on the provided port for interaction via a web browser. For production deployments, run the agent with --admin-api-key {KEY} and add the X-API-Key: {KEY} header to all requests instead of using the --admin-insecure-mode parameter.
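
For example, a script hitting a secured Admin API (the host, port, and key below are placeholders for your --admin and --admin-api-key values) might look like:

import requests\n\n# Placeholder host/port and key; use your --admin and --admin-api-key values.\nresp = requests.get(\n    \"http://localhost:8031/connections\",\n    headers={\"X-API-Key\": \"my-admin-key\"},\n)\nresp.raise_for_status()\nprint(resp.json())\n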

To invoke a specific method:

  • Scroll to and find that endpoint;
  • Click on the endpoint name to expand its section of the UI;
  • Click on the Try it out button;
  • Fill in any data necessary to run the command;
  • Click Execute;
  • Check the response to see if the request worked as expected.

The mechanical steps are easy; however, the fourth step from the list above can be tricky. Supplying the right data and, where JSON is involved, getting the syntax correct (braces and quotes) can be a pain. When steps don't work, start your debugging by looking at your JSON. You may also choose to use a REST client like Postman or Insomnia, which will provide syntax highlighting and other features to simplify the process.

Because API methods often initiate asynchronous processes, the JSON response provided by an endpoint is not always sufficient to determine the next action. To handle this situation, as well as events triggered by external inputs (such as new connection requests), it is necessary to implement a webhook processor, as detailed in the next section.

The combination of an OpenAPI client and webhook processor is referred to as an ACA-Py Controller and is the recommended method to define custom behaviors for your ACA-Py-based agent application.

"},{"location":"features/AdminAPI/#administration-api-webhooks","title":"Administration API Webhooks","text":"

When ACA-Py is started with the --webhook-url {URL} command line parameter, state-management records are sent to the provided URL via POST requests whenever a record is created or its state property is updated.

When a webhook is dispatched, the record topic is appended as a path component to the URL. For example, https://webhook.host.example becomes https://webhook.host.example/topic/connections when a connection record is updated. A POST request is made to the resulting URL, with the body of the request comprising a serialized JSON object. The full set of properties for the current webhook payloads is listed below. Note that empty (null-value) properties are omitted.

"},{"location":"features/AdminAPI/#webhooks-over-websocket","title":"Webhooks over WebSocket","text":"

ACA-Py's Admin API also supports delivering webhooks over WebSocket. This can be especially useful when working with scripts that interact with the Admin API but don't have a web server listening to receive webhooks in response to its actions. No additional command line parameters are required to enable WebSocket support.

Webhooks received over WebSocket will contain the same data as webhooks posted over HTTP, but the structure differs in order to communicate details that would otherwise be received as part of the HTTP request path and headers.

  • topic: The topic of the webhook, such as connections or basicmessages
  • payload: The payload of the webhook; this is the data usually received in the request body when webhooks are delivered over HTTP
  • wallet_id: If using multitenancy, this is the wallet ID of the subwallet that emitted the webhook. This value will be omitted if not using multitenancy.

To open a WebSocket, connect to the /ws endpoint of the Admin API.
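
A short sketch of a script consuming webhooks this way (assuming the third-party websockets library and an Admin API at localhost:8031):

import asyncio\nimport json\n\nimport websockets  # third-party library; an assumption of this sketch\n\nasync def listen():\n    # /ws is the Admin API WebSocket endpoint; host/port are assumed.\n    async with websockets.connect(\"ws://localhost:8031/ws\") as ws:\n        async for message in ws:\n            event = json.loads(message)\n            # topic, payload (and wallet_id under multitenancy) as described above\n            print(event.get(\"topic\"), event.get(\"payload\"))\n\nasyncio.run(listen())\n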

"},{"location":"features/AdminAPI/#pairwise-connection-record-updated-connections","title":"Pairwise Connection Record Updated (/connections)","text":"
  • connection_id: the unique connection identifier
  • state: init / invitation / request / response / active / error / inactive
  • my_did: the DID this agent is using in the connection
  • their_did: the DID the other agent in the connection is using
  • their_label: a connection label provided by the other agent
  • their_role: a role assigned to the other agent in the connection
  • inbound_connection_id: a connection identifier for the related inbound routing connection
  • initiator: self / external / multiuse
  • invitation_key: a verification key used to identify the source connection invitation
  • request_id: the @id property from the connection request message
  • routing_state: none / request / active / error
  • accept: manual / auto
  • error_msg: the most recent error message
  • invitation_mode: once / multi
  • alias: a local alias for the connection record
"},{"location":"features/AdminAPI/#basic-message-received-basicmessages","title":"Basic Message Received (/basicmessages)","text":"
  • connection_id: the identifier of the related pairwise connection
  • message_id: the @id of the incoming agent message
  • content: the contents of the agent message
  • state: received
"},{"location":"features/AdminAPI/#forward-message-received-forward","title":"Forward Message Received (/forward)","text":"

Enable this webhook topic using the --monitor-forward command line parameter.

  • connection_id: the identifier of the connection associated with the recipient key
  • recipient_key: the recipient key of the forward message (to field of the forward message)
  • status: The delivery status of the received forward message. Possible values:
  • sent_to_session: Message is sent directly to the connection over an active transport session
  • sent_to_external_queue: Message is sent to an external queue. No information is known on the delivery of the message
  • queued_for_delivery: Message is queued for delivery using outbound transport (recipient connection has an endpoint)
  • waiting_for_pickup: The connection has no reachable endpoint. Need to wait for the recipient to connect with return routing for delivery
  • undeliverable: The connection has no reachable endpoint, and the internal queue for messages is not enabled (--enable-undelivered-queue).
"},{"location":"features/AdminAPI/#credential-exchange-record-updated-issue_credential","title":"Credential Exchange Record Updated (/issue_credential)","text":"
  • credential_exchange_id: the unique identifier of the credential exchange
  • connection_id: the identifier of the related pairwise connection
  • thread_id: the thread ID of the previously received credential proposal or offer
  • parent_thread_id: the parent thread ID of the previously received credential proposal or offer
  • initiator: issue-credential exchange initiator self / external
  • state: proposal_sent / proposal_received / offer_sent / offer_received / request_sent / request_received / issued / credential_received / credential_acked
  • credential_definition_id: the ledger identifier of the related credential definition
  • schema_id: the ledger identifier of the related credential schema
  • credential_proposal_dict: the credential proposal message
  • credential_offer: (Indy) credential offer
  • credential_request: (Indy) credential request
  • credential_request_metadata: (Indy) credential request metadata
  • credential_id: the wallet identifier of the stored credential
  • raw_credential: the credential record as received
  • credential: the credential record as stored in the wallet
  • auto_offer: (boolean) whether to automatically offer the credential
  • auto_issue: (boolean) whether to automatically issue the credential
  • error_msg: the previous error message
"},{"location":"features/AdminAPI/#presentation-exchange-record-updated-present_proof","title":"Presentation Exchange Record Updated (/present_proof)","text":"
  • presentation_exchange_id: the unique identifier of the presentation exchange
  • connection_id: the identifier of the related pairwise connection
  • thread_id: the thread ID of the previously received presentation proposal or offer
  • initiator: present-proof exchange initiator: self / external
  • state: proposal_sent / proposal_received / request_sent / request_received / presentation_sent / presentation_received / verified
  • presentation_proposal_dict: the presentation proposal message
  • presentation_request: (Indy) presentation request (also known as proof request)
  • presentation: (Indy) presentation (also known as proof)
  • verified: (string) whether the presentation is verified: true or false
  • auto_present: (boolean) prover choice to auto-present proof as verifier requests
  • error_msg: the previous error message
"},{"location":"features/AdminAPI/#api-standard-behavior","title":"API Standard Behavior","text":"

The best way to develop a new admin API or protocol is to follow one of the existing protocols, such as the Credential Exchange or Presentation Exchange.

The routes.py file contains the API definitions - API endpoints and payload schemas (note that these are not the Aries message schemas).

The payload schemas are defined using marshmallow and will be validated automatically when the API is executed (using middleware). (This raises a status 422 HTTP response with an error message if the schema validation fails.)

API endpoints are defined using aiohttp_apispec decorators (e.g. @docs, @request_schema, @response_schema, etc.), which define the input and output parameters of the endpoint. API URL paths are defined in the register() method and added to the Swagger page in the post_process_routes() method.
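
A minimal sketch of this pattern (the endpoint, schema, and tag names are illustrative, not an actual ACA-Py endpoint):

from aiohttp import web\nfrom aiohttp_apispec import docs, response_schema\nfrom marshmallow import Schema, fields\n\nclass ExampleResultSchema(Schema):\n    \"\"\"Response payload schema (not an Aries message schema).\"\"\"\n\n    status = fields.Str(metadata={\"description\": \"Operation status\"})\n\n@docs(tags=[\"example\"], summary=\"Fetch an example record\")\n@response_schema(ExampleResultSchema(), 200, description=\"\")\nasync def example_get_handler(request: web.BaseRequest):\n    return web.json_response({\"status\": \"ok\"})\n\nasync def register(app: web.Application):\n    \"\"\"Register routes, as ACA-Py protocol modules do.\"\"\"\n    app.add_routes([web.get(\"/example\", example_get_handler, allow_head=False)])\n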

The APIs should return the following HTTP status:

  • HTTP 200 for successful API completion, with an appropriate response
  • HTTP 400 (or appropriate 4xx code) (with an error message) for errors on input parameters (i.e., the user can retry with different parameters and potentially get a successful API call)
  • HTTP 404 if a record is expected and not found (generally for GET requests that fetch a single record)
  • HTTP 500 (or appropriate 5xx code) if there is some other processing error (i.e., it won't make any difference what parameters the user tries) with an error message

...and should not return:

  • HTTP 500 with a stack trace due to an untrapped error (we should handle error conditions with a 400 or 404 response and catch errors, providing a meaningful error message)
"},{"location":"features/AnonCredsMethods/","title":"Adding AnonCreds Methods to ACA-Py","text":"

ACA-Py was originally developed to be used with Hyperledger AnonCreds objects (Schemas, Credential Definitions and Revocation Registries) published on Hyperledger Indy networks. However, with the evolution of \"ledger-agnostic\" AnonCreds, ACA-Py supports publishing AnonCreds objects wherever you want to put them. If you want to add a new \"AnonCreds method\" to publish AnonCreds objects to a new Verifiable Data Registry (VDR) (perhaps to your favorite blockchain, or using a web-based DID method), you'll find the details of how to do that here. We often use the term \"ledger\" for the location where AnonCreds objects are published, but here we will use \"VDR\", since a VDR does not have to be a ledger.

The information in this document was discussed on an ACA-Py Maintainers call in March 2024. You can watch the call recording by clicking here.

This is an early version of this document and we assume those reading it are quite familiar with using ACA-Py, have a good understanding of ACA-Py internals, and are Python experts. See the Questions or Comments section below for how to get help as you work through this.

"},{"location":"features/AnonCredsMethods/#create-a-plugin","title":"Create a Plugin","text":"

We recommend that if you are adding a new AnonCreds method, you do so by creating an ACA-Py plugin. See the documentation on ACA-Py plugins and use the set of plugins available in the aries-acapy-plugins repository to help you get started. When you finish your AnonCreds method, we recommend that you publish the plugin in the aries-acapy-plugins repository. If you think that the AnonCreds method you create should be part of ACA-Py core, get your plugin complete and raise the question of adding it to ACA-Py. The Maintainers will be happy to discuss the merits of the idea. No promises though.

Your AnonCreds plugin will have an initialization routine that registers your AnonCreds implementation, including the identifiers that your method will be using. It is those identifier constructs that determine the appropriate AnonCreds registrar and resolver to be called for any given AnonCreds object identifier. Check out this example of the registration of the \"legacy\" Indy AnonCreds method for more details.
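
A sketch of what that initialization might look like in a plugin's setup routine (modeled on the legacy Indy registration linked above; MyMethodRegistry is a hypothetical class, and the exact registry API should be checked against your ACA-Py version):

from aries_cloudagent.anoncreds.registry import AnonCredsRegistry\nfrom aries_cloudagent.config.injection_context import InjectionContext\n\n# MyMethodRegistry is a hypothetical class implementing the resolver and\n# registrar interfaces (see the next section) for your VDR.\nfrom my_anoncreds_method.registry import MyMethodRegistry\n\nasync def setup(context: InjectionContext):\n    \"\"\"Plugin entry point: register the new AnonCreds method.\"\"\"\n    registry = context.inject_or(AnonCredsRegistry)\n    if not registry:\n        raise ValueError(\"AnonCredsRegistry not found in context\")\n    my_registry = MyMethodRegistry()\n    await my_registry.setup(context)\n    registry.register(my_registry)\n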

"},{"location":"features/AnonCredsMethods/#the-implementation","title":"The Implementation","text":"

The basic work involved in creating an AnonCreds method is the implementation of both a \"registrar\" to write AnonCreds objects to a VDR, and a \"resolver\" to read AnonCreds objects from a VDR. To do that for your new AnonCreds method, you will need to:

  • Implement BaseAnonCredsResolver - here
  • Implement BaseAnonCredsRegistrar - here

The links above are to a specific commit and the code may have been updated since. You might want to look at the methods in the current version of aries_cloudagent/anoncreds/base.py in the main branch.

The interfaces for those methods are very clean, and there are currently two implementations of the methods in the ACA-Py codebase -- the \"legacy\" Indy implementation, and the did:indy Indy implementation. There is also a did:web resolver implementation.
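
As a structural sketch (method names follow the base classes as of the linked commit and may have changed since; treat this as a skeleton under those assumptions, not a drop-in implementation):

import re\n\nfrom aries_cloudagent.anoncreds.base import (\n    BaseAnonCredsRegistrar,\n    BaseAnonCredsResolver,\n)\n\nclass MyMethodRegistry(BaseAnonCredsResolver, BaseAnonCredsRegistrar):\n    \"\"\"Skeleton registrar/resolver for a hypothetical VDR.\"\"\"\n\n    @property\n    def supported_identifiers_regex(self) -> re.Pattern:\n        # Identifiers that route objects to this method (illustrative).\n        return re.compile(r\"^did:mymethod:.*$\")\n\n    async def setup(self, context):\n        \"\"\"One-time setup, e.g. initializing a VDR client.\"\"\"\n\n    async def get_schema(self, profile, schema_id):\n        \"\"\"Read a schema from the VDR and return a GetSchemaResult.\"\"\"\n        raise NotImplementedError()\n\n    async def register_schema(self, profile, schema, options=None):\n        \"\"\"Write a schema to the VDR and return a SchemaResult.\"\"\"\n        raise NotImplementedError()\n\n    # ...plus the credential definition and revocation registry methods\n    # defined in aries_cloudagent/anoncreds/base.py.\n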

Models for the API are defined here

"},{"location":"features/AnonCredsMethods/#events","title":"Events","text":"

When you create your AnonCreds method registrar, make sure that your implementation calls the appropriate finish_* method (e.g., AnonCredsIssuer.finish_schema, AnonCredsIssuer.finish_cred_def, etc.) in the AnonCreds issuer. Those calls are necessary to trigger the automation of AnonCreds object creation that is done by ACA-Py, particularly around the handling of Revocation Registries. As you (should) know, when an issuer uses ACA-Py to create a Credential Definition that supports revocation, ACA-Py automatically creates and publishes two Revocation Registries related to the Credential Definition, publishes the tails file for each, makes one active, and sets the other to be activated as soon as the active one runs out of credentials. Your AnonCreds method implementation doesn't have to do much to make that happen -- ACA-Py does it automatically -- but your implementation must call the finish_* methods to trigger ACA-Py to continue the automation. You can see the automation setup in Revocation Setup.

"},{"location":"features/AnonCredsMethods/#questions-or-comments","title":"Questions or Comments","text":"

The ACA-Py maintainers welcome questions from those new to the community that have the skills to implement a new AnonCreds method. Use the #aries-cloudagent-python channel on the Hyperledger Discord Server or open an issue in this repo to get help.

Pull Requests to the ACA-Py repository to improve this content are welcome!

"},{"location":"features/AnoncredsControllerMigration/","title":"Anoncreds Controller Migration","text":"

To upgrade an agent to use AnonCreds, a controller should implement the required changes to endpoints and payloads in a way that is backwards compatible. The controller can then trigger the upgrade via the upgrade endpoint.

"},{"location":"features/AnoncredsControllerMigration/#step-1-endpoint-payload-and-response-changes","title":"Step 1 - Endpoint Payload and Response Changes","text":"

There are endpoint and payload changes involved with creating schema, credential definition and revocation objects. Your controller will need to implement these changes for any endpoints it uses.

A good way to implement this with backwards compatibility is to get the wallet type via /settings and use the existing endpoints when wallet.type is askar and the new anoncreds endpoints when wallet.type is askar-anoncreds. In this way, the controller will handle both types of wallets in case the upgrade fails. After the upgrade is successful and stable, the controller can be updated to handle only the new anoncreds endpoints.
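
A sketch of that detection logic in a controller (assuming the requests library; the admin URL is a placeholder, and endpoint names follow the migration notes below):

import requests\n\nADMIN_URL = \"http://localhost:8031\"  # placeholder admin endpoint\n\ndef schema_endpoint() -> str:\n    \"\"\"Pick the schema-creation endpoint based on the wallet type.\"\"\"\n    settings = requests.get(f\"{ADMIN_URL}/settings\").json()\n    if settings.get(\"wallet.type\") == \"askar-anoncreds\":\n        return f\"{ADMIN_URL}/anoncreds/schema\"  # post-upgrade endpoint\n    return f\"{ADMIN_URL}/schemas\"  # pre-upgrade (askar) endpoint\n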

"},{"location":"features/AnoncredsControllerMigration/#schemas","title":"Schemas","text":""},{"location":"features/AnoncredsControllerMigration/#creating-a-schema","title":"Creating a Schema:","text":"
  • Change endpoint from POST /schemas to POST /anoncreds/schema
  • Change payload and parameters from
params\n - conn_id\n - create_transaction_for_endorser\n
{\n  \"attributes\": [\"score\"],\n  \"schema_name\": \"simple\",\n  \"schema_version\": \"1.0\"\n}\n

to

{\n  \"options\": {\n    \"create_transaction_for_endorser\": false,\n    \"endorser_connection_id\": \"3fa85f64-5717-4562-b3fc-2c963f66afa6\"\n  },\n  \"schema\": {\n    \"attrNames\": [\"score\"],\n    \"issuerId\": \"WgWxqztrNooG92RXvxSTWv\",\n    \"name\": \"Example schema\",\n    \"version\": \"1.0\"\n  }\n}\n
  • options are not required
  • issuerId is the public did to be used on the ledger
  • The payload responses have changed

Responses

Without endorsement:

{\n  \"sent\": {\n    \"schema_id\": \"PzmGpSeCznzfPmv9B1EBqa:2:simple:1.0\",\n    \"schema\": {\n      \"ver\": \"1.0\",\n      \"id\": \"PzmGpSeCznzfPmv9B1EBqa:2:simple:1.0\",\n      \"name\": \"simple\",\n      \"version\": \"1.0\",\n      \"attrNames\": [\"score\"],\n      \"seqNo\": 541\n    }\n  },\n  \"schema_id\": \"PzmGpSeCznzfPmv9B1EBqa:2:simple:1.0\",\n  \"schema\": {\n    \"ver\": \"1.0\",\n    \"id\": \"PzmGpSeCznzfPmv9B1EBqa:2:simple:1.0\",\n    \"name\": \"simple\",\n    \"version\": \"1.0\",\n    \"attrNames\": [\"score\"],\n    \"seqNo\": 541\n  }\n}\n

to

{\n  \"job_id\": \"string\",\n  \"registration_metadata\": {},\n  \"schema_metadata\": {},\n  \"schema_state\": {\n    \"schema\": {\n      \"attrNames\": [\"score\"],\n      \"issuerId\": \"WgWxqztrNooG92RXvxSTWv\",\n      \"name\": \"Example schema\",\n      \"version\": \"1.0\"\n    },\n    \"schema_id\": \"WgWxqztrNooG92RXvxSTWv:2:schema_name:1.0\",\n    \"state\": \"finished\"\n  }\n}\n

With endorsement:

{\n  \"sent\": {\n    \"schema\": {\n      \"attrNames\": [\n        \"score\"\n      ],\n      \"id\": \"WgWxqztrNooG92RXvxSTWv:2:schema_name:1.0\",\n      \"name\": \"schema_name\",\n      \"seqNo\": 10,\n      \"ver\": \"1.0\",\n      \"version\": \"1.0\"\n    },\n    \"schema_id\": \"WgWxqztrNooG92RXvxSTWv:2:schema_name:1.0\"\n  },\n  \"txn\": {...}\n}\n

to

{\n  \"job_id\": \"12cb896d648242c8b9b0fff3b870ed00\",\n  \"schema_state\": {\n    \"state\": \"wait\",\n    \"schema_id\": \"RbyPM1EP8fKCrf28YsC1qK:2:simple:1.1\",\n    \"schema\": {\n      \"issuerId\": \"RbyPM1EP8fKCrf28YsC1qK\",\n      \"attrNames\": [\n        \"score\"\n      ],\n      \"name\": \"simple\",\n      \"version\": \"1.1\"\n    }\n  },\n  \"registration_metadata\": {\n    \"txn\": {...}\n  },\n  \"schema_metadata\": {}\n}\n
"},{"location":"features/AnoncredsControllerMigration/#getting-schemas","title":"Getting schemas:","text":"
  • Change endpoint from GET /schemas/created to GET /anoncreds/schemas
  • Response payloads have no change
"},{"location":"features/AnoncredsControllerMigration/#getting-a-schema","title":"Getting a schema:","text":"
  • Change endpoint from GET /schemas/{schema_id} to GET /anoncreds/schema/{schema_id}
  • Response payload changed from
{\n  \"schema\": {\n    \"attrNames\": [\"score\"],\n    \"id\": \"WgWxqztrNooG92RXvxSTWv:2:schema_name:1.0\",\n    \"name\": \"schema_name\",\n    \"seqNo\": 10,\n    \"ver\": \"1.0\",\n    \"version\": \"1.0\"\n  }\n}\n

to

{\n  \"resolution_metadata\": {},\n  \"schema\": {\n    \"attrNames\": [\"score\"],\n    \"issuerId\": \"WgWxqztrNooG92RXvxSTWv\",\n    \"name\": \"Example schema\",\n    \"version\": \"1.0\"\n  },\n  \"schema_id\": \"WgWxqztrNooG92RXvxSTWv:2:schema_name:1.0\",\n  \"schema_metadata\": {}\n}\n
"},{"location":"features/AnoncredsControllerMigration/#credential-definitions","title":"Credential Definitions","text":""},{"location":"features/AnoncredsControllerMigration/#creating-a-credential-definition","title":"Creating a credential definition:","text":"
  • Change endpoint from POST /credential-definitions to POST /anoncreds/credential-definition
  • Change payload and parameters from
params\n - conn_id\n - create_transaction_for_endorser\n
{\n  \"revocation_registry_size\": 1000,\n  \"schema_id\": \"WgWxqztrNooG92RXvxSTWv:2:simple:1.0\",\n  \"support_revocation\": true,\n  \"tag\": \"default\"\n}\n

to

{\n  \"credential_definition\": {\n    \"issuerId\": \"WgWxqztrNooG92RXvxSTWv\",\n    \"schemaId\": \"WgWxqztrNooG92RXvxSTWv:2:schema_name:1.0\",\n    \"tag\": \"default\"\n  },\n  \"options\": {\n    \"create_transaction_for_endorser\": false,\n    \"endorser_connection_id\": \"3fa85f64-5717-4562-b3fc-2c963f66afa6\",\n    \"revocation_registry_size\": 1000,\n    \"support_revocation\": true\n  }\n}\n
  • options are not required, revocation will default to false
  • issuerId is the public did to be used on the ledger
  • schemaId is the schema id on the ledger
  • The payload responses have changed

Responses

Without Endoresment:

{\n  \"sent\": {\n    \"credential_definition_id\": \"CZGamdZoKhxiifjbdx3GHH:3:CL:558:default\"\n  },\n  \"credential_definition_id\": \"CZGamdZoKhxiifjbdx3GHH:3:CL:558:default\"\n}\n

to

{\n  \"schema_state\": {\n    \"state\": \"finished\",\n    \"schema_id\": \"BpGaCdTwgEKoYWm6oPbnnj:2:simple:1.0\",\n    \"schema\": {\n      \"issuerId\": \"BpGaCdTwgEKoYWm6oPbnnj\",\n      \"attrNames\": [\"score\"],\n      \"name\": \"simple\",\n      \"version\": \"1.0\"\n    }\n  },\n  \"registration_metadata\": {},\n  \"schema_metadata\": {\n    \"seqNo\": 555\n  }\n}\n

With Endorsement:

{\n  \"sent\": {\n    \"credential_definition_id\": \"WgWxqztrNooG92RXvxSTWv:3:CL:20:tag\"\n  },\n  \"txn\": {...}\n}\n
{\n  \"job_id\": \"7082e58aa71d4817bb32c3778596b012\",\n  \"credential_definition_state\": {\n    \"state\": \"wait\",\n    \"credential_definition_id\": \"RbyPM1EP8fKCrf28YsC1qK:3:CL:547:default\",\n    \"credential_definition\": {\n      \"issuerId\": \"RbyPM1EP8fKCrf28YsC1qK\",\n      \"schemaId\": \"RbyPM1EP8fKCrf28YsC1qK:2:simple:1.1\",\n      \"type\": \"CL\",\n      \"tag\": \"default\",\n      \"value\": {\n        \"primary\": {...},\n        \"revocation\": {...}\n      }\n    }\n  },\n  \"registration_metadata\": {\n    \"txn\": {...}\n  },\n  \"credential_definition_metadata\": {}\n}\n
"},{"location":"features/AnoncredsControllerMigration/#getting-credential-definitions","title":"Getting credential definitions:","text":"
  • Change endpoint from GET /credential-definitons/created to GET /anoncreds/credential-defintions
  • Response payloads have no change
"},{"location":"features/AnoncredsControllerMigration/#getting-a-credential-definition","title":"Getting a credential definition:","text":"
  • Change endpoint from GET /credential-definitons/{schema_id} to GET /anoncreds/credential-defintion/{cred_def_id}
  • Response payload changed from
{\n  \"credential_definition\": {\n    \"id\": \"WgWxqztrNooG92RXvxSTWv:3:CL:20:tag\",\n    \"schemaId\": \"20\",\n    \"tag\": \"tag\",\n    \"type\": \"CL\",\n    \"value\": {...},\n      \"revocation\": {...}\n    },\n    \"ver\": \"1.0\"\n  }\n}\n

to

{\n  \"credential_definition\": {\n    \"issuerId\": \"WgWxqztrNooG92RXvxSTWv\",\n    \"schemaId\": \"WgWxqztrNooG92RXvxSTWv:2:schema_name:1.0\",\n    \"tag\": \"default\",\n    \"type\": \"CL\",\n    \"value\": {...},\n      \"revocation\": {...}\n    }\n  },\n  \"credential_definition_id\": \"WgWxqztrNooG92RXvxSTWv:3:CL:20:tag\",\n  \"credential_definitions_metadata\": {},\n  \"resolution_metadata\": {}\n}\n
"},{"location":"features/AnoncredsControllerMigration/#revocation","title":"Revocation","text":"

Most of the changes with revocation endpoints only require prepending /anoncreds to the path. There are some other subtle changes listed below.

"},{"location":"features/AnoncredsControllerMigration/#create-and-publish-registry-definition","title":"Create and publish registry definition","text":"
  • The endpoints POST /revocation/create-registry and POST /revocation/registry/{rev_reg_id}/definition have been replaced by the single endpoint POST /anoncreds/revocation-registry-definition
  • Instead of creating the registry with POST /revocation/create-registry and payload
{\n  \"credential_definition_id\": \"WgWxqztrNooG92RXvxSTWv:3:CL:20:tag\",\n  \"max_cred_num\": 1000\n}\n
  • And then publishing with POST /revocation/registry/{rev_reg_id}/definition
params\n - conn_id\n - create_transaction_for_endorser\n
  • Use POST /anoncreds/revocation-registry-definition with payload
{\n  \"options\": {\n    \"create_transaction_for_endorser\": false,\n    \"endorser_connection_id\": \"3fa85f64-5717-4562-b3fc-2c963f66afa6\"\n  },\n  \"revocation_registry_definition\": {\n    \"credDefId\": \"WgWxqztrNooG92RXvxSTWv:2:schema_name:1.0\",\n    \"issuerId\": \"WgWxqztrNooG92RXvxSTWv\",\n    \"maxCredNum\": 777,\n    \"tag\": \"default\"\n  }\n}\n
  • options are not required, revocation will default to false
  • issuerId is the public did to be used on the ledger
  • credDefId is the cred def id on the ledger
  • The payload responses have changed

Responses

Without endorsement:

{\n  \"sent\": {\n    \"revocation_registry_id\": \"CZGamdZoKhxiifjbdx3GHH:4:CL:558:default\"\n  },\n  \"revocation_registry_id\": \"CZGamdZoKhxiifjbdx3GHH:4:CL:558:default\"\n}\n

to

{\n  \"revocation_registry_definition_state\": {\n    \"state\": \"finished\",\n    \"revocation_registry_definition_id\": \"BpGaCdTwgEKoYWm6oPbnnj:4:BpGaCdTwgEKoYWm6oPbnnj:3:CL:555:default:CL_ACCUM:default\",\n    \"revocation_registry_definition\": {\n      \"issuerId\": \"BpGaCdTwgEKoYWm6oPbnnj\",\n      \"revocDefType\": \"CL_ACCUM\",\n      \"credDefId\": \"BpGaCdTwgEKoYWm6oPbnnj:3:CL:555:default\",\n      \"tag\": \"default\",\n      \"value\": {...}\n    }\n  },\n  \"registration_metadata\": {},\n  \"revocation_registry_definition_metadata\": {\n    \"seqNo\": 569\n  }\n}\n

With endorsement:

{\n  \"sent\": {\n    \"result\": {\n      \"created_at\": \"2021-12-31T23:59:59Z\",\n      \"cred_def_id\": \"WgWxqztrNooG92RXvxSTWv:3:CL:20:tag\",\n      \"error_msg\": \"Revocation registry undefined\",\n      \"issuer_did\": \"WgWxqztrNooG92RXvxSTWv\",\n      \"max_cred_num\": 1000,\n      \"pending_pub\": [\n        \"23\"\n      ],\n      \"record_id\": \"3fa85f64-5717-4562-b3fc-2c963f66afa6\",\n      \"revoc_def_type\": \"CL_ACCUM\",\n      \"revoc_reg_def\": {\n        \"credDefId\": \"WgWxqztrNooG92RXvxSTWv:3:CL:20:tag\",\n        \"id\": \"WgWxqztrNooG92RXvxSTWv:4:WgWxqztrNooG92RXvxSTWv:3:CL:20:tag:CL_ACCUM:0\",\n        \"revocDefType\": \"CL_ACCUM\",\n        \"tag\": \"string\",\n        \"value\": {...},\n        \"ver\": \"1.0\"\n      },\n      \"revoc_reg_entry\": {...},\n      \"revoc_reg_id\": \"WgWxqztrNooG92RXvxSTWv:4:WgWxqztrNooG92RXvxSTWv:3:CL:20:tag:CL_ACCUM:0\",\n      \"state\": \"active\",\n      \"tag\": \"string\",\n      \"tails_hash\": \"H3C2AVvLMv6gmMNam3uVAjZpfkcJCwDwnZn6z3wXmqPV\",\n      \"tails_local_path\": \"string\",\n      \"tails_public_uri\": \"string\",\n      \"updated_at\": \"2021-12-31T23:59:59Z\"\n    }\n  },\n  \"txn\": {...}\n}\n

to

{\n  \"job_id\": \"25dac53a1fb84cb8a5bf1b4362fbca11\",\n  \"revocation_registry_definition_state\": {\n    \"state\": \"wait\",\n    \"revocation_registry_definition_id\": \"RbyPM1EP8fKCrf28YsC1qK:4:RbyPM1EP8fKCrf28YsC1qK:3:CL:547:default:CL_ACCUM:default\",\n    \"revocation_registry_definition\": {\n      \"issuerId\": \"RbyPM1EP8fKCrf28YsC1qK\",\n      \"revocDefType\": \"CL_ACCUM\",\n      \"credDefId\": \"RbyPM1EP8fKCrf28YsC1qK:3:CL:547:default\",\n      \"tag\": \"default\",\n      \"value\": {...}\n    }\n  },\n  \"registration_metadata\": {\n    \"txn\": {...}\n  },\n  \"revocation_registry_definition_metadata\": {}\n}\n
"},{"location":"features/AnoncredsControllerMigration/#send-revocation-entry-or-list-to-ledger","title":"Send revocation entry or list to ledger","text":"
  • Changes from POST /revocation/registry/{rev_reg_id}/entry to POST /anoncreds/revocation-list
  • Change from
params\n - conn_id\n - create_transaction_for_endorser\n

to

{\n  \"options\": {\n    \"create_transaction_for_endorser\": false,\n    \"endorser_connection_id\": \"3fa85f64-5717-4562-b3fc-2c963f66afa6\"\n  },\n  \"rev_reg_def_id\": \"WgWxqztrNooG92RXvxSTWv:4:WgWxqztrNooG92RXvxSTWv:3:CL:20:tag:CL_ACCUM:0\"\n}\n
  • options are not required
  • rev_reg_def_id is the revocation registry definition id on the ledger
  • The payload responses have changed

Responses

Without endorsement:

{\n  \"sent\": {\n    \"revocation_registry_id\": \"BpGaCdTwgEKoYWm6oPbnnj:4:BpGaCdTwgEKoYWm6oPbnnj:3:CL:555:default:CL_ACCUM:default\"\n  },\n  \"revocation_registry_id\": \"BpGaCdTwgEKoYWm6oPbnnj:4:BpGaCdTwgEKoYWm6oPbnnj:3:CL:555:default:CL_ACCUM:default\"\n}\n

to

\n
"},{"location":"features/AnoncredsControllerMigration/#get-current-active-registry","title":"Get current active registry:","text":"
  • Change from GET /revocation/active-registry/{cred_def_id} to GET /anoncreds/revocation/active-registry/{cred_def_id}
  • No payload changes
"},{"location":"features/AnoncredsControllerMigration/#rotate-active-registry","title":"Rotate active registry","text":"
  • Change from POST /revocation/active-registry/{cred_def_id}/rotate to POST /anoncreds/revocation/active-registry/{cred_def_id}/rotate
  • No payload changes
"},{"location":"features/AnoncredsControllerMigration/#get-credential-revocation-status","title":"Get credential revocation status","text":"
  • Change from GET /revocation/credential-record to GET /anoncreds/revocation/credential-record
  • No payload changes
"},{"location":"features/AnoncredsControllerMigration/#publish-revocations","title":"Publish revocations","text":"
  • Change from POST /revocation/publish-revocations to POST /anoncreds/revocation/publish-revocations
  • Change payload and parameters from
params\n - conn_id\n - create_transaction_for_endorser\n
{\n  \"rrid2crid\": {\n    \"additionalProp1\": [\"12345\"],\n    \"additionalProp2\": [\"12345\"],\n    \"additionalProp3\": [\"12345\"]\n  }\n}\n

to

{\n  \"options\": {\n    \"create_transaction_for_endorser\": false,\n    \"endorser_connection_id\": \"3fa85f64-5717-4562-b3fc-2c963f66afa6\"\n  },\n  \"rrid2crid\": {\n    \"additionalProp1\": [\"12345\"],\n    \"additionalProp2\": [\"12345\"],\n    \"additionalProp3\": [\"12345\"]\n  }\n}\n
  • options are not required
"},{"location":"features/AnoncredsControllerMigration/#get-registries","title":"Get registries","text":"
  • Change from GET /revocation/registries/created to GET /anoncreds/revocation/registries
  • No payload changes
"},{"location":"features/AnoncredsControllerMigration/#get-registry","title":"Get registry","text":"
  • Changes from GET /revocation/registry/{rev_reg_id} to GET /anoncreds/revocation/registry/{rev_reg_id}
  • No payload changes
"},{"location":"features/AnoncredsControllerMigration/#fix-reocation-state","title":"Fix reocation state","text":"
  • Changes from POST /revocation/registry/{rev_reg_id}/fix-revocation-entry-state to POST /anoncreds/revocation/registry/{rev_reg_id}/fix-revocation-state
  • No payload changes
"},{"location":"features/AnoncredsControllerMigration/#get-number-of-issued-credentials","title":"Get number of issued credentials","text":"
  • Changes from GET /revocation/registry/{rev_reg_id}/issued to GET /anoncreds/revocation/registry/{rev_reg_id}/issued
  • No payload changes
"},{"location":"features/AnoncredsControllerMigration/#get-credential-details","title":"Get credential details","text":"
  • Changes from GET /revocation/registry/{rev_reg_id}/issued/details to GET /anoncreds/revocation/registry/{rev_reg_id}/issued/details
  • No payload changes
"},{"location":"features/AnoncredsControllerMigration/#get-revoked-credential-details","title":"Get revoked credential details","text":"
  • Changes from GET /revocation/registry/{rev_reg_id}/issued/indy_recs to GET /anoncreds/revocation/registry/{rev_reg_id}/issued/indy_recs
  • No payload changes
"},{"location":"features/AnoncredsControllerMigration/#set-state-manually","title":"Set state manually","text":"
  • Changes from PATCH /revocation/registry/{rev_reg_id}/set-state to PATCH /anoncreds/revocation/registry/{rev_reg_id}/set-state
  • No payload changes
"},{"location":"features/AnoncredsControllerMigration/#upload-tails-file","title":"Upload tails file","text":"
  • Changes from PUT /revocation/registry/{rev_reg_id}/tails-file to PUT /anoncreds/registry/{rev_reg_id}/tails-file
  • No payload changes
"},{"location":"features/AnoncredsControllerMigration/#download-tails-file","title":"Download tails file","text":"
  • Changes from GET /revocation/registry/{rev_reg_id}/tails-file to GET /anoncreds/revocation/registry/{rev_reg_id}/tails-file
  • No payload changes
"},{"location":"features/AnoncredsControllerMigration/#revoke-a-credential","title":"Revoke a credential","text":"
  • Changes from POST /revocation/revoke to POST /anoncreds/revocation/revoke
  • Change payload and parameters from
"},{"location":"features/AnoncredsControllerMigration/#clear-pending-revocations","title":"Clear pending revocations","text":"
  • POST /revocation/clear-pending-revocations has been removed.
"},{"location":"features/AnoncredsControllerMigration/#delete-tails-file","title":"Delete tails file","text":"
  • Endpoint DELETE /revocation/delete-tails-server has been removed
"},{"location":"features/AnoncredsControllerMigration/#update-tails-file","title":"Update tails file","text":"
  • Endpoint PATCH /revocation/registry/{rev_reg_id} has been removed
"},{"location":"features/AnoncredsControllerMigration/#additional-endpoints","title":"Additional Endpoints","text":"
  • PUT /anoncreds/registry/{rev_reg_id}/active is available to set the active registry
"},{"location":"features/AnoncredsControllerMigration/#step-2-trigger-the-upgrade-via-the-upgrade-endpoint","title":"Step 2 - Trigger the upgrade via the upgrade endpoint","text":"

The upgrade endpoint is at POST /anoncreds/wallet/upgrade.

You need to be careful doing this, as there is no way to downgrade the wallet. It is recommended highly recommended to back-up any wallets and to test the upgrade in a development environment before upgrading a production wallet.

Params: wallet_name is the name of the wallet to upgrade. Used to prevent accidental upgrades.

The behavior for a base wallet (standalone) or admin wallet in multitenant mode is slightly different from the behavior of a subwallet (or tenant) in multitenancy mode. However, the upgrade process is the same.

  1. Backup the wallet
  2. Scale down any controller instances on old endpoints
  3. Call the upgrade endpoint
  4. Scale up the controller instances to handle new endpoints
"},{"location":"features/AnoncredsControllerMigration/#base-wallet-standalone-or-admin-wallet-in-multitenant-mode","title":"Base wallet (standalone) or admin wallet in multitenant mode:","text":"

The agent will get a 503 error during the upgrade process. Any agent instance will shut down when the upgrade is complete. It is up to the aca-py agent to start up again. After the upgrade is complete the old endpoints will no longer be available and result in a 400 error.

The aca-py agent will work after the restart. However, it will receive a warning for having the wrong wallet type configured. It is recommended to change the wallet-type to askar-anoncreds in the agent configuration file or start-up command.

"},{"location":"features/AnoncredsControllerMigration/#subwallet-tenant-in-multitenancy-mode","title":"Subwallet (tenant) in multitenancy mode:","text":"

The sub-tenant which is in the process of being upgraded will get a 503 error during the upgrade process. All other sub-tenants will continue to operate normally. After the upgrade is complete the sub-tenant will be able to use the new endpoints. The old endpoints will no longer be available and result in a 403 error. Any aca-py agents will remain running after the upgrade and it's not required that the aca-py agent restarts.

"},{"location":"features/AnoncredsProofValidation/","title":"Anoncreds Proof Validation in ACA-Py","text":"

ACA-Py performs pre-validation when verifying Anoncreds presentations (proofs). Some scenarios are rejected (such as those indicative of tampering), while some attributes are removed before running the anoncreds validation (e.g., removing superfluous non-revocation timestamps). Any ACA-Py validations or presentation modifications are indicated by the \"verify_msgs\" attribute in the final presentation exchange object.

The list of possible verification messages can be found here, and consists of:

class PresVerifyMsg(str, Enum):\n    \"\"\"Credential verification codes.\"\"\"\n\n    RMV_REFERENT_NON_REVOC_INTERVAL = \"RMV_RFNT_NRI\"\n    RMV_GLOBAL_NON_REVOC_INTERVAL = \"RMV_GLB_NRI\"\n    TSTMP_OUT_NON_REVOC_INTRVAL = \"TS_OUT_NRI\"\n    CT_UNREVEALED_ATTRIBUTES = \"UNRVL_ATTR\"\n    PRES_VALUE_ERROR = \"VALUE_ERROR\"\n    PRES_VERIFY_ERROR = \"VERIFY_ERROR\"\n

If there is additional information, it will be included like this: TS_OUT_NRI::19_uuid (which means the attribute identified by 19_uuid contained a timestamp outside of the non-revocation interval (this is just a warning)).

A presentation verification may include multiple messages, for example:

    ...\n    \"verified\": \"true\",\n    \"verified_msgs\": [\n        \"TS_OUT_NRI::18_uuid\",\n        \"TS_OUT_NRI::18_id_GE_uuid\",\n        \"TS_OUT_NRI::18_busid_GE_uuid\"\n    ],\n    ...\n

... or it may include a single message, for example:

    ...\n    \"verified\": \"false\",\n    \"verified_msgs\": [\n        \"VALUE_ERROR::Encoded representation mismatch for 'Preferred Name'\"\n    ],\n    ...\n

... or the verified_msgs may be null or an empty array.

"},{"location":"features/AnoncredsProofValidation/#presentation-modifications-and-warnings","title":"Presentation Modifications and Warnings","text":"

The following modifications/warnings may be made by ACA-Py, which shouldn't affect the verification of the received proof:

  • \"RMV_RFNT_NRI\": Referent contains a non-revocation interval for a non-revocable credential (timestamp is removed)
  • \"RMV_GLB_NRI\": Presentation contains a global interval for a non-revocable credential (timestamp is removed)
  • \"TS_OUT_NRI\": Presentation contains a non-revocation timestamp outside of the requested non-revocation interval (warning)
  • \"UNRVL_ATTR\": Presentation contains attributes with unrevealed values (warning)
"},{"location":"features/AnoncredsProofValidation/#presentation-pre-validation-errors","title":"Presentation Pre-validation Errors","text":"

The following pre-verification checks are performed, which will cause the proof to fail (before calling anoncreds) and result in the following message:

VALUE_ERROR::<description of the failed validation>\n

These validations are all performed within the Indy verifier class - to see the detailed validation, look for any occurrences of raise ValueError(...) in the code.

A summary of the possible errors includes:

  • Information missing in presentation exchange record
  • Timestamp provided for irrevocable credential
  • Referenced revocation registry not found on ledger
  • Timestamp outside of reasonable range (future date or pre-dates revocation registry)
  • Mismatch between provided and requested timestamps for non-revocation
  • Mismatch between requested and provided attributes or predicates
  • Self-attested attribute provided for a requested attribute with restrictions
  • Encoded value doesn't match raw value
"},{"location":"features/AnoncredsProofValidation/#anoncreds-verification-exceptions","title":"Anoncreds Verification Exceptions","text":"

Typically, when you call the anoncreds verifier_verify_proof() method, it will return a True or False based on whether the presentation cryptographically verifies. However, in the case where anoncreds throws an exception, the exception text will be included in a verification message as follows:

VERIFY_ERROR::<the exception text>\n
"},{"location":"features/DIDMethods/","title":"DID Methods in ACA-Py","text":"

Decentralized Identifiers, or DIDs, are URIs that point to documents that describe cryptographic primitives and protocols used in decentralized identity management. DIDs include methods that describe where and how documents can be retrieved. DID methods support specific types of keys and may or may not require the holder to specify the DID itself.

ACA-Py provides a DIDMethods registry holding all the DID methods supported for storage in a wallet

Askar and InMemory are the only wallets supporting this registry.

"},{"location":"features/DIDMethods/#registering-a-did-method","title":"Registering a DID method","text":"

By default, ACA-Py supports did:key and did:sov. Plugins can register DID additional methods to make them available to holders. Here's a snippet adding support for did:web to the registry from a plugin setup method.

WEB = DIDMethod(\n    name=\"web\",\n    key_types=[ED25519, BLS12381G2],\n    rotation=True,\n    holder_defined_did=HolderDefinedDid.REQUIRED  # did:web is not derived from key material but from a user-provided repository name\n)\n\nasync def setup(context: InjectionContext):\n    methods = context.inject(DIDMethods)\n    methods.register(WEB)\n
"},{"location":"features/DIDMethods/#creating-a-did","title":"Creating a DID","text":"

POST /wallet/did/create can be provided with parameters for any registered DID method. Here's a follow-up to the did:web method example:

{\n    \"method\": \"web\",\n    \"options\": {\n        \"did\": \"did:web:doma.in\",\n        \"key_type\": \"ed25519\"\n    }\n}\n
"},{"location":"features/DIDMethods/#resolving-dids","title":"Resolving DIDs","text":"

For specifics on how DIDs are resolved in ACA-Py, see: DID Resolution.

"},{"location":"features/DIDResolution/","title":"DID Resolution in ACA-Py","text":"

Decentralized Identifiers, or DIDs, are URIs that point to documents that describe cryptographic primitives and protocols used in decentralized identity management. DIDs include methods that describe where and how documents can be retrieved. DID resolution is the process of \"resolving\" a DID Document from a DID as dictated by the DID method.

A DID Resolver is a piece of software that implements the methods for resolving a document from a DID.

For example, given the DID did:example:1234abcd, a DID Resolver that supports did:example might return:

{\n \"@context\": \"https://www.w3.org/ns/did/v1\",\n \"id\": \"did:example:1234abcd\",\n \"verificationMethod\": [{\n  \"id\": \"did:example:1234abcd#keys-1\",\n  \"type\": \"Ed25519VerificationKey2018\",\n  \"controller\": \"did:example:1234abcd\",\n  \"publicKeyBase58\": \"H3C2AVvLMv6gmMNam3uVAjZpfkcJCwDwnZn6z3wXmqPV\"\n }],\n \"service\": [{\n  \"id\": \"did:example:1234abcd#did-communication\",\n  \"type\": \"did-communication\",\n  \"serviceEndpoint\": \"https://agent.example.com/8377464\"\n }]\n}\n

For more details on DIDs and DID Resolution, see the W3C DID Specification.

In practice, DIDs and DID Documents are used for a variety of purposes but especially to help establish connections between Agents and verify credentials.

"},{"location":"features/DIDResolution/#didresolver","title":"DIDResolver","text":"

In ACA-Py, the DIDResolver provides the interface to resolve DIDs using registered method resolvers. Method resolver registration happens on startup in a did_resolvers list. This registry enables additional resolvers to be loaded via plugin.

"},{"location":"features/DIDResolution/#example-usage","title":"Example usage","text":"
class ExampleMessageHandler:\n    async def handle(context: RequestContext, responder: BaseResponder):\n    \"\"\"Handle example message.\"\"\"\n    resolver = await context.inject(DIDResolver)\n\n    doc: dict = await resolver.resolve(\"did:example:123\")\n    assert doc[\"id\"] == \"did:example:123\"\n\n    verification_method = await resolver.dereference(\"did:example:123#keys-1\")\n\n    # ...\n
"},{"location":"features/DIDResolution/#method-resolver-selection","title":"Method Resolver Selection","text":"

On DIDResolver.resolve or DIDResolver.dereference, the resolver interface will select the most appropriate method resolver to handle the given DID. In this selection process, method resolvers are distinguished from each other by:

  • Type. The resolver's type falls into one of two categories: native or non-native. A \"native\" resolver will perform all resolution steps directly. A \"non-native\" resolver delegates all or part of resolution to another service or entity.
  • Self-reported supported DIDs. Each method resolver implements a supports method or a supported_did_regex method. These methods are used to determine whether the given DID can be handled by the method resolver.

The selection algorithm roughly follows the following steps:

  1. Filter out all resolvers where resolver.supports(did) returns false.
  2. Partition remaining resolvers by type with all native resolvers followed by non-native resolvers (registration order preserved within partitions).
  3. For each resolver in the resulting list, attempt to resolve the DID and return the first successful result.
"},{"location":"features/DIDResolution/#resolver-plugins","title":"Resolver Plugins","text":"

Extending ACA-Py with additional Method Resolvers should be relatively simple. Supposing that you want to resolve DIDs for the did:cool method, this should be as simple as installing a method resolver into your python environment and loading the resolver on startup. If no method resolver exists yet for did:cool, writing your own should require minimal overhead.

"},{"location":"features/DIDResolution/#writing-a-resolver-plugin","title":"Writing a resolver plugin","text":"

Method resolver plugins are composed of two primary pieces: plugin injection and resolution logic. The resolution logic dictates how a DID becomes a DID Document, following the given DID Method Specification. This logic is implemented using the BaseDIDResolver class as the base. BaseDIDResolver is an abstract base class that defines the interface that the core DIDResolver expects for Method resolvers.

The following is an example method resolver implementation. In this example, we have 2 files, one for each piece (injection and resolution). The __init__.py will be in charge of injecting the plugin, and example_resolver.py will have the logic implementation to resolve for a fabricated did:example method.

"},{"location":"features/DIDResolution/#__init-__py","title":"__init __.py","text":"

```python= from aries_cloudagent.config.injection_context import InjectionContext from ..resolver.did_resolver import DIDResolver

from .example_resolver import ExampleResolver

async def setup(context: InjectionContext): \"\"\"Setup the plugin.\"\"\" registry = context.inject(DIDResolver) resolver = ExampleResolver() await resolver.setup(context) registry.append(resolver)

#### `example_resolver.py`\n\n```python=\nimport re\nfrom typing import Pattern\nfrom aries_cloudagent.resolver.base import BaseDIDResolver, ResolverType\n\nclass ExampleResolver(BaseDIDResolver):\n    \"\"\"ExampleResolver class.\"\"\"\n\n    def __init__(self):\n        super().__init__(ResolverType.NATIVE)\n        # Alternatively, ResolverType.NON_NATIVE\n        self._supported_did_regex = re.compile(\"^did:example:.*$\")\n\n    @property\n    def supported_did_regex(self) -> Pattern:\n        \"\"\"Return compiled regex matching supported DIDs.\"\"\"\n        return self._supported_did_regex\n\n    async def setup(self, context):\n        \"\"\"Setup the example resolver (none required).\"\"\"\n\n    async def _resolve(self, profile: Profile, did: str) -> dict:\n        \"\"\"Resolve example DIDs.\"\"\"\n        if did != \"did:example:1234abcd\":\n            raise DIDNotFound(\n                \"We only actually resolve did:example:1234abcd. Sorry!\"\n            )\n\n        return {\n            \"@context\": \"https://www.w3.org/ns/did/v1\",\n            \"id\": \"did:example:1234abcd\",\n            \"verificationMethod\": [{\n                \"id\": \"did:example:1234abcd#keys-1\",\n                \"type\": \"Ed25519VerificationKey2018\",\n                \"controller\": \"did:example:1234abcd\",\n                \"publicKeyBase58\": \"H3C2AVvLMv6gmMNam3uVAjZpfkcJCwDwnZn6z3wXmqPV\"\n            }],\n            \"service\": [{\n                \"id\": \"did:example:1234abcd#did-communication\",\n                \"type\": \"did-communication\",\n                \"serviceEndpoint\": \"https://agent.example.com/\"\n            }]\n        }\n

"},{"location":"features/DIDResolution/#errors","title":"Errors","text":"

There are 3 different errors associated with resolution in ACA-Py that could be used for development purposes.

  • ResolverError
  • Base class for resolver exceptions.
  • DIDNotFound
  • Raised when DID is not found using DID method specific algorithm.
  • DIDMethodNotSupported
  • Raised when no resolver is registered for a given did method.
"},{"location":"features/DIDResolution/#using-resolver-plugins","title":"Using Resolver Plugins","text":"

In this section, the Github Resolver Plugin found here will be used as an example plugin to work with. This resolver resolves did:github DIDs.

The resolution algorithm is simple: for the github DID did:github:dbluhm, the method specific identifier dbluhm (a GitHub username) is used to lookup an index.jsonld file in the ghdid repository in that GitHub users profile. See GitHub DID Method Specification for more details.

To use this plugin, first install it into your project's python environment:

pip install git+https://github.com/dbluhm/acapy-resolver-github\n

Then, invoke ACA-Py as you normally do with the addition of:

$ aca-py start \\\n    --plugin acapy_resolver_github \\\n    # ... the remainder of your startup arguments\n

Or add the following to your configuration file:

plugin:\n  - acapy_resolver_github\n

The following is a fully functional Dockerfile encapsulating this setup:

```dockerfile= FROM ghcr.io/hyperledger/aries-cloudagent-python:py3.9-0.12.1 RUN pip3 install git+https://github.com/dbluhm/acapy-resolver-github

CMD [\"aca-py\", \"start\", \"-it\", \"http\", \"0.0.0.0\", \"3000\", \"-ot\", \"http\", \"-e\", \"http://localhost:3000\", \"--admin\", \"0.0.0.0\", \"3001\", \"--admin-insecure-mode\", \"--no-ledger\", \"--plugin\", \"acapy_resolver_github\"]

To use the above dockerfile:\n\n```shell\ndocker build -t resolver-example .\ndocker run --rm -it -p 3000:3000 -p 3001:3001 resolver-example\n

"},{"location":"features/DIDResolution/#directory-of-resolver-plugins","title":"Directory of Resolver Plugins","text":"
  • Github Resolver
  • Universal Resolver
  • DIDComm Resolver
"},{"location":"features/DIDResolution/#references","title":"References","text":"

https://www.w3.org/TR/did-core/ https://w3c-ccg.github.io/did-resolution/

"},{"location":"features/DevReadMe/","title":"Developer's Read Me for Hyperledger Aries Cloud Agent - Python","text":"

See the README for details about this repository and information about how the Aries Cloud Agent - Python fits into the Aries project and relates to Indy.

"},{"location":"features/DevReadMe/#table-of-contents","title":"Table of Contents","text":"
  • Introduction
  • Developer Demos
  • Running
  • Configuring ACA-PY: Command Line Parameters
  • Docker
  • Locally Installed
  • About ACA-Py Command Line Parameters
  • Provisioning Secure Storage
  • Mediation
  • Multi-tenancy
  • JSON-LD Credentials
  • Developing
  • Prerequisites
  • Running In A Dev Container
  • Running Locally
  • Logging
  • Running Tests
  • Running Aries Agent Test Harness Tests
  • Development Workflow
  • Publishing Releases
  • Dynamic Injection of Services
"},{"location":"features/DevReadMe/#introduction","title":"Introduction","text":"

Aries Cloud Agent Python (ACA-Py) is a configurable, extensible, non-mobile Aries agent that implements an easy way for developers to build decentralized identity services that use verifiable credentials.

The information on this page assumes you are developer with a background in decentralized identity, Aries, DID Methods, and verifiable credentials, especially AnonCreds. If you aren't familiar with those concepts and projects, please use our Getting Started Guide to learn more.

"},{"location":"features/DevReadMe/#developer-demos","title":"Developer Demos","text":"

To put ACA-Py through its paces at the command line, checkout our demos page.

"},{"location":"features/DevReadMe/#running","title":"Running","text":""},{"location":"features/DevReadMe/#configuring-aca-py-command-line-parameters","title":"Configuring ACA-PY: Command Line Parameters","text":"

ACA-Py agent instances are configured through the use of command line parameters, environment variables and/or YAML files. All of the configurations settings can be managed using any combination of the three methods (command line parameters override environment variables override YAML). Use the --help option to discover the available command line parameters. There are a lot of them--for good and bad.

"},{"location":"features/DevReadMe/#docker","title":"Docker","text":"

To run a docker container based on the code in the current repo, use the following commands from the root folder of the repository to check the version, list the available modes of operation, and see all of the command line parameters:

scripts/run_docker --version\nscripts/run_docker --help\nscripts/run_docker provision --help\nscripts/run_docker start --help\n
"},{"location":"features/DevReadMe/#locally-installed","title":"Locally Installed","text":"

If you installed the PyPi package, the executable aca-py should be available on your PATH.

Use the following commands from the root folder of the repository to check the version, list the available modes of operation, and see all of the command line parameters:

aca-py --version\naca-py --help\naca-py provision --help\naca-py start --help\n

If you get an error about a missing module indy (e.g. ModuleNotFoundError: No module named 'indy') when running aca-py, you will need to install the Indy libraries from the command line:

pip install python3_indy\n

Once that completes successfully, you should be able to run aca-py --version and the other examples above.

"},{"location":"features/DevReadMe/#about-aca-py-command-line-parameters","title":"About ACA-Py Command Line Parameters","text":"

ACA-Py invocations are separated into two types - initially provisioning an agent (provision) and starting a new agent process (start). This separation enables not having to pass in some encryption-related parameters required for provisioning when starting an agent instance. This improves security in production deployments.

When starting an agent instance, at least one inbound and one outbound transport MUST be specified.

For example:

aca-py start    --inbound-transport http 0.0.0.0 8000 \\\n                --outbound-transport http\n

or

aca-py start    --inbound-transport http 0.0.0.0 8000 \\\n                --inbound-transport ws 0.0.0.0 8001 \\\n                --outbound-transport ws \\\n                --outbound-transport http\n

ACA-Py ships with both inbound and outbound transport drivers for http and ws (websockets). Additional transport drivers can be added as pluggable implementations. See the existing implementations in the transports module for getting started on adding a new transport.

Most configuration parameters are provided to the agent at startup. Refer to the Running sections above for details on listing the available command line parameters.

"},{"location":"features/DevReadMe/#provisioning-secure-storage","title":"Provisioning Secure Storage","text":"

It is possible to provision a secure storage (sometimes called a wallet--but not the same as a mobile wallet app) before running an agent to avoid passing in the secure storage seed on every invocation of an agent (e.g. on every aca-py start ...).

aca-py provision --wallet-type askar --seed $SEED\n

For additional provision options, execute aca-py provision --help.

Additional information about secure storage options and configuration settings can be found here.

"},{"location":"features/DevReadMe/#mediation","title":"Mediation","text":"

ACA-Py can also run in mediator mode - ACA-Py can be run as a mediator (it can mediate connections for other agents), or it can connect to an external mediator to mediate its own connections. See the docs on mediation for more info.

"},{"location":"features/DevReadMe/#multi-tenancy","title":"Multi-tenancy","text":"

ACA-Py can also be started in multi-tenant mode. This allows the agent to serve multiple tenants, that each have their own wallet. See the docs on multi-tenancy for more info.

"},{"location":"features/DevReadMe/#json-ld-credentials","title":"JSON-LD Credentials","text":"

ACA-Py can issue W3C Verifiable Credentials using Linked Data Proofs. See the docs on JSON-LD Credentials for more info.

"},{"location":"features/DevReadMe/#developing","title":"Developing","text":""},{"location":"features/DevReadMe/#prerequisites","title":"Prerequisites","text":"

Docker must be installed to run software locally and to run the test suite.

"},{"location":"features/DevReadMe/#running-in-a-dev-container","title":"Running In A Dev Container","text":"

The dev container environment is a great way to deploy agents quickly with code changes and an interactive debug session. Detailed information can be found in the Docs On Devcontainers. It is specific for vscode, so if you prefer another code editor or IDE you will need to figure it out on your own, but it is highly recommended to give this a try.

One thing to be aware of is, unlike the demo, none of the steps are automated. You will need to create public dids, connections and all the other steps yourself. Using the demo and studying the flow and then copying them with your dev container debug session is a great way to learn how everything works.

"},{"location":"features/DevReadMe/#running-locally","title":"Running Locally","text":"

Another way to develop locally is by using the provided Docker scripts to run the ACA-Py software.

./scripts/run_docker start <args>\n

For example:

./scripts/run_docker start --inbound-transport http 0.0.0.0 10000 --outbound-transport http --debug --log-level DEBUG\n

To enable the ptvsd Python debugger for Visual Studio/VSCode use the --debug command line parameter.

Any ports you will be using from the docker container should be published using the PORTS environment variable. For example:

PORTS=\"5000:5000 8000:8000 10000:10000\" ./scripts/run_docker start --inbound-transport http 0.0.0.0 10000 --outbound-transport http --debug --log-level DEBUG\n

Refer to the previous section for instructions on how to run ACA-Py.

"},{"location":"features/DevReadMe/#logging","title":"Logging","text":"

You can find more details about logging and log levels here.

"},{"location":"features/DevReadMe/#running-tests","title":"Running Tests","text":"

To run the ACA-Py test suite, use the following script:

./scripts/run_tests\n

To run the ACA-Py test suite with ptvsd debugger enabled:

./scripts/run_tests --debug\n

To run specific tests pass parameters as defined by pytest:

./scripts/run_tests aries_cloudagent/protocols/connections\n

To run the tests including Indy SDK and related dependencies, run the script:

./scripts/run_tests_indy\n
"},{"location":"features/DevReadMe/#running-aries-agent-test-harness-tests","title":"Running Aries Agent Test Harness Tests","text":"

You can run a full suite of integration tests using the Aries Agent Test Harness (AATH).

Check out and run AATH tests as follows (this tests the aca-py main branch):

git clone https://github.com/hyperledger/aries-agent-test-harness.git\ncd aries-agent-test-harness\n./manage build -a acapy-main\n./manage run -d acapy-main -t @AcceptanceTest -t ~@wip\n

The manage script is described in detail here, including how to modify the AATH code to run the tests against your aca-py repo/branch.

"},{"location":"features/DevReadMe/#development-workflow","title":"Development Workflow","text":"

We use Ruff to enforce a coding style guide.

We use Black to automatically format code.

Please write tests for the work that you submit.

Tests should reside in a directory named tests alongside the code under test. Generally, there is one test file for each file module under test. Test files must have a name starting with test_ to be automatically picked up the test runner.

There are some good examples of various test scenarios for you to work from including mocking external imports and working with async code so take a look around!

The test suite also displays the current code coverage after each run so you can see how much of your work is covered by tests. Use your best judgement for how much coverage is sufficient.

Please also refer to the contributing guidelines and code of conduct.

"},{"location":"features/DevReadMe/#publishing-releases","title":"Publishing Releases","text":"

The publishing document provides information on tagging a release and publishing the release artifacts to PyPi.

"},{"location":"features/DevReadMe/#dynamic-injection-of-services","title":"Dynamic Injection of Services","text":"

The Agent employs a dynamic injection system whereby providers of base classes are registered with the RequestContext instance, currently within conductor.py. Message handlers and services request an instance of the selected implementation using context.inject(BaseClass); for instance the wallet instance may be injected using wallet = context.inject(BaseWallet). The inject method normally throws an exception if no implementation of the base class is provided, but can be called with required=False for optional dependencies (in which case a value of None may be returned).

Providers are registered with either context.injector.bind_instance(BaseClass, instance) for previously-constructed (singleton) object instances, or context.injector.bind_provider(BaseClass, provider) for dynamic providers. In some cases it may be desirable to write a custom provider which switches implementations based on configuration settings, such as the wallet provider.

The BaseProvider classes in the config.provider module include ClassProvider, which can perform dynamic module inclusion when given the combined module and class name as a string (for instance aries_cloudagent.wallet.indy.IndyWallet). ClassProvider accepts additional positional and keyword arguments to be passed into the class constructor. Any of these arguments may be an instance of ClassProvider.Inject(BaseClass), allowing dynamic injection of dependencies when the class instance is instantiated.

"},{"location":"features/Endorser/","title":"Transaction Endorser Support","text":"

ACA-Py supports an Endorser Protocol, that allows an un-privileged agent (an \"Author\") to request another agent (the \"Endorser\") to sign their transactions so they can write these transactions to the ledger. This is required on Indy ledgers, where new agents will typically be granted only \"Author\" privileges.

Transaction Endorsement is built into the protocols for Schema, Credential Definition and Revocation, and endorsements can be explicitly requested, or ACA-Py can be configured to automate the endorsement workflow.

"},{"location":"features/Endorser/#setting-up-connections-between-authors-and-endorsers","title":"Setting up Connections between Authors and Endorsers","text":"

Since endorsement involves message exchange between two agents, these agents must establish and configure a connection before any endorsements can be provided or requested.

Once the connection is established and active, the \"role\" (either Author or Endorser) is attached to the connection using the /transactions/{conn_id}/set-endorser-role endpoint. For Authors, they must additionally configure the DID of the Endorser as this is required when the Author signs the transaction (prior to sending to the Endorser for endorsement) - this is done using the /transactions/{conn_id}/set-endorser-info endpoint.

"},{"location":"features/Endorser/#requesting-transaction-endorsement","title":"Requesting Transaction Endorsement","text":"

Transaction Endorsement is built into the protocols for Schema, Credential Definition and Revocation. When executing one of the endpoints that will trigger a ledger write, an endorsement protocol can be explicitly requested by specifying the connection_id (of the Endorser connection) and create_transaction_for_endorser.

(Note that endorsement requests can be automated, see the section on \"Configuring ACA-Py\" below.)

If transaction endorsement is requested, then ACA-Py will create a transaction record (this will be returned by the endpoint, rather than the Schema, Cred Def, etc) and the following endpoints must be invoked:

Protocol Step Author Endorser Request Endorsement /transactions/create-request Endorse Transaction /transactions/{tran_id}/endorse Write Transaction /transactions/{tran_id}/write

Additional endpoints allow the Endorser to reject the endorsement request, or for the Author to re-submit or cancel a request.

Web hooks will be triggered to notify each ACA-Py agent of any transaction request, endorsements, etc to allow the controller to react to the event, or the process can be automated via command-line parameters (see below).

"},{"location":"features/Endorser/#configuring-aca-py-for-auto-or-manual-endorsement","title":"Configuring ACA-Py for Auto or Manual Endorsement","text":"

The following start-up parameters are supported by ACA-Py:

Endorsement:\n  --endorser-protocol-role <endorser-role>\n                        Specify the role ('author' or 'endorser') which this agent will participate. Authors will request transaction endorsement from an Endorser. Endorsers will endorse transactions from\n                        Authors, and may write their own transactions to the ledger. If no role (or 'none') is specified then the endorsement protocol will not be used and this agent will write transactions to\n                        the ledger directly. [env var: ACAPY_ENDORSER_ROLE]\n  --endorser-public-did <endorser-public-did>\n                        For transaction Authors, specify the public DID of the Endorser agent who will be endorsing transactions. Note this requires that the connection be made using the Endorser's public\n                        DID. [env var: ACAPY_ENDORSER_PUBLIC_DID]\n  --endorser-alias <endorser-alias>\n                        For transaction Authors, specify the alias of the Endorser connection that will be used to endorse transactions. [env var: ACAPY_ENDORSER_ALIAS]\n  --auto-request-endorsement\n                        For Authors, specify whether to automatically request endorsement for all transactions. (If not specified, the controller must invoke the request endorse operation for each\n                        transaction.) [env var: ACAPY_AUTO_REQUEST_ENDORSEMENT]\n  --auto-endorse-transactions\n                        For Endorsers, specify whether to automatically endorse any received endorsement requests. (If not specified, the controller must invoke the endorsement operation for each transaction.)\n                        [env var: ACAPY_AUTO_ENDORSE_TRANSACTIONS]\n  --auto-write-transactions\n                        For Authors, specify whether to automatically write any endorsed transactions. (If not specified, the controller must invoke the write transaction operation for each transaction.) [env\n                        var: ACAPY_AUTO_WRITE_TRANSACTIONS]\n  --auto-create-revocation-transactions\n                        For Authors, specify whether to automatically create transactions for a cred def's revocation registry. (If not specified, the controller must invoke the endpoints required to create\n                        the revocation registry and assign to the cred def.) [env var: ACAPY_CREATE_REVOCATION_TRANSACTIONS]\n  --auto-promote-author-did\n                        For Authors, specify whether to automatically promote a DID to the wallet public DID after writing to the ledger. [env var: ACAPY_AUTO_PROMOTE_AUTHOR_DID]\n
"},{"location":"features/Endorser/#how-aca-py-handles-endorsements","title":"How Aca-py Handles Endorsements","text":"

Internally, the Endorsement functionality is implemented as a protocol, and is implemented consistently with other protocols:

  • a routes.py file exposes the admin endpoints
  • handler files implement responses to any received Endorse protocol messages
  • a manager.py file implements common functionality that is called from both the routes.py and handler classes (as well as from other classes that need to interact with Endorser functionality)

The Endorser makes use of the Event Bus (links to the PR which links to a hackmd doc) to notify other protocols of any Endorser events of interest. For example, after a Credential Definition endorsement is received, the TransactionManager writes the endorsed transaction to the ledger and uses the Event Bus to notify the Credential Definition manager that it can do any required post-processing (such as writing the cred def record to the wallet, initiating the revocation registry, etc.).

The overall architecture can be illustrated as:

"},{"location":"features/Endorser/#create-credential-definition-and-revocation-registry","title":"Create Credential Definition and Revocation Registry","text":"

An example of an Endorser flow is as follows, showing how a credential definition endorsement is received and processed, and optionally kicks off the revocation registry process:

You can see that there is a standard endorser flow happening each time there is a ledger write (illustrated in the \"Endorser\" process).

At the end of each endorse sequence, the TransactionManager sends a notification via the EventBus so that any dependant processing can continue. Each Router is responsible for listening and responding to these notifications if necessary.

For example:

  • Once the credential definition is created, a revocation registry must be created (for revocable cred defs)
  • Once the revocation registry is created, a revocation entry must be created
  • Potentially, the cred def status could be updated once the revocation entry is completed

Using the EventBus decouples the event sequence. Any functions triggered by an event notification are typically also available directly via Admin endpoints.

"},{"location":"features/Endorser/#create-did-and-promote-to-public","title":"Create DID and Promote to Public","text":"

... and an example of creating a DID and promoting it to public (and creating an ATTRIB for the endpoint:

You can see the same endorsement processes in this sequence.

Once the DID is written, the DID can (optionally) be promoted to the public DID, which will also invoke an ATTRIB transaction to write the endpoint.

"},{"location":"features/JsonLdCredentials/","title":"JSON-LD Credentials in ACA-Py","text":"

By design Hyperledger Aries is credential format agnostic. This means you can use it for any credential format, as long as an RFC is defined for the specific credential format. ACA-Py currently supports two types of credentials, Indy and JSON-LD credentials. This document describes how to use the latter by making use of W3C Verifiable Credentials using Linked Data Proofs.

"},{"location":"features/JsonLdCredentials/#table-of-contents","title":"Table of Contents","text":"
  • General Concept
  • BBS+
  • Preparing to Issue a Credential
  • JSON-LD Context
    • Writing JSON-LD Contexts
  • Signature Suite
  • Did Method
    • did:sov
    • did:key
  • Issuing Credentials
  • Retrieving Issued Credentials
  • Present Proof
  • VC-API
"},{"location":"features/JsonLdCredentials/#general-concept","title":"General Concept","text":"

The rest of this guide assumes some basic understanding of W3C Verifiable Credentials, JSON-LD and Linked Data Proofs. If you're not familiar with some of these concepts, the following resources can help you get started:

  • Verifiable Credentials Data Model
  • JSON-LD Articles and Presentations
  • Linked Data Proofs
"},{"location":"features/JsonLdCredentials/#bbs","title":"BBS+","text":"

BBS+ credentials offer a lot of privacy preserving features over non-ZKP credentials. Therefore we recommend to always use BBS+ credentials over non-ZKP credentials. To get started with BBS+ credentials it is recommended to at least read RFC 0646: W3C Credential Exchange using BBS+ Signatures for a general overview.

Some other resources that can help you get started with BBS+ credentials:

  • BBS+ Signatures 2020
  • Video: BBS+ Credential Exchange in Hyperledger Aries
"},{"location":"features/JsonLdCredentials/#preparing-to-issue-a-credential","title":"Preparing to Issue a Credential","text":"

Contrary to Indy credentials, JSON-LD credentials do not need a schema or credential definition to issue credentials. Everything required to issue the credential is embedded into the credential itself using Linked Data Contexts.

"},{"location":"features/JsonLdCredentials/#json-ld-context","title":"JSON-LD Context","text":"

It is required that every property key in the document can be mapped to an IRI. This means the property key must either be an IRI by default, or have the shorthand property mapped in the @context of the document. If you have properties that are not mapped to IRIs, the Issue Credential API will throw the following error:

<x> attributes dropped. Provide definitions in context to correct. [<missing-properties>]

For credentials the https://www.w3.org/2018/credentials/v1 context MUST always be the first context. In addition, when issuing BBS+ credentials the https://w3id.org/security/bbs/v1 URL MUST be present in the context. For convenience this URL will be automatically added to the @context of the credential if not present.

{\n  \"@context\": [\n    \"https://www.w3.org/2018/credentials/v1\",\n    \"https://other-contexts.com\"\n  ]\n}\n
"},{"location":"features/JsonLdCredentials/#writing-json-ld-contexts","title":"Writing JSON-LD Contexts","text":"

Writing JSON-LD contexts can be a daunting task and is out of scope of this guide. Generally you should try to make use of already existing vocabularies. Some examples are the vocabularies defined in the W3C Credentials Community Group:

  • Vaccination Certificate Vocabulary
  • Citizenship Vocabulary
  • Traceability Vocabulary

Verifiable credentials are not around that long, so there aren't that many vocabularies ready to use. If you can't use one of the existing vocabularies it is still beneficial to lean on already defined lower level contexts. http://schema.org has a large registry of definitions that can be used to build new contexts. The example vocabularies linked above all make use of types from http://schema.org.

For the remainder of this guide, we will be using the example UniversityDegreeCredential type and https://www.w3.org/2018/credentials/examples/v1 context from the Verifiable Credential Data Model. You should not use this for production use cases.

"},{"location":"features/JsonLdCredentials/#signature-suite","title":"Signature Suite","text":"

Before issuing a credential you must determine a signature suite to use. ACA-Py currently supports three signature suites for issuing credentials:

  • Ed25519Signature2018 - Very well supported. No zero knowledge proofs or selective disclosure.
  • Ed25519Signature2020 - Updated version of 2018 suite.
  • BbsBlsSignature2020 - Newer, but supports zero knowledge proofs and selective disclosure.

Generally you should always use BbsBlsSignature2020 as it allows the holder to derive a new credential during the proving, meaning it doesn't have to disclose all fields and doesn't have to reveal the signature.

"},{"location":"features/JsonLdCredentials/#did-method","title":"DID Method","text":"

Besides the JSON-LD context, we need a DID to use for issuing the credential. ACA-Py currently supports two did methods for issuing credentials:

  • did:sov - Can only be used for Ed25519Signature2018 signature suite.
  • did:key - Can be used for both Ed25519Signature2018 and BbsBlsSignature2020 signature suites.
"},{"location":"features/JsonLdCredentials/#didsov","title":"did:sov","text":"

When using did:sov you need to make sure to use a public did so other agents can resolve the did. It is also important the other agent is using the same indy ledger for resolving the did. You can get the public did using the /wallet/did/public endpoint. For backwards compatibility the did is returned without did:sov prefix. When using the did for issuance make sure this prepend this to the did. (so DViYrCMPWfuLiY7LLs8giB becomes did:sov:DViYrCMPWfuLiY7LLs8giB)

"},{"location":"features/JsonLdCredentials/#didkey","title":"did:key","text":"

A did:key did is not anchored to a ledger, but embeds the key directly in the identifier part of the did. See the did:key Method Specification for more information.

You can create a did:key using the /wallet/did/create endpoint with the following body. Use ed25519 for Ed25519Signature2018, bls12381g2 for BbsBlsSignature2020.

{\n  \"method\": \"key\",\n  \"options\": {\n    \"key_type\": \"bls12381g2\" // or ed25519\n  }\n}\n

The above call will return a did that looks something like this: did:key:zUC7FsmhhifDTuYXdwYES2UpCpWwYieJRapC6oEWqyt5KfJ3ztfLzYnbWjuXQ5drYaKaho3FjxrfDB81gtAJKjbM4yAmBuNoj3YKDXqW151KkkYarpEoEVWMMcN5zPfjCrQ8Saj

"},{"location":"features/JsonLdCredentials/#issuing-credentials","title":"Issuing Credentials","text":"

Issuing JSON-LD credentials is only possible with the issue credential v2 protocol (/issue-credential-2.0).

The format used for exchanging JSON-LD credentials is defined in RFC 0593: JSON-LD Credential Attachment format. The API in ACA-Py exactly matches the formats as described in this RFC, with the most important (from the ACA-Py API perspective) being aries/ld-proof-vc-detail@v1.0. Read the RFC to see the exact properties required to construct a valid Linked Data Proof VC Detail.

All endpoints in the API use the aries/ld-proof-vc-detail@v1.0 format. We'll use /issue-credential-2.0/send as an example, but it works the same for the other endpoints. In contrast to issuing Indy credentials, JSON-LD credentials do not require a credential preview; all properties should be directly embedded in the credential.

The detail should be included under the filter.ld_proof property. To issue a credential, call the /issue-credential-2.0/send endpoint with the example body below, replacing the connection_id and issuer values. The value of issuer should be the DID that you created in the DID Method section above.

If you don't have auto-respond-credential-offer and auto-store-credential enabled in the ACA-Py config, you will need to call /issue-credential-2.0/records/{cred_ex_id}/send-request and /issue-credential-2.0/records/{cred_ex_id}/store to finalize the credential issuance.

See the example body
{\n  \"connection_id\": \"ddc23de9-359f-465c-b66e-f7c5a0cc9a57\",\n  \"filter\": {\n    \"ld_proof\": {\n      \"credential\": {\n        \"@context\": [\n          \"https://www.w3.org/2018/credentials/v1\",\n          \"https://www.w3.org/2018/credentials/examples/v1\"\n        ],\n        \"type\": [\"VerifiableCredential\", \"UniversityDegreeCredential\"],\n        \"issuer\": \"did:key:zUC7FsmhhifDTuYXdwYES2UpCpWwYieJRapC6oEWqyt5KfJ3ztfLzYnbWjuXQ5drYaKaho3FjxrfDB81gtAJKjbM4yAmBuNoj3YKDXqW151KkkYarpEoEVWMMcN5zPfjCrQ8Saj\",\n        \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n        \"credentialSubject\": {\n          \"degree\": {\n            \"type\": \"BachelorDegree\",\n            \"name\": \"Bachelor of Science and Arts\"\n          },\n          \"college\": \"Faber College\"\n        }\n      },\n      \"options\": {\n        \"proofType\": \"BbsBlsSignature2020\"\n      }\n    }\n  }\n}\n
"},{"location":"features/JsonLdCredentials/#retrieving-issued-credentials","title":"Retrieving Issued Credentials","text":"

After issuance, the credential is stored in the wallet. Because the structure of JSON-LD credentials is so different from Indy credentials, a separate endpoint is provided to retrieve W3C credentials.

Call the /credentials/w3c endpoint to retrieve all JSON-LD credentials in your wallet. See the detail below for an example response based on the issued credential from the Issuing Credentials paragraph above.

See the example response
{\n  \"results\": [\n    {\n      \"contexts\": [\n        \"https://www.w3.org/2018/credentials/examples/v1\",\n        \"https://www.w3.org/2018/credentials/v1\",\n        \"https://w3id.org/security/bbs/v1\"\n      ],\n      \"types\": [\"UniversityDegreeCredential\", \"VerifiableCredential\"],\n      \"schema_ids\": [],\n      \"issuer_id\": \"did:key:zUC7FsmhhifDTuYXdwYES2UpCpWwYieJRapC6oEWqyt5KfJ3ztfLzYnbWjuXQ5drYaKaho3FjxrfDB81gtAJKjbM4yAmBuNoj3YKDXqW151KkkYarpEoEVWMMcN5zPfjCrQ8Saj\",\n      \"subject_ids\": [],\n      \"proof_types\": [\"BbsBlsSignature2020\"],\n      \"cred_value\": {\n        \"@context\": [\n          \"https://www.w3.org/2018/credentials/v1\",\n          \"https://www.w3.org/2018/credentials/examples/v1\",\n          \"https://w3id.org/security/bbs/v1\"\n        ],\n        \"type\": [\"VerifiableCredential\", \"UniversityDegreeCredential\"],\n        \"issuer\": \"did:key:zUC7FsmhhifDTuYXdwYES2UpCpWwYieJRapC6oEWqyt5KfJ3ztfLzYnbWjuXQ5drYaKaho3FjxrfDB81gtAJKjbM4yAmBuNoj3YKDXqW151KkkYarpEoEVWMMcN5zPfjCrQ8Saj\",\n        \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n        \"credentialSubject\": {\n          \"degree\": {\n            \"type\": \"BachelorDegree\",\n            \"name\": \"Bachelor of Science and Arts\"\n          },\n          \"college\": \"Faber College\"\n        },\n        \"proof\": {\n          \"type\": \"BbsBlsSignature2020\",\n          \"proofPurpose\": \"assertionMethod\",\n          \"verificationMethod\": \"did:key:zUC7FsmhhifDTuYXdwYES2UpCpWwYieJRapC6oEWqyt5KfJ3ztfLzYnbWjuXQ5drYaKaho3FjxrfDB81gtAJKjbM4yAmBuNoj3YKDXqW151KkkYarpEoEVWMMcN5zPfjCrQ8Saj#zUC7FsmhhifDTuYXdwYES2UpCpWwYieJRapC6oEWqyt5KfJ3ztfLzYnbWjuXQ5drYaKaho3FjxrfDB81gtAJKjbM4yAmBuNoj3YKDXqW151KkkYarpEoEVWMMcN5zPfjCrQ8Saj\",\n          \"created\": \"2021-05-03T12:31:28.561945\",\n          \"proofValue\": \"iUFtRGdLLCWxKx8VD3oiFBoRMUFKhSitTzMsfImXm6OF0d8il+Z40aLz8S7m8EcXPQhRjcWWL9jkfcf1SDifD4CvxVg69NvB7hZyIIz9hwAyi3LmTm0ez4NDRCKyieBuzqKbfM2eACWn/ilhOJBm6w==\"\n        }\n      },\n      \"cred_tags\": {},\n      \"record_id\": \"541ddbce5760497d98e68917be8c05bd\"\n    }\n  ]\n}\n
"},{"location":"features/JsonLdCredentials/#present-proof","title":"Present Proof","text":"

\u26a0\ufe0f TODO: https://github.com/hyperledger/aries-cloudagent-python/pull/1125

"},{"location":"features/JsonLdCredentials/#vc-api","title":"VC-API","text":"

In order to support these functions outside of the respective DIDComm protocols, a set of endpoints conforming to the vc-api specification is available. These endpoints should be used by a controller when building an identity platform.

These endpoints include:

  • GET /vc/credentials -> returns a list of all stored json-ld credentials
  • GET /vc/credentials/{id} -> returns a json-ld credential based on its ID
  • POST /vc/credentials/issue -> signs a credential
  • POST /vc/credentials/verify -> verifies a credential
  • POST /vc/credentials/store -> stores an issued credential
  • POST /vc/presentations/prove -> proves a presentation
  • POST /vc/presentations/verify -> verifies a presentation

To learn more about using these endpoints, please refer to the available postman collection.
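
For example, a controller could sign a credential through the issue endpoint. A sketch assuming the Python requests library; the admin URL and DIDs are illustrative, and the request shape follows the vc-api specification:

import requests\n\nADMIN_URL = \"http://localhost:8031\"  # assumed admin endpoint\n\nbody = {\n    \"credential\": {\n        \"@context\": [\"https://www.w3.org/2018/credentials/v1\"],\n        \"type\": [\"VerifiableCredential\"],\n        \"issuer\": \"did:key:z6Mk...\",  # illustrative did:key held in the wallet\n        \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n        \"credentialSubject\": {\"id\": \"did:key:z6Mk...\"}\n    },\n    \"options\": {}\n}\nsigned_vc = requests.post(f\"{ADMIN_URL}/vc/credentials/issue\", json=body).json()\n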

"},{"location":"features/JsonLdCredentials/#external-suite-provider","title":"External Suite Provider","text":"

It is possible to extend the signature suite support, including outsourcing signing JSON-LD Credentials to some other component (KMS, HSM, etc.), using the ExternalSuiteProvider interface. This interface can be implemented and registered via plugin. The plugged in provider will be used by ACA-Py's LDP-VC subsystem to create a LinkedDataProof object, which is responsible for signing normalized credential values.

This interface enables taking advantage of ACA-Py's JSON-LD processing to construct and format the credential while exposing a simple interface to a plugin to make it responsible for signatures. This can also be combined with plugged in DID Methods, VerificationKeyStrategy, and other pluggable components.

See this example project here for more details on the interface and its usage: https://github.com/dbluhm/acapy-ld-signer

"},{"location":"features/Mediation/","title":"Mediation docs","text":""},{"location":"features/Mediation/#concepts","title":"Concepts","text":"
  • DIDComm Message Forwarding - Sending an encrypted message to its recipient by first sending it to a third party responsible for forwarding the message on. Message contents are encrypted once for the recipient then wrapped in a forward message encrypted to the third party.
  • Mediator - An agent that forwards messages to a client over a DIDComm connection.
  • Mediated Agent or Mediation client - The agent(s) to which a mediator is willing to forward messages.
  • Mediation Request - A message from a client to a mediator requesting mediation or forwarding.
  • Keylist - The list of public keys used by the mediator to look up which connection a forward message should be sent to. Each mediated agent is responsible for maintaining its keylist with the mediator.
  • Keylist Update - A message from a client to a mediator informing the mediator of changes to the keylist.
  • Default Mediator - A mediator to be used with every newly created DIDComm connection.
  • Mediation Connection - Connection between the mediator and the mediated agent or client. Agents can use as many mediators as the identity owner sees fit. Requests for mediation are handled on a per connection basis.
  • See Aries RFC 0211: Coordinate Mediation Protocol for additional details on message attributes and more.
"},{"location":"features/Mediation/#command-line-arguments","title":"Command Line Arguments","text":"
  • --open-mediation - Instructs mediators to automatically grant all incoming mediation requests.
  • --mediator-invitation - Receive invitation, send mediation request and set as default mediator.
  • --mediator-connections-invite - Connect to mediator through a connection invitation. If not specified, connect using an OOB invitation.
  • --default-mediator-id - Set pre-existing mediator as default mediator.
  • --clear-default-mediator - Clear the stored default mediator.

The minimum set of arguments required to enable mediation are:

aca-py start ... \\\n    --open-mediation\n

To automate the mediation process on startup, additionally specify the following argument on the mediated agent (not the mediator):

aca-py start ... \\\n    --mediator-invitation \"<a multi-use invitation url from the mediator>\"\n

If a default mediator has already been established, then the --default-mediator-id argument can be used instead of the --mediator-invitation.

"},{"location":"features/Mediation/#didcomm-messages","title":"DIDComm Messages","text":"

See Aries RFC 0211: Coordinate Mediation Protocol.

"},{"location":"features/Mediation/#admin-api","title":"Admin API","text":"
  • GET mediation/requests - Return a list of all mediation records. Filter by conn_id, state, mediator_terms and recipient_terms.
  • GET mediation/requests/{mediation_id} - Retrieve a mediation record by id.
  • DELETE mediation/requests/{mediation_id} - Delete mediation record by id.
  • POST mediation/requests/{mediation_id}/grant - As a mediator, grant a stored mediation request and send granted message to client.
  • POST mediation/requests/{mediation_id}/deny - As a mediator, deny a stored mediation request and send denied message to client.
  • POST mediation/request/{conn_id} - Send a mediation request to the connection identified by the given connection ID.
  • GET mediation/keylists - Returns the key list associated with a connection. Filter on client for keys mediated by other agents and server for keys mediated by this agent.
  • POST mediation/keylists/{mediation_id}/send-keylist-update - Send a keylist update message to the mediator identified by the given mediation ID. Updates are contained in the body of the request.
  • POST mediation/keylists/{mediation_id}/send-keylist-query - Send a keylist query message to the mediator identified by the given mediation ID.
  • GET mediation/default-mediator (PR pending) - Retrieve the currently set default mediator.
  • PUT mediation/{mediation_id}/default-mediator (PR pending) - Set the mediator identified by the given mediation ID as the default mediator.
  • DELETE mediation/default-mediator (PR pending) - Clear the currently set default mediator (mediation status is maintained and remains functional, just not used as the default).
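
A sketch of a client requesting mediation over an existing connection (assuming the Python requests library; the admin URL and connection id are illustrative):

import requests\n\nADMIN_URL = \"http://localhost:8031\"  # assumed client admin endpoint\nconn_id = \"...\"  # an existing connection with the mediator\n\n# Request mediation over the existing connection\nrecord = requests.post(f\"{ADMIN_URL}/mediation/request/{conn_id}\", json={}).json()\nmediation_id = record[\"mediation_id\"]\n\n# Later, list granted mediation records to confirm the grant\ngranted = requests.get(f\"{ADMIN_URL}/mediation/requests\", params={\"state\": \"granted\"}).json()\n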
"},{"location":"features/Mediation/#mediator-message-flow-overview","title":"Mediator Message Flow Overview","text":""},{"location":"features/Mediation/#using-a-mediator","title":"Using a Mediator","text":"

After establishing a connection with a mediator that has granted mediation, you can use that mediator's id for future DIDComm connections. When creating, receiving, or accepting an invitation intended to be mediated, provide a mediation_id with the desired mediator's id. If you are using a single mediator for all future connections, you can set a default mediation id; if no mediation_id is provided, the default mediation id will be used instead.
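
For example, an invitation can be received under a specific mediator by passing mediation_id (a sketch assuming the Python requests library; the admin URL and ids are illustrative):

import requests\n\nADMIN_URL = \"http://localhost:8031\"  # assumed admin endpoint\nmediation_id = \"...\"  # a granted mediation record id\ninvitation = {}  # the invitation payload received out of band\n\n# Receive the invitation, routing the new connection through the mediator\nrequests.post(\n    f\"{ADMIN_URL}/out-of-band/receive-invitation\",\n    params={\"mediation_id\": mediation_id},\n    json=invitation,\n)\n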

"},{"location":"features/Multicredentials/","title":"Multi-Credentials","text":"

Multiple AnonCreds credentials can be combined to present a presentation proof with an \"and\" logical operator: for instance, a verifier can ask for the \"name\" claim from an eID and the \"address\" claim from a bank statement to obtain a single proof that is either valid or invalid. With the Present Proof Protocol v2, it is possible to have \"and\" and \"or\" logical operators for AnonCreds and/or W3C Verifiable Credentials.

With the Present Proof Protocol v2, verifiers can ask for a combination of credentials as proof. For instance, a verifier can ask for a claim from an AnonCreds credential and a verifiable presentation from a W3C Verifiable Credential. This opens up the possibility of using Aries Cloud Agent Python for rather complex presentation proof requests that wouldn't be possible without support for both AnonCreds and W3C Verifiable Credentials.

Moreover, it is possible to make similar presentation proof requests using the \"or\" logical operator. For instance, a verifier can ask for either an eID in AnonCreds format or an eID in W3C Verifiable Credential format. This has the potential to solve the interoperability problem of different credential formats and ecosystems from a user point of view, by shifting the requirement of holding/accepting different credential formats from identity holders to verifiers. Here again, using Aries Cloud Agent Python as the underlying verifier agent can tackle such complex presentation proof requests, since the agent is capable of verifying both types of credential formats and proof types.

In the future, it may even be possible to include an mDoc as an attachment with an \"and\" or \"or\" logical operator, along with AnonCreds and/or W3C Verifiable Credentials. For this to happen, ACA-Py needs either the capability to validate mDocs internally or to connect to third-party endpoints that validate them and return a response.

"},{"location":"features/Multiledger/","title":"Multi-ledger in ACA-Py","text":"

ACA-Py supports using multiple Indy ledgers (both IndySdk and IndyVdr) for resolving a DID. For read requests, multiple ledgers are checked in parallel dynamically, according to the logic detailed in Read Requests Ledger Selection. For write requests, dynamic allocation of the write_ledger is supported. Configurable write ledgers can be assigned using is_write in the configuration or using any of the --genesis-url, --genesis-file, and --genesis-transactions startup (ACA-Py) arguments. If no write ledger is assigned, a ConfigError is raised.

More background information including problem statement, design (algorithm) and more can be found here.

"},{"location":"features/Multiledger/#table-of-contents","title":"Table of Contents","text":"
  • Usage
  • Example config file
  • Config properties
  • Multi-ledger Admin API
  • Ledger Selection
  • Read Requests
    • For checking ledger in parallel
  • Write Requests
  • A Special Warning for TAA Acceptance
  • Impact on other ACA-Py function
  • Known Issues
"},{"location":"features/Multiledger/#usage","title":"Usage","text":"

Multi-ledger is disabled by default. You can enable support for multiple ledgers using the --genesis-transactions-list startup parameter. This parameter accepts a string which is the path to the YAML configuration file. For example:

--genesis-transactions-list ./aries_cloudagent/config/multi_ledger_config.yml

If --genesis-transactions-list is specified, then --genesis-url, --genesis-file, --genesis-transactions should not be specified.

"},{"location":"features/Multiledger/#example-config-file","title":"Example config file","text":"
- id: localVON\n  is_production: false\n  genesis_url: \"http://host.docker.internal:9000/genesis\"\n- id: bcovrinTest\n  is_production: true\n  is_write: true\n  genesis_url: \"http://test.bcovrin.vonx.io/genesis\"\n  endorser_did: \"9QPa6tHvBHttLg6U4xvviv\"\n  endorser_alias: \"endorser_test\"\n- id: greenlightDev\n  is_production: true\n  is_write: true\n  genesis_url: \"http://test.bcovrin.vonx.io/genesis\"\n

Note: The is_write property means that the ledger is write configurable. With reference to the above config example, both the bcovrinTest and greenlightDev ledgers are write configurable (greenlightDev is no longer available; in the above it's pointing to BCovrin Test as well). By default, on startup bcovrinTest will be the write ledger, as it is the topmost write configurable production ledger; see the selection rule under Write Requests below. Using the PUT /ledger/{ledger_id}/set-write-ledger endpoint, either greenlightDev or bcovrinTest can be set as the write ledger.

Note 2: The greenlightDev ledger is no longer available, so both ledger entries in the example above and below intentionally point to the same ledger URL.

- id: localVON\n  is_production: false\n  is_write: true\n  genesis_url: \"http://host.docker.internal:9000/genesis\"\n- id: bcovrinTest\n  is_production: true\n  genesis_url: \"http://test.bcovrin.vonx.io/genesis\"\n- id: greenlightDev\n  is_production: true\n  genesis_url: \"http://test.bcovrin.vonx.io/genesis\"\n

Note: In the example config above, localVON will be the write ledger; as there are no write configurable production ledgers, the topmost write configurable non-production ledger is chosen.

"},{"location":"features/Multiledger/#config-properties","title":"Config properties","text":"

For each ledger, the required properties are as follows:

  • id*: The id (or name) of the ledger; it is also used as the pool name if none is provided
  • is_production*: Whether the ledger is a production ledger. This is used by the pool selector algorithm to know which ledger to use for certain interactions (i.e. prefer production ledgers over non-production ledgers)

For connecting to a ledger, one of the following needs to be specified:

  • genesis_file: The path to the genesis file to use for connecting to an Indy ledger.
  • genesis_transactions: String of genesis transactions to use for connecting to an Indy ledger.
  • genesis_url: The url from which to download the genesis transactions to use for connecting to an Indy ledger.
  • is_write: Whether this ledger is writable. At least one write ledger must be specified, unless running in read-only mode. Multiple write ledgers can be specified in config.

Optional properties:

  • pool_name: name of the indy pool to be opened
  • keepalive: how many seconds to keep the ledger open
  • socks_proxy
  • endorser_did: Endorser public DID registered on the ledger, needed for supporting Endorser protocol at multi-ledger level.
  • endorser_alias: Endorser alias for this ledger, needed for supporting Endorser protocol at multi-ledger level.

Note: Both endorser_did and endorser_alias are part of the endorser info. Whenever a write ledger is selected using PUT /ledger/{ledger_id}/set-write-ledger, the endorser info associated with that ledger in the config updates the endorser.endorser_public_did and endorser.endorser_alias profile setting respectively.

"},{"location":"features/Multiledger/#multi-ledger-admin-api","title":"Multi-ledger Admin API","text":"

Multi-ledger related actions are grouped under the ledger topic in the SwaggerUI.

  • GET /ledger/config: Returns the multiple ledger configuration currently in use
  • GET /ledger/get-write-ledger: Returns the current active/set write_ledger's ledger_id
  • GET /ledger/get-write-ledgers: Returns the list of ledger_ids of the available write ledgers
  • PUT /ledger/{ledger_id}/set-write-ledger: Set active write_ledger's ledger_id
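
For example, switching the active write ledger from a controller (a sketch assuming the Python requests library; the admin URL is illustrative, and bcovrinTest refers to the example config above):

import requests\n\nADMIN_URL = \"http://localhost:8031\"  # assumed admin endpoint\n\n# Inspect the configured ledgers and the current write ledger\nconfig = requests.get(f\"{ADMIN_URL}/ledger/config\").json()\ncurrent = requests.get(f\"{ADMIN_URL}/ledger/get-write-ledger\").json()\n\n# Switch the write ledger to another configured write ledger\nrequests.put(f\"{ADMIN_URL}/ledger/bcovrinTest/set-write-ledger\")\n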
"},{"location":"features/Multiledger/#ledger-selection","title":"Ledger Selection","text":""},{"location":"features/Multiledger/#read-requests","title":"Read Requests","text":"

The following process is executed for these functions in ACA-Py:

  1. get_schema
  2. get_credential_definition
  3. get_revoc_reg_def
  4. get_revoc_reg_entry
  5. get_key_for_did
  6. get_all_endpoints_for_did
  7. get_endpoint_for_did
  8. get_nym_role
  9. get_revoc_reg_delta

If multiple ledgers are configured, the IndyLedgerRequestsExecutor service extracts the DID from the record identifier and executes the check below; otherwise it returns the BaseLedger instance.

"},{"location":"features/Multiledger/#for-checking-ledger-in-parallel","title":"For checking ledger in parallel","text":"
  • lookup_did_in_configured_ledgers function
  • If the calling function (above) is in items 1-4, then check the DID in cache for a corresponding applicable ledger_id. If found, return the ledger info, else continue.
  • Otherwise, launch parallel _get_ledger_by_did tasks for each of the configured ledgers.
  • As these tasks get finished, construct applicable_prod_ledgers and applicable_non_prod_ledgers dictionaries, each with self_certified and non_self_certified inner dict which are sorted by the original order or index.
  • Order/preference for selection: self_certified > production > non_production
    • Checks production ledger where the DID is self_certified
    • Checks non_production ledger where the DID is self_certified
    • Checks production ledger where the DID is not self_certified
    • Checks non_production ledger where the DID is not self_certified
  • Return an applicable ledger if found, else raise an exception.
  • _get_ledger_by_did function
  • Build and submit GET_NYM
  • Wait for a response for 10 seconds, if timed out return None
  • Parse response
  • Validate state proof
  • Check if DID is self certified
  • Returns ledger info to lookup_did_in_configured_ledgers
"},{"location":"features/Multiledger/#write-requests","title":"Write Requests","text":"

On startup, the first configured applicable ledger is assigned as the write_ledger (BaseLedger), the selection is dependent on the order (top-down) and whether it is production or non_production. For instance, considering this example configuration, ledger bcovrinTest will be set as write_ledger as it is the topmost production ledger. If no production ledgers are included in configuration then the topmost non_production ledger is selected.

"},{"location":"features/Multiledger/#a-special-warning-for-taa-acceptance","title":"A Special Warning for TAA Acceptance","text":"

When you run in multi-ledger mode, ACA-Py will use the pool-name (or id) specified in the ledger configuration file for each ledger.

(When running in single-ledger mode, ACA-Py uses default as the ledger name.)

If you are running against a ledger in write mode, and the ledger requires you to accept a Transaction Author Agreement (TAA), ACA-Py stores the TAA acceptance status in the wallet in a non-secrets record, using the ledger's pool_name as a key.

This means that if you are upgrading from single-ledger to multi-ledger mode, you will need to either:

  • set the id for your writable ledger to default (in your ledgers.yaml file)

or:

  • re-accept the TAA once you restart your ACA-Py in multi-ledger mode

Once you re-start ACA-Py, you can check the GET /ledger/taa endpoint to verify your TAA acceptance status.
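
A quick way to check from a controller (a sketch assuming the Python requests library; the admin URL is illustrative):

import requests\n\nADMIN_URL = \"http://localhost:8031\"  # assumed admin endpoint\n\ntaa = requests.get(f\"{ADMIN_URL}/ledger/taa\").json()\nprint(taa[\"result\"][\"taa_accepted\"])  # empty until the TAA has been (re-)accepted\n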

"},{"location":"features/Multiledger/#impact-on-other-aca-py-function","title":"Impact on other ACA-Py function","text":"

There should be no impact/change in functionality to any ACA-Py protocols.

IndySdkLedger was refactored by replacing the wallet: IndySdkWallet instance variable with profile: Profile; accordingly, .aries_cloudagent/indy/credex/verifier, .aries_cloudagent/indy/models/pres_preview, .aries_cloudagent/indy/sdk/profile.py, .aries_cloudagent/indy/sdk/verifier, and ./aries_cloudagent/indy/verifier were also updated.

Added build_and_return_get_nym_request and submit_get_nym_request helper functions to IndySdkLedger and IndyVdrLedger.

Best practices/feedback emerging from the Askar session deadlock issue and the endorser refactoring PR were also addressed here, by not leaving sessions open unnecessarily and changing context.session to context.profile.session, etc.

These changes are made here:

  • ./aries_cloudagent/ledger/routes.py
  • ./aries_cloudagent/messaging/credential_definitions/routes.py
  • ./aries_cloudagent/messaging/schemas/routes.py
  • ./aries_cloudagent/protocols/actionmenu/v1_0/routes.py
  • ./aries_cloudagent/protocols/actionmenu/v1_0/util.py
  • ./aries_cloudagent/protocols/basicmessage/v1_0/routes.py
  • ./aries_cloudagent/protocols/coordinate_mediation/v1_0/handlers/keylist_handler.py
  • ./aries_cloudagent/protocols/coordinate_mediation/v1_0/routes.py
  • ./aries_cloudagent/protocols/endorse_transaction/v1_0/routes.py
  • ./aries_cloudagent/protocols/introduction/v0_1/handlers/invitation_handler.py
  • ./aries_cloudagent/protocols/introduction/v0_1/routes.py
  • ./aries_cloudagent/protocols/issue_credential/v1_0/handlers/credential_issue_handler.py
  • ./aries_cloudagent/protocols/issue_credential/v1_0/handlers/credential_offer_handler.py
  • ./aries_cloudagent/protocols/issue_credential/v1_0/handlers/credential_proposal_handler.py
  • ./aries_cloudagent/protocols/issue_credential/v1_0/handlers/credential_request_handler.py
  • ./aries_cloudagent/protocols/issue_credential/v1_0/routes.py
  • ./aries_cloudagent/protocols/issue_credential/v2_0/routes.py
  • ./aries_cloudagent/protocols/present_proof/v1_0/handlers/presentation_handler.py
  • ./aries_cloudagent/protocols/present_proof/v1_0/handlers/presentation_proposal_handler.py
  • ./aries_cloudagent/protocols/present_proof/v1_0/handlers/presentation_request_handler.py
  • ./aries_cloudagent/protocols/present_proof/v1_0/routes.py
  • ./aries_cloudagent/protocols/trustping/v1_0/routes.py
  • ./aries_cloudagent/resolver/routes.py
  • ./aries_cloudagent/revocation/routes.py
"},{"location":"features/Multiledger/#known-issues","title":"Known Issues","text":"
  • When in multi-ledger mode and switching ledgers (e.g.: the agent is registered on Ledger A and has published its DID there, and now wants to \"move\" to Ledger B) there is an issue that will cause the registration to the new ledger to fail.
"},{"location":"features/Multitenancy/","title":"Multi-tenancy in ACA-Py","text":"

Most deployments of ACA-Py use a single wallet for all operations. This means all connections, credentials, keys, and everything else is stored in the same wallet and shared between all controllers of the agent. Multi-tenancy in ACA-Py allows multiple tenants to use the same ACA-Py instance with a different context. All tenants get their own encrypted wallet that only holds their own data.

This allows ACA-Py to be used for a wider range of use cases. One use case could be a company that creates a wallet for each department. Each department has full control over the actions it performs while sharing a single instance for easy maintenance. Another use case could be an Issuer-Hosted Custodial Agent, where it is required to host the agent on behalf of someone else.

"},{"location":"features/Multitenancy/#table-of-contents","title":"Table of Contents","text":"
  • General Concept
  • Base and Sub Wallets
  • Usage
  • Multi-tenant Admin API
  • Managed vs Unmanaged Mode
  • Managed Mode
  • Unmanaged Mode
  • Mode Usage
  • Message Routing
  • Relaying
  • Mediation
  • Webhooks
  • Webhook URLs
  • Identifying the wallet
  • Authentication
  • Getting a token
    • Method 1: Register new tenant
    • Method 2: Get tenant token
  • JWT Secret
  • SwaggerUI
  • Tenant Management
  • Update a tenant
  • Remove a tenant
  • Per tenant settings
"},{"location":"features/Multitenancy/#general-concept","title":"General Concept","text":"

When multi-tenancy is enabled in ACA-Py there is still a single agent running, however, some of the resources are now shared between the tenants of the agent. Each tenant has their own wallet, with their own DIDs, connections, and credentials. Transports and most of the settings are still shared between agents. Each wallet uses the same endpoint, so to the outside world, it is not obvious multiple tenants are using the same agent.

"},{"location":"features/Multitenancy/#base-and-sub-wallets","title":"Base and Sub Wallets","text":"

Multi-tenancy in ACA-Py makes a distinction between a base wallet and sub wallets.

The wallets used by the different tenants are called sub wallets. A sub wallet is almost identical to a wallet when multi-tenancy is disabled. This means that you can do everything with it that a single-tenant ACA-Py instance can also do.

The base wallet however, takes on a different role and has limited functionality. Its main function is to manage the sub wallets, which can be done using the Multi-tenant Admin API. It stores all settings and information about the different sub wallets and will route incoming messages to the corresponding sub wallets. See Message Routing for more details. All other features are disabled for the base wallet. This means it cannot issue credentials, present proof, or do any of the other actions sub wallets can do. This is to keep a clear hierarchical difference between base and sub wallets. For this reason, the base wallet should generally not be provisioned using the --wallet-seed argument: not only is it unnecessary for sub wallet management operations, but it would also require the DID to be correctly registered on the ledger for the service to start up correctly.

"},{"location":"features/Multitenancy/#usage","title":"Usage","text":"

Multi-tenancy is disabled by default. You can enable support for multiple wallets using the --multitenant startup parameter. To also be able to manage wallets for the tenants, the multi-tenant admin API can be enabled using the --multitenant-admin startup parameter. See Multi-tenant Admin API below for more info on the admin API.

The --jwt-secret startup parameter is required when multi-tenancy is enabled. This is used for JWT creation and verification. See Authentication below for more info.

Example:

# This enables multi-tenancy in ACA-Py\nmultitenant: true\n\n# This enables the admin API for multi-tenancy. More information below\nmultitenant-admin: true\n\n# This sets the secret used for JWT creation/verification for sub wallets\njwt-secret: Something very secret\n
"},{"location":"features/Multitenancy/#multi-tenant-admin-api","title":"Multi-tenant Admin API","text":"

The multi-tenant admin API allows you to manage wallets in ACA-Py. Only the base wallet can manage wallets, so you can't, for example, create a wallet in the context of a sub wallet (using the Authorization header as specified in Authentication).

Multi-tenancy related actions are grouped under the /multitenancy path or the multitenancy topic in the SwaggerUI. As mentioned above, the multi-tenant admin API is disabled by default, even when multi-tenancy is enabled. This is to allow for more flexible agent configuration (e.g. horizontal scaling where only a single instance exposes the admin API). To enable the multi-tenant admin API, the --multitenant-admin startup parameter can be used.

See the SwaggerUI for the exact API definition for multi-tenancy.

"},{"location":"features/Multitenancy/#managed-vs-unmanaged-mode","title":"Managed vs Unmanaged Mode","text":"

Multi-tenancy in ACA-Py is designed with two key management modes in mind.

"},{"location":"features/Multitenancy/#managed-mode","title":"Managed Mode","text":"

In managed mode, ACA-Py will manage the key for the wallet. This is the easiest configuration as it allows ACA-Py to fully control the wallet. When a message is received from another agent it can immediately unlock the wallet and process the message. The wallet key is stored encrypted in the base wallet.

"},{"location":"features/Multitenancy/#unmanaged-mode","title":"Unmanaged Mode","text":"

In unmanaged mode, ACA-Py won't manage the key for the wallet. The key is not stored in the base wallet, which means the key to unlock the wallet needs to be provided whenever the wallet is used. When a message from another agent is received, ACA-Py cannot immediately unlock the wallet and process the message. See Authentication for more info.

It is important to note that unmanaged mode doesn't provide much additional security over managed mode. The key is still processed by the agent, and therefore trust is required. It could however provide some benefit in the case a multi-tenant agent is compromised, as the agent doesn't store the key to unlock the wallet.

Although support for unmanaged mode is mostly in place, the receiving of messages from other agents in unmanaged mode is not supported yet. This means unmanaged mode can not be used yet.

"},{"location":"features/Multitenancy/#mode-usage","title":"Mode Usage","text":"

The mode used can be specified when creating a wallet using the key_management_mode parameter.

// POST /multitenancy/wallet\n{\n  // ... other params ...\n  \"key_management_mode\": \"managed\" // or \"unmanaged\"\n}\n
"},{"location":"features/Multitenancy/#message-routing","title":"Message Routing","text":"

In multi-tenant mode, when ACA-Py receives a message from another agent, it will need to determine which tenant to route the message to. Hyperledger Aries defines two types of routing methods, mediation and relaying.

See the Mediators and Relays RFC for an in-depth description of the difference between the two concepts.

"},{"location":"features/Multitenancy/#relaying","title":"Relaying","text":"

In multi-tenant mode, ACA-Py still exposes a single endpoint for each transport. This means it can't route messages to sub wallets based on the endpoint. To resolve this, the base wallet acts as a relay for all sub wallets. As can be seen in the architecture diagram above, all messages go through the base wallet. Whenever a sub wallet creates a new key or connection, it will be registered at the base wallet. This allows the base wallet to look at the recipient keys for a message and determine which wallet it needs to route to.

"},{"location":"features/Multitenancy/#mediation","title":"Mediation","text":"

ACA-Py allows messages to be routed through a mediator, and multi-tenancy can be used in combination with external mediators. The following scenarios are possible:

  1. The base wallet has a default mediator set that will be used by sub wallets.
     • Use --mediator-invitation to connect to the mediator, request mediation, and set it as the default mediator.
     • Use default-mediator-id if you're already connected to the mediator and mediation is granted (e.g. after restart).
     • When a sub wallet creates a connection or key it will be registered at the mediator via the base wallet connection. The base wallet will still act as a relay and route the messages to the correct sub wallets.
     • Pro: Not every wallet needs to create a connection with the mediator.
     • Con: Sub wallets have no control over the mediator.
  2. A sub wallet creates a connection with the mediator and requests mediation itself.
     • Use mediation as you would in a non-multi-tenant agent; however, the base wallet will still act as a relay.
     • You can set the default mediator to use for connections (using the mediation API).
     • Pro: Sub wallets have control over the mediator.
     • Con: Every wallet needs to create its own connection with the mediator and request mediation.

The main tradeoff between option 1. and 2. is redundancy and control. Option 1. doesn't require every sub wallet to create a new connection with the mediator and request mediation. When all sub wallets are going to use the same mediator, this can be a huge benefit. Option 2. gives more control over the mediator being used. This could be useful if e.g. all wallets use a different mediator.

A combination of option 1. and 2. is also possible. In this case, two mediators will be used and the sub wallet mediator will forward to the base wallet mediator, which will, in turn, forward to the ACA-Py instance.

+---------------------+      +----------------------+      +--------------------+\n| Sub wallet mediator | ---> | Base wallet mediator | ---> | Multi-tenant agent |\n+---------------------+      +----------------------+      +--------------------+\n
"},{"location":"features/Multitenancy/#webhooks","title":"Webhooks","text":""},{"location":"features/Multitenancy/#webhook-urls","title":"Webhook URLs","text":"

ACA-Py makes use of webhook events to call back to the controller. Multiple webhook targets can be specified, however, in multi-tenant mode, it may be desirable to specify different webhook targets per wallet.

When creating a wallet, the wallet_dispatch_type parameter can be used to specify how webhooks for the wallet should be dispatched. The options are:

  • default: Dispatch only to webhooks associated with this wallet.
  • base: Dispatch only to webhooks associated with the base wallet.
  • both: Dispatch to both webhook targets.

If either default or both is specified, you can set the webhook URLs specific to this wallet using the wallet_webhook_urls option.

Example:

// POST /multitenancy/wallet\n{\n  // ... other params ...\n  \"wallet_dispatch_type\": \"default\",\n  \"wallet_webhook_urls\": [\n    \"https://webhook-url.com/path\",\n    \"https://another-url.com/site\"\n  ]\n}\n
"},{"location":"features/Multitenancy/#identifying-the-wallet","title":"Identifying the wallet","text":"

When the webhook URLs of the base wallet are used or when multiple wallets specify the same webhook URL it can be hard to identify the wallet an event belongs to. To resolve this each webhook event will include the wallet id the event corresponds to.

For HTTP events the wallet id is included as the x-wallet-id header. For WebSockets, the wallet id is included in the enclosing JSON object.

HTTP example:

POST <webhook-url>/{topic} [headers=x-wallet-id]\n{\n    // event payload\n}\n

WebSocket example:

{\n  \"topic\": \"{topic}\",\n  \"wallet_id\": \"{wallet_id}\",\n  \"payload\": {\n    // event payload\n  }\n}\n
"},{"location":"features/Multitenancy/#authentication","title":"Authentication","text":"

When multi-tenancy is not enabled you can authenticate with the agent using the x-api-key header. As there is only a single wallet, this provides sufficient authentication and authorization.

For sub wallets, an additional authentication method is introduced using JSON Web Tokens (JWTs). A token parameter is returned after creating a wallet or calling the get token endpoint. This token must be provided for every admin API call you want to perform for the wallet using the Bearer authorization scheme.

Example

GET /connections [headers=\"Authorization: Bearer {token}]\n

The Authorization header is in addition to the Admin API key. So if the admin-api-key is enabled (which should be enabled in production) both the Authorization and the x-api-key headers should be provided when making calls to a sub wallet. For calls to a base wallet, only the x-api-key should be provided.
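
A sketch of a sub wallet call carrying both headers (assuming the Python requests library; the admin URL and key are illustrative):

import requests\n\nADMIN_URL = \"http://localhost:8031\"  # assumed admin endpoint\ntoken = \"...\"  # sub wallet JWT from wallet creation or the token endpoint\n\nheaders = {\n    \"x-api-key\": \"admin-api-key\",        # the Admin API key, if enabled\n    \"Authorization\": f\"Bearer {token}\",  # sub wallet authorization\n}\nconnections = requests.get(f\"{ADMIN_URL}/connections\", headers=headers).json()\n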

"},{"location":"features/Multitenancy/#getting-a-token","title":"Getting a token","text":"

A token can be obtained in two ways. The first is the token parameter returned from the create wallet endpoint (POST /multitenancy/wallet). The second is the get wallet token endpoint (POST /multitenancy/wallet/{wallet_id}/token).

"},{"location":"features/Multitenancy/#method-1-register-new-tenant","title":"Method 1: Register new tenant","text":"

This is the method you use to obtain a token when you haven't already registered a tenant. In this process you first register a tenant; an object containing your tenant token, as well as other useful information such as your wallet id, is then returned to you.

Example

new_tenant='{\n  \"image_url\": \"https://aries.ca/images/sample.png\",\n  \"key_management_mode\": \"managed\",\n  \"label\": \"example-label-02\",\n  \"wallet_dispatch_type\": \"default\",\n  \"wallet_key\": \"example-encryption-key-02\",\n  \"wallet_name\": \"example-name-02\",\n  \"wallet_type\": \"askar\",\n  \"wallet_webhook_urls\": [\n    \"https://example.com/webhook\"\n  ]\n}'\n
echo $new_tenant | curl -X POST \"${ACAPY_ADMIN_URL}/multitenancy/wallet\" \\\n   -H \"Content-Type: application/json\" \\\n   -H \"X-Api-Key: $ACAPY_ADMIN_URL_API_KEY\" \\\n   -d @-\n

Response

{\n  \"settings\": {\n    \"wallet.type\": \"askar\",\n    \"wallet.name\": \"example-name-02\",\n    \"wallet.webhook_urls\": [\n      \"https://example.com/webhook\"\n    ],\n    \"wallet.dispatch_type\": \"default\",\n    \"default_label\": \"example-label-02\",\n    \"image_url\": \"https://aries.ca/images/sample.png\",\n    \"wallet.id\": \"3b64ad0d-f556-4c04-92bc-cd95bfde58cd\"\n  },\n  \"key_management_mode\": \"managed\",\n  \"updated_at\": \"2022-04-01T15:12:35.474975Z\",\n  \"wallet_id\": \"3b64ad0d-f556-4c04-92bc-cd95bfde58cd\",\n  \"created_at\": \"2022-04-01T15:12:35.474975Z\",\n  \"token\": \"eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ3YWxsZXRfaWQiOiIzYjY0YWQwZC1mNTU2LTRjMDQtOTJiYy1jZDk1YmZkZTU4Y2QifQ.A4eWbSR2M1Z6mbjcSLOlciBuUejehLyytCVyeUlxI0E\"\n}\n
"},{"location":"features/Multitenancy/#method-2-get-tenant-token","title":"Method 2: Get tenant token","text":"

This method allows you to retrieve a tenant token for an already registered tenant. To retrieve a token you will need an Admin API key (if your admin is protected with one), the wallet_key, and the wallet_id of the tenant. Note that calling the get tenant token endpoint will invalidate the old token. This is useful if the old token needs to be revoked, but it does mean that you can't have multiple valid authentication tokens for the same wallet; only the most recently generated token is valid.

Example

curl -X POST \"${ACAPY_ADMIN_URL}/multitenancy/wallet/{wallet_id}/token\" \\\n   -H \"Content-Type: application/json\" \\\n   -H \"X-Api-Key: $ACAPY_ADMIN_URL_API_KEY\" \\\n   -d { \"wallet_key\": \"example-encryption-key-02\" }\n

Response

{\n  \"token\": \"eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ3YWxsZXRfaWQiOiIzYjY0YWQwZC1mNTU2LTRjMDQtOTJiYy1jZDk1YmZkZTU4Y2QifQ.A4eWbSR2M1Z6mbjcSLOlciBuUejehLyytCVyeUlxI0E\"\n}\n

In unmanaged mode, the get token endpoint also requires the wallet_key parameter to be included in the request body. The wallet key will be included in the JWT so the wallet can be unlocked when making requests to the admin API.

{\n  \"wallet_id\": \"wallet_id\",\n  // \"wallet_key\" in only present in unmanaged mode\n  \"wallet_key\": \"wallet_key\"\n}\n

In unmanaged mode, sending the wallet_key to unlock the wallet in every request is not \u201csecure\u201d but keeps it simple at the moment. Eventually, the authentication method should be pluggable, and unmanaged mode would just mean that the key to unlock the wallet is not managed by ACA-Py.

"},{"location":"features/Multitenancy/#jwt-secret","title":"JWT Secret","text":"

For deterministic JWT creation and verification between restarts and multiple instances, the same JWT secret would need to be used. Therefore a --jwt-secret param is added to the ACA-Py agent that will be used for JWT creation and verification.

"},{"location":"features/Multitenancy/#swaggerui","title":"SwaggerUI","text":"

When using the SwaggerUI you can click the icon next to each of the endpoints or the Authorize button at the top to set the correct authentication headers. Make sure to also include the Bearer part in the input field. This won't be automatically added.

"},{"location":"features/Multitenancy/#tenant-management","title":"Tenant Management","text":"

After registering a tenant, which effectively creates a subwallet, you may need to update the tenant's information or delete it. The following describes how to accomplish both.

"},{"location":"features/Multitenancy/#update-a-tenant","title":"Update a tenant","text":"

The following properties can be updated for tenants of a multitenancy wallet: image_url, label, wallet_dispatch_type, and wallet_webhook_urls. To update these properties, PUT a request JSON containing the properties you wish to update, along with the updated values, to the /multitenancy/wallet/${TENANT_WALLET_ID} admin endpoint. If the Admin API endpoint is protected, also include the Admin API Key in the request header.

Example

update_tenant='{\n  \"image_url\": \"https://aries.ca/images/sample-updated.png\",\n  \"label\": \"example-label-02-updated\",\n  \"wallet_webhook_urls\": [\n    \"https://example.com/webhook/updated\"\n  ]\n}'\n
echo $update_tenant | curl  -X PUT \"${ACAPY_ADMIN_URL}/multitenancy/wallet/${TENANT_WALLET_ID}\" \\\n   -H \"Content-Type: application/json\" \\\n   -H \"x-api-key: $ACAPY_ADMIN_URL_API_KEY\" \\\n   -d @-\n

Response

{\n  \"settings\": {\n    \"wallet.type\": \"askar\",\n    \"wallet.name\": \"example-name-02\",\n    \"wallet.webhook_urls\": [\n      \"https://example.com/webhook/updated\"\n    ],\n    \"wallet.dispatch_type\": \"default\",\n    \"default_label\": \"example-label-02-updated\",\n    \"image_url\": \"https://aries.ca/images/sample-updated.png\",\n    \"wallet.id\": \"3b64ad0d-f556-4c04-92bc-cd95bfde58cd\"\n  },\n  \"key_management_mode\": \"managed\",\n  \"updated_at\": \"2022-04-01T16:23:58.642004Z\",\n  \"wallet_id\": \"3b64ad0d-f556-4c04-92bc-cd95bfde58cd\",\n  \"created_at\": \"2022-04-01T15:12:35.474975Z\"\n}\n

An Admin API Key is all that is ALLOWED to be included in a request header during an update. Including the Bearer token header will result in a 404: Unauthorized error.

"},{"location":"features/Multitenancy/#remove-a-tenant","title":"Remove a tenant","text":"

The following information is required to delete a tenant:

  • wallet_id
  • wallet_key
  • {Admin_Api_Key} if admin is protected

Example

curl -X POST \"${ACAPY_ADMIN_URL}/multitenancy/wallet/{wallet_id}/remove\" \\\n   -H \"Content-Type: application/json\" \\\n   -H \"x-api-key: $ACAPY_ADMIN_URL_API_KEY\" \\\n   -d '{ \"wallet_key\": \"example-encryption-key-02\" }'\n

Response

{}\n
"},{"location":"features/Multitenancy/#per-tenant-settings","title":"Per tenant settings","text":"

To allow configuring ACA-Py startup parameters/environment variables at a tenant/subwallet level, PR#2233 provides the ability to update the following subset of settings when creating or updating the subwallet:

| Environment Variable | Startup Flag | Setting |
| --- | --- | --- |
| ACAPY_LOG_LEVEL | log-level | log.level |
| ACAPY_INVITE_PUBLIC | invite-public | debug.invite_public |
| ACAPY_PUBLIC_INVITES | public-invites | public_invites |
| ACAPY_AUTO_ACCEPT_INVITES | auto-accept-invites | debug.auto_accept_invites |
| ACAPY_AUTO_ACCEPT_REQUESTS | auto-accept-requests | debug.auto_accept_requests |
| ACAPY_AUTO_PING_CONNECTION | auto-ping-connection | auto_ping_connection |
| ACAPY_MONITOR_PING | monitor-ping | debug.monitor_ping |
| ACAPY_AUTO_RESPOND_MESSAGES | auto-respond-messages | debug.auto_respond_messages |
| ACAPY_AUTO_RESPOND_CREDENTIAL_OFFER | auto-respond-credential-offer | debug.auto_respond_credential_offer |
| ACAPY_AUTO_RESPOND_CREDENTIAL_REQUEST | auto-respond-credential-request | debug.auto_respond_credential_request |
| ACAPY_AUTO_VERIFY_PRESENTATION | auto-verify-presentation | debug.auto_verify_presentation |
| ACAPY_NOTIFY_REVOCATION | notify-revocation | revocation.notify |
| ACAPY_AUTO_REQUEST_ENDORSEMENT | auto-request-endorsement | endorser.auto_request |
| ACAPY_AUTO_WRITE_TRANSACTIONS | auto-write-transactions | endorser.auto_write |
| ACAPY_CREATE_REVOCATION_TRANSACTIONS | auto-create-revocation-transactions | endorser.auto_create_rev_reg |
| ACAPY_ENDORSER_ROLE | endorser-protocol-role | endorser.protocol_role |
  • POST /multitenancy/wallet

Added extra_settings dict field to request schema. extra_settings can be configured in the request body as below:

Example Request

{\n    \"wallet_name\": \" ... \",\n    \"default_label\": \" ... \",\n    \"wallet_type\": \" ... \",\n    \"wallet_key\": \" ... \",\n    \"key_management_mode\": \"managed\",\n    \"wallet_webhook_urls\": [],\n    \"wallet_dispatch_type\": \"base\",\n    \"extra_settings\": {\n        \"ACAPY_LOG_LEVEL\": \"INFO\",\n        \"ACAPY_INVITE_PUBLIC\": true,\n        \"public-invites\": true\n    },\n}\n
echo $new_tenant | curl -X POST \"${ACAPY_ADMIN_URL}/multitenancy/wallet\" \\\n  -H \"Content-Type: application/json\" \\\n  -H \"X-Api-Key: $ACAPY_ADMIN_URL_API_KEY\" \\\n  -d @-\n
  • PUT /multitenancy/wallet/{wallet_id}

Added extra_settings dict field to request schema.

Example Request

  {\n    \"wallet_webhook_urls\": [ ... ],\n    \"wallet_dispatch_type\": \"default\",\n    \"label\": \" ... \",\n    \"image_url\": \" ... \",\n    \"extra_settings\": {\n        \"ACAPY_LOG_LEVEL\": \"INFO\",\n        \"ACAPY_INVITE_PUBLIC\": true,\n        \"ACAPY_PUBLIC_INVITES\": false\n    },\n  }\n
  echo $update_tenant | curl  -X PUT \"${ACAPY_ADMIN_URL}/multitenancy/wallet/${WALLET_ID}\" \\\n   -H \"Content-Type: application/json\" \\\n   -H \"x-api-key: $ACAPY_ADMIN_URL_API_KEY\" \\\n   -d @-\n
"},{"location":"features/PlugIns/","title":"Deeper Dive: Aca-Py Plug-Ins","text":""},{"location":"features/PlugIns/#whats-in-a-plug-in-and-how-does-it-work","title":"What's in a Plug-In and How does it Work?","text":"

Plug-ins are loaded on Aca-Py startup based on the following parameters:

  • --plugin - identifies the plug-in library to load
  • --block-plugin - identifies plug-ins (including built-ins) that are not to be loaded
  • --plugin-config - identify a configuration parameter for a plug-in
  • --plugin-config-value - identify a value for a plug-in configuration

The --plugin parameter specifies a package that is loaded by Aca-Py at runtime, and extends Aca-Py by adding support for additional protocols and message types, and/or extending the Admin API with additional endpoints.
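
For example (a sketch following the startup examples elsewhere in this document; the package name and setting are illustrative):

aca-py start ... \\\n    --plugin my_plugin_package \\\n    --plugin-config-value my_plugin_package.some_setting=true\n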

The original plug-in design (which we will call the \"old\" model) explicitly included message_types.py and routes.py (to add Admin APIs). But functionality was added later (we'll call this the \"new\" model) to allow the plug-in to include a generic setup package that could perform arbitrary initialization. The \"new\" model also includes support for a definition.py file that can specify plug-in version information (major/minor plug-in version, as well as the minimum supported version if another agent is running an older version of the plug-in).

You can discover which plug-ins are installed in an aca-py instance by calling (in the \"server\" section) the GET /plugins endpoint. (Note that this will return all loaded protocols, including the built-ins. You can call the GET /status/config to inspect the Aca-Py configuration, which will include the configuration for the external plug-ins.)

"},{"location":"features/PlugIns/#setup-method","title":"setup method","text":"

If a setup method is provided, it will be called. If not, the message_types.py and routes.py will be explicitly loaded.

This would be in the package/module __init__.py:

# the InjectionContext import shown is the path used within aries_cloudagent\nfrom aries_cloudagent.config.injection_context import InjectionContext\n\nasync def setup(context: InjectionContext):\n    pass\n

TODO I couldn't find an implementation of a custom setup in any of the existing plug-ins, so I'm not completely sure what are the best practices for this option.

"},{"location":"features/PlugIns/#message_typespy","title":"message_types.py","text":"

When loading a plug-in, if there is a message_types.py available, Aca-Py will check the following attributes to initialize the protocol(s):

  • MESSAGE_TYPES - identifies message types supported by the protocol
  • CONTROLLERS - identifies protocol controllers
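
A minimal message_types.py sketch mapping a DIDComm message type to its handler class (the protocol URI and module path are illustrative):

# message_types.py (names and paths are illustrative)\nMESSAGE_TYPES = {\n    \"https://didcomm.org/my-protocol/1.0/ping\": (\n        \"my_plugin.v1_0.messages.ping.Ping\"\n    ),\n}\n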
"},{"location":"features/PlugIns/#routespy","title":"routes.py","text":"

If routes.py is available, then Aca-Py will call the following functions to initialize the Admin endpoints:

  • register() - registers routes for the new Admin endpoints
  • register_events() - registers any events this package will listen for/respond to
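
A minimal routes.py sketch (the endpoint path and handler are illustrative; register_events is omitted):

# routes.py (endpoint and names are illustrative)\nfrom aiohttp import web\n\nasync def my_plugin_status(request: web.Request):\n    return web.json_response({\"ok\": True})\n\nasync def register(app: web.Application):\n    # Called by ACA-Py to add this plug-in's Admin endpoints\n    app.add_routes([web.get(\"/my-plugin/status\", my_plugin_status)])\n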
"},{"location":"features/PlugIns/#definitionpy","title":"definition.py","text":"

If definition.py is available, Aca-Py will read this package to determine protocol version information. An example follows (this is an example that specifies two protocol versions):

versions = [\n    {\n        \"major_version\": 1,\n        \"minimum_minor_version\": 0,\n        \"current_minor_version\": 0,\n        \"path\": \"v1_0\",\n    },\n    {\n        \"major_version\": 2,\n        \"minimum_minor_version\": 0,\n        \"current_minor_version\": 0,\n        \"path\": \"v2_0\",\n    },\n]\n

The attributes are:

  • major_version - specifies the protocol major version
  • current_minor_version - specifies the protocol minor version
  • minimum_minor_version - specifies the minimum supported version (if a lower version is installed in another agent)
  • path - specifies the sub-path within the package for this version
"},{"location":"features/PlugIns/#loading-aca-py-plug-ins-at-runtime","title":"Loading Aca-Py Plug-Ins at Runtime","text":"

The load sequence for a plug-in (the \"Startup\" actor depends on how Aca-Py is running: upgrade, provision, or start):

sequenceDiagram\n  participant Startup\n  Note right of Startup: Configuration is loaded on startup<br/>from aca-py config params\n    Startup->>+ArgParse: configure\n    ArgParse->>settings:  [\"external_plugins\"]\n    ArgParse->>settings:  [\"blocked_plugins\"]\n\n    Startup->>+Conductor: setup()\n      Note right of Conductor: Each configured plug-in is validated and loaded\n      Conductor->>DefaultContext:  build_context()\n      DefaultContext->>DefaultContext:  load_plugins()\n      DefaultContext->>+PluginRegistry:  register_package() (for built-in protocols)\n        PluginRegistry->>PluginRegistry:  register_plugin() (for each sub-package)\n      DefaultContext->>PluginRegistry:  register_plugin() (for non-protocol built-ins)\n      loop for each external plug-in\n      DefaultContext->>PluginRegistry:  register_plugin()\n      alt if a setup method is provided\n        PluginRegistry->>ExternalPlugIn:  has setup\n      else if routes and/or message_types are provided\n        PluginRegistry->>ExternalPlugIn:  has routes\n        PluginRegistry->>ExternalPlugIn:  has message_types\n      end\n      opt if definition is provided\n        PluginRegistry->>ExternalPlugIn:  definition()\n      end\n      end\n      DefaultContext->>PluginRegistry:  init_context()\n        loop for each external plug-in\n        alt if a setup method is provided\n          PluginRegistry->>ExternalPlugIn:  setup()\n        else if a setup method is NOT provided\n          PluginRegistry->>PluginRegistry:  load_protocols()\n          PluginRegistry->>PluginRegistry:  load_protocol_version()\n          PluginRegistry->>ProtocolRegistry:  register_message_types()\n          PluginRegistry->>ProtocolRegistry:  register_controllers()\n        end\n        PluginRegistry->>PluginRegistry:  register_protocol_events()\n      end\n\n      Conductor->>Conductor:  load_transports()\n\n      Note right of Conductor: If the admin server is enabled, plug-in routes are added\n      Conductor->>AdminServer:  create admin server if enabled\n\n    Startup->>Conductor: start()\n      Conductor->>Conductor:  start_transports()\n      Conductor->>AdminServer:  start()\n\n    Note right of Startup: the following represents an<br/>admin server api request\n    Startup->>AdminServer:  setup_context() (called on each request)\n      AdminServer->>PluginRegistry:  register_admin_routes()\n      loop for each external plug-in\n        PluginRegistry->>ExternalPlugIn:  routes.register() (to register endpoints)\n      end
"},{"location":"features/PlugIns/#developing-a-new-plug-in","title":"Developing a New Plug-In","text":"

When developing a new plug-in:

  • If you are providing a new protocol or defining message types, you should include a definition.py file.
  • If you are providing a new protocol or defining message types, you should include a message_types.py file.
  • If you are providing additional Admin endpoints, you should include a routes.py file.
  • If you are providing any other functionality, you should provide a setup.py file to initialize the custom functionality. No guidance is currently available for this option.
"},{"location":"features/PlugIns/#pip-vs-poetry-support","title":"PIP vs Poetry Support","text":"

Most ACA-Py plug-ins support installation using Poetry. It is recommended that your package support installation with either pip or Poetry, to give users of your plug-in maximum flexibility.

"},{"location":"features/PlugIns/#plug-in-demo","title":"Plug-In Demo","text":"

TBD

"},{"location":"features/PlugIns/#aca-py-plug-ins","title":"Aca-Py Plug-ins","text":"

This list was originally published in this hackmd document.

Maintainer | Name | Features | Last Update | Link
--- | --- | --- | --- | ---
BCGov | Redis Events | Inbound/Outbound message queue | Sep 2022 | https://github.com/bcgov/aries-acapy-plugin-redis-events
Hyperledger | Aries Toolbox | UI for ACA-py | Aug 2022 | https://github.com/hyperledger/aries-toolbox
Hyperledger | Aries ACApy Plugin Toolbox | Protocol Handlers | Aug 2022 | https://github.com/hyperledger/aries-acapy-plugin-toolbox
Indicio | Data Transfer | Specific Data import | Aug 2022 | https://github.com/Indicio-tech/aries-acapy-plugin-data-transfer
Indicio | Question & Answer | Non-Aries Protocol | Aug 2022 | https://github.com/Indicio-tech/acapy-plugin-qa
Indicio | Acapy-plugin-pickup | Fetching Messages from Mediator | Aug 2022 | https://github.com/Indicio-tech/acapy-plugin-pickup
Indicio | Machine Readable GF | Governance Framework | Mar 2022 | https://github.com/Indicio-tech/mrgf
Indicio | Cache Redis | Cache for Scalability | Jul 2022 | https://github.com/Indicio-tech/aries-acapy-cache-redis
SICPA Dlab | Kafka Events | Event Bus Integration | Aug 2022 | https://github.com/sicpa-dlab/aries-acapy-plugin-kafka-events
SICPA Dlab | DidComm Resolver | Universal Resolver for DIDComm | Aug 2022 | https://github.com/sicpa-dlab/acapy-resolver-didcomm
SICPA Dlab | Universal Resolver | Multi-ledger Reading | Jul 2021 | https://github.com/sicpa-dlab/acapy-resolver-universal
DDX | mydata-did-protocol | | Oct 2022 | https://github.com/decentralised-dataexchange/acapy-mydata-did-protocol
BCGov | Basic Message Storage | Basic message storage (traction) | Dec 2022 | https://github.com/bcgov/traction/tree/develop/plugins/basicmessage_storage
BCGov | Multi-tenant Provider | Multi-tenant Provider (traction) | Dec 2022 | https://github.com/bcgov/traction/tree/develop/plugins/multitenant_provider
BCGov | Traction Innkeeper | Innkeeper (traction) | Feb 2023 | https://github.com/bcgov/traction/tree/develop/plugins/traction_innkeeper

"},{"location":"features/PlugIns/#references","title":"References","text":"

The following links may be helpful or provide additional context for the current plug-in support. (These are links to issues or pull requests that were raised during plug-in development.)

Configuration params:

  • https://github.com/hyperledger/aries-cloudagent-python/issues/1121
  • https://hackmd.io/ROUzENdpQ12cz3UB9qk1nA
  • https://github.com/hyperledger/aries-cloudagent-python/pull/1226

Loading plug-ins:

  • https://github.com/hyperledger/aries-cloudagent-python/pull/1086

Versioning for plug-ins:

  • https://github.com/hyperledger/aries-cloudagent-python/pull/443
"},{"location":"features/QualifiedDIDs/","title":"Qualified DIDs In ACA-Py","text":""},{"location":"features/QualifiedDIDs/#context","title":"Context","text":"

In the past, ACA-Py has used \"unqualified\" DIDs by convention established early on in the Aries ecosystem, before the concept of Peer DIDs, or DIDs that existed only between peers and were not (necessarily) published to a distributed ledger, fully matured. These \"unqualified\" DIDs were effectively Indy Nyms that had not been published to an Indy network. Key material and service endpoints were communicated by embedding the DID Document for the \"DID\" in DID Exchange request and response messages.

For those familiar with the DID Core Specification, it is a stretch to refer to these unqualified DIDs as DIDs. Usage of these DIDs is being phased out, as dictated by Aries RFC 0793: Unqualified DID Transition, in favor of the did:peer DID Method. ACA-Py's support for this method and its use in DID Exchange and DID Rotation is described below.

"},{"location":"features/QualifiedDIDs/#did-exchange","title":"DID Exchange","text":"

When using DID Exchange as initiated by an Out-of-Band invitation:

  • POST /out-of-band/create-invitation accepts two parameters (in addition to others; see the sketch after this list):
  • use_did_method: a DID Method (options: did:peer:2 did:peer:4) indicating that a DID of that type is created (if necessary), and used in the invitation. If a DID of the type has to be created, it is flagged as the \"invitation\" DID and used in all future invitations so that connection reuse is the default behaviour.
    • This is the recommended approach, and we further recommend using did:peer:4.
  • use_did: a complete DID, which will be used for the invitation being established. This supports the edge case of an entity wanting to use a new DID for every invitation. It is the responsibility of the controller to create the DID before passing it in.
  • If not provided, the 0.11.0 behaviour of an unqualified DID is used.
    • We expect this behaviour will change in a later release to be that use_did_method=\"did:peer:4\" is the default, which is created and (re)used.
  • The provided handshake protocol list must also include didexchange/1.1. Optionally, didexchange/1.0 may also be provided, enabling backwards compatibility with agents that do not yet support didexchange/1.1 and the use of unqualified DIDs.
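To illustrate, here is a hedged controller-side sketch of creating such an invitation with the Python requests library. The Admin API address is an assumption, and whether use_did_method is passed in the body or as a query parameter may vary by release - check the Swagger UI of your running instance:

import requests

ADMIN_URL = "http://localhost:8031"  # assumption: your Admin API address

resp = requests.post(
    f"{ADMIN_URL}/out-of-band/create-invitation",
    params={"auto_accept": "true"},
    json={
        # didexchange/1.1 is required; 1.0 may be added for backwards compatibility
        "handshake_protocols": ["https://didcomm.org/didexchange/1.1"],
        "use_did_method": "did:peer:4",  # assumption: accepted as a body field
    },
)
print(resp.json().get("invitation_url"))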

When receiving an OOB invitation or creating a DID Exchange request to a known Public DID:

  • POST /didexchange/create-request and POST /didexchange/{conn_id}/accept-invitation accept two parameters (in addition to others; see the sketch after this list):
  • use_did_method: a DID Method (options: did:peer:2 did:peer:4) indicating that a DID of that type should be created and used for the connection.
    • This is the recommended approach, and we further recommend using did:peer:4.
  • use_did: a complete DID, which will be used for the connection being established. This supports the edge case of an entity wanting to use the same DID for more than one connection. It is the responsibility of the controller to create the DID before passing it in.
  • If neither option is provided, the 0.11.0 behaviour of an unqualified DID is created if DID Exchange 1.0 is used, and a DID Peer 4 is used if DID Exchange 1.1 is used.
    • We expect this behaviour will change in a later release to be that a did:peer:4 is created and DID Exchange 1.1 is always used.
  • When auto-accept is used with DID Exchange, then an unqualified DID is created if DID Exchange 1.0 is being used, and a DID Peer 4 is used if DID Exchange 1.1 is used.
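Similarly, a hedged sketch of accepting an invitation with a did:peer:4 connection DID; the Admin API address, the connection id value, and the query-parameter placement of use_did_method are assumptions:

import requests

ADMIN_URL = "http://localhost:8031"  # assumption: your Admin API address
conn_id = "3fa85f64-..."  # placeholder: connection record id from the received invitation

resp = requests.post(
    f"{ADMIN_URL}/didexchange/{conn_id}/accept-invitation",
    params={"use_did_method": "did:peer:4"},  # assumption: passed as a query parameter
)
print(resp.json().get("state"))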

With these changes, an existing ACA-Py installation using unqualified DIDs can upgrade to use qualified DIDs:

  • Reactively in 0.12.0 and later, by responding with the same kind of DIDs the other agent uses.
  • Proactively, by adding the use_did or use_did_method parameter on the POST /out-of-band/create-invitation, POST /didexchange/create-request, and POST /didexchange/{conn_id}/accept-invitation endpoints and specifying did:peer:2 or did:peer:4.
  • The other agent must be able to process the selected DID Method.
  • Proactively, by updating to use DID Exchange v1.1 and having the other side auto-accept the connection.
"},{"location":"features/QualifiedDIDs/#did-rotation","title":"DID Rotation","text":"

As part of the transition to qualified DIDs, existing connections may be updated to qualified DIDs using the DID Rotate protocol. This is not strictly required; since DIDComm v1 depends on recipient keys for correlating a received message back to a connection, the DID itself is mostly ignored. However, as we transition to DIDComm v2 or if it is desired to update the keys associated with a connection, DID Rotate may be used to update keys and service endpoints.

The steps to do so are listed below; a sketch using the Admin API follows the list:

  • The rotating party creates a new DID using POST /wallet/did/create (or through the endpoints provided by a plugged in DID Method, if relevant).
  • For example, the rotating party will likely create a new did:peer:4.
  • The rotating party initiates the rotation with POST /did-rotate/{conn_id}/rotate providing the created DID as the to_did in the body of the Admin API request.
  • If the receiving party supports DID rotation, a did_rotate webhook will be emitted indicating success.
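A hedged sketch of those two steps using the Admin API; the address and the wallet DID-create body are assumptions (check your instance's Swagger UI), while the to_did body field is as described above:

import requests

ADMIN_URL = "http://localhost:8031"  # assumption: your Admin API address
conn_id = "3fa85f64-..."  # placeholder: an existing connection id

# 1. Create the new DID (body shown is an assumption; options vary by method)
did_resp = requests.post(f"{ADMIN_URL}/wallet/did/create", json={"method": "did:peer:4"})
new_did = did_resp.json()["result"]["did"]

# 2. Initiate the rotation, passing the created DID as to_did
requests.post(f"{ADMIN_URL}/did-rotate/{conn_id}/rotate", json={"to_did": new_did})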
"},{"location":"features/SelectiveDisclosureJWTs/","title":"SD-JWT Implementation in ACA-Py","text":"

This document describes the implementation of SD-JWTs in ACA-Py according to the Selective Disclosure for JWTs (SD-JWT) Specification, which defines a mechanism for selective disclosure of individual elements of a JSON object used as the payload of a JSON Web Signature structure.

This implementation adds an important privacy-preserving feature to JWTs, since the receiver of an unencrypted JWT can view all claims within. This feature allows the holder to present only a relevant subset of the claims for a given presentation. The issuer includes plaintext claims, called disclosures, outside of the JWT. Each disclosure corresponds to a hidden claim within the JWT. When a holder prepares a presentation, they include along with the JWT only the disclosures corresponding to the claims they wish to reveal. The verifier verifies that the disclosures in fact correspond to claim values within the issuer-signed JWT. The verifier cannot view the claim values not disclosed by the holder.

In addition, this implementation includes an optional mechanism for key binding, which is the concept of binding an SD-JWT to a holder's public key and requiring that the holder prove possession of the corresponding private key when presenting the SD-JWT.

"},{"location":"features/SelectiveDisclosureJWTs/#issuer-instructions","title":"Issuer Instructions","text":"

The issuer determines which claims in an SD-JWT can be selectively disclosable. In this implementation, all claims at all levels of the JSON structure are by default selectively disclosable. If the issuer wishes for certain claims to always be visible, they can indicate which claims should not be selectively disclosable, as described below. Essential verification data such as iss, iat, exp, and cnf are always visible.

The issuer creates a list of JSON paths for the claims that will not be selectively disclosable. Here is an example payload:

{\n    \"birthdate\": \"1940-01-01\",\n    \"address\": {\n        \"street_address\": \"123 Main St\",\n        \"locality\": \"Anytown\",\n        \"region\": \"Anystate\",\n        \"country\": \"US\",\n    },\n    \"nationalities\": [\"US\", \"DE\", \"SA\"],\n}\n
Attribute to access | JSON path
--- | ---
\"birthdate\" | \"birthdate\"
The country attribute within the address dictionary | \"address.country\"
The second item in the nationalities list | \"nationalities[1]\"
All items in the nationalities list | \"nationalities[0:2]\"

The specification defines options for how the issuer can handle nested structures with respect to selective disclosability. As mentioned, all claims at all levels of the JSON structure are by default selectively disclosable.

"},{"location":"features/SelectiveDisclosureJWTs/#option-1-flat-sd-jwt","title":"Option 1: Flat SD-JWT","text":"

The issuer can decide to treat the address claim in the above example payload as a block that can either be disclosed completely or not at all.

The issuer lists out all the claims inside \"address\" in the non_sd_list, but not address itself:

non_sd_list = [\n    \"address.street_address\",\n    \"address.locality\",\n    \"address.region\",\n    \"address.country\",\n]\n
"},{"location":"features/SelectiveDisclosureJWTs/#option-2-structured-sd-jwt","title":"Option 2: Structured SD-JWT","text":"

The issuer may instead decide to make the address claim contents selectively disclosable individually.

The issuer lists only \"address\" in the non_sd_list.

non_sd_list = [\"address\"]\n
"},{"location":"features/SelectiveDisclosureJWTs/#option-3-sd-jwt-with-recursive-disclosures","title":"Option 3: SD-JWT with Recursive Disclosures","text":"

The issuer may also decide to make the address claim contents selectively disclosable recursively, i.e., the address claim is made selectively disclosable as well as its sub-claims.

The issuer lists neither address nor the subclaims of address in the non_sd_list, leaving all with their default selective disclosability. If all claims can be selectively disclosable, the non_sd_list need not be defined explicitly.

"},{"location":"features/SelectiveDisclosureJWTs/#walk-through-of-sd-jwt-implementation","title":"Walk-Through of SD-JWT Implementation","text":""},{"location":"features/SelectiveDisclosureJWTs/#signing-sd-jwts","title":"Signing SD-JWTs","text":""},{"location":"features/SelectiveDisclosureJWTs/#example-input-to-walletsd-jwtsign-endpoint","title":"Example input to /wallet/sd-jwt/sign endpoint","text":"
{\n  \"did\": \"WpVJtxKVwGQdRpQP8iwJZy\",\n  \"headers\": {},\n  \"payload\": {\n    \"sub\": \"user_42\",\n    \"given_name\": \"John\",\n    \"family_name\": \"Doe\",\n    \"email\": \"johndoe@example.com\",\n    \"phone_number\": \"+1-202-555-0101\",\n    \"phone_number_verified\": true,\n    \"address\": {\n      \"street_address\": \"123 Main St\",\n      \"locality\": \"Anytown\",\n      \"region\": \"Anystate\",\n      \"country\": \"US\"\n    },\n    \"birthdate\": \"1940-01-01\",\n    \"updated_at\": 1570000000,\n    \"nationalities\": [\"US\", \"DE\", \"SA\"],\n    \"iss\": \"https://example.com/issuer\",\n    \"iat\": 1683000000,\n    \"exp\": 1883000000\n  },\n  \"non_sd_list\": [\n    \"given_name\",\n    \"family_name\",\n    \"nationalities\"\n  ]\n}\n
"},{"location":"features/SelectiveDisclosureJWTs/#output","title":"Output","text":"
\"eyJ0eXAiOiAiSldUIiwgImFsZyI6ICJFZERTQSIsICJraWQiOiAiZGlkOnNvdjpXcFZKdHhLVndHUWRScFFQOGl3Slp5I2tleS0xIn0.eyJfc2QiOiBbIkR0a21ha3NkZGtHRjFKeDBDY0kxdmxRTmZMcGFnQWZ1N3p4VnBGRWJXeXciLCAiSlJLb1E0QXVHaU1INWJIanNmNVV4YmJFeDh2YzFHcUtvX0l3TXE3Nl9xbyIsICJNTTh0TlVLNUstR1lWd0swX01kN0k4MzExTTgwVi13Z0hRYWZvRkoxS09JIiwgIlBaM1VDQmdadVRMMDJkV0pxSVY4elUtSWhnalJNX1NTS3dQdTk3MURmLTQiLCAiX294WGNuSW5Yai1SV3BMVHNISU5YaHFrRVAwODkwUFJjNDBISWE1NElJMCIsICJhdnRLVW5Sdnc1clV0TnZfUnAwUll1dUdkR0RzcnJPYWJfVjR1Y05RRWRvIiwgInByRXZJbzBseTVtNTVsRUpTQUdTVzMxWGdVTElOalo5ZkxiRG81U1pCX0UiXSwgImdpdmVuX25hbWUiOiAiSm9obiIsICJmYW1pbHlfbmFtZSI6ICJEb2UiLCAibmF0aW9uYWxpdGllcyI6IFt7Ii4uLiI6ICJPdU1wcEhpYzEySjYzWTBIY2Ffd1BVeDJCTGdUQVdZQjJpdXpMY3lvcU5JIn0sIHsiLi4uIjogIlIxczlaU3NYeVV0T2QyODdEYy1DTVYyMEdvREF3WUVHV3c4ZkVKd1BNMjAifSwgeyIuLi4iOiAid0lJbjdhQlNDVkFZcUF1Rks3Nmpra3FjVGFvb3YzcUhKbzU5WjdKWHpnUSJ9XSwgImlzcyI6ICJodHRwczovL2V4YW1wbGUuY29tL2lzc3VlciIsICJpYXQiOiAxNjgzMDAwMDAwLCAiZXhwIjogMTg4MzAwMDAwMCwgIl9zZF9hbGciOiAic2hhLTI1NiJ9.cIsuGTIPfpRs_Z49nZcn7L6NUgxQumMGQpu8K6rBtv-YRiFyySUgthQI8KZe1xKyn5Wc8zJnRcWbFki2Vzw6Cw~WyJmWURNM1FQcnZicnZ6YlN4elJsUHFnIiwgIlNBIl0~WyI0UGc2SmZ0UnRXdGFPcDNZX2tscmZRIiwgIkRFIl0~WyJBcDh1VHgxbVhlYUgxeTJRRlVjbWV3IiwgIlVTIl0~WyJ4dkRYMDBmalpmZXJpTmlQb2Q1MXFRIiwgInVwZGF0ZWRfYXQiLCAxNTcwMDAwMDAwXQ~WyJYOTlzM19MaXhCY29yX2hudFJFWmNnIiwgInN1YiIsICJ1c2VyXzQyIl0~WyIxODVTak1hM1k3QlFiWUpabVE3U0NRIiwgInBob25lX251bWJlcl92ZXJpZmllZCIsIHRydWVd~WyJRN1FGaUpvZkhLSWZGV0kxZ0Vaal93IiwgInBob25lX251bWJlciIsICIrMS0yMDItNTU1LTAxMDEiXQ~WyJOeWtVcmJYN1BjVE1ubVRkUWVxZXl3IiwgImVtYWlsIiwgImpvaG5kb2VAZXhhbXBsZS5jb20iXQ~WyJlemJwQ2lnVlhrY205RlluVjNQMGJ3IiwgImJpcnRoZGF0ZSIsICIxOTQwLTAxLTAxIl0~WyJvd3ROX3I5Z040MzZKVnJFRWhQU05BIiwgInN0cmVldF9hZGRyZXNzIiwgIjEyMyBNYWluIFN0Il0~WyJLQXktZ0VaWmRiUnNHV1dNVXg5amZnIiwgInJlZ2lvbiIsICJBbnlzdGF0ZSJd~WyJPNnl0anM2SU9HMHpDQktwa0tzU1pBIiwgImxvY2FsaXR5IiwgIkFueXRvd24iXQ~WyI0Nzg5aG5GSjhFNTRsLW91RjRaN1V3IiwgImNvdW50cnkiLCAiVVMiXQ~WyIyaDR3N0FuaDFOOC15ZlpGc2FGVHRBIiwgImFkZHJlc3MiLCB7Il9zZCI6IFsiTXhKRDV5Vm9QQzFIQnhPRmVRa21TQ1E0dVJrYmNrellza1Z5RzVwMXZ5SSIsICJVYkxmVWlpdDJTOFhlX2pYbS15RHBHZXN0ZDNZOGJZczVGaVJpbVBtMHdvIiwgImhsQzJEYVBwT2t0eHZyeUFlN3U2YnBuM09IZ193Qk5heExiS3lPRDVMdkEiLCAia2NkLVJNaC1PaGFZS1FPZ2JaajhmNUppOXNLb2hyYnlhYzNSdXRqcHNNYyJdfV0~\"\n

The sd_jwt_sign() method (a controller-side sketch of invoking the endpoint follows this list):

  • Creates the list of claims that are selectively disclosable
  • Uses the non_sd_list compared against the list of JSON paths for all claims to create the list of JSON paths for selectively disclosable claims
  • Separates list slices if necessary
  • Sorts the sd_list so that the claims deepest in the structure are handled first
    • Since we will wrap the selectively disclosable claim keys, the JSON paths for nested structures do not work properly when the claim key is wrapped in an object
  • Uses the JSON paths in the sd_list to find each selectively disclosable claim and wrap it in the SDObj defined by the sd-jwt Python library and removes/replaces the original entry
  • For list items, the element itself is wrapped
  • For other objects, the dictionary key is wrapped
  • With this modified payload, the SDJWTIssuerACAPy.issue() method:
  • Checks if there are selectively disclosable claims at any level in the payload
  • Assembles the SD-JWT payload and creates the disclosures
  • Calls SDJWTIssuerACAPy._create_signed_jws(), which is redefined in order to use the ACA-Py jwt_sign method and which creates the JWT
  • Combines and returns the signed JWT with its disclosures and optional key binding JWT, as indicated in the specification
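For completeness, a hedged controller-side sketch of invoking the signing endpoint; the Admin API address is an assumption, and the body is an abbreviated version of the example input above:

import requests

ADMIN_URL = "http://localhost:8031"  # assumption: your Admin API address

body = {
    "did": "WpVJtxKVwGQdRpQP8iwJZy",  # an issuer DID held in the wallet
    "headers": {},
    "payload": {
        "sub": "user_42",
        "given_name": "John",
        "iss": "https://example.com/issuer",
        "iat": 1683000000,
        "exp": 1883000000,
    },
    "non_sd_list": ["given_name"],
}
resp = requests.post(f"{ADMIN_URL}/wallet/sd-jwt/sign", json=body)
sd_jwt = resp.json()  # the "~"-separated SD-JWT string with its disclosures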
"},{"location":"features/SelectiveDisclosureJWTs/#verifying-sd-jwts","title":"Verifying SD-JWTs","text":""},{"location":"features/SelectiveDisclosureJWTs/#example-input-to-walletsd-jwtverify-endpoint","title":"Example input to /wallet/sd-jwt/verify endpoint","text":"

Using the output from the /wallet/sd-jwt/sign example above, we have decided to reveal only two of the selectively disclosable claims (sub and updated_at), achieved by including only the disclosures for those claims. We have also included a key binding JWT following the disclosures.

{\n  \"sd_jwt\": \"eyJ0eXAiOiAiSldUIiwgImFsZyI6ICJFZERTQSIsICJraWQiOiAiZGlkOnNvdjpXcFZKdHhLVndHUWRScFFQOGl3Slp5I2tleS0xIn0.eyJfc2QiOiBbIkR0a21ha3NkZGtHRjFKeDBDY0kxdmxRTmZMcGFnQWZ1N3p4VnBGRWJXeXciLCAiSlJLb1E0QXVHaU1INWJIanNmNVV4YmJFeDh2YzFHcUtvX0l3TXE3Nl9xbyIsICJNTTh0TlVLNUstR1lWd0swX01kN0k4MzExTTgwVi13Z0hRYWZvRkoxS09JIiwgIlBaM1VDQmdadVRMMDJkV0pxSVY4elUtSWhnalJNX1NTS3dQdTk3MURmLTQiLCAiX294WGNuSW5Yai1SV3BMVHNISU5YaHFrRVAwODkwUFJjNDBISWE1NElJMCIsICJhdnRLVW5Sdnc1clV0TnZfUnAwUll1dUdkR0RzcnJPYWJfVjR1Y05RRWRvIiwgInByRXZJbzBseTVtNTVsRUpTQUdTVzMxWGdVTElOalo5ZkxiRG81U1pCX0UiXSwgImdpdmVuX25hbWUiOiAiSm9obiIsICJmYW1pbHlfbmFtZSI6ICJEb2UiLCAibmF0aW9uYWxpdGllcyI6IFt7Ii4uLiI6ICJPdU1wcEhpYzEySjYzWTBIY2Ffd1BVeDJCTGdUQVdZQjJpdXpMY3lvcU5JIn0sIHsiLi4uIjogIlIxczlaU3NYeVV0T2QyODdEYy1DTVYyMEdvREF3WUVHV3c4ZkVKd1BNMjAifSwgeyIuLi4iOiAid0lJbjdhQlNDVkFZcUF1Rks3Nmpra3FjVGFvb3YzcUhKbzU5WjdKWHpnUSJ9XSwgImlzcyI6ICJodHRwczovL2V4YW1wbGUuY29tL2lzc3VlciIsICJpYXQiOiAxNjgzMDAwMDAwLCAiZXhwIjogMTg4MzAwMDAwMCwgIl9zZF9hbGciOiAic2hhLTI1NiJ9.cIsuGTIPfpRs_Z49nZcn7L6NUgxQumMGQpu8K6rBtv-YRiFyySUgthQI8KZe1xKyn5Wc8zJnRcWbFki2Vzw6Cw~WyJ4dkRYMDBmalpmZXJpTmlQb2Q1MXFRIiwgInVwZGF0ZWRfYXQiLCAxNTcwMDAwMDAwXQ~WyJYOTlzM19MaXhCY29yX2hudFJFWmNnIiwgInN1YiIsICJ1c2VyXzQyIl0~eyJhbGciOiAiRWREU0EiLCAidHlwIjogImtiK2p3dCIsICJraWQiOiAiZGlkOnNvdjpXcFZKdHhLVndHUWRScFFQOGl3Slp5I2tleS0xIn0.eyJub25jZSI6ICIxMjM0NTY3ODkwIiwgImF1ZCI6ICJodHRwczovL2V4YW1wbGUuY29tL3ZlcmlmaWVyIiwgImlhdCI6IDE2ODgxNjA0ODN9.i55VeR7bNt7T8HWJcfj6jSLH3Q7vFk8N0t7Tb5FZHKmiHyLrg0IPAuK5uKr3_4SkjuGt1_iNl8Wr3atWBtXMDA\"\n}\n
"},{"location":"features/SelectiveDisclosureJWTs/#verify-output","title":"Verify Output","text":"

Note that attributes in the non_sd_list (given_name, family_name, and nationalities), as well as essential verification data (iss, iat, exp), are visible directly within the payload. The disclosures include only the values for the sub and updated_at claims, since those are the only selectively disclosable claims that the holder presented. The corresponding hashes for those disclosures appear in the payload[\"_sd\"] list.

{\n  \"headers\": {\n    \"typ\": \"JWT\",\n    \"alg\": \"EdDSA\",\n    \"kid\": \"did:sov:WpVJtxKVwGQdRpQP8iwJZy#key-1\"\n  },\n  \"payload\": {\n    \"_sd\": [\n      \"DtkmaksddkGF1Jx0CcI1vlQNfLpagAfu7zxVpFEbWyw\",\n      \"JRKoQ4AuGiMH5bHjsf5UxbbEx8vc1GqKo_IwMq76_qo\",\n      \"MM8tNUK5K-GYVwK0_Md7I8311M80V-wgHQafoFJ1KOI\",\n      \"PZ3UCBgZuTL02dWJqIV8zU-IhgjRM_SSKwPu971Df-4\",\n      \"_oxXcnInXj-RWpLTsHINXhqkEP0890PRc40HIa54II0\",\n      \"avtKUnRvw5rUtNv_Rp0RYuuGdGDsrrOab_V4ucNQEdo\",\n      \"prEvIo0ly5m55lEJSAGSW31XgULINjZ9fLbDo5SZB_E\"\n    ],\n    \"given_name\": \"John\",\n    \"family_name\": \"Doe\",\n    \"nationalities\": [\n      {\n        \"...\": \"OuMppHic12J63Y0Hca_wPUx2BLgTAWYB2iuzLcyoqNI\"\n      },\n      {\n        \"...\": \"R1s9ZSsXyUtOd287Dc-CMV20GoDAwYEGWw8fEJwPM20\"\n      },\n      {\n        \"...\": \"wIIn7aBSCVAYqAuFK76jkkqcTaoov3qHJo59Z7JXzgQ\"\n      }\n    ],\n    \"iss\": \"https://example.com/issuer\",\n    \"iat\": 1683000000,\n    \"exp\": 1883000000,\n    \"_sd_alg\": \"sha-256\"\n  },\n  \"valid\": true,\n  \"kid\": \"did:sov:WpVJtxKVwGQdRpQP8iwJZy#key-1\",\n  \"disclosures\": [\n    [\n      \"xvDX00fjZferiNiPod51qQ\",\n      \"updated_at\",\n      1570000000\n    ],\n    [\n      \"X99s3_LixBcor_hntREZcg\",\n      \"sub\",\n      \"user_42\"\n    ]\n  ]\n}\n

The sd_jwt_verify() method (a controller-side sketch follows this list):

  • Parses the SD-JWT presentation into its component parts: JWT, disclosures, and optional key binding
  • The JWT is parsed into its headers, payload, and signature
  • Creates a list of plaintext disclosures
  • Calls SDJWTVerifierACAPy._verify_sd_jwt, which is redefined in order to use the ACA-Py jwt_verify method, and which returns the verified JWT
  • If key binding is used, the key binding JWT is verified and checked against the expected audience and nonce values
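A matching controller-side sketch for verification, under the same Admin API address assumption; sd_jwt is the presentation string (JWT plus only the disclosures the holder chose to reveal, optionally followed by a key binding JWT):

import requests

ADMIN_URL = "http://localhost:8031"  # assumption: your Admin API address
sd_jwt = "eyJ0eXAiOiAiSldUIi..."  # placeholder: JWT ~ selected disclosures [~ KB-JWT]

resp = requests.post(f"{ADMIN_URL}/wallet/sd-jwt/verify", json={"sd_jwt": sd_jwt})
result = resp.json()
print(result["valid"], result["disclosures"])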
"},{"location":"features/SupportedRFCs/","title":"Aries AIP and RFCs Supported in Aries Cloud Agent Python","text":"

This document provides a summary of the adherence of ACA-Py to the Aries Interop Profiles, and an overview of the ACA-Py feature set. This document is manually updated and, as such, may not be up to date with the most recent release of ACA-Py or the repository main branch. Reminders (and PRs!) to update this page are welcome! If you have any questions, please contact us on the #aries channel on Hyperledger Discord or through an issue in this repo.

Last Update: 2024-05-01, Release 0.12.1

The checklist version of this document was created as a joint effort between Northern Block, Animo Solutions and the Ontario government, on behalf of the Ontario government.

"},{"location":"features/SupportedRFCs/#aip-support-and-interoperability","title":"AIP Support and Interoperability","text":"

See the Aries Agent Test Harness and the Aries Interoperability Status for daily interoperability test run results between ACA-Py and other Aries Frameworks and Agents.

AIP Version | Supported | Notes
--- | --- | ---
AIP 1.0 | | Fully supported.
AIP 2.0 | | Fully supported, with a couple of very minor exceptions noted below.

A summary of the Aries Interop Profiles and Aries RFCs supported in ACA-Py can be found later in this document.

"},{"location":"features/SupportedRFCs/#platform-support","title":"Platform Support","text":"Platform Supported Notes Server Kubernetes BC Gov has extensive experience running ACA-Py on Red Hat's OpenShift Kubernetes Distribution. Docker Official docker images are published to the GitHub container repository at ghcr.io/hyperledger/aries-cloudagent-python. Desktop Could be run as a local service on the computer iOS Android Browser"},{"location":"features/SupportedRFCs/#agent-types","title":"Agent Types","text":"Role Supported Notes Issuer Holder Verifier Mediator Service See the aries-mediator-service, a pre-configured, production ready Aries Mediator Service based on a released version of ACA-Py. Mediator Client Indy Transaction Author Indy Transaction Endorser Indy Endorser Service See the aries-endorser-service, a pre-configured, production ready Aries Endorser Service based on a released version of ACA-Py."},{"location":"features/SupportedRFCs/#credential-types","title":"Credential Types","text":"Credential Type Supported Notes Hyperledger AnonCreds Includes full issue VC, present proof, and revoke VC support. W3C Verifiable Credentials Data Model Supports JSON-LD Data Integrity Proof Credentials using the Ed25519Signature2018, BbsBlsSignature2020 and BbsBlsSignatureProof2020 signature suites.Supports the DIF Presentation Exchange data format for presentation requests and presentation submissions.Work currently underway to add support for Hyperledger AnonCreds in W3C VC JSON-LD Format"},{"location":"features/SupportedRFCs/#did-methods","title":"DID Methods","text":"Method Supported Notes \"unqualified\" Deprecated Pre-DID standard identifiers. Used either in a peer-to-peer context, or as an alternate form of a did:sov DID published on an Indy network. did:sov did:web Resolution only did:key did:peer Algorithms 2/3 and 4 Universal Resolver A plug in from SICPA is available that can be added to an ACA-Py installation to support a universal resolver capability, providing support for most DID methods in the W3C DID Method Registry."},{"location":"features/SupportedRFCs/#secure-storage-types","title":"Secure Storage Types","text":"Secure Storage Types Supported Notes Aries Askar Recommended - Aries Askar provides equivalent/evolved secure storage and cryptography support to the \"indy-wallet\" part of the Indy SDK. When using Askar (via the --wallet-type askar startup parameter), other functionality is handled by CredX (AnonCreds) and Indy VDR (Indy ledger interactions). Aries Askar-AnonCreds Recommended - When using Askar/AnonCreds (via the --wallet-type askar-anoncreds startup parameter), other functionality is handled by AnonCreds RS (AnonCreds) and Indy VDR (Indy ledger interactions).This wallet-type will eventually be the same as askar when we have fully integrated the AnonCreds RS library into ACA-Py. Indy SDK Deprecated To be removed in the next Major/Minor release of ACA-Py Full support for the features of the \"indy-wallet\" secure storage capabilities found in the Indy SDK.

New installations of ACA-Py should NOT use the Indy SDK. Existing deployments using the Indy SDK should transition to Aries Askar and related components as soon as possible.

"},{"location":"features/SupportedRFCs/#miscellaneous-features","title":"Miscellaneous Features","text":"Feature Supported Notes ACA-Py Plugins The ACA-Py Plugins repository contains a growing set of plugins that are maintained and (mostly) tested against new releases of ACA-Py. Multi use invitations Invitations using public did Invitations using peer dids supporting connection reuse Implicit pickup of messages in role of mediator Revocable AnonCreds Credentials Multi-Tenancy Documentation Multi-Tenant Management The Traction open source project from BC Gov is a layer on top of ACA-Py that enables the easy management of ACA-Py tenants, with an Administrative UI (\"The Innkeeper\") and a Tenant UI for using ACA-Py in a web UI (setting up, issuing, holding and verifying credentials) Connection-less (non OOB protocol / AIP 1.0) Only for issue credential and present proof Connection-less (OOB protocol / AIP 2.0) Only for present proof Signed Attachments Used for OOB Multi Indy ledger support (with automatic detection) Support added in the 0.7.3 Release. Persistence of mediated messages Plugins in the ACA-Py Plugins repository are available for persistent queue support using Redis and Kafka. Without persistent queue support, messages are stored in an in-memory queue and so are subject to loss in the case of a sudden termination of an ACA-Py process. The in-memory queue is properly handled in the case of a graceful shutdown of an ACA-Py process (e.g. processing of the queue completes and no new messages are accepted). Storage Import & Export Supported by directly interacting with the Aries Askar (e.g., no Admin API endpoint available for wallet import & export). Aries Askar support includes the ability to import storage exported from the Indy SDK's \"indy-wallet\" component. Documentation for migrating from Indy SDK storage to Askar can be found in the Indy SDK to Askar Migration Guide. SD-JWTs Signing and verifying SD-JWTs is supported"},{"location":"features/SupportedRFCs/#supported-rfcs","title":"Supported RFCs","text":""},{"location":"features/SupportedRFCs/#aip-10","title":"AIP 1.0","text":"

All RFCs listed in AIP 1.0 are fully supported in ACA-Py. The following table provides notes about the implementation of specific RFCs.

RFC | Supported | Notes
--- | --- | ---
0025-didcomm-transports | | ACA-Py currently supports HTTP and WebSockets for both inbound and outbound messaging. Transports are pluggable and an agent instance can use multiple inbound and outbound transports.
0160-connection-protocol | | The agent supports Connection/DID exchange initiated from both plaintext invitations and public DIDs that enable bypassing the invitation message.

"},{"location":"features/SupportedRFCs/#aip-20","title":"AIP 2.0","text":"

All RFCs listed in AIP 2.0 (including the sub-targets) are fully supported in ACA-Py EXCEPT as noted in the table below.

RFC | Supported | Notes
--- | --- | ---
Fully Supported | |

"},{"location":"features/SupportedRFCs/#other-supported-rfcs","title":"Other Supported RFCs","text":"

RFC | Supported | Notes
--- | --- | ---
0031-discover-features | | Rarely (never?) used, and in implementing the V2 version of the protocol, the V1 version was found to be incomplete and was updated as part of Release 0.7.3
0028-introduce | |
0509-action-menu | |

"},{"location":"features/UsingOpenAPI/","title":"Aries Cloud Agent-Python (ACA-Py) - OpenAPI Code Generation Considerations","text":"

ACA-Py provides an OpenAPI-documented REST interface for administering the agent's internal state and initiating communication with connected agents.

The running agent provides a Swagger User Interface that can be browsed and used to test various scenarios manually (see the Admin API Readme for details). However, it is often desirable to produce native language interfaces rather than coding Controllers using HTTP primitives. This is possible using several public code generation (codegen) tools. This page provides some suggestions based on experience with those tools when generating TypeScript wrappers; the information should also be useful to those generating other languages. Updates to this page based on experience are encouraged.

"},{"location":"features/UsingOpenAPI/#aca-py-openapi-raw-output-characteristics","title":"ACA-Py, OpenAPI Raw Output Characteristics","text":"

ACA-Py uses aiohttp_apispec tags in code to produce the OpenAPI spec file at runtime dependent on what features have been loaded. How these tags are created is documented in the API Standard Behavior section of the Admin API Readme. The OpenAPI spec is available in raw, unformatted form from a running ACA-Py instance using a route of http://<acapy host and port>/api/docs/swagger.json or from the browser Swagger User Interface directly.

The ACA-Py Admin API evolves across releases. To track these changes and ensure conformance with the OpenAPI specification, we provide a tool located at scripts/generate-open-api-spec. This tool starts ACA-Py, retrieves the swagger.json file, and runs codegen tools to generate specifications in both Swagger and OpenAPI formats with json language output. The output of this tool enables comparison with the checked-in open-api/swagger.json and open-api/openapi.json, and also serves as a useful resource for identifying any non-conformance to the OpenAPI specification. At the moment, validation is turned off via the open-api/openAPIJSON.config file, so warning messages are printed for non-conformance, but the json is still output. Most of the warnings reported by generate-open-api-spec relate to missing operationId fields which results in manufactured method names being created by codegen tools. At the moment, aiohttp_apispec does not support adding operationId annotations via tags.

The generate-open-api-spec tool was initially created to help identify issues with method parameters not being sorted, resulting in somewhat random ordering each time a codegen operation was performed. This is relevant for languages which do not support named parameters, such as Javascript. It is recommended that generate-open-api-spec be run prior to each release, and the resulting open-api/openapi.json file checked in, to allow tracking of API changes over time. At the moment, this process is not automated as part of the release pipeline.

"},{"location":"features/UsingOpenAPI/#generating-language-wrappers-for-aca-py","title":"Generating Language Wrappers for ACA-Py","text":"

There are inevitably differences around best practice for method naming based on coding language and organization standards.

Best practice for generating ACA-Py language wrappers is to obtain the raw OpenAPI file from a configured/running ACA-Py instance and then post-process it with a merge utility to match routes and insert desired operationId fields. This allows the greatest flexibility in conforming to external naming requirements.
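As a sketch of that post-processing step, here is a minimal merge utility; the Admin API address, the output filename, and the operationId values are all assumptions to adapt to your own naming requirements:

import json
import requests

ADMIN_URL = "http://localhost:8031"  # assumption: a configured/running ACA-Py

# Fetch the raw OpenAPI spec from the running instance
spec = requests.get(f"{ADMIN_URL}/api/docs/swagger.json").json()

# Hypothetical mapping of (method, path) to the operationId you want generated
operation_ids = {
    ("post", "/out-of-band/create-invitation"): "createInvitation",
    ("get", "/status"): "getStatus",
}

# Insert operationId fields wherever a mapping entry matches a route
for path, methods in spec.get("paths", {}).items():
    for method, operation in methods.items():
        op_id = operation_ids.get((method.lower(), path))
        if op_id:
            operation["operationId"] = op_id

with open("openapi-with-ids.json", "w") as f:
    json.dump(spec, f, indent=2)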

Two major open-source code generation tools are Swagger and OpenAPI Tools. Which of these to use can be very dependent on language support required and preference for the style of code generated.

The OpenAPI Tools generator was found to offer some nice features when generating TypeScript. It creates separate files for each class and allows the use of a .openapi-generator-ignore file to override generation if there is a spec file issue that needs to be maintained manually.

If generating code for languages that do not support named parameters, it is recommended to specify the useSingleRequestParameter or equivalent in your code generator of choice. The reason is that, as mentioned previously, there have been instances where parameters were not sorted when output into the raw ACA-Py API spec file, and this approach helps remove that risk.

Another suggestion for code generation is to keep modelPropertyNaming set to original when generating code. Although it is tempting to enable marshalling into standard naming formats such as camelCase, the reality is that the models represent what is sent on the wire and documented in the Aries Protocol RFCs. It has proven handy to be able to see code references correspond directly with protocol RFCs when debugging. It will also correspond directly with what the model shows in the ACA-Py Swagger UI in a browser if you need to try something out manually before coding. One final point is that, on occasion, the code generation tools have been found not to get the marshalling correct in all circumstances when changing the model name format.

"},{"location":"features/UsingOpenAPI/#existing-language-wrappers-for-aca-py","title":"Existing Language Wrappers for ACA-Py","text":""},{"location":"features/UsingOpenAPI/#python","title":"Python","text":"
  • Aries Cloud Controller Python (GitHub / didx-xyz)
  • Aries Cloud Controller (PyPi)
  • Traction (GitHub / bcgov)
  • acapy-client (GitHub / Indicio-tech)
"},{"location":"features/UsingOpenAPI/#go","title":"Go","text":"
  • go-acapy-client (GitHub / Idej)
"},{"location":"features/UsingOpenAPI/#java","title":"Java","text":"
  • ACA-Py Java Client Library (GitHub / hyperledger-labs)
"},{"location":"features/devcontainer/","title":"ACA-Py Development with Dev Container","text":"

The following guide will get you up and running and developing/debugging ACA-Py as quickly as possible. We provide a devcontainer and will use VS Code to illustrate.

By no means is ACA-Py limited to these tools; they are merely examples.

For information on running demos and tests using provided shell scripts, see DevReadMe readme.

"},{"location":"features/devcontainer/#caveats","title":"Caveats","text":"

The primary use case for this devcontainer is for developing, debugging and unit testing (pytest) the aries_cloudagent source code.

There are limitations to running this devcontainer: for example, all networking is contained within the container. The container uses docker-in-docker, which allows running demos, building docker images, and running docker compose, all within the container.

"},{"location":"features/devcontainer/#files","title":"Files","text":"

The .devcontainer folder contains the devcontainer.json file which defines this container. We are using a Dockerfile and post-install.sh to build and configure the container run image. The Dockerfile is simple, but is in place to simplify image enhancements (e.g. adding poetry to the image). The post-install.sh will install some additional development libraries (including for BDD support).

"},{"location":"features/devcontainer/#devcontainer","title":"Devcontainer","text":"

What are Development Containers?

A Development Container (or Dev Container for short) allows you to use a container as a full-featured development environment. It can be used to run an application, to separate tools, libraries, or runtimes needed for working with a codebase, and to aid in continuous integration and testing. Dev containers can be run locally or remotely, in a private or public cloud.

see https://containers.dev.

In this guide, we will use Docker and Visual Studio Code with the Dev Containers Extension installed, please set your machine up with those. As of writing, we used the following:

  • Docker Version: 20.10.24
  • VS Code Version: 1.79.0
  • Dev Container Extension Version: v0.295.0
"},{"location":"features/devcontainer/#open-aca-py-in-the-devcontainer","title":"Open ACA-Py in the devcontainer","text":"

To open ACA-Py in a devcontainer, we open the root of this repository. We can open in 2 ways:

  1. Open Visual Studio Code, and use the Command Palette and use Dev Containers: Open Folder in Container...
  2. Open Visual Studio Code and File|Open Folder..., you should be prompted to Reopen in Container.

NOTE follow any prompts to install Python Extension or reload window for Pylance when first building the container.

ADDITIONAL NOTE we advise that after each time you rebuild the container that you also perform: Developer: Reload Window as some extensions seem to require this in order to work as expected.

"},{"location":"features/devcontainer/#devcontainerjson","title":"devcontainer.json","text":"

When the .devcontainer/devcontainer.json is opened, you will see it building... it is building a Python 3.9 image (bash shell) and loading it with all the ACA-Py requirements (and black). We also load a few Visual Studio Code settings (for running pytests and formatting with Flake and Black).

"},{"location":"features/devcontainer/#poetry","title":"Poetry","text":"

The Python libraries / dependencies are installed using poetry. For the devcontainer, we DO NOT use virtual environments. This means you will not see or need venv prompts in the terminals, and you will not need to run tasks through poetry (i.e., poetry run black .). If you need to add new dependencies, you will need to add the dependency via poetry AND you should rebuild your devcontainer.

In VS Code, open a Terminal, you should be able to run the following commands:

python -m aries_cloudagent -v\ncd aries_cloudagent\nruff check .\nblack . --check\npoetry --version\n

The first command should show you that the aries_cloudagent module is loaded (ACA-Py). The others are examples of the code quality checks that ACA-Py runs on commits (if you have pre-commit installed) and on Pull Requests.

When running ruff check . in the terminal, you may see error: Failed to initialize cache at /.ruff_cache: Permission denied (os error 13) - that's ok. If there are actual ruff errors, you should see something like:

error: Failed to initialize cache at /.ruff_cache: Permission denied (os error 13)\nadmin/base_server.py:7:7: D101 Missing docstring in public class\nFound 1 error.\n
"},{"location":"features/devcontainer/#extensions","title":"extensions","text":"

We have added Black formatter and Ruff extensions. Although we have added launch settings for both ruff and black, you can also use the extension commands from the command palette.

  • Ruff: Format Document
  • Ruff: Fix all auto-fixable problems

More importantly, these extensions now run on document save, so files will be formatted and checked as you work. We advise that after each time you rebuild the container you also perform Developer: Reload Window to ensure the extensions are loaded correctly.

"},{"location":"features/devcontainer/#running-docker-in-docker-demos","title":"Running docker-in-docker demos","text":"

Start by running a von-network inside your dev container, or connect to a hosted ledger (in which case you will need to adjust the ledger configuration).

git clone https://github.com/bcgov/von-network\ncd von-network\n./manage build\n./manage start\ncd ..\n

If you want revocation, then start up a tails server in your dev container, or connect to a hosted tails server (once again, you will need to adjust the configuration).

git clone https://github.com/bcgov/indy-tails-server.git\ncd indy-tails-server/docker\n./manage build\n./manage start\ncd ../..\n
# open a terminal in VS Code...\ncd demo\n./run_demo faber\n# open a second terminal in VS Code...\ncd demo\n./run_demo alice\n# follow the script...\n
"},{"location":"features/devcontainer/#further-reading-and-links","title":"Further Reading and Links","text":"
  • Development Containers (devcontainers): https://containers.dev
  • Visual Studio Code: https://code.visualstudio.com
  • Dev Containers Extension: marketplace.visualstudio.com
  • Docker: https://www.docker.com
  • Docker Compose: https://docs.docker.com/compose/
"},{"location":"features/devcontainer/#aca-py-debugging","title":"ACA-Py Debugging","text":"

To better illustrate debugging pytests and ACA-Py runtime code, let's add some run/debug configurations to VS Code. If you have your own launch.json and settings.json, please cut and paste what you want/need.

cp -R .vscode-sample .vscode\n

This will add a launch.json, settings.json and multiple ACA-Py configuration files for developing with different scenarios.

  • Faber: Simple agent to simulate an issuer
  • Alice: Simple agent to simulate a holder
  • Endorser: Simulates the endorser agent in an endorsement required environment
  • Author: Simulates an author agent in an endorsement required environment
  • Multitenant Admin: Includes settings for a multitenant/wallet scenario

Multiple agents are included to demonstrate launching several agents in one debug session. Any of the config files and the launch file can be changed and customized to meet your needs. They are all set up to run on different ports so they don't interfere with each other. Running the debug session from inside the dev container allows you to contact other services such as a local ledger or tails server using localhost, while still being able to access the swagger admin api through your browser.

For all the agents, if you want to use a ledger (von-network) other than localhost, you will need to change the genesis-url config. If you don't want to support revocation, you need to remove or comment out the tails-server-base-url config; if you want to use a non-localhost tails server, you will need to change the url.

"},{"location":"features/devcontainer/#faber","title":"Faber","text":"
  • admin api url = http://localhost:9041
  • study the demo to understand the steps to have the agent in the correct state. Make your public dids and schemas, cred-defs, etc.
"},{"location":"features/devcontainer/#alice","title":"Alice","text":"
  • admin api url = http://localhost:9011
  • study the demo to get a connection with faber
"},{"location":"features/devcontainer/#endorser","title":"Endorser","text":"
  • admin api url = http://localhost:9031
  • This config is useful if you want to develop in an environment that requires endorsement. You can run the demo with ./run_demo faber --endorser-role author to see all the steps to become an endorser.
"},{"location":"features/devcontainer/#author","title":"Author","text":"
  • admin api url = http://localhost:9021
  • This config is useful if you want to develop in an environment that requires endorsement. You can run the demo with ./run_demo faber --endorser-role author to see all the steps to become an author. You need to uncomment the configurations for automating the connection to the endorser.
"},{"location":"features/devcontainer/#multitenant-admin","title":"Multitenant-Admin","text":"
  • admin api url = http://localhost:9051
  • This is for a multitenant environment where you can create multiple tenants with subwallets with one agent. See Multitenancy
"},{"location":"features/devcontainer/#try-running-faber-and-alice-at-the-same-time-and-add-break-points-and-recreate-the-demo","title":"Try running Faber and Alice at the same time and add break points and recreate the demo","text":"

To run your ACA-Py code in debug mode, go to the Run and Debug view, select the agent(s) you want to start and click Start Debugging (F5).

This will start your source code as a running ACA-Py instance; all configuration is in the *.yml files. Note that we are not using a database and are joining a local VON Network (by default, http://localhost:9000). You could change this to another ledger such as http://test.bcovrin.vonx.io. These are purposefully very simple configurations.

For example, open aries_cloudagent/admin/server.py and set a breakpoint in async def status_handler(self, request: web.BaseRequest):, then call GET /status in the Admin Console and hit your breakpoint.

"},{"location":"features/devcontainer/#pytest","title":"Pytest","text":"

Pytest is installed and almost ready; however, we must build the test list. In the Command Palette, Test: Refresh Tests will scan and find the tests.

See Python Testing for more details, and Test Commands for usage.

WARNING: our pytests include coverage, which will prevent the debugger from working. One way around this would be to have a .vscode/settings.json that says not to use coverage (see above). This will allow you to set breakpoints in the pytest and code under test and use commands such as Test: Debug Tests in Current File to start debugging.

WARNING: the project configuration found in pyproject.toml includes performing ruff checks when we run pytest, and including ruff does not play nicely with the Testing view. In order to have our pytests discoverable AND available in the Testing view, we create a .pytest.ini when we build the devcontainer. This file will not be committed to the repo, nor does it impact ./scripts/run_tests, but it will affect manually running pytest commands locally outside of the devcontainer. Just be aware that the file will remain on your file system after you shut down the devcontainer.

"},{"location":"features/devcontainer/#next-steps","title":"Next Steps","text":"

At this point, you now have a development environment where you can add pytests, add ACA-Py code and run and debug it all. Be aware there are limitations with devcontainer and other docker networks. You may need to adjust other docker-compose files not to start their own networks, and you may need to reference containers using host.docker.internal. This isn't a panacea but should get you going in the right direction and provide you with some development tools.

"},{"location":"gettingStarted/","title":"Becoming an Indy/Aries Developer","text":"

This guide is to get you from (pretty much) zero to developing code for issuing (and verifying) credentials with your own Aries agent. On the way, you'll look at Hyperledger Indy and how it works, find out about the architecture and components of an Aries agent and its underlying messaging protocols. Scan the list of topics below and jump in as soon as you hit a topic you don't know.

Note that in the guidance here, we include not only the links you should look at, but also recommendations about material you should not look at, even though you might naturally gravitate to it. That's because that material is out of date and will take you down some unnecessary rabbit holes. Keep your eyes on the goal - developing with Aries to interact with other agents to (amongst other things) connect, issue, hold, present and verify verifiable credentials.

  • I've heard of Indy, but I don't know the basics
  • I know about Indy, but what is Aries?
  • Demos - Business Level
  • Aries Agents in Context: The Big Picture
  • Aries Internals - Deployment Components
  • An overview of Aries messaging
  • Demos - Aries Developer
  • Establishing a connection between Aries Agents
  • Issuing an AnonCreds credential: From Issuer to Holder/Prover
  • Presenting an Indy credential: From Holder/Prover to Verifier
  • Next steps: Creating your own Aries Agent
  • What should I work on? Options for Aries/Indy Developers
  • Deeper Dive: DIDComm Messages
  • Deeper Dive: DIDComm Message Routing and Encryption
  • Deeper Dive: Routing Example
  • To Do: Deeper Dive: Running and Connecting to an Indy Network
  • Steps and APIs to support credential revocation with Aries agent
  • Deeper Dive: Aca-Py Plug-Ins

Want to help with this guide? Please add issues or submit a pull request to improve the document. Point out things that are missing, things to improve and especially things that are wrong.

"},{"location":"gettingStarted/AgentConnections/","title":"Establishing a connection between Aries Agents","text":"

Use an ACA-Py issuer/verifier to establish a connection with an Aries mobile wallet. Run the Traction AnonCreds Workshop. Get your own (temporary -- it will be gone in a few weeks!) Aries Cloud Agent Python-based issuer/verifier agent. Connect to the wallet on your mobile phone, issue a credential and then present it back. Lots to learn, without ever leaving your browser!

"},{"location":"gettingStarted/AriesAgentArchitecture/","title":"Aries Cloud Agent Internals: Agent and Controller","text":"

This section talks in particular about the architecture of this Aries cloud agent implementation. An instance of an Aries agent is actually made up of two parts - the agent itself and a controller.

The agent handles all of the core Aries functionality such as interacting with other agents, managing secure storage, sending event notifications to, and receiving directions from, the controller. The controller provides the business logic that defines how that particular agent instance behaves--how to respond to events in the agent, and when to trigger the agent to initiate events. The controller might be a web or native user interface for a person or it might be coded business rules driven by an enterprise system.

Between the two is a simple interface. The agent sends event notifications to the controller, and the controller sends administrative messages to the agent. The controller registers a webhook with the agent, and event notifications are delivered as HTTP callbacks; the agent, in turn, exposes a REST API to the controller for all of the administrative messages it is configured to handle. Each of the DIDComm protocols supported by the agent adds a set of administrative messages for the controller to use in responding to events. The Aries cloud agent includes an OpenAPI (aka Swagger) user interface for a developer to use to explore the API for a specific agent.
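To make that interface concrete, here is a minimal sketch of a controller-side webhook receiver using aiohttp, assuming the agent was started with a webhook-url pointing at this server; the port and the topic-based route shape are assumptions following the common callback pattern:

# Minimal controller sketch: receive agent event notifications as HTTP callbacks
from aiohttp import web


async def handle_topic(request: web.Request):
    topic = request.match_info["topic"]
    body = await request.json()
    print(f"event on topic {topic}: {body}")
    return web.Response(status=200)


app = web.Application()
app.add_routes([web.post("/webhooks/topic/{topic}/", handle_topic)])

if __name__ == "__main__":
    web.run_app(app, port=8022)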

As such, the agent is just a configured dependency in an Aries cloud agent deployment. Thus, the vast majority of Aries developers will focus on building controllers (business logic) and perhaps some custom plugins (protocols, as we'll discuss soon) for the agent. Only a relatively small group of Aries cloud agent maintainers will focus on adding and maintaining the agent dependency.

Want more details about the agent and controller internals? Take a look at the Aries cloud agent deployment model document.

Back to the Aries Developer - Getting Started Guide.

"},{"location":"gettingStarted/AriesBasics/","title":"What is Aries?","text":"

Hyperledger Aries provides a shared, reusable, interoperable tool kit designed for initiatives and solutions focused on creating, transmitting and storing verifiable digital credentials. It is infrastructure for blockchain-rooted, peer-to-peer interactions. It includes a shared cryptographic wallet for blockchain clients as well as a communications protocol for allowing off-ledger interaction between those clients.

A Hyperledger Aries agent (such as the one in this repository):

  • enables establishing connections with other DIDComm-based agents (using DIDComm encryption envelopes),
  • exchanges messages between connected agents to execute message protocols (using DIDComm protocols)
  • sends notifications about protocol events to a controller, and
  • exposes an API for responses from the controller with direction in handling protocol events.

The concepts and features that make up the Aries project are documented in the aries-rfcs - but don't dive in there yet! We'll get to the features and concepts to be found there with a guided tour of the key RFCs. The Aries Working Group meets weekly to expand the design and components of Aries.

The Aries Cloud Agent Python currently only supports Hyperledger Indy-based verifiable credentials and public ledger. Longer term (as we'll see later in this guide) protocols will be extended or added to support other verifiable credential implementations and public ledgers.

Back to the Aries Developer - Getting Started Guide.

"},{"location":"gettingStarted/AriesBigPicture/","title":"Aries Agents in context: The Big Picture","text":"

Aries agents can be used in a lot of places. This classic Indy Architecture picture shows five agents - the four around the outside (on a phone, a tablet, a laptop and an enterprise server) are referred to as \"edge agents\", and many cloud agents in the blue circle.

The agents in the picture share many attributes:

  • They have some sort of storage for keys and other data related to their role as an agent
  • They interact with other agents using secure, peer-to-peer messaging protocols
  • They have some associated mechanism to provide \"business rules\" to control the behavior of the agent
  • That is often a person for phone, tablet, laptop, etc. based agents
  • That is often backend enterprise systems for enterprise agents
  • Business rules for cloud agents are often about the routing of messages to and from edge agents

While there can be many other agent setups, the picture above shows the most common ones - edge agents for people, edge agents for organizations and cloud agents for routing messages (although cloud agents could be edge agents. Sigh...). A significant emerging use case missing from that picture are agents embedded within/associated with IoT devices. In the common IoT case, IoT device agents are just variants of other edge agents, connected to the rest of the ecosystem through a cloud agent. All the same principles apply.

Misleading in the picture is that (almost) all agents connect directly to the Ledger network. In this picture it's the Sovrin ledger, but that could be any Indy network (e.g. a set of nodes running indy-node software) and, in future, ledgers from other providers. That implies most agents embed the ledger SDK (e.g. indy-sdk) and make calls to the ledger SDK to interact with the ledger and other SDK controlled resources (e.g. secure storage). Thus, unlike what is implied in the picture, edge agents (commonly) do not call a cloud agent to interact with the ledger - they do it directly. Super small IoT devices are an instance of an exception to that - lacking compute/storage resources and/or connectivity, they might communicate with a cloud agent that would communicate with the ledger.

While current Aries agents currently only support Indy-based ledgers, the intention is to add support for other ledgers.

The (most common) purpose of cloud agents is to enable secure and privacy preserving routing of messages between edge agents. Rather than messages going directly from edge agent to edge agent (which is often impossible - for example sending to a mobile agent), messages sent from edge agent to edge agent are routed through a sequence of cloud agents. Some of those cloud agents might be controlled by the sender, some by the receiver and others might be gateways owned by agent vendors (called \"Agencies\"). In all cases, an edge agent tells routing agents \"here's how to send messages to me\", so a routing agent sending a message only has to know how to send a peer-to-peer message. While quite complicated, the protocols used by the agents largely take care of this complexity, and most developers don't have to know much about it.

Note the many caveats in this section - \"most common\", \"commonly\", etc. There are many small building blocks available in Aries and underlying components that can be combined in infinite ways. We recommend not worrying about the alternate use cases for now. Focus on understanding the common use cases while remembering that other configurations are possible.

We also recommend not digging into all the layers described here. Just as you don't have to know how TCP/IP works to write a web app, you don't need to know how indy-node or indy-sdk work to be able to build your first Aries-based application. Later in this guide we'll cover the starting point you do need to know.

Back to the Aries Developer - Getting Started Guide.

"},{"location":"gettingStarted/AriesDeveloperDemos/","title":"Developer Demos and Samples of Aries Agent","text":"

Here are some demos that developers can use to get up to speed on Aries. You don't have to be a developer to use these. If you can use docker and JSON, then that's enough to give these a try.

"},{"location":"gettingStarted/AriesDeveloperDemos/#open-api-demo","title":"Open API demo","text":"

This demo uses agents (and an Indy ledger), but doesn't implement a controller at all. Instead it uses the OpenAPI (aka Swagger) user interface to let you be the controller to connect agents, issue a credential and then present a proof of that credential.

Collaborating Agents OpenAPI Demo

"},{"location":"gettingStarted/AriesDeveloperDemos/#python-controller-demo","title":"Python Controller demo","text":"

Run this demo to see a couple of simple Python controller implementations for Alice and Faber. Like the previous demo, this shows the agents connecting, Faber issuing a credential to Alice and then requesting a proof based on the credential. Running the demo is simple, but there's a lot for a developer to learn from the code.

Python-based Alice/Faber Demo

"},{"location":"gettingStarted/AriesDeveloperDemos/#mobile-app-and-web-sample-bc-gov-showcase","title":"Mobile App and Web Sample - BC Gov Showcase","text":"

Try out the BC Gov Showcase to download a production Wallet for holding Verifiable Credentials, and then use your new wallet to get and present credentials in some sample scenarios. The end-to-end verifiable credential experience in 30 minutes or less.

"},{"location":"gettingStarted/AriesDeveloperDemos/#indicio-developer-demo","title":"Indicio Developer Demo","text":"

Minimal Aca-Py demo that can be used by developers to isolate and test features:

  • Minimal Setup (everything runs in containers)
  • Quickly reproduce an issue or demonstrate a feature by writing a single simple script or pytest test.

Indicio Aca-Py Minimal Example

"},{"location":"gettingStarted/AriesMessaging/","title":"An overview of Aries messaging","text":"

Aries Agents communicate with each other via a message mechanism called DIDComm (DID Communication). DIDComm enables secure, asynchronous, end-to-end encrypted messaging between agents, with messages (usually) routed through some configuration of intermediary agents. Aries agents use (an early instance of) the did:peer DID method, which uses DIDs that are not published to a public ledger, but only shared privately between the communicating parties - usually just two agents.

Given the underlying secure messaging layer (routing and encryption covered later in the \"Deeper Dive\" sections), DIDComm protocols define standard sets of messages to accomplish a task. For example:

  • The \"establish connection\" protocol enables two agents to establish a connection through a series of messages - an invitation, a connection request and a connection response.
  • The \"issue credential\" protocol enables an agent to issue a credential to another agent.
  • The \"present proof\" protocol enables an agent to request and receive a proof from another agent.

Each protocol has a specification that defines the protocol's messages, one or more roles for the different participants, and a state machine that defines the state transitions triggered by the messages. For example, in the connection protocol, the messages are \"invitation\", \"connectionRequest\" and \"connectionResponse\", the roles are \"inviter\" and \"invitee\", and the states are \"invited\", \"requested\" and \"connected\". Each participant in an instance of a protocol tracks the state based on the messages they've seen.
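
As a rough illustration only - not ACA-Py's actual implementation - the inviter's side of such a state machine can be modeled as a simple transition table in Python:

# Hypothetical sketch of the inviter's state transitions in the\n# \"establish connection\" protocol; real implementations track more detail.\nTRANSITIONS = {\n    (\"start\", \"send_invitation\"): \"invited\",\n    (\"invited\", \"receive_request\"): \"requested\",\n    (\"requested\", \"send_response\"): \"connected\",\n}\n\ndef next_state(state: str, event: str) -> str:\n    # Raise on any message that is illegal in the current state\n    try:\n        return TRANSITIONS[(state, event)]\n    except KeyError:\n        raise ValueError(f\"event {event!r} not allowed in state {state!r}\")\n\nstate = \"start\"\nfor event in (\"send_invitation\", \"receive_request\", \"send_response\"):\n    state = next_state(state, event)\nprint(state)  # -> connected\n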

Code for protocols is implemented as externalized modules from the core agent code so that they can be included (or not) in an agent deployment. The protocol code must include the definition of a state object for the protocol, handlers for the protocol messages, and the events and administrative messages that are available to the controller to inject business logic into the running of the protocol. Each administrative message becomes part of the REST API exposed by the agent instance.

Developers building Aries agents for a particular use case will generally focus on building controllers. They must understand the protocols that they are going to need, including the events the controller will receive, and the protocol's administrative messages exposed via the REST API. From time to time, such Aries agent developers might need to implement their own protocols.
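
As a minimal sketch of what a controller looks like (in Python; the admin port, webhook port and paths are assumptions - check your agent's startup settings and OpenAPI page):

# Minimal controller sketch; ports and paths here are assumptions.\nimport requests\nfrom flask import Flask, request\n\nADMIN_URL = \"http://localhost:8031\"  # assumed ACA-Py admin API endpoint\napp = Flask(__name__)\n\n@app.route(\"/webhooks/topic/<topic>/\", methods=[\"POST\"])\ndef handle_webhook(topic):\n    # ACA-Py posts protocol events (connections, credentials, proofs) here\n    event = request.get_json()\n    print(f\"webhook topic={topic} state={event.get('state')}\")\n    return \"\", 200\n\ndef list_connections():\n    # One of the agent's administrative endpoints\n    return requests.get(f\"{ADMIN_URL}/connections\").json()\n\nif __name__ == \"__main__\":\n    # Start ACA-Py with --webhook-url http://<host>:8022/webhooks\n    app.run(port=8022)\n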

Back to the Aries Developer - Getting Started Guide.

"},{"location":"gettingStarted/AriesRoutingExample/","title":"Aries Routing - an example","text":"

In this section, we'll walk through an example of complex routing in Aries, outlining some of the possibilities that can be implemented.

We'll start with the Alice and Bob example from the Cross Domain Messaging Aries RFC.

What are the DIDs involved, what's in their DIDDocs, and what communications are happening between the agents as the connections are made?

"},{"location":"gettingStarted/AriesRoutingExample/#the-scenario","title":"The Scenario","text":"

Bob and Alice want to establish a connection so that they can communicate. Bob uses an Agency endpoint (https://agents-r-us.ca), labelled as 9, and has an agent used for routing, labelled as 3. We'll also focus on Bob's messages from his main iPhone, labelled as 4. We'll ignore Bob's other agents (5 and 6) and we won't worry about Alice's configuration (agents 1, 2 and 8). While the process below is all about Bob, Alice and her agents are doing the same interactions within her domain.

"},{"location":"gettingStarted/AriesRoutingExample/#all-the-dids","title":"All the DIDs","text":"

A DID and DIDDoc are generated by each participant in each relationship. For Bob's agents (iPhone and Routing), that includes:

  • Bob and Alice
  • Bob and his Routing Agent
  • Bob and Agency
  • Bob's Routing Agent and Agency

That's a lot more than just the Bob and Alice relationship we usually think about!

"},{"location":"gettingStarted/AriesRoutingExample/#diddoc-data","title":"DIDDoc Data","text":"

From a routing perspective the important information in the DIDDoc is the following (as defined in the DIDDoc Conventions Aries RFC):

  • The public keys for agents referenced in the routing
  • The services of type did-communication, including:
  • the one serviceEndpoint
  • the recipientKeys array of referenced keys for the ultimate target(s) of the message
  • the routingKeys array of referenced keys for the mediators

Let's look at the did-communication service data in the DIDDocs generated by Bob's iPhone and Routing agents, listed above:

  • Bob and Alice:
  • The serviceEndpoint that Bob tells Alice about is the endpoint for the Agency.

    • We'll use the Agency's public DID for the endpoint. That way the Agency can rotate the keys for the endpoint without all of its clients having to update every DIDDoc with the new key.
  • The recipientKeys entry is a key reference for Bob's iPhone specifically for Alice.

  • The routingKeys entry is a reference to the public key for the Routing Agent.

  • Bob and his Routing Agent:

  • The serviceEndpoint is empty because Bob's iPhone has no endpoint. See the note below for more on this.
  • The recipientKeys entry is a key reference for Bob's iPhone specifically for the Routing Agent.
  • The routingKeys array is empty.

  • Bob and Agency:

  • The serviceEndpoint is the endpoint for Bob's Routing Agent.
  • The recipientKeys entry is a key reference for Bob's iPhone specifically for the Agency.
  • The routingKeys array has a single entry: the key reference for the Routing Agent key.

  • Bob's Routing Agent and Agency:

  • The serviceEndpoint is the endpoint for Bob's Routing Agent.
  • The recipientKeys entry is a key reference for Bob's Routing Agent specifically for the Agency.
  • The routingKeys array is empty.

The null serviceEndpoint for Bob's iPhone is worth a comment. Mobile apps work by sending requests to servers, but cannot be accessed directly from a server. A DIDComm mechanism (Transports Return Route) enables a server to send messages to a Mobile agent by putting the messages into the response to a request from the mobile agent. While not formalized in an Aries RFC (yet), cloud agents can use mobile platforms' (Apple and Google) notification mechanisms to trigger a user interface event.
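
To make the above concrete, here is a hypothetical did-communication service entry for the \"Bob and Alice\" DIDDoc, expressed as a Python dict (every identifier is invented for illustration):

# Hypothetical did-communication service entry for the Bob-and-Alice DIDDoc;\n# every identifier below is invented for illustration.\nbob_to_alice_service = {\n    \"id\": \"did:example:bob-for-alice;did-communication\",\n    \"type\": \"did-communication\",\n    \"serviceEndpoint\": \"did:example:agents-r-us\",  # the Agency's public DID\n    \"recipientKeys\": [\"did:example:bob-for-alice#iphone-key-1\"],  # Bob's iPhone key for Alice\n    \"routingKeys\": [\"did:example:bob-routing#key-1\"],  # Bob's Routing Agent key\n}\n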

"},{"location":"gettingStarted/AriesRoutingExample/#preparing-bobs-diddoc-for-alice","title":"Preparing Bob's DIDDoc for Alice","text":"

Given that background, let's go through the sequence of events and messages that occur in building a DIDDoc for Bob's edge agent to send to Alice's edge agent. We'll start the sequence with all of the Agents in place as the bootstrapping of the Agency, Routing Agent and Bob's iPhone is trickier than we need to go through here. We'll call that an \"exercise left for the reader\".

We'll start the process with Alice sending an out of band connection invitation message to Bob, e.g. through a QR code or a link in an email. Here's one possible sequence for creating the DIDDoc. Note that there are other ways this could be done:

  • Bob's iPhone agent generates a new DID for Alice and prepares, and partially completes, a DIDDoc
  • Bob messages the Routing Agent to send the newly created DID and to get a new public key for the Alice relationship.
  • The Routing Agent records the DID for Alice and the keypair to be used for messages from Alice.
  • The Routing Agent sends the DID to the Agency to let the Agency know that messages for the new DID are to go to the Routing Agent.
  • The Routing Agent sends the data to Bob's iPhone agent.
  • Bob's iPhone agent fills in the rest of the DIDDoc:
  • the public key for the Routing Agent for the Alice relationship
  • the did-communication service endpoint is set to the Agency public DID and
  • the routing keys array with the values of the Agency public DID key reference and the Routing Agent key reference

Note: Instead of using the DID Bob created, the Agency and Routing Agent might use the public key used to encrypt the messages for their internal routing table look up for where to send a message. In that case, Bob and the Routing Agent share the public key instead of the DID with their respective upstream routers.

With the DIDDoc ready, Bob uses the path provided in the invitation to send a connection-request message to Alice with the new DID and DIDDoc. Alice now knows how to get any DIDComm message to Bob in a secure, end-to-end encrypted manner. Subsequently, when Alice sends messages to Bob's agent, she uses the information in the DIDDoc to securely send the message to the Agency endpoint; from there it is sent through to the Routing Agent and on to Bob's iPhone agent for processing. Now Bob has the information he needs to send any DIDComm message to Alice in a secure, end-to-end encrypted manner.

At this time, there are no specific DIDComm protocols for the \"set up the routing\" messages between the agents in Bob's domain (Agency, Routing and iPhone). Those could be implemented as proprietary protocols by each agent provider (since it's possible one vendor would write the code for each of those agents), but it's likely they will eventually be specified as open standard DIDComm protocols.

Based on the DIDDoc that Bob has sent Alice, for her to send a DIDComm message to Bob, Alice must (see the sketch after this list):

  • Prepare the message for Bob's Agent.
  • Encrypt and place that message into a \"Forward\" message for Bob's Routing Agent.
  • Encrypt and send the \"Forward\" message to Bob's Agency endpoint.
"},{"location":"gettingStarted/ConnectIndyNetwork/","title":"Connecting to an Indy Network","text":"

To be completed.

"},{"location":"gettingStarted/CredentialRevocation/","title":"Credential Revocation in ACA-Py","text":""},{"location":"gettingStarted/CredentialRevocation/#overview","title":"Overview","text":"

Revocation is perhaps the most difficult aspect of verifiable credentials to manage. This is true in AnonCreds, particularly in the management of AnonCreds revocation registries (RevRegs). Through experience in deploying use cases with ACA-Py, we have found that it is very difficult for the controller (the application code) to manage revocation registries, and as such, we have changed the implementation in ACA-Py so that it handles almost all of the work in revoking credentials. The only thing the controller writer has to do is track the minimum information necessary to implement the business rules around revocation, such as whose credentials should be revoked, and how close to real time revocations should be published.

Here is a summary of all of the AnonCreds revocation activities performed by issuers. After this, we'll provide a (much shorter) list of what an ACA-Py issuer controller has to do. For those interested, there is a more complete overview of AnonCreds revocation, including all of the roles, and some details of the cryptography behind the approach.

  • Issuers indicate that a credential will support revocation when creating the credential definition (CredDef).
  • Issuers create a Revocation Registry definition object of a given size (MaxSize -- the number of credentials that can use the RevReg) and publish it to the ledger (or more precisely, the verifiable data registry). In doing that, a Tails file is also created and published somewhere on the Internet, accessible to all Holders.
  • Issuers create and publish an initial Revocation Registry Entry that defines the state of all credentials within the RevReg, either all active or all revoked. It's a really bad idea to create a RevReg starting with \"all revoked\", so don't do that.
  • Issuers issue credentials and note the \"revocation ID\" of each credential. The \"revocation ID\" is a compound key consisting of the RevRegId from which the credential was issued, and the index within that registry of that credential. An index (from 1 to Max Size of the registry -- or perhaps 0 to Max Size - 1) can only be associated with one issued credential.
  • At some point, a RevReg is all used up (full), and the Issuer must create another one. Ideally, this does not cause an extra delay in the process of issuing credentials.
  • At some point, the Issuer revokes the credential of a holder, using the revocation Id of the relevant credential.
  • At some point, either in conjunction with each revocation, or for a batch of revocations, the Issuer publishes the RevReg(s) associated with a CredDef to the ledger. If there are multiple revocations spread across multiple RevRegs, there may be multiple writes to the ledger.

Since managing RevRegs is really hard for an ACA-Py controller, we have tried to minimize what an ACA-Py Issuer controller has to do, leaving everything else to be handled by ACA-Py. Of the items in the previous list, here is what an ACA-Py issuer controller does:

  • Issuers flag that revocation will be used when creating the CredDef, and specify the desired size of the RevReg. ACA-Py takes care of creating the initial RevReg(s) without further action by the controller.
  • Two RevRegs are initially created, so there is no delay when one fills up, and another is needed. In ongoing operations, when one RevReg fills up, the other active RevReg is used, and a new RevReg is created.
  • On creation of each RevReg, its corresponding tails file is published by ACA-Py.
  • On Issuance, the controller receives the logical \u201crevocation ID\" (combination of RevRegId+Index) of the issued credential to track.
  • On Revocation, the controller passes in the logical \u201crevocation ID\" of the credential to be revoked, including a \u201cnotify holder\u201d flag. ACA-Py records the revocation as pending and, if asked, sends a notification to the holder using a DIDComm message (Aries RFC 0183: Revocation Notification).
  • The Issuer requests that the revocations for a CredDefId be published. ACA-Py figures out which RevRegs contain pending revocations and so need to be published, and publishes each.

That is the minimum amount of tracking the controller must do while still being able to execute the business rules around revoking credentials.

From experience, we\u2019ve added two extra features to deal with unexpected conditions:

  • When using an Indy (or similar) ledger, if the local copy of a RevReg gets out of sync with the ledger copy (perhaps due to a failed ledger write), the Framework can create an update transaction to \u201cfix\u201d the issue. This is needed for a deltas-based revocation scheme (like Indy's), but not for a ledger that publishes revocation states containing the entire state of each credential.
  • From time to time there may be a need to \u201crotate\u201d a RevReg \u2014 to mark existing, active RevRegs as \u201cdecommissioned\u201d, and create new ones in their place. We\u2019ve added an endpoint (api call) for that.
"},{"location":"gettingStarted/CredentialRevocation/#using-aca-py-revocation","title":"Using ACA-Py Revocation","text":"

The following are the ACA-Py steps and APIs involved in handling credential revocation; a short controller sketch follows the numbered steps below.

To try these out, use the ACA-Py Alice/Faber demo with tails server support enabled. You will need the URL of a running instance of https://github.com/bcgov/indy-tails-server.

Include the command line parameter --tails-server-base-url <indy-tails-server url>

  1. Publish credential definition

    Credential definition is created. All required revocation collateral is also created and managed, including the revocation registry definition, entry, and tails file.

    POST /credential-definitions\n{\n  \"schema_id\": schema_id,\n  \"support_revocation\": true,\n  # Only needed if support_revocation is true. Defaults to 100\n  \"revocation_registry_size\": size_int,\n  \"tag\": cred_def_tag # Optional\n\n}\nResponse:\n{\n  \"credential_definition_id\": \"credential_definition_id\"\n}\n
  2. Issue credential

    This endpoint manages revocation data. If new revocation registry data is required, it is automatically managed in the background.

    POST /issue-credential/send-offer\n{\n    \"cred_def_id\": credential_definition_id,\n    \"revoc_reg_id\": revocation_registry_id,\n    \"auto_remove\": False, # We need the credential exchange record when revoking\n    ...\n}\nResponse\n{\n    \"credential_exchange_id\": credential_exchange_id\n}\n
  3. Revoking credential

    POST /revocation/revoke\n{\n    \"rev_reg_id\": <revocation_registry_id>,\n    \"cred_rev_id\": <credential_revocation_id>,\n    \"publish\": <true|false>\n}\n

    If publish=false, you must use /issue-credential/publish-revocations to publish pending revocations in batches. Revocations are not written to the ledger until this is called.

  4. When asking for proof, specify the time span when the credential is NOT revoked

     POST /present-proof/send-request\n {\n   \"connection_id\": ...,\n   \"proof_request\": {\n     \"requested_attributes\": [\n       {\n         \"name\": ...\n         \"restrictions\": ...,\n         ...\n         \"non_revoked\": # Optional, override the global one when specified\n         {\n           \"from\": <seconds from Unix Epoch> # Optional, default is 0\n           \"to\": <seconds from Unix Epoch>\n         }\n       },\n       ...\n     ],\n     \"requested_predicates\": [\n       {\n         \"name\": ...\n         ...\n         \"non_revoked\": # Optional, override the global one when specified\n         {\n           \"from\": <seconds from Unix Epoch> # Optional, default is 0\n           \"to\": <seconds from Unix Epoch>\n         }\n       },\n       ...\n     ],\n     \"non_revoked\": # Optional, only check revocation if specified\n     {\n       \"from\": <seconds from Unix Epoch> # Optional, default is 0\n       \"to\": <seconds from Unix Epoch>\n     }\n   }\n }\n
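
As a sketch, a Python controller might drive the revoke and publish steps above like this (the admin URL is an assumption; the endpoint paths are those shown in the steps):

# Sketch of an issuer controller driving the calls above; the admin URL is\n# an assumption -- use whatever admin port your ACA-Py instance exposes.\nimport requests\n\nADMIN = \"http://localhost:8031\"  # assumed ACA-Py admin endpoint\n\ndef revoke(rev_reg_id: str, cred_rev_id: str, publish: bool = False) -> None:\n    # Step 3 above: mark the credential revoked (pending when publish=False)\n    requests.post(f\"{ADMIN}/revocation/revoke\", json={\n        \"rev_reg_id\": rev_reg_id,\n        \"cred_rev_id\": cred_rev_id,\n        \"publish\": publish,\n    }).raise_for_status()\n\ndef publish_pending() -> None:\n    # Batch-write all pending revocations to the ledger\n    requests.post(f\"{ADMIN}/issue-credential/publish-revocations\", json={}).raise_for_status()\n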
"},{"location":"gettingStarted/CredentialRevocation/#revocation-notification","title":"Revocation Notification","text":"

ACA-Py supports Revocation Notification v1.0.

Note: The optional ~please_ack is not currently supported.

"},{"location":"gettingStarted/CredentialRevocation/#issuer-role","title":"Issuer Role","text":"

To notify connections to which credentials have been issued, during step 2 above, include the following attributes in the request body:

  • notify - A boolean value indicating whether or not a notification should be sent. If the argument --notify-revocation is used on startup, this value defaults to true. Otherwise, it will default to false. This value overrides the --notify-revocation flag; the value of notify always takes precedence.
  • connection_id - Connection ID for the connection of the credential holder. This is required when notify is true.
  • thread_id - Message Thread ID of the credential exchange message that resulted in the credential now being revoked. This is required when notify is true.
  • comment - An optional comment presented to the credential holder as part of the revocation notification. This field might contain the reason for revocation or some other human readable information about the revocation.

Your request might look something like:

POST /revocation/revoke\n{\n    \"rev_reg_id\": <revocation_registry_id>,\n    \"cred_rev_id\": <credential_revocation_id>,\n    \"publish\": <true|false>,\n    \"notify\": true,\n    \"connection_id\": <connection id>,\n    \"thread_id\": <thread id>,\n    \"comment\": \"optional comment\"\n}\n
"},{"location":"gettingStarted/CredentialRevocation/#holder-role","title":"Holder Role","text":"

On receipt of a revocation notification, an event with topic acapy::revocation-notification::received and payload containing the thread ID and comment is emitted on the event bus. This can be handled in plugins to further customize notification handling.

If the argument --monitor-revocation-notification is used on startup, a webhook with the topic revocation-notification and a payload containing the thread ID and comment is emitted to registered webhook urls.
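
As a sketch, a minimal holder-side listener for that webhook might look like this (the port and exact path depend on your --webhook-url setting):

# Holder-side sketch: catch the revocation-notification webhook topic.\nfrom flask import Flask, request\n\napp = Flask(__name__)\n\n@app.route(\"/topic/revocation-notification/\", methods=[\"POST\"])\ndef revocation_notification():\n    payload = request.get_json()\n    print(\"credential revoked:\", payload.get(\"thread_id\"), payload.get(\"comment\"))\n    return \"\", 200\n\napp.run(port=8888)  # assumes --webhook-url pointing at this host and port\n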

"},{"location":"gettingStarted/CredentialRevocation/#manually-creating-revocation-registries","title":"Manually Creating Revocation Registries","text":"

NOTE: This capability is deprecated and will likely be removed entirely in an upcoming release of ACA-Py.

The process for creating revocation registries is completely automated - when you create a Credential Definition with revocation enabled, a revocation registry is automatically created (in fact 2 registries are created), and when a registry fills up, a new one is automatically created.

However, the ACA-Py admin API supports endpoints to explicitly create a new revocation registry, if you desire.

There are several endpoints that must be called, and they must be called in this order:

  1. Create the revocation registry: POST /revocation/create-registry

     • you need to provide the credential definition id and the size of the registry

  2. Fix the tails file URI: PATCH /revocation/registry/{rev_reg_id}

     • here you need to provide the full URI that will be written to the ledger, for example:

{\n  \"tails_public_uri\": \"http://host.docker.internal:6543/VDKEEMMSRTEqK4m7iiq5ZL:4:VDKEEMMSRTEqK4m7iiq5ZL:3:CL:8:faber.agent.degree_schema:CL_ACCUM:3cb5c439-928c-483c-a9a8-629c307e6b2d\"\n}\n
  3. Post the revocation registry definition to the ledger: POST /revocation/registry/{rev_reg_id}/definition

     • if you are an author (i.e. have a DID with restricted ledger write access) then this transaction may need to go through an endorser

  4. Write the tails file: PUT /revocation/registry/{rev_reg_id}/tails-file

     • the tails server will check that the registry definition is already written to the ledger

  5. Post the initial accumulator value to the ledger: POST /revocation/registry/{rev_reg_id}/entry

     • if you are an author (i.e. have a DID with restricted ledger write access) then this transaction may need to go through an endorser

     • this operation MUST be performed on the new revocation registry definition BEFORE any revocation operations are performed
"},{"location":"gettingStarted/CredentialRevocation/#revocation-registry-rotation","title":"Revocation Registry Rotation","text":"

From time to time an Issuer may want to issue credentials from a new Revocation Registry. That can be done by changing the Credential Definition, but that could impact verifiers. Revocation Registries go through a series of state changes: init, generated, posted, active, full, decommissioned. When issuing revocable credentials, the work is done with the active registry record. There are always 2 active registry records: one for tracking revocation until it is full, and a second to act as a \"hot swap\" in case issuance is done when the primary is full and being replaced. This ensures that there is always an active registry. When rotating, all registry records (except records in init state) are decommissioned and a new pair of active registry records is created.

Issuers can rotate their Credential Definition Revocation Registry records with a simple call: POST /revocation/active-registry/{cred_def_id}/rotate

It is advised that Issuers ensure the active registry is ready by calling GET /revocation/active-registry/{cred_def_id} after rotation and before issuance (if possible).
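
As a sketch (the admin URL is an assumption and the credential definition id is a placeholder):

# Rotate the active registries, then confirm a new active registry exists.\nimport requests\n\nADMIN = \"http://localhost:8031\"                  # assumed admin endpoint\ncred_def_id = \"<your credential definition id>\"  # placeholder\n\nrequests.post(f\"{ADMIN}/revocation/active-registry/{cred_def_id}/rotate\").raise_for_status()\nactive = requests.get(f\"{ADMIN}/revocation/active-registry/{cred_def_id}\").json()\nprint(active)\n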

"},{"location":"gettingStarted/DIDcommMsgs/","title":"Deeper Dive: DIDComm Messaging","text":"

DIDComm peer-to-peer messages are asynchronous messages that one agent sends to another - for example, Faber would send to Alice. In between, there may be other agents and message processing, but at the edges, Faber appears to be messaging directly with Alice using encryption based on the DIDs and DIDDocs that the two shared when establishing a connection. The messages are JSON-LD-friendly messages with a \"type\" that defines the namespace, protocol, protocol version and type of the message, an \"id\" that is a GUID for the message, and additional fields as required by the message type. The namespace is currently defined to be a public DID that should be globally resolvable to a protocol specification. Currently, \"core\" messages use a DID that is not yet globally resolvable - Daniel Hardman has the keys associated with the DID.
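
For example, a message in the basicmessage protocol might look like the following Python dict (content invented; the type string follows the format just described):

# Illustrative DIDComm message; the \"@type\" encodes namespace (a DID),\n# protocol, protocol version and message type.\nimport uuid\n\nmessage = {\n    \"@type\": \"did:sov:BzCbsNYhMrjHiqZDTUASHg;spec/basicmessage/1.0/message\",\n    \"@id\": str(uuid.uuid4()),  # a GUID for this message\n    \"content\": \"Hello from Faber\",\n}\n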

Link: Message Types

As protocols are executed, the data associated with the protocol is stored in the (currently named) wallet of the agent. The data primarily consists of the state object for that instance of the protocol, and any artifacts of running the protocol. For example, when establishing a connection, the metadata associated with the connection (DIDs, DID Documents and private keys) is stored in the agent's wallet. Likewise, ledger data is cached in the wallet (DIDs, schema, credential definitions, etc.) and credentials. This is taken care of by the Aries agent and the protocols configured into the agent.

"},{"location":"gettingStarted/DIDcommMsgs/#message-decorators","title":"Message Decorators","text":"

In addition to protocol specific data elements in messages, messages can include \"decorators\", standardized message elements that define cross-cutting behavior. The most common example is the \"thread\" decorator, which is used to link the messages in a protocol instance. As messages go back and forth between agents to complete an instance of a protocol (e.g. issuing a credential), the thread decorator data elements let the agents know to which protocol instance the message belongs. Other currently defined examples of decorators include attachments, localization, tracing and timing. Decorators are often processed by the core of the agent, but some are processed by the protocol message handlers. For example, the thread decorator is processed to retrieve the protocol state object for that instance (thread) of the protocol before control is passed to the protocol message handler.
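
For instance, a reply in a threaded exchange might carry the decorator like this (a sketch; all ids invented):

# The ~thread decorator ties a reply to its protocol instance (thread).\nimport uuid\n\nfirst_msg_id = str(uuid.uuid4())  # the @id of the message that started the thread\nreply = {\n    \"@type\": \"did:sov:BzCbsNYhMrjHiqZDTUASHg;spec/basicmessage/1.0/message\",\n    \"@id\": str(uuid.uuid4()),\n    \"~thread\": {\"thid\": first_msg_id},  # same thid for every message in the instance\n    \"content\": \"Hello back\",\n}\n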

"},{"location":"gettingStarted/DecentralizedIdentityDemos/","title":"Decentralized Identity Use Case Demos","text":"

The following are some demos that you can go through to see verifiable credentials in action. For each of the demos, we've included some guidance on what you should get out of the demo - and where you should stop exploring the demos. Later on in this guide we have some command line demos built on current generation code for developers wanting to look at what's going on under the hood.

"},{"location":"gettingStarted/DecentralizedIdentityDemos/#bc-gov-showcase","title":"BC Gov Showcase","text":"

Try out the BC Gov Showcase to download a production Wallet for holding Verifiable Credentials, and then use your new wallet to get and present credentials in some sample scenarios. The end-to-end verifiable credential experience in 30 minutes or less.

"},{"location":"gettingStarted/DecentralizedIdentityDemos/#traction-anoncreds-workshop","title":"Traction AnonCreds Workshop","text":"

Now that you have a wallet, how about being an issuer, and experience what is needed on that side of an exchange? To do that, try the Traction AnonCreds Workshop. Get your own (temporary -- it will be gone in a few weeks!) Aries Cloud Agent Python-based issuer/verifier agent. Connect to the wallet on your mobile phone, issue a credential and then present it back. Lots to learn, without ever leaving your browser!

"},{"location":"gettingStarted/DecentralizedIdentityDemos/#more-demos-please","title":"More demos, please","text":"

Interested in seeing your demos/use cases added to this list? Submit an issue or a PR and we'll see about including it in this list.

"},{"location":"gettingStarted/IndyAriesDevOptions/","title":"What should I work on? Options for Aries/Indy Developers","text":"

Now that you know the basics of the Indy/Aries eco-system, what do you want to work on? There are many projects at different levels of the eco-system you could choose to work on, and many ways to contribute to the community.

This is an important summary for newcomers, as often the temptation is to start at a level far below where you plan to focus your attention. Too often devs coming into the community start at \"the blockchain\"; at indy-node (the Indy public ledger) or the indy-sdk. That is far below where the majority of developers will work and is not really that helpful if what you really want to do is build decentralized identity applications.

In the following, we go through the layers from the top of the stack to the bottom. Our expectation is that the majority of developers will work at the application level, and there will be fewer contributing developers each layer down you go. This is not to dissuade anyone from contributing at the lower levels, but rather to say if you are not going to contribute at the lower levels, you don't need to know everything about them. It's much like web development - you don't need to know TCP/IP to build web apps.

"},{"location":"gettingStarted/IndyAriesDevOptions/#building-decentralized-identity-applications","title":"Building Decentralized Identity Applications","text":"

If you just want to build enterprise applications on top of the decentralized identity-related Hyperledger projects, you can start with building cloud-based controller apps using any language you want, and deploying your code with an instance of the code in this repository (aries-cloudagent-python).

If you want to build a mobile agent, there are open source options available, including Aries-MobileAgent-Xamarin (aka \"Aries MAX\"), which is built on Aries Framework .NET, and Aries Mobile Agent React Native, which is built on Aries Framework JavaScript.

As a developer building applications that use/embed Aries agents, you should join the Aries Working Group's weekly calls and watch the aries-rfcs repo to see what protocols are being added and extended. In some cases, you may need to create your own protocols to be added to this repository, and if you are looking for interoperability, you should specify those protocols in an open way, involving the community.

Note that if building apps is what you want to do, you don't need to do a deep dive into the Aries SDK, the Indy SDK or the Indy Node public ledger. You need to know the concepts, but it's not a requirement that you know the code base intimately.

"},{"location":"gettingStarted/IndyAriesDevOptions/#contributing-to-aries-cloudagent-python","title":"Contributing to aries-cloudagent-python","text":"

Of course as you build applications using aries-cloudagent-python, you will no doubt find deficiencies in the code and features you want added. Contributions to this repo will always be welcome.

"},{"location":"gettingStarted/IndyAriesDevOptions/#supporting-additional-ledgers","title":"Supporting Additional Ledgers","text":"

aries-cloudagent-python currently supports only Hyperledger Indy-based public ledgers and verifiable credentials exchange. A goal of Hyperledger Aries is to be ledger-agnostic, and to support other ledgers. We're experimenting with adding support for other ledgers, and would welcome assistance in doing that.

"},{"location":"gettingStarted/IndyAriesDevOptions/#other-agent-frameworks","title":"Other Agent Frameworks","text":"

Although controllers for an aries-cloudagent-python instance can be written in any language, there is definitely a place for functionality equivalent (and better) to what is in this repo in other languages. Use the example provided by aries-cloudagent-python, evolve it using a different language, and as you discover better ways to do things, discuss and share those improvements in the broader Aries community so that this and other codebases improve.

"},{"location":"gettingStarted/IndyAriesDevOptions/#improving-aries-sdk","title":"Improving Aries SDK","text":"

This code base and other Aries agent implementations currently embed the indy-sdk. However, much of the code in the indy-sdk is being migrated into a variety of Aries language specific repositories. How this migration is to be done is still being decided, but it makes sense that the agent-type things be moved to Aries repositories. A number of language specific Aries SDK repos have been created and are being populated.

"},{"location":"gettingStarted/IndyAriesDevOptions/#improving-the-indy-sdk","title":"Improving the Indy SDK","text":"

Dropping down a level from Aries and into Indy, the indy-sdk needs to continue to evolve. The code base is robust, of high quality and well thought out, but it needs to continue to add new capabilities and improve existing features. The indy-sdk is implemented in Rust, to produce a C-callable library that can be used by client libraries built in a variety of languages.

"},{"location":"gettingStarted/IndyAriesDevOptions/#improving-indy-node","title":"Improving Indy Node","text":"

If you are interested in getting into the public ledger part of Indy, particularly if you are going to be a Sovrin Steward, you should take a deep look into indy-node. Like the indy-sdk, indy-node is robust, of high quality and is well thought out. As the network grows, use cases change and new cryptographic primitives move into the mainstream, indy-node capabilities will need to evolve. indy-node is coded in Python.

"},{"location":"gettingStarted/IndyAriesDevOptions/#working-in-cryptography","title":"Working in Cryptography","text":"

Finally, at the deepest level, and core to all of the projects is the cryptography in Hyperledger Ursa. If you are a cryptographer, that's where you want to be - and we want you there.

"},{"location":"gettingStarted/IndyBasics/","title":"Indy, Verifiable Credentials and Decentralized Identity Basics","text":"

NOTE: If you are a developer building apps on top of Aries and Indy, you DO NOT need to know the nuts and bolts of Indy to build applications. You need to know about verifiable credentials and the concepts of self-sovereign identity. But as an app developer, you don't need to do the Indy getting started pieces. Aries takes care of those details for you. The introduction linked here should be sufficient.

If you are new to Indy and verifiable credentials and want to learn the core concepts, this link provides a solid introduction to the goals and purpose of Indy, including verifiable credentials, DIDs, decentralized/self-sovereign identity, the Sovrin Foundation and more. The document is the content of the Indy chapter of the Hyperledger edX Blockchain for Business course (which you could also go through).

Feel free to do the demo that is referenced in the material, but we recommend that you not dig into that codebase. It's pretty old now - almost a year! We've got much more relevant examples later in this guide.

As well, don't use the guidance in the course to dive into the content about \"Getting Started\" with Indy. Come back here as this content is far more relevant to the current state of Indy and Aries.

"},{"location":"gettingStarted/IndyBasics/#tldr","title":"tl;dr","text":"

Indy provides an implementation of the basic functions required to implement a network for self-sovereign identity (SSI) - a ledger, client SDKs for interacting with the ledger, DIDs, and capabilities for issuing, holding and proving verifiable credentials.

Back to the Aries Developer - Getting Started Guide.

"},{"location":"gettingStarted/IssuingAnonCredsCredentials/","title":"Issuing AnonCreds Credentials","text":"

Become an issuer, and define, publish and issue verifiable credentials to a mobile wallet. Run the Traction AnonCreds Workshop. Get your own (temporary -- it will be gone in a few weeks!) Aries Cloud Agent Python-based issuer/verifier agent. Connect to the wallet on your mobile phone, issue a credential and then present it back. Lots to learn, without ever leaving your browser!

"},{"location":"gettingStarted/PresentingAnonCredsProofs/","title":"Presenting AnonCreds Proofs","text":"

Become a verifier, and construct a presentation request, send the request to a mobile wallet, get a presentation derived from AnonCreds verifiable credentials and verify the presentation. Run the Traction AnonCreds Workshop. Get your own (temporary -- it will be gone in a few weeks!) Aries Cloud Agent Python-based issuer/verifier agent. Connect to the wallet on your mobile phone, issue a credential and then present it back. Lots to learn, without ever leaving your browser!

"},{"location":"gettingStarted/RoutingEncryption/","title":"Deeper Dive: DIDComm Message Routing and Encryption","text":"

Many Aries edge agents do not directly receive messages from a peer edge agent - they have agents in between that route messages to them. This is done for many reasons, such as:

  • The agent is on a mobile device that does not have a persistent connection and so uses a cloud agent.
  • The person does not want to allow correlation of their agent across relationships and so they use a shared, common endpoint (e.g. https://agents-R-Us.ca) so that they are \"hidden in a crowd\".
  • An enterprise wants a single gateway to the many enterprise agents they have in their organization.

Thus, when a DIDComm message is sent from one edge agent to another, it is routed per the instructions of the receiver and for the needs of the sender. For example, in the following picture, Alice might be told by Bob to send messages to his phone (agent 4) via agents 9 and 3, and Alice might always send out messages via agent 2.

The following looks at how those requirements are met with mediators (for example, agents 9 and 3) and relays (agent 2).

"},{"location":"gettingStarted/RoutingEncryption/#inbound-routing-mediators","title":"Inbound Routing - Mediators","text":"

To tell a sender how to get a message to it, an agent puts into the DIDDoc for that sender a service endpoint for the recipient (with an encryption key) and an ordered list (possibly empty) of routing keys (called \"mediators\") to use when sending the message. To send the message, the sender must:

  • Prepare the message to be sent to the recipient
  • Successively encrypt and wrap the message for each intermediate mediator in a \"forward\" message - an envelope.
  • Encrypt and send the message to the first agent in the routing

Note that when an agent uses mediators, it is their responsibility to notify any mediators that need to know of the new relationship that has been formed using the connection protocol, and of the routing needs of that relationship - where to send messages that arrive destined for a given verkey. Mediator agents maintain what amounts to a routing table so that, when they receive a forward message for a given verkey, they know where it should go.
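
Conceptually, that routing table is as simple as the following sketch (all names invented):

# Toy mediator routing table: recipient verkey -> delivery destination.\nroutes: dict[str, str] = {}\n\ndef add_route(recipient_verkey: str, destination: str) -> None:\n    # Called when an edge agent registers a new relationship's verkey\n    routes[recipient_verkey] = destination\n\ndef deliver(packed_msg: dict, destination: str) -> None:\n    print(f\"would deliver to {destination}\")  # placeholder transport\n\ndef handle_forward(forward_msg: dict) -> None:\n    # A forward message names the recipient verkey in its \"to\" field\n    destination = routes.get(forward_msg[\"to\"])\n    if destination is None:\n        raise LookupError(f\"no route for verkey {forward_msg['to']}\")\n    deliver(forward_msg[\"msg\"], destination)\n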

Link: DIDDoc conventions for inbound routing

"},{"location":"gettingStarted/RoutingEncryption/#relays","title":"Relays","text":"

Inbound routing described above covers mediators for the receiver that the sender must know about. In addition, either the sender or the receiver may also have relays they use for outbound messages. Relays are routing agents not known to other parties, but that participate in message routing. For example, an enterprise agent might send all outbound traffic to a single gateway in the organization. When sending to a relay, the sender just wraps the message in another \"forward\" message envelope.

Link: Mediators and Relays

"},{"location":"gettingStarted/RoutingEncryption/#message-encryption","title":"Message Encryption","text":"

The DIDComm encryption handling is handled within the Aries agent, and is not really something a developer building applications using an agent needs to worry about. Further, within an Aries agent, the handling of the encryption is left to libraries - ultimately calling dependencies from Hyperledger Ursa. To encrypt a message, the agent code calls a pack() function to handle the encryption, and to decrypt a message, the agent code calls a corresponding unpack() function. The \"wire messages\" (as originally called) are described in detail here, including variations for sender-authenticated and anonymous encryption. Wire messages were meant to indicate the handling of a message from one agent directly to another, versus the higher level concept of routing a message from an edge agent to a peer edge agent.
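
Conceptually (these function signatures are illustrative stand-ins, not any specific library's API):

# Conceptual only: real agents delegate this to crypto libraries (Ursa).\nfrom typing import Optional, Tuple\n\ndef pack(message: dict, to_verkey: str, from_verkey: Optional[str] = None) -> dict:\n    # from_verkey set -> sender-authenticated (\"authcrypt\") encryption;\n    # from_verkey None -> anonymous (\"anoncrypt\") encryption.\n    return {\"to\": to_verkey, \"sender\": from_verkey, \"ciphertext\": message}\n\ndef unpack(wire_msg: dict) -> Tuple[dict, Optional[str]]:\n    # Returns the decrypted message and the sender verkey (None if anonymous)\n    return wire_msg[\"ciphertext\"], wire_msg[\"sender\"]\n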

Much thought has also gone into repudiable and non-repudiable messaging, as described here.

"},{"location":"gettingStarted/YourOwnAriesAgent/","title":"Creating Your Own Aries Agent","text":"

Use the \"next steps\" in the Traction AnonCreds Workshop and create your own controller. The Aries ACA-Py Controllers repository has some samples to get you started.

"},{"location":"testing/AgentTracing/","title":"Using Tracing in ACA-PY","text":"

The aca-py agent supports message tracing, according to the Tracing RFC.

Tracing can be enabled globally, for all messages/events, or it can be enabled on an exchange-by-exchange basis.

Tracing is configured globally for the agent.

"},{"location":"testing/AgentTracing/#aca-py-configuration","title":"ACA-PY Configuration","text":"

The following options can be specified when starting the aca-py agent:

  --trace               Generate tracing events.\n  --trace-target <trace-target>\n                        Target for trace events (\"log\", \"message\", or http\n                        endpoint).\n  --trace-tag <trace-tag>\n                        Tag to be included when logging events.\n  --trace-label <trace-label>\n                        Label (agent name) used logging events.\n

The --trace option enables tracing globally for the agent, the other options can configure the trace destination and content (default is log).

Tracing can be enabled on an exchange-by-exchange basis, by including { ... \"trace\": True, ...} in the JSON payload to the API call (for credential and proof exchanges).
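
For example, a controller might enable tracing on a single credential exchange like this (a sketch; the admin URL is an assumption and other offer fields are omitted):

# Trace one exchange only by adding \"trace\": True to the API payload.\nimport requests\n\nADMIN = \"http://localhost:8031\"  # assumed admin endpoint\n\noffer = {\n    \"cred_def_id\": \"<credential definition id>\",  # placeholder\n    \"trace\": True,  # trace just this credential exchange\n    # ... remaining offer fields ...\n}\nrequests.post(f\"{ADMIN}/issue-credential/send-offer\", json=offer).raise_for_status()\n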

"},{"location":"testing/AgentTracing/#enabling-tracing-in-the-alicefaber-demo","title":"Enabling Tracing in the Alice/Faber Demo","text":"

The run_demo script supports the following parameters and environment variables.

Environment variables:

TRACE_ENABLED          Flag to enable tracing\n\nTRACE_TARGET_URL       Host:port of endpoint to log trace events (e.g. logstash:9700)\n\nDOCKER_NET             Docker network to join (must be used if ELK stack is running in docker)\n\nTRACE_TAG              Tag to be included in all logged trace events\n

Parameters:

--trace-log            Enables tracing to the standard log output\n                       (sets TRACE_ENABLED, TRACE_TARGET, TRACE_TAG)\n\n--trace-http           Enables tracing to an HTTP endpoint (specified by TRACE_TARGET_URL)\n                       (sets TRACE_ENABLED, TRACE_TARGET, TRACE_TAG)\n

When running the Faber controller, tracing can be enabled using the T menu option:

Faber      | Connected\n    (1) Issue Credential\n    (2) Send Proof Request\n    (3) Send Message\n    (T) Toggle tracing on credential/proof exchange\n    (X) Exit?\n[1/2/3/T/X] t\n\n>>> Credential/Proof Exchange Tracing is ON\n    (1) Issue Credential\n    (2) Send Proof Request\n    (3) Send Message\n    (T) Toggle tracing on credential/proof exchange\n    (X) Exit?\n\n[1/2/3/T/X] t\n\n>>> Credential/Proof Exchange Tracing is OFF\n    (1) Issue Credential\n    (2) Send Proof Request\n    (3) Send Message\n    (T) Toggle tracing on credential/proof exchange\n    (X) Exit?\n\n[1/2/3/T/X]\n

When Exchange Tracing is ON, all exchanges will include tracing.

"},{"location":"testing/AgentTracing/#logging-trace-events-to-an-elk-stack","title":"Logging Trace Events to an ELK Stack","text":"

You can use the ELK stack in the ELK Stack sub-directory as a target for trace events; just start the ELK stack using the docker-compose file and then, in two separate bash shells, start up the demo as follows:

DOCKER_NET=elknet TRACE_TARGET_URL=logstash:9700 ./run_demo faber --trace-http\n
DOCKER_NET=elknet TRACE_TARGET_URL=logstash:9700 ./run_demo alice --trace-http\n
"},{"location":"testing/AgentTracing/#hooking-into-event-messaging","title":"Hooking into event messaging","text":"

ACA-PY supports sending events to webhooks, which allows the demo agents to display them in the CLI. To also send them to another endpoint, use the --webhook-url option, which requires the WEBHOOK_URL environment variable. To configure an endpoint running on the docker host system on port 8888, use the following:

WEBHOOK_URL=host.docker.internal:8888 ./run_demo faber --webhook-url\n
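
A minimal listener for that endpoint might look like this sketch (standard library only; it prints every webhook it receives):

# Toy webhook sink for the demo's --webhook-url traffic on port 8888.\nfrom http.server import BaseHTTPRequestHandler, HTTPServer\n\nclass Hook(BaseHTTPRequestHandler):\n    def do_POST(self):\n        length = int(self.headers.get(\"Content-Length\", 0))\n        body = self.rfile.read(length)\n        print(self.path, body.decode(\"utf-8\", errors=\"replace\"))\n        self.send_response(200)\n        self.end_headers()\n\nHTTPServer((\"0.0.0.0\", 8888), Hook).serve_forever()\n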
"},{"location":"testing/INTEGRATION-TESTS/","title":"Integration Tests for Aca-py using Behave","text":"

Integration tests for aca-py are implemented using Behave functional tests to drive aca-py agents based on the alice/faber demo framework.

If you are new to the ACA-Py integration test suite, this video from ACA-Py Maintainer @ianco describes the Integration Tests in ACA-Py, how to run them and how to add more tests. See also the video at the end of this document about running Aries Agent Test Harness tests before you submit your pull requests.

"},{"location":"testing/INTEGRATION-TESTS/#getting-started","title":"Getting Started","text":"

To run the aca-py Behave tests, open a bash shell and run the following:

git clone https://github.com/bcgov/von-network\ncd von-network\n./manage build\n./manage start\ncd ..\ngit clone https://github.com/bcgov/indy-tails-server.git\ncd indy-tails-server/docker\n./manage build\n./manage start\ncd ../..\ngit clone https://github.com/hyperledger/aries-cloudagent-python\ncd aries-cloudagent-python/demo\n./run_bdd -t ~@taa_required\n

Note that an Indy ledger and tails server are both required (these can also be specified using environment variables).

Note also that some tests require a ledger with TAA enabled; how to run these tests is described below.

By default the test suite runs using a default (SQLite) wallet; to run the tests using postgres, run the following:

# run the above commands, up to cd aries-cloudagent-python/demo\ndocker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d -p 5432:5432 postgres:10\nACAPY_ARG_FILE=postgres-indy-args.yml ./run_bdd\n

To run the tests against the back-end askar libraries (as opposed to indy-sdk), run the following:

BDD_EXTRA_AGENT_ARGS=\"{\\\"wallet-type\\\":\\\"askar\\\"}\" ./run_bdd -t ~@taa_required\n

(Note that wallet-type is currently the only extra argument supported.)

You can run individual tests by specifying the tag(s):

./run_bdd -t @T001-AIP10-RFC0037\n
"},{"location":"testing/INTEGRATION-TESTS/#running-integration-tests-which-require-taa","title":"Running Integration Tests which require TAA","text":"

To run a local von-network with TAA enabled, run the following:

git clone https://github.com/bcgov/von-network\ncd von-network\n./manage build\n./manage start --taa-sample --logs\n

You can then run the TAA-enabled tests as follows:

./run_bdd -t @taa_required\n

or:

BDD_EXTRA_AGENT_ARGS=\"{\\\"wallet-type\\\":\\\"askar\\\"}\" ./run_bdd -t @taa_required\n

The agents run on a pre-defined set of ports; however, occasionally your local system may already be using one of these ports. (For example, MacOS recently decided to use 8021 for the ftp proxy service.)

To override the default port settings:

AGENT_PORT_OVERRIDE=8030 ./run_bdd -t <some tags>\n

(Note that since the tests run multiple agents, you require up to 60 available ports.)

"},{"location":"testing/INTEGRATION-TESTS/#aca-py-integration-tests-vs-aries-agent-test-harness-aath","title":"Aca-py Integration Tests vs Aries Agent Test Harness (AATH)","text":"

Aca-py Behave tests are based on the interoperability tests that are implemented in the Aries Agent Test Harness (AATH). Both use Behave (Gherkin) to execute tests against a running aca-py agent (or in the case of AATH, against any compatible Aries agent), however the aca-py integration tests focus on aca-py specific features.

AATH:

  • Main purpose is to test interoperability between Aries agents
  • Implements detailed tests based on Aries RFC's (runs different scenarios, tests exception paths, etc.)
  • Runs Aries agents using Docker images (agents run for the duration of the tests)
  • Uses a standard \"backchannel\" to support integration of any Aries agent

Aca-py integration tests:

  • Main purpose is to test aca-py
  • Implements tests based on Aries RFC's, but not to the same level of detail as AATH (runs (mostly) happy path scenarios against multiple agent configurations)
  • Tests aca-py specific configurations and features
  • Starts and stops agents for each test to test different aca-py configurations
  • Uses the same Python framework as used for the interactive Alice/Faber demo
"},{"location":"testing/INTEGRATION-TESTS/#configuration-driven-tests","title":"Configuration-driven Tests","text":"

Aca-py integration tests use the same configuration approach as AATH, documented here.

In addition to support for external schemas, credential data etc, the aca-py integration tests support configuration of the aca-py agents that are used to run the test. For example:

Scenario Outline: Present Proof where the prover does not propose a presentation of the proof and is acknowledged\n  Given \"3\" agents\n     | name  | role     | capabilities        |\n     | Acme  | issuer   | <Acme_capabilities> |\n     | Faber | verifier | <Acme_capabilities> |\n     | Bob   | prover   | <Bob_capabilities>  |\n  And \"<issuer>\" and \"Bob\" have an existing connection\n  And \"Bob\" has an issued <Schema_name> credential <Credential_data> from <issuer>\n  ...\n\n  Examples:\n     | issuer | Acme_capabilities        | Bob_capabilities | Schema_name    | Credential_data          | Proof_request  |\n     | Acme   | --public-did             |                  | driverslicense | Data_DL_NormalizedValues | DL_age_over_19 |\n     | Faber  | --public-did  --mediator | --mediator       | driverslicense | Data_DL_NormalizedValues | DL_age_over_19 |\n

In the above example, the test will run twice using the parameters specified in the \"Examples\" section. The Acme, Faber and Bob agents will be started for the test and then shut down when the test is completed.

The agent's \"capabilities\" are specified using the same command-line parameters that are supported for the Alice/Faber demo agents.

"},{"location":"testing/INTEGRATION-TESTS/#global-configuration-for-all-aca-py-agents-under-test","title":"Global Configuration for All Aca-py Agents Under Test","text":"

You can specify parameters that are applied to all aca-py agents using the ACAPY_ARG_FILE environment variable, for example:

ACAPY_ARG_FILE=postgres-indy-args.yml ./run_bdd\n

... will apply the parameters in the postgres-indy-args.yml file (which just happens to configure a postgres wallet) to all agents under test.

Or the following:

ACAPY_ARG_FILE=askar-indy-args.yml ./run_bdd\n

... will run all the tests against an askar wallet (the new shared components, which replace indy-sdk).

Any aca-py argument can be included in the yml file, and order-of-precedence applies (see https://pypi.org/project/ConfigArgParse/).

"},{"location":"testing/INTEGRATION-TESTS/#specifying-environment-parameters-when-running-integration-tests","title":"Specifying Environment Parameters when Running Integration Tests","text":"

Aca-py integration tests support the following environment-driven configuration:

  • LEDGER_URL - specify the ledger url
  • TAILS_NETWORK - specify the docker network the tails server is running on
  • PUBLIC_TAILS_URL - specify the public url of the tails server
  • ACAPY_ARG_FILE - specify global aca-py parameters (see above)
"},{"location":"testing/INTEGRATION-TESTS/#running-specific-test-scenarios","title":"Running specific test scenarios","text":"

Behave tests are tagged using the same standard tags as used in AATH.

To run a specific set of Aca-py integration tests (or exclude specific tests):

./run_bdd -t tag1 -t ~tag2\n

(All command line parameters are passed to the behave command, so all parameters supported by behave can be used.)

"},{"location":"testing/INTEGRATION-TESTS/#aries-agent-test-harness-aca-py-tests","title":"Aries Agent Test Harness ACA-Py Tests","text":"

This video is a presentation by Aries Cloud Agent Python (ACA-Py) developer @ianco about using the Aries Agent Test Harness for local pre-release testing of ACA-Py. Have a big change that you want to test with other Aries Frameworks? Follow this guidance to run AATH tests with your under-development branch of ACA-Py.

"},{"location":"testing/Logging/","title":"Logging docs","text":"

ACA-Py supports multiple configurations of logging.

"},{"location":"testing/Logging/#log-level","title":"Log level","text":"

ACA-Py's logging is based on python's logging lib. Log levels DEBUG, INFO and WARNING are available. Other log levels fall back to WARNING.

"},{"location":"testing/Logging/#per-tenant-logging","title":"Per Tenant Logging","text":"

ACA-Py supports writing log messages to a file, with the wallet_id as the tenant identifier for each message. To enable this, both multitenant mode (--multitenant) and the write-to-log-file option (--log-file) are required. If both --multitenant and --log-file are not passed when starting up ACA-Py, then it will use the default_logging_config.ini config (backward compatible) and not log at a per-tenant level.

"},{"location":"testing/Logging/#command-line-arguments","title":"Command Line Arguments","text":"
  • --log-level - The log level to log on std out
  • --log-file - Enables writing of logs to a file. The provided value becomes the path of the file to log to. If no value or an empty string is provided, then ACA-Py will try to get the path from the config file
  • --log-config - Specifies a custom logging configuration file

Example:

./bin/aca-py start --log-level debug --log-file acapy.log --log-config aries_cloudagent.config:default_per_tenant_logging_config.ini\n\n./bin/aca-py start --log-level debug --log-file --multitenant --log-config ./aries_cloudagent/config/default_per_tenant_logging_config.yml\n
"},{"location":"testing/Logging/#environment-variables","title":"Environment Variables","text":"

The log level can be configured using the environment variable ACAPY_LOG_LEVEL. The log file can be set by ACAPY_LOG_FILE. The log config can be set by ACAPY_LOG_CONFIG.

Example:

ACAPY_LOG_LEVEL=info ACAPY_LOG_FILE=./acapy.log ACAPY_LOG_CONFIG=./acapy_log.ini ./bin/aca-py start\n
"},{"location":"testing/Logging/#acapy-config-file","title":"Acapy Config File","text":"

The following parameters can be used in a configuration file like this:

log-level: WARNING\ndebug-connections: false\ndebug-presentations: false\n

Warning: debug-connections and debug-presentations must not be used in a production environment, as they also log credential claim values. Both parameters are independent of the log level, which means that even if log-level is set to WARNING, connections and presentations will still be logged as if at debug level.

"},{"location":"testing/Logging/#log-config-file","title":"Log config file","text":"

The path to the config file is provided via --log-config.

Find an example in default_logging_config.ini.

You can find a more detailed description in the logging documentation.

For per tenant logging, find an example in default_per_tenant_logging_config.ini, which sets up TimedRotatingFileMultiProcessHandler and StreamHandler handlers. The custom TimedRotatingFileMultiProcessHandler supports cleaning up logs by time, maintaining backup logs, and a custom JSON formatter. Its arguments, such as file name, when, interval, and backupCount, can be passed as args=('acapy.log', 'd', 7, 1,) (also shown below). Note: a backupCount of 0 means all backup log files will be retained and never deleted. More details about these attributes can be found here.

[loggers]\nkeys=root\n\n[handlers]\nkeys=stream_handler, timed_file_handler\n\n[formatters]\nkeys=formatter\n\n[logger_root]\nlevel=ERROR\nhandlers=stream_handler, timed_file_handler\n\n[handler_stream_handler]\nclass=StreamHandler\nlevel=DEBUG\nformatter=formatter\nargs=(sys.stderr,)\n\n[handler_timed_file_handler]\nclass=logging.handlers.TimedRotatingFileMultiProcessHandler\nlevel=DEBUG\nformatter=formatter\nargs=('acapy.log', 'd', 7, 1,)\n\n[formatter_formatter]\nformat=%(asctime)s %(wallet_id)s %(levelname)s %(pathname)s:%(lineno)d %(message)s\n

For DictConfig (dict logging config file), find an example in default_per_tenant_logging_config.yml, with the same attributes as the default_per_tenant_logging_config.ini file.

version: 1\nformatters:\n  default:\n    format: '%(asctime)s %(wallet_id)s %(levelname)s %(pathname)s:%(lineno)d %(message)s'\nhandlers:\n  console:\n    class: logging.StreamHandler\n    level: DEBUG\n    formatter: default\n    stream: ext://sys.stderr\n  rotating_file:\n    class: logging.handlers.TimedRotatingFileMultiProcessHandler\n    level: DEBUG\n    filename: 'acapy.log'\n    when: 'd'\n    interval: 7\n    backupCount: 1\n    formatter: default\nroot:\n  level: INFO\n  handlers:\n    - console\n    - rotating_file\n
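
As a rough sketch of what happens when such a file is consumed: a DictConfig is just a standard Python logging dictionary config, so loading it looks conceptually like the snippet below (assuming PyYAML is available). Note that the custom TimedRotatingFileMultiProcessHandler class must be importable for the handler reference to resolve; ACA-Py wires that up internally when you pass --log-config.

import logging.config\n\nimport yaml  # assumption: PyYAML is installed\n\n# Load the dict-style logging config and hand it to the standard library\nwith open(\"default_per_tenant_logging_config.yml\") as f:\n    config = yaml.safe_load(f)\n\nlogging.config.dictConfig(config)\nlogging.getLogger(__name__).info(\"logging configured\")\n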
"},{"location":"testing/Troubleshooting/","title":"Troubleshooting Aries Cloud Agent Python","text":"

This document contains some troubleshooting information that contributors to the community think may be helpful. Most of the content here assumes the reader has gotten started with ACA-Py and has arrived here because of an issue that came up in their use of ACA-Py.

Contributions (via pull request) to this document are welcome. Topics added here will mostly come from reported issues that contributors think would be helpful to the larger community.

"},{"location":"testing/Troubleshooting/#table-of-contents","title":"Table of Contents","text":"
  • Unable to Connect to Ledger
  • Local ledger running?
  • Any Firewalls
  • Damaged, Unpublishable Revocation Registry
"},{"location":"testing/Troubleshooting/#unable-to-connect-to-ledger","title":"Unable to Connect to Ledger","text":"

The most common issue hit by first time users is getting an error on startup \"unable to connect to ledger\". Here is a list of things to check when you see that error.

"},{"location":"testing/Troubleshooting/#local-ledger-running","title":"Local ledger running?","text":"

Unless you specify via startup parameters or environment variables that you are using a public Hyperledger Indy ledger, ACA-Py assumes that you are running a local ledger -- an instance of von-network. If that is the case, have you started your local ledger, and did it start up properly? Things to check:

  • Any errors in the startup of von-network?
  • Is the von-network webserver (usually at http://localhost:9000) accessible? If so, can you click through and see the Genesis File?
  • Do you even need a local ledger? If not, you can use a public sandbox ledger, such as the BCovrin Test ledger, likely by just prefacing your ACA-Py command with LEDGER_URL=http://test.bcovrin.vonx.io. For example, when running the Alice-Faber demo in the demo folder, you can run the Faber agent using the command: LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber
"},{"location":"testing/Troubleshooting/#any-firewalls","title":"Any Firewalls","text":"

Do you have any firewalls in play that might be blocking the ports that are used by the ledger, notably 9701-9708? To access a ledger, the ACA-Py instance must be able to reach those ports of the ledger, regardless of whether the ledger is local or remote.
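
As a quick way to check reachability from the machine running ACA-Py, a small sketch like the following probes the node ports (adjust the host if your ledger is remote):

import socket\n\n# Probe the Indy node ports (9701-9708) used by a von-network ledger\nfor port in range(9701, 9709):\n    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:\n        s.settimeout(2)\n        status = \"open\" if s.connect_ex((\"localhost\", port)) == 0 else \"blocked/closed\"\n        print(f\"port {port}: {status}\")\n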

"},{"location":"testing/Troubleshooting/#damaged-unpublishable-revocation-registry","title":"Damaged, Unpublishable Revocation Registry","text":"

We have discovered that in the ACA-Py AnonCreds implementation, it is possible to get into a state where publishing updates to a Revocation Registry (RevReg) is impossible. This can happen when ACA-Py starts to publish an update to the RevReg, but the write transaction to the Hyperledger Indy ledger fails for some reason. When a credential revocation is published, ACA-Py (via indy-sdk or askar/credx) updates the revocation state in the wallet as well as on the ledger. The revocation state is dependent on whatever the previous revocation state is/was, so if the ledger and wallet are mismatched, the publish will fail. (Andrew's PR #1804 (merged) should mitigate this, but probably won't completely eliminate it.)

For example, in a case we've seen, the write RevRegEntry transaction failed at the ledger because there was a problem with accepting the TAA (Transaction Author Agreement). Once the error occurred, the RevReg state held by the ACA-Py agent and the RevReg state on the ledger were different. Even after the ability to write to the ledger was restored, the RevReg could still not be published because of the difference in RevReg state. Such a situation can now be corrected.

To address this issue, some new endpoints were added to ACA-Py in Release 0.7.4, as follows:

  • GET /revocation/registry/<id>/issued - counts of issued/revoked credentials within a registry
  • GET /revocation/registry/<id>/issued/details - details of all credentials issued/revoked within a registry
  • GET /revocation/registry/<id>/issued/indy_recs - calculated rev_reg_delta from the ledger
  • This is used to compare ledger revoked vs wallet revoked credentials, which is essentially the state of the RevReg on the ledger and in ACA-Py. Where there is a difference, we have an error.
  • PUT /revocation/registry/<id>/fix-revocation-entry-state - publish an update to the RevReg state on the ledger to bring it into alignment with what is in the ACA-Py instance.
  • There is a boolean parameter (apply_ledger_update) to control whether the ledger entry actually gets published, so, if you are so inclined, you can call the endpoint to see what the transaction would be before you actually do a ledger update. This will return:
    • rev_reg_delta - same as the \".../indy_recs\" endpoint
    • accum_calculated - transaction to write to ledger
    • accum_fixed - If apply_ledger_update, the transaction actually written to the ledger
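
As a sketch of how these endpoints might be used together (the admin URL and registry id below are placeholders, and this assumes no admin API key is configured), you could preview the fix before applying it:

import requests\n\nADMIN_URL = \"http://localhost:8021\"  # placeholder: your agent's admin API base URL\nREV_REG_ID = \"<rev-reg-id>\"  # placeholder: the registry to inspect/fix\n\nbase = f\"{ADMIN_URL}/revocation/registry/{REV_REG_ID}\"\n\n# Compare the wallet's view of the registry with the ledger's view\nwallet_view = requests.get(f\"{base}/issued/details\").json()\nledger_view = requests.get(f\"{base}/issued/indy_recs\").json()\n\n# Dry run: compute the correcting transaction without writing to the ledger\npreview = requests.put(\n    f\"{base}/fix-revocation-entry-state\",\n    params={\"apply_ledger_update\": \"false\"},\n).json()\nprint(preview.get(\"accum_calculated\"))\n\n# If the preview looks right, publish the correcting entry\nfixed = requests.put(\n    f\"{base}/fix-revocation-entry-state\",\n    params={\"apply_ledger_update\": \"true\"},\n).json()\n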

Note that there is (currently) a backlog item to prevent the wallet and ledger from getting out of sync (e.g. don't update the ACA-Py RevReg state if the ledger write fails), but even after that change is made, this capability will be retained for use if needed.

We originally ran into this due to the TAA acceptance getting lost when switching to multi-ledger (as described here). Note that this is one way this \"out of sync\" scenario can occur, but there may be others.

We added an integration test that demonstrates/tests this issue here.

To run the scenario either manually or using the integration tests, you can do the following:

  • Start von-network in TAA mode:
  • ./manage start --taa-sample --logs
  • Start the tails server as usual:
  • ./manage start --logs
  • To run the scenario manually, start faber and let the agent know it needs to TAA-accept before doing any ledger writes:
  • ./run_demo faber --revocation --taa-accept, and then you can run through all the transactions using the Swagger page.
  • To run the scenario via an integration test, run:
  • ./run_bdd -t @taa_required
"},{"location":"testing/UnitTests/","title":"ACA-Py Unit Tests","text":"

The following covers the Unit Testing framework in ACA-Py, how to run the tests, and how to add unit tests.

This video is a presentation of the material covered in this document by developer @shaangill025.

"},{"location":"testing/UnitTests/#running-unit-tests-in-aca-py","title":"Running unit tests in ACA-Py","text":"
  • ./scripts/run_tests
  • ./scripts/run_tests aries_cloudagent/protocols/out_of_band/v1_0/tests
  • ./scripts/run_tests_indy includes Indy-specific tests
"},{"location":"testing/UnitTests/#pytest","title":"Pytest","text":"

Example: aries_cloudagent/core/tests/test_event_bus.py

@pytest.fixture\ndef event_bus():\n    yield EventBus()\n\n\n@pytest.fixture\ndef profile():\n    yield async_mock.MagicMock()\n\n\n@pytest.fixture\ndef event():\n    event = Event(topic=\"anything\", payload=\"payload\")\n    yield event\n\nclass MockProcessor:\n    def __init__(self):\n        self.profile = None\n        self.event = None\n\n    async def __call__(self, profile, event):\n        self.profile = profile\n        self.event = event\n\n\n@pytest.fixture\ndef processor():\n    yield MockProcessor()\n
def test_sub_unsub(event_bus: EventBus, processor):\n    \"\"\"Test subscribe and unsubscribe.\"\"\"\n    event_bus.subscribe(re.compile(\".*\"), processor)\n    assert event_bus.topic_patterns_to_subscribers\n    assert event_bus.topic_patterns_to_subscribers[re.compile(\".*\")] == [processor]\n    event_bus.unsubscribe(re.compile(\".*\"), processor)\n    assert not event_bus.topic_patterns_to_subscribers\n

From aries_cloudagent/core/event_bus.py

class EventBus:\n    def __init__(self):\n        self.topic_patterns_to_subscribers: Dict[Pattern, List[Callable]] = {}\n\n    def subscribe(self, pattern: Pattern, processor: Callable):\n        if pattern not in self.topic_patterns_to_subscribers:\n            self.topic_patterns_to_subscribers[pattern] = []\n        self.topic_patterns_to_subscribers[pattern].append(processor)\n\n    def unsubscribe(self, pattern: Pattern, processor: Callable):\n        if pattern in self.topic_patterns_to_subscribers:\n            try:\n                index = self.topic_patterns_to_subscribers[pattern].index(processor)\n            except ValueError:\n                return\n            del self.topic_patterns_to_subscribers[pattern][index]\n            if not self.topic_patterns_to_subscribers[pattern]:\n                del self.topic_patterns_to_subscribers[pattern]\n
@pytest.mark.asyncio\nasync def test_sub_notify(event_bus: EventBus, profile, event, processor):\n    \"\"\"Test subscriber receives event.\"\"\"\n    event_bus.subscribe(re.compile(\".*\"), processor)\n    await event_bus.notify(profile, event)\n    assert processor.profile == profile\n    assert processor.event == event\n
async def notify(self, profile: \"Profile\", event: Event):\n    partials = []\n    for pattern, subscribers in self.topic_patterns_to_subscribers.items():\n        match = pattern.match(event.topic)\n\n        if not match:\n            continue\n\n        for subscriber in subscribers:\n            partials.append(\n                partial(\n                    subscriber,\n                    profile,\n                    event.with_metadata(EventMetadata(pattern, match)),\n                )\n            )\n\n    for processor in partials:\n        try:\n            await processor()\n        except Exception:\n            LOGGER.exception(\"Error occurred while processing event\")\n
"},{"location":"testing/UnitTests/#asynctest","title":"asynctest","text":"

From: aries_cloudagent/protocols/didexchange/v1_0/tests/test_manager.py

class TestDidExchangeManager(AsyncTestCase, TestConfig):\n    async def setUp(self):\n        self.responder = MockResponder()\n\n        self.oob_mock = async_mock.MagicMock(\n            clean_finished_oob_record=async_mock.AsyncMock(return_value=None)\n        )\n\n        self.route_manager = async_mock.MagicMock(RouteManager)\n        ...\n        self.profile = InMemoryProfile.test_profile(\n            {\n                \"default_endpoint\": \"http://aries.ca/endpoint\",\n                \"default_label\": \"This guy\",\n                \"additional_endpoints\": [\"http://aries.ca/another-endpoint\"],\n                \"debug.auto_accept_invites\": True,\n                \"debug.auto_accept_requests\": True,\n                \"multitenant.enabled\": True,\n                \"wallet.id\": True,\n            },\n            bind={\n                BaseResponder: self.responder,\n                OobMessageProcessor: self.oob_mock,\n                RouteManager: self.route_manager,\n                ...\n            },\n        )\n        ...\n\n    async def test_receive_invitation_no_auto_accept(self):\n        async with self.profile.session() as session:\n            mediation_record = MediationRecord(\n                role=MediationRecord.ROLE_CLIENT,\n                state=MediationRecord.STATE_GRANTED,\n                connection_id=self.test_mediator_conn_id,\n                routing_keys=self.test_mediator_routing_keys,\n                endpoint=self.test_mediator_endpoint,\n            )\n            await mediation_record.save(session)\n            with async_mock.patch.object(\n                self.multitenant_mgr, \"get_default_mediator\"\n            ) as mock_get_default_mediator:\n                mock_get_default_mediator.return_value = mediation_record\n                invi_rec = await self.oob_manager.create_invitation(\n                    my_endpoint=\"testendpoint\",\n                    hs_protos=[HSProto.RFC23],\n                )\n\n                invitee_record = await self.manager.receive_invitation(\n                    invi_rec.invitation,\n                    auto_accept=False,\n                )\n                assert invitee_record.state == ConnRecord.State.INVITATION.rfc23\n
async def receive_invitation(\n    self,\n    invitation: OOBInvitationMessage,\n    their_public_did: Optional[str] = None,\n    auto_accept: Optional[bool] = None,\n    alias: Optional[str] = None,\n    mediation_id: Optional[str] = None,\n) -> ConnRecord:\n    ...\n    accept = (\n        ConnRecord.ACCEPT_AUTO\n        if (\n            auto_accept\n            or (\n                auto_accept is None\n                and self.profile.settings.get(\"debug.auto_accept_invites\")\n            )\n        )\n        else ConnRecord.ACCEPT_MANUAL\n    )\n    service_item = invitation.services[0]\n    # Create connection record\n    conn_rec = ConnRecord(\n        invitation_key=(\n            DIDKey.from_did(service_item.recipient_keys[0]).public_key_b58\n            if isinstance(service_item, OOBService)\n            else None\n        ),\n        invitation_msg_id=invitation._id,\n        their_label=invitation.label,\n        their_role=ConnRecord.Role.RESPONDER.rfc23,\n        state=ConnRecord.State.INVITATION.rfc23,\n        accept=accept,\n        alias=alias,\n        their_public_did=their_public_did,\n        connection_protocol=DIDX_PROTO,\n    )\n\n    async with self.profile.session() as session:\n        await conn_rec.save(\n            session,\n            reason=\"Created new connection record from invitation\",\n            log_params={\n                \"invitation\": invitation,\n                \"their_role\": ConnRecord.Role.RESPONDER.rfc23,\n            },\n        )\n\n        # Save the invitation for later processing\n        ...\n\n    return conn_rec\n
"},{"location":"testing/UnitTests/#other-details","title":"Other details","text":"
  • Error catching
  with self.assertRaises(DIDXManagerError) as ctx:\n      ...\n  assert \" ... error ...\" in str(ctx.exception)\n
  • function.assert_called_once_with(parameters) and function.assert_called_once() (see the sketch after this list)

  • pytest.mark markers configured in setup.cfg can be applied at the function or class level. Example: @pytest.mark.askar

  • Code coverage
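
A minimal illustration of those call assertions, using the standard library's unittest.mock (the mock and its arguments here are hypothetical):

from unittest import mock\n\nfunction = mock.Mock()\nfunction(\"conn-id-123\", auto_accept=False)\n\nfunction.assert_called_once()  # exactly one call happened\nfunction.assert_called_once_with(\"conn-id-123\", auto_accept=False)  # ...with these arguments\n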

"}]} \ No newline at end of file diff --git a/main/sitemap.xml.gz b/main/sitemap.xml.gz index e1aaab0483693b1c04301022307ec5beddecf75d..9bc8fb4157610efce3afa2ce08d5cee57915251c 100644 GIT binary patch delta 12 Tcmb=gXOr*d;8=HVB3mT@8eRlT delta 12 Tcmb=gXOr*d;8=TTB3mT@8Yu)m