diff --git a/main/demo/ReusingAConnection/index.html b/main/demo/ReusingAConnection/index.html
index 5e5c19551d..33cfd9404e 100644
--- a/main/demo/ReusingAConnection/index.html
+++ b/main/demo/ReusingAConnection/index.html
@@ -2191,30 +2191,24 @@

Reusing a Connection
./run_demo faber --reuse-connections --events
+
+

To run faber using a did_peer and reusable connections:

+
DEMO_EXTRA_AGENT_ARGS="[\"--emit-did-peer-2\"]" ./run_demo faber --reuse-connections --events
+
+

To run this demo using a multi-use invitation (from Faber):

+
DEMO_EXTRA_AGENT_ARGS="[\"--emit-did-peer-2\"]" ./run_demo faber --reuse-connections --multi-use-invitations --events
+
diff --git a/main/demo/index.html b/main/demo/index.html
index 538225dfb0..25a0558c72 100644
--- a/main/demo/index.html
+++ b/main/demo/index.html
@@ -2673,18 +2673,22 @@

 Revocation This will use the new DID Exchange protocol when establishing connections between the agents, rather than the older Connection protocol. There is no other effect on the operation of the agents.
-Note that you can't (currently) use the DID Exchange protocol to connect with any of the available mobile agents.
+With DID Exchange, you can also enable use of the inviter's public DID for invitations, multi-use invitations, and connection re-use:
-### Endorser
-
-This is described in [Endorser.md](Endorser.md)
+- `--public-did-connections` - use the inviter's public DID in invitations, and allow use of implicit invitations
+- `--reuse-connections` - support connection re-use (invitee will reuse an existing connection if it uses the same DID as in the new invitation)
+- `--multi-use-invitations` - inviter will issue multi-use invitations
-### Run Indy-SDK Backend
+### Endorser
-This runs using the older (and not recommended) indy-sdk libraries instead of [Aries Askar](https://github.com/hyperledger/aries-askar):
+This is described in [Endorser.md](Endorser.md)
-```bash
-./run_demo faber --wallet-type indy
+### Run Indy-SDK Backend
+
+This runs using the older (and not recommended) indy-sdk libraries instead of [Aries Askar](https://github.com/hyperledger/aries-askar):
+
+```bash
+./run_demo faber --wallet-type indy

Mediation

To enable mediation, run the alice or faber demo with the --mediation option:
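For example, using the same run_demo script as the other demo commands:

```bash
# Run the Faber demo agent with mediation enabled
./run_demo faber --mediation

# ...or run the Alice demo agent with mediation enabled
./run_demo alice --mediation
```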

diff --git a/main/search/search_index.json b/main/search/search_index.json
index 760437a516..078ed5f117 100644
--- a/main/search/search_index.json
+++ b/main/search/search_index.json
@@ -1 +1 @@
-{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Hyperledger Aries Cloud Agent - Python","text":"

An easy-to-use Aries agent for building SSI services using any language that supports sending/receiving HTTP requests.

Full access to an organized set of all of the ACA-Py documents is available at https://aca-py.org. Check it out! It's much easier to navigate than this GitHub repo for reading the documentation.

"},{"location":"#overview","title":"Overview","text":"

Hyperledger Aries Cloud Agent Python (ACA-Py) is a foundation for building Verifiable Credential (VC) ecosystems. It operates in the second and third layers of the Trust Over IP framework (PDF) using DIDComm messaging and Hyperledger Aries protocols. The "cloud" in the name means that ACA-Py runs on servers (cloud, enterprise, IoT devices, and so forth), and is not designed to run on mobile devices.

ACA-Py is built on the Aries concepts and features that make up Aries Interop Profile (AIP) 2.0. ACA-Py's supported Aries protocols include, most importantly, protocols for issuing, verifying, and holding verifiable credentials using both the Hyperledger AnonCreds verifiable credential format, and the W3C Standard Verifiable Credential Data Model format using JSON-LD with LD-Signatures and BBS+ Signatures. Coming soon -- issuing and presenting Hyperledger AnonCreds verifiable credentials using the W3C Standard Verifiable Credential Data Model format.

To use ACA-Py you create a business logic controller that \"talks to\" an ACA-Py instance (sending HTTP requests and receiving webhook notifications), and ACA-Py handles the Aries and DIDComm protocols and related functionality. Your controller can be built in any language that supports making and receiving HTTP requests; knowledge of Python is not needed. Together, this means you can focus on building VC solutions using familiar web development technologies, instead of having to learn the nuts and bolts of low-level cryptography and Trust over IP-type Aries protocols.

This checklist-style overview document provides a full list of the features in ACA-Py. The following is a list of some of the core features needed for a production deployment, with a link to detailed information about the capability.

"},{"location":"#multi-tenant","title":"Multi-Tenant","text":"

ACA-Py supports \"multi-tenant\" scenarios. In these scenarios, one (scalable) instance of ACA-Py uses one database instance, and are together capable of managing separate secure storage (for private keys, DIDs, credentials, etc.) for many different actors. This enables (for example) an \"issuer-as-a-service\", where an enterprise may have many VC issuers, each with different identifiers, using the same instance of ACA-Py to interact with VC holders as required. Likewise, an ACA-Py instance could be a \"cloud wallet\" for many holders (e.g. people or organizations) that, for whatever reason, cannot use a mobile device for a wallet. Learn more about multi-tenant deployments here.

"},{"location":"#mediator-service","title":"Mediator Service","text":"

Startup options allow the use of ACA-Py as an Aries mediator using core Aries protocols to coordinate its mediation role. Such an ACA-Py instance receives, stores and forwards messages to Aries agents that (for example) lack an addressable endpoint on the Internet, such as a mobile wallet. A live instance of a public mediator based on ACA-Py is available here from Indicio Technologies. Learn more about deploying a mediator here. See the Aries Mediator Service for a "best practices" configuration of an Aries mediator.
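A minimal sketch of such a configuration (assuming the --open-mediation startup flag; transport and wallet values are placeholders):

```bash
# Sketch: run ACA-Py as a mediator that grants mediation requests,
# using WebSockets so mobile agents can hold a return route open.
aca-py start \
  --inbound-transport ws 0.0.0.0 8030 \
  --outbound-transport ws \
  --endpoint ws://localhost:8030 \
  --wallet-type askar --wallet-name mediator --wallet-key changeme \
  --auto-provision \
  --open-mediation
```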

"},{"location":"#indy-transaction-endorsing","title":"Indy Transaction Endorsing","text":"

ACA-Py supports a Transaction Endorsement protocol, for agents that don't have write access to an Indy ledger. Endorser support is documented here.
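As a hedged sketch only (the endorser-related flag names here are assumptions drawn from the Endorser documentation; confirm them there before use):

```bash
# Sketch: start an instance in the Endorser role so it can endorse
# ledger transactions for author agents (all values are placeholders).
aca-py start \
  --genesis-url http://localhost:9000/genesis \
  --wallet-type askar --wallet-name endorser --wallet-key changeme \
  --auto-provision \
  --endorser-protocol-role endorser \
  --auto-endorse-transactions
```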

"},{"location":"#scaled-deployments","title":"Scaled Deployments","text":"

ACA-Py supports deployments in scaled environments, such as Kubernetes, where ACA-Py and its storage components can be horizontally scaled as needed to handle the load.

"},{"location":"#vc-api-endpoints","title":"VC-API Endpoints","text":"

A set of endpoints conforming to the vc-api specification is included to manage W3C credentials and presentations. They are documented here and a postman demo is available here.
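As an illustrative sketch (assuming the routes are mounted under /vc following the vc-api paths; the admin URL, API key, and credential body are placeholders):

```bash
# Sketch: ask the agent to issue a W3C credential via the vc-api style endpoint.
curl -X POST "http://localhost:8031/vc/credentials/issue" \
  -H "Content-Type: application/json" \
  -H "X-API-KEY: my-admin-api-key" \
  -d '{
        "credential": {
          "@context": ["https://www.w3.org/2018/credentials/v1"],
          "type": ["VerifiableCredential"],
          "issuer": "did:example:issuer",
          "issuanceDate": "2024-01-01T00:00:00Z",
          "credentialSubject": {"id": "did:example:holder"}
        },
        "options": {}
      }'
```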

"},{"location":"#example-uses","title":"Example Uses","text":"

The business logic you use with ACA-Py is limited only by your imagination. Possible applications include:

"},{"location":"#getting-started","title":"Getting Started","text":"

For those new to SSI, Aries and ACA-Py, there are a couple of Linux Foundation edX courses that provide a good starting point.

The latter is the most useful for developers wanting to get a solid basis in using ACA-Py and other Aries Frameworks.

Also included here is a much more concise (but less maintained) Getting Started Guide that will take you from knowing next to nothing about decentralized identity to developing Aries-based business apps and services. You'll run an Indy ledger (with no ramp-up time), ACA-Py apps and developer-oriented demos. The guide has a table of contents so you can skip the parts you already know.

"},{"location":"#understanding-the-architecture","title":"Understanding the Architecture","text":"

There is an architectural deep dive webinar presented by the ACA-Py team, and slides from the webinar are also available. The picture below gives a quick overview of the architecture, showing an instance of ACA-Py, a controller and the interfaces between the controller and ACA-Py, and the external paths to other agents and public ledgers on the Internet.

You can extend ACA-Py using plug-ins, which can be loaded at runtime. Plug-ins are mentioned in the webinar and are described in more detail here. An ever-expanding set of ACA-Py plugins can be found in the Aries ACA-Py Plugins repository. Check them out -- it might have the very plugin you need!

"},{"location":"#installation-and-usage","title":"Installation and Usage","text":"

Use the \"install and go\" page for developers if you are comfortable with Trust over IP and Aries concepts. ACA-Py can be run with Docker without installation (highly recommended), or can be installed from PyPi. In the /demo directory there is a full set of demos for developers to use in getting started, and the demo read me is a great starting point for developers to use an \"in-browser\" approach to run a zero-install example. The Read the Docs overview is also a way to understand the internal modules and APIs that make up an ACA-Py instance.

If you would like to develop on ACA-Py locally, note that we use Poetry for dependency management and packaging. If you are unfamiliar with Poetry, please see our cheat sheet.

"},{"location":"#about-the-aca-py-admin-api","title":"About the ACA-Py Admin API","text":"

The overview of ACA-Py's API is a great starting place for learning about the ACA-Py API when you are starting to build your own controller.

An ACA-Py instance puts together an OpenAPI-documented REST interface based on the protocols that are loaded. This is used by a controller application (written in any language) to manage the behavior of the agent. The controller can initiate actions (e.g. issuing a credential) and can respond to agent events (e.g. sending a presentation request after a connection is accepted). Agent events are delivered to the controller as webhooks to a configured URL.

Technical note: the administrative API exposed by the agent for the controller to use must be protected with an API key (using the --admin-api-key command line arg) or deliberately left unsecured using the --admin-insecure-mode command line arg. The latter should not be used other than in development if the API is not otherwise secured.
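A sketch of the pattern described above, with placeholder ports, key, and URLs: the agent exposes a protected admin API and posts webhook events to the controller, and the controller calls the REST API with the API key on each request:

```bash
# Sketch: protect the admin API with a key and deliver events to a controller.
aca-py start \
  --inbound-transport http 0.0.0.0 8030 \
  --outbound-transport http \
  --endpoint http://localhost:8030 \
  --wallet-type askar --wallet-name demo --wallet-key changeme \
  --auto-provision \
  --admin 0.0.0.0 8031 \
  --admin-api-key my-admin-api-key \
  --webhook-url http://localhost:3000/webhooks

# The controller passes the key on every admin API request.
curl -s -H "X-API-KEY: my-admin-api-key" http://localhost:8031/connections
```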

"},{"location":"#troubleshooting","title":"Troubleshooting","text":"

There are a number of resources for getting help with ACA-Py and troubleshooting any problems you might run into. The Troubleshooting document contains some guidance about issues that have been experienced in the past. Feel free to submit PRs to supplement the troubleshooting document! Searching the ACA-Py GitHub issues may uncover challenges you are having that others have experienced, often with solutions. As well, there is the "aries-cloudagent-python" channel on the Hyperledger Discord chat server (invitation here).

"},{"location":"#credit","title":"Credit","text":"

The initial implementation of ACA-Py was developed by the Government of British Columbia's Digital Trust Team in Canada. To learn more about what's happening with decentralized identity and digital trust in British Columbia, check out the BC Digital Trust website.

See the MAINTAINERS.md file for a list of the current ACA-Py maintainers, and the guidelines for becoming a Maintainer. We'd love to have you join the team if you are willing and able to carry out the duties of a Maintainer.

"},{"location":"#contributing","title":"Contributing","text":"

Pull requests are welcome! Please read our contributions guide and submit your PRs. We enforce developer certificate of origin (DCO) commit signing -- guidance on this is available. We also welcome issues submitted about problems you encounter in using ACA-Py.

"},{"location":"#license","title":"License","text":"

Apache License Version 2.0

"},{"location":"CHANGELOG/","title":"Aries Cloud Agent Python Changelog","text":""},{"location":"CHANGELOG/#0120rc2","title":"0.12.0rc2","text":""},{"location":"CHANGELOG/#march-5-2024","title":"March 5, 2024","text":"

Release 0.12.0 is a relatively large release, but currently with no breaking changes. We expect there will be breaking changes (at least in the handling of endorsement) before the 0.12.0 release is finalized, hence the minor version update.

The rc0 release candidate introduced a regression via [PR #2705] that has been reverted in rc1 and later via [PR #2789]. Further investigation is needed to determine how to accomplish the goal of [PR #2705] ("feat: inject profile") without the regression. The rc2 and later releases address a regression related to the sending of a revocation notification from the issuer to the holder of a newly revoked credential, fixed in [PR #2814].

Much progress has been made on did:peer support in this release, with the handling of inbound DID Peer 1 added, and inbound and outbound support for DID Peer 2 and 4. The goal of that work is to eliminate the remaining places where "unqualified" DIDs remain, and to enable "connection reuse" in the Out of Band protocol when using DID Peer 2 and 4 DIDs. Work continues in supporting ledger agnostic AnonCreds, and the new Hyperledger AnonCreds Rust library. Attention was also given in the release to the handling of JSON-LD Data Integrity Verifiable Credentials, with more expected before the release is finalized. In addition to those updates, there were fixes and improvements across the codebase.

The most visible change in this release is the re-organization of the ACA-Py documentation, moving the vast majority of the documents to the folders within the docs folder -- a long overdue change that will allow us to soon publish the documents on https://aca-py.org directly from the ACA-Py repository, rather than from the separate aries-acapy-docs currently being used.

A big developer improvement is a revamping of the test handling to eliminate ~2500 warnings that were previously generated in the test suite. Nice job @ff137!

"},{"location":"CHANGELOG/#0120rc2-breaking-changes","title":"0.12.0rc2 Breaking Changes","text":"

There are no breaking changes in 0.12.0rc2.

"},{"location":"CHANGELOG/#0120rc2-categorized-list-of-pull-requests","title":"0.12.0rc2 Categorized List of Pull Requests","text":""},{"location":"CHANGELOG/#0110","title":"0.11.0","text":""},{"location":"CHANGELOG/#november-24-2023","title":"November 24, 2023","text":"

Release 0.11.0 is a relatively large release of new features, fixes, and internal updates. 0.11.0 is planned to be the last significant update before we begin the transition to using the ledger agnostic AnonCreds Rust in a release that is expected to bring Admin/Controller API changes. We plan to do patches to the 0.11.x branch while the transition is made to using [Anoncreds Rust].

An important addition to ACA-Py is support for signing and verifying SD-JWT verifiable credentials. We expect this to be the first of the changes to extend ACA-Py to support OpenID4VC protocols.

This release and Release 0.10.5 contain a high priority fix to correct an issue with the handling of the JSON-LD presentation verifications, where the status of the verification of the presentation.proof in the Verifiable Presentation was not included when determining the verification value (true or false) of the overall presentation. A forthcoming security advisory will cover the details. Anyone using JSON-LD presentations is recommended to upgrade to one of these versions of ACA-Py as soon as possible.

In the CI/CD realm, substantial changes were applied to the source base in switching from:

These are necessary and important modernization changes, with the latter two triggering many (largely mechanical) changes to the codebase.

"},{"location":"CHANGELOG/#0110-breaking-changes","title":"0.11.0 Breaking Changes","text":"

In addition to the impacts of the change for developers in switching from pip to Poetry, the only significant breaking change is the (overdue) transition of ACA-Py to always use the new DIDComm message type prefix, changing the DID Message prefix from the old hardcoded did:sov:BzCbsNYhMrjHiqZDTUASHg;spec to the new hardcoded https://didcomm.org value, and using the new DIDComm MIME type in place of the old. The vast majority (all?) of Aries deployments have long since been updated to accept both values, so this change just forces the use of the newer value in sending messages. In updating this, we retained the old configuration parameters most deployments were using (--emit-new-didcomm-prefix and --emit-new-didcomm-mime-type) but updated the code to set the configuration parameters to true even if the parameters were not set. See [PR #2517].

The JSON-LD verifiable credential handling of JSON-LD contexts has been updated to pre-load the base contexts into the repository code so they are not fetched at run time. This is a security best practice for JSON-LD, and prevents errors in production when, from time to time, the JSON-LD contexts are unavailable because of outages of the web servers where they are hosted. See [PR #2587].

A Problem Report message is now sent when a request for a credential is received and there is no associated Credential Exchange Record. This may happen, for example, if an issuer decides to delete a Credential Exchange Record that has not been answered for a long time, and the holder responds after the delete. See [PR #2577].

"},{"location":"CHANGELOG/#0110-categorized-list-of-pull-requests","title":"0.11.0 Categorized List of Pull Requests","text":""},{"location":"CHANGELOG/#2289-migrate-to-poetry-2436-gavinok","title":"2289 Migrate to Poetry #2436 Gavinok","text":""},{"location":"CHANGELOG/#0105","title":"0.10.5","text":""},{"location":"CHANGELOG/#november-21-2023","title":"November 21, 2023","text":"

Release 0.10.5 is a high priority patch release to correct an issue with the handling of the JSON-LD presentation verifications, where the status of the verification of the presentation.proof in the Verifiable Presentation was not included when determining the verification value (true or false) of the overall presentation. A forthcoming security advisory will cover the details.

Anyone using JSON-LD presentations is recommended to upgrade to this version of ACA-Py as soon as possible.

"},{"location":"CHANGELOG/#0105-categorized-list-of-pull-requests","title":"0.10.5 Categorized List of Pull Requests","text":""},{"location":"CHANGELOG/#0104","title":"0.10.4","text":""},{"location":"CHANGELOG/#october-9-2023","title":"October 9, 2023","text":"

Release 0.10.4 is a patch release to correct an issue with the handling of did:key routing keys in some mediator scenarios, notably with the use of [Aries Framework Kotlin]. See the details in the PR and [Issue #2531 Routing for agents behind a aca-py based mediator is broken].

Thanks to codespree for raising the issue and providing the fix.

Aries Framework Kotlin

"},{"location":"CHANGELOG/#0104-categorized-list-of-pull-requests","title":"0.10.4 Categorized List of Pull Requests","text":""},{"location":"CHANGELOG/#0103","title":"0.10.3","text":""},{"location":"CHANGELOG/#september-29-2023","title":"September 29, 2023","text":"

Release 0.10.3 is a patch release to add an upgrade process for very old versions of Aries Cloud Agent Python (circa 0.5.2). If you have a long time deployment of an issuer that uses revocation, this release could correct internal data (tags in secure storage) related to revocation registries. Details about the triggering problem can be found in [Issue #2485].

The upgrade is applied by running the following command for the ACA-Py instance to be upgraded:

./scripts/run_docker upgrade --force-upgrade --named-tag fix_issue_rev_reg

"},{"location":"CHANGELOG/#0103-categorized-list-of-pull-requests","title":"0.10.3 Categorized List of Pull Requests","text":""},{"location":"CHANGELOG/#0102","title":"0.10.2","text":""},{"location":"CHANGELOG/#september-22-2023","title":"September 22, 2023","text":"

Release 0.10.2 is a patch release for 0.10.1 that addresses three specific regressions found in deploying Release 0.10.1. The regressions are to fix:

"},{"location":"CHANGELOG/#0102-categorized-list-of-pull-requests","title":"0.10.2 Categorized List of Pull Requests","text":""},{"location":"CHANGELOG/#0101","title":"0.10.1","text":""},{"location":"CHANGELOG/#august-29-2023","title":"August 29, 2023","text":"

Release 0.10.1 contains a breaking change, an important fix for a regression introduced in 0.8.2 that impacts certain deployments, and a number of fixes and updates. Included in the updates is a significant internal reorganization of the DID and connection management code that was done to enable more flexible uses of different DID Methods, such as being able to use did:web DIDs for DIDComm messaging connections. The work also paves the way for coming updates related to support for did:peer DIDs for DIDComm. For details on the change see [PR #2409], which includes some of the best pull request documentation ever created.

Release 0.10.1 has the same contents as 0.10.0. An error on PyPi prevented the 0.10.0 release from being properly uploaded because of an existing file of the same name. We immediately released 0.10.1 as a replacement.

The regression fix is for ACA-Py deployments that use multi-use invitations but do NOT use the --auto-accept-connection-requests flag/processing. A change in 0.8.2 (PR [#2223]) suppressed an extra webhook event firing during the processing after receiving a connection request. An unexpected side effect of that change was that the subsequent webhook event also did not fire, and as a result, the controller did not get any event signalling a new connection request had been received via the multi-use invitation. The update in this release ensures the proper event fires and the controller receives the webhook.

See below for the breaking changes and a categorized list of the pull requests included in this release.

Updates in the CI/CD area include adding the publishing of a nightly container image that includes any changes in the main branch since the last nightly was published. This allows getting the "latest and greatest" code via a container image vs. having to install ACA-Py from the repository. In addition, Snyk scanning was added to the CI pipeline, and Indy SDK tests were removed from the pipeline.

"},{"location":"CHANGELOG/#0101-breaking-changes","title":"0.10.1 Breaking Changes","text":"

[#2352] is a breaking change related to the storage of presentation exchange records in ACA-Py. In previous releases, presentation exchange protocol state data records were retained in ACA-Py secure storage after the completion of protocol instances. With this release the default behavior changes to deleting those records by default, unless the --preserve-exchange-records flag is set in the configuration. This extends the use of that flag that previously applied only to issue credential records. The extension matches the initial intention of the flag -- that it cover both issue credential and present proof exchanges. The "best practices" for ACA-Py is that the controller (business logic) store any long-lasting business information needed for the service that is using the Aries Agent, and ACA-Py storage should be used only for data necessary for the operation of the agent. In particular, protocol state data should be held in ACA-Py only as long as the protocol is running (as it is needed by ACA-Py), and once a protocol instance completes, the controller should extract and store the business information from the protocol state before it is deleted from ACA-Py storage.
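For deployments that do rely on retaining exchange records, the flag is simply added at startup; a sketch (all other required startup options omitted for brevity):

```bash
# Sketch: retain completed exchange records in ACA-Py secure storage.
aca-py start --preserve-exchange-records  # plus your usual startup options
```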

"},{"location":"CHANGELOG/#0100-categorized-list-of-pull-requests","title":"0.10.0 Categorized List of Pull Requests","text":""},{"location":"CHANGELOG/#0100","title":"0.10.0","text":""},{"location":"CHANGELOG/#august-29-2023_1","title":"August 29, 2023","text":"

Release 0.10.1 has the same contents as 0.10.0. An error on PyPi prevented the 0.10.0 release from being properly uploaded because of an existing file of the same name. We immediately released 0.10.1 as a replacement.

"},{"location":"CHANGELOG/#090","title":"0.9.0","text":""},{"location":"CHANGELOG/#july-24-2023","title":"July 24, 2023","text":"

Release 0.9.0 is an important upgrade that changes (PR [#2302]) the dependency on the now archived Hyperledger Ursa project to its updated, improved replacement, AnonCreds CL-Signatures. This important change is ONLY available when using Aries Askar as the wallet type, which brings in both [Indy VDR] and the CL-Signatures via the latest version of CredX from the indy-shared-rs repository. The update is NOT available to those that are using the Indy SDK. All new deployments of ACA-Py SHOULD use Aries Askar. Further, we strongly recommend that all deployments using the Indy SDK with ACA-Py upgrade their installation to use Aries Askar and the related components using the migration scripts available. An Indy SDK to Askar migration document was added to the aca-py.org documentation site, and a deprecation warning was added to the ACA-Py startup.

The second big change in this release is that we have upgraded the primary Python version from 3.6 to 3.9 (PR [#2247]). In this case, primary means that Python 3.9 is used to run the unit and integration tests on all Pull Requests. We also do nightly runs of the main branch using Python 3.10. As of this release we have dropped Python 3.6, 3.7 and 3.8, and introduced new dependencies that are not supported in those versions of Python. For those that use the published ACA-Py container images, the upgrade should be easily handled. If you are pulling ACA-Py into your own image, or a non-containerized environment, this is a breaking change that you will need to address.

Please see the next section for all breaking changes, and the subsequent section for a categorized list of all pull requests in this release.

"},{"location":"CHANGELOG/#breaking-changes","title":"Breaking Changes","text":"

In addition to the breaking Python 3.6 to 3.9 upgrade, there are two other breaking changes that may impact some deployments.

[#2034] allows for additional flexibility in using public DIDs in invitations, and adds a restriction that \"implicit\" invitations must be proactively enabled using a flag (--requests-through-public-did). Previously, such requests would always be accepted if --auto-accept was enabled, which could lead to unexpected connections being established.

[#2170] is a change to improve message handling in the face of delivery errors when using a persistent queue implementation such as the ACA-Py Redis Plugin. If you are using the Redis plugin, you MUST upgrade to Redis Plugin Release 0.1.0 in conjunction with deploying this ACA-Py release. For those using their own persistent queue solution, see the PR [#2170] comments for information about changes you might need to make to your deployment.

"},{"location":"CHANGELOG/#categorized-list-of-pull-requests","title":"Categorized List of Pull Requests","text":""},{"location":"CHANGELOG/#082","title":"0.8.2","text":""},{"location":"CHANGELOG/#june-29-2023","title":"June 29, 2023","text":"

Release 0.8.2 contains a number of minor fixes and updates to ACA-Py, including the correction of a regression in Release 0.8.0 related to the use of plugins (see [#2255]). Highlights include making it easier to use tracing in a development environment to collect detailed performance information about what is going on within ACA-Py.

This release pulls in indy-shared-rs Release 3.3 which fixes a serious issue in AnonCreds verification, as described in issue [#2036], where the verification of a presentation with multiple revocable credentials fails when using Aries Askar and the other shared components. This issue occurs only when using Aries Askar and indy-credx Release 3.3.

An important new feature in this release is the ability to set some instance configuration settings at the tenant level of a multi-tenant deployment. See PR [#2233].

There are no breaking changes in this release.

"},{"location":"CHANGELOG/#categorized-list-of-pull-requests_1","title":"Categorized List of Pull Requests","text":""},{"location":"CHANGELOG/#081","title":"0.8.1","text":""},{"location":"CHANGELOG/#april-5-2023","title":"April 5, 2023","text":"

Version 0.8.1 is an urgent update to Release 0.8.0 to address an inability to execute the upgrade command. The upgrade command is needed for 0.8.0 Pull Request [#2116] - "UPGRADE: Fix multi-use invitation performance", which is useful for (at least) deployments of ACA-Py as a mediator. In the release, the upgrade process is revamped, and documented in Upgrading ACA-Py.

Key points about upgrading for those with production, pre-0.8.1 ACA-Py deployments:

"},{"location":"CHANGELOG/#postgres-support-with-aries-askar","title":"Postgres Support with Aries Askar","text":"

Recent changes to Aries Askar have resulted in Askar supporting Postgres version 11 and greater. If you are on Postgres 10 or earlier and want to upgrade to use Askar, you must migrate your database to Postgres 11.

We have also noted that in some container orchestration environments, such as Red Hat's OpenShift and possibly other Kubernetes distributions, deployments of Askar using Postgres versions greater than 14 do not install correctly. Please monitor [Issue #2199] for an update to this limitation. We have found that Postgres 15 does install correctly in other environments (such as in docker compose setups).

"},{"location":"CHANGELOG/#categorized-list-of-pull-requests_2","title":"Categorized List of Pull Requests","text":""},{"location":"CHANGELOG/#080","title":"0.8.0","text":""},{"location":"CHANGELOG/#march-14-2023","title":"March 14, 2023","text":"

0.8.0 is a breaking change that contains all updates since release 0.7.5. It extends the previously tagged 1.0.0-rc1 release because it is not clear when the 1.0.0 release will be finalized. Many of the PRs in this release were previously included in the 1.0.0-rc1 release. The categorized list of PRs separates those that are new from those in the 1.0.0-rc1 release candidate.

There are not a lot of new Aries Framework features in this release, as the focus has been on cleanup and optimization. The biggest addition is the inclusion with ACA-Py of a universal resolver interface, allowing an instance to have both local resolvers for some DID Methods and a call out to an external universal resolver for other DID Methods. Another significant new capability is full support for Hyperledger Indy transaction endorsement for Authors and Endorsers. A new repo aries-endorser-service has been created that is a pre-configured instance of ACA-Py for use as an Endorser service.

A recently completed feature that is outside of ACA-Py is a script to migrate existing ACA-Py storage from Indy SDK format to Aries Askar format. This enables existing deployments to switch to using the newer Aries Askar components. For details see the converter in the aries-acapy-tools repository.

"},{"location":"CHANGELOG/#container-publishing-updated","title":"Container Publishing Updated","text":"

With this release, a new automated process publishes container images in the Hyperledger container image repository. New images for the release are automatically published by the GitHub Actions workflows: publish.yml and publish-indy.yml. The actions are triggered when a release is tagged, so no manual action is needed. The images are published in the Hyperledger Package Repository under aries-cloudagent-python and a link to the packages added to the repository's main page (under "Packages"). Additional information about the container image publication process can be found in the document Container Images and Github Actions.

The ACA-Py container images are based on Python 3.6 and 3.9 slim-bullseye images, and are designed to support linux/386 (x86), linux/amd64 (x64), and linux/arm64. However, for this release, the publication of multi-architecture containers is disabled. We are working to enable that through the updating of some dependencies that lack that capability. There are two flavors of image built for each Python version. One contains only the Indy/Aries shared libraries (Aries Askar, Indy VDR and Indy Shared RS, supporting only the use of --wallet-type askar). The other (labelled indy) contains the Indy/Aries shared libraries and the Indy SDK (considered deprecated). For new deployments, we recommend using the Python 3.9 Shared Library images. For existing deployments, we recommend migrating to those images.

Those currently using the container images published by BC Gov on Docker Hub should change to use those published to the Hyperledger Package Repository under aries-cloudagent-python.

"},{"location":"CHANGELOG/#breaking-changes-and-upgrades","title":"Breaking Changes and Upgrades","text":""},{"location":"CHANGELOG/#pr-2034-implicit-connections","title":"PR #2034 -- Implicit connections","text":"

The breaking change impacts existing deployments that support implicit connections, those initiated by another agent using a Public DID for this instance instead of an explicit invitation. Such deployments need to add the configuration parameter --requests-through-public-did to continue to support that feature. The use case is that an ACA-Py instance publishes a public DID on a ledger with a DIDComm service in the DIDDoc. Other agents resolve that DID, and attempt to establish a connection with the ACA-Py instance using the service endpoint. This is called an "implicit" connection in RFC 0023 DID Exchange.
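A sketch of enabling the feature (assuming --public-invites is also required, as in current ACA-Py releases; other startup options omitted):

```bash
# Sketch: accept "implicit" connection requests made against the public DID.
aca-py start --public-invites --requests-through-public-did  # plus your usual options
```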

"},{"location":"CHANGELOG/#pr-1913-unrevealed-attributes-in-presentations","title":"PR #1913 -- Unrevealed attributes in presentations","text":"

Updates the handling of "unrevealed attributes" during verification of AnonCreds presentations, allowing them to be used in a presentation, with additional data that can be checked for unrevealed attributes. As few implementations of Aries wallets support unrevealed attributes in an AnonCreds presentation, this is unlikely to impact any deployments.

"},{"location":"CHANGELOG/#pr-2145-update-webhook-message-to-terse-form-by-default-added-startup-flag-debug-webhooks-for-full-form","title":"PR #2145 - Update webhook message to terse form by default, added startup flag --debug-webhooks for full form","text":"

The default behavior in ACA-Py has been to keep the full text of all messages in the protocol state object, and include the full protocol state object in the webhooks sent to the controller. When the messages in a protocol instance all include a very large object, the webhook may become too big to be passed via HTTP. For example, issuing a credential with a photo as one of the claims may result in a number of copies of the photo in the protocol state object and hence, very large webhooks. This change reduces the size of the webhook message by eliminating redundant data in the protocol state of the "Issue Credential" message as the default, and adds a new parameter to use the old behavior.
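To keep the previous full-payload webhooks, the flag named in the heading above is added at startup; a sketch (other options omitted):

```bash
# Sketch: emit full protocol-state webhooks rather than the terse default.
aca-py start --debug-webhooks  # plus your usual startup options
```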

"},{"location":"CHANGELOG/#upgrade-pr-2116-upgrade-fix-multi-use-invitation-performance","title":"UPGRADE PR #2116 - UPGRADE: Fix multi-use invitation performance","text":"

The way that multiuse invitations were handled in previous versions of ACA-Py caused performance to degrade over time. An update was made to add state into the tag names that eliminated the need to scan the tags when querying storage for the invitation.

If you are using multiuse invitations in your existing (pre-0.8.0) deployment of ACA-Py, you can run an upgrade to apply this change. To run the upgrade from previous versions, use the following command using the 0.8.0 version of ACA-Py, adding your wallet settings:

aca-py upgrade <other wallet config settings> --from-version=v0.7.5 --upgrade-config-path ./upgrade.yml

"},{"location":"CHANGELOG/#categorized-list-of-pull-requests_3","title":"Categorized List of Pull Requests","text":""},{"location":"CHANGELOG/#075","title":"0.7.5","text":""},{"location":"CHANGELOG/#october-26-2022","title":"October 26, 2022","text":"

0.7.5 is a patch release, primarily to add PR #1881 DID Exchange in ACA-Py 0.7.4 with explicit invitations and without auto-accept broken. A couple of other PRs were added to the release, as listed below, and in Milestone 0.7.5.

"},{"location":"CHANGELOG/#list-of-pull-requests","title":"List of Pull Requests","text":""},{"location":"CHANGELOG/#074","title":"0.7.4","text":""},{"location":"CHANGELOG/#june-30-2022","title":"June 30, 2022","text":"

Existing multitenant JWTs invalidated when a new JWT is generated: If you have a pre-existing implementation with existing Admin API authorization JWTs, invoking the endpoint to get a JWT now invalidates the existing JWT. Previously an identical JWT would be created. Please see this comment on PR #1725 for more details.

0.7.4 is a significant release focused on stability and production deployments. As the \"patch\" release number indicates, there were no breaking changes in the Admin API, but a huge volume of updates and improvements. Highlights of this release include:

In addition, there are a significant number of general enhancements, bug fixes, documentation updates and code management improvements.

This release is a reflection of the many groups stressing ACA-Py in production environments, reporting issues and the resulting solutions. We also have a very large number of contributors to ACA-Py, with this release having PRs from 22 different individuals. A big thank you to all of those using ACA-Py, raising issues and providing solutions.

"},{"location":"CHANGELOG/#major-enhancements","title":"Major Enhancements","text":"

A lot of work has been put into this release related to performance and load testing, with significant updates being made to the key "shared component" ACA-Py dependencies (Aries Askar, Indy VDR) and Indy Shared RS (including CredX). We now recommend using those components (by using --wallet-type askar in the ACA-Py startup parameters) for new ACA-Py deployments. A wallet migration tool from indy-sdk storage to Askar storage is still needed before migrating existing deployments to Askar. A big thanks to those creating/reporting on stress test scenarios, and especially the team at LISSI for creating the aries-cloudagent-loadgenerator to make load testing so easy! And of course to the core ACA-Py team for addressing the findings.

The largest enhancement is in the area of the endorsing of Hyperledger Indy ledger transactions, enabling an instance of ACA-Py to act as an Endorser for Indy authors needing endorsements to write objects to an Indy ledger. We're working on an Aries Endorser Service based on the new capabilities in ACA-Py, an Endorser to be easily operated by an organization, ideally with a controller starter kit supporting a basic human and automated approvals business workflow. Contributions welcome!

A focus towards the end of the 0.7.4 development and release cycle was on the handling of AnonCreds revocation in ACA-Py. Most importantly, a production issue was uncovered whereby an ACA-Py issuer's local Revocation Registry data could get out of sync with what was published on an Indy ledger, resulting in an inability to publish new RevRegEntry transactions -- making new revocations impossible. As a result, we have added some new endpoints to enable an update to the RevReg storage such that RevRegEntry transactions can again be published to the ledger. Other changes were added related to revocation in general and in the handling of tails files in particular.

The team has worked a lot on evolving the persistent queue (PQ) approach available in ACA-Py. We have landed on a design for the queues for inbound and outbound messages using a default in-memory implementation, and the ability to replace the default method with implementations created via an ACA-Py plugin. There are two concrete, out-of-the-box external persistent queuing solutions available for Redis and Kafka. Those ACA-Py persistent queue implementation repositories will soon be migrated to the Aries project within the Hyperledger Foundation's GitHub organization. Anyone else can implement their own queuing plugin as long as it uses the same interface.

Several new ways to control ACA-Py configurations were added, including new startup parameters, Admin API parameters to control instances of protocols, and additional web hook notifications.

A number of fixes were made to the Credential Exchange protocols, both for V1 and V2, and for both AnonCreds and W3C format VCs. Nothing new was added, and there were no changes in the APIs.

As well there were a number of internal fixes, dependency updates, documentation and demo changes, developer tools and release management updates. All the usual stuff needed for a healthy, growing codebase.

"},{"location":"CHANGELOG/#categorized-list-of-pull-requests_4","title":"Categorized List of Pull Requests","text":""},{"location":"CHANGELOG/#073","title":"0.7.3","text":""},{"location":"CHANGELOG/#january-10-2022","title":"January 10, 2022","text":"

This release includes some new AIP 2.0 features (Revocation Notification and Discover Features 2.0), a major new feature for those using Indy ledgers (multi-ledger support), a new "version upgrade" process that automates updating data in secure storage required after a new release, and a fix for a critical bug in some mediator scenarios. The release also includes several new pieces of documentation (upgrade processing, storage database information and logging) and some other documentation updates that make the ACA-Py Read The Docs site useful again. And of course, some recent bug fixes and cleanups are included.

There is a BREAKING CHANGE for those deploying ACA-Py with an external outbound queue implementation (see PR #1501). As far as we know, there is only one organization that has such an implementation and they were involved in the creation of this PR, so we are not making this release a minor or major update. However, anyone else using an external queue should be aware of the impact of this PR that is included in the release.

For those that have an existing deployment of ACA-Py with long-lasting connection records, an upgrade is needed to use RFC 434 Out of Band and the "reuse connection" feature as the invitee. In PR #1453 (details below) a performance improvement was made when finding a connection for reuse. The new approach (adding a tag to the connection to enable searching) applies only to connections made using this ACA-Py release and later, and "as-is" connections made using earlier releases of ACA-Py will not be found as reuse candidates. A new "Upgrade deployment" capability (#1557, described below) must be executed to update your deployment to add tags for all existing connections.

The Supported RFCs document has been updated to reflect the addition of the AIP 2.0 RFCs for which support was added.

The following is an annotated list of PRs in the release, including a link to each PR.

"},{"location":"CHANGELOG/#072","title":"0.7.2","text":""},{"location":"CHANGELOG/#november-15-2021","title":"November 15, 2021","text":"

A mostly maintenance release with some key updates and cleanups based on community deployments and discovery. With usage in the field increasing, we're cleaning up edge cases and issues related to volume deployments.

The most significant new feature for users of Indy ledgers is a simplified approach for transaction authors getting their transactions signed by an endorser. Transaction author controllers now do almost nothing other than configuring their instance to use an Endorser, and ACA-Py takes care of the rest. Documentation of that feature is here.

"},{"location":"CHANGELOG/#071","title":"0.7.1","text":""},{"location":"CHANGELOG/#august-31-2021","title":"August 31, 2021","text":"

A relatively minor maintenance release to address issues found since the 0.7.0 Release. Includes some cleanups of JSON-LD Verifiable Credentials and Verifiable Presentations.

"},{"location":"CHANGELOG/#070","title":"0.7.0","text":""},{"location":"CHANGELOG/#july-14-2021","title":"July 14, 2021","text":"

Another significant release, this version adds support for multiple new protocols, credential formats, and extension methods.

"},{"location":"CHANGELOG/#060","title":"0.6.0","text":""},{"location":"CHANGELOG/#february-25-2021","title":"February 25, 2021","text":"

This is a significant release of ACA-Py with several new features, as well as changes to the internal architecture in order to set the groundwork for using the new shared component libraries: indy-vdr, indy-credx, and aries-askar.

"},{"location":"CHANGELOG/#mediator-support","title":"Mediator support","text":"

While ACA-Py had previous support for a basic routing protocol, this was never fully developed or used in practice. Starting with this release, inbound and outbound connections can be established through a mediator agent using the Aries Mediator Coordination Protocol. This work was initially contributed by Adam Burdett and Daniel Bluhm of Indicio on behalf of SICPA. Read more about mediation support.

"},{"location":"CHANGELOG/#multi-tenancy-support","title":"Multi-Tenancy support","text":"

Started by BMW and completed by Animo Solutions and Anon Solutions on behalf of SICPA, this feature allows for a single ACA-Py instance to host multiple wallet instances. This can greatly reduce the resources required when many identities are being handled. Read more about multi-tenancy support.

"},{"location":"CHANGELOG/#new-connection-protocols","title":"New connection protocol(s)","text":"

In addition to the Aries 0160 Connections RFC, ACA-Py now supports the Aries DID Exchange Protocol for connection establishment and reuse, as well as the Aries Out-of-Band Protocol for representing connection invitations and other pre-connection requests.

"},{"location":"CHANGELOG/#issue-credential-v2","title":"Issue-Credential v2","text":"

This release includes an initial implementation of the Aries Issue Credential v2 protocol.

"},{"location":"CHANGELOG/#notable-changes-for-administrators","title":"Notable changes for administrators","text":""},{"location":"CHANGELOG/#notable-changes-for-plugin-writers","title":"Notable changes for plugin writers","text":"

The following are breaking changes to the internal APIs which may impact Python code extensions.

```python
from aries_cloudagent.storage.base import BaseStorage

# Open a session on the profile, then obtain the storage service
# from the session's injector.
async with profile.session() as session:
    storage = session.inject(BaseStorage)
```

"},{"location":"CHANGELOG/#056","title":"0.5.6","text":""},{"location":"CHANGELOG/#october-19-2020","title":"October 19, 2020","text":""},{"location":"CHANGELOG/#055","title":"0.5.5","text":""},{"location":"CHANGELOG/#october-9-2020","title":"October 9, 2020","text":""},{"location":"CHANGELOG/#054","title":"0.5.4","text":""},{"location":"CHANGELOG/#august-24-2020","title":"August 24, 2020","text":""},{"location":"CHANGELOG/#053","title":"0.5.3","text":""},{"location":"CHANGELOG/#july-23-2020","title":"July 23, 2020","text":""},{"location":"CHANGELOG/#052","title":"0.5.2","text":""},{"location":"CHANGELOG/#june-26-2020","title":"June 26, 2020","text":""},{"location":"CHANGELOG/#051","title":"0.5.1","text":""},{"location":"CHANGELOG/#april-23-2020","title":"April 23, 2020","text":""},{"location":"CHANGELOG/#050","title":"0.5.0","text":""},{"location":"CHANGELOG/#april-21-2020","title":"April 21, 2020","text":""},{"location":"CHANGELOG/#045","title":"0.4.5","text":""},{"location":"CHANGELOG/#march-3-2020","title":"March 3, 2020","text":""},{"location":"CHANGELOG/#044","title":"0.4.4","text":""},{"location":"CHANGELOG/#february-28-2020","title":"February 28, 2020","text":""},{"location":"CHANGELOG/#043","title":"0.4.3","text":""},{"location":"CHANGELOG/#february-26-2020","title":"February 26, 2020","text":""},{"location":"CHANGELOG/#042","title":"0.4.2","text":""},{"location":"CHANGELOG/#february-8-2020","title":"February 8, 2020","text":""},{"location":"CHANGELOG/#041","title":"0.4.1","text":""},{"location":"CHANGELOG/#january-31-2020","title":"January 31, 2020","text":""},{"location":"CHANGELOG/#040","title":"0.4.0","text":""},{"location":"CHANGELOG/#december-10-2019","title":"December 10, 2019","text":""},{"location":"CHANGELOG/#035","title":"0.3.5","text":""},{"location":"CHANGELOG/#november-1-2019","title":"November 1, 2019","text":""},{"location":"CHANGELOG/#034","title":"0.3.4","text":""},{"location":"CHANGELOG/#october-23-2019","title":"October 23, 2019","text":""},{"location":"CHANGELOG/#033","title":"0.3.3","text":""},{"location":"CHANGELOG/#september-27-2019","title":"September 27, 2019","text":""},{"location":"CHANGELOG/#032","title":"0.3.2","text":""},{"location":"CHANGELOG/#september-3-2019","title":"September 3, 2019","text":""},{"location":"CHANGELOG/#031","title":"0.3.1","text":""},{"location":"CHANGELOG/#august-15-2019","title":"August 15, 2019","text":""},{"location":"CHANGELOG/#030","title":"0.3.0","text":""},{"location":"CHANGELOG/#august-9-2019","title":"August 9, 2019","text":""},{"location":"CHANGELOG/#021","title":"0.2.1","text":""},{"location":"CHANGELOG/#july-16-2019","title":"July 16, 2019","text":""},{"location":"CHANGELOG/#020","title":"0.2.0","text":""},{"location":"CHANGELOG/#july-16-2019_1","title":"July 16, 2019","text":"

This is the first PyPI release. The history begins with the transfer of aca-py from bcgov to hyperledger.

"},{"location":"CODE_OF_CONDUCT/","title":"Hyperledger Code of Conduct","text":"

Hyperledger is a collaborative project at The Linux Foundation. It is an open-source and open community project where participants choose to work together, and in that process experience differences in language, location, nationality, and experience. In such a diverse environment, misunderstandings and disagreements happen, which in most cases can be resolved informally. In rare cases, however, behavior can intimidate, harass, or otherwise disrupt one or more people in the community, which Hyperledger will not tolerate.

A Code of Conduct is useful to define accepted and acceptable behaviors and to promote high standards of professional practice. It also provides a benchmark for self evaluation and acts as a vehicle for better identity of the organization.

This code (CoC) applies to any member of the Hyperledger community – developers, participants in meetings, teleconferences, mailing lists, conferences or functions, etc. Note that this code complements rather than replaces legal rights and obligations pertaining to any particular situation.

"},{"location":"CODE_OF_CONDUCT/#statement-of-intent","title":"Statement of Intent","text":"

Hyperledger is committed to maintain a positive work environment. This commitment calls for a workplace where participants at all levels behave according to the rules of the following code. A foundational concept of this code is that we all share responsibility for our work environment.

"},{"location":"CODE_OF_CONDUCT/#code","title":"Code","text":"
  1. Treat each other with respect, professionalism, fairness, and sensitivity to our many differences and strengths, including in situations of high pressure and urgency.

  2. Never harass or bully anyone verbally, physically or sexually.

  3. Never discriminate on the basis of personal characteristics or group membership.

  4. Communicate constructively and avoid demeaning or insulting behavior or language.

  5. Seek, accept, and offer objective work criticism, and acknowledge properly the contributions of others.

  6. Be honest about your own qualifications, and about any circumstances that might lead to conflicts of interest.

  7. Respect the privacy of others and the confidentiality of data you access.

  8. With respect to cultural differences, be conservative in what you do and liberal in what you accept from others, but not to the point of accepting disrespectful, unprofessional or unfair or unwelcome behavior or advances.

  9. Promote the rules of this Code and take action (especially if you are in a leadership position) to bring the discussion back to a more civil level whenever inappropriate behaviors are observed.

  10. Stay on topic: Make sure that you are posting to the correct channel and avoid off-topic discussions. Remember when you update an issue or respond to an email you are potentially sending to a large number of people.

  11. Step down considerately: Members of every project come and go, and the Hyperledger is no different. When you leave or disengage from the project, in whole or in part, we ask that you do so in a way that minimizes disruption to the project. This means you should tell people you are leaving and take the proper steps to ensure that others can pick up where you left off.

"},{"location":"CODE_OF_CONDUCT/#glossary","title":"Glossary","text":""},{"location":"CODE_OF_CONDUCT/#demeaning-behavior","title":"Demeaning Behavior","text":"

is acting in a way that reduces another person's dignity, sense of self-worth or respect within the community.

"},{"location":"CODE_OF_CONDUCT/#discrimination","title":"Discrimination","text":"

is the prejudicial treatment of an individual based on criteria such as: physical appearance, race, ethnic origin, genetic differences, national or social origin, name, religion, gender, sexual orientation, family or health situation, pregnancy, disability, age, education, wealth, domicile, political view, morals, employment, or union activity.

"},{"location":"CODE_OF_CONDUCT/#insulting-behavior","title":"Insulting Behavior","text":"

is treating another person with scorn or disrespect.

"},{"location":"CODE_OF_CONDUCT/#acknowledgement","title":"Acknowledgement","text":"

is a record of the origin(s) and author(s) of a contribution.

"},{"location":"CODE_OF_CONDUCT/#harassment","title":"Harassment","text":"

is any conduct, verbal or physical, that has the intent or effect of interfering with an individual, or that creates an intimidating, hostile, or offensive environment.

"},{"location":"CODE_OF_CONDUCT/#leadership-position","title":"Leadership Position","text":"

includes group Chairs, project maintainers, staff members, and Board members.

"},{"location":"CODE_OF_CONDUCT/#participant","title":"Participant","text":"

includes the following persons:

"},{"location":"CODE_OF_CONDUCT/#respect","title":"Respect","text":"

is the genuine consideration you have for someone (if only because of their status as participant in Hyperledger, like yourself), and that you show by treating them in a polite and kind way.

"},{"location":"CODE_OF_CONDUCT/#sexual-harassment","title":"Sexual Harassment","text":"

includes visual displays of degrading sexual images, sexually suggestive conduct, offensive remarks of a sexual nature, requests for sexual favors, unwelcome physical contact, and sexual assault.

"},{"location":"CODE_OF_CONDUCT/#unwelcome-behavior","title":"Unwelcome Behavior","text":"

Hard to define? Some questions to ask yourself are:

"},{"location":"CODE_OF_CONDUCT/#unwelcome-sexual-advance","title":"Unwelcome Sexual Advance","text":"

includes requests for sexual favors, and other verbal or physical conduct of a sexual nature, where:

"},{"location":"CODE_OF_CONDUCT/#workplace-bullying","title":"Workplace Bullying","text":"

is a tendency of individuals or groups to use persistent aggressive or unreasonable behavior (e.g. verbal or written abuse, offensive conduct or any interference which undermines or impedes work) against a co-worker or any professional relations.

"},{"location":"CODE_OF_CONDUCT/#work-environment","title":"Work Environment","text":"

is the set of all available means of collaboration, including, but not limited to messages to mailing lists, private correspondence, Web pages, chat channels, phone and video teleconferences, and any kind of face-to-face meetings or discussions.

"},{"location":"CODE_OF_CONDUCT/#incident-procedure","title":"Incident Procedure","text":"

To report incidents or to appeal reports of incidents, send email to Mike Dolan (mdolan@linuxfoundation.org) or Angela Brown (angela@linuxfoundation.org). Please include any available relevant information, including links to any publicly accessible material relating to the matter. Every effort will be taken to ensure a safe and collegial environment in which to collaborate on matters relating to the Project. In order to protect the community, the Project reserves the right to take appropriate action, potentially including the removal of an individual from any and all participation in the project. The Project will work towards an equitable resolution in the event of a misunderstanding.

"},{"location":"CODE_OF_CONDUCT/#credits","title":"Credits","text":"

This code is based on the W3C's Code of Ethics and Professional Conduct with some additions from the Cloud Foundry's Code of Conduct.

"},{"location":"CONTRIBUTING/","title":"How to contribute","text":"

You are encouraged to contribute to the repository by forking and submitting a pull request.

For significant changes, please open an issue first to discuss the proposed changes to avoid re-work.

(If you are new to GitHub, you might start with a basic tutorial and check out a more detailed guide to pull requests.)

Pull requests will be evaluated by the repository guardians on a schedule and if deemed beneficial will be committed to the main branch. Pull requests should have a descriptive name, include a summary of all changes made in the pull request description, and include unit tests that provide good coverage of the feature or fix. A Continuous Integration (CI) pipeline is executed on all PRs before review and contributors are expected to address all CI issues identified. Where appropriate, PRs that impact the end-user and developer demos in the repo should include updates or extensions to those demos to cover the new capabilities.

If you would like to propose a significant change, please open an issue first to discuss the work with the community.

Contributions are made pursuant to the Developer's Certificate of Origin, available at https://developercertificate.org, and licensed under the Apache License, version 2.0 (Apache-2.0).
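If the project requires DCO sign-offs on commits (an assumption here; check the repository's contribution rules), git can add the Signed-off-by trailer for you:

git commit -s   # -s adds the Signed-off-by trailer required by the DCO\n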

"},{"location":"CONTRIBUTING/#development-tools","title":"Development Tools","text":""},{"location":"CONTRIBUTING/#pre-commit","title":"Pre-commit","text":"

A configuration for pre-commit is included in this repository. This is an optional tool to help contributors commit code that follows the formatting requirements enforced by the CI pipeline. Additionally, it can be used to help contributors write descriptive commit messages that can be parsed by changelog generators.

On each commit, pre-commit hooks will run that verify the committed code complies with ruff and is formatted with black. To install the ruff and black checks:

pre-commit install\n

To install the commit message linter:

pre-commit install --hook-type commit-msg\n
"},{"location":"MAINTAINERS/","title":"Maintainers","text":""},{"location":"MAINTAINERS/#maintainer-scopes-github-roles-and-github-teams","title":"Maintainer Scopes, GitHub Roles and GitHub Teams","text":"

Maintainers are assigned the following scopes in this repository:

Scope | Definition | GitHub Role | GitHub Team
Admin | The GitHub Admin role | Admin | aries-admins
Maintainer | The GitHub Maintain role | Maintain | aries-cloudagent-python committers
Triage | The GitHub Triage role | Triage | aries triage
Read | The GitHub Read role | Read | Aries Contributors
Read | The GitHub Read role | Read | TOC
Read | The GitHub Read role | Read | aries-framework-go-ext committers

"},{"location":"MAINTAINERS/#active-maintainers","title":"Active Maintainers","text":"

GitHub ID | Name | Scope | LFID | Discord ID | Email | Company Affiliation
andrewwhitehead | Andrew Whitehead | Admin | | | cywolf@gmail.com | BC Gov
dbluhm | Daniel Bluhm | Admin | | | daniel@indicio.tech | Indicio PBC
dhh1128 | Daniel Hardman | Admin | | | daniel.hardman@gmail.com | Provident
shaangill025 | Shaanjot Gill | Maintainer | | | gill.shaanjots@gmail.com | BC Gov
swcurran | Stephen Curran | Admin | | | swcurran@cloudcompass.ca | BC Gov
TelegramSam | Sam Curren | Maintainer | | | telegramsam@gmail.com | Indicio PBC
TimoGlastra | Timo Glastra | Admin | | | timo@animo.id | Animo Solutions
WadeBarnes | Wade Barnes | Admin | | | wade@neoterictech.ca | BC Gov
usingtechnology | Jason Sherman | Maintainer | | | tools@usingtechnolo.gy | BC Gov

"},{"location":"MAINTAINERS/#emeritus-maintainers","title":"Emeritus Maintainers","text":"

Name | GitHub ID | Scope | LFID | Discord ID | Email | Company Affiliation

"},{"location":"MAINTAINERS/#the-duties-of-a-maintainer","title":"The Duties of a Maintainer","text":"

Maintainers are expected to perform the following duties for this repository. The duties are listed in more or less priority order:

"},{"location":"MAINTAINERS/#becoming-a-maintainer","title":"Becoming a Maintainer","text":"

This community welcomes contributions. Interested contributors are encouraged to progress to become maintainers. To become a maintainer the following steps occur, roughly in order.

"},{"location":"MAINTAINERS/#removing-maintainers","title":"Removing Maintainers","text":"

Being a maintainer is not a status symbol or a title to be carried indefinitely. It will occasionally be necessary and appropriate to move a maintainer to emeritus status. This can occur in the following situations:

The process to move a maintainer from active to emeritus status is comparable to the process for adding a maintainer, outlined above. In the case of voluntary resignation, the Pull Request can be merged following a maintainer PR approval. If the removal is for any other reason, the following steps SHOULD be followed:

Returning to active status from emeritus status uses the same steps as adding a new maintainer. Note that the emeritus maintainer already has the 5 required significant changes as there is no contribution time horizon for those.

"},{"location":"PUBLISHING/","title":"How to Publish a New Version","text":"

The code to be published should be in the main branch. Make sure that all the PRs to go in the release are merged, and decide on the release tag: whether it should be a release candidate or the final tag, and whether it should be a major, minor or patch release, per semver rules.

Once ready to do a release, create a local branch that includes the following updates:

  1. Create a PR branch from an updated main branch.

  2. Update the CHANGELOG.md to add the new release. Only create a new section when working on the first release candidate for a new release. When transitioning from one release candidate to the next, or to an official release, just update the title and date of the change log section.

  3. Include details of the merged PRs included in this release. General process to follow:

  4. Gather the set of PRs since the last release and put them into a list. A good tool to use for this is the github-changelog-generator. Steps:

  5. Create a read only GitHub token for your account on this page: https://github.com/settings/tokens with a scope of repo / public_repo.
  6. Use a command like the following, adjusting the tag parameters as appropriate. docker run -it --rm -v \"$(pwd)\":/usr/local/src/your-app githubchangeloggenerator/github-changelog-generator --user hyperledger --project aries-cloudagent-python --output 0.11.0rc2.md --since-tag 0.10.4 --future-release 0.11.1rc2 --release-branch main --token <your-token>
  7. In the generated file, use only the PR list -- we don't include the list of closed issues in the Change Log.

In some cases, the approach above fails because of too many API calls. An alternate approach to getting the list of PRs in the right format is to use OpenAI ChatGPT.

Prepare the following ChatGPT request. Don't hit enter yet--you have to add the data.

Generate from this the github pull request number, the github id of the author and the title of the pull request in a tab-delimited list

Get a list of the merged PRs since the last release by displaying the PR list in the GitHub UI, highlighting/copying the PRs and pasting them below the ChatGPT request, one page after another. Hit <Enter>, let the AI magic work, and you should have a list of the PRs in a nice table with a Copy link that you should click.

Once you have that, open this Google Sheet and highlight the A1 cell and paste in the ChatGPT data. A formula in column E will have the properly formatted changelog entries. Double check the list with the GitHub UI to make sure that ChatGPT isn't messing with you and you have the needed data.

If using ChatGPT doesn't appeal to you, try this scary sed/command line approach:

/Approved/d\n/updated /d\n/^$/d\n/^ [0-9]/d\ns/was merged.*//\n/^@/d\ns# by \\(.*\\) # [\\1](https://github.com/\\1)#\ns/^ //\ns#  \\#\\([0-9]*\\)# [\\#\\1](https://github.com/hyperledger/aries-cloudagent-python/pull/\\1) #\ns/  / /g\n/^Version/d\n/tasks done/d\ns/^/- /\n

Once you have the list of PRs:

Additional information about the container image publication process can be found in the document Container Images and Github Actions.

  1. Update the ACA-Py Read The Docs site by building the new \"latest\" (main branch) and activating and building the new release. Appropriate permissions are required to publish the new documentation version.

  2. Update the https://aca-py.org website with the latest documentation by creating a PR and tag of the latest documentation from this site. Details are provided in the aries-acapy-docs repository.

"},{"location":"SECURITY/","title":"Hyperledger Security Policy","text":""},{"location":"SECURITY/#reporting-a-security-bug","title":"Reporting a Security Bug","text":"

If you think you have discovered a security issue in any of the Hyperledger projects, we'd love to hear from you. We will take all security bugs seriously; if confirmed upon investigation, we will patch them within a reasonable amount of time, release a public security bulletin discussing the impact, and credit the discoverer.

There are two ways to report a security bug. The easiest is to email a description of the flaw and any related information (e.g. reproduction steps, version) to security at hyperledger dot org.

The other way is to file a confidential security bug in our JIRA bug tracking system. Be sure to set the \u201cSecurity Level\u201d to \u201cSecurity issue\u201d.

The process by which the Hyperledger Security Team handles security bugs is documented further in our Defect Response page on our wiki.

"},{"location":"UpdateRTD/","title":"Managing Aries Cloud Agent Python Read The Docs Documentation","text":"

This document describes how to maintain the Read The Docs documentation that is generated from the ACA-Py code base. As the structure of the ACA-Py code evolves, the RTD files need to be regenerated and possibly updated, as described here.

"},{"location":"UpdateRTD/#generating-aca-py-read-the-docs-rtd-documentation","title":"Generating ACA-Py Read The Docs (RTD) documentation","text":""},{"location":"UpdateRTD/#before-you-start","title":"Before you start","text":"

To generate and view the RTD documentation locally for testing, you must install Sphinx and the Sphinx RTD theme. Follow the instructions on the respective pages to install and verify the installation on your system.
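For example, a typical installation into your Python environment (assuming you use pip) is:

pip3 install sphinx sphinx-rtd-theme\n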

"},{"location":"UpdateRTD/#generate-module-files","title":"Generate Module Files","text":"

To rebuild the project and settings from scratch (you'll need to move the generated index file up a level):

rm -rf generated; sphinx-apidoc -f -M -o ./generated ../aries_cloudagent/ $(find ../aries_cloudagent/ -name '*tests*')

Note that the find command is used to exclude the test python files from the RTD documentation.

Check the git status in your repo to see if the generator updates, adds or removes any existing RTD modules.

"},{"location":"UpdateRTD/#reviewing-the-files-locally","title":"Reviewing the files locally","text":"

To auto-generate the module documentation locally run:

sphinx-build -b html -a -E -c ./ ./ ./_build\n

Once generated, go into the _build folder and open index.html in a browser. Note that the _build folder is .gitignore'd and so will not be part of a git push.

"},{"location":"UpdateRTD/#look-for-errors","title":"Look for Errors","text":"

This is the hard part: looking for errors in docstrings added by devs. Some tips:

Other than that, please investigate and fix things that you find. Fixes usually involve adhering to the rules around processing docstrings, especially around JSON samples.

"},{"location":"UpdateRTD/#checking-for-missing-modules","title":"Checking for missing modules","text":"

The file index.rst in the ACA-Py docs folder drives the RTD generation. It picks up all the modules in the source code, starting from the root ../aries_cloudagent folder. However, some modules are not picked up automatically from the root and have to be manually added to index.rst. To do that:

If any are missing, you likely need to add them to the index.rst file in the toctree section of the file. You will see there are already several instances of that, notably \"connections\" and \"protocols\".

"},{"location":"UpdateRTD/#updating-the-readthedocsorg-site","title":"Updating the readthedocs.org site","text":"

The RTD documentation is not currently auto-generated, so a manual re-generation of the documentation is still required.

TODO: Automate this when new tags are applied to the repository.

"},{"location":"aca-py.org/","title":"Welcome!","text":"

Welcome to the Aries Cloud Agent Python documentation site. On this site you will find documentation for recent releases of ACA-Py. You'll find a few of the older versions of ACA-Py (pre-0.8.0), all versions since 0.8.0, and the main branch, which is the latest and greatest.

All of the documentation here is extracted from the Aries Cloud Agent Python repository. If you want to contribute to the documentation, please start there.

Ready to go? Scan the tabs in the page header to find the documentation you need now!

"},{"location":"aca-py.org/#code-internals-documentation","title":"Code Internals Documentation","text":"

In addition to this documentation site, the ACA-Py community also maintains an ACA-Py internals documentation site. The internals documentation consists of the docstrings extracted from the ACA-Py Python code and covers all of the (non-test) modules in the codebase. Check it out on the Aries Cloud Agent-Python ReadTheDocs site. As with this site, the ReadTheDocs documentation is version specific.

Got questions?

"},{"location":"assets/","title":"Assets Folder for Documentation","text":"

Put any assets (images, source for images, videos, etc.) in this folder to be referenced in the various documents for this repo.

"},{"location":"assets/#plantuml-source-and-images","title":"Plantuml Source and Images","text":"

Plantuml diagrams are stored in this folder in source form in files ending in .puml and are generated manually using the ./genPlantuml script. The script uses a docker image from docker-hub and can be run without downloading any dependencies.

If you don't want to use the script, download plantuml and a command line utility and use that for the plantuml generation. I preferred not to require any dependencies (other than docker) and couldn't find a nice way to run plantuml headless from the command line.

"},{"location":"assets/#to-do","title":"To Do","text":"

It would be better to use a local Dockerfile vs. one found on Docker Hub. The one I did find was simple and straightforward.

I couldn't tell if the svg generation was working, so I just went with png. Not sure which would be better.

"},{"location":"demo/","title":"Aries Cloud Agent Python (ACA-Py) Demos","text":"

There are several demos available for ACA-Py, mostly (but not only) aimed at developers learning how to deploy an instance of the agent and an ACA-Py controller to implement an application.

"},{"location":"demo/#table-of-contents","title":"Table of Contents","text":""},{"location":"demo/#the-alicefaber-python-demo","title":"The Alice/Faber Python demo","text":"

The Alice/Faber demo is the (in)famous first verifiable credentials demo. Alice, a former student of Faber College (\"Knowledge is Good\"), connects with the College, is issued a credential about her degree and then is asked by the College for a proof. There are a variety of ways of running the demo. The easiest is in your browser using a site (\"Play with VON\") that lets you run docker containers without installing anything. Alternatively, you can run locally on docker (our recommendation), or using python on your local machine. Each approach is covered below.

"},{"location":"demo/#running-in-a-browser","title":"Running in a Browser","text":"

In your browser, go to the docker playground service Play with Docker. On the title screen, click \"Start\". On the next screen, click (in the left menu) \"+Add a new instance\". That will start up a terminal in your browser. Run the following commands to start the Faber agent:

git clone https://github.com/hyperledger/aries-cloudagent-python\ncd aries-cloudagent-python/demo\nLEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber\n

Now to start Alice's agent. Click the \"+Add a new instance\" button again to open another terminal session. Run the following commands to start Alice's agent:

git clone https://github.com/hyperledger/aries-cloudagent-python\ncd aries-cloudagent-python/demo\nLEDGER_URL=http://test.bcovrin.vonx.io ./run_demo alice\n

Alice's agent is now running.

Jump to the Follow the Script section below for further instructions.

"},{"location":"demo/#running-in-docker","title":"Running in Docker","text":"

Running the demo in docker requires having a von-network (a Hyperledger Indy public ledger sandbox) instance running in docker locally. See the VON Network Tutorial for guidance on starting and stopping your own local Hyperledger Indy instance.

Open three bash shells. For Windows users, git-bash is highly recommended. bash is the default shell in Linux and Mac terminal sessions.

In the first terminal window, start von-network by following the Building and Starting instructions.

In the second terminal, change directory into the demo directory of your clone of the Aries Cloud Agent Python repository. Start the faber agent by issuing the following command:

  ./run_demo faber\n

In the third terminal, change directory into the demo directory of your clone of the Aries Cloud Agent Python repository. Start the alice agent by issuing the following command:

  ./run_demo alice\n

Jump to the Follow the Script section below for further instructions.

"},{"location":"demo/#running-locally","title":"Running Locally","text":"

The following is an approach to running the Alice and Faber demo using Python3 running on a bare machine. There are other ways to run the components, but this covers the general approach.

We don't recommend this approach if you are just trying this demo, as you will likely run into issues with the specific setup of your machine.

"},{"location":"demo/#installing-prerequisites","title":"Installing Prerequisites","text":"

We assume you have a running Python 3 environment. To install the prerequisites specific to running the agent/controller examples in your Python environment, run the following command from this repo's demo folder. The precise command to run may vary based on your Python environment setup.

pip3 install -r demo/requirements.txt\n

While that process will include the installation of the Indy python prerequisite, you still have to build and install the libindy code for your platform. Follow the installation instructions in the indy-sdk repo for your platform.

"},{"location":"demo/#start-a-local-indy-ledger","title":"Start a local Indy ledger","text":"

Start a local von-network Hyperledger Indy network running in Docker by following the VON Network Building and Starting instructions.

We strongly recommend you use Docker for the local Indy network until you really, really need to know the details of running an Indy Node instance on a bare machine.

"},{"location":"demo/#genesis-file-handling","title":"Genesis File handling","text":"

Assuming you followed our advice and are using a VON Network instance of Hyperledger Indy, you can ignore this section. If you started the Indy ledger without using VON Network, this information might be helpful.

An Aries agent (or other client) connecting to an Indy ledger must know the contents of the genesis file for the ledger. The genesis file lets the agent/client know the IP addresses of the initial nodes of the ledger, and the agent/client sends ledger requests to those IP addresses. When using the indy-sdk ledger, look for the instructions in that repo for how to find/update the ledger genesis file, and note the path to that file on your local system.

The environment variable GENESIS_FILE is used to let the Aries demo agents know the location of the genesis file. Use the path to that file as the value of the GENESIS_FILE environment variable in the instructions below. You might want to copy that file to be local to the demo so the path is shorter.
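For example, assuming you copied the genesis file into the demo folder, you could export the variable once in your shell before running the commands below (the path here is a placeholder):

export GENESIS_FILE=./local-genesis.txt   # placeholder path - point this at your ledger's genesis file\n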

"},{"location":"demo/#run-a-local-postgres-instance","title":"Run a local Postgres instance","text":"

The demo uses a postgres database for wallet persistence. Use the Docker Hub certified postgres image to start up a postgres instance to be used for wallet storage:

docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d -p 5432:5432 postgres -c 'log_statement=all' -c 'logging_collector=on' -c 'log_destination=stderr'\n
"},{"location":"demo/#optional-run-a-von-network-ledger-browser","title":"Optional: Run a von-network ledger browser","text":"

If you followed our advice and are using a VON Network instance of Hyperledger Indy, you can ignore this section, as you already have a Ledger browser running, accessible on http://localhost:9000.

If you started the Indy ledger without using VON Network, and you want to be able to browse your local ledger as you run the demo, clone the von-network repo, go into the root of the cloned instance and run the following command, replacing the /path/to/local-genesis.txt with a path to the same genesis file as was used in starting the ledger.

GENESIS_FILE=/path/to/local-genesis.txt PORT=9000 REGISTER_NEW_DIDS=true python -m server.server\n
"},{"location":"demo/#run-the-alice-and-faber-controllersagents","title":"Run the Alice and Faber Controllers/Agents","text":"

With the rest of the pieces running, you can run the Alice and Faber controllers and agents. To do so, cd into the demo folder of your clone of this repo in two terminal windows.

If you are using a VON Network instance of Hyperledger, run the following commands:

DEFAULT_POSTGRES=true python3 -m runners.faber --port 8020\n
DEFAULT_POSTGRES=true python3 -m runners.alice --port 8030\n

If you started the Indy ledger without using VON Network, use the following commands, replacing the /path/to/local-genesis.txt with the one for your configuration.

GENESIS_FILE=/path/to/local-genesis.txt DEFAULT_POSTGRES=true python3 -m runners.faber --port 8020\n
GENESIS_FILE=/path/to/local-genesis.txt DEFAULT_POSTGRES=true python3 -m runners.alice --port 8030\n

Note that Alice and Faber will each use 5 ports, e.g., using the parameter ... --port 8020 actually uses ports 8020 through 8024. Feel free to use different ports if you want.

Everything running? See the Follow the Script section below for further instructions.

If the demo fails with an error that references the genesis file, a timeout connecting to the Indy Pool, or an Indy 307 error, it's likely a problem with the genesis file handling. Things to check:

"},{"location":"demo/#follow-the-script","title":"Follow The Script","text":"

With both the Alice and Faber agents started, go to the Faber terminal window. The Faber agent has created and displayed an invitation. Copy this invitation and paste it at the Alice prompt. The agents will connect and then show a menu of options:

Faber:

    (1) Issue Credential\n    (2) Send Proof Request\n    (3) Send Message\n    (4) Create New Invitation\n    (T) Toggle tracing on credential/proof exchange\n    (X) Exit?\n

Alice:

    (3) Send Message\n    (4) Input New Invitation\n    (X) Exit?\n
"},{"location":"demo/#exchanging-messages","title":"Exchanging Messages","text":"

Feel free to use the \"3\" option to send messages back and forth between the agents. Fun, eh? Those are secure, end-to-end encrypted messages.

"},{"location":"demo/#issuing-and-proving-credentials","title":"Issuing and Proving Credentials","text":"

When ready to test the credentials exchange protocols, go to the Faber prompt, enter \"1\" to send a credential, and then \"2\" to request a proof.

You don't need to do anything with Alice's agent - her agent is implemented to automatically receive credentials and respond to proof requests.

Note there is an option \"2a\" to initiate a connectionless proof - you can execute this option but it will only work end-to-end when connecting to Faber from a mobile agent.

"},{"location":"demo/#additional-options-in-the-alicefaber-demo","title":"Additional Options in the Alice/Faber demo","text":"

You can enable support for various ACA-Py features by providing additional command-line arguments when starting up alice or faber.

Note that when the controller starts up the agent, it prints out the ACA-Py startup command with all parameters - you can inspect this command to see what parameters are provided in each case. For more details on the parameters, just start ACA-Py with the --help parameter, for example:

./scripts/run_docker start --help\n
"},{"location":"demo/#revocation","title":"Revocation","text":"

To enable support for revoking credentials, run the faber demo with the --revocation option:

./run_demo faber --revocation\n

Note that you don't specify this option with alice because it's only applicable for the credential issuer (who has to enable revocation when creating a credential definition, and explicitly revoke credentials as appropriate; alice doesn't have to do anything special when revocation is enabled).

You need to run an AnonCreds revocation registry tails server in order to support revocation - the details are described in the Alice gets a Phone demo instructions.

Faber will set up support for revocation automatically, and you will see an extra option in faber's menu to revoke a credential:

    (1) Issue Credential\n    (2) Send Proof Request\n    (3) Send Message\n    (4) Create New Invitation\n    (5) Revoke Credential\n    (6) Publish Revocations\n    (7) Rotate Revocation Registry\n    (8) List Revocation Registries\n    (T) Toggle tracing on credential/proof exchange\n    (X) Exit?\n

When you issue a credential, make a note of the Revocation registry ID and Credential revocation ID:

Faber | Revocation registry ID: WGmUNAdH2ZfeGvacFoMVVP:4:WGmUNAdH2ZfeGvacFoMVVP:3:CL:38:Faber.Agent.degree_schema:CL_ACCUM:15ca49ed-1250-4608-9e8f-c0d52d7260c3\nFaber | Credential revocation ID: 1\n

When you revoke a credential you will need to provide those values:

[1/2/3/4/5/6/7/8/T/X] 5\n\nEnter revocation registry ID: WGmUNAdH2ZfeGvacFoMVVP:4:WGmUNAdH2ZfeGvacFoMVVP:3:CL:38:Faber.Agent.degree_schema:CL_ACCUM:15ca49ed-1250-4608-9e8f-c0d52d7260c3\nEnter credential revocation ID: 1\nPublish now? [Y/N]: y\n

Note that you need to Publish the revocation information to the ledger. Once you've revoked a credential, any proof which uses this credential will fail to verify.

Rotating the revocation registry will decommission any \"ready\" registry records and create 2 new registry records. You can view in the logs as the records are created and transition to 'active'. There should always be 2 'active' revocation registries - one working and one for hot-swap. Note that revocation information can still be published from decommissioned registries.

You can also list the created registries, filtering by current state: 'init', 'generated', 'posted', 'active', 'full', 'decommissioned'.

"},{"location":"demo/#did-exchange","title":"DID Exchange","text":"

You can enable DID Exchange using the --did-exchange parameter for the alice and faber demos.

This will use the new DID Exchange protocol when establishing connections between the agents, rather than the older Connection protocol. There is no other effect on the operation of the agents.

Note that you can't (currently) use the DID Exchange protocol to connect with any of the available mobile agents.

"},{"location":"demo/#endorser","title":"Endorser","text":"

This is described in Endorser.md.

"},{"location":"demo/#run-indy-sdk-backend","title":"Run Indy-SDK Backend","text":"

This runs using the older (and not recommended) indy-sdk libraries instead of Aries Askar:

./run_demo faber --wallet-type indy\n

"},{"location":"demo/#mediation","title":"Mediation","text":"

To enable mediation, run the alice or faber demo with the --mediation option:

./run_demo faber --mediation\n

This will start up a \"mediator\" agent with Alice or Faber and automatically set the alice/faber connection to use the mediator.

"},{"location":"demo/#multi-ledger","title":"Multi-ledger","text":"

To enable multiple ledger mode, run the alice or faber demo with the --multi-ledger option:

./run_demo faber --multi-ledger\n

The configuration file for setting up multiple ledgers (for the demo) can be found at ./demo/multiple_ledger_config.yml.

"},{"location":"demo/#multi-tenancy","title":"Multi-tenancy","text":"

To enable support for multi-tenancy, run the alice or faber demo with the --multitenant option:

./run_demo faber --multitenant\n

(This option can be used with both (or either) alice and/or faber.)

You will see an additional menu option to create new sub-wallets (or they can be considered to be \"virtual agents\").

Faber:

    (1) Issue Credential\n    (2) Send Proof Request\n    (3) Send Message\n    (4) Create New Invitation\n    (W) Create and/or Enable Wallet\n    (T) Toggle tracing on credential/proof exchange\n    (X) Exit?\n

Alice:

    (3) Send Message\n    (4) Input New Invitation\n    (W) Create and/or Enable Wallet\n    (X) Exit?\n

When you create a new wallet, you just need to provide the wallet name. (If you provide the name of an existing wallet then the controller will \"activate\" that wallet and make it the current wallet.)

[1/2/3/4/W/T/X] w\n\nEnter wallet name: new_wallet_12\n\nFaber      | Register or switch to wallet new_wallet_12\nFaber      | Created new profile\nFaber      | Profile backend: indy\nFaber      | Profile name: new_wallet_12\nFaber      | No public DID\n... etc\n

Note that faber will create a public DID for this wallet, and will create a schema and credential definition.

Once you have created a new wallet, you must establish a connection between alice and faber (remember that this is a new \"virtual agent\" and doesn't know anything about connections established for other \"agents\").

In faber, create a new invitation:

[1/2/3/4/W/T/X] 4\n\n(... creates a new invitation ...)\n

In alice, accept the invitation:

[1/2/3/4/W/T/X] 4\n\n(... enter the new invitation string ...)\n

You can inspect the additional multi-tenancy admin APIs (i.e., the \"agency API\") by opening either agent's swagger page in your browser:
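For example, assuming the default demo ports, Faber's admin swagger page is typically available at:

http://localhost:8021/api/doc\n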

Show me a screenshot - multi-tenancy via admin API

Note that with multi-tenancy enabled:

Documentation on ACA-Py's multi-tenancy support can be found here.

"},{"location":"demo/#multi-tenancy-with-mediation","title":"Multi-tenancy with Mediation!!!","text":"

There are two options for configuring mediation with multi-tenancy, documented here.

This demo implements option #2 - each sub-wallet is configured with a separate connection to the mediator.

Run the demo (Alice or Faber) specifying both options:

./run_demo faber --multitenant --mediation\n

This works exactly as the vanilla multi-tenancy, except that all connections are mediated.

"},{"location":"demo/#other-environment-settings","title":"Other Environment Settings","text":"

The agents run on a pre-defined set of ports, however occasionally your local system may already be using one of these ports. (For example MacOS recently decided to use 8021 for the ftp proxy service.)

To override the default port settings:

AGENT_PORT_OVERRIDE=8010 ./run_demo faber\n

(The agent requires up to 10 available ports.)

To pass extra arguments to the agent (for example):

DEMO_EXTRA_AGENT_ARGS=\"[\\\"--emit-did-peer-2\\\"]\" ./run_demo faber --did-exchange --reuse-connections\n
"},{"location":"demo/#learning-about-the-alicefaber-code","title":"Learning about the Alice/Faber code","text":"

These Alice and Faber scripts (in the demo/runners folder) implement the controller and run the agent as a sub-process (see the documentation for aca-py). The controller publishes a REST service to receive web hook callbacks from their agent. Note that this architecture, running the agent as a sub-process, is a variation on the documented architecture of running the controller and agent as separate processes/containers.

The controllers for this demo can be found in the alice.py and faber.py files. Alice and Faber are instances of the agent class found in agent.py.

"},{"location":"demo/#openapi-swagger-demo","title":"OpenAPI (Swagger) Demo","text":"

Developing an ACA-Py controller is much like developing a web app that uses a REST API. As you develop, you will want an easy way to test out the behaviour of the API. That's where the industry-standard OpenAPI (aka Swagger) UI comes in. ACA-Py (optionally) exposes an OpenAPI UI that you can use to learn the ins and outs of the API. This Aries OpenAPI demo shows how you can use the OpenAPI UI with an ACA-Py agent by walking through connecting, issuing a credential, and presenting a proof.

"},{"location":"demo/#performance-demo","title":"Performance Demo","text":"

Another example in the demo/runners folder is performance.py, which is used to test the performance of interacting agents. The script starts up agents for Alice and Faber, initializes them, and then runs through an interaction some number of times. In this case, Faber issues a credential to Alice 300 times.

To run the demo, make sure that you shut down any running Alice/Faber agents. Then, follow the same steps to start the Alice/Faber demo, but:
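For example, a minimal invocation (assuming a local von-network is running, as above) is:

./run_demo performance\n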

The script starts both agents, runs the performance test, spits out performance results and shuts down the agents. Note that this is just one demonstration of how performance metrics tracking can be done with ACA-Py.

A second version of the performance test can be run by adding the parameter --routing to the invocation above. The parameter triggers the example to run with Alice using a routing agent such that all messages pass through the routing agent between Alice and Faber. This is a good, simple example of how routing can be implemented with DIDComm agents.

You can also run the demo against a postgres database using the following:

./run_demo performance --arg-file demo/postgres-indy-args.yml\n

(Obviously you need to be running a postgres database - the command to start postgres is in the yml file provided above.)

You can tweak the number of credentials issued using the --count and --batch parameters, and you can run against an Askar database using the --wallet-type askar option (or run using indy-sdk using --wallet-type indy).

An example full set of options is:

./run_demo performance --arg-file demo/postgres-indy-args.yml -c 10000 -b 10 --wallet-type askar\n

Or:

./run_demo performance --arg-file demo/postgres-indy-args.yml -c 10000 -b 10 --wallet-type indy\n
"},{"location":"demo/#coding-challenge-adding-acme","title":"Coding Challenge: Adding ACME","text":"

Now that you have a solid foundation in using ACA-Py, time for a coding challenge. In this challenge, we extend the Alice-Faber command line demo by adding in ACME Corp, a place where Alice wants to work. The demo adds:

The framework for the code is in the acme.py file, but the code is incomplete. Using the knowledge you gained from running the demo and viewing the alice.py and faber.py code, fill in the blanks for the code. When you are ready to test your work:
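For example (using the same BCovrin test ledger as in the workshop below), run each of the three agents in its own shell:

LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber\nLEDGER_URL=http://test.bcovrin.vonx.io ./run_demo alice\nLEDGER_URL=http://test.bcovrin.vonx.io ./run_demo acme\n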

All done? Checkout how we added the missing code segments here.

"},{"location":"demo/AcmeDemoWorkshop/","title":"Acme Controller Workshop","text":"

In this workshop we will add some functionality to a third participant in the Alice/Faber drama - namely, Acme Inc. After completing her education at Faber College, Alice is going to apply for a job at Acme Inc. To do this she must provide proof of education (once she has completed the interview and other non-Indy tasks), and then Acme will issue her an employment credential.

Note that an updated Acme controller is available here: https://github.com/ianco/aries-cloudagent-python/tree/acme_workshop/demo if you just want to skip ahead ... There is also an alternate solution with some additional functionality available here: https://github.com/ianco/aries-cloudagent-python/tree/agent_workshop/demo

"},{"location":"demo/AcmeDemoWorkshop/#preview-of-the-acme-controller","title":"Preview of the Acme Controller","text":"

There is already a skeleton of the Acme controller in place; you can run it as follows. (Note that beyond establishing a connection it doesn't actually do anything yet.)

To run the Acme controller template, first run Alice and Faber so that Alice can prove her education experience:

Open 2 bash shells, and in each run:

git clone https://github.com/hyperledger/aries-cloudagent-python.git\ncd aries-cloudagent-python/demo\n

In one shell run Faber:

LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber\n

... and in the second shell run Alice:

LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo alice\n

When Faber has produced an invitation, copy it over to Alice.

Then, in the Faber shell, select option 1 to issue a credential to Alice. (You can select option 2 if you like, to confirm via proof.)

Then, in the Faber shell, enter X to exit the controller, and then run the Acme controller:

LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo acme\n

In the Alice shell, select option 4 (to enter a new invitation) and then copy over Acme's invitation once it's available.

Then, in the Acme shell, you can select option 2 and then option 1, which don't do anything ... yet!!!

"},{"location":"demo/AcmeDemoWorkshop/#asking-alice-for-a-proof-of-education","title":"Asking Alice for a Proof of Education","text":"

In the Acme code acme.py we are going to add code to issue a proof request to Alice, and then validate the received proof.

First, add the following import statements and constants that we will need near the top of acme.py:

import random\n\nfrom datetime import date\nfrom uuid import uuid4\n
TAILS_FILE_COUNT = int(os.getenv(\"TAILS_FILE_COUNT\", 100))\nCRED_PREVIEW_TYPE = \"https://didcomm.org/issue-credential/2.0/credential-preview\"\n

Next locate the code that is triggered by option 2:

            elif option == \"2\":\n                log_status(\"#20 Request proof of degree from alice\")\n                # TODO presentation requests\n

Replace the # TODO comment with the following code:

                req_attrs = [\n                    {\n                        \"name\": \"name\",\n                        \"restrictions\": [{\"schema_name\": \"degree schema\"}]\n                    },\n                    {\n                        \"name\": \"date\",\n                        \"restrictions\": [{\"schema_name\": \"degree schema\"}]\n                    },\n                    {\n                        \"name\": \"degree\",\n                        \"restrictions\": [{\"schema_name\": \"degree schema\"}]\n                    }\n                ]\n                req_preds = []\n                indy_proof_request = {\n                    \"name\": \"Proof of Education\",\n                    \"version\": \"1.0\",\n                    \"nonce\": str(uuid4().int),\n                    \"requested_attributes\": {\n                        f\"0_{req_attr['name']}_uuid\": req_attr\n                        for req_attr in req_attrs\n                    },\n                    \"requested_predicates\": {}\n                }\n                proof_request_web_request = {\n                    \"connection_id\": agent.connection_id,\n                    \"presentation_request\": {\"indy\": indy_proof_request},\n                }\n                # this sends the request to our agent, which forwards it to Alice\n                # (based on the connection_id)\n                await agent.admin_POST(\n                    \"/present-proof-2.0/send-request\",\n                    proof_request_web_request\n                )\n

Now we need to handle receipt of the proof. Locate the code that handles received proofs (this is in a webhook callback):

        if state == \"presentation-received\":\n            # TODO handle received presentations\n            pass\n

then replace the # TODO comment and the pass statement:

            log_status(\"#27 Process the proof provided by X\")\n            log_status(\"#28 Check if proof is valid\")\n            proof = await self.admin_POST(\n                f\"/present-proof-2.0/records/{pres_ex_id}/verify-presentation\"\n            )\n            self.log(\"Proof = \", proof[\"verified\"])\n\n            # if presentation is a degree schema (proof of education),\n            # check values received\n            pres_req = message[\"by_format\"][\"pres_request\"][\"indy\"]\n            pres = message[\"by_format\"][\"pres\"][\"indy\"]\n            is_proof_of_education = (\n                pres_req[\"name\"] == \"Proof of Education\"\n            )\n            if is_proof_of_education:\n                log_status(\"#28.1 Received proof of education, check claims\")\n                for (referent, attr_spec) in pres_req[\"requested_attributes\"].items():\n                    if referent in pres['requested_proof']['revealed_attrs']:\n                        self.log(\n                            f\"{attr_spec['name']}: \"\n                            f\"{pres['requested_proof']['revealed_attrs'][referent]['raw']}\"\n                        )\n                    else:\n                        self.log(\n                            f\"{attr_spec['name']}: \"\n                            \"(attribute not revealed)\"\n                        )\n                for id_spec in pres[\"identifiers\"]:\n                    # just print out the schema/cred def id's of presented claims\n                    self.log(f\"schema_id: {id_spec['schema_id']}\")\n                    self.log(f\"cred_def_id {id_spec['cred_def_id']}\")\n                # TODO placeholder for the next step\n            else:\n                # in case there are any other kinds of proofs received\n                self.log(\"#28.1 Received \", pres_req[\"name\"])\n

Right now this just verifies the proof received and prints out the attributes it reveals, but in \"real life\" your application could do something useful with this information.

Now you can run the Faber/Alice/Acme script from the \"Preview of the Acme Controller\" section above, and you should see Acme receive a proof from Alice!

"},{"location":"demo/AcmeDemoWorkshop/#issuing-alice-a-work-credential","title":"Issuing Alice a Work Credential","text":"

Now we can issue a work credential to Alice!

There are two options for this. We can (a) add code under option 1 to issue the credential, or (b) we can automatically issue this credential on receipt of the education proof.

We're going to do option (a), but you can try to implement option (b) as homework. You have most of the information you need from the proof response!

First though we need to register a schema and credential definition. Find this code:

        # acme_schema_name = \"employee id schema\"\n        # acme_schema_attrs = [\"employee_id\", \"name\", \"date\", \"position\"]\n        await acme_agent.initialize(\n            the_agent=agent,\n            # schema_name=acme_schema_name,\n            # schema_attrs=acme_schema_attrs,\n        )\n\n        # TODO publish schema and cred def\n

... and uncomment the code lines. Replace the # TODO comment with the following code:

        with log_timer(\"Publish schema and cred def duration:\"):\n            # define schema\n            version = format(\n                \"%d.%d.%d\"\n                % (\n                    random.randint(1, 101),\n                    random.randint(1, 101),\n                    random.randint(1, 101),\n                )\n            )\n            # register schema and cred def\n            (schema_id, cred_def_id) = await agent.register_schema_and_creddef(\n                \"employee id schema\",\n                version,\n                [\"employee_id\", \"name\", \"date\", \"position\"],\n                support_revocation=False,\n                revocation_registry_size=TAILS_FILE_COUNT,\n            )\n

For option (1) we want to replace the # TODO comment here:

            elif option == \"1\":\n                log_status(\"#13 Issue credential offer to X\")\n                # TODO credential offers\n

with the following code:

                agent.cred_attrs[cred_def_id] = {\n                    \"employee_id\": \"ACME0009\",\n                    \"name\": \"Alice Smith\",\n                    \"date\": date.isoformat(date.today()),\n                    \"position\": \"CEO\"\n                }\n                cred_preview = {\n                    \"@type\": CRED_PREVIEW_TYPE,\n                    \"attributes\": [\n                        {\"name\": n, \"value\": v}\n                        for (n, v) in agent.cred_attrs[cred_def_id].items()\n                    ],\n                }\n                offer_request = {\n                    \"connection_id\": agent.connection_id,\n                    \"comment\": f\"Offer on cred def id {cred_def_id}\",\n                    \"credential_preview\": cred_preview,\n                    \"filter\": {\"indy\": {\"cred_def_id\": cred_def_id}},\n                }\n                await agent.admin_POST(\n                    \"/issue-credential-2.0/send-offer\", offer_request\n                )\n

... and then locate the code that handles the credential request callback:

        if state == \"request-received\":\n            # TODO issue credentials based on offer preview in cred ex record\n            pass\n

... and replace the # TODO comment and pass statement with the following code to issue the credential as Acme offered it:

            # issue credentials based on offer preview in cred ex record\n            if not message.get(\"auto_issue\"):\n                await self.admin_POST(\n                    f\"/issue-credential-2.0/records/{cred_ex_id}/issue\",\n                    {\"comment\": f\"Issuing credential, exchange {cred_ex_id}\"},\n                )\n

Now you can run the Faber/Alice/Acme steps again. You should be able to receive a proof and then issue a credential to Alice.

"},{"location":"demo/AliceGetsAPhone/","title":"Alice Gets a Mobile Agent!","text":"

In this demo, we'll again use our familiar Faber ACA-Py agent to issue credentials to Alice, but this time Alice will use a mobile wallet. To do this we need to run the Faber agent on a publicly accessible port, and Alice will need a compatible mobile wallet. We'll provide pointers to where you can get them.

This demo also introduces revocation of credentials.

"},{"location":"demo/AliceGetsAPhone/#contents","title":"Contents","text":""},{"location":"demo/AliceGetsAPhone/#getting-started","title":"Getting Started","text":"

This demo can be run on your local machine or on Play with Docker (PWD), and will demonstrate credential exchange and proof exchange as well as revocation with a mobile agent. Both approaches (running locally and on PWD) will be described; for the most part the commands are the same, but there are a couple of different parameters you need to provide when starting up.

If you are not familiar with how revocation is currently implemented in Hyperledger Indy, this article provides a good background on the technique. A challenge with revocation as it is currently implemented in Hyperledger Indy is the need for the prover (the agent creating the proof) to download tails files associated with the credentials it holds.

"},{"location":"demo/AliceGetsAPhone/#get-a-mobile-agent","title":"Get a mobile agent","text":"

Of course for this, you need to have a mobile agent. To find, install and set up a compatible mobile agent, follow the instructions here.

"},{"location":"demo/AliceGetsAPhone/#running-locally-in-docker","title":"Running Locally in Docker","text":"

Open a new bash shell and in a project directory run the following:

git clone https://github.com/hyperledger/aries-cloudagent-python.git\ncd aries-cloudagent-python/demo\n

We'll come back to this in a minute, when we start the faber agent!

There are a couple of extra steps you need to take to prepare to run the Faber agent locally:

"},{"location":"demo/AliceGetsAPhone/#install-ngrok-and-jq","title":"Install ngrok and jq","text":"

ngrok is used to expose public endpoints for services running locally on your computer.

jq is a json parser that is used to automatically detect the endpoints exposed by ngrok.

You can install ngrok from here

You can download jq releases here
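As an example of how the two tools work together, a command like the following (assuming ngrok's local inspection API is on its default port 4040) extracts the public url of the first active tunnel:

# assumes the ngrok inspection API on its default port 4040\ncurl --silent http://localhost:4040/api/tunnels | jq -r '.tunnels[0].public_url'\n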

"},{"location":"demo/AliceGetsAPhone/#expose-services-publicly-using-ngrok","title":"Expose services publicly using ngrok","text":"

Note that this is only required when running docker on your local machine. When you run on PWD a public endpoint for your agent is exposed automatically.

Since the mobile agent will need some way to communicate with the agent running on your local machine in docker, we will need to create a publicly accessible url for some services on your machine. The easiest way to do this is with ngrok. Once ngrok is installed, create a tunnel to your local machine:

ngrok http 8020\n

This service is used for your local aca-py agent - it is the endpoint that is advertised for other Aries agents to connect to.

You will see something like this:

Forwarding                    http://abc123.ngrok.io -> http://localhost:8020\nForwarding                    https://abc123.ngrok.io -> http://localhost:8020\n

This creates a public url for port 8020 on your local machine.

Note that an ngrok process is created automatically for your tails server.

Keep this process running as we'll come back to it in a moment.

"},{"location":"demo/AliceGetsAPhone/#running-in-play-with-docker","title":"Running in Play With Docker","text":"

To run the necessary terminal sessions in your browser, go to the Docker playground service Play with Docker. Don't know about Play with Docker? Check this out to learn more.

Open a new bash shell and in a project directory run the following:

git clone https://github.com/hyperledger/aries-cloudagent-python.git\ncd aries-cloudagent-python/demo\n

We'll come back to this in a minute, when we start the faber agent!

"},{"location":"demo/AliceGetsAPhone/#run-an-instance-of-indy-tails-server","title":"Run an instance of indy-tails-server","text":"

For revocation to function, we need another component running that is used to store what are called tails files.

If you are not running with revocation enabled you can skip this step.

"},{"location":"demo/AliceGetsAPhone/#running-locally-in-a-bash-shell","title":"Running locally in a bash shell?","text":"

Open a new bash shell, and in a project directory, run:

git clone https://github.com/bcgov/indy-tails-server.git\ncd indy-tails-server/docker\n./manage build\n./manage start\n

This will run the required components for the tails server to function and make a tails server available on port 6543.

This will also automatically start an ngrok server that will expose a public url for your tails server - this is required to support mobile agents. The docker output will look something like this:

ngrok-tails-server_1  | t=2020-05-13T22:51:14+0000 lvl=info msg=\"started tunnel\" obj=tunnels name=\"command_line (http)\" addr=http://tails-server:6543 url=http://c5789aa0.ngrok.io\nngrok-tails-server_1  | t=2020-05-13T22:51:14+0000 lvl=info msg=\"started tunnel\" obj=tunnels name=command_line addr=http://tails-server:6543 url=https://c5789aa0.ngrok.io\n

Note the server name in the url=https://c5789aa0.ngrok.io parameter (https://c5789aa0.ngrok.io) - this is the external url for your tails server. Make sure you use the https url!

"},{"location":"demo/AliceGetsAPhone/#running-in-play-with-docker_1","title":"Running in Play with Docker?","text":"

Run the same steps on PWD as you would run locally (see above). Open a new shell (click on \"ADD NEW INSTANCE\") to run the tails server.

Note that with Play with Docker it can be challenging to capture the information you need from the log file as it scrolls by; you can try leaving off the --events option when you run the Faber agent to reduce the quantity of information logged to the screen.

"},{"location":"demo/AliceGetsAPhone/#run-faber-with-extra-parameters","title":"Run faber With Extra Parameters","text":""},{"location":"demo/AliceGetsAPhone/#running-locally-in-a-bash-shell_1","title":"Running locally in a bash shell?","text":"

If you are running in a local bash shell, navigate to the demo directory in your fork/clone of the Aries Cloud Agent Python repository and run:

TAILS_NETWORK=docker_tails-server LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber --aip 10 --revocation --events\n

(Note that we have to start faber with --aip 10 for compatibility with mobile clients.)

The TAILS_NETWORK parameter lets the demo script know how to connect to the tails server (which should be running in a separate shell on the same machine).

"},{"location":"demo/AliceGetsAPhone/#running-in-play-with-docker_2","title":"Running in Play with Docker?","text":"

If you are running in Play with Docker, navigate to the demo folder in the clone of Aries Cloud Agent Python and run the following:

PUBLIC_TAILS_URL=https://c4f7fbb85911.ngrok.io LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber --aip 10 --revocation --events\n

The PUBLIC_TAILS_URL parameter lets the demo script know how to connect to the tails server. This can be running in another PWD session, or even on your local machine - the ngrok endpoint is public and will map to the correct location.

Use the ngrok url for the tails server that you noted earlier.

Note that you must use the https url for the tails server endpoint.

Note: you may want to leave off the --events option when you run the Faber agent if you find you are getting too much logging output.

"},{"location":"demo/AliceGetsAPhone/#waiting-for-the-faber-agent-to-start","title":"Waiting for the Faber agent to start ...","text":"

The Preparing agent image... step on the first run takes a bit of time, so while we wait, let's look at the details of the commands. Running Faber is similar to the instructions in the Aries OpenAPI Demo \"Play with Docker\" section, except:

As part of its startup process, the agent will publish a revocation registry to the ledger.

Click here to view screenshot of the revocation registry on the ledger"},{"location":"demo/AliceGetsAPhone/#accept-the-invitation","title":"Accept the Invitation","text":"

When the Faber agent starts up it automatically creates an invitation and generates a QR code on the screen. On your mobile app, select \"SCAN CODE\" (or equivalent) and point your camera at the generated QR code. The mobile agent should automatically capture the code and ask you to confirm the connection. Confirm it.

Click here to view screenshot

The mobile agent will give you feedback on the connection process, something like \"A connection was added to your wallet\".

Click here to view screenshot Click here to view screenshot

Switch your browser back to Play with Docker. You should see that the connection has been established, and there is a prompt for what actions you want to take, e.g. \"Issue Credential\", \"Send Proof Request\" and so on.

Tip: If your screen is too small to display the QR code (this can happen in Play With Docker because the shell is only given a small portion of the browser) you can copy the invitation url to a site like https://www.the-qrcode-generator.com/ to convert the invitation url into a QR code that you can scan. Make sure you select the URL option, and copy the invitation_url, which will look something like:

https://abfde260.ngrok.io?c_i=eyJAdHlwZSI6ICJkaWQ6c292OkJ6Q2JzTlloTXJqSGlxWkRUVUFTSGc7c3BlYy9jb25uZWN0aW9ucy8xLjAvaW52aXRhdGlvbiIsICJAaWQiOiAiZjI2ZjA2YTItNWU1Mi00YTA5LWEwMDctOTNkODBiZTYyNGJlIiwgInJlY2lwaWVudEtleXMiOiBbIjlQRFE2alNXMWZwZkM5UllRWGhCc3ZBaVJrQmVKRlVhVmI0QnRQSFdWbTFXIl0sICJsYWJlbCI6ICJGYWJlci5BZ2VudCIsICJzZXJ2aWNlRW5kcG9pbnQiOiAiaHR0cHM6Ly9hYmZkZTI2MC5uZ3Jvay5pbyJ9\n

Or this:

http://ip10-0-121-4-bquqo816b480a4bfn3kg-8020.direct.play-with-docker.com?c_i=eyJAdHlwZSI6ICJkaWQ6c292OkJ6Q2JzTlloTXJqSGlxWkRUVUFTSGc7c3BlYy9jb25uZWN0aW9ucy8xLjAvaW52aXRhdGlvbiIsICJAaWQiOiAiZWI2MTI4NDUtYmU1OC00YTNiLTk2MGUtZmE3NDUzMGEwNzkyIiwgInJlY2lwaWVudEtleXMiOiBbIkFacEdoMlpIOTJVNnRFRTlmYk13Z3BqQkp3TEUzRFJIY1dCbmg4Y2FqdzNiIl0sICJzZXJ2aWNlRW5kcG9pbnQiOiAiaHR0cDovL2lwMTAtMC0xMjEtNC1icXVxbzgxNmI0ODBhNGJmbjNrZy04MDIwLmRpcmVjdC5wbGF5LXdpdGgtdm9uLnZvbnguaW8iLCAibGFiZWwiOiAiRmFiZXIuQWdlbnQifQ==\n

Note that this will use the ngrok endpoint if you are running locally, or your PWD endpoint if you are running on PWD.

"},{"location":"demo/AliceGetsAPhone/#issue-a-credential","title":"Issue a Credential","text":"

We will use the Faber console to issue a credential. This could be done using the Swagger API as we have done in the connection process; we'll leave that as an exercise for the user.

In the Faber console, select option 1 to send a credential to the mobile agent.

Click here to view screenshot

The Faber agent outputs details to the console; e.g.,

Faber      | Credential: state = credential-issued, cred_ex_id = ba3089d6-92da-4cb7-9062-7f24066b2a2a\nFaber      | Revocation registry ID: CMqNjZ8e59jDuBYcquce4D:4:CMqNjZ8e59jDuBYcquce4D:3:CL:50:faber.agent.degree_schema:CL_ACCUM:4f4fb2e4-3a59-45b1-8921-578d005a7ff6\nFaber      | Credential revocation ID: 1\nFaber      | Credential: state = done, cred_ex_id = ba3089d6-92da-4cb7-9062-7f24066b2a2a\n

The revocation registry id and credential revocation id only appear if revocation is active. If you are doing revocation, you will need the Revocation registry id later, so we recommend that you copy it now and paste it into a text file or someplace that you can access later. If you don't write it down, you can get the id from the Admin API using the GET /revocation/active-registry/{cred_def_id} endpoint, passing in the credential definition id (which you can get from the GET /credential-definitions/created endpoint).
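
For example, a sketch of that lookup from the command line, assuming the Faber admin API is reachable at http://localhost:8021 (the port used in the Docker-based demos below); <cred_def_id> is a placeholder for a value from the first call:

# List the credential definition ids Faber has created\ncurl -s http://localhost:8021/credential-definitions/created\n# Look up the active revocation registry for one of them\ncurl -s http://localhost:8021/revocation/active-registry/<cred_def_id>\n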

"},{"location":"demo/AliceGetsAPhone/#accept-the-credential","title":"Accept the Credential","text":"

The credential offer should automatically show up in the mobile agent. Accept the offered credential following the instructions provided by the mobile agent. That will look something like this:

Click here to view screenshot Click here to view screenshot Click here to view screenshot"},{"location":"demo/AliceGetsAPhone/#issue-a-presentation-request","title":"Issue a Presentation Request","text":"

We will use the Faber console to ask the mobile agent for a proof. This could be done using the Swagger API, but we'll leave that as an exercise for the user.

In the Faber console, select option 2 to send a proof request to the mobile agent.

Click here to view screenshot"},{"location":"demo/AliceGetsAPhone/#present-the-proof","title":"Present the Proof","text":"

The presentation (proof) request should automatically show up in the mobile agent. Follow the instructions provided by the mobile agent to prepare and send the proof back to Faber. That will look something like this:

Click here to view screenshot Click here to view screenshot Click here to view screenshot

If the mobile agent is able to successfully prepare and send the proof, you can go back to the Play with Docker terminal to see the status of the proof.

The process should \"just work\" for the non-revocation use case. If you are using revocation, your results may vary. As of writing this, we get failures on the wallet side with some mobile wallets, and on the Faber side with others (an error in the Indy SDK). As the results improve, we'll update this. Please let us know through GitHub issues if you have any problems running this.

"},{"location":"demo/AliceGetsAPhone/#review-the-proof","title":"Review the Proof","text":"

In the Faber console window, the proof should be received as validated.

Click here to view screenshot"},{"location":"demo/AliceGetsAPhone/#revoke-the-credential-and-send-another-proof-request","title":"Revoke the Credential and Send Another Proof Request","text":"

If you have enabled revocation, you can try revoking the credential and publishing its pending revoked status (faber options 5 and 6). For the revocation step, you will need the revocation registry identifier and the credential revocation identifier (which is 1 for the first credential you issued), as the Faber agent logged them to the console at credential issue.

Once that is done, try sending another proof request and see what happens! Experiment with immediate and pending publication. Note that immediate publication also publishes any pending revocations on its revocation registry.
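
A sketch of roughly equivalent admin API calls underlying those console options, assuming the Faber admin API is at http://localhost:8021 (identifiers are placeholders - use the values logged at credential issue):

# Revoke the credential, deferring publication (publish: false leaves it pending)\ncurl -s -X POST http://localhost:8021/revocation/revoke -H \"Content-Type: application/json\" -d '{\"rev_reg_id\": \"<rev_reg_id>\", \"cred_rev_id\": \"1\", \"publish\": false}'\n# Later, publish all pending revocations\ncurl -s -X POST http://localhost:8021/revocation/publish-revocations -H \"Content-Type: application/json\" -d '{}'\n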

Click here to view screenshot"},{"location":"demo/AliceGetsAPhone/#send-a-connectionless-proof-request","title":"Send a Connectionless Proof Request","text":"

A connectionless proof request works the same way as a regular proof request, however it does not require a connection to be established between the Verifier and Holder/Prover.

This is supported in the Faber demo; however, note that it will only work when running Faber on the Docker playground service Play with Docker. (This is because the Faber agent and controller both need to be exposed to the mobile agent.)

If you have gone through the above steps, you can delete the Faber connection in your mobile agent (however do not delete the credential that Faber issued to you).

Then in the faber demo, select option 2a - Faber will display a QR code which you can scan with your mobile agent. You will see the same proof request displayed in your mobile agent, which you can respond to.

Behind the scenes, the Faber controller delivers the proof request information (linked from the url encoded in the QR code) directly to your mobile agent, without establishing an agent-to-agent connection first. If you are interested in the underlying mechanics, you can review the faber.py code in the repository.

"},{"location":"demo/AliceGetsAPhone/#conclusion","title":"Conclusion","text":"

That\u2019s the Faber-Mobile Alice demo. Feel free to play with the Swagger API and experiment further to figure out what an instance of a controller has to do to make things work.

"},{"location":"demo/AliceWantsAJsonCredential/","title":"How to Issue JSON-LD Credentials using ACA-Py","text":"

ACA-Py has the capability to issue and verify both Indy and JSON-LD (W3C compliant) credentials.

The JSON-LD support is documented here - this document provides some additional detail on how to use the demo and admin API to issue and prove JSON-LD credentials.

"},{"location":"demo/AliceWantsAJsonCredential/#setup-agents-to-issue-json-ld-credentials","title":"Setup Agents to Issue JSON-LD Credentials","text":"

Clone this repository to a directory on your local machine:

git clone https://github.com/hyperledger/aries-cloudagent-python.git\ncd aries-cloudagent-python/demo\n

Open up a second shell (so you have 2 shells open in the demo directory) and in one shell:

LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber --did-exchange --aip 20 --cred-type json-ld\n

... and in the other:

LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo alice\n

Note that you start the faber agent with AIP 2.0 options. (When you specify --cred-type json-ld, faber sets aip to 20 automatically, so the --aip option is not strictly required.) Note as well the use of the LEDGER_URL. Technically, that should not be needed if we aren't doing anything with Indy ledger-based credentials. However, there must be something in the way that the Faber and Alice controllers start up that requires access to a ledger.

Also note that the above will only work with the /issue-credential-2.0/create-offer endpoint. If you want to use the /issue-credential-2.0/send endpoint - which automates each step of the credential exchange - you will need to include the --no-auto option when starting each of the alice and faber agents (since the alice and faber controllers also automatically respond to each step in the credential exchange).

(Alternatively, you can run the Alice and Faber agents locally; see the ./faber-local.sh and ./alice-local.sh scripts in the demo directory.)

Copy the \"invitation\" json text from the Faber shell and paste into the Alice shell to establish a connection between the two agents.

(If you are running with --no-auto you will also need to call the /connections/{conn_id}/accept-invitation endpoint in alice's admin api swagger page.)

Now open up two browser windows to the Faber and Alice admin api swagger pages.

Using the Faber admin API, you have to create a DID with the appropriate DID method (\"sov\" or \"key\") and key type (\"ed25519\" or \"bls12381g2\").

Note that \"did:sov\" must be a public DID (i.e. registered on the ledger) but \"did:key\" is not.

For example, in Faber's swagger page call the /wallet/did/create endpoint with the following payload:

{\n  \"method\": \"key\",\n  \"options\": {\n    \"key_type\": \"bls12381g2\" // or ed25519\n  }\n}\n

This will return something like:

{\n  \"result\": {\n    \"did\": \"did:key:zUC71KdwBhq1FioWh53VXmyFiGpewNcg8Ld42WrSChpMzzskRWwHZfG9TJ7hPj8wzmKNrek3rW4ZkXNiHAjVchSmTr9aNUQaArK3KSkTySzjEM73FuDV62bjdAHF7EMnZ27poCE\",\n    \"verkey\": \"mV6482Amu6wJH8NeMqH3QyTjh6JU6N58A8GcirMZG7Wx1uyerzrzerA2EjnhUTmjiSLAp6CkNdpkLJ1NTS73dtcra8WUDDBZ3o455EMrkPyAtzst16RdTMsGe3ctyTxxJav\",\n    \"posture\": \"wallet_only\",\n    \"key_type\": \"bls12381g2\",\n    \"method\": \"key\"\n  }\n}\n

You do not create a schema or cred def for a JSON-LD credential (these are only required for \"indy\" credentials).

You will need to create a DID as above for Alice as well (/wallet/did/create etc ...).
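
For example, a sketch of that same call from the command line, assuming Alice's admin API is at http://localhost:8031 (the port used for Alice elsewhere in these demos):

curl -s -X POST http://localhost:8031/wallet/did/create -H \"Content-Type: application/json\" -d '{\"method\": \"key\", \"options\": {\"key_type\": \"bls12381g2\"}}'\n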

Congratulations, you are now ready to start issuing JSON-LD credentials!

To issue a credential, use the /issue-credential-2.0/send-offer endpoint. (You can also use the /issue-credential-2.0/send endpoint if, as mentioned above, you included the --no-auto option when starting both of the agents.)

You can test with this example payload (just replace the \"connection_id\", \"issuer\" key, \"credentialSubject.id\" and \"proofType\" with appropriate values):

{\n  \"connection_id\": \"4fba2ce5-b411-4ecf-aa1b-ec66f3f6c903\",\n  \"filter\": {\n    \"ld_proof\": {\n      \"credential\": {\n        \"@context\": [\n          \"https://www.w3.org/2018/credentials/v1\",\n          \"https://www.w3.org/2018/credentials/examples/v1\"\n        ],\n        \"type\": [\"VerifiableCredential\", \"UniversityDegreeCredential\"],\n        \"issuer\": \"did:key:zUC71KdwBhq1FioWh53VXmyFiGpewNcg8Ld42WrSChpMzzskRWwHZfG9TJ7hPj8wzmKNrek3rW4ZkXNiHAjVchSmTr9aNUQaArK3KSkTySzjEM73FuDV62bjdAHF7EMnZ27poCE\",\n        \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n        \"credentialSubject\": {\n          \"id\": \"did:key:aksdkajshdkajhsdkjahsdkjahsdj\",\n          \"givenName\": \"Sally\",\n          \"familyName\": \"Student\",\n          \"degree\": {\n            \"type\": \"BachelorDegree\",\n            \"degreeType\": \"Undergraduate\",\n            \"name\": \"Bachelor of Science and Arts\"\n          },\n          \"college\": \"Faber College\"\n        }\n      },\n      \"options\": {\n        \"proofType\": \"BbsBlsSignature2020\"\n      }\n    }\n  }\n}\n

Note that if you have the \"auto\" settings on, this is all you need to do. Otherwise you need to call the /send-request, /store, etc endpoints to complete the protocol.

To see the issued credential, call the /credentials/w3c endpoint on Alice's admin api - this will return something like:

{\n  \"results\": [\n    {\n      \"contexts\": [\n        \"https://w3id.org/security/bbs/v1\",\n        \"https://www.w3.org/2018/credentials/examples/v1\",\n        \"https://www.w3.org/2018/credentials/v1\"\n      ],\n      \"types\": [\n        \"UniversityDegreeCredential\",\n        \"VerifiableCredential\"\n      ],\n      \"schema_ids\": [],\n      \"issuer_id\": \"did:key:zUC71KdwBhq1FioWh53VXmyFiGpewNcg8Ld42WrSChpMzzskRWwHZfG9TJ7hPj8wzmKNrek3rW4ZkXNiHAjVchSmTr9aNUQaArK3KSkTySzjEM73FuDV62bjdAHF7EMnZ27poCE\",\n      \"subject_ids\": [],\n      \"proof_types\": [\n        \"BbsBlsSignature2020\"\n      ],\n      \"cred_value\": {\n        \"@context\": [\n          \"https://www.w3.org/2018/credentials/v1\",\n          \"https://www.w3.org/2018/credentials/examples/v1\",\n          \"https://w3id.org/security/bbs/v1\"\n        ],\n        \"type\": [\n          \"VerifiableCredential\",\n          \"UniversityDegreeCredential\"\n        ],\n        \"issuer\": \"did:key:zUC71Kd...poCE\",\n        \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n        \"credentialSubject\": {\n          \"id\": \"did:key:aksdkajshdkajhsdkjahsdkjahsdj\",\n          \"givenName\": \"Sally\",\n          \"familyName\": \"Student\",\n          \"degree\": {\n            \"type\": \"BachelorDegree\",\n            \"degreeType\": \"Undergraduate\",\n            \"name\": \"Bachelor of Science and Arts\"\n          },\n          \"college\": \"Faber College\"\n        },\n        \"proof\": {\n          \"type\": \"BbsBlsSignature2020\",\n          \"proofPurpose\": \"assertionMethod\",\n          \"verificationMethod\": \"did:key:zUC71Kd...poCE#zUC71Kd...poCE\",\n          \"created\": \"2021-05-19T16:19:44.458170\",\n          \"proofValue\": \"g0weLyw2Q+niQ4pGfiXB...tL9C9ORhy9Q==\"\n        }\n      },\n      \"cred_tags\": {},\n      \"record_id\": \"365ab87b12f74b2db784fdd4db8419f5\"\n    }\n  ]\n}\n

If you don't see the credential in your wallet, look up the credential exchange record (in Alice's admin API - /issue-credential-2.0/records) and check the state. If the state is credential-received, the credential has been received but not stored; in this case, just call the /store endpoint for this credential exchange.
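
A sketch of those checks from the command line, again assuming Alice's admin API is at http://localhost:8031 (<cred_ex_id> is a placeholder for the id from the first call):

# Check the state of Alice's credential exchange records\ncurl -s http://localhost:8031/issue-credential-2.0/records\n# If a record is stuck in credential-received, store the credential\ncurl -s -X POST http://localhost:8031/issue-credential-2.0/records/<cred_ex_id>/store -H \"Content-Type: application/json\" -d '{}'\n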

"},{"location":"demo/AliceWantsAJsonCredential/#building-more-realistic-json-ld-credentials","title":"Building More Realistic JSON-LD Credentials","text":"

The above example uses the https://www.w3.org/2018/credentials/examples/v1 context, which should never be used in a real application.

To build credentials in real life, you first determine which attributes you need and then include the appropriate contexts.

"},{"location":"demo/AliceWantsAJsonCredential/#context-schemaorg","title":"Context schema.org","text":"

You can use attributes defined on schema.org. Note that this is NOT RECOMMENDED (it is included here for illustrative purposes only), since individual attributes can't be validated (see the comment later on).

You first include https://schema.org in the @context block of the credential as follows:

\"@context\": [\n  \"https://www.w3.org/2018/credentials/v1\",\n  \"https://schema.org\"\n],\n

Then you review the attributes and objects defined by https://schema.org and decide what you need to include in your credential.

For example, to issue a credential with givenName, familyName and alumniOf attributes, submit the following:

{\n  \"connection_id\": \"ad35a4d8-c84b-4a4f-a83f-1afbf134b8b9\",\n  \"filter\": {\n    \"ld_proof\": {\n      \"credential\": {\n        \"@context\": [\n          \"https://www.w3.org/2018/credentials/v1\",\n          \"https://schema.org\"\n        ],\n        \"type\": [\"VerifiableCredential\", \"Person\"],\n        \"issuer\": \"did:key:zUC71pj2gpDLfcZ9DE1bMtjZGWCSLhkQsUCaKjqXtCftGkz27894pEX9VvGNiFsaV67gqv2TEPQ2aDaDDdTDNp42LfDdK1LaWSBCfzsQEyaiR1zjZm1RtoRu1ZM6v6vz4TiqDgU\",\n        \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n        \"credentialSubject\": {\n          \"id\": \"did:key:aksdkajshdkajhsdkjahsdkjahsdj\",\n          \"givenName\": \"Sally\",\n          \"familyName\": \"Student\",\n          \"alumniOf\": \"Example University\"\n        }\n      },\n      \"options\": {\n        \"proofType\": \"BbsBlsSignature2020\"\n      }\n    }\n  }\n}\n

Note that with https://schema.org, if you include attributes that aren't defined by any context, you will not get an error. For example you can try replacing the credentialSubject in the above with:

\"credentialSubject\": {\n  \"id\": \"did:key:aksdkajshdkajhsdkjahsdkjahsdj\",\n  \"givenName\": \"Sally\",\n  \"familyName\": \"Student\",\n  \"alumniOf\": \"Example University\",\n  \"someUndefinedAttribute\": \"the value of the attribute\"\n}\n

... and you might expect the credential issuance to fail; however, https://schema.org defines a @vocab from which, by default, all terms derive (see here), so the issuance succeeds without error.

You can include more complex schemas, for example to use the schema.org Person schema (which includes givenName and familyName):

{\n  \"connection_id\": \"ad35a4d8-c84b-4a4f-a83f-1afbf134b8b9\",\n  \"filter\": {\n    \"ld_proof\": {\n      \"credential\": {\n        \"@context\": [\n          \"https://www.w3.org/2018/credentials/v1\",\n          \"https://schema.org\"\n        ],\n        \"type\": [\"VerifiableCredential\", \"Person\"],\n        \"issuer\": \"did:key:zUC71pj2gpDLfcZ9DE1bMtjZGWCSLhkQsUCaKjqXtCftGkz27894pEX9VvGNiFsaV67gqv2TEPQ2aDaDDdTDNp42LfDdK1LaWSBCfzsQEyaiR1zjZm1RtoRu1ZM6v6vz4TiqDgU\",\n        \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n        \"credentialSubject\": {\n          \"id\": \"did:key:aksdkajshdkajhsdkjahsdkjahsdj\",\n          \"student\": {\n            \"type\": \"Person\",\n            \"givenName\": \"Sally\",\n            \"familyName\": \"Student\",\n            \"alumniOf\": \"Example University\"\n          }\n        }\n      },\n      \"options\": {\n        \"proofType\": \"BbsBlsSignature2020\"\n      }\n    }\n  }\n}\n
"},{"location":"demo/AliceWantsAJsonCredential/#credential-specific-contexts","title":"Credential-Specific Contexts","text":"

The recommended approach to defining credentials is to define a credential-specific vocabulary (or make use of existing ones). (Note that these can include references to https://schema.org, you just shouldn't use this directly in your credential.)

"},{"location":"demo/AliceWantsAJsonCredential/#credential-issue-example","title":"Credential Issue Example","text":"

The following example uses the W3C citizenship context to issue a PermanentResident credential (replace the connection_id, issuer and credentialSubject.id with your local values):

{\n    \"connection_id\": \"41acd909-9f45-4c69-8641-8146e0444a57\",\n    \"filter\": {\n        \"ld_proof\": {\n            \"credential\": {\n                \"@context\": [\n                    \"https://www.w3.org/2018/credentials/v1\",\n                    \"https://w3id.org/citizenship/v1\"\n                ],\n                \"type\": [\n                    \"VerifiableCredential\",\n                    \"PermanentResident\"\n                ],\n                \"id\": \"https://credential.example.com/residents/1234567890\",\n                \"issuer\": \"did:key:zUC7Dus47jW5Avcne8LLsUvJSdwspmErgehxMWqZZy8eSSNoHZ4x8wgs77sAmQtCADED5RQP1WWhvt7KFNm6GGMxdSGpKu3PX6R9a61G9VoVsiFoRf1yoK6pzhq9jtFP3e2SmU9\",\n                \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n                \"credentialSubject\": {\n                    \"type\": [\n                        \"PermanentResident\"\n                    ],\n                    \"id\": \"did:key:zUC7CXi82AXbkv4SvhxDxoufrLwQSAo79qbKiw7omCQ3c4TyciDdb9s3GTCbMvsDruSLZX6HNsjGxAr2SMLCNCCBRN5scukiZ4JV9FDPg5gccdqE9nfCU2zUcdyqRiUVnn9ZH83\",\n                    \"givenName\": \"ALICE\",\n                    \"familyName\": \"SMITH\",\n                    \"gender\": \"Female\",\n                    \"birthCountry\": \"Bahamas\",\n                    \"birthDate\": \"1958-07-17\"\n                }\n            },\n            \"options\": {\n                \"proofType\": \"BbsBlsSignature2020\"\n            }\n        }\n    }\n}\n

Copy and paste this content into Faber's /issue-credential-2.0/send-offer endpoint, and it will kick off the exchange process to issue a W3C credential to Alice.

In Alice's swagger page, submit the /credentials/w3c endpoint to see the issued credential.
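
Equivalently, a command line sketch (Alice's admin API assumed at http://localhost:8031; an empty body returns all stored W3C credentials):

curl -s -X POST http://localhost:8031/credentials/w3c -H \"Content-Type: application/json\" -d '{}'\n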

"},{"location":"demo/AliceWantsAJsonCredential/#request-presentation-example","title":"Request Presentation Example","text":"

To request a proof, submit the following (with appropriate connection_id) to Faber's /present-proof-2.0/send-request endpoint:

{\n    \"comment\": \"string\",\n    \"connection_id\": \"41acd909-9f45-4c69-8641-8146e0444a57\",\n    \"presentation_request\": {\n        \"dif\": {\n            \"options\": {\n                \"challenge\": \"3fa85f64-5717-4562-b3fc-2c963f66afa7\",\n                \"domain\": \"4jt78h47fh47\"\n            },\n            \"presentation_definition\": {\n                \"id\": \"32f54163-7166-48f1-93d8-ff217bdb0654\",\n                \"format\": {\n                    \"ldp_vp\": {\n                        \"proof_type\": [\n                            \"BbsBlsSignature2020\"\n                        ]\n                    }\n                },\n                \"input_descriptors\": [\n                    {\n                        \"id\": \"citizenship_input_1\",\n                        \"name\": \"EU Driver's License\",\n                        \"schema\": [\n                            {\n                                \"uri\": \"https://www.w3.org/2018/credentials#VerifiableCredential\"\n                            },\n                            {\n                                \"uri\": \"https://w3id.org/citizenship#PermanentResident\"\n                            }\n                        ],\n                        \"constraints\": {\n                            \"limit_disclosure\": \"required\",\n                            \"is_holder\": [\n                                {\n                                    \"directive\": \"required\",\n                                    \"field_id\": [\n                                        \"1f44d55f-f161-4938-a659-f8026467f126\"\n                                    ]\n                                }\n                            ],\n                            \"fields\": [\n                                {\n                                    \"id\": \"1f44d55f-f161-4938-a659-f8026467f126\",\n                                    \"path\": [\n                                        \"$.credentialSubject.familyName\"\n                                    ],\n                                    \"purpose\": \"The claim must be from one of the specified issuers\",\n                                    \"filter\": {\n                                        \"const\": \"SMITH\"\n                                    }\n                                },\n                                {\n                                    \"path\": [\n                                        \"$.credentialSubject.givenName\"\n                                    ],\n                                    \"purpose\": \"The claim must be from one of the specified issuers\"\n                                }\n                            ]\n                        }\n                    }\n                ]\n            }\n        }\n    }\n}\n

Note that the is_holder property can be used by Faber to verify that the holder of the credential is the same as the subject of the attribute (familyName). Later on, the received presentation will be signed and verifiable only if is_holder with \"directive\": \"required\" is included in the presentation request.

There are several ways that Alice can respond with a presentation. The simplest will just tell ACA-Py to put the presentation together and send it to Faber - submit the following to Alice's /present-proof-2.0/records/{pres_ex_id}/send-presentation:

{\n  \"dif\": {\n  }\n}\n

There are two ways that Alice can provide some constraints to tell ACA-Py which credential(s) to include in the presentation.

Firstly, Alice can include the received presentation request in the body to the /send-presentation endpoint, and can include additional constraints on the fields:

{\n  \"dif\": {\n    \"issuer_id\": \"did:key:zUC7Dus47jW5Avcne8LLsUvJSdwspmErgehxMWqZZy8eSSNoHZ4x8wgs77sAmQtCADED5RQP1WWhvt7KFNm6GGMxdSGpKu3PX6R9a61G9VoVsiFoRf1yoK6pzhq9jtFP3e2SmU9\",\n    \"presentation_definition\": {\n      \"format\": {\n        \"ldp_vp\": {\n          \"proof_type\": [\n            \"BbsBlsSignature2020\"\n          ]\n        }\n      },\n      \"id\": \"32f54163-7166-48f1-93d8-ff217bdb0654\",\n      \"input_descriptors\": [\n        {\n          \"id\": \"citizenship_input_1\",\n          \"name\": \"Some kind of citizenship check\",\n          \"schema\": [\n            {\n              \"uri\": \"https://www.w3.org/2018/credentials#VerifiableCredential\"\n            },\n            {\n              \"uri\": \"https://w3id.org/citizenship#PermanentResident\"\n            }\n          ],\n          \"constraints\": {\n            \"limit_disclosure\": \"required\",\n            \"is_holder\": [\n                {\n                    \"directive\": \"required\",\n                    \"field_id\": [\n                        \"1f44d55f-f161-4938-a659-f8026467f126\",\n                        \"332be361-823a-4863-b18b-c3b930c5623e\"\n                    ],\n                }\n            ],\n            \"fields\": [\n              {\n                \"id\": \"1f44d55f-f161-4938-a659-f8026467f126\",\n                \"path\": [\n                  \"$.credentialSubject.familyName\"\n                ],\n                \"purpose\": \"The claim must be from one of the specified issuers\",\n                \"filter\": {\n                  \"const\": \"SMITH\"\n                }\n              },\n              {\n                  \"id\": \"332be361-823a-4863-b18b-c3b930c5623e\",\n                  \"path\": [\n                      \"$.id\"\n                  ],\n                  \"purpose\": \"Specify the id of the credential to present\",\n                  \"filter\": {\n                      \"const\": \"https://credential.example.com/residents/1234567890\"\n                  }\n              }\n            ]\n          }\n        }\n      ]\n    }\n  }\n}\n

Note the additional constraint on \"path\": [ \"$.id\" ] - this restricts the presented credential to the one with the matching credential.id. Any credential attribute can be used; however, this presumes that the issued credentials contain a uniquely identifying attribute.

Another option is for Alice to specify the credential record_id - this is an internal value within ACA-Py:

{\n  \"dif\": {\n    \"issuer_id\": \"did:key:zUC7Dus47jW5Avcne8LLsUvJSdwspmErgehxMWqZZy8eSSNoHZ4x8wgs77sAmQtCADED5RQP1WWhvt7KFNm6GGMxdSGpKu3PX6R9a61G9VoVsiFoRf1yoK6pzhq9jtFP3e2SmU9\",\n    \"presentation_definition\": {\n      \"format\": {\n        \"ldp_vp\": {\n          \"proof_type\": [\n            \"BbsBlsSignature2020\"\n          ]\n        }\n      },\n      \"id\": \"32f54163-7166-48f1-93d8-ff217bdb0654\",\n      \"input_descriptors\": [\n        {\n          \"id\": \"citizenship_input_1\",\n          \"name\": \"Some kind of citizenship check\",\n          \"schema\": [\n            {\n              \"uri\": \"https://www.w3.org/2018/credentials#VerifiableCredential\"\n            },\n            {\n              \"uri\": \"https://w3id.org/citizenship#PermanentResident\"\n            }\n          ],\n          \"constraints\": {\n            \"limit_disclosure\": \"required\",\n            \"fields\": [\n              {\n                \"path\": [\n                  \"$.credentialSubject.familyName\"\n                ],\n                \"purpose\": \"The claim must be from one of the specified issuers\",\n                \"filter\": {\n                  \"const\": \"SMITH\"\n                }\n              }\n            ]\n          }\n        }\n      ]\n    },\n    \"record_ids\": {\n      \"citizenship_input_1\": [ \"1496316f972e40cf9b46b35971182337\" ]\n    }\n  }\n}\n
"},{"location":"demo/AliceWantsAJsonCredential/#another-credential-issue-example","title":"Another Credential Issue Example","text":"

The following credential is based on the W3C Vaccination schema:

{\n  \"connection_id\": \"ad35a4d8-c84b-4a4f-a83f-1afbf134b8b9\",\n  \"filter\": {\n    \"ld_proof\": {\n      \"credential\": {\n        \"@context\": [\n          \"https://www.w3.org/2018/credentials/v1\",\n          \"https://w3id.org/vaccination/v1\"\n        ],\n        \"type\": [\"VerifiableCredential\", \"VaccinationCertificate\"],\n        \"issuer\": \"did:key:zUC71pj2gpDLfcZ9DE1bMtjZGWCSLhkQsUCaKjqXtCftGkz27894pEX9VvGNiFsaV67gqv2TEPQ2aDaDDdTDNp42LfDdK1LaWSBCfzsQEyaiR1zjZm1RtoRu1ZM6v6vz4TiqDgU\",\n        \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n        \"credentialSubject\": {\n            \"id\": \"did:key:aksdkajshdkajhsdkjahsdkjahsdj\",\n            \"type\": \"VaccinationEvent\",\n            \"batchNumber\": \"1183738569\",\n            \"administeringCentre\": \"MoH\",\n            \"healthProfessional\": \"MoH\",\n            \"countryOfVaccination\": \"NZ\",\n            \"recipient\": {\n              \"type\": \"VaccineRecipient\",\n              \"givenName\": \"JOHN\",\n              \"familyName\": \"SMITH\",\n              \"gender\": \"Male\",\n              \"birthDate\": \"1958-07-17\"\n            },\n            \"vaccine\": {\n              \"type\": \"Vaccine\",\n              \"disease\": \"COVID-19\",\n              \"atcCode\": \"J07BX03\",\n              \"medicinalProductName\": \"COVID-19 Vaccine Moderna\",\n              \"marketingAuthorizationHolder\": \"Moderna Biotech\"\n            }\n        }\n      },\n      \"options\": {\n        \"proofType\": \"BbsBlsSignature2020\"\n      }\n    }\n  }\n}\n
"},{"location":"demo/Aries-Workshop/","title":"A Hyperledger Aries/AnonCreds Workshop Using Traction Sandbox","text":""},{"location":"demo/Aries-Workshop/#introduction","title":"Introduction","text":"

Welcome! This workshop contains a sequence of four labs that gets you from nothing to issuing, receiving, holding, requesting, presenting, and verifying AnonCreds Verifiable Credentials--no technical experience required! If you just walk through the steps exactly as laid out, it only takes about 20 minutes to complete the whole process. Of course, we hope you get curious, experiment, and learn a lot more about the information provided in the labs.

To run the labs, you\u2019ll need a Hyperledger Aries agent to be able to issue and verify verifiable credentials. For that, we're providing you with your very own tenant in a BC Gov \"sandbox\" deployment of an open source tool called Traction, a managed, production-ready, multi-tenant Aries agent built on Hyperledger Aries Cloud Agent Python (ACA-Py). Sandbox in this context means that you can do whatever you want with your tenant agent, but we make no promises about the stability of the environment (it\u2019s pretty robust, though, so chances are things will work...), and on the 1st and 15th of each month, we\u2019ll reset the entire sandbox and all your work will be gone \u2014 poof! Keep that in mind as you use the Traction sandbox. We recommend you keep a notebook at your side, tracking the important learnings you want to remember. As you create code that uses your sandbox agent, make sure you create simple-to-update configurations so that after a reset, you can create a new tenant agent, recreate the objects you need (each of which will have new identifiers), update your configuration, and off you go.

The four labs in this workshop are laid out as follows:

Once you are done with the labs, there are suggestions for next steps for developers, such as experimenting with the Traction/ACA-Py OpenAPI.

Jump in!

"},{"location":"demo/Aries-Workshop/#lab-1-getting-a-traction-tenant-agent-and-mobile-wallet","title":"Lab 1: Getting a Traction Tenant Agent and Mobile Wallet","text":"

Let\u2019s start by getting your two agents \u2014 an Aries Mobile Wallet and an Aries Issuer/Verifier agent.

"},{"location":"demo/Aries-Workshop/#lab-1-steps-to-follow","title":"Lab 1: Steps to Follow","text":"
  1. Get a compatible Aries Mobile Wallet to use with your Aries Traction tenant. There are a number to choose from. We suggest that you use one of these:
    1. BC Wallet from the Government of British Columbia
    2. Orbit Wallet from Northern Block
  2. Click this Traction Sandbox link to go to the Sandbox login page to create your own Traction Tenant Aries agent. Once there, do the following:
    1. Click \"Create Request!\", fill in at least the required form fields, and click \"Submit\".
    2. Your new Traction Tenant's Wallet ID and Wallet Key will be displayed. SAVE THOSE IMMEDIATELY SO THAT YOU HAVE THEM TO ACCESS YOUR TENANT. You only get to see/save them once!
      1. You will need those each time you open your Traction Tenant agent. Putting them into a Password Manager is a great idea!
      2. We can't recover your Wallet ID and Wallet Key, so if you lose them you have to start the entire process again.
  3. Go back to the Traction Sandbox login and this time, use your Wallet ID/Key to log in to your brand new Traction Tenant agent. You might want to bookmark the site.
  4. Make your new Traction Tenant a verifiable credential issuer by:
    1. Clicking on the \"User\" (folder icon) menu (top right), and choosing \"Profile\"
    2. Clicking the \u201cBCovrin Test\u201d Action in the Endorser section.
      1. When done, you will have your own public DID (displayed on the page) that has been published on the BCovrin Test Ledger (can you find it?). Your DID will be used to publish other AnonCreds transactions so you can issue verifiable credentials.
  5. Connect from your Traction Tenant to your mobile Wallet app by:
    1. Selecting on the left menu \"Connections\" and then \"Invitations\"
    2. Click the \"Single Use Connection\" button, give the connection an alias (maybe \"My Wallet\"), and click \"Submit.\"
    3. Scan the resulting QR code with your initialized mobile Wallet and follow the prompts. Once you connect, type a quick \"Hi!\" message to the Traction Agent and you should get an automated message back.
    4. Check the Traction Tenant menu item \"Connections\u2192Connections\" to see the status of your connection \u2013 it should be active.
    5. If anything didn't work in the sequence, here are some things to try:
    6. If the Traction Tenant connection is not active, it's possible that your wallet was not able to message back to your Traction Tenant. Check your wallet internet connection.
    7. We've created a Traction Sandbox Workshop FAQ and Questions GitHub issue that you can check to see if your question is already answered; if not, you can add your question as a comment on the issue, and we'll get back to you.

That's it--you should be ready to start issuing and receiving verifiable credentials.

"},{"location":"demo/Aries-Workshop/#lab-2-getting-ready-to-be-an-issuer","title":"Lab 2: Getting Ready To Be An Issuer","text":"

::: todo To Do: Update lab to use this schema: H7W22uhD4ueQdGaGeiCgaM:2:student id:1.0.0 :::

In this lab we will use our Traction Tenant agent to create and publish an AnonCreds Schema object (or two), and then use that Schema to create and publish a Credential Definition. All of the AnonCreds objects will be published on the BCovrin (pronounced \u201cBe Sovereign\u201d) Test network. For those new to AnonCreds:

"},{"location":"demo/Aries-Workshop/#lab-2-steps-to-follow","title":"Lab 2: Steps to Follow","text":"
  1. Log into your Traction Sandbox. You did record your Wallet ID and Key, right?
    1. If not \u2014 jump back to Lab 1 to create a new Traction Tenant, and to create a connection to your mobile Wallet.
  2. Create a Schema:
    1. Click the menu item \u201cConfiguration\u201d and then \u201cSchema Storage\u201d.
    2. Click \u201cAdd Schema From Ledger\u201d and fill in the Schema Id with the value H7W22uhD4ueQdGaGeiCgaM:2:student id:1.0.0.
      1. By doing this, you (as the issuer) will be using a previously published schema. Click here to see the schema on the ledger.
    3. To see the details about your schema, hit the Expand (>) link, and then the subsequent > to \u201cView Raw Content.\"
  3. With the schema in place, it's time to become an issuer. To do that, you have to create a Credential Definition. Click on the \u201cCredential\u201d icon in the \u201cCredential Definition\u201d column of your schema to create the Credential Definition (CredDef) for the Schema. The \u201cTag\u201d can be any value you want \u2014 it is an issuer-defined part of the identifier for the Credential Definition. Wait for the operation to complete. Click the \u201cRefresh\u201d button if needed to see that the Create icon has been replaced with the identifier for your CredDef.
  4. Move to the menu item \"Configuration \u2192 Credential Definition Storage\" to see the CredDef you created. If you want, expand it to view the raw data. In this case, the raw data does not show the actual CredDef, but rather the Traction data about the CredDef. You can again use the BCovrin Test ledger browser to see your new, published CredDef.

Completed all the steps? Great! Feel free to create a second Schema and Cred Def, ideally one related to your first. That way you can try out a presentation request that pulls data from both credentials! When you create the second schema, use the \"Create Schema\" button, and add the claims you want to have in your new type of credential.

"},{"location":"demo/Aries-Workshop/#lab-3-issuing-credentials-to-a-mobile-wallet","title":"Lab 3: Issuing Credentials to a Mobile Wallet","text":"

In this lab we will use our Traction Tenant agent to issue instances of the credentials we created in Lab 2 to our Mobile Wallet we downloaded in Lab 1.

"},{"location":"demo/Aries-Workshop/#lab-3-steps-to-follow","title":"Lab 3: Steps to Follow","text":"
  1. If necessary, log into your Traction Sandbox with your Wallet ID and Key.
  2. Issue a Credential:
    1. Click the menu item \u201cIssuance\u201d and then \u201cOffer a Credential\u201d.
    2. Select the Credential Definition of the credential you want to issue.
    3. Select the Contact Name to whom you are issuing the credential\u2014the alias of the connection you made to your mobile Wallet.
    4. Click the \u201cEnter Credential Value\u201d to popup a data entry form for the attributes to populate.
      1. When you enter the date values that you want to use in predicates (e.g., \u201cOlder than 19\u201d), put the date into the following format: YYYYMMDD, e.g., 20231001. You cannot use a string date format, such as \u201cYYYY-MM-DD\u201d if you want to use the attribute for predicate checking -- the value must be an integer.
      2. We suggest you use realistic dates for Date of Birth (DOB) (e.g., 20-ish years in the past) and expiry (e.g., 3 years in the future) to make using them in predicates easier.
    5. Click \u201cSave\u201d when you are finished entering the attributes and review the information you have entered.
    6. When you are ready, click \u201cSend Offer\u201d to initiate the issuance of the credential.
  3. Receive the Credential:
    1. Open up your mobile Wallet and look for a notification about the credential offer. Where that appears may vary based on the Wallet you are using.
    2. Review the offer and then click the \u201cAccept\u201d button.
    3. Your new credential should be saved to your wallet.
  4. Review the Issuance Data:
    1. Back in your Traction Tenant, refresh the list to see the updated status of the issuance you just completed (it should be \u201ccredential_issued\u201d or \u201ccredential_acked\u201d, depending on the Wallet you are using).
    2. Expand the issuance and again select \u201cView Raw Content\u201d to see the data that was exchanged between the Traction Issuer and the Wallet.
  5. If you want, repeat the process for other credentials types your Traction Tenant is capable of issuing.

That\u2019s it! Pretty easy, eh? Of course, in a real issuer, the data would (very, very) likely not be hand-entered, but instead come from a backend system. Traction has an HTTP API (protected by the same Wallet ID and Key) that can be used from an application, to do things like this automatically. The Traction API embeds the ACA-Py API, so everything you can do in \u201cplain ACA-Py\u201d can also be done in Traction.

"},{"location":"demo/Aries-Workshop/#lab-4-requesting-and-sending-presentations","title":"Lab 4: Requesting and Sending Presentations","text":"

In this lab we will use our Traction Tenant agent as a verifier, requesting presentations, and your mobile Wallet as the holder responding with presentations that satisfy the requests. The user interface is a little rougher for this lab (you\u2019ll be dealing with JSON), but it should still be easy enough to do.

"},{"location":"demo/Aries-Workshop/#lab-4-steps-to-follow","title":"Lab 4: Steps to Follow","text":"
  1. If necessary, log into your Traction Sandbox with your Wallet ID and Key.
  2. Create and send a presentation request:
    1. Click the menu item \u201cVerification\u201d and then the button \u201cCreate Presentation Request\u201d.
    2. Select the Connection to whom you are sending the request\u2014the alias of the connection you made to your mobile Wallet.
    3. Update the example Presentation Request to match the credential that you want to request. Keep it simple for your first request\u2014it\u2019s easy to iterate in Traction to make your request more complicated. If you used the schema we suggested in Lab 1, just use the default presentation request. It should just work! If not, start from it, and:
      1. Update the value of \u201cschema_name\u201d to the name(s) of the schema for the credential(s) you issued.
      2. Update the group name(s) to something that makes sense for your credential(s) and make sure the attributes listed match your credential(s).
      3. Update (or perhaps remove) the \u201crequest_predicates\u201d JSON item, if it is not applicable to your credential.
    4. Update the optional fields (\u201cAuto Verify\u201d and \u201cOptional Comment\u201d) as you see fit. The \u201cOptional Comment\u201d goes into the list of Verifications so you can keep track of the different presentation requests you create.
    5. Click \u201cSubmit\u201d when your presentation request is ready.
  3. Respond to the Presentation Request:
    1. Open up your mobile Wallet and look for a notification about receiving a presentation request. Where that appears may vary based on the Wallet you are using.
    2. Review the information you are being asked to share, and then click the \u201cShare\u201d button to send the presentation.
  4. Review the Presentation Request Result:
    1. Back in your Traction Tenant, refresh the Verifications list to see the updated status of the presentation request you just completed. It should be something positive, like \u201cpresentation_received\u201d if all went well. It may be different depending on the Wallet you are using.
    2. If you want, expand the presentation request and \u201cView Raw Content.\u201d to see the presentation request, and presentation data exchanged between the Traction Verifier and the Wallet.
  5. Repeat the process, making the presentation request more complicated:
    1. From the list of presentations, use the arrow icon action to copy an existing presentation request and just re-run it, or evolve it.
    2. Ideas:
    3. Add predicates using date of birth (\u201colder than\u201d) and expiry (\u201cnot expired today\u201d).
      1. The p_value should be a relevant date \u2014 e.g., 19 (or whatever) years ago today for \u201colder than\u201d, and today for \u201cnot expired\u201d, both in the YYYYMMDD format (the integer form of the date).
      2. The p_type should be >= for the \u201colder than\u201d, and <= for \u201cnot expired\u201d. See the table below for the form of the expression.
    4. Add a second credential group with a restriction for a different credential to the request, so the presentation is derived from two source credentials.
p_value | p_type | credential_data
20230527 | <= | expiry_dateint
20030527 | >= | dob_dateint

That completes this lab \u2014 although feel free to continue to play with all of the steps (setup, issuing and presenting). You should have a pretty solid handle on exactly what you can and can\u2019t do with AnonCreds!

"},{"location":"demo/Aries-Workshop/#whats-next","title":"What's Next","text":"

The following are a couple of things that you might want to do next--if you are a developer. Unlike the labs you have just completed, these \"next steps\" are geared towards developers, providing details about building the use of verifiable credentials (issuing, verifying) into your own application.

Want to use Traction in your own environment? Feel free! It's open source, and comes with Helm Charts for easy deployment in container-orchestrated environments. Contributions back to the project are always welcome!

"},{"location":"demo/Aries-Workshop/#whats-next-the-aca-py-openapi","title":"What\u2019s Next: The ACA-Py OpenAPI","text":"

Are you going to build an app that uses Traction or an instance of the Aries Cloud Agent Python (ACA-Py)? If so, your next step is to try out the ACA-Py OpenAPI (aka Swagger)\u2014by hand at first, and then from your application. This is a VERY high level overview, assuming a developer is following this, and knows a bunch about Aries protocols, using HTTP APIs, and using OpenAPI interfaces.

To access and use your Tenant's OpenAPI (aka Swagger) interface:

The ACA-Py/Traction API is pretty large, but it is reasonably well organized, and you should recognize a lot of the items from the Traction API. Try some of the \u201cGET\u201d endpoints to see if you recognize the items.

We\u2019re still working on a good demo for the OpenAPI from Traction, but this one from ACA-Py is a good outline of the process. It doesn't use your Traction Tenant, but you should get the idea about the sequence of calls to make to accomplish Aries-type activities. For example, see if you can carry out the steps to do Lab 4 with your mobile agent by invoking the right sequence of OpenAPI calls.

"},{"location":"demo/Aries-Workshop/#whats-next-experiment-with-an-issuer-web-app","title":"What's Next: Experiment With an Issuer Web App","text":"

If you are challenged to use Traction or Aries Cloud Agent Python to become an issuer, you will likely be building API calls into your Line of Business web application. To get an idea of what that will entail, we're delighted to direct you to a very simple Web App that one of your predecessors on this same journey created (and contributed!) to learn more about using the Traction OpenAPI. Check out this Traction Issuance Demo and try it out yourself, with your Sandbox tenant. Once you review the code, you should have an excellent idea of how you can add these same capabilities to your line of business application.

"},{"location":"demo/AriesOpenAPIDemo/","title":"Aries OpenAPI Demo","text":"

What better way to learn about controllers than by actually being one yourself! In this demo, that\u2019s just what happens\u2014you are the controller. You have access to the full set of API endpoints exposed by an ACA-Py instance, and you will see the events coming from ACA-Py as they happen. Using that information, you'll help Alice's and Faber's agents connect, Faber's agent issue an education credential to Alice, and then ask Alice to prove she possesses the credential. Who knows why Faber needs to get the proof, but it lets us show off more protocols.

"},{"location":"demo/AriesOpenAPIDemo/#contents","title":"Contents","text":""},{"location":"demo/AriesOpenAPIDemo/#getting-started","title":"Getting Started","text":"

We will get started by opening three browser tabs that will be used throughout the lab. Two will be Swagger UIs for the Faber and Alice agents, and one will be for the public ledger (showing the Hyperledger Indy ledger). As well, we'll keep the terminal sessions where we started the demos handy, as we'll be grabbing information from them as well.

Let's start with the ledger browser. For this demo, we're going to use an open public ledger operated by the BC Government's VON Team. In your first browser tab, go to: http://test.bcovrin.vonx.io. This will be called the \"ledger tab\" in the instructions below.

For the rest of the set up, you can choose to run the terminal sessions in your browser (no local resources needed), or you can run it in Docker on your local system. Your choice, each is covered in the next two sections.

Note: In the following, when we start the agents we use several special demo settings. The command we use is this: LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber --events --no-auto --bg. In that: the LEDGER_URL tells the agent which ledger to use, --events prints to the terminal the webhook events the agent sends to its controller, --no-auto turns off the automatic protocol responses so that you, acting as the controller, drive each step yourself, and --bg runs the agent container in the background (which is why we use docker logs below to watch its output).

"},{"location":"demo/AriesOpenAPIDemo/#running-in-a-browser","title":"Running in a Browser","text":"

To run the necessary terminal sessions in your browser, go to the Docker playground service Play with Docker. Don't know about Play with Docker? Check this out to learn more.

"},{"location":"demo/AriesOpenAPIDemo/#start-the-faber-agent","title":"Start the Faber Agent","text":"

In a browser, go to the Play with Docker home page, Login (if necessary) and click \"Start.\" On the next screen, click (in the left menu) \"+Add a new instance.\" That will start up a terminal in your browser. Run the following commands to start the Faber agent.

git clone https://github.com/hyperledger/aries-cloudagent-python\ncd aries-cloudagent-python/demo\nLEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber --events --no-auto --bg\n

Once you are back at the command prompt, we'll use docker's logging capability to see what's being written to the terminal:

docker logs -f faber\n

Once the Faber agent has started up (with the invite displayed), click the 8021 link near the top of the screen. That will start an instance of the OpenAPI/Swagger user interface connected to the Faber instance. Note that the URL on the OpenAPI/Swagger instance is: http://ip....8021.direct....

Remember that the OpenAPI/Swagger browser tab with an address containing 8021 is the Faber agent.

NOTE: Hit \"Ctrl-C\" at any time to get back to the command line. When you are done with the command line, you can return to seeing the logs from the Faber agent by running docker logs -f faber

Show me a screenshot!"},{"location":"demo/AriesOpenAPIDemo/#start-the-alice-agent","title":"Start the Alice Agent","text":"

Now to start Alice's agent. Click the \"+Add a new instance\" button again to open another terminal session. Run the following commands to start Alice's agent:

git clone https://github.com/hyperledger/aries-cloudagent-python\ncd aries-cloudagent-python/demo\nLEDGER_URL=http://test.bcovrin.vonx.io ./run_demo alice --events --no-auto --bg\n

Once you are back at the command prompt, we'll use docker's logging capability to see what's being written to the terminal:

docker logs -f alice\n

You can ignore a message like WARNING: your terminal doesn't support cursor position requests (CPR).

Once the Alice agent has started up (with the invite: prompt displayed), click the 8031 link near the top of the screen. That will start an instance of the OpenAPI/Swagger User Interface connected to the Alice instance. Note that the URL on the OpenAPI/Swagger instance is: http://ip....8031.direct....

NOTE: Hit \"Ctrl-C\" to get back to the command line. When you are done with the command line, you can return to seeing the logs from the Faber agent by running docker logs -f faber

Remember that the OpenAPI/Swagger browser tab with an address containing 8031 is Alice's agent.

Show me a screenshot!

You are ready to go. Skip down to the Using the OpenAPI/Swagger User Interface section.

"},{"location":"demo/AriesOpenAPIDemo/#running-in-docker","title":"Running in Docker","text":"

To run the demo on your local system, you must have git, a running Docker installation, and terminal windows running bash. Need more information about getting set up? Click here to learn more.

"},{"location":"demo/AriesOpenAPIDemo/#start-the-faber-agent_1","title":"Start the Faber Agent","text":"

To begin running the demo in Docker, open up two terminal windows, one each for Faber\u2019s and Alice\u2019s agent.

In the first terminal window, clone the ACA-Py repo, change into the demo folder and start the Faber agent:

git clone https://github.com/hyperledger/aries-cloudagent-python\ncd aries-cloudagent-python/demo\nLEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber --events --no-auto --bg\n

Once you are back at the command prompt, we'll use docker's logging capability to see what's being written to the terminal:

docker logs -f faber\n

If all goes well, the agent will show a message indicating it is running. Use the second browser tab to navigate to http://localhost:8021. You should see an OpenAPI/Swagger user interface with a (long-ish) list of API endpoints. These are the endpoints exposed by the Faber agent.

NOTE: Hit \"Ctrl-C\" to get back to the command line. When you are done with the command line, you can return to seeing the logs from the Faber agent by running docker logs -f faber

Remember that the OpenAPI/Swagger browser tab with an address containing 8021 is the Faber agent.

Show me a screenshot!"},{"location":"demo/AriesOpenAPIDemo/#start-the-alice-agent_1","title":"Start the Alice Agent","text":"

To start Alice's agent, open up a second terminal window and in it, change to the same demo directory as where Faber's agent was started above. Once there, start Alice's agent:

LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo alice --events --no-auto --bg\n

Once you are back at the command prompt, we'll use docker's logging capability to see what's being written to the terminal:

docker logs -f alice\n

You can ignore a message like WARNING: your terminal doesn't support cursor position requests (CPR) that may appear.

If all goes well, the agent will show a message indicating it is running. Open a third browser tab and navigate to http://localhost:8031. Again, you should see the OpenAPI/Swagger user interface with a list of API endpoints, this time the endpoints for Alice\u2019s agent.

NOTE: Hit \"Ctrl-C\" to get back to the command line. When you are done with the command line, you can return to seeing the logs from the Alice agent by running docker logs -f alice

Remember that the OpenAPI/Swagger browser tab with an address containing 8031 is Alice's agent.

Show me a screenshot!"},{"location":"demo/AriesOpenAPIDemo/#restarting-the-docker-containers","title":"Restarting the Docker Containers","text":"

When you complete the entire demo (not now!!), you will need to stop the two agents. To do that, get to the command line by hitting Ctrl-C and running:

docker stop faber\ndocker stop alice\n
"},{"location":"demo/AriesOpenAPIDemo/#using-the-openapiswagger-user-interface","title":"Using the OpenAPI/Swagger User Interface","text":"

Try to organize what you see on your screen to include both the Alice and Faber OpenAPI/Swagger tabs, and both (Alice and Faber) terminal sessions, all at the same time. After you execute an API call in one of the browser tabs, you will see a webhook event from the ACA-Py instance in the terminal window of the other agent. That's a controller's life. See an event, process it, send a response.

From time to time you will want to see what's happening on the ledger, so keep that handy as well. As well, if you make an error with one of the commands (e.g. bad data, improperly structured JSON), you will see the errors in the terminals.

In the instructions that follow, we\u2019ll let you know if you need to be in the Faber, Alice or Indy browser tab. We\u2019ll leave it to you to track which is which.

Using the OpenAPI/Swagger user interface is pretty simple. In the steps below, we\u2019ll indicate what API endpoint you need to use, such as POST /connections/create-invitation. That means you must:

  1. scroll to and find that endpoint;
  2. click on the endpoint name to expand its section of the UI;
  3. click on the Try it out button;
  4. fill in any data necessary to run the command;
  5. click Execute;
  6. check the response to see if the request worked.

So, the mechanical steps are easy. It\u2019s the fourth step from the list above that can be tricky: supplying the right data and, where JSON is involved, getting the syntax correct - braces and quotes can be a pain. When steps don\u2019t work, start your debugging by looking at your JSON.

Enough with the preliminaries, let\u2019s get started!

"},{"location":"demo/AriesOpenAPIDemo/#establishing-a-connection","title":"Establishing a Connection","text":"

We\u2019ll start the demo by establishing a connection between the Alice and Faber agents. We\u2019re starting there to demonstrate that you can use agents without having a ledger. We won\u2019t be using the Indy public ledger at all for this step. Since the agents communicate using DIDComm messaging and connect by exchanging pairwise DIDs and DIDDocs based on (an early version of) the did:peer DID method, a public ledger is not needed.

"},{"location":"demo/AriesOpenAPIDemo/#use-the-faber-agent-to-create-an-invitation","title":"Use the Faber Agent to Create an Invitation","text":"

In the Faber browser tab, navigate to the POST /connections/create-invitation endpoint. Replace the sample body with an empty JSON object ({}) and execute the call. If successful, you should see a connection id, an invitation, and the invitation URL. The connection ids will be different on each run.

Hint: set an Alias on the Invitation, this makes it easier to find the Connection later on
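
If you'd rather drive this from the command line than the Swagger UI, a sketch of the same call with curl (Faber's admin API assumed at http://localhost:8021; the alias query parameter is optional):

curl -s -X POST \"http://localhost:8021/connections/create-invitation?alias=Alice\" -H \"Content-Type: application/json\" -d '{}'\n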

Show me a screenshot - Create Invitation Request Show me a screenshot - Create Invitation Response"},{"location":"demo/AriesOpenAPIDemo/#copy-the-invitation-created-by-the-faber-agent","title":"Copy the Invitation created by the Faber Agent","text":"

Copy the entire block of the invitation object, from the curly brackets {}, excluding the trailing comma.

Show me a screenshot - Create Invitation Response

Before switching over to the Alice browser tab, scroll to and execute the GET /connections endpoint to see the list of Faber's connections. You should see a connection whose connection_id matches the one in the invitation you just created, and whose state is invitation.

Show me a screenshot - Faber Connection Status"},{"location":"demo/AriesOpenAPIDemo/#use-the-alice-agent-to-receive-fabers-invitation","title":"Use the Alice Agent to Receive Faber's Invitation","text":"

Switch to the Alice browser tab and get ready to execute the POST /connections/receive-invitation endpoint. Select all of the pre-populated text and replace it with the invitation object from the Faber tab. When you click Execute you should get back a connection response with a connection Id, an invitation key, and the state of the connection, which should be invitation.

Hint: set an Alias on the Invitation; this makes it easier to find the Connection later on.

Show me a screenshot - Receive Invitation Request Show me a screenshot - Receive Invitation Response

A key observation to make here: the \"copy and paste\" we are doing from Faber's agent to Alice's agent is what is called an \"out of band\" message. Because we don't yet have a DIDComm connection between the two agents, we have to convey the invitation in plaintext (we can't encrypt it - no channel) using some mechanism other than DIDComm. With mobile agents, that's where QR codes often come in. Once we have the invitation in the receiver's agent, we can get back to using DIDComm.

"},{"location":"demo/AriesOpenAPIDemo/#tell-alices-agent-to-accept-the-invitation","title":"Tell Alice's Agent to Accept the Invitation","text":"

At this point Alice has simply stored the invitation in her wallet. You can see the status using the GET /connections endpoint.

Show me a screenshot

To complete a connection with Faber, she must accept the invitation and send a corresponding connection request to Faber. Find the connection_id in the connection response from the previous POST /connections/receive-invitation endpoint call. You may note that the same data was sent to the controller as an event from ACA-Py and is visible in the terminal. Scroll to the POST /connections/{conn_id}/accept-invitation endpoint and paste the connection_id in the id parameter field (you will have to click the Try it out button to see the available URL parameters). The response from clicking Execute should show that the connection has a state of request.

Show me a screenshot - Accept Invitation Request Show me a screenshot - Accept Invitation Response"},{"location":"demo/AriesOpenAPIDemo/#the-faber-agent-gets-the-request","title":"The Faber Agent Gets the Request","text":"

In the Faber terminal session, an event (a web service callback from ACA-Py to the controller) has been received about the request from Alice. Copy the connection_id from the event for the next step.

Show me the event

Note that the connection ID held by Alice is different from the one held by Faber. That makes sense, as both agents independently created connection objects, each with a unique, self-generated GUID.

"},{"location":"demo/AriesOpenAPIDemo/#the-faber-agent-completes-the-connection","title":"The Faber Agent Completes the Connection","text":"

To complete the connection process, Faber will respond to the connection request from Alice. Scroll to the POST /connections/{conn_id}/accept-request endpoint and paste the connection_id you previously copied into the id parameter field (you will have to click the Try it out button to see the available URL parameters). The response from clicking the Execute button should show that the connection has a state of response, which indicates that Faber has accepted Alice's connection request.
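
For reference, the whole connection dance you just clicked through could be scripted along these lines (a sketch, assuming Python with requests and admin APIs on ports 8021 for Faber and 8031 for Alice; error handling and webhook-driven sequencing are omitted):

import time\nimport requests\n\nFABER = \"http://localhost:8021\"  # assumed Faber admin API\nALICE = \"http://localhost:8031\"  # Alice admin API\n\n# Faber creates the invitation\ninvitation = requests.post(f\"{FABER}/connections/create-invitation\", json={}).json()[\"invitation\"]\n\n# Alice receives and accepts it (the out-of-band copy/paste step, scripted)\nalice_conn = requests.post(f\"{ALICE}/connections/receive-invitation\", json=invitation).json()\nrequests.post(f\"{ALICE}/connections/{alice_conn['connection_id']}/accept-invitation\")\n\n# Give the request a moment to arrive, then have Faber accept it\ntime.sleep(2)\nfor conn in requests.get(f\"{FABER}/connections\").json()[\"results\"]:\n    if conn[\"state\"] == \"request\":\n        requests.post(f\"{FABER}/connections/{conn['connection_id']}/accept-request\")\n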

Show me a screenshot - Accept Connection Request Show me a screenshot - Accept Connection Request"},{"location":"demo/AriesOpenAPIDemo/#review-the-connection-status-in-alices-agent","title":"Review the Connection Status in Alice's Agent","text":"

Switch over to the Alice browser tab.

Scroll to and execute GET /connections to see a list of Alice's connections, and the information tracked about each connection. You should see the one connection Alice\u2019s agent has, that it is with the Faber agent, and that its state is active.

Show me a screenshot - Alice Connection Status

As with Faber's side of the connection, Alice received a notification that Faber had accepted her connection request.

Show me the event"},{"location":"demo/AriesOpenAPIDemo/#review-the-connection-status-in-fabers-agent","title":"Review the Connection Status in Faber's Agent","text":"

You are connected! Switch to the Faber browser tab and run the same GET /connections endpoint to see Faber's view of the connection. Its state is also active. Note the connection_id, you\u2019ll need it later in the tutorial.

Show me a screenshot - Faber Connection Status"},{"location":"demo/AriesOpenAPIDemo/#basic-messaging-between-agents","title":"Basic Messaging Between Agents","text":"

Once you have a connection between two agents, you have a channel to exchange secure, encrypted messages. In fact, these underlying encrypted messages (similar to envelopes in a postal system) enable the delivery of messages that form the higher level protocols, such as issuing Credentials and providing Proofs. So, let's send a couple of messages that contain the simplest of content\u2014text. For this we will use the Basic Message protocol, Aries RFC 0095.

"},{"location":"demo/AriesOpenAPIDemo/#sending-a-message-from-alice-to-faber","title":"Sending a message from Alice to Faber","text":"

On Alice's swagger page, scroll to the POST /connections/{conn_id}/send-message endpoint. Click on Try it Out and enter a message in the body provided (for example {\"content\": \"Hello Faber\"}). Enter the connection id of Alice's connection in the field provided. Then click on Execute.
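
The scripted equivalent is a one-liner plus the connection id (again a sketch, assuming Python with requests and Alice's admin API on port 8031):

import requests\n\nALICE = \"http://localhost:8031\"\nconn_id = \"REPLACE-WITH-ALICE-CONNECTION-ID\"  # from GET /connections\n\n# Equivalent of POST /connections/{conn_id}/send-message\nrequests.post(f\"{ALICE}/connections/{conn_id}/send-message\", json={\"content\": \"Hello Faber\"})\n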

Show me a screenshot"},{"location":"demo/AriesOpenAPIDemo/#receiving-a-basic-message-faber","title":"Receiving a Basic Message (Faber)","text":"

How does Faber know that a message was sent? If you take a look at Faber's console window, you can see that Faber's agent has raised an Event that the message was received:

Show me a screenshot

Faber's controller application can take whatever action is necessary to process this message. It could trigger some application code, or it might just be something the Faber application needs to display to its user (for example a reminder about some action the user needs to take).
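
A controller is, at its core, just a web service listening for these webhook events. A minimal sketch of such a listener (standard-library Python only; it assumes ACA-Py's convention of POSTing events to <webhook-url>/topic/<topic>/ and a hypothetical port 8022 as the webhook target) might look like:

import json\nfrom http.server import BaseHTTPRequestHandler, HTTPServer\n\nclass WebhookHandler(BaseHTTPRequestHandler):\n    def do_POST(self):\n        # ACA-Py POSTs each event to <webhook-url>/topic/<topic>/\n        body = self.rfile.read(int(self.headers[\"Content-Length\"]))\n        event = json.loads(body)\n        if \"/topic/basicmessages\" in self.path:\n            print(\"Received basic message:\", event.get(\"content\"))\n        self.send_response(200)\n        self.end_headers()\n\nHTTPServer((\"0.0.0.0\", 8022), WebhookHandler).serve_forever()\n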

"},{"location":"demo/AriesOpenAPIDemo/#alices-agent-verifies-that-faber-has-received-the-message","title":"Alice's Agent Verifies that Faber has Received the Message","text":"

How does Alice get feedback that Faber has received the message? The same way - when Faber's agent acknowledges receipt of the message, Alice's agent raises an Event to let the Alice controller know:

Show me a screenshot

Again, Alice's agent can take whatever action is necessary, possibly just flagging the message as having been received.

"},{"location":"demo/AriesOpenAPIDemo/#preparing-to-issue-a-credential","title":"Preparing to Issue a Credential","text":"

The next thing we want to do in the demo is have the Faber agent issue a credential to Alice\u2019s agent. To this point, we have not used the Indy ledger at all. Establishing the connection and messaging has been done with pairwise DIDs based on the did:peer method. Verifiable credentials must be rooted in a public DID ledger to enable the presentation of proofs.

Before the Faber agent can issue a credential, it must register a DID on the Indy public ledger, publish a schema, and create a credential definition. In the \u201creal world\u201d, the Faber agent would do this before connecting with any other agents. And, since we are using the handy \"./run_demo faber\" (and \"./run_demo alice\") scripts to start up our agents, the Faber version of the script has already:

  1. registered a public DID and stored it on the ledger;
  2. created a schema and registered it on the ledger;
  3. created a credential definition and registered it on the ledger.

The schema and credential definition could also be created through this swagger interface.

We don't cover the details of those actions in this tutorial, but there are other materials available that go through these details.
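
In rough outline, though, doing it yourself through the admin API looks something like this (a sketch, assuming Python with requests, Faber's admin API on port 8021, and an agent that already has a public DID on the ledger):

import requests\n\nFABER = \"http://localhost:8021\"  # assumed Faber admin API\n\n# POST /schemas publishes a schema to the ledger\nschema = requests.post(f\"{FABER}/schemas\", json={\n    \"schema_name\": \"degree schema\",\n    \"schema_version\": \"1.0\",\n    \"attributes\": [\"name\", \"date\", \"degree\", \"birthdate_dateint\", \"timestamp\"],\n}).json()\n\n# POST /credential-definitions publishes a credential definition based on it\ncred_def = requests.post(f\"{FABER}/credential-definitions\", json={\n    \"schema_id\": schema[\"schema_id\"],\n    \"tag\": \"default\",\n    \"support_revocation\": False,\n}).json()\nprint(cred_def[\"credential_definition_id\"])\n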

To Do: Add a link to directions for doing this manually, and to where in the controller Python code this is done.

"},{"location":"demo/AriesOpenAPIDemo/#confirming-your-schema-and-credential-definition","title":"Confirming your Schema and Credential Definition","text":"

You can confirm the schema and credential definition were published by going back to the Indy ledger browser tab using Faber's public DID. You may have saved that from a previous step, but if not, here is an API call you can make to get that information. On Faber's swagger page, scroll to the GET /wallet/did/public endpoint. Click on Try it Out and Execute and you will see Faber's public DID.

Show me a screenshot

On the ledger browser of the BCovrin ledger, click the Domain page, refresh, and paste the Faber public DID into the Filter: field:

Show me a screenshot

The ledger browser should refresh and display the four (4) transactions on the ledger related to this DID:

Show me the ledger transactions

You can also look up the Schema and Credential Definition information using Faber's swagger page. Use the GET /schemas/created endpoint to get a list of schemas, including the one schema_id that the Faber agent has defined. Keep this section of the Swagger page expanded as we'll need to copy the Id as part of starting the issue credential protocol coming next.

Show me a screenshot

Likewise use the GET /credential-definitions/created endpoint to get the list of the one (in this case) credential definition id created by Faber. Keep this section of the Swagger page expanded as we'll also need to copy the Id as part of starting the issue credential protocol coming next.
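
If you would rather grab both Ids in a script, a sketch (same Python/requests assumptions as the earlier examples) looks like:

import requests\n\nFABER = \"http://localhost:8021\"\n\n# The .../created endpoints return the lists of Ids this agent has published\nschema_ids = requests.get(f\"{FABER}/schemas/created\").json()[\"schema_ids\"]\ncred_def_ids = requests.get(f\"{FABER}/credential-definitions/created\").json()[\"credential_definition_ids\"]\nprint(schema_ids[0], cred_def_ids[0])\n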

Show me a screenshot

Hint: Remember how the schema and credential definitions were created for you as Faber started up? To do it yourself, use the POST versions of these endpoints. Now you know!

"},{"location":"demo/AriesOpenAPIDemo/#notes","title":"Notes","text":"

The one-time setup work for issuing a credential is complete\u2014creating a DID, schema and credential definition. We can now issue 1 or 1 million credentials without having to do those steps again. Astute readers might note that we did not set up a revocation registry, so we cannot revoke the credentials we issue with that credential definition. You can\u2019t have everything in an \"easy\" tutorial!

"},{"location":"demo/AriesOpenAPIDemo/#issuing-a-credential","title":"Issuing a Credential","text":"

Triggering the issuance of a credential from the Faber agent to Alice\u2019s agent is done with another API call. In the Faber browser tab, scroll down to the POST /issue-credential-2.0/send endpoint and get ready to (but don\u2019t yet) execute the request. Before execution, you need to update most of the data elements in the JSON. We now cover how to update all the fields.

"},{"location":"demo/AriesOpenAPIDemo/#faber-preparing-to-issue-a-credential","title":"Faber - Preparing to Issue a Credential","text":"

First, get the connection Id for Faber's connection with Alice. You can copy that from the Faber terminal (the last received event includes it), or scroll up on the Faber swagger tab to the GET /connections API endpoint, execute it, and copy the value. Paste the connection_id value into the same field in the issue credential JSON.

Click here to see a screenshot

For the schema_id and cred_def_id fields, scroll on Faber's Swagger page to the GET /schemas/created and GET /credential-definitions/created endpoints, execute (if necessary), copy the response values, and paste them into the filter section's indy subsection. Remove the \"dif\" subsection of the filter section within the JSON, and specify the remaining indy filter criteria as follows:

Finally, set the remaining values as follows:

  - auto_remove: set to true (no quotes); see note below
  - comment: set to any string. It's intended to let Alice know something about the credential being offered.
  - trace: set to false (no quotes). It's for troubleshooting, performance profiling, and/or diagnostics.

By setting auto_remove to true, ACA-Py will automatically remove the credential exchange record after the protocol completes. When implementing a controller, this is the likely setting to use to reduce agent storage usage, but it implies that if a record of the issuance of the credential is needed, the controller must save it somewhere else. For example, Faber College might extend their Student Information System, where they track all their students, to record when credentials are issued to students, and the Ids of the issued credentials.

"},{"location":"demo/AriesOpenAPIDemo/#faber-issuing-the-credential","title":"Faber - Issuing the Credential","text":"

Finally, we need to put the data values for the credential_preview section into the JSON. Copy the following and paste it between the square brackets of the attributes item, replacing what is there. Feel free to change the attribute value items, but don't change the labels or names:

      {\n        \"name\": \"name\",\n        \"value\": \"Alice Smith\"\n      },\n      {\n        \"name\": \"timestamp\",\n        \"value\": \"1234567890\"\n      },\n      {\n        \"name\": \"date\",\n        \"value\": \"2018-05-28\"\n      },\n      {\n        \"name\": \"degree\",\n        \"value\": \"Maths\"\n      },\n      {\n        \"name\": \"birthdate_dateint\",\n        \"value\": \"19640101\"\n      }\n

(Note that the birthdate above is used to present later on to pass an \"age proof\".)
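
Putting the pieces together, the completed request is roughly equivalent to this scripted version (a sketch, again assuming Python with requests and Faber's admin API on port 8021; the placeholder values must be replaced with the Ids you copied above, and the attribute list is truncated):

import requests\n\nFABER = \"http://localhost:8021\"  # assumed Faber admin API base URL\n\nbody = {\n    \"connection_id\": \"REPLACE-WITH-FABER-CONNECTION-ID\",\n    \"auto_remove\": True,\n    \"comment\": \"Your degree credential from Faber College\",\n    \"trace\": False,\n    \"filter\": {\"indy\": {\"cred_def_id\": \"REPLACE-WITH-CRED-DEF-ID\", \"schema_id\": \"REPLACE-WITH-SCHEMA-ID\"}},\n    \"credential_preview\": {\n        \"@type\": \"https://didcomm.org/issue-credential/2.0/credential-preview\",\n        # the five attributes shown above, truncated here for brevity\n        \"attributes\": [{\"name\": \"name\", \"value\": \"Alice Smith\"}],\n    },\n}\nrequests.post(f\"{FABER}/issue-credential-2.0/send\", json=body).raise_for_status()\n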

OK, finally, you are ready to click Execute. The request should work, but if it doesn\u2019t - check your JSON! Did you get all the quotes and commas right?

Show me a screenshot - credential offer

To confirm the issuance worked, scroll up on the Faber Swagger page to the issue-credential v2.0 section and execute the GET /issue-credential-2.0/records endpoint. You should see a lot of information about the exchange just initiated.

"},{"location":"demo/AriesOpenAPIDemo/#alice-receives-credential","title":"Alice Receives Credential","text":"

Let\u2019s look at it from Alice\u2019s side. Alice's agent source code automatically handles credential offers by immediately responding with a credential request. Scroll back in the Alice terminal to where the credential issuance started. If you've followed the full script, that is just after where we used the basic message protocol to send text messages between Alice and Faber.

Alice's agent first received a notification of a Credential Offer, to which it responded with a Credential Request. Faber received the Credential Request and responded in turn with an Issue Credential message. Scroll down through the events from ACA-Py to the controller to see the notifications of those messages. Make sure you scroll all the way to the bottom of the terminal so you can continue with the process.

Show me a screenshot - issue credential"},{"location":"demo/AriesOpenAPIDemo/#alice-stores-credential-in-her-wallet","title":"Alice Stores Credential in her Wallet","text":"

We can check (via Alice's Swagger interface) the issue credential status by hitting the GET /issue-credential-2.0/records endpoint. Note that within the results, the cred_ex_record just received has a state of credential-received, but not yet done. Let's address that.

Show me a screenshot - check credential exchange status

First, we need the cred_ex_id from the API call response above, or from the event in the terminal; use the endpoint POST /issue-credential-2.0/records/{cred_ex_id}/store to tell Alice's ACA-Py instance to store the credential in agent storage (aka the Indy Wallet). Note that in the JSON for that endpoint we can provide a credential Id to store in the wallet by setting a value in the credential_id string. A real controller might use the cred_ex_id for that, or use something else that makes sense in the agent's business scenario (but the agent generates a random credential identifier by default).
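
Scripted, that store call is just the following (a sketch; the credential_id value is an arbitrary choice):

import requests\n\nALICE = \"http://localhost:8031\"\ncred_ex_id = \"REPLACE-WITH-CRED-EX-ID\"  # from the records call or the webhook event\n\n# Store the received credential under a credential_id of our choosing\nrequests.post(\n    f\"{ALICE}/issue-credential-2.0/records/{cred_ex_id}/store\",\n    json={\"credential_id\": \"faber-degree-credential\"},\n)\n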

Show me a screenshot - store credential

Now, in Alice\u2019s swagger browser tab, find the credentials section and within that, execute the GET /credentials endpoint. There should be a list of credentials held by Alice, with just a single entry, the credential issued from the Faber agent. Note that the element referent is the value of the credential_id element used in other calls. referent is the name returned in the indy-sdk call to get the set of credentials for the wallet and ACA-Py code does not change it in the response.

"},{"location":"demo/AriesOpenAPIDemo/#faber-receives-acknowledgment-that-the-credential-was-received","title":"Faber Receives Acknowledgment that the Credential was Received","text":"

On the Faber side, we can see by scanning back in the terminal that it received events notifying it that the credential was issued and accepted.

Show me Faber's event activity

Note that once the credential processing completed, Faber's agent deleted the credential exchange record from its wallet. This can be confirmed by executing the endpoint GET /issue-credential-2.0/records.

Show me a screenshot

You\u2019ve done it, issued a credential! w00t!

"},{"location":"demo/AriesOpenAPIDemo/#issue-credential-notes","title":"Issue Credential Notes","text":"

Those who know something about the Indy process for issuing a credential and the DIDComm Issue Credential protocol know that there are multiple steps to issuing credentials: a back and forth between the issuer and the holder to (at least) offer, request and issue the credential. All of those messages happened, but the two agents took care of those details rather than bothering the controller (you, in this case) with managing the back and forth.

"},{"location":"demo/AriesOpenAPIDemo/#bonus-points","title":"Bonus Points","text":"

If you would like to perform all of the issuance steps manually on the Faber agent side, use a sequence of the other /issue-credential-2.0/ messages. Use the GET /issue-credential-2.0/records to both check the credential exchange state as you progress through the protocol and to find some of the data you\u2019ll need in executing the sequence of requests.
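
In outline, that manual sequence could be scripted roughly as follows (a sketch only; in a real controller each step would be triggered by the corresponding webhook event rather than by copying Ids by hand, and the request bodies are abbreviated):

import requests\n\nFABER = \"http://localhost:8021\"\nALICE = \"http://localhost:8031\"\n\n# Faber sends just the offer (instead of the all-in-one /send)\noffer_body = {}  # same shape as the /send body used earlier (connection_id, filter, preview, ...)\noffer = requests.post(f\"{FABER}/issue-credential-2.0/send-offer\", json=offer_body).json()\n\n# Alice, once the offer webhook arrives, requests the credential\nalice_cred_ex_id = \"REPLACE-FROM-ALICE-WEBHOOK\"\nrequests.post(f\"{ALICE}/issue-credential-2.0/records/{alice_cred_ex_id}/send-request\")\n\n# Faber, once the request webhook arrives, issues the credential\nrequests.post(f\"{FABER}/issue-credential-2.0/records/{offer['cred_ex_id']}/issue\", json={})\n\n# Alice stores the credential when it arrives\nrequests.post(f\"{ALICE}/issue-credential-2.0/records/{alice_cred_ex_id}/store\", json={})\n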

The following table lists the endpoints that you need to call (\"REST service\") and the callbacks that your agent will receive (\"callback\") that you need to respond to. See the detailed API docs.

| Protocol Step | Faber (Issuer) | Alice (Holder) | Notes |
| --- | --- | --- | --- |
| Send Credential Offer | POST /issue-credential-2.0/send-offer | | REST service |
| Receive Offer | | /issue_credential_v2_0/ | callback |
| Send Credential Request | | POST /issue-credential-2.0/records/{cred_ex_id}/send-request | REST service |
| Receive Request | /issue_credential_v2_0/ | | callback |
| Issue Credential | POST /issue-credential-2.0/records/{cred_ex_id}/issue | | REST service |
| Receive Credential | | /issue_credential_v2_0/ | callback |
| Store Credential | | POST /issue-credential-2.0/records/{cred_ex_id}/store | REST service |
| Receive Acknowledgement | /issue_credential_v2_0/ | | callback |
| Store Credential Id | | | application function |"},{"location":"demo/AriesOpenAPIDemo/#requestingpresenting-a-proof","title":"Requesting/Presenting a Proof","text":"

Alice now has her Faber credential. Let\u2019s have the Faber agent send a request for a presentation (a proof) using that credential. This should be pretty easy for you at this point.

"},{"location":"demo/AriesOpenAPIDemo/#faber-sends-a-proof-request","title":"Faber sends a Proof Request","text":"

From the Faber browser tab, get ready to execute the POST /present-proof-2.0/send-request endpoint. After hitting Try it out, erase the data in the block labelled \"Edit Value Model\", replacing it with the text below. Once that is done, replace in the JSON each instance of cred_def_id (there are four instances) and connection_id with the values found using the same techniques we've used earlier in this tutorial. Both can be found by scrolling back a little in the Faber terminal, or you can execute API endpoints we've already covered. You can also change the value of the comment item to whatever you want.

{\n  \"comment\": \"This is a comment about the reason for the proof\",\n  \"connection_id\": \"e469e0f3-2b4d-4b12-9ac7-293f23e8a816\",\n  \"presentation_request\": {\n    \"indy\": {\n      \"name\": \"Proof of Education\",\n      \"version\": \"1.0\",\n      \"requested_attributes\": {\n        \"0_name_uuid\": {\n          \"name\": \"name\",\n          \"restrictions\": [\n            {\n              \"cred_def_id\": \"SsX9siFWXJyCAmXnHY514N:3:CL:8:faber.agent.degree_schema\"\n            }\n          ]\n        },\n        \"0_date_uuid\": {\n          \"name\": \"date\",\n          \"restrictions\": [\n            {\n              \"cred_def_id\": \"SsX9siFWXJyCAmXnHY514N:3:CL:8:faber.agent.degree_schema\"\n            }\n          ]\n        },\n        \"0_degree_uuid\": {\n          \"name\": \"degree\",\n          \"restrictions\": [\n            {\n              \"cred_def_id\": \"SsX9siFWXJyCAmXnHY514N:3:CL:8:faber.agent.degree_schema\"\n            }\n          ]\n        },\n        \"0_self_attested_thing_uuid\": {\n          \"name\": \"self_attested_thing\"\n        }\n      },\n      \"requested_predicates\": {\n        \"0_age_GE_uuid\": {\n          \"name\": \"birthdate_dateint\",\n          \"p_type\": \"<=\",\n          \"p_value\": 20030101,\n          \"restrictions\": [\n            {\n              \"cred_def_id\": \"SsX9siFWXJyCAmXnHY514N:3:CL:8:faber.agent.degree_schema\"\n            }\n          ]\n        }\n      }\n    }\n  }\n}\n

(Note that the birthdate requested above is used as an \"age proof\", the calculation is something like now() - years(18), and the presented birthdate must be on or before this date. You can see the calculation in action in the faber.py demo code.)
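
That calculation can be sketched in a few lines of Python (approximate; it ignores leap-day edge cases):

from datetime import date\n\n# \"Born on or before this date\" is equivalent to \"at least 18 years old today\"\ntoday = date.today()\nlatest_birthdate = today.replace(year=today.year - 18)\np_value = int(latest_birthdate.strftime(\"%Y%m%d\"))  # e.g. 20070101 if today were 2025-01-01\nprint(p_value)  # use with p_type \"<=\" on the birthdate_dateint attribute\n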

Notice that the proof request is using a predicate to check if Alice is older than 18 without asking for her age. Not sure what this has to do with her education level! Click Execute and cross your fingers. If the request fails check your JSON!

Show me a screenshot - send proof request"},{"location":"demo/AriesOpenAPIDemo/#alice-responding-to-the-proof-request","title":"Alice - Responding to the Proof Request","text":"

As before, Alice receives a webhook event from her agent telling her she has received a Proof Request. In our scenario, the ACA-Py instance automatically selects a matching credential and responds with a Proof.

Show me Alice's event activity

In a real scenario, for example if Alice had a mobile agent on her smartphone, the agent would prompt Alice whether she wanted to respond or not.

"},{"location":"demo/AriesOpenAPIDemo/#faber-verifying-the-proof","title":"Faber - Verifying the Proof","text":"

Note that in the response, the state is request-sent. That is because when the HTTP response was generated (immediately after sending the request), Alice's agent had not yet responded to the request. We\u2019ll have to do another request to verify the presentation worked. Copy the value of the pres_ex_id field from the event in the Faber terminal and use it in executing the GET /present-proof-2.0/records/{pres_ex_id} endpoint. That should return a result showing the state as done and verified as true. Proof positive!
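
A small polling sketch (same Python/requests assumptions as earlier) shows the idea:

import time\nimport requests\n\nFABER = \"http://localhost:8021\"\npres_ex_id = \"REPLACE-WITH-PRES-EX-ID\"  # from the webhook event in the Faber terminal\n\n# Poll until the presentation exchange completes, then check the result\nwhile True:\n    record = requests.get(f\"{FABER}/present-proof-2.0/records/{pres_ex_id}\").json()\n    if record[\"state\"] == \"done\":\n        break\n    time.sleep(1)\nprint(\"verified:\", record[\"verified\"])  # the string \"true\" if the proof checked out\n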

You can see some of Faber's activity below:

Show me Faber's event activity"},{"location":"demo/AriesOpenAPIDemo/#present-proof-notes","title":"Present Proof Notes","text":"

As with the issue credential process, the agents handled some of the presentation steps without bothering the controller. In this case, Alice's agent processed the presentation request automatically through its handler for the present_proof_v2_0 event, and her wallet contained exactly one credential that satisfied the presentation request from the Faber agent. Similarly, the Faber agent's handler for the event responds automatically and so, on receipt of the presentation, it verifies the presentation and updates the status accordingly.

"},{"location":"demo/AriesOpenAPIDemo/#bonus-points_1","title":"Bonus Points","text":"

If you would like to perform all of the proof request/response steps manually, you can call all of the individual /present-proof-2.0 messages.
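
The prover side of that manual flow might be sketched as follows (the presentation body shape is abbreviated to an empty mapping; consult the Swagger page for the full structure):

import requests\n\nALICE = \"http://localhost:8031\"\npres_ex_id = \"REPLACE-FROM-ALICE-WEBHOOK\"  # set when the proof request arrives\n\n# Find credentials in Alice's wallet that could satisfy the request\ncreds = requests.get(f\"{ALICE}/present-proof-2.0/records/{pres_ex_id}/credentials\").json()\n\n# Build and send the presentation; the body maps each requested attribute and\n# predicate to a chosen credential id (abbreviated here)\nbody = {\"indy\": {\"requested_attributes\": {}, \"requested_predicates\": {}, \"self_attested_attributes\": {}}}\nrequests.post(f\"{ALICE}/present-proof-2.0/records/{pres_ex_id}/send-presentation\", json=body)\n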

The following table lists endpoints that you need to call (\"REST service\") and callbacks that your agent will receive (\"callback\") that you need to respond to. See the detailed API docs.

| Protocol Step | Faber (Verifier) | Alice (Holder/Prover) | Notes |
| --- | --- | --- | --- |
| Send Proof Request | POST /present-proof-2.0/send-request | | REST service |
| Receive Proof Request | | /present_proof_v2_0 | callback (webhook) |
| Find Credentials | | GET /present-proof-2.0/records/{pres_ex_id}/credentials | REST service |
| Select Credentials | | | application or user function |
| Send Proof | | POST /present-proof-2.0/records/{pres_ex_id}/send-presentation | REST service |
| Receive Proof | /present_proof_v2_0 | | callback (webhook) |
| Validate Proof | POST /present-proof-2.0/records/{pres_ex_id}/verify-presentation | | REST service |
| Save Proof | | | application data |"},{"location":"demo/AriesOpenAPIDemo/#conclusion","title":"Conclusion","text":"

That\u2019s the OpenAPI-based tutorial. Feel free to play with the API and learn how it works. More importantly, as you implement a controller, use the OpenAPI user interface to test out the calls you will be using as you go. The list of API calls is grouped by protocol and if you are familiar with the protocols (Aries RFCs) the API call names should be pretty obvious.

One limitation of you being the controller is that you don't see the events from the agent that a controller program sees. For example, you, as Alice's agent, are not notified when Faber initiates the sending of a Credential. Some of those things show up in the terminal as messages, but others you just have to know have happened based on a successful API call.

"},{"location":"demo/AriesPostmanDemo/","title":"Aries Postman Demo","text":"

In these demos we will use Postman as our controller client.

"},{"location":"demo/AriesPostmanDemo/#contents","title":"Contents","text":""},{"location":"demo/AriesPostmanDemo/#getting-started","title":"Getting Started","text":"

Welcome to the Postman demo. This is an addition to the available OpenAPI demo, providing a set of collections to test and demonstrate various aca-py functionalities.

"},{"location":"demo/AriesPostmanDemo/#installing-postman","title":"Installing Postman","text":"

Download, install and launch Postman.

"},{"location":"demo/AriesPostmanDemo/#creating-a-workspace","title":"Creating a workspace","text":"

Create a new Postman workspace labeled \"acapy-demo\".

"},{"location":"demo/AriesPostmanDemo/#importing-the-environment","title":"Importing the environment","text":"

In the environment tab from the left, click the import button. You can paste this link which is the environment file in the ACA-Py repository.

Make sure you have the environment set as your active environment.

"},{"location":"demo/AriesPostmanDemo/#importing-the-collections","title":"Importing the collections","text":"

In the collections tab from the left, click the import button.

The following collections are available:

"},{"location":"demo/AriesPostmanDemo/#postman-basics","title":"Postman basics","text":"

Once you are set up, you will be ready to run Postman requests. The order of the requests is important, since some values are saved dynamically as environment variables for subsequent calls.

You have your environment where you define variables to be accessed by your collections.

Each collection consists of a series of requests which can be configured independently.

"},{"location":"demo/AriesPostmanDemo/#experimenting-with-the-vc-api-endpoints","title":"Experimenting with the vc-api endpoints","text":"

Make sure you have a demo agent available. You can use the following command to deploy one:

LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber --bg\n

When running for the first time, please allow some time for the images to build.

"},{"location":"demo/AriesPostmanDemo/#register-new-dids","title":"Register new dids","text":"

The first 2 requests for this collection will create 2 did:keys. We will use those in subsequent calls to issue Ed25519Signature2020 and BbsBlsSignature2020 credentials. Run the 2 did creation requests. These requests will use the /wallet/did/create endpoint.
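
Outside Postman, the same did:key creation can be sketched with a direct admin API call (assuming Python with requests and the demo agent's admin API on port 8021):

import requests\n\nAGENT = \"http://localhost:8021\"  # assumed demo agent admin API\n\n# POST /wallet/did/create with method \"key\" creates a new did:key\nresp = requests.post(f\"{AGENT}/wallet/did/create\", json={\n    \"method\": \"key\",\n    \"options\": {\"key_type\": \"ed25519\"},\n})\nprint(resp.json()[\"result\"][\"did\"])  # e.g. did:key:z6Mk...\n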

"},{"location":"demo/AriesPostmanDemo/#issue-credentials","title":"Issue credentials","text":"

For issuing, you must input a w3c compliant json-ld credential and issuance options in your request body. The issuer field must be a registered did from the agent's wallet. The suite will be derived from the did method.

{\n    \"credential\":   { \n        \"@context\": [\n            \"https://www.w3.org/2018/credentials/v1\"\n        ],\n        \"type\": [\n            \"VerifiableCredential\"\n        ],\n        \"issuer\": \"did:example:123\",\n        \"issuanceDate\": \"2022-05-01T00:00:00Z\",\n        \"credentialSubject\": {\n            \"id\": \"did:example:123\"\n        }\n    },\n    \"options\": {}\n}\n

Some examples have been pre-configured in the collection. Run the requests and inspect the results. Experiment with different credentials.

"},{"location":"demo/AriesPostmanDemo/#store-and-retrieve-credentials","title":"Store and retrieve credentials","text":"

Your last issued credential will be stored as an environment variable for subsequent calls, such as storing, verifying and including in a presentation.

Try running the store credential request, then retrieve the credential with the list and fetch requests. Try going back and forth between the issuance endpoints and the storage endpoints to store multiple different credentials.

"},{"location":"demo/AriesPostmanDemo/#verify-credentials","title":"Verify credentials","text":"

You can use this endpoint to verify your last issued credential, or any other issued credential you provide to it.

"},{"location":"demo/AriesPostmanDemo/#prove-a-presentation","title":"Prove a presentation","text":"

Proving a presentation is an action where a holder will prove ownership of a credential by signing or demonstrating authority over the document.

"},{"location":"demo/AriesPostmanDemo/#verify-a-presentation","title":"Verify a presentation","text":"

The final request is to verify a presentation.

"},{"location":"demo/Endorser/","title":"Endorser Demo","text":"

There are two ways to run the alice/faber demo with endorser support enabled.

"},{"location":"demo/Endorser/#run-faber-as-an-author-with-a-dedicated-endorser-agent","title":"Run Faber as an Author, with a dedicated Endorser agent","text":"

This approach runs Faber as an un-privileged agent, and starts a dedicated Endorser Agent in a sub-process (an instance of ACA-Py) to endorse Faber's transactions.

Start a VON Network instance and a Tails server.

Start up Faber as Author (note the tails file size override, to allow testing of the revocation registry roll-over):

TAILS_FILE_COUNT=5 ./run_demo faber --endorser-role author --revocation\n

Start up Alice as normal:

./run_demo alice\n

You can run all of Faber's functions as normal - if you watch the console you will see that all ledger operations go through the endorser workflow.

If you issue more than 5 credentials, you will see Faber creating a new revocation registry (including endorser operations).

"},{"location":"demo/Endorser/#run-alice-as-an-author-and-faber-as-an-endorser","title":"Run Alice as an Author and Faber as an Endorser","text":"

This approach sets up the endorser roles to allow manual testing using the agents' swagger pages:

Start a VON Network and a Tails server using the instructions above.

Start up Faber as Endorser:

TAILS_FILE_COUNT=5 ./run_demo faber --endorser-role endorser --revocation\n

Start up Alice as Author:

TAILS_FILE_COUNT=5 ./run_demo alice --endorser-role author --revocation\n

Copy the invitation from Faber to Alice to complete the connection.

Then in the Alice shell, select option \"D\" and copy Faber's DID (it is the DID displayed on faber agent startup).

This starts up the ACA-Py agents with the endorser role set (via the new command-line args) and sets up the connection between the 2 agents with appropriate configuration.

Then, in the Alice swagger page you can create a schema and cred def, and all the endorser steps will happen automatically. You don't need to specify a connection id or explicitly request endorsement (ACA-Py does it all automatically based on the startup args).

If you check the endorser transaction records in either Alice or Faber, you can see that the endorser protocol executes automatically and that the appropriate transactions were endorsed before being written to the ledger.

"},{"location":"demo/ReusingAConnection/","title":"Reusing a Connection","text":"

The Aries RFC 0434 Out of Band protocol enables the concept of reusing a connection such that, when using RFC 0023 DID Exchange to establish a connection with an agent with which you already have a connection, you can reuse the existing connection instead of creating a new one. This is something you couldn't do with the older RFC 0160 Connection Protocol that we used in the early days of Aries. It was a pain, and made for a lousy user experience, as on every visit to an existing contact, the invitee got a new connection.

The requirements on your invitations (such as in the example below) are:

Example invitation:

{\n    \"@type\": \"https://didcomm.org/out-of-band/1.1/invitation\",\n    \"@id\": \"77489d63-caff-41fe-a4c1-ec7e2ff00695\",\n    \"label\": \"faber.agent\",\n    \"handshake_protocols\": [\n        \"https://didcomm.org/didexchange/1.0\"\n    ],\n    \"services\": [\n        \"did:sov:4JiUsoK85pVkkB1bAPzFaP\"\n    ]\n}\n

Here's the flow that demonstrates where reuse helps. For simplicity, we'll use the terms \"Issuer\" and \"Wallet\" in this example, but it applies to any connection between any two agents (the inviter and the invitee) that establish connections with one another.

The RFC 0434 Out of Band protocol requirement that enables the reuse message to be used by the invitee (the Wallet in the flow above) is that the service in the invitation MUST be a resolvable DID that is the same in all of the invitations. In the example invitation above, the DID is a did:sov DID that is resolvable on a public Hyperledger Indy network. The DID could also be a Peer DID of type 2 or 4, which encode the entire DIDDoc contents into the DID identifier (thus they are \"resolvable DIDs\"). What cannot be used is either the old \"unqualified\" DIDs that were commonly used in Aries prior to 2024, or Peer DIDs of type 1. Both of those DID types include both an identifier and a DIDDoc in the services item of the Out of Band invitation. As noted in the Out of Band specification, reuse cannot be used with such DID types even if the contents are the same.

The use of connection reuse can be demonstrated with the Alice / Faber demos as follows. We assume you are already somewhat familiar with your options for running the Alice Faber Demo (e.g., locally or in a browser). Follow those instructions up to the point where you are about to start the Faber and Alice agents.

  1. On a command line, run Faber with these parameters: ./run_demo faber --reuse-connections --events.
  2. On a second command line, run Alice as normal, perhaps with the events option: ./run_demo alice --events
  3. Copy the invitation from the Faber terminal and paste it into the Alice terminal at the prompt.
  4. Verify that the connection was established.
  5. If you want, go to the Alice OpenAPI screen (port 8031, path api/docs), and then use the GET Connections to see that Alice has one connection to Faber.
  6. In the Alice terminal, type 4 to get a prompt for a new connection, and paste the same invitation as in Step 3 (above).
  7. Note from the webhook events in the Faber terminal that the reuse message is received from Alice, and as a result, no new connection was created.
  8. Execute again the GET Connections endpoint on the Alice OpenAPI screen to confirm that there is still just one established connection.
  9. In the Faber terminal, type 4 to get a new invitation, copy the invitation, in the Alice terminal, type 4 to get prompted for an invitation, and paste in the new invitation from Faber. Again, the reuse webhook event will be visible in the Faber terminal.
  10. Execute again the GET Connections endpoint on the Alice OpenAPI screen to confirm that there is still just one established connection.
  11. Notice that the invitations in Steps 3 and 9 both have the same DID in the services.
  12. Try running the demo again without the --reuse-connections parameter and compare the services value in the new invitation vs. what was generated in Steps 3 and 9. It is not a DID, but rather a one-time-use, inline DIDDoc item.

While in the demo Faber uses in its invitations the same DID it publishes as an issuer (and uses in creating the schema and Cred Def for the demo), Faber could use any resolvable (not inline) DID, including DID Peer type 2 or type 4 DIDs, as long as the DID is the same in every invitation. It is the fact that the DID is always the same that tells the invitee that it can reuse an existing connection.

Note that the invitation does NOT have to be a multi-use invitation for reuse to be useful, as long as the other requirements (at the top of this document) are met.

"},{"location":"deploying/AnonCredsWalletType/","title":"AnonCreds-RS Support","text":"

A new wallet type has been added to ACA-Py to support the new anoncreds-rs library:

--wallet-type askar-anoncreds\n

When ACA-Py is run with this wallet type, it will run with an Askar format wallet (and Askar libraries) but will use anoncreds-rs instead of credx.

There is a new package under aries_cloudagent/anoncreds with code that supports the new library.

There are new endpoints (under /anoncreds) for creating a Schema and Credential Definition. However, the new anoncreds code is integrated into the existing Credential and Presentation endpoints (V2.0 endpoints only).
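
As a hypothetical sketch of the new endpoint shape (check the agent's Swagger page for the exact request schema; the field names here are assumptions):

import requests\n\nAGENT = \"http://localhost:8021\"  # an agent started with --wallet-type askar-anoncreds\n\n# Hypothetical sketch of creating a schema via the new /anoncreds endpoints\nresp = requests.post(f\"{AGENT}/anoncreds/schema\", json={\n    \"schema\": {\n        \"issuerId\": \"REPLACE-WITH-ISSUER-DID\",\n        \"name\": \"degree schema\",\n        \"version\": \"1.0\",\n        \"attrNames\": [\"name\", \"date\", \"degree\"],\n    },\n})\nprint(resp.json())\n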

Within the protocols, there are new handler libraries to support the new anoncreds format (these are in parallel to the existing indy libraries).

The existing indy code is in:

aries_cloudagent/protocols/issue_credential/v2_0/formats/indy/handler.py\naries_cloudagent/protocols/indy/anoncreds/pres_exch_handler.py\naries_cloudagent/protocols/present_proof/v2_0/formats/indy/handler.py\n

The new anoncreds code is in:

aries_cloudagent/protocols/issue_credential/v2_0/formats/anoncreds/handler.py\naries_cloudagent/protocols/present_proof/anoncreds/pres_exch_handler.py\naries_cloudagent/protocols/present_proof/v2_0/formats/anoncreds/handler.py\n

The Indy handler checks to see if the wallet type is askar-anoncreds and if so delegates the calls to the anoncreds handler, for example:

        # Temporary shim while the new anoncreds library integration is in progress\n        wallet_type = profile.settings.get_value(\"wallet.type\")\n        if wallet_type == \"askar-anoncreds\":\n            self.anoncreds_handler = AnonCredsPresExchangeHandler(profile)\n

... and then:

        # Temporary shim while the new anoncreds library integration is in progress\n        if self.anoncreds_handler:\n            return self.anoncreds_handler.get_format_identifier(message_type)\n

To run the alice/faber demo using the new anoncreds library, start the demo with:

--wallet-type askar-anoncreds\n

There are no anoncreds-specific integration tests; for the new anoncreds functionality, the agents within the integration tests are started with:

--wallet-type askar-anoncreds\n

Everything should just work!!!

Theoretically ATH should work with anoncreds as well, by setting the wallet type (see https://github.com/hyperledger/aries-agent-test-harness#extra-backchannel-specific-parameters).

"},{"location":"deploying/AnonCredsWalletType/#revocation-new-in-anoncreds","title":"Revocation (new in anoncreds)","text":"

The changes are significant. Notably:

The Tails File changes are minimal -- nothing about the file itself changed. What changed:

"},{"location":"deploying/AnonCredsWalletType/#outstanding-work","title":"Outstanding work","text":""},{"location":"deploying/AnonCredsWalletType/#retiring-old-indy-and-askar-credx-code","title":"Retiring old Indy and Askar (credx) Code","text":"

The main changes for the Credential and Presentation support are in the following two files:

aries_cloudagent/protocols/issue_credential/v2_0/messages/cred_format.py\naries_cloudagent/protocols/present_proof/v2_0/messages/pres_format.py\n

The INDY handler just needs to be re-pointed to the new anoncreds handler, and then all the old Indy code can be retired.

The new code is already in place (in comments). For example, for the Credential handler:

        To make the switch from indy to anoncreds replace the above with the following\n        INDY = FormatSpec(\n            \"hlindy/\",\n            DeferLoad(\n                \"aries_cloudagent.protocols.present_proof.v2_0\"\n                \".formats.anoncreds.handler.AnonCredsPresExchangeHandler\"\n            ),\n        )\n

There is a bunch of duplicated code, i.e. the new anoncreds code was added either as new classes (as above) or as new methods within an existing class.

Some new methods were added within the Ledger class.

New unit tests were added - in some cases as methods within existing test classes, and in some cases as new classes (whichever was easiest at the time).

"},{"location":"deploying/ContainerImagesAndGithubActions/","title":"Container Images and Github Actions","text":"

Aries Cloud Agent - Python is most frequently deployed using containers. From the first release of ACA-Py up through 0.7.4, much of the community has built their Aries stack using the container images graciously provided by BC Gov and hosted through their bcgovimages docker hub account. These images have been critical to the adoption of not only ACA-Py but also Hyperledger Aries and SSI more generally.

Recognizing how critical these images are to the success of ACA-Py and consistent with Hyperledger's commitment to open collaboration, container images are now built and published directly from the Aries Cloud Agent - Python project repository and made available through the Github Packages Container Registry.

"},{"location":"deploying/ContainerImagesAndGithubActions/#image","title":"Image","text":"

This project builds and publishes the ghcr.io/hyperledger/aries-cloudagent-python image. Multiple variants are available; see Tags.

"},{"location":"deploying/ContainerImagesAndGithubActions/#tags","title":"Tags","text":"

ACA-Py is a foundation for building decentralized identity applications; to this end, there are multiple variants of ACA-Py built to suit the needs of a variety of environments and workflows. There are currently two main variants:

These two image variants are largely distinguished by providers for Indy Network and AnonCreds support. The Standard variant is recommended for new projects. Migration from an Indy based image (whether the new Indy image variant or the original BC Gov images) to the Standard image is outside of the scope of this document.

The ACA-Py images built by this project are tagged to indicate which of the above variants they are. Other tags may also be generated for use by developers.

Below is a table of all generated images and their tags:

| Tag | Variant | Example | Description |
| --- | --- | --- | --- |
| py3.9-X.Y.Z | Standard | py3.9-0.7.4 | Standard image variant built on Python 3.9 for ACA-Py version X.Y.Z |
| py3.10-X.Y.Z | Standard | py3.10-0.7.4 | Standard image variant built on Python 3.10 for ACA-Py version X.Y.Z |
| py3.9-indy-A.B.C-X.Y.Z | Indy | py3.9-indy-1.16.0-0.7.4 | Indy image variant built on Python 3.9 for ACA-Py version X.Y.Z and Indy SDK Version A.B.C |
| py3.10-indy-A.B.C-X.Y.Z | Indy | py3.10-indy-1.16.0-0.7.4 | Indy image variant built on Python 3.10 for ACA-Py version X.Y.Z and Indy SDK Version A.B.C |"},{"location":"deploying/ContainerImagesAndGithubActions/#image-comparison","title":"Image Comparison","text":"

There are several key differences that should be noted between the two image variants and between the BC Gov ACA-Py images.

"},{"location":"deploying/ContainerImagesAndGithubActions/#github-actions","title":"Github Actions","text":""},{"location":"deploying/Databases/","title":"Databases","text":"

Your wallet stores secret keys, connections and other information. You have different choices for storing this information. The wallet supports two different databases: SQLite and PostgreSQL.

"},{"location":"deploying/Databases/#sqlite","title":"SQLite","text":"

If the wallet is configured the default way in, e.g., demo-args.yaml, without explicit wallet-storage, a SQLite database file is used.

# demo-args.yaml\nwallet-type: indy\nwallet-name: wallet\nwallet-key: wallet-password\n

For this configuration, a folder called wallet will be created which contains a file called sqlite.db.

"},{"location":"deploying/Databases/#postgresql","title":"PostgreSQL","text":"

The wallet can be configured to use PostgreSQL as storage.

# demo-args.yaml\nwallet-type: indy\nwallet-name: wallet\nwallet-key: wallet-password\n\nwallet-storage-type: postgres_storage\nwallet-storage-config: \"{\\\"url\\\":\\\"db:5432\\\",\\\"wallet_scheme\\\":\\\"DatabasePerWallet\\\"}\"\nwallet-storage-creds: \"{\\\"account\\\":\\\"postgres\\\",\\\"password\\\":\\\"mysecretpassword\\\",\\\"admin_account\\\":\\\"postgres\\\",\\\"admin_password\\\":\\\"mysecretpassword\\\"}\"\n

In this case the hostname for the database is db on port 5432.

A docker-compose file could look like this:

# docker-compose.yml\nversion: '3'\nservices:\n  # acapy ...\n  # database\n  db:\n    image: postgres:10\n    environment:\n      POSTGRES_PASSWORD: mysecretpassword\n      POSTGRES_USER: postgres\n      POSTGRES_DB: postgres\n    ports:\n      - \"5432:5432\"\n
"},{"location":"deploying/IndySDKtoAskarMigration/","title":"Migrating from Indy SDK to Askar","text":"

This document summarizes why the Indy SDK is being deprecated, its replacement (Aries Askar and the \"shared components\"), how to use Aries Askar in a new ACA-Py deployment, and the migration process for an ACA-Py instance that is already deployed using the Indy SDK.

"},{"location":"deploying/IndySDKtoAskarMigration/#the-time-has-come-archiving-indy-sdk","title":"The Time Has Come! Archiving Indy SDK","text":"

Yes, it\u2019s time. Indy SDK needs to be archived! In this article we\u2019ll explain why this change is needed, why Aries Askar is a faster, better replacement, and how to transition your Indy SDK-based ACA-Py deployment to Askar as soon as possible.

"},{"location":"deploying/IndySDKtoAskarMigration/#history-of-indy-sdk","title":"History of Indy SDK","text":"

Indy SDK has been the basis of Hyperledger Indy and Hyperledger Aries clients accessing Indy networks for a long time. It has done an excellent job at exactly what you might imagine: being the SDK that enables clients to leverage the capabilities of a Hyperledger Indy ledger.

Its continued use has been all the more remarkable given that the last published release of the Indy SDK was in 2020. This speaks to the quality of the implementation \u2014 it just kept getting used, doing what it was supposed to do, and without major bugs, vulnerabilities or demands for new features.

However, the architecture of Indy SDK has critical bottlenecks. Most notably, as load increases, Indy SDK performance drops. And with Indy-based ecosystems flourishing and loads exponentially increasing, this means the Aries/Indy community needed to make a change.

"},{"location":"deploying/IndySDKtoAskarMigration/#aries-askar-and-the-shared-components","title":"Aries Askar and the Shared Components","text":"

The replacement for the Indy SDK is a set of four components, each replacing a part of Indy SDK. (In retrospect, Indy SDK ought to have been split up this way from the start.)

The components are:

  1. Aries Askar: the replacement for the \u201cindy-wallet\u201d part of Indy SDK. Askar is a key management service, handling the creation and use of private keys managed by Aries agents. It\u2019s also the secure storage for DIDs, verifiable credentials, and data used by issuers of verifiable credentials for signing. As the Aries moniker indicates, Askar is suitable for use with any Aries agent, and for managing any keys, whether for use with Indy or any other Verifiable Data Registry (VDR).
  2. Indy VDR: the interface to publishing to and retrieving data from Hyperledger Indy networks. Indy VDR is scoped at the appropriate level for any client application using Hyperledger Indy networks.
  3. CredX: a Rust implementation of AnonCreds that evolved from the Indy SDK implementation. CredX is within the indy-shared-rs repository. It has significant performance enhancements over the version in the Indy SDK, particularly for Issuers.
  4. Hyperledger AnonCreds: a newer implementation of AnonCreds that is \u201cledger-agnostic\u201d \u2014 it can be used with Hyperledger Indy and any other suitable verifiable data registry.

In ACA-Py, we are currently using CredX, but will be moving to Hyperledger AnonCreds soon.

If you\u2019re involved in the community, you\u2019ll know we\u2019ve been planning this replacement for almost three years. The first release of the Aries Askar and related components was in 2021. At the end of 2022 there was a concerted effort to eliminate the Indy SDK by creating migration scripts, and removing the Indy SDK from various tools in the community (the Indy CLI, the Indy Test Automation pipeline, and so on). This step is to finish the task.

"},{"location":"deploying/IndySDKtoAskarMigration/#performance","title":"Performance","text":"

What\u2019s the performance and stability of the replacement? In short, it\u2019s dramatically better. Overall Aries Askar performance is faster, and as the load increases the performance remains constant. Combined with added flexibility and modularization, the community is very positive about the change.

"},{"location":"deploying/IndySDKtoAskarMigration/#new-aca-py-deployments","title":"New ACA-Py Deployments","text":"

If you are new to ACA-Py, the instructions are easy. Use Aries Askar and the shared components from the start. To do that, simply make sure that you are using the --wallet-type askar configuration parameter. You will automatically be using all of the shared components.

As of release 0.9.0, you will get a deprecation warning when you start ACA-Py with the Indy SDK. Switch to Aries Askar to eliminate that warning.

"},{"location":"deploying/IndySDKtoAskarMigration/#migrating-existing-indy-sdk-aca-py-deployments-to-askar","title":"Migrating Existing Indy SDK ACA-Py Deployments to Askar","text":"

If you have an existing deployment, then in changing the --wallet-type configuration setting, your database must be migrated from the Indy SDK format to the Aries Askar format. In order to facilitate the migration, an Indy SDK to Askar migration script has been published in the aries-acapy-tools repository. There is a lot of information in that repository about the migration tool and how to use it. The following is a summary of the steps you will have to perform. Of course, all deployments are a little (or a lot!) different, and your exact steps will be dependent on where and how you have deployed ACA-Py.

Note that in these steps you will have to take your ACA-Py instance offline, so scheduling the maintenance must be a part of your migration plan. You will also want to script the entire process so that downtime and risk of manual mistakes are minimized.

We hope that you have one or two test environments (e.g., Dev and Test) to run through these steps before upgrading your production deployment. As well, it is good if you can make a copy of your production database and test the migration on the real (copy) database before the actual upgrade.

askar-upgrade \\\n  --strategy dbpw \\\n  --uri postgres://<username>:<password>@<hostname>:<port>/<dbname> \\\n  --wallet-name <wallet name> \\\n  --wallet-key <wallet key>\n

It is very important that the Askar Upgrade script has direct access to the database. In our very first upgrade attempt, we ran the Askar Upgrade script from a container running outside of our container orchestration platform (OpenShift) using port forwarding. The script ran EXTREMELY slowly, taking literally hours to run before we finally stopped it. Once we ran the script inside the OpenShift environment, the script ran (for the same database) in about 7 minutes. The entire app downtime was less than 20 minutes.

"},{"location":"deploying/IndySDKtoAskarMigration/#questions","title":"Questions?","text":"

If you have questions, comments, or suggestions about the upgrade process, please use the Aries Cloud Agent Python channel on Hyperledger Discord, or submit a GitHub issue to the ACA-Py repository.

"},{"location":"deploying/Poetry/","title":"Poetry Cheat Sheet for Developers","text":""},{"location":"deploying/Poetry/#introduction-to-poetry","title":"Introduction to Poetry","text":"

Poetry is a dependency management and packaging tool for Python that aims to simplify and enhance the development process. It offers features for managing dependencies, virtual environments, and building and publishing Python packages.

"},{"location":"deploying/Poetry/#virtual-environments-with-poetry","title":"Virtual Environments with Poetry","text":"

Poetry manages virtual environments for your projects to ensure clean and isolated development environments.

"},{"location":"deploying/Poetry/#creating-a-virtual-environment","title":"Creating a Virtual Environment","text":"
poetry install\n
"},{"location":"deploying/Poetry/#activating-the-virtual-environment","title":"Activating the Virtual Environment","text":"
poetry shell\n

Alternatively, you can source the environment settings in the current shell:

source $(poetry env info --path)/bin/activate\n

For PowerShell users this would be:

& ((poetry env info --path) + \"\\Scripts\\activate.ps1\")\n
"},{"location":"deploying/Poetry/#deactivating-the-virtual-environment","title":"Deactivating the Virtual Environment","text":"

When using poetry shell

exit\n

When using the activate script

deactivate\n
"},{"location":"deploying/Poetry/#dependency-management","title":"Dependency Management","text":"

Poetry uses the pyproject.toml file to manage dependencies. Add new dependencies to this file and update existing ones as needed.

"},{"location":"deploying/Poetry/#adding-a-dependency","title":"Adding a Dependency","text":"
poetry add package-name\n
"},{"location":"deploying/Poetry/#adding-a-development-dependency","title":"Adding a Development Dependency","text":"
poetry add --dev package-name\n
"},{"location":"deploying/Poetry/#removing-a-dependency","title":"Removing a Dependency","text":"
poetry remove package-name\n
"},{"location":"deploying/Poetry/#updating-dependencies","title":"Updating Dependencies","text":"
poetry update\n
"},{"location":"deploying/Poetry/#running-tasks-with-poetry","title":"Running Tasks with Poetry","text":"

Poetry provides a way to run scripts and commands without activating the virtual environment explicitly.

"},{"location":"deploying/Poetry/#running-a-command","title":"Running a Command","text":"
poetry run command-name\n
"},{"location":"deploying/Poetry/#running-a-script","title":"Running a Script","text":"
poetry run python script.py\n
"},{"location":"deploying/Poetry/#building-and-publishing-with-poetry","title":"Building and Publishing with Poetry","text":"

Poetry streamlines the process of building and publishing Python packages.

"},{"location":"deploying/Poetry/#building-the-package","title":"Building the Package","text":"
poetry build\n
"},{"location":"deploying/Poetry/#publishing-the-package","title":"Publishing the Package","text":"
poetry publish\n
"},{"location":"deploying/Poetry/#using-extras","title":"Using Extras","text":"

Extras allow you to specify additional dependencies based on project requirements.

"},{"location":"deploying/Poetry/#installing-with-extras","title":"Installing with Extras","text":"
poetry install -E extras-name\n

For example:

poetry install -E \"askar bbs indy\"\n
"},{"location":"deploying/Poetry/#managing-development-dependencies","title":"Managing Development Dependencies","text":"

Development dependencies are useful for tasks like testing, linting, and documentation generation.

"},{"location":"deploying/Poetry/#installing-development-dependencies","title":"Installing Development Dependencies","text":"
poetry install --with dev\n
"},{"location":"deploying/Poetry/#additional-resources","title":"Additional Resources","text":""},{"location":"deploying/RedisPlugins/","title":"ACA-Py Redis Plugins","text":""},{"location":"deploying/RedisPlugins/#aries-acapy-plugin-redis-events-redis_queue","title":"aries-acapy-plugin-redis-events redis_queue","text":"

It provides a mechanism to persist both inbound and outbound messages using Redis, deliver messages and webhooks, and dispatch events.

More details can be found here.

"},{"location":"deploying/RedisPlugins/#redis-queue-configuration-yaml","title":"Redis Queue configuration yaml","text":"
redis_queue:\n  connection: \n    connection_url: \"redis://default:test1234@172.28.0.103:6379\"\n\n  ### For Inbound ###\n  inbound:\n    acapy_inbound_topic: \"acapy_inbound\"\n    acapy_direct_resp_topic: \"acapy_inbound_direct_resp\"\n\n  ### For Outbound ###\n  outbound:\n    acapy_outbound_topic: \"acapy_outbound\"\n    mediator_mode: false\n\n  ### For Event ###\n  event:\n    event_topic_maps:\n      ^acapy::webhook::(.*)$: acapy-webhook-$wallet_id\n      ^acapy::record::([^:]*)::([^:]*)$: acapy-record-with-state-$wallet_id\n      ^acapy::record::([^:])?: acapy-record-$wallet_id\n      acapy::basicmessage::received: acapy-basicmessage-received\n      acapy::problem_report: acapy-problem_report\n      acapy::ping::received: acapy-ping-received\n      acapy::ping::response_received: acapy-ping-response_received\n      acapy::actionmenu::received: acapy-actionmenu-received\n      acapy::actionmenu::get-active-menu: acapy-actionmenu-get-active-menu\n      acapy::actionmenu::perform-menu-action: acapy-actionmenu-perform-menu-action\n      acapy::keylist::updated: acapy-keylist-updated\n      acapy::revocation-notification::received: acapy-revocation-notification-received\n      acapy::revocation-notification-v2::received: acapy-revocation-notification-v2-received\n      acapy::forward::received: acapy-forward-received\n    event_webhook_topic_maps:\n      acapy::basicmessage::received: basicmessages\n      acapy::problem_report: problem_report\n      acapy::ping::received: ping\n      acapy::ping::response_received: ping\n      acapy::actionmenu::received: actionmenu\n      acapy::actionmenu::get-active-menu: get-active-menu\n      acapy::actionmenu::perform-menu-action: perform-menu-action\n      acapy::keylist::updated: keylist\n    deliver_webhook: true\n
"},{"location":"deploying/RedisPlugins/#redis-plugin-usage","title":"Redis Plugin Usage","text":""},{"location":"deploying/RedisPlugins/#redis-plugin-with-docker","title":"Redis Plugin With Docker","text":"

Running the plugin with docker is simple. An example docker-compose.yml file is available which launches both ACA-Py (with the Redis plugin) and an accompanying Redis cluster.

docker-compose up --build -d\n

More details can be found here.

"},{"location":"deploying/RedisPlugins/#without-docker","title":"Without Docker","text":"

Installation

pip install git+https://github.com/bcgov/aries-acapy-plugin-redis-events.git\n

Start up ACA-Py with the redis_queue plugin loaded

docker network create --subnet=172.28.0.0/24 `network_name`\nexport REDIS_PASSWORD=\" ... As specified in redis_cluster.conf ... \"\nexport NETWORK_NAME=\"`network_name`\"\naca-py start \\\n    --plugin redis_queue.v1_0.events \\\n    --plugin-config plugins-config.yaml \\\n    -it redis_queue.v1_0.inbound redis 0 -ot redis_queue.v1_0.outbound\n    # ... the remainder of your startup arguments\n

Regardless of the options above, you will need to start up the deliverer and a relay/mediator service as a bridge to receive inbound messages. Consider the following when building your docker-compose file, which should also start up your Redis cluster:

Both relay and mediator demos are also available.

"},{"location":"deploying/RedisPlugins/#aries-acapy-cache-redis-redis_cache","title":"aries-acapy-cache-redis redis_cache","text":"

ACA-Py uses a modular cache layer to store key-value pairs of data. The purpose of this plugin is to allow ACA-Py to use Redis as the storage medium for its caching needs.

More details can be found here.

"},{"location":"deploying/RedisPlugins/#redis-cache-plugin-configuration-yaml","title":"Redis Cache Plugin configuration yaml","text":"
redis_cache:\n  connection: \"redis://default:test1234@172.28.0.103:6379\"\n  max_connection: 50\n  credentials:\n    username: \"default\"\n    password: \"test1234\"\n  ssl:\n    cacerts: ./ca.crt\n
"},{"location":"deploying/RedisPlugins/#redis-cache-usage","title":"Redis Cache Usage","text":""},{"location":"deploying/RedisPlugins/#redis-cache-using-docker","title":"Redis Cache Using Docker","text":""},{"location":"deploying/RedisPlugins/#redis-cache-without-docker","title":"Redis Cache Without Docker","text":"

Installation

pip install git+https://github.com/Indicio-tech/aries-acapy-cache-redis.git\n

Start up ACA-Py with the redis_cache plugin loaded

aca-py start \\\n    --plugin acapy_cache_redis.v0_1 \\\n    --plugin-config plugins-config.yaml \\\n    # ... the remainder of your startup arguments\n

or

aca-py start \\\n    --plugin acapy_cache_redis.v0_1 \\\n    --plugin-config-value \"redis_cache.connection=redis://redis-host:6379/0\" \\\n    --plugin-config-value \"redis_cache.max_connections=90\" \\\n    --plugin-config-value \"redis_cache.credentials.username=username\" \\\n    --plugin-config-value \"redis_cache.credentials.password=password\" \\\n    # ... the remainder of your startup arguments\n
"},{"location":"deploying/RedisPlugins/#redis-cluster","title":"Redis Cluster","text":"

If you start up a Redis cluster and an ACA-Py agent loaded with the redis_queue plugin, the redis_cache plugin, or both, then during plugin initialization an instance of redis.asyncio.RedisCluster will be bound to the root_profile. Other plugins will have access to this Redis client for their own use. This is done for efficiency and to avoid duplicating resources.

"},{"location":"deploying/UpgradingACA-Py/","title":"Upgrading ACA-Py Data","text":"

Some releases of ACA-Py may be improved by, or even require, an upgrade when moving to a new version. Such changes are documented in the CHANGELOG.md, and those with ACA-Py deployments should take note of those upgrades. This document summarizes the upgrade system in ACA-Py.

"},{"location":"deploying/UpgradingACA-Py/#version-information-and-automatic-upgrades","title":"Version Information and Automatic Upgrades","text":"

The file version.py contains the current version of a running instance of ACA-Py. In addition, a record is made in the ACA-Py secure storage (database) about the \"most recently upgraded\" version. When deploying a new version of ACA-Py, the version.py value will be higher than the version in secure storage. When that happens, an upgrade is executed, and on successful completion, the version is updated in secure storage to match what is in version.py.
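
As a rough illustration of that comparison (a minimal sketch using the packaging library; the helper below is illustrative, not ACA-Py's actual internal code):

from typing import Optional\n\nfrom packaging.version import Version\n\n\ndef upgrade_needed(stored_version: Optional[str], code_version: str) -> bool:\n    \"\"\"Decide whether upgrade actions must run before normal operation.\"\"\"\n    # A missing version record is treated as 0.7.5 (see \"Exceptions\" below)\n    effective = stored_version or \"0.7.5\"\n    return Version(code_version) > Version(effective)\n\n\n# upgrade_needed(None, \"0.8.1\") -> True; upgrade_needed(\"0.8.1\", \"0.8.1\") -> False\n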

Upgrades are defined in the Upgrade Definition YML file. For a given version listed in the file, the corresponding entry lists the actions required when upgrading from a previous version. If a version is not listed in the file, there is no upgrade defined for that version from its immediate predecessor version.

Once an upgrade is identified as needed, the process is:

"},{"location":"deploying/UpgradingACA-Py/#forced-offline-upgrades","title":"Forced Offline Upgrades","text":"

In some cases, it may be necessary to do an offline upgrade, where ACA-Py is taken offline temporarily, the database is upgraded explicitly, and then ACA-Py is re-deployed as normal. As yet, we do not have any use cases for this, but those deploying ACA-Py should be aware of the possibility. For example, we may at some point need an upgrade that MUST NOT be executed by more than one ACA-Py instance. In that case, a \"normal\" upgrade could be dangerous for deployments on container orchestration platforms like Kubernetes.

If the Maintainers of ACA-Py recognize a case where ACA-Py must be upgraded while offline, a new Upgrade feature will be added that will prevent the \"auto upgrade\" process from executing. See Issue 2201 and Pull Request 2204 for the status of that feature.

Those deploying ACA-Py upgrades for production installations (forced offline or not) should check in each CHANGELOG.md release entry about what upgrades (if any) will be run when upgrading to that version, and consider how they want those upgrades to run in their ACA-Py installation. In most cases, simply deploying the new version should be OK. If the number of records to be upgraded is high (such as a \"resave connections\" upgrade to a deployment with many, many connections), you may want to do a test upgrade offline first, to see if there is likely to be a service disruption during the upgrade. Plan accordingly!

"},{"location":"deploying/UpgradingACA-Py/#tagged-upgrades","title":"Tagged upgrades","text":"

Upgrades are defined in the Upgrade Definition YML file; in addition to specifying upgrade actions by version, they can also be specified by named tags. Unlike version-based upgrades, where all applicable version-based actions are performed in sorted version order, with named tags only the actions corresponding to the provided tags are performed. Note: --force-upgrade is required when running a named-tag-based upgrade (i.e., when providing --named-tag).

Tags are specified in the YML file as below:

fix_issue_rev_reg:\n  fix_issue_rev_reg_records: true\n

Example:

./scripts/run_docker upgrade --force-upgrade --named-tag fix_issue_rev_reg\n\n# To run multiple tags (say, test1 and test2):\n./scripts/run_docker upgrade --force-upgrade --named-tag test1 --named-tag test2\n
"},{"location":"deploying/UpgradingACA-Py/#subwallet-upgrades","title":"Subwallet upgrades","text":"

With multitenancy enabled, there is a subwallet associated with each tenant profile, so those subwallets need to be upgraded in addition to the base wallet associated with the root profile.

There are 2 options to perform such upgrades:

This will apply the upgrade steps to all subwallets (tenant profiles) and the base wallet (root profile).

This will apply the upgrade steps to the specified subwallets (identified by wallet ID) and the base wallet.

Note: multiple specifications are allowed.

"},{"location":"deploying/UpgradingACA-Py/#exceptions","title":"Exceptions","text":"

There are a couple of upgrade exception conditions to consider, as outlined in the following sections.

"},{"location":"deploying/UpgradingACA-Py/#no-version-in-secure-storage","title":"No version in secure storage","text":"

Versions prior to ACA-Py 0.8.1 did not automatically populate the secure storage \"version\" record. That only occurred if an upgrade was explicitly executed. As of ACA-Py 0.8.1, the version record is added immediately after the secure storage database is created. If you are upgrading to ACA-Py 0.8.1 or later, and there is no version record in the secure storage, ACA-Py will assume you are running version 0.7.5, and execute the upgrades from version 0.7.5 to the current version. The choice of 0.7.5 as the default is safe because the same upgrades will be run on any version of ACA-Py up to and including 0.7.5, as can be seen in the Upgrade Definition YML file. Thus, even if you are really upgrading from (for example) 0.6.2, the same upgrades are needed as from 0.7.5 to a post-0.8.1 version.

"},{"location":"deploying/UpgradingACA-Py/#forcing-an-upgrade","title":"Forcing an upgrade","text":"

If you need to force an upgrade from a given version of ACA-Py, a pair of configuration options can be used together. If you specify \"--from-version <ver>\" and \"--force-upgrade\", the --from-version version will override what is found (or not) in secure storage, and the upgrade will be from that version to the current one. For example, if you have \"0.8.1\" in your \"secure storage\" version, and you know that the upgrade for version 0.8.1 has not been executed, you can use the parameters --from-version v0.7.5 --force-upgrade to force the upgrade the next time an ACA-Py instance starts. However, given the few upgrades defined prior to version 0.8.1, and the \"no version in secure storage\" handling, it is unlikely this capability will ever be needed. We expect to deprecate and remove these options in future (post-0.8.1) ACA-Py versions.

"},{"location":"deploying/deploymentModel/","title":"Deployment Model","text":""},{"location":"deploying/deploymentModel/#aries-cloud-agent-python-aca-py-deployment-model","title":"Aries Cloud Agent-Python (ACA-Py) - Deployment Model","text":"

This document is a \"concept of operations\" for an instance of an Aries cloud agent deployed from the primary artifact (a PyPi package) produced by this repo. In such a deployment there are always two components - a configured agent itself, and a controller that injects into that agent the business rules for the particular agent instance (see diagram).

The deployed agent messages with other agents via DIDComm protocols, and as events associated with those messages occur, sends webhook HTTP notifications to the controller. The agent also exposes for the controller's exclusive use an HTTP API covering all of the administrative handlers for those events. The controller receives the notifications from the agent, decides (with business rules - possible by asking a person using a UI) how to respond to the event and calls back to the agent via the HTTP API. Of course, the controller may also initiate events (e.g. messaging another agent) by calling that same API.

The following is an example of the interactions involved in creating a connection using the DIDComm \"Establish Connection\" protocol. The controller requests a connection invitation from the agent (via the administrative API) and receives one back. The controller provides it to another agent (perhaps by displaying it in a QR code). Shortly after, the agent receives a DIDComm \"Connection Request\" message, which it sends to the controller. The controller decides to accept the connection and calls the API with instructions to the agent to send a \"Connection Response\" message to the other agent. Since the controller always wants to know with whom a connection has been created, it also sends instructions to the agent (via the API, of course) to send a request presentation message to the new connection. And so on... During the interactions, the agent tracks the state of the connections and the state of the protocol instances (threads). Likewise, the controller may also retain state - after all, it's an application that could do anything.

Most developers will configure a \"black box\" instance of ACA-Py. They need to know how it works, the DIDComm protocols it supports, the events it will generate, and the administrative API it exposes. However, they don't need to drill into and maintain the ACA-Py code. Such developers will build controller applications (basically, traditional web apps) that, at their simplest, use an HTTP interface to receive notifications and send HTTP requests to the agent. It's the business logic implemented in, or accessed by, the controller that gives the deployment its personality and role.

Note: the ACA-Py agent is designed to be stateless, persisting connection and protocol state to storage (such as a Postgres database). As such, agents can be deployed to support horizontal scaling as necessary. Controllers can also be implemented to support horizontal scaling.

The sections below detail the internals of ACA-Py and its configurable elements, and the conceptual elements of a controller. There is no \"Aries controller\" repo to fork, as a controller is essentially just a web app. There are demos of using the elements in this repo, and several sample applications that you can use to get started on your own controller.

"},{"location":"deploying/deploymentModel/#aries-cloud-agent","title":"Aries Cloud Agent","text":"

The Aries cloud agent implements services to manage the execution of DIDComm messaging protocols for interacting with other DIDComm agents, and exposes an administrative HTTP API that supports a controller in directing how the agent should respond to messaging events. The agent relies on the controller to provide the business rules for handling the messaging events, and to initiate the execution of new DIDComm protocol instances. The internals of an ACA-Py instance are diagrammed below.

Instances of the Aries cloud agents are configured with the following sub-components:

"},{"location":"deploying/deploymentModel/#controller","title":"Controller","text":"

A controller provides the personality of an Aries cloud agent instance - the business logic (human, machine, or rules driven) that drives the behaviour of the agent. The controller's \"Business Logic\" in a cloud agent could be built into the controller app, could be an integration back to an enterprise system, or could even be a user interface for an individual. In all cases, the business logic provides responses to agent events or initiates agent actions. A deployed controller talks to a single Aries cloud agent deployment and manages the configuration of that agent. Both can be configured and deployed to support horizontal scaling.

Generically, a controller is a web app invoked by HTTP webhook calls from its corresponding Aries cloud agent and invoking the DIDComm administration capabilities of the Aries cloud agent by calling the REST API exposed by that cloud agent. As well as responding to Aries cloud agent events, the controller initiates DIDComm protocol instances using the same REST API.

The controller and Aries cloud agent deployment MUST secure the HTTP interface between the two components. The interface provides the same HTTP integration between services as modern apps found in any enterprise today, and must be correspondingly secured.

A controller implements the following capabilities.

While there are several examples of controllers, there is no \u201ccookie cutter\u201d repository to fork and customize. A controller is just a web service that receives HTTP requests (webhooks) and sends HTTP messages to the Aries cloud agent it controls via the REST API exposed by that agent.

"},{"location":"deploying/deploymentModel/#deployment","title":"Deployment","text":"

The Aries cloud agent CI pipeline configured into the repository generates a PyPi package as an artifact. Implementers will generally have a controller repository, possibly copied from an existing controller instance, that has the code (business logic) for the controller and the configuration (transports, handlers, DIDComm protocols, etc.) for the Aries cloud agent instance. In the most common scenario, the Aries cloud agent and controller instances will be deployed based on the artifacts (e.g. container images) generated from that controller repository. With the simple HTTP-based interface between the controller and Aries cloud agent, both components can be horizontally scaled as needed, with a load balancer between the components. The configuration of the Aries cloud agent to use the Postgres wallet supports enterprise scale agent deployments.

Current examples of deployed instances of Aries cloud agent and controllers include:

"},{"location":"design/AnoncredsW3CCompatibility/","title":"Supporting AnonCreds in W3C VC/VP Formats in Aries Cloud Agent Python","text":"

This design proposes to extend Aries Cloud Agent Python (ACA-Py) to support Hyperledger AnonCreds credentials and presentations in the W3C Verifiable Credentials (VC) and Verifiable Presentations (VP) formats. The aim is to transition from the legacy AnonCreds format specified in Aries-Legacy-Method to the W3C VC format.

"},{"location":"design/AnoncredsW3CCompatibility/#overview","title":"Overview","text":"

The pre-requisites for the work are:

As of 2024-01-15, these pre-requisites have been met.

"},{"location":"design/AnoncredsW3CCompatibility/#impacts-on-aca-py","title":"Impacts on ACA-Py","text":""},{"location":"design/AnoncredsW3CCompatibility/#issuer","title":"Issuer","text":"

Issuer support needs to be added for using the RFC 0809 VC-DI attachment format when sending Issue Credential v2.0 protocol offer and issue messages, and when receiving request messages.

Related notes:

A mechanism must be defined such that an Issuer controller can use the ACA-Py Admin API to initiate the sending of an AnonCreds credential Offer using the RFC 0809 VC-DI attachment format.

A credential's encoded attributes are not included in the issued AnonCreds W3C VC format credential. It is still to be determined how that impacts the issuing process.

"},{"location":"design/AnoncredsW3CCompatibility/#verifier","title":"Verifier","text":"

A verifier wanting a W3C VP Format presentation will send the Present Proof v2.0 request message with an RFC 0510 DIF Presentation Exchange format attachment.

If needed, the RFC 0510 DIF Presentation Exchange document will be clarified and possibly updated to enable its use for handling AnonCreds W3C VP format presentations.

An AnonCreds W3C VP format presentation does not include the encoded revealed attributes, and the encoded values must be calculated as needed. It is still to be determined where those would be needed.
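
For reference, the standard AnonCreds attribute encoding can be computed as follows; this is an illustrative sketch of the algorithm described in the AnonCreds specification, not ACA-Py's internal implementation:

import hashlib\n\n\ndef encode_attribute(value) -> str:\n    \"\"\"32-bit integers pass through; anything else becomes the big-endian\n    integer form of the SHA-256 hash of the value's string representation.\"\"\"\n    if isinstance(value, int) and -(2**31) <= value < 2**31:\n        return str(value)\n    digest = hashlib.sha256(str(value).encode(\"utf-8\")).digest()\n    return str(int.from_bytes(digest, \"big\"))\n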

"},{"location":"design/AnoncredsW3CCompatibility/#holder","title":"Holder","text":"

A holder must support RFC 0809 VC-DI attachments when receiving Issue Credential v2.0 offer and issue messages, and when sending request messages.

On receiving an Issue Credential v2.0 offer message with an RFC 0809 VC-DI attachment, the holder MUST respond using RFC 0809 VC-DI attachments on the subsequent request message.

On receiving a credential from an issuer in an RFC 0809 VC-DI attachment, the holder must process and store the credential for subsequent use in presentations.

On receiving an RFC 0510 DIF Presentation Exchange request message, a holder must include AnonCreds verifiable credentials in the search for credentials satisfying the request, and if found and selected for use, must construct the presentation using the RFC 0510 DIF Presentation Exchange presentation format, with an embedded AnonCreds W3C VP format presentation.

"},{"location":"design/AnoncredsW3CCompatibility/#issues-to-consider","title":"Issues to consider","text":""},{"location":"design/AnoncredsW3CCompatibility/#flow-chart","title":"Flow Chart","text":""},{"location":"design/AnoncredsW3CCompatibility/#key-questions","title":"Key Questions","text":""},{"location":"design/AnoncredsW3CCompatibility/#what-is-the-roadmap-for-delivery-what-will-we-build-first-then-second","title":"What is the roadmap for delivery? What will we build first, then second?","text":"

It appears that the issue and presentation sides can be approached independently, assuming that any stored AnonCreds VC can be used in an AnonCreds W3C VP format presentation.

"},{"location":"design/AnoncredsW3CCompatibility/#issue-credential","title":"Issue Credential","text":"
  1. Update Admin API endpoints to initiate an Issue Credential v2.0 protocol to issue an AnonCreds credential in W3C VC format using RFC 0809 VC-DI format attachments.
  2. Add support for the RFC 0809 VC-DI message attachment formats.
  3. Should the attachment format be made pluggable as part of this? From the maintainers: If we did make it pluggable, this would be the point where that would take place. Since these values are hard coded, it is not pluggable currently, as noted. I've been dissatisfied with how this particular piece works for a while. I think making it pluggable, if done right, could help clean it up nicely. A plugin would then define their own implementation of V20CredFormatHandler. (@dbluhm)
  4. Update the v2.0 Issue Credential protocol handler to support an \"RFC 0809 VC-DI mode\" such that when a protocol instance starts with that format, it continues with it until completion, supporting issuing AnonCreds credentials in the process. This includes both the sending and receiving of all protocol message types.
"},{"location":"design/AnoncredsW3CCompatibility/#present-proof","title":"Present Proof","text":"
  1. Adjust as needed the sending of a Present Proof request using the RFC 0510 DIF Presentation Exchange with support (to be defined) for requesting AnonCreds VCs.
  2. Adjust as needed the processing of a Present Proof request message with an RFC 0510 DIF Presentation Exchange attachment so that AnonCreds VCs can be found and used in the subsequent response.
  3. AnonCreds VCs issued as legacy or W3C VC format credentials should be usable in AnonCreds W3C VP format presentations.
  4. Update the creation of an RFC 0510 DIF Presentation Exchange presentation submission to support the use of AnonCreds VCs as the source of the VPs.
  5. Update the verifier receipt of a Present Proof v2.0 presentation message with an RFC 0510 DIF Presentation Exchange containing AnonCreds W3C VP(s) derived from AnonCreds source VCs.
"},{"location":"design/AnoncredsW3CCompatibility/#what-are-the-functions-we-are-going-to-wrap","title":"What are the functions we are going to wrap?","text":"

After thoroughly reviewing the upcoming changes from anoncreds-rs PR273, the classes (AnoncredsObjects) impacted by the changes are as follows:

W3CCredential

W3CPresentation

They will be added to __init__.py as additional exports of AnoncredsObject.

We also have to consider which classes or anoncreds objects have been modified.

The classes modified according to the same PR mentioned above are:

Credential

PresentCredential

"},{"location":"design/AnoncredsW3CCompatibility/#creating-a-w3c-vc-credential-from-credential-definition-and-issuing-and-presenting-it-as-is","title":"Creating a W3C VC credential from credential definition, and issuing and presenting it as is","text":"

The issuance, presentation and verification of legacy anoncreds are implemented in this ./aries_cloudagent/anoncreds directory. Therefore, we will also start from there.

Let us navigate these implementation examples through the respective processes of the agents concerned - Issuer and Holder - as described in https://github.com/hyperledger/anoncreds-rs/blob/main/README.md. We will proceed through the following processes in comparison with the legacy anoncreds implementations, watching out for signature differences between the two. Looking at the /anoncreds/issuer.py file, from the AnonCredsIssuer class:

Create VC_DI Credential Offer

According to this DI credential offer attachment format - didcomm/w3c-di-vc-offer@v0.1,

could be the parameters for the create_offer method.

Create VC_DI Credential

NOTE: There have been some changes to the encoding of attribute values when creating a credential, so we have to adjust to those changes.

async def create_credential(\n        self,\n        credential_offer: dict,\n        credential_request: dict,\n        credential_values: dict,\n    ) -> str:\n...\n...\n  try:\n    credential = await asyncio.get_event_loop().run_in_executor(\n        None,\n        lambda: W3CCredential.create(\n            cred_def.raw_value,\n            cred_def_private.raw_value,\n            credential_offer,\n            credential_request,\n            raw_values,\n            None,\n            None,\n            None,\n            None,\n        ),\n    )\n...\n

Create VC_DI Credential Request

async def create_vc_di_credential_request(\n        self, credential_offer: dict, credential_definition: CredDef, holder_did: str\n    ) -> Tuple[str, str]:\n...\n...\ntry:\n  secret = await self.get_master_secret()\n  (\n      cred_req,\n      cred_req_metadata,\n  ) = await asyncio.get_event_loop().run_in_executor(\n      None,\n      W3CCredentialRequest.create,\n      None,\n      holder_did,\n      credential_definition.to_native(),\n      secret,\n      AnonCredsHolder.MASTER_SECRET_ID,\n      credential_offer,\n  )\n...\n

Create VC_DI Credential Presentation

async def create_vc_di_presentation(\n        self,\n        presentation_request: dict,\n        requested_credentials: dict,\n        schemas: Dict[str, AnonCredsSchema],\n        credential_definitions: Dict[str, CredDef],\n        rev_states: dict = None,\n    ) -> str:\n...\n...\n  try:\n    secret = await self.get_master_secret()\n    presentation = await asyncio.get_event_loop().run_in_executor(\n        None,\n        Presentation.create,\n        presentation_request,\n        present_creds,\n        self_attest,\n        secret,\n        {\n            schema_id: schema.to_native()\n            for schema_id, schema in schemas.items()\n        },\n        {\n            cred_def_id: cred_def.to_native()\n            for cred_def_id, cred_def in credential_definitions.items()\n        },\n    )\n...\n
"},{"location":"design/AnoncredsW3CCompatibility/#converting-an-already-issued-legacy-anoncreds-to-vc_di-formatvice-versa","title":"Converting an already issued legacy anoncreds to VC_DI format(vice versa)","text":"

In this case, we can use the to_w3c method of the Credential class to convert from legacy to W3C format, and the to_legacy method of the W3CCredential class to convert from W3C to legacy.

We could call to_w3c method like this:

vc_di_cred = Credential.to_w3c(cred_def)\n

and for to_legacy:

legacy_cred = W3CCredential.to_legacy()\n

We don't need to pass any parameters to it, as it calls the Credential.from_w3c() method under the hood.

"},{"location":"design/AnoncredsW3CCompatibility/#format-handler-for-issue_credential-v2_0-protocol","title":"Format Handler for Issue_credential V2_0 Protocol","text":"

Keeping in mind that we are trying to create anoncreds (not another type of VC) in W3C format, we can add protocol-level vc_di format support by adding a new format, VC_DI, in ./protocols/issue_credential/v2_0/messages/cred_format.py:

# /protocols/issue_credential/v2_0/messages/cred_format.py\n\nclass Format(Enum):\n    \"\"\"Attachment Format.\"\"\"\n\n    INDY = FormatSpec(...)\n    LD_PROOF = FormatSpec(...)\n    VC_DI = FormatSpec(\n        \"vc_di/\",\n        CredExRecordVCDI,\n        DeferLoad(\n            \"aries_cloudagent.protocols.issue_credential.v2_0\"\n            \".formats.vc_di.handler.AnonCredsW3CFormatHandler\"\n        ),\n    )\n

And create a new CredExRecordVCDI, modeled on V20CredExRecordLDProof:

# /protocols/issue_credential/v2_0/models/detail/w3c.py\n\nclass CredExRecordW3C(BaseRecord):\n    \"\"\"Credential exchange W3C detail record.\"\"\"\n\n    class Meta:\n        \"\"\"CredExRecordW3C metadata.\"\"\"\n\n        schema_class = \"CredExRecordW3CSchema\"\n\n    RECORD_ID_NAME = \"cred_ex_w3c_id\"\n    RECORD_TYPE = \"w3c_cred_ex_v20\"\n    TAG_NAMES = {\"~cred_ex_id\"} if UNENCRYPTED_TAGS else {\"cred_ex_id\"}\n    RECORD_TOPIC = \"issue_credential_v2_0_w3c\"\n

Based on the proposed credential attachment format with the new Data Integrity proof in aries-rfcs 809:

{\n  \"@id\": \"284d3996-ba85-45d9-964b-9fd5805517b6\",\n  \"@type\": \"https://didcomm.org/issue-credential/2.0/issue-credential\",\n  \"comment\": \"<some comment>\",\n  \"formats\": [\n    {\n      \"attach_id\": \"5b38af88-d36f-4f77-bb7a-2f04ab806eb8\",\n      \"format\": \"didcomm/w3c-di-vc@v0.1\"\n    }\n  ],\n  \"credentials~attach\": [\n    {\n      \"@id\": \"5b38af88-d36f-4f77-bb7a-2f04ab806eb8\",\n      \"mime-type\": \"application/ld+json\",\n      \"data\": {\n        \"base64\": \"ewogICAgICAgICAgIkBjb250ZXogWwogICAgICAg...(clipped)...RNVmR0SXFXZhWXgySkJBIgAgfQogICAgICAgIH0=\"\n      }\n    }\n  ]\n}\n

Assuming VCDIDetail and VCDIOptions are already in place, VCDIDetailSchema can be created like so:

# /protocols/issue_credential/v2_0/formats/vc_di/models/cred_detail.py\n\nclass VCDIDetailSchema(BaseModelSchema):\n    \"\"\"VC_DI verifiable credential detail schema.\"\"\"\n\n    class Meta:\n        \"\"\"Accept parameter overload.\"\"\"\n\n        unknown = INCLUDE\n        model_class = VCDIDetail\n\n    credential = fields.Nested(\n        CredentialSchema(),\n        required=True,\n        metadata={\n            \"description\": \"Detail of the VC_DI Credential to be issued\",\n            \"example\": {\n                \"@id\": \"284d3996-ba85-45d9-964b-9fd5805517b6\",\n                \"@type\": \"https://didcomm.org/issue-credential/2.0/issue-credential\",\n                \"comment\": \"<some comment>\",\n                \"formats\": [\n                    {\n                        \"attach_id\": \"5b38af88-d36f-4f77-bb7a-2f04ab806eb8\",\n                        \"format\": \"didcomm/w3c-di-vc@v0.1\"\n                    }\n                ],\n                \"credentials~attach\": [\n                    {\n                        \"@id\": \"5b38af88-d36f-4f77-bb7a-2f04ab806eb8\",\n                        \"mime-type\": \"application/ld+json\",\n                        \"data\": {\n                            \"base64\": \"ewogICAgICAgICAgIkBjb250ZXogWwogICAgICAg...(clipped)...RNVmR0SXFXZhWXgySkJBIgAgfQogICAgICAgIH0=\"\n                        }\n                    }\n                ]\n            }\n        },\n    )\n

Then create the W3C format handler with a mapping like so:

# /protocols/issue_credential/v2_0/formats/w3c/handler.py\n\nmapping = {\n            CRED_20_PROPOSAL: VCDIDetailSchema,\n            CRED_20_OFFER: VCDIDetailSchema,\n            CRED_20_REQUEST: VCDIDetailSchema,\n            CRED_20_ISSUE: VerifiableCredentialSchema,\n        }\n

Doing so would allow us to be more independent in defining a schema suited for anoncreds in W3C format. Once the proposal protocol can handle the W3C format, the rest of the flow can probably be implemented easily by adding a vc_di flag to the corresponding routes.

"},{"location":"design/AnoncredsW3CCompatibility/#admin-api-attachments","title":"Admin API Attachments","text":"

To make sure that, once an endpoint has been called to trigger the Issue Credential flow with RFC 0809 VC-DI attachment formats, the subsequent endpoints also follow this format, we can extend this ATTACHMENT_FORMAT dictionary with the proposed VC_DI format.

# Format specifications\nATTACHMENT_FORMAT = {\n    CRED_20_PROPOSAL: {\n        V20CredFormat.Format.INDY.api: \"hlindy/cred-filter@v2.0\",\n        V20CredFormat.Format.LD_PROOF.api: \"aries/ld-proof-vc-detail@v1.0\",\n        V20CredFormat.Format.VC_DI.api: \"aries/vc-di-detail@v2.0\",\n    },\n    CRED_20_OFFER: {\n        V20CredFormat.Format.INDY.api: \"hlindy/cred-abstract@v2.0\",\n        V20CredFormat.Format.LD_PROOF.api: \"aries/ld-proof-vc-detail@v1.0\",\n        V20CredFormat.Format.VC_DI.api: \"aries/vc-di-detail@v2.0\",\n    },\n    CRED_20_REQUEST: {\n        V20CredFormat.Format.INDY.api: \"hlindy/cred-req@v2.0\",\n        V20CredFormat.Format.LD_PROOF.api: \"aries/ld-proof-vc-detail@v1.0\",\n        V20CredFormat.Format.VC_DI.api: \"aries/vc-di-detail@v2.0\",\n    },\n    CRED_20_ISSUE: {\n        V20CredFormat.Format.INDY.api: \"hlindy/cred@v2.0\",\n        V20CredFormat.Format.LD_PROOF.api: \"aries/ld-proof-vc@v1.0\",\n        V20CredFormat.Format.VC_DI.api: \"aries/vc-di@v2.0\",\n    },\n}\n

This _formats_filter function takes care of keeping the attachment formats uniform across each step of the flow. We can see this function gets called in:

The same goes for the ATTACHMENT_FORMAT of the Present Proof flow. In this case, the DIF Presentation Exchange formats in these test vectors, which are influenced by RFC 0510 DIF Presentation Exchange, will be implemented. Here, the _formats_attach function is the key for the same purpose as above. It gets called in:

"},{"location":"design/AnoncredsW3CCompatibility/#credential-exchange-admin-routes","title":"Credential Exchange Admin Routes","text":"

This route indirectly calls the _formats_filter function to create a credential proposal, which is in turn used to create a credential offer in the filter format. The request body for this route might look like this:

{\n    \"filter\": [\"vc_di\"],\n    \"comment: <some_comment>,\n    \"auto-issue\": true,\n    \"auto-remove\": true,\n    \"replacement_id\": <replacement_id>,\n    \"credential_preview\": {\n        \"@type\": \"issue-credential/2.0/credential-preview\",\n        \"attributes\": {\n            ...\n            ...\n        }\n    }\n}\n

This route indirectly calls the _format_result_with_details function to generate a cred_ex_record in the specified format, which is then returned. The request body for this route might look like this:

{\n    \"filter\": [\"vc_di\"],\n    \"comment: <some_comment>,\n    \"auto-remove\": true,\n    \"credential_preview\": {\n        \"@type\": \"issue-credential/2.0/credential-preview\",\n        \"attributes\": {\n           ...\n           ...\n        }\n    }\n}\n

The request body for this route might look like this:

{\n    \"connection_id\": <connection_id>,\n    \"filter\": [\"vc_di\"],\n    \"comment: <some_comment>,\n    \"auto-remove\": true,\n    \"replacement_id\": <replacement_id>,\n    \"credential_preview\": {\n        \"@type\": \"issue-credential/2.0/credential-preview\",\n        \"attributes\": {\n           ...\n           ...\n        }\n    }\n}\n

The request body for this route might look like this:

{\n    \"connection_id\": <connection_id>,\n    \"filter\": [\"vc_di\"],\n    \"comment: <some_comment>,\n    \"auto-issue\": true,\n    \"auto-remove\": true,\n    \"replacement_id\": <replacement_id>,\n    \"holder_did\": <holder_did>,\n    \"credential_preview\": {\n        \"@type\": \"issue-credential/2.0/credential-preview\",\n        \"attributes\": {\n           ...\n           ...\n        }\n    }\n}\n

The request body for this route might look like this:

{\n    \"connection_id\": <connection_id>,\n    \"filter\": [\"vc_di\"],\n    \"comment: <some_comment>,\n    \"auto-remove\": true,\n    \"replacement_id\": <replacement_id>,\n    \"holder_did\": <holder_did>,\n    \"credential_preview\": {\n        \"@type\": \"issue-credential/2.0/credential-preview\",\n        \"attributes\": {\n           ...\n           ...\n        }\n    }\n}\n
"},{"location":"design/AnoncredsW3CCompatibility/#presentation-admin-routes","title":"Presentation Admin Routes","text":"

The request body for this route might look like this:

{\n    ...\n    ...\n    \"connection_id\": <connection_id>,\n    \"presentation_proposal\": [\"vc_di\"],\n    \"comment\": \"<some_comment>\",\n    \"auto-present\": true,\n    \"auto-remove\": true,\n    \"trace\": false\n}\n

The request body for this route might look like this:

{\n    ...\n    ...\n    \"connection_id\": <connection_id>,\n    \"presentation_proposal\": [\"vc_di\"],\n    \"comment\": \"<some_comment>\",\n    \"auto-verify\": true,\n    \"auto-remove\": true,\n    \"trace\": false\n}\n

The request body for this route might look like this:

{\n    ...\n    ...\n    \"connection_id\": <connection_id>,\n    \"presentation_proposal\": [\"vc_di\"],\n    \"comment\": \"<some_comment>\",\n    \"auto-verify\": true,\n    \"auto-remove\": true,\n    \"trace\": false\n}\n

The request body for this route might look like this:

{\n    \"presentation_definition\": <presentation_definition_schema>,\n    \"auto_remove\": true,\n    \"dif\": {\n        issuer_id: \"<issuer_id>\",\n        record_ids: {\n            \"<input descriptor id_1>\": [\"<record id_1>\", \"<record id_2>\"],\n            \"<input descriptor id_2>\": [\"<record id>\"],\n        }\n    },\n    \"reveal_doc\": {\n        // vc_di dict\n    }\n\n}\n
"},{"location":"design/AnoncredsW3CCompatibility/#how-a-w3c-credential-is-stored-in-the-wallet","title":"How a W3C credential is stored in the wallet","text":"

Storing a credential in the wallet is somewhat dependent on the kinds of metadata that are relevant. The metadata mapping between the W3C credential and an AnonCreds credential is not fully clear yet.

One of the questions we need to answer is whether the preferred approach is to modify the existing store credential function so that any credential type is a valid input, or whether there should be a special function just for storing W3C credentials.

We will duplicate this store_credential function and modify it:

async def store_w3c_credential(...) {\n    ...\n    ...\n    try:\n        cred = W3CCredential.load(credential_data)\n    ...\n    ...\n}\n

Question: Would it also be possible to generate the credentials on the fly to eliminate the need for storage?

Answer: I don't think it is possible to eliminate the need for storage, and notably the secure storage (encrypted at rest) supported in Askar.

"},{"location":"design/AnoncredsW3CCompatibility/#how-can-we-handle-multiple-signatures-on-a-w3c-vc-format-credential","title":"How can we handle multiple signatures on a W3C VC Format credential?","text":"

Only one of the signature types (CL) is allowed in the AnonCreds format, so if a W3C VC is converted by to_legacy(), all signature types that can't be turned into a CL signature will be dropped. This makes the conversion lossy. Similarly, an AnonCreds credential carries only the CL signature, limiting the output of to_w3c() to signature types that can be derived from the source CL signature. A possible future enhancement would be to add an extra field to the AnonCreds data structure in which additional signatures could be stored, even if they are not used. This could eliminate the lossiness, but it adds extra complexity and may not be worth doing.

"},{"location":"design/AnoncredsW3CCompatibility/#compatibility-with-afj-how-can-we-make-sure-that-we-are-compatible","title":"Compatibility with AFJ: how can we make sure that we are compatible?","text":"

We will write a test for the Aries Agent Test Framework that issues a W3C VC instead of an AnonCreds credential, and then run that test where one of the agents is ACA-Py and the other is based on AFJ -- and vice versa. We will also write a test where a W3C VC is presented after an AnonCreds issuance, and run it with the two roles played by the two different agents. This is a simple approach, but if the tests pass, it should eliminate almost all risk of incompatibility.

"},{"location":"design/AnoncredsW3CCompatibility/#will-we-introduce-new-dependencies-and-what-is-risky-or-easy","title":"Will we introduce new dependencies, and what is risky or easy?","text":"

Any significant bugs in the Rust implementation may prevent our wrappers from working, which would also prevent progress (or at least confirmed test results) on the higher-level code.

If AFJ lags behind in delivering equivalent functionality, we may not be able to demonstrate compatibility with the test harness.

"},{"location":"design/AnoncredsW3CCompatibility/#where-should-the-new-issuance-code-go","title":"Where should the new issuance code go?","text":"

The vc directory contains code to verify VCs; is this a logical place to add the code for issuance?

"},{"location":"design/AnoncredsW3CCompatibility/#what-do-we-call-the-new-things-flexcreds-or-just-w3c_xxx","title":"What do we call the new things? Flexcreds? or just W3C_xxx","text":"

Are we defining a concept called Flexcreds - a credential with a proof array from which you can generate more specific or limited credentials? If so, should this be included in the naming?

"},{"location":"design/AnoncredsW3CCompatibility/#how-can-a-wallet-retain-the-capability-to-present-only-an-anoncred-credential","title":"How can a wallet retain the capability to present ONLY an anoncred credential?","text":"

If the wallet receives a \"Flexcred\" credential object with an array of proofs, the wallet may wish to present ONLY the more zero-knowledge anoncreds proof.

How will wallets support that in a way that is developer-friendly to wallet devs?

"},{"location":"features/AdminAPI/","title":"ACA-Py Administration API","text":""},{"location":"features/AdminAPI/#using-the-openapi-swagger-interface","title":"Using the OpenAPI (Swagger) Interface","text":"

ACA-Py provides an OpenAPI-documented REST interface for administering the agent's internal state and initiating communication with connected agents.

To see the specifics of the supported endpoints, as well as the expected request and response formats, it is recommended to run the aca-py agent with the --admin {HOST} {PORT} and --admin-insecure-mode command line parameters. This exposes the OpenAPI UI on the provided port for interaction via a web browser. For production deployments, run the agent with --admin-api-key {KEY} and add the X-API-Key: {KEY} header to all requests instead of using the --admin-insecure-mode parameter.
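
For example, a controller script might call the status endpoint like this (the host, port, and key below are assumptions for illustration):

import requests\n\nADMIN_URL = \"http://localhost:8031\"          # from --admin {HOST} {PORT}\nHEADERS = {\"X-API-Key\": \"my-admin-api-key\"}  # from --admin-api-key {KEY}\n\n# Query the agent's status endpoint\nresp = requests.get(f\"{ADMIN_URL}/status\", headers=HEADERS, timeout=10)\nresp.raise_for_status()\nprint(resp.json())\n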

To invoke a specific method:

The mechanical steps are easy; however, the fourth step from the list above can be tricky. Supplying the right data and, where JSON is involved, getting the syntax correct (braces and quotes) can be a pain. When steps don't work, start your debugging by looking at your JSON. You may also choose to use a REST client like Postman or Insomnia, which will provide syntax highlighting and other features to simplify the process.

Because API methods often initiate asynchronous processes, the JSON response provided by an endpoint is not always sufficient to determine the next action. To handle this situation, as well as events triggered by external inputs (such as new connection requests), it is necessary to implement a webhook processor, as detailed in the next section.

The combination of an OpenAPI client and webhook processor is referred to as an ACA-Py Controller and is the recommended method to define custom behaviors for your ACA-Py-based agent application.

"},{"location":"features/AdminAPI/#administration-api-webhooks","title":"Administration API Webhooks","text":"

When ACA-Py is started with the --webhook-url {URL} command line parameter, state-management records are sent to the provided URL via POST requests whenever a record is created or its state property is updated.

When a webhook is dispatched, the record topic is appended as a path component to the URL. For example, https://webhook.host.example becomes https://webhook.host.example/topic/connections when a connection record is updated. A POST request is made to the resulting URL with the body of the request comprising a serialized JSON object. The full set of properties of the current set of webhook payloads are listed below. Note that empty (null-value) properties are omitted.
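
A minimal webhook processor might look like the following sketch (aiohttp-based; the port and the handling logic are assumptions):

from aiohttp import web\n\n\nasync def handle_topic(request: web.Request) -> web.Response:\n    topic = request.match_info[\"topic\"]  # e.g. \"connections\"\n    payload = await request.json()       # the serialized record\n    print(f\"webhook topic={topic} state={payload.get('state')}\")\n    return web.Response(status=200)\n\n\napp = web.Application()\napp.add_routes([web.post(\"/topic/{topic}\", handle_topic)])\n\nif __name__ == \"__main__\":\n    # Start ACA-Py with --webhook-url http://localhost:8080 to target this server\n    web.run_app(app, port=8080)\n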

"},{"location":"features/AdminAPI/#webhooks-over-websocket","title":"Webhooks over WebSocket","text":"

ACA-Py's Admin API also supports delivering webhooks over WebSocket. This can be especially useful when working with scripts that interact with the Admin API but don't have a web server listening to receive webhooks in response to its actions. No additional command line parameters are required to enable WebSocket support.

Webhooks received over WebSocket will contain the same data as webhooks posted over HTTP, but the structure differs in order to communicate details that would otherwise have been received as part of the HTTP request path and headers.

To open a WebSocket, connect to the /ws endpoint of the Admin API.
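
For example (a sketch; the URL, the API key header, and the event field names are assumptions):

import asyncio\nimport json\n\nimport aiohttp\n\n\nasync def listen() -> None:\n    async with aiohttp.ClientSession() as session:\n        # The X-API-Key header is needed only if --admin-api-key was set\n        async with session.ws_connect(\n            \"http://localhost:8031/ws\", headers={\"X-API-Key\": \"my-admin-api-key\"}\n        ) as ws:\n            async for msg in ws:\n                if msg.type == aiohttp.WSMsgType.TEXT:\n                    event = json.loads(msg.data)\n                    print(event.get(\"topic\"), event.get(\"payload\"))\n\n\nasyncio.run(listen())\n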

"},{"location":"features/AdminAPI/#pairwise-connection-record-updated-connections","title":"Pairwise Connection Record Updated (/connections)","text":""},{"location":"features/AdminAPI/#basic-message-received-basicmessages","title":"Basic Message Received (/basicmessages)","text":""},{"location":"features/AdminAPI/#forward-message-received-forward","title":"Forward Message Received (/forward)","text":"

Enable using --monitor-forward.

"},{"location":"features/AdminAPI/#credential-exchange-record-updated-issue_credential","title":"Credential Exchange Record Updated (/issue_credential)","text":""},{"location":"features/AdminAPI/#presentation-exchange-record-updated-present_proof","title":"Presentation Exchange Record Updated (/present_proof)","text":""},{"location":"features/AdminAPI/#api-standard-behavior","title":"API Standard Behavior","text":"

The best way to develop a new admin API or protocol is to follow one of the existing protocols, such as the Credential Exchange or Presentation Exchange.

The routes.py file contains the API definitions - API endpoints and payload schemas (note that these are not the Aries message schemas).

The payload schemas are defined using marshmallow and will be validated automatically when the API is executed (using middleware). (This raises a status 422 HTTP response with an error message if the schema validation fails.)

API endpoints are defined using aiohttp_apispec tags (e.g. @doc, @request_schema, @response_schema etc.) which define the input and output parameters of the endpoint. API URL paths are defined in the register() method and added to the Swagger page in the post_process_routes() method.
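
An illustrative endpoint following that pattern might look like this (the protocol, paths, and schema names here are invented for the example):

from aiohttp import web\nfrom aiohttp_apispec import docs, request_schema, response_schema\nfrom marshmallow import Schema, fields\n\n\nclass PingRequestSchema(Schema):\n    \"\"\"Request payload; validated by middleware before the handler runs.\"\"\"\n\n    comment = fields.Str(required=False)\n\n\nclass PingResultSchema(Schema):\n    thread_id = fields.Str()\n\n\n@docs(tags=[\"trustping\"], summary=\"Send a trust ping (illustrative only)\")\n@request_schema(PingRequestSchema())\n@response_schema(PingResultSchema(), 200)\nasync def ping_handler(request: web.Request) -> web.Response:\n    body = await request.json()\n    # ... hand \"body\" off to the protocol manager here ...\n    return web.json_response({\"thread_id\": \"example\"})\n\n\nasync def register(app: web.Application):\n    \"\"\"Add the route, as ACA-Py does for each routes module.\"\"\"\n    app.add_routes([web.post(\"/connections/{conn_id}/send-ping\", ping_handler)])\n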

The APIs should return the following HTTP status:

...and should not return:

"},{"location":"features/AnonCredsMethods/","title":"Adding AnonCreds Methods to ACA-Py","text":"

ACA-Py was originally developed to be used with Hyperledger AnonCreds objects (Schemas, Credential Definitions and Revocation Registries) published on Hyperledger Indy networks. However, with the evolution of \"ledger-agnostic\" AnonCreds, ACA-Py supports publishing AnonCreds objects wherever you want to put them. If you want to add a new \"AnonCreds Method\" to publish AnonCreds objects to a new Verifiable Data Registry (VDR) (perhaps to your favorite blockchain, or using a web-based DID method), you'll find the details of how to do that here. We often use the term \"ledger\" for the location where AnonCreds objects are published, but here we will use \"VDR\", since a VDR does not have to be a ledger.

The information in this document was discussed on an ACA-Py Maintainers call in March 2024. You can watch the call recording by clicking here.

This is an early version of this document and we assume those reading it are quite familiar with using ACA-Py, have a good understanding of ACA-Py internals, and are Python experts. See the Questions or Comments section below for how to get help as you work through this.

"},{"location":"features/AnonCredsMethods/#create-a-plugin","title":"Create a Plugin","text":"

We recommend that if you are adding a new AnonCreds method, you do so by creating an ACA-Py plugin. See the documentation on ACA-Py plugins and use the set of plugins available in the aries-acapy-plugins repository to help you get started. When you finish your AnonCreds method, we recommend that you publish the plugin in the aries-acapy-plugins repository. If you think that the AnonCreds method you create should be part of ACA-Py core, get your plugin complete and raise the question of adding it to ACA-Py. The Maintainers will be happy to discuss the merits of the idea. No promises though.

Your AnonCreds plugin will have an initialization routine that registers your AnonCreds implementation, including the identifier constructs that your method uses. It is those identifier constructs that determine which AnonCreds Registrar and Resolver will be called for any given AnonCreds object identifier. Check out this example of the registration of the \"legacy\" Indy AnonCreds method for more details.

"},{"location":"features/AnonCredsMethods/#the-implementation","title":"The Implementation","text":"

The basic work involved in creating an AnonCreds method is the implementation of both a \"registrar\" to write AnonCreds objects to a VDR, and a \"resolver\" to read AnonCreds objects from a VDR. To do that for your new AnonCreds method, you will need to:

The links above are to a specific commit and the code may have been updated since. You might want to look at the methods in the current version of aries_cloudagent/anoncreds/base.py in the main branch.

The interface for those methods is very clean, and there are currently two implementations of the methods in the ACA-Py codebase -- the \"legacy\" Indy implementation, and the did:indy implementation. There is also a did:web resolver implementation.

Models for the API are defined here.

"},{"location":"features/AnonCredsMethods/#events","title":"Events","text":"

When you create your AnonCreds method registrar, make sure that your implementation calls the appropriate finish_* method (e.g., AnonCredsIssuer.finish_schema, AnonCredsIssuer.finish_cred_def, etc.) in the AnonCreds Issuer. The calls are necessary to trigger the automation of AnonCreds event handling that is done by ACA-Py, particularly around the handling of Revocation Registries. As you (should) know, when an Issuer uses ACA-Py to create a Credential Definition that supports revocation, ACA-Py automatically creates and publishes two Revocation Registries related to the Credential Definition, publishes the tails file for each, makes one active, and sets the other to be activated as soon as the active one runs out of credentials. Your AnonCreds method implementation doesn't have to do much to make that happen -- ACA-Py does it automatically -- but your implementation must call the finish_* methods to trigger ACA-Py to continue the automation. You can see the automation setup in Revocation Setup.
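
As a rough sketch of what that call might look like from a custom registrar (the class and method names follow the current code but should be treated as assumptions; check the linked sources for the real signatures):

# Sketch only; names and signatures are assumptions\nfrom aries_cloudagent.anoncreds.issuer import AnonCredsIssuer\n\n\nasync def on_schema_write_confirmed(profile, job_id: str, schema_id: str) -> None:\n    \"\"\"Call after your VDR confirms the write so ACA-Py continues its automation.\"\"\"\n    issuer = AnonCredsIssuer(profile)\n    await issuer.finish_schema(job_id, schema_id)\n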

"},{"location":"features/AnonCredsMethods/#questions-or-comments","title":"Questions or Comments","text":"

The ACA-Py maintainers welcome questions from those new to the community that have the skills to implement a new AnonCreds method. Use the #aries-cloudagent-python channel on the Hyperledger Discord Server or open an issue in this repo to get help.

Pull Requests to the ACA-Py repository to improve this content are welcome!

"},{"location":"features/AnoncredsProofValidation/","title":"Anoncreds Proof Validation in ACA-Py","text":"

ACA-Py performs pre-validation when verifying Anoncreds presentations (proofs). Some scenarios are rejected (such as those indicative of tampering), while some attributes are removed before running the anoncreds validation (e.g., removing superfluous non-revocation timestamps). Any ACA-Py validations or presentation modifications are indicated by the \"verify_msgs\" attribute in the final presentation exchange object.

The list of possible verification messages can be found here, and consists of:

class PresVerifyMsg(str, Enum):\n    \"\"\"Credential verification codes.\"\"\"\n\n    RMV_REFERENT_NON_REVOC_INTERVAL = \"RMV_RFNT_NRI\"\n    RMV_GLOBAL_NON_REVOC_INTERVAL = \"RMV_GLB_NRI\"\n    TSTMP_OUT_NON_REVOC_INTRVAL = \"TS_OUT_NRI\"\n    CT_UNREVEALED_ATTRIBUTES = \"UNRVL_ATTR\"\n    PRES_VALUE_ERROR = \"VALUE_ERROR\"\n    PRES_VERIFY_ERROR = \"VERIFY_ERROR\"\n

If there is additional information, it will be included like this: TS_OUT_NRI::19_uuid, which means the attribute identified by 19_uuid contained a timestamp outside of the non-revocation interval (this is just a warning).

A presentation verification may include multiple messages, for example:

    ...\n    \"verified\": \"true\",\n    \"verified_msgs\": [\n        \"TS_OUT_NRI::18_uuid\",\n        \"TS_OUT_NRI::18_id_GE_uuid\",\n        \"TS_OUT_NRI::18_busid_GE_uuid\"\n    ],\n    ...\n

... or it may include a single message, for example:

    ...\n    \"verified\": \"false\",\n    \"verified_msgs\": [\n        \"VALUE_ERROR::Encoded representation mismatch for 'Preferred Name'\"\n    ],\n    ...\n

... or the verified_msgs may be null or an empty array.
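
A controller can split each code from its detail on the :: separator; for example, this small helper:

def parse_verify_msgs(msgs):\n    \"\"\"Split ACA-Py verification messages into (code, detail) pairs.\"\"\"\n    parsed = []\n    for msg in msgs or []:  # verified_msgs may be null or an empty array\n        code, _, detail = msg.partition(\"::\")\n        parsed.append((code, detail or None))\n    return parsed\n\n\n# parse_verify_msgs([\"TS_OUT_NRI::18_uuid\"]) -> [(\"TS_OUT_NRI\", \"18_uuid\")]\n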

"},{"location":"features/AnoncredsProofValidation/#presentation-modifications-and-warnings","title":"Presentation Modifications and Warnings","text":"

The following modifications/warnings may be made by ACA-Py, which shouldn't affect the verification of the received proof:

"},{"location":"features/AnoncredsProofValidation/#presentation-pre-validation-errors","title":"Presentation Pre-validation Errors","text":"

The following pre-verification checks are performed, which will cause the proof to fail (before calling anoncreds) and result in the following message:

VALUE_ERROR::<description of the failed validation>\n

These validations are all performed within the Indy verifier class - to see the detailed validation, look for any occurrences of raise ValueError(...) in the code.

A summary of the possible errors includes:

"},{"location":"features/AnoncredsProofValidation/#anoncreds-verification-exceptions","title":"Anoncreds Verification Exceptions","text":"

Typically, when you call the anoncreds verifier_verify_proof() method, it will return a True or False based on whether the presentation cryptographically verifies. However, in the case where anoncreds throws an exception, the exception text will be included in a verification message as follows:

VERIFY_ERROR::<the exception text>\n
"},{"location":"features/DIDMethods/","title":"DID Methods in ACA-Py","text":"

Decentralized Identifiers, or DIDs, are URIs that point to documents that describe cryptographic primitives and protocols used in decentralized identity management. DIDs include methods that describe where and how documents can be retrieved. DID methods support specific types of keys and may or may not require the holder to specify the DID itself.

ACA-Py provides a DIDMethods registry holding all the DID methods supported for storage in a wallet.

Askar and InMemory are the only wallets supporting this registry.

"},{"location":"features/DIDMethods/#registering-a-did-method","title":"Registering a DID method","text":"

By default, ACA-Py supports did:key and did:sov. Plugins can register additional DID methods to make them available to holders. Here's a snippet adding support for did:web to the registry from a plugin setup method.

from aries_cloudagent.config.injection_context import InjectionContext\nfrom aries_cloudagent.wallet.did_method import DIDMethod, DIDMethods, HolderDefinedDid\nfrom aries_cloudagent.wallet.key_type import BLS12381G2, ED25519\n\nWEB = DIDMethod(\n    name=\"web\",\n    key_types=[ED25519, BLS12381G2],\n    rotation=True,\n    # did:web is not derived from key material but from a user-provided domain name\n    holder_defined_did=HolderDefinedDid.REQUIRED,\n)\n\nasync def setup(context: InjectionContext):\n    methods = context.inject(DIDMethods)\n    methods.register(WEB)\n
"},{"location":"features/DIDMethods/#creating-a-did","title":"Creating a DID","text":"

POST /wallet/did/create can be provided with parameters for any registered DID method. Here's a follow-up to the did:web method example:

{\n    \"method\": \"web\",\n    \"options\": {\n        \"did\": \"did:web:doma.in\",\n        \"key_type\": \"ed25519\"\n    }\n}\n
"},{"location":"features/DIDMethods/#resolving-dids","title":"Resolving DIDs","text":"

For specifics on how DIDs are resolved in ACA-Py, see: DID Resolution.

"},{"location":"features/DIDResolution/","title":"DID Resolution in ACA-Py","text":"

Decentralized Identifiers, or DIDs, are URIs that point to documents that describe cryptographic primitives and protocols used in decentralized identity management. DIDs include methods that describe where and how documents can be retrieved. DID resolution is the process of \"resolving\" a DID Document from a DID as dictated by the DID method.

A DID Resolver is a piece of software that implements the methods for resolving a document from a DID.

For example, given the DID did:example:1234abcd, a DID Resolver that supports did:example might return:

{\n \"@context\": \"https://www.w3.org/ns/did/v1\",\n \"id\": \"did:example:1234abcd\",\n \"verificationMethod\": [{\n  \"id\": \"did:example:1234abcd#keys-1\",\n  \"type\": \"Ed25519VerificationKey2018\",\n  \"controller\": \"did:example:1234abcd\",\n  \"publicKeyBase58\": \"H3C2AVvLMv6gmMNam3uVAjZpfkcJCwDwnZn6z3wXmqPV\"\n }],\n \"service\": [{\n  \"id\": \"did:example:1234abcd#did-communication\",\n  \"type\": \"did-communication\",\n  \"serviceEndpoint\": \"https://agent.example.com/8377464\"\n }]\n}\n

For more details on DIDs and DID Resolution, see the W3C DID Specification.

In practice, DIDs and DID Documents are used for a variety of purposes but especially to help establish connections between Agents and verify credentials.

"},{"location":"features/DIDResolution/#didresolver","title":"DIDResolver","text":"

In ACA-Py, the DIDResolver provides the interface to resolve DIDs using registered method resolvers. Method resolver registration happens on startup in a did_resolvers list. This registry enables additional resolvers to be loaded via plugin.

"},{"location":"features/DIDResolution/#example-usage","title":"Example usage","text":"
from aries_cloudagent.messaging.base_handler import BaseHandler, BaseResponder, RequestContext\nfrom aries_cloudagent.resolver.did_resolver import DIDResolver\n\nclass ExampleMessageHandler(BaseHandler):\n    async def handle(self, context: RequestContext, responder: BaseResponder):\n        \"\"\"Handle example message.\"\"\"\n        resolver = context.inject(DIDResolver)\n\n        doc: dict = await resolver.resolve(context.profile, \"did:example:123\")\n        assert doc[\"id\"] == \"did:example:123\"\n\n        verification_method = await resolver.dereference(\n            context.profile, \"did:example:123#keys-1\"\n        )\n\n        # ...\n
"},{"location":"features/DIDResolution/#method-resolver-selection","title":"Method Resolver Selection","text":"

On DIDResolver.resolve or DIDResolver.dereference, the resolver interface will select the most appropriate method resolver to handle the given DID. In this selection process, method resolvers are distinguished from each other by:

The selection algorithm roughly follows these steps (see the sketch after this list):

  1. Filter out all resolvers where resolver.supports(did) returns false.
  2. Partition remaining resolvers by type with all native resolvers followed by non-native resolvers (registration order preserved within partitions).
  3. For each resolver in the resulting list, attempt to resolve the DID and return the first successful result.
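A minimal sketch of that selection logic (function and attribute names are illustrative, not the actual ACA-Py internals):

from aries_cloudagent.resolver.base import DIDNotFound\n\nasync def select_and_resolve(profile, did: str, resolvers: list) -> dict:\n    # 1. Keep only resolvers that claim support for this DID.\n    supported = [r for r in resolvers if await r.supports(profile, did)]\n    # 2. Native resolvers first; registration order preserved within partitions.\n    ordered = [r for r in supported if r.native] + [r for r in supported if not r.native]\n    # 3. Return the first successful resolution.\n    for resolver in ordered:\n        try:\n            return await resolver.resolve(profile, did)\n        except DIDNotFound:\n            continue\n    raise DIDNotFound(f\"No resolver could resolve {did}\")\n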
"},{"location":"features/DIDResolution/#resolver-plugins","title":"Resolver Plugins","text":"

Extending ACA-Py with additional Method Resolvers should be relatively simple. Supposing that you want to resolve DIDs for the did:cool method, this should be as simple as installing a method resolver into your python environment and loading the resolver on startup. If no method resolver exists yet for did:cool, writing your own should require minimal overhead.

"},{"location":"features/DIDResolution/#writing-a-resolver-plugin","title":"Writing a resolver plugin","text":"

Method resolver plugins are composed of two primary pieces: plugin injection and resolution logic. The resolution logic dictates how a DID becomes a DID Document, following the given DID Method Specification. This logic is implemented using the BaseDIDResolver class as the base. BaseDIDResolver is an abstract base class that defines the interface that the core DIDResolver expects for Method resolvers.

The following is an example method resolver implementation. In this example, we have 2 files, one for each piece (injection and resolution). The __init__.py will be in charge of injecting the plugin, and example_resolver.py will have the logic implementation to resolve for a fabricated did:example method.

"},{"location":"features/DIDResolution/#__init-__py","title":"__init __.py","text":"

from aries_cloudagent.config.injection_context import InjectionContext\nfrom ..resolver.did_resolver import DIDResolver\n\nfrom .example_resolver import ExampleResolver\n\nasync def setup(context: InjectionContext):\n    \"\"\"Setup the plugin.\"\"\"\n    registry = context.inject(DIDResolver)\n    resolver = ExampleResolver()\n    await resolver.setup(context)\n    registry.append(resolver)\n

example_resolver.py:

import re\nfrom typing import Pattern\n\nfrom aries_cloudagent.core.profile import Profile\nfrom aries_cloudagent.resolver.base import BaseDIDResolver, DIDNotFound, ResolverType\n\nclass ExampleResolver(BaseDIDResolver):\n    \"\"\"ExampleResolver class.\"\"\"\n\n    def __init__(self):\n        super().__init__(ResolverType.NATIVE)\n        # Alternatively, ResolverType.NON_NATIVE\n        self._supported_did_regex = re.compile(\"^did:example:.*$\")\n\n    @property\n    def supported_did_regex(self) -> Pattern:\n        \"\"\"Return compiled regex matching supported DIDs.\"\"\"\n        return self._supported_did_regex\n\n    async def setup(self, context):\n        \"\"\"Setup the example resolver (none required).\"\"\"\n\n    async def _resolve(self, profile: Profile, did: str) -> dict:\n        \"\"\"Resolve example DIDs.\"\"\"\n        if did != \"did:example:1234abcd\":\n            raise DIDNotFound(\n                \"We only actually resolve did:example:1234abcd. Sorry!\"\n            )\n\n        return {\n            \"@context\": \"https://www.w3.org/ns/did/v1\",\n            \"id\": \"did:example:1234abcd\",\n            \"verificationMethod\": [{\n                \"id\": \"did:example:1234abcd#keys-1\",\n                \"type\": \"Ed25519VerificationKey2018\",\n                \"controller\": \"did:example:1234abcd\",\n                \"publicKeyBase58\": \"H3C2AVvLMv6gmMNam3uVAjZpfkcJCwDwnZn6z3wXmqPV\"\n            }],\n            \"service\": [{\n                \"id\": \"did:example:1234abcd#did-communication\",\n                \"type\": \"did-communication\",\n                \"serviceEndpoint\": \"https://agent.example.com/\"\n            }]\n        }\n

"},{"location":"features/DIDResolution/#errors","title":"Errors","text":"

There are three different errors associated with resolution in ACA-Py that can be used for development purposes.

"},{"location":"features/DIDResolution/#using-resolver-plugins","title":"Using Resolver Plugins","text":"

In this section, the Github Resolver Plugin found here will be used as an example plugin to work with. This resolver resolves did:github DIDs.

The resolution algorithm is simple: for the GitHub DID did:github:dbluhm, the method-specific identifier dbluhm (a GitHub username) is used to look up an index.jsonld file in the ghdid repository in that GitHub user's profile. See the GitHub DID Method Specification for more details.

To use this plugin, first install it into your project's python environment:

pip install git+https://github.com/dbluhm/acapy-resolver-github\n

Then, invoke ACA-Py as you normally do with the addition of:

$ aca-py start \\\n    --plugin acapy_resolver_github \\\n    # ... the remainder of your startup arguments\n

Or add the following to your configuration file:

plugin:\n  - acapy_resolver_github\n

The following is a fully functional Dockerfile encapsulating this setup:

FROM ghcr.io/hyperledger/aries-cloudagent-python:py3.9-0.12.0rc2\nRUN pip3 install git+https://github.com/dbluhm/acapy-resolver-github\n\nCMD [\"aca-py\", \"start\", \"-it\", \"http\", \"0.0.0.0\", \"3000\", \"-ot\", \"http\", \"-e\", \"http://localhost:3000\", \"--admin\", \"0.0.0.0\", \"3001\", \"--admin-insecure-mode\", \"--no-ledger\", \"--plugin\", \"acapy_resolver_github\"]\n

To use the above Dockerfile:

docker build -t resolver-example .\ndocker run --rm -it -p 3000:3000 -p 3001:3001 resolver-example\n

"},{"location":"features/DIDResolution/#directory-of-resolver-plugins","title":"Directory of Resolver Plugins","text":""},{"location":"features/DIDResolution/#references","title":"References","text":"

https://www.w3.org/TR/did-core/ https://w3c-ccg.github.io/did-resolution/

"},{"location":"features/DevReadMe/","title":"Developer's Read Me for Hyperledger Aries Cloud Agent - Python","text":"

See the README for details about this repository and information about how the Aries Cloud Agent - Python fits into the Aries project and relates to Indy.

"},{"location":"features/DevReadMe/#table-of-contents","title":"Table of Contents","text":""},{"location":"features/DevReadMe/#introduction","title":"Introduction","text":"

Aries Cloud Agent Python (ACA-Py) is a configurable, extensible, non-mobile Aries agent that implements an easy way for developers to build decentralized identity services that use verifiable credentials.

The information on this page assumes you are a developer with a background in decentralized identity, Aries, DID Methods, and verifiable credentials, especially AnonCreds. If you aren't familiar with those concepts and projects, please use our Getting Started Guide to learn more.

"},{"location":"features/DevReadMe/#developer-demos","title":"Developer Demos","text":"

To put ACA-Py through its paces at the command line, check out our demos page.

"},{"location":"features/DevReadMe/#running","title":"Running","text":""},{"location":"features/DevReadMe/#configuring-aca-py-command-line-parameters","title":"Configuring ACA-PY: Command Line Parameters","text":"

ACA-Py agent instances are configured through the use of command line parameters, environment variables and/or YAML files. All of the configuration settings can be managed using any combination of the three methods (command line parameters override environment variables, which override YAML). Use the --help option to discover the available command line parameters. There are a lot of them--for good and bad.

"},{"location":"features/DevReadMe/#docker","title":"Docker","text":"

To run a docker container based on the code in the current repo, use the following commands from the root folder of the repository to check the version, list the available modes of operation, and see all of the command line parameters:

scripts/run_docker --version\nscripts/run_docker --help\nscripts/run_docker provision --help\nscripts/run_docker start --help\n
"},{"location":"features/DevReadMe/#locally-installed","title":"Locally Installed","text":"

If you installed the PyPI package, the executable aca-py should be available on your PATH.

Use the following commands from the root folder of the repository to check the version, list the available modes of operation, and see all of the command line parameters:

aca-py --version\naca-py --help\naca-py provision --help\naca-py start --help\n

If you get an error about a missing module indy (e.g. ModuleNotFoundError: No module named 'indy') when running aca-py, you will need to install the Indy libraries from the command line:

pip install python3_indy\n

Once that completes successfully, you should be able to run aca-py --version and the other examples above.

"},{"location":"features/DevReadMe/#about-aca-py-command-line-parameters","title":"About ACA-Py Command Line Parameters","text":"

ACA-Py invocations are separated into two types - initially provisioning an agent (provision) and starting a new agent process (start). This separation means that encryption-related parameters required for provisioning need not be passed in when starting an agent instance, which improves security in production deployments.

When starting an agent instance, at least one inbound and one outbound transport MUST be specified.

For example:

aca-py start    --inbound-transport http 0.0.0.0 8000 \\\n                --outbound-transport http\n

or

aca-py start    --inbound-transport http 0.0.0.0 8000 \\\n                --inbound-transport ws 0.0.0.0 8001 \\\n                --outbound-transport ws \\\n                --outbound-transport http\n

ACA-Py ships with both inbound and outbound transport drivers for http and ws (websockets). Additional transport drivers can be added as pluggable implementations. See the existing implementations in the transports module for getting started on adding a new transport.

Most configuration parameters are provided to the agent at startup. Refer to the Running sections above for details on listing the available command line parameters.

"},{"location":"features/DevReadMe/#provisioning-secure-storage","title":"Provisioning Secure Storage","text":"

It is possible to provision a secure storage (sometimes called a wallet--but not the same as a mobile wallet app) before running an agent to avoid passing in the secure storage seed on every invocation of an agent (e.g. on every aca-py start ...).

aca-py provision --wallet-type askar --seed $SEED\n

For additional provision options, execute aca-py provision --help.

Additional information about secure storage options and configuration settings can be found here.

"},{"location":"features/DevReadMe/#mediation","title":"Mediation","text":"

ACA-Py can also run in mediator mode - ACA-Py can be run as a mediator (it can mediate connections for other agents), or it can connect to an external mediator to mediate its own connections. See the docs on mediation for more info.

"},{"location":"features/DevReadMe/#multi-tenancy","title":"Multi-tenancy","text":"

ACA-Py can also be started in multi-tenant mode. This allows the agent to serve multiple tenants, each with their own wallet. See the docs on multi-tenancy for more info.

"},{"location":"features/DevReadMe/#json-ld-credentials","title":"JSON-LD Credentials","text":"

ACA-Py can issue W3C Verifiable Credentials using Linked Data Proofs. See the docs on JSON-LD Credentials for more info.

"},{"location":"features/DevReadMe/#developing","title":"Developing","text":""},{"location":"features/DevReadMe/#prerequisites","title":"Prerequisites","text":"

Docker must be installed to run software locally and to run the test suite.

"},{"location":"features/DevReadMe/#running-in-a-dev-container","title":"Running In A Dev Container","text":"

The dev container environment is a great way to deploy agents quickly with code changes and an interactive debug session. Detailed information can be found in the Docs On Devcontainers. It is specific to VS Code, so if you prefer another code editor or IDE you will need to figure it out on your own, but it is highly recommended to give this a try.

One thing to be aware of is that, unlike the demo, none of the steps are automated. You will need to create public DIDs, connections, and all the other steps yourself. Using the demo to study the flow, and then reproducing those steps in your dev container debug session, is a great way to learn how everything works.

"},{"location":"features/DevReadMe/#running-locally","title":"Running Locally","text":"

Another way to develop locally is by using the provided Docker scripts to run the ACA-Py software.

./scripts/run_docker start <args>\n

For example:

./scripts/run_docker start --inbound-transport http 0.0.0.0 10000 --outbound-transport http --debug --log-level DEBUG\n

To enable the ptvsd Python debugger for Visual Studio/VSCode use the --debug command line parameter.

Any ports you will be using from the docker container should be published using the PORTS environment variable. For example:

PORTS=\"5000:5000 8000:8000 10000:10000\" ./scripts/run_docker start --inbound-transport http 0.0.0.0 10000 --outbound-transport http --debug --log-level DEBUG\n

Refer to the previous section for instructions on how to run ACA-Py.

"},{"location":"features/DevReadMe/#logging","title":"Logging","text":"

You can find more details about logging and log levels here.

"},{"location":"features/DevReadMe/#running-tests","title":"Running Tests","text":"

To run the ACA-Py test suite, use the following script:

./scripts/run_tests\n

To run the ACA-Py test suite with ptvsd debugger enabled:

./scripts/run_tests --debug\n

To run specific tests pass parameters as defined by pytest:

./scripts/run_tests aries_cloudagent/protocols/connections\n

To run the tests including Indy SDK and related dependencies, run the script:

./scripts/run_tests_indy\n
"},{"location":"features/DevReadMe/#running-aries-agent-test-harness-tests","title":"Running Aries Agent Test Harness Tests","text":"

You can run a full suite of integration tests using the Aries Agent Test Harness (AATH).

Check out and run AATH tests as follows (this tests the aca-py main branch):

git clone https://github.com/hyperledger/aries-agent-test-harness.git\ncd aries-agent-test-harness\n./manage build -a acapy-main\n./manage run -d acapy-main -t @AcceptanceTest -t ~@wip\n

The manage script is described in detail here, including how to modify the AATH code to run the tests against your aca-py repo/branch.

"},{"location":"features/DevReadMe/#development-workflow","title":"Development Workflow","text":"

We use Ruff to enforce a coding style guide.

We use Black to automatically format code.

Please write tests for the work that you submit.

Tests should reside in a directory named tests alongside the code under test. Generally, there is one test file for each file module under test. Test files must have a name starting with test_ to be automatically picked up by the test runner.

There are some good examples of various test scenarios for you to work from including mocking external imports and working with async code so take a look around!

The test suite also displays the current code coverage after each run so you can see how much of your work is covered by tests. Use your best judgement for how much coverage is sufficient.

Please also refer to the contributing guidelines and code of conduct.

"},{"location":"features/DevReadMe/#publishing-releases","title":"Publishing Releases","text":"

The publishing document provides information on tagging a release and publishing the release artifacts to PyPI.

"},{"location":"features/DevReadMe/#dynamic-injection-of-services","title":"Dynamic Injection of Services","text":"

The Agent employs a dynamic injection system whereby providers of base classes are registered with the RequestContext instance, currently within conductor.py. Message handlers and services request an instance of the selected implementation using context.inject(BaseClass); for instance the wallet instance may be injected using wallet = context.inject(BaseWallet). The inject method normally throws an exception if no implementation of the base class is provided, but can be called with required=False for optional dependencies (in which case a value of None may be returned).

Providers are registered with either context.injector.bind_instance(BaseClass, instance) for previously-constructed (singleton) object instances, or context.injector.bind_provider(BaseClass, provider) for dynamic providers. In some cases it may be desirable to write a custom provider which switches implementations based on configuration settings, such as the wallet provider.

The BaseProvider classes in the config.provider module include ClassProvider, which can perform dynamic module inclusion when given the combined module and class name as a string (for instance aries_cloudagent.wallet.indy.IndyWallet). ClassProvider accepts additional positional and keyword arguments to be passed into the class constructor. Any of these arguments may be an instance of ClassProvider.Inject(BaseClass), allowing dynamic injection of dependencies when the class instance is instantiated.
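A brief sketch of these calls together (the cache/storage/wallet class names are illustrative placeholders, and context is assumed to already be in scope, e.g. inside conductor setup):

from aries_cloudagent.cache.base import BaseCache\nfrom aries_cloudagent.cache.in_memory import InMemoryCache\nfrom aries_cloudagent.config.provider import ClassProvider\nfrom aries_cloudagent.storage.base import BaseStorage\nfrom aries_cloudagent.wallet.base import BaseWallet\n\n# Singleton: always return this exact instance.\ncontext.injector.bind_instance(BaseCache, InMemoryCache())\n\n# Dynamic provider: construct from a module.Class path, with an injected dependency.\ncontext.injector.bind_provider(\n    BaseWallet,\n    ClassProvider(\n        \"aries_cloudagent.wallet.indy.IndyWallet\",\n        ClassProvider.Inject(BaseStorage),\n    ),\n)\n\nwallet = context.inject(BaseWallet)\ncache = context.inject(BaseCache, required=False)  # may be None\n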

"},{"location":"features/Endorser/","title":"Transaction Endorser Support","text":"

ACA-Py supports an Endorser Protocol that allows an unprivileged agent (an \"Author\") to request another agent (the \"Endorser\") to sign their transactions so they can write these transactions to the ledger. This is required on Indy ledgers, where new agents will typically be granted only \"Author\" privileges.

Transaction Endorsement is built into the protocols for Schema, Credential Definition and Revocation, and endorsements can be explicitly requested, or ACA-Py can be configured to automate the endorsement workflow.

"},{"location":"features/Endorser/#setting-up-connections-between-authors-and-endorsers","title":"Setting up Connections between Authors and Endorsers","text":"

Since endorsement involves message exchange between two agents, these agents must establish and configure a connection before any endorsements can be provided or requested.

Once the connection is established and active, the \"role\" (either Author or Endorser) is attached to the connection using the /transactions/{conn_id}/set-endorser-role endpoint. Authors must additionally configure the DID of the Endorser, as this is required when the Author signs the transaction (prior to sending it to the Endorser for endorsement) - this is done using the /transactions/{conn_id}/set-endorser-info endpoint.
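As a sketch of those two calls from a controller (the admin URL, connection id, and DID are placeholders; check your ACA-Py version's Swagger for the exact parameter names):

import requests\n\nADMIN = \"http://localhost:3001\"\nconn_id = \"<author-to-endorser connection id>\"\n\n# On the Author's agent: record the role for this connection.\nrequests.post(\n    f\"{ADMIN}/transactions/{conn_id}/set-endorser-role\",\n    params={\"transaction_my_job\": \"TRANSACTION_AUTHOR\"},\n)\n\n# Authors additionally record the Endorser's DID for this connection.\nrequests.post(\n    f\"{ADMIN}/transactions/{conn_id}/set-endorser-info\",\n    params={\"endorser_did\": \"<endorser public DID>\"},\n)\n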

"},{"location":"features/Endorser/#requesting-transaction-endorsement","title":"Requesting Transaction Endorsement","text":"

Transaction Endorsement is built into the protocols for Schema, Credential Definition and Revocation. When executing one of the endpoints that will trigger a ledger write, an endorsement protocol can be explicitly requested by specifying the connection_id (of the Endorser connection) and create_transaction_for_endorser.

(Note that endorsement requests can be automated, see the section on \"Configuring ACA-Py\" below.)

If transaction endorsement is requested, then ACA-Py will create a transaction record (this will be returned by the endpoint, rather than the Schema, Cred Def, etc) and the following endpoints must be invoked:

| Protocol Step | Author | Endorser |
| --- | --- | --- |
| Request Endorsement | /transactions/create-request | |
| Endorse Transaction | | /transactions/{tran_id}/endorse |
| Write Transaction | /transactions/{tran_id}/write | |

Additional endpoints allow the Endorser to reject the endorsement request, or for the Author to re-submit or cancel a request.

Web hooks will be triggered to notify each ACA-Py agent of any transaction requests, endorsements, etc., to allow the controller to react to the event; alternatively, the process can be automated via command-line parameters (see below).

"},{"location":"features/Endorser/#configuring-aca-py-for-auto-or-manual-endorsement","title":"Configuring ACA-Py for Auto or Manual Endorsement","text":"

The following start-up parameters are supported by ACA-Py:

Endorsement:\n  --endorser-protocol-role <endorser-role>\n                        Specify the role ('author' or 'endorser') which this agent will participate. Authors will request transaction endorsement from an Endorser. Endorsers will endorse transactions from\n                        Authors, and may write their own transactions to the ledger. If no role (or 'none') is specified then the endorsement protocol will not be used and this agent will write transactions to\n                        the ledger directly. [env var: ACAPY_ENDORSER_ROLE]\n  --endorser-public-did <endorser-public-did>\n                        For transaction Authors, specify the public DID of the Endorser agent who will be endorsing transactions. Note this requires that the connection be made using the Endorser's public\n                        DID. [env var: ACAPY_ENDORSER_PUBLIC_DID]\n  --endorser-alias <endorser-alias>\n                        For transaction Authors, specify the alias of the Endorser connection that will be used to endorse transactions. [env var: ACAPY_ENDORSER_ALIAS]\n  --auto-request-endorsement\n                        For Authors, specify whether to automatically request endorsement for all transactions. (If not specified, the controller must invoke the request endorse operation for each\n                        transaction.) [env var: ACAPY_AUTO_REQUEST_ENDORSEMENT]\n  --auto-endorse-transactions\n                        For Endorsers, specify whether to automatically endorse any received endorsement requests. (If not specified, the controller must invoke the endorsement operation for each transaction.)\n                        [env var: ACAPY_AUTO_ENDORSE_TRANSACTIONS]\n  --auto-write-transactions\n                        For Authors, specify whether to automatically write any endorsed transactions. (If not specified, the controller must invoke the write transaction operation for each transaction.) [env\n                        var: ACAPY_AUTO_WRITE_TRANSACTIONS]\n  --auto-create-revocation-transactions\n                        For Authors, specify whether to automatically create transactions for a cred def's revocation registry. (If not specified, the controller must invoke the endpoints required to create\n                        the revocation registry and assign to the cred def.) [env var: ACAPY_CREATE_REVOCATION_TRANSACTIONS]\n  --auto-promote-author-did\n                        For Authors, specify whether to automatically promote a DID to the wallet public DID after writing to the ledger. [env var: ACAPY_AUTO_PROMOTE_AUTHOR_DID]\n
"},{"location":"features/Endorser/#how-aca-py-handles-endorsements","title":"How Aca-py Handles Endorsements","text":"

Internally, the Endorsement functionality is implemented as a protocol, and is implemented consistently with other protocols:

The Endorser makes use of the Event Bus (links to the PR which links to a hackmd doc) to notify other protocols of any Endorser events of interest. For example, after a Credential Definition endorsement is received, the TransactionManager writes the endorsed transaction to the ledger and uses the Event Bus to notify the Credential Definition manager that it can do any required post-processing (such as writing the cred def record to the wallet, initiating the revocation registry, etc.).

The overall architecture can be illustrated as:

"},{"location":"features/Endorser/#create-credential-definition-and-revocation-registry","title":"Create Credential Definition and Revocation Registry","text":"

An example of an Endorser flow is as follows, showing how a credential definition endorsement is received and processed, and optionally kicks off the revocation registry process:

You can see that there is a standard endorser flow happening each time there is a ledger write (illustrated in the \"Endorser\" process).

At the end of each endorse sequence, the TransactionManager sends a notification via the EventBus so that any dependent processing can continue. Each Router is responsible for listening and responding to these notifications if necessary.

For example:

Using the EventBus decouples the event sequence. Any functions triggered by an event notification are typically also available directly via Admin endpoints.

"},{"location":"features/Endorser/#create-did-and-promote-to-public","title":"Create DID and Promote to Public","text":"

... and an example of creating a DID and promoting it to public (and creating an ATTRIB for the endpoint):

You can see the same endorsement processes in this sequence.

Once the DID is written, the DID can (optionally) be promoted to the public DID, which will also invoke an ATTRIB transaction to write the endpoint.

"},{"location":"features/JsonLdCredentials/","title":"JSON-LD Credentials in ACA-Py","text":"

By design, Hyperledger Aries is credential format agnostic. This means you can use it for any credential format, as long as an RFC is defined for the specific credential format. ACA-Py currently supports two types of credentials: Indy and JSON-LD credentials. This document describes how to use the latter by making use of W3C Verifiable Credentials using Linked Data Proofs.

"},{"location":"features/JsonLdCredentials/#table-of-contents","title":"Table of Contents","text":""},{"location":"features/JsonLdCredentials/#general-concept","title":"General Concept","text":"

The rest of this guide assumes some basic understanding of W3C Verifiable Credentials, JSON-LD and Linked Data Proofs. If you're not familiar with some of these concepts, the following resources can help you get started:

"},{"location":"features/JsonLdCredentials/#bbs","title":"BBS+","text":"

BBS+ credentials offer a lot of privacy-preserving features over non-ZKP credentials. Therefore, we recommend always using BBS+ credentials over non-ZKP credentials. To get started with BBS+ credentials, it is recommended to at least read RFC 0646: W3C Credential Exchange using BBS+ Signatures for a general overview.

Some other resources that can help you get started with BBS+ credentials:

"},{"location":"features/JsonLdCredentials/#preparing-to-issue-a-credential","title":"Preparing to Issue a Credential","text":"

Contrary to Indy credentials, JSON-LD credentials do not need a schema or credential definition to issue credentials. Everything required to issue the credential is embedded into the credential itself using Linked Data Contexts.

"},{"location":"features/JsonLdCredentials/#json-ld-context","title":"JSON-LD Context","text":"

It is required that every property key in the document can be mapped to an IRI. This means the property key must either be an IRI by default, or have the shorthand property mapped in the @context of the document. If you have properties that are not mapped to IRIs, the Issue Credential API will throw the following error:

<x> attributes dropped. Provide definitions in context to correct. [<missing-properties>]

For credentials the https://www.w3.org/2018/credentials/v1 context MUST always be the first context. In addition, when issuing BBS+ credentials the https://w3id.org/security/bbs/v1 URL MUST be present in the context. For convenience this URL will be automatically added to the @context of the credential if not present.

{\n  \"@context\": [\n    \"https://www.w3.org/2018/credentials/v1\",\n    \"https://other-contexts.com\"\n  ]\n}\n
"},{"location":"features/JsonLdCredentials/#writing-json-ld-contexts","title":"Writing JSON-LD Contexts","text":"

Writing JSON-LD contexts can be a daunting task and is out of scope of this guide. Generally you should try to make use of already existing vocabularies. Some examples are the vocabularies defined in the W3C Credentials Community Group:

Verifiable credentials have not been around that long, so there aren't many vocabularies ready to use. If you can't use one of the existing vocabularies, it is still beneficial to lean on already-defined lower-level contexts. http://schema.org has a large registry of definitions that can be used to build new contexts. The example vocabularies linked above all make use of types from http://schema.org.

For the remainder of this guide, we will be using the example UniversityDegreeCredential type and https://www.w3.org/2018/credentials/examples/v1 context from the Verifiable Credential Data Model. You should not use this for production use cases.

"},{"location":"features/JsonLdCredentials/#signature-suite","title":"Signature Suite","text":"

Before issuing a credential you must determine a signature suite to use. ACA-Py currently supports two signature suites for issuing credentials:

Generally, you should always use BbsBlsSignature2020, as it allows the holder to derive a new credential during proving, meaning it doesn't have to disclose all fields and doesn't have to reveal the signature.

"},{"location":"features/JsonLdCredentials/#did-method","title":"Did Method","text":"

Besides the JSON-LD context, we need a did to use for issuing the credential. ACA-Py currently supports two did methods for issuing credentials:

"},{"location":"features/JsonLdCredentials/#didsov","title":"did:sov","text":"

When using did:sov you need to make sure to use a public did so other agents can resolve the did. It is also important that the other agent is using the same Indy ledger for resolving the did. You can get the public did using the /wallet/did/public endpoint. For backwards compatibility, the did is returned without the did:sov prefix. When using the did for issuance, make sure to prepend this prefix to the did (so DViYrCMPWfuLiY7LLs8giB becomes did:sov:DViYrCMPWfuLiY7LLs8giB).
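For example, a quick way to normalize the value returned by /wallet/did/public before using it as an issuer id:

nym = \"DViYrCMPWfuLiY7LLs8giB\"  # as returned by /wallet/did/public\nissuer_did = nym if nym.startswith(\"did:sov:\") else f\"did:sov:{nym}\"\n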

"},{"location":"features/JsonLdCredentials/#didkey","title":"did:key","text":"

A did:key did is not anchored to a ledger, but embeds the key directly in the identifier part of the did. See the did:key Method Specification for more information.

You can create a did:key using the /wallet/did/create endpoint with the following body. Use ed25519 for Ed25519Signature2018, bls12381g2 for BbsBlsSignature2020.

{\n  \"method\": \"key\",\n  \"options\": {\n    \"key_type\": \"bls12381g2\" // or ed25519\n  }\n}\n

The above call will return a did that looks something like this: did:key:zUC7FsmhhifDTuYXdwYES2UpCpWwYieJRapC6oEWqyt5KfJ3ztfLzYnbWjuXQ5drYaKaho3FjxrfDB81gtAJKjbM4yAmBuNoj3YKDXqW151KkkYarpEoEVWMMcN5zPfjCrQ8Saj

"},{"location":"features/JsonLdCredentials/#issuing-credentials","title":"Issuing Credentials","text":"

Issuing JSON-LD credentials is only possible with the issue credential v2 protocol (/issue-credential-2.0).

The format used for exchanging JSON-LD credentials is defined in RFC 0593: JSON-LD Credential Attachment format. The API in ACA-Py exactly matches the formats as described in this RFC, with the most important (from the ACA-Py API perspective) being aries/ld-proof-vc-detail@v1.0. Read the RFC to see the exact properties required to construct a valid Linked Data Proof VC Detail.

All endpoints in the API use the aries/ld-proof-vc-detail@v1.0 format. We'll use /issue-credential-2.0/send as an example, but it works the same for the other endpoints. Contrary to issuing Indy credentials, JSON-LD credentials do not require a credential preview. All properties should be directly embedded in the credential.

The detail should be included under the filter.ld_proof property. To issue a credential call the /issue-credential-2.0/send endpoint, with the example body below and the connection_id and issuer keys replaced. The value of issuer should be the did that you created in the Did Method paragraph above.

If you don't have auto-respond-credential-offer and auto-store-credential enabled in the ACA-Py config, you will need to call /issue-credential-2.0/records/{cred_ex_id}/send-request and /issue-credential-2.0/records/{cred_ex_id}/store to finalize the credential issuance.

See the example body
{\n  \"connection_id\": \"ddc23de9-359f-465c-b66e-f7c5a0cc9a57\",\n  \"filter\": {\n    \"ld_proof\": {\n      \"credential\": {\n        \"@context\": [\n          \"https://www.w3.org/2018/credentials/v1\",\n          \"https://www.w3.org/2018/credentials/examples/v1\"\n        ],\n        \"type\": [\"VerifiableCredential\", \"UniversityDegreeCredential\"],\n        \"issuer\": \"did:key:zUC7FsmhhifDTuYXdwYES2UpCpWwYieJRapC6oEWqyt5KfJ3ztfLzYnbWjuXQ5drYaKaho3FjxrfDB81gtAJKjbM4yAmBuNoj3YKDXqW151KkkYarpEoEVWMMcN5zPfjCrQ8Saj\",\n        \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n        \"credentialSubject\": {\n          \"degree\": {\n            \"type\": \"BachelorDegree\",\n            \"name\": \"Bachelor of Science and Arts\"\n          },\n          \"college\": \"Faber College\"\n        }\n      },\n      \"options\": {\n        \"proofType\": \"BbsBlsSignature2020\"\n      }\n    }\n  }\n}\n
"},{"location":"features/JsonLdCredentials/#retrieving-issued-credentials","title":"Retrieving Issued Credentials","text":"

After issuing the credential, the credential should be stored inside the wallet. Because the structure of JSON-LD credentials is so different from Indy credentials, a new endpoint was added to retrieve W3C credentials.

Call the /credentials/w3c endpoint to retrieve all JSON-LD credentials in your wallet. See the detail below for an example response based on the issued credential from the Issuing Credentials paragraph above.

See the example response
{\n  \"results\": [\n    {\n      \"contexts\": [\n        \"https://www.w3.org/2018/credentials/examples/v1\",\n        \"https://www.w3.org/2018/credentials/v1\",\n        \"https://w3id.org/security/bbs/v1\"\n      ],\n      \"types\": [\"UniversityDegreeCredential\", \"VerifiableCredential\"],\n      \"schema_ids\": [],\n      \"issuer_id\": \"did:key:zUC7FsmhhifDTuYXdwYES2UpCpWwYieJRapC6oEWqyt5KfJ3ztfLzYnbWjuXQ5drYaKaho3FjxrfDB81gtAJKjbM4yAmBuNoj3YKDXqW151KkkYarpEoEVWMMcN5zPfjCrQ8Saj\",\n      \"subject_ids\": [],\n      \"proof_types\": [\"BbsBlsSignature2020\"],\n      \"cred_value\": {\n        \"@context\": [\n          \"https://www.w3.org/2018/credentials/v1\",\n          \"https://www.w3.org/2018/credentials/examples/v1\",\n          \"https://w3id.org/security/bbs/v1\"\n        ],\n        \"type\": [\"VerifiableCredential\", \"UniversityDegreeCredential\"],\n        \"issuer\": \"did:key:zUC7FsmhhifDTuYXdwYES2UpCpWwYieJRapC6oEWqyt5KfJ3ztfLzYnbWjuXQ5drYaKaho3FjxrfDB81gtAJKjbM4yAmBuNoj3YKDXqW151KkkYarpEoEVWMMcN5zPfjCrQ8Saj\",\n        \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n        \"credentialSubject\": {\n          \"degree\": {\n            \"type\": \"BachelorDegree\",\n            \"name\": \"Bachelor of Science and Arts\"\n          },\n          \"college\": \"Faber College\"\n        },\n        \"proof\": {\n          \"type\": \"BbsBlsSignature2020\",\n          \"proofPurpose\": \"assertionMethod\",\n          \"verificationMethod\": \"did:key:zUC7FsmhhifDTuYXdwYES2UpCpWwYieJRapC6oEWqyt5KfJ3ztfLzYnbWjuXQ5drYaKaho3FjxrfDB81gtAJKjbM4yAmBuNoj3YKDXqW151KkkYarpEoEVWMMcN5zPfjCrQ8Saj#zUC7FsmhhifDTuYXdwYES2UpCpWwYieJRapC6oEWqyt5KfJ3ztfLzYnbWjuXQ5drYaKaho3FjxrfDB81gtAJKjbM4yAmBuNoj3YKDXqW151KkkYarpEoEVWMMcN5zPfjCrQ8Saj\",\n          \"created\": \"2021-05-03T12:31:28.561945\",\n          \"proofValue\": \"iUFtRGdLLCWxKx8VD3oiFBoRMUFKhSitTzMsfImXm6OF0d8il+Z40aLz8S7m8EcXPQhRjcWWL9jkfcf1SDifD4CvxVg69NvB7hZyIIz9hwAyi3LmTm0ez4NDRCKyieBuzqKbfM2eACWn/ilhOJBm6w==\"\n        }\n      },\n      \"cred_tags\": {},\n      \"record_id\": \"541ddbce5760497d98e68917be8c05bd\"\n    }\n  ]\n}\n
"},{"location":"features/JsonLdCredentials/#present-proof","title":"Present Proof","text":"

\u26a0\ufe0f TODO: https://github.com/hyperledger/aries-cloudagent-python/pull/1125

"},{"location":"features/JsonLdCredentials/#vc-api","title":"VC-API","text":"

In order to support these functions outside of the respective DIDComm protocols, a set of endpoints conforming to the vc-api specification are available. These endpoints should be used by a controller when building an identity platform.

These endpoints include:

To learn more about using these endpoints, please refer to the available postman collection.

"},{"location":"features/Mediation/","title":"Mediation docs","text":""},{"location":"features/Mediation/#concepts","title":"Concepts","text":""},{"location":"features/Mediation/#command-line-arguments","title":"Command Line Arguments","text":"

The minimum set of arguments required to enable mediation are:

aca-py start ... \\\n    --open-mediation\n

To automate the mediation process on startup, additionally specify the following argument on the mediated agent (not the mediator):

aca-py start ... \\\n    --mediator-invitation \"<a multi-use invitation url from the mediator>\"\n

If a default mediator has already been established, then the --default-mediator-id argument can be used instead of the --mediator-invitation.

"},{"location":"features/Mediation/#didcomm-messages","title":"DIDComm Messages","text":"

See Aries RFC 0211: Coordinate Mediation Protocol.

"},{"location":"features/Mediation/#admin-api","title":"Admin API","text":""},{"location":"features/Mediation/#mediator-message-flow-overview","title":"Mediator Message Flow Overview","text":""},{"location":"features/Mediation/#using-a-mediator","title":"Using a Mediator","text":"

After establishing a connection with a mediator that has granted mediation, you can use that mediator's id for future DIDComm connections. When creating, receiving, or accepting an invitation intended to be mediated, you provide mediation_id with the desired mediator id. If using a single mediator for all future connections, you can set a default mediation id. If no mediation_id is provided, the default mediation id will be used instead.
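For example (a sketch; field support can vary by ACA-Py version), the mediation id can be included when creating an out-of-band invitation:

// POST /out-of-band/create-invitation\n{\n  \"handshake_protocols\": [\"https://didcomm.org/didexchange/1.0\"],\n  \"mediation_id\": \"<the desired mediator's mediation id>\"\n}\n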

"},{"location":"features/Multicredentials/","title":"Multi-Credentials","text":"

Multiple AnonCreds credentials can be combined to present a presentation proof with an \"and\" logical operator: for instance, a verifier can ask for the \"name\" claim from an eID and the \"address\" claim from a bank statement to have a single proof that is either valid or invalid. With the Present Proof Protocol v2, it is possible to have \"and\" and \"or\" logical operators for AnonCreds and/or W3C Verifiable Credentials.

With the Present Proof Protocol v2, verifiers can ask for a combination of credentials as proof. For instance, a Verifier can ask a claim from an AnonCreds and a verifiable presentation from a W3C Verifiable Credential, which would open the possibilities of Aries Cloud Agent Python being used for rather complex presentation proof requests that wouldn't be possible without the support of AnonCreds or W3C Verifiable Credentials.

Moreover, it is possible to make similar presentation proof requests using the or logical operator. For instance, a verifier can ask for either an eID in AnonCreds format or an eID in W3C Verifiable Credential format. This has the potential to solve the interoperability problem of different credential formats and ecosystems from a user point of view by shifting the requirement of holding/accepting different credential formats from identity holders to verifiers. Here again, using Aries Cloud Agent Python as the underlying verifier agent can tackle such complex presentation proof requests since the agent is capable of verifying both type of credential formats and proof types.

In the future, it may even be possible to include an mDoc as an attachment with an and or or logical operation, along with AnonCreds and/or W3C Verifiable Credentials. For this to happen, ACA-Py either needs the capability to validate mDocs internally or to connect to third-party endpoints to validate them and get a response.

"},{"location":"features/Multiledger/","title":"Multi-ledger in ACA-Py","text":"

ACA-Py supports the ability to use multiple Indy ledgers (both IndySdk and IndyVdr) for resolving a DID. For read requests, checking of multiple ledgers in parallel is done dynamically according to the logic detailed in Read Requests Ledger Selection. For write requests, dynamic allocation of the write_ledger is supported. Configurable write ledgers can be assigned using is_write in the configuration or using any of the --genesis-url, --genesis-file, and --genesis-transactions startup (ACA-Py) arguments. If no write ledger is assigned then a ConfigError is raised.

More background information including problem statement, design (algorithm) and more can be found here.

"},{"location":"features/Multiledger/#table-of-contents","title":"Table of Contents","text":""},{"location":"features/Multiledger/#usage","title":"Usage","text":"

Multi-ledger is disabled by default. You can enable support for multiple ledgers using the --genesis-transactions-list startup parameter. This parameter accepts a string which is the path to the YAML configuration file. For example:

--genesis-transactions-list ./aries_cloudagent/config/multi_ledger_config.yml

If --genesis-transactions-list is specified, then --genesis-url, --genesis-file, --genesis-transactions should not be specified.

"},{"location":"features/Multiledger/#example-config-file","title":"Example config file","text":"
- id: localVON\n  is_production: false\n  genesis_url: \"http://host.docker.internal:9000/genesis\"\n- id: bcovrinTest\n  is_production: true\n  is_write: true\n  genesis_url: \"http://test.bcovrin.vonx.io/genesis\"\n
- id: localVON\n  is_production: false\n  genesis_url: \"http://host.docker.internal:9000/genesis\"\n- id: bcovrinTest\n  is_production: true\n  is_write: true\n  genesis_url: \"http://test.bcovrin.vonx.io/genesis\"\n  endorser_did: \"9QPa6tHvBHttLg6U4xvviv\"\n  endorser_alias: \"endorser_test\"\n- id: greenlightDev\n  is_production: true\n  is_write: true\n  genesis_url: \"http://test.bcovrin.vonx.io/genesis\"\n

Note: the is_write property means that the ledger is write configurable. With reference to the above config example, both the bcovrinTest and greenlightDev ledgers are write configurable (greenlightDev is no longer available; in the above it points to BCovrin Test as well). By default, on startup bcovrinTest will be the write ledger, as it is the topmost write-configurable production ledger (more details regarding the selection rule below). Using the PUT /ledger/{ledger_id}/set-write-ledger endpoint, either greenlightDev or bcovrinTest can be set as the write ledger.

Note 2: The greenlightDev ledger is no longer available, so both ledger entries in the example above and below intentionally point to the same ledger URL.

- id: localVON\n  is_production: false\n  is_write: true\n  genesis_url: \"http://host.docker.internal:9000/genesis\"\n- id: bcovrinTest\n  is_production: true\n  genesis_url: \"http://test.bcovrin.vonx.io/genesis\"\n- id: greenlightDev\n  is_production: true\n  genesis_url: \"http://test.bcovrin.vonx.io/genesis\"\n

Note: With regards to the example config above, localVON will be the write ledger; as there are no write-configurable production ledgers, the topmost write-configurable non-production ledger is chosen.

"},{"location":"features/Multiledger/#config-properties","title":"Config properties","text":"

For each ledger, the required properties are as following:

For connecting to ledger, one of the following needs to be specified:

Optional properties:

Note: Both endorser_did and endorser_alias are part of the endorser info. Whenever a write ledger is selected using PUT /ledger/{ledger_id}/set-write-ledger, the endorser info associated with that ledger in the config updates the endorser.endorser_public_did and endorser.endorser_alias profile setting respectively.

"},{"location":"features/Multiledger/#multi-ledger-admin-api","title":"Multi-ledger Admin API","text":"

Multi-ledger related actions are grouped under the ledger topic in the SwaggerUI.

"},{"location":"features/Multiledger/#ledger-selection","title":"Ledger Selection","text":""},{"location":"features/Multiledger/#read-requests","title":"Read Requests","text":"

The following process is executed for these functions in ACA-Py:

  1. get_schema
  2. get_credential_definition
  3. get_revoc_reg_def
  4. get_revoc_reg_entry
  5. get_key_for_did
  6. get_all_endpoints_for_did
  7. get_endpoint_for_did
  8. get_nym_role
  9. get_revoc_reg_delta

If multiple ledgers are configured, then the IndyLedgerRequestsExecutor service extracts the DID from the record identifier and executes the check below; otherwise it returns the BaseLedger instance.

"},{"location":"features/Multiledger/#for-checking-ledger-in-parallel","title":"For checking ledger in parallel","text":""},{"location":"features/Multiledger/#write-requests","title":"Write Requests","text":"

On startup, the first configured applicable ledger is assigned as the write_ledger (BaseLedger), the selection is dependent on the order (top-down) and whether it is production or non_production. For instance, considering this example configuration, ledger bcovrinTest will be set as write_ledger as it is the topmost production ledger. If no production ledgers are included in configuration then the topmost non_production ledger is selected.

"},{"location":"features/Multiledger/#a-special-warning-for-taa-acceptance","title":"A Special Warning for TAA Acceptance","text":"

When you run in multi-ledger mode, ACA-Py will use the pool-name (or id) specified in the ledger configuration file for each ledger.

(When running in single-ledger mode, ACA-Py uses default as the ledger name.)

If you are running against a ledger in write mode, and the ledger requires you to accept a Transaction Author Agreement (TAA), ACA-Py stores the TAA acceptance status in the wallet in a non-secrets record, using the ledger's pool_name as a key.

This means that if you are upgrading from single-ledger to multi-ledger mode, you will need to either:

or:

Once you re-start ACA-Py, you can check the GET /ledger/taa endpoint to verify your TAA acceptance status.

"},{"location":"features/Multiledger/#impact-on-other-aca-py-function","title":"Impact on other ACA-Py function","text":"

There should be no impact/change in functionality to any ACA-Py protocols.

IndySdkLedger was refactored by replacing wallet: IndySdkWallet instance variable with profile: Profile and accordingly .aries_cloudagent/indy/credex/verifier, .aries_cloudagent/indy/models/pres_preview, .aries_cloudagent/indy/sdk/profile.py, .aries_cloudagent/indy/sdk/verifier, ./aries_cloudagent/indy/verifier were also updated.

Added build_and_return_get_nym_request and submit_get_nym_request helper functions to IndySdkLedger and IndyVdrLedger.

Best practice/feedback emerging from Askar session deadlock issue and endorser refactoring PR was also addressed here by not leaving sessions open unnecessarily and changing context.session to context.profile.session, etc.

These changes are made here:

"},{"location":"features/Multiledger/#known-issues","title":"Known Issues","text":""},{"location":"features/Multitenancy/","title":"Multi-tenancy in ACA-Py","text":"

Most deployments of ACA-Py use a single wallet for all operations. This means all connections, credentials, keys, and everything else is stored in the same wallet and shared between all controllers of the agent. Multi-tenancy in ACA-Py allows multiple tenants to use the same ACA-Py instance with a different context. All tenants get their own encrypted wallet that only holds their own data.

This allows ACA-Py to be used for a wider range of use cases. One use case could be a company that creates a wallet for each department. Each department has full control over the actions they perform while having a shared instance for easy maintenance. Another use case could be an Issuer-Hosted Custodial Agent, where it is required to host the agent on behalf of someone else.

"},{"location":"features/Multitenancy/#table-of-contents","title":"Table of Contents","text":""},{"location":"features/Multitenancy/#general-concept","title":"General Concept","text":"

When multi-tenancy is enabled in ACA-Py there is still a single agent running, however, some of the resources are now shared between the tenants of the agent. Each tenant has their own wallet, with their own DIDs, connections, and credentials. Transports and most of the settings are still shared between agents. Each wallet uses the same endpoint, so to the outside world, it is not obvious multiple tenants are using the same agent.

"},{"location":"features/Multitenancy/#base-and-sub-wallets","title":"Base and Sub Wallets","text":"

Multi-tenancy in ACA-Py makes a distinction between a base wallet and sub wallets.

The wallets used by the different tenants are called sub wallets. A sub wallet is almost identical to a wallet when multi-tenancy is disabled. This means that you can do everything with it that a single-tenant ACA-Py instance can also do.

The base wallet, however, takes on a different role and has limited functionality. Its main function is to manage the sub wallets, which can be done using the Multi-tenant Admin API. It stores all settings and information about the different sub wallets and will route incoming messages to the corresponding sub wallets. See Message Routing for more details. All other features are disabled for the base wallet. This means it cannot issue credentials, present proof, or do any of the other actions sub wallets can do. This is to keep a clear hierarchical difference between base and sub wallets. For this reason, the base wallet should generally not be provisioned using the --wallet-seed argument: not only is it unnecessary for sub wallet management operations, but it would also require the DID to be correctly registered on the ledger for the service to start up correctly.

"},{"location":"features/Multitenancy/#usage","title":"Usage","text":"

Multi-tenancy is disabled by default. You can enable support for multiple wallets using the --multitenant startup parameter. To also be able to manage wallets for the tenants, the multi-tenant admin API can be enabled using the --multitenant-admin startup parameter. See Multi-tenant Admin API below for more info on the admin API.

The --jwt-secret startup parameter is required when multi-tenancy is enabled. This is used for JWT creation and verification. See Authentication below for more info.

Example:

# This enables multi-tenancy in ACA-Py\nmultitenant: true\n\n# This enables the admin API for multi-tenancy. More information below\nmultitenant-admin: true\n\n# This sets the secret used for JWT creation/verification for sub wallets\njwt-secret: Something very secret\n
"},{"location":"features/Multitenancy/#multi-tenant-admin-api","title":"Multi-tenant Admin API","text":"

The multi-tenant admin API allows you to manage wallets in ACA-Py. Only the base wallet can manage wallets, so you can't, for example, create a wallet in the context of a sub wallet (using the Authorization header as specified in Authentication).

Multi-tenancy related actions are grouped under the /multitenancy path or the multitenancy topic in the SwaggerUI. As mentioned above, the multi-tenant admin API is disabled by default, even when multi-tenancy is enabled. This is to allow for more flexible agent configuration (e.g. horizontal scaling where only a single instance exposes the admin API). To enable the multi-tenant admin API, the --multitenant-admin startup parameter can be used.

See the SwaggerUI for the exact API definition for multi-tenancy.
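For example, a sub wallet can be created with a call like this (field values are illustrative); the response includes a token that the tenant then uses as its Authorization bearer token:

// POST /multitenancy/wallet\n{\n  \"wallet_name\": \"tenant-1\",\n  \"wallet_key\": \"tenant-1-secret-key\",\n  \"wallet_type\": \"askar\",\n  \"label\": \"Tenant One\",\n  \"key_management_mode\": \"managed\"\n}\n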

"},{"location":"features/Multitenancy/#managed-vs-unmanaged-mode","title":"Managed vs Unmanaged Mode","text":"

Multi-tenancy in ACA-Py is designed with two key management modes in mind.

"},{"location":"features/Multitenancy/#managed-mode","title":"Managed Mode","text":"

In managed mode, ACA-Py will manage the key for the wallet. This is the easiest configuration as it allows ACA-Py to fully control the wallet. When a message is received from another agent it can immediately unlock the wallet and process the message. The wallet key is stored encrypted in the base wallet.

"},{"location":"features/Multitenancy/#unmanaged-mode","title":"Unmanaged Mode","text":"

In unmanaged mode, ACA-Py won't manage the key for the wallet. The key is not stored in the base wallet, which means the key to unlock the wallet needs to be provided whenever the wallet is used. When a message from another agent is received, ACA-Py cannot immediately unlock the wallet and process the message. See Authentication for more info.

It is important to note unmanaged mode doesn't provide a lot of security over managed mode. The key is still processed by the agent, and therefore trust is required. It could however provide some benefit in the case a multi-tenant agent is compromised, as the agent doesn't store the key to unlock the wallet.

Although support for unmanaged mode is mostly in place, receiving messages from other agents in unmanaged mode is not yet supported. This means unmanaged mode cannot be used yet.

"},{"location":"features/Multitenancy/#mode-usage","title":"Mode Usage","text":"

The mode used can be specified when creating a wallet using the key_management_mode parameter.

// POST /multitenancy/wallet\n{\n  // ... other params ...\n  \"key_management_mode\": \"managed\" // or \"unmanaged\"\n}\n
"},{"location":"features/Multitenancy/#message-routing","title":"Message Routing","text":"

In multi-tenant mode, when ACA-Py receives a message from another agent, it will need to determine which tenant to route the message to. Hyperledger Aries defines two types of routing methods, mediation and relaying.

See the Mediators and Relays RFC for an in-depth description of the difference between the two concepts.

"},{"location":"features/Multitenancy/#relaying","title":"Relaying","text":"

In multi-tenant mode, ACA-Py still exposes a single endpoint for each transport. This means it can't route messages to sub wallets based on the endpoint. To resolve this, the base wallet acts as a relay for all sub wallets. As can be seen in the architecture diagram above, all messages go through the base wallet. Whenever a sub wallet creates a new key or connection, it is registered at the base wallet. This allows the base wallet to look at the recipient keys for a message and determine which wallet it needs to route to.

"},{"location":"features/Multitenancy/#mediation","title":"Mediation","text":"

ACA-Py allows messages to be routed through a mediator, and multi-tenancy can be used in combination with external mediators. The following scenarios are possible:

  1. The base wallet has a default mediator set that will be used by sub wallets.
     - Use --mediator-invitation to connect to the mediator, request mediation, and set it as the default mediator.
     - Use default-mediator-id if you're already connected to the mediator and mediation is granted (e.g. after restart).
     - When a sub wallet creates a connection or key, it will be registered at the mediator via the base wallet connection. The base wallet will still act as a relay and route the messages to the correct sub wallets.
     - Pro: Not every wallet needs to create a connection with the mediator.
     - Con: Sub wallets have no control over the mediator.
  2. Sub wallet creates a connection with the mediator and requests mediation.
     - Use mediation as you would in a non-multi-tenant agent; however, the base wallet will still act as a relay.
     - You can set the default mediator to use for connections (using the mediation API).
     - Pro: Sub wallets have control over the mediator.
     - Con: Every wallet needs to create a new connection with the mediator and request mediation.

The main tradeoff between option 1. and 2. is redundancy and control. Option 1. doesn't require every sub wallet to create a new connection with the mediator and request mediation; when all sub wallets are going to use the same mediator, this can be a huge benefit. Option 2. gives more control over the mediator being used, which could be useful if, for example, each wallet uses a different mediator.

A combination of option 1. and 2. is also possible. In this case, two mediators will be used and the sub wallet mediator will forward to the base wallet mediator, which will, in turn, forward to the ACA-Py instance.

+---------------------+      +----------------------+      +--------------------+\n| Sub wallet mediator | ---> | Base wallet mediator | ---> | Multi-tenant agent |\n+---------------------+      +----------------------+      +--------------------+\n
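
For the first scenario, the base wallet can be pointed at an external mediator at startup using the --mediator-invitation flag described above; a sketch (the invitation URL is a placeholder and other required startup options are omitted):

aca-py start --multitenant --mediator-invitation \"https://mediator.example.com?c_i=...\"\n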
"},{"location":"features/Multitenancy/#webhooks","title":"Webhooks","text":""},{"location":"features/Multitenancy/#webhook-urls","title":"Webhook URLs","text":"

ACA-Py makes use of webhook events to call back to the controller. Multiple webhook targets can be specified; in multi-tenant mode, however, it may be desirable to specify different webhook targets per wallet.

When creating a wallet, the wallet_dispatch_type parameter can be used to specify how webhooks for the wallet should be dispatched. The options are: default (dispatch only to the webhook URLs specified for this wallet), base (dispatch only to the webhook URLs of the base wallet), or both.

If either default or both is specified you can set the webhook URLs specific to this wallet using the wallet.webhook_urls option.

Example:

// POST /multitenancy/wallet\n{\n  // ... other params ...\n  \"wallet_dispatch_type\": \"default\",\n  \"wallet_webhook_urls\": [\n    \"https://webhook-url.com/path\",\n    \"https://another-url.com/site\"\n  ]\n}\n
"},{"location":"features/Multitenancy/#identifying-the-wallet","title":"Identifying the wallet","text":"

When the webhook URLs of the base wallet are used, or when multiple wallets specify the same webhook URL, it can be hard to identify the wallet an event belongs to. To resolve this, each webhook event includes the wallet id the event corresponds to.

For HTTP events the wallet id is included as the x-wallet-id header. For WebSockets, the wallet id is included in the enclosing JSON object.

HTTP example:

POST <webhook-url>/{topic} [headers=x-wallet-id]\n{\n    // event payload\n}\n

WebSocket example:

{\n  \"topic\": \"{topic}\",\n  \"wallet_id\": \"{wallet_id}\",\n  \"payload\": {\n    // event payload\n  }\n}\n
"},{"location":"features/Multitenancy/#authentication","title":"Authentication","text":"

When multi-tenancy is not enabled you can authenticate with the agent using the x-api-key header. As there is only a single wallet, this provides sufficient authentication and authorization.

For sub wallets, an additional authentication method is introduced using JSON Web Tokens (JWTs). A token parameter is returned after creating a wallet or calling the get token endpoint. This token must be provided for every admin API call you want to perform for the wallet using the Bearer authorization scheme.

Example

GET /connections [headers=\"Authorization: Bearer {token}]\n

The Authorization header is in addition to the Admin API key. So if an admin API key is set (which it should be in production), both the Authorization and the x-api-key headers must be provided when making calls to a sub wallet. For calls to the base wallet, only the x-api-key header should be provided.
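
For example, a call to a sub wallet with both headers might look like the following sketch (TENANT_TOKEN is a placeholder for the tenant token obtained as described below):

curl -X GET \"${ACAPY_ADMIN_URL}/connections\" \\\n   -H \"Authorization: Bearer $TENANT_TOKEN\" \\\n   -H \"x-api-key: $ACAPY_ADMIN_URL_API_KEY\"\n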

"},{"location":"features/Multitenancy/#getting-a-token","title":"Getting a token","text":"

A token can be obtained in two ways. The first is the token parameter in the response of the create wallet endpoint (POST /multitenancy/wallet). The second is the get wallet token endpoint (POST /multitenancy/wallet/{wallet_id}/token).

"},{"location":"features/Multitenancy/#method-1-register-new-tenant","title":"Method 1: Register new tenant","text":"

This is the method you use to obtain a token when you haven't already registered a tenant. In this process, you first register a tenant; an object containing your tenant token, as well as other useful information such as your wallet id, is then returned to you.

Example

new_tenant='{\n  \"image_url\": \"https://aries.ca/images/sample.png\",\n  \"key_management_mode\": \"managed\",\n  \"label\": \"example-label-02\",\n  \"wallet_dispatch_type\": \"default\",\n  \"wallet_key\": \"example-encryption-key-02\",\n  \"wallet_name\": \"example-name-02\",\n  \"wallet_type\": \"askar\",\n  \"wallet_webhook_urls\": [\n    \"https://example.com/webhook\"\n  ]\n}'\n
echo $new_tenant | curl -X POST \"${ACAPY_ADMIN_URL}/multitenancy/wallet\" \\\n   -H \"Content-Type: application/json\" \\\n   -H \"X-Api-Key: $ACAPY_ADMIN_URL_API_KEY\" \\\n   -d @-\n

Response

{\n  \"settings\": {\n    \"wallet.type\": \"askar\",\n    \"wallet.name\": \"example-name-02\",\n    \"wallet.webhook_urls\": [\n      \"https://example.com/webhook\"\n    ],\n    \"wallet.dispatch_type\": \"default\",\n    \"default_label\": \"example-label-02\",\n    \"image_url\": \"https://aries.ca/images/sample.png\",\n    \"wallet.id\": \"3b64ad0d-f556-4c04-92bc-cd95bfde58cd\"\n  },\n  \"key_management_mode\": \"managed\",\n  \"updated_at\": \"2022-04-01T15:12:35.474975Z\",\n  \"wallet_id\": \"3b64ad0d-f556-4c04-92bc-cd95bfde58cd\",\n  \"created_at\": \"2022-04-01T15:12:35.474975Z\",\n  \"token\": \"eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ3YWxsZXRfaWQiOiIzYjY0YWQwZC1mNTU2LTRjMDQtOTJiYy1jZDk1YmZkZTU4Y2QifQ.A4eWbSR2M1Z6mbjcSLOlciBuUejehLyytCVyeUlxI0E\"\n}\n
"},{"location":"features/Multitenancy/#method-2-get-tenant-token","title":"Method 2: Get tenant token","text":"

This method allows you to retrieve a tenant token for an already registered tenant. To retrieve a token you will need an Admin API key (if your admin API is protected with one), the wallet_key, and the wallet_id of the tenant. Note that calling the get tenant token endpoint will invalidate the old token. This is useful if the old token needs to be revoked, but it does mean that you can't have multiple valid authentication tokens for the same wallet; only the most recently generated token is valid.

Example

curl -X POST \"${ACAPY_ADMIN_URL}/multitenancy/wallet/{wallet_id}/token\" \\\n   -H \"Content-Type: application/json\" \\\n   -H \"X-Api-Key: $ACAPY_ADMIN_URL_API_KEY\" \\\n   -d { \"wallet_key\": \"example-encryption-key-02\" }\n

Response

{\n  \"token\": \"eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ3YWxsZXRfaWQiOiIzYjY0YWQwZC1mNTU2LTRjMDQtOTJiYy1jZDk1YmZkZTU4Y2QifQ.A4eWbSR2M1Z6mbjcSLOlciBuUejehLyytCVyeUlxI0E\"\n}\n

In unmanaged mode, the get token endpoint also requires the wallet_key parameter to be included in the request body. The wallet key will be included in the JWT so the wallet can be unlocked when making requests to the admin API.

{\n  \"wallet_id\": \"wallet_id\",\n  // \"wallet_key\" in only present in unmanaged mode\n  \"wallet_key\": \"wallet_key\"\n}\n

In unmanaged mode, sending the wallet_key to unlock the wallet in every request is not \u201csecure\u201d but keeps it simple at the moment. Eventually, the authentication method should be pluggable, and unmanaged mode would just mean that the key to unlock the wallet is not managed by ACA-Py.

"},{"location":"features/Multitenancy/#jwt-secret","title":"JWT Secret","text":"

For deterministic JWT creation and verification across restarts and multiple instances, the same JWT secret needs to be used. Therefore, a --jwt-secret param is added to the ACA-Py agent that will be used for JWT creation and verification.

"},{"location":"features/Multitenancy/#swaggerui","title":"SwaggerUI","text":"

When using the SwaggerUI you can click the icon next to each of the endpoints or the Authorize button at the top to set the correct authentication headers. Make sure to also include the Bearer part in the input field. This won't be automatically added.

"},{"location":"features/Multitenancy/#tenant-management","title":"Tenant Management","text":"

After registering a tenant, which effectively creates a subwallet, you may need to update the tenant's information or delete the tenant. The following describes how to accomplish both goals.

"},{"location":"features/Multitenancy/#update-a-tenant","title":"Update a tenant","text":"

The following properties can be updated for tenants of a multitenancy wallet: image_url, label, wallet_dispatch_type, and wallet_webhook_urls. To update these properties, send a PUT request with a JSON body containing the properties you wish to update (with their updated values) to the /multitenancy/wallet/${TENANT_WALLET_ID} admin endpoint. If the Admin API endpoint is protected, also include the Admin API key in the request header.

Example

update_tenant='{\n  \"image_url\": \"https://aries.ca/images/sample-updated.png\",\n  \"label\": \"example-label-02-updated\",\n  \"wallet_webhook_urls\": [\n    \"https://example.com/webhook/updated\"\n  ]\n}'\n
echo $update_tenant | curl  -X PUT \"${ACAPY_ADMIN_URL}/multitenancy/wallet/${TENANT_WALLET_ID}\" \\\n   -H \"Content-Type: application/json\" \\\n   -H \"x-api-key: $ACAPY_ADMIN_URL_API_KEY\" \\\n   -d @-\n

Response

{\n  \"settings\": {\n    \"wallet.type\": \"askar\",\n    \"wallet.name\": \"example-name-02\",\n    \"wallet.webhook_urls\": [\n      \"https://example.com/webhook/updated\"\n    ],\n    \"wallet.dispatch_type\": \"default\",\n    \"default_label\": \"example-label-02-updated\",\n    \"image_url\": \"https://aries.ca/images/sample-updated.png\",\n    \"wallet.id\": \"3b64ad0d-f556-4c04-92bc-cd95bfde58cd\"\n  },\n  \"key_management_mode\": \"managed\",\n  \"updated_at\": \"2022-04-01T16:23:58.642004Z\",\n  \"wallet_id\": \"3b64ad0d-f556-4c04-92bc-cd95bfde58cd\",\n  \"created_at\": \"2022-04-01T15:12:35.474975Z\"\n}\n

An Admin API key is the only credential ALLOWED in the request headers during an update; including the Bearer token header will result in a 404: Unauthorized error.

"},{"location":"features/Multitenancy/#remove-a-tenant","title":"Remove a tenant","text":"

The following information is required to delete a tenant: the wallet_id of the tenant (in the path), the tenant's wallet_key (in the request body), and the Admin API key (in the header, if the Admin API is protected).

Example

curl -X POST \"${ACAPY_ADMIN_URL}/multitenancy/wallet/{wallet_id}/remove\" \\\n   -H \"Content-Type: application/json\" \\\n   -H \"x-api-key: $ACAPY_ADMIN_URL_API_KEY\" \\\n   -d '{ \"wallet_key\": \"example-encryption-key-02\" }'\n

Response

{}\n
"},{"location":"features/Multitenancy/#per-tenant-settings","title":"Per tenant settings","text":"

ACA-Py startup parameters/environment variables can also be configured at a tenant/subwallet level. PR#2233 provides the ability to update the following subset of settings when creating or updating the subwallet:

Labels Setting ACAPY_LOG_LEVEL log-level log.level ACAPY_INVITE_PUBLIC invite-public debug.invite_public ACAPY_PUBLIC_INVITES public-invites public_invites ACAPY_AUTO_ACCEPT_INVITES auto-accept-invites debug.auto_accept_invites ACAPY_AUTO_ACCEPT_REQUESTS auto-accept-requests debug.auto_accept_requests ACAPY_AUTO_PING_CONNECTION auto-ping-connection auto_ping_connection ACAPY_MONITOR_PING monitor-ping debug.monitor_ping ACAPY_AUTO_RESPOND_MESSAGES auto-respond-messages debug.auto_respond_messages ACAPY_AUTO_RESPOND_CREDENTIAL_OFFER auto-respond-credential-offer debug.auto_respond_credential_offer ACAPY_AUTO_RESPOND_CREDENTIAL_REQUEST auto-respond-credential-request debug.auto_respond_credential_request ACAPY_AUTO_VERIFY_PRESENTATION auto-verify-presentation debug.auto_verify_presentation ACAPY_NOTIFY_REVOCATION notify-revocation revocation.notify ACAPY_AUTO_REQUEST_ENDORSEMENT auto-request-endorsement endorser.auto_request ACAPY_AUTO_WRITE_TRANSACTIONS auto-write-transactions endorser.auto_write ACAPY_CREATE_REVOCATION_TRANSACTIONS auto-create-revocation-transactions endorser.auto_create_rev_reg ACAPY_ENDORSER_ROLE endorser-protocol-role endorser.protocol_role

An extra_settings dict field has been added to the create wallet request schema; extra_settings can be configured in the request body as below:

Example Request

{\n    \"wallet_name\": \" ... \",\n    \"default_label\": \" ... \",\n    \"wallet_type\": \" ... \",\n    \"wallet_key\": \" ... \",\n    \"key_management_mode\": \"managed\",\n    \"wallet_webhook_urls\": [],\n    \"wallet_dispatch_type\": \"base\",\n    \"extra_settings\": {\n        \"ACAPY_LOG_LEVEL\": \"INFO\",\n        \"ACAPY_INVITE_PUBLIC\": true,\n        \"public-invites\": true\n    },\n}\n
echo $new_tenant | curl -X POST \"${ACAPY_ADMIN_URL}/multitenancy/wallet\" \\\n  -H \"Content-Type: application/json\" \\\n  -H \"X-Api-Key: $ACAPY_ADMIN_URL_API_KEY\" \\\n  -d @-\n

The extra_settings dict field has likewise been added to the update tenant request schema.

Example Request

  {\n    \"wallet_webhook_urls\": [ ... ],\n    \"wallet_dispatch_type\": \"default\",\n    \"label\": \" ... \",\n    \"image_url\": \" ... \",\n    \"extra_settings\": {\n        \"ACAPY_LOG_LEVEL\": \"INFO\",\n        \"ACAPY_INVITE_PUBLIC\": true,\n        \"ACAPY_PUBLIC_INVITES\": false\n    },\n  }\n
  echo $update_tenant | curl  -X PUT \"${ACAPY_ADMIN_URL}/multitenancy/wallet/${WALLET_ID}\" \\\n   -H \"Content-Type: application/json\" \\\n   -H \"x-api-key: $ACAPY_ADMIN_URL_API_KEY\" \\\n   -d @-\n
"},{"location":"features/PlugIns/","title":"Deeper Dive: Aca-Py Plug-Ins","text":""},{"location":"features/PlugIns/#whats-in-a-plug-in-and-how-does-it-work","title":"What's in a Plug-In and How does it Work?","text":"

Plug-ins are loaded on Aca-Py startup based on the following parameters:

The --plugin parameter specifies a package that is loaded by Aca-Py at runtime and extends Aca-Py by adding support for additional protocols and message types, and/or extending the Admin API with additional endpoints.

The original plug-in design (which we will call the "old" model) explicitly included message_types.py and routes.py (to add Admin APIs). Functionality was added later (we'll call this the "new" model) to allow the plug-in to include a generic setup package that can perform arbitrary initialization. The "new" model also includes support for a definition.py file that can specify plug-in version information: the major/minor plug-in version, as well as the minimum supported version (if another agent is running an older version of the plug-in).

You can discover which plug-ins are installed in an Aca-Py instance by calling the GET /plugins endpoint (in the "server" section). Note that this will return all loaded protocols, including the built-ins. You can also call GET /status/config to inspect the Aca-Py configuration, which will include the configuration for the external plug-ins.
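
For example (the admin URL and API key variables are deployment-specific placeholders):

curl -X GET \"${ACAPY_ADMIN_URL}/plugins\" \\\n   -H \"x-api-key: $ACAPY_ADMIN_URL_API_KEY\"\n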

"},{"location":"features/PlugIns/#setup-method","title":"setup method","text":"

If a setup method is provided, it will be called. If not, the message_types.py and routes.py will be explicitly loaded.

This would be in the package/module __init__.py:

async def setup(context: InjectionContext):\n    pass\n

TODO I couldn't find an implementation of a custom setup in any of the existing plug-ins, so I'm not completely sure what are the best practices for this option.

"},{"location":"features/PlugIns/#message_typespy","title":"message_types.py","text":"

When loading a plug-in, if there is a message_types.py available, Aca-Py will check the following attributes to initialize the protocol(s): MESSAGE_TYPES (a mapping of message types to their handler classes) and CONTROLLERS (protocol controllers, used for feature discovery).

"},{"location":"features/PlugIns/#routespy","title":"routes.py","text":"

If routes.py is available, then Aca-Py will call the following functions to initialize the Admin endpoints: most importantly register(app), which adds the plug-in's routes to the Admin web application (see routes.register() in the sequence diagram below).

"},{"location":"features/PlugIns/#definitionpy","title":"definition.py","text":"

If definition.py is available, Aca-Py will read this package to determine protocol version information. An example follows (this is an example that specifies two protocol versions):

versions = [\n    {\n        \"major_version\": 1,\n        \"minimum_minor_version\": 0,\n        \"current_minor_version\": 0,\n        \"path\": \"v1_0\",\n    },\n    {\n        \"major_version\": 2,\n        \"minimum_minor_version\": 0,\n        \"current_minor_version\": 0,\n        \"path\": \"v2_0\",\n    },\n]\n

The attributes are: major_version (the protocol major version), minimum_minor_version (the oldest minor version this plug-in supports when interoperating with agents running an older version), current_minor_version (the minor version implemented by this plug-in), and path (the sub-package containing that version's implementation).

"},{"location":"features/PlugIns/#loading-aca-py-plug-ins-at-runtime","title":"Loading Aca-Py Plug-Ins at Runtime","text":"

The load sequence for a plug-in is shown below (the "Startup" participant depends on how Aca-Py is running: upgrade, provision or start):

sequenceDiagram\n  participant Startup\n  Note right of Startup: Configuration is loaded on startup<br/>from aca-py config params\n    Startup->>+ArgParse: configure\n    ArgParse->>settings:  [\"external_plugins\"]\n    ArgParse->>settings:  [\"blocked_plugins\"]\n\n    Startup->>+Conductor: setup()\n      Note right of Conductor: Each configured plug-in is validated and loaded\n      Conductor->>DefaultContext:  build_context()\n      DefaultContext->>DefaultContext:  load_plugins()\n      DefaultContext->>+PluginRegistry:  register_package() (for built-in protocols)\n        PluginRegistry->>PluginRegistry:  register_plugin() (for each sub-package)\n      DefaultContext->>PluginRegistry:  register_plugin() (for non-protocol built-ins)\n      loop for each external plug-in\n      DefaultContext->>PluginRegistry:  register_plugin()\n      alt if a setup method is provided\n        PluginRegistry->>ExternalPlugIn:  has setup\n      else if routes and/or message_types are provided\n        PluginRegistry->>ExternalPlugIn:  has routes\n        PluginRegistry->>ExternalPlugIn:  has message_types\n      end\n      opt if definition is provided\n        PluginRegistry->>ExternalPlugIn:  definition()\n      end\n      end\n      DefaultContext->>PluginRegistry:  init_context()\n        loop for each external plug-in\n        alt if a setup method is provided\n          PluginRegistry->>ExternalPlugIn:  setup()\n        else if a setup method is NOT provided\n          PluginRegistry->>PluginRegistry:  load_protocols()\n          PluginRegistry->>PluginRegistry:  load_protocol_version()\n          PluginRegistry->>ProtocolRegistry:  register_message_types()\n          PluginRegistry->>ProtocolRegistry:  register_controllers()\n        end\n        PluginRegistry->>PluginRegistry:  register_protocol_events()\n      end\n\n      Conductor->>Conductor:  load_transports()\n\n      Note right of Conductor: If the admin server is enabled, plug-in routes are added\n      Conductor->>AdminServer:  create admin server if enabled\n\n    Startup->>Conductor: start()\n      Conductor->>Conductor:  start_transports()\n      Conductor->>AdminServer:  start()\n\n    Note right of Startup: the following represents an<br/>admin server api request\n    Startup->>AdminServer:  setup_context() (called on each request)\n      AdminServer->>PluginRegistry:  register_admin_routes()\n      loop for each external plug-in\n        PluginRegistry->>ExternalPlugIn:  routes.register() (to register endpoints)\n      end
"},{"location":"features/PlugIns/#developing-a-new-plug-in","title":"Developing a New Plug-In","text":"

When developing a new plug-in:

"},{"location":"features/PlugIns/#pip-vs-poetry-support","title":"PIP vs Poetry Support","text":"

Most Aca-Py plug-ins provide support for installing the plug-in using poetry. It is recommended to include support in your package for installing using either pip or poetry, to provide maximum support for users of your plug-in.

"},{"location":"features/PlugIns/#plug-in-demo","title":"Plug-In Demo","text":"

TBD

"},{"location":"features/PlugIns/#aca-py-plug-ins","title":"Aca-Py Plug-ins","text":"

This list was originally published in this hackmd document.

Maintainer Name Features Last Update Link BCGov Redis Events Inbound/Outbound message queue Sep 2022 https://github.com/bcgov/aries-acapy-plugin-redis-events Hyperledger Aries Toolbox UI for ACA-py Aug 2022 https://github.com/hyperledger/aries-toolbox Hyperledger Aries ACApy Plugin Toolbox Protocol Handlers Aug 2022 https://github.com/hyperledger/aries-acapy-plugin-toolbox Indicio Data Transfer Specific Data import Aug 2022 https://github.com/Indicio-tech/aries-acapy-plugin-data-transfer Indicio Question & Answer Non-Aries Protocol Aug 2022 https://github.com/Indicio-tech/acapy-plugin-qa Indicio Acapy-plugin-pickup Fetching Messages from Mediator Aug 2022 https://github.com/Indicio-tech/acapy-plugin-pickup Indicio Machine Readable GF Governance Framework Mar 2022 https://github.com/Indicio-tech/mrgf Indicio Cache Redis Cache for Scalability Jul 2022 https://github.com/Indicio-tech/aries-acapy-cache-redis SICPA Dlab Kafka Events Event Bus Integration Aug 2022 https://github.com/sicpa-dlab/aries-acapy-plugin-kafka-events SICPA Dlab DidComm Resolver Universal Resolver for DIDComm Aug 2022 https://github.com/sicpa-dlab/acapy-resolver-didcomm SICPA Dlab Universal Resolver Multi-ledger Reading Jul 2021 https://github.com/sicpa-dlab/acapy-resolver-universal DDX mydata-did-protocol Oct 2022 https://github.com/decentralised-dataexchange/acapy-mydata-did-protocol BCGov Basic Message Storage Basic message storage (traction) Dec 2022 https://github.com/bcgov/traction/tree/develop/plugins/basicmessage_storage BCGov Multi-tenant Provider Multi-tenant Provider (traction) Dec 2022 https://github.com/bcgov/traction/tree/develop/plugins/multitenant_provider BCGov Traction Innkeeper Innkeeper (traction) Feb 2023 https://github.com/bcgov/traction/tree/develop/plugins/traction_innkeeper"},{"location":"features/PlugIns/#references","title":"References","text":"

The following links may be helpful or provide additional context for the current plug-in support. (These are links to issues or pull requests that were raised during plug-in development.)

Configuration params:

Loading plug-ins:

Versioning for plug-ins:

"},{"location":"features/SelectiveDisclosureJWTs/","title":"SD-JWT Implementation in ACA-Py","text":"

This document describes the implementation of SD-JWTs in ACA-Py according to the Selective Disclosure for JWTs (SD-JWT) Specification, which defines a mechanism for selective disclosure of individual elements of a JSON object used as the payload of a JSON Web Signature structure.

This implementation adds an important privacy-preserving feature to JWTs, since the receiver of an unencrypted JWT can view all claims within. This feature allows the holder to present only a relevant subset of the claims for a given presentation. The issuer includes plaintext claims, called disclosures, outside of the JWT. Each disclosure corresponds to a hidden claim within the JWT. When a holder prepares a presentation, they include along with the JWT only the disclosures corresponding to the claims they wish to reveal. The verifier verifies that the disclosures in fact correspond to claim values within the issuer-signed JWT. The verifier cannot view the claim values not disclosed by the holder.

In addition, this implementation includes an optional mechanism for key binding, which is the concept of binding an SD-JWT to a holder's public key and requiring that the holder prove possession of the corresponding private key when presenting the SD-JWT.

"},{"location":"features/SelectiveDisclosureJWTs/#issuer-instructions","title":"Issuer Instructions","text":"

The issuer determines which claims in an SD-JWT can be selectively disclosable. In this implementation, all claims at all levels of the JSON structure are by default selectively disclosable. If the issuer wishes for certain claims to always be visible, they can indicate which claims should not be selectively disclosable, as described below. Essential verification data such as iss, iat, exp, and cnf are always visible.

The issuer creates a list of JSON paths for the claims that will not be selectively disclosable. Here is an example payload:

{\n    \"birthdate\": \"1940-01-01\",\n    \"address\": {\n        \"street_address\": \"123 Main St\",\n        \"locality\": \"Anytown\",\n        \"region\": \"Anystate\",\n        \"country\": \"US\",\n    },\n    \"nationalities\": [\"US\", \"DE\", \"SA\"],\n}\n
Attribute to access JSON path \"birthdate\" \"birthdate\" The country attribute within the address dictionary \"address.country\" The second item in the nationalities list \"nationalities[1]\" All items in the nationalities list \"nationalities[0:2]\"

The specification defines options for how the issuer can handle nested structures with respect to selective disclosability. As mentioned, all claims at all levels of the JSON structure are by default selectively disclosable.

"},{"location":"features/SelectiveDisclosureJWTs/#option-1-flat-sd-jwt","title":"Option 1: Flat SD-JWT","text":"

The issuer can decide to treat the address claim in the above example payload as a block that can either be disclosed completely or not at all.

The issuer lists out all the claims inside \"address\" in the non_sd_list, but not address itself:

non_sd_list = [\n    \"address.street_address\",\n    \"address.locality\",\n    \"address.region\",\n    \"address.country\",\n]\n
"},{"location":"features/SelectiveDisclosureJWTs/#option-2-structured-sd-jwt","title":"Option 2: Structured SD-JWT","text":"

The issuer may instead decide to make the address claim contents selectively disclosable individually.

The issuer lists only \"address\" in the non_sd_list.

non_sd_list = [\"address\"]\n
"},{"location":"features/SelectiveDisclosureJWTs/#option-3-sd-jwt-with-recursive-disclosures","title":"Option 3: SD-JWT with Recursive Disclosures","text":"

The issuer may also decide to make the address claim contents selectively disclosable recursively, i.e., the address claim is made selectively disclosable as well as its sub-claims.

The issuer lists neither address nor the subclaims of address in the non_sd_list, leaving all with their default selective disclosability. If all claims can be selectively disclosable, the non_sd_list need not be defined explicitly.

"},{"location":"features/SelectiveDisclosureJWTs/#walk-through-of-sd-jwt-implementation","title":"Walk-Through of SD-JWT Implementation","text":""},{"location":"features/SelectiveDisclosureJWTs/#signing-sd-jwts","title":"Signing SD-JWTs","text":""},{"location":"features/SelectiveDisclosureJWTs/#example-input-to-walletsd-jwtsign-endpoint","title":"Example input to /wallet/sd-jwt/sign endpoint","text":"
{\n  \"did\": \"WpVJtxKVwGQdRpQP8iwJZy\",\n  \"headers\": {},\n  \"payload\": {\n    \"sub\": \"user_42\",\n    \"given_name\": \"John\",\n    \"family_name\": \"Doe\",\n    \"email\": \"johndoe@example.com\",\n    \"phone_number\": \"+1-202-555-0101\",\n    \"phone_number_verified\": true,\n    \"address\": {\n      \"street_address\": \"123 Main St\",\n      \"locality\": \"Anytown\",\n      \"region\": \"Anystate\",\n      \"country\": \"US\"\n    },\n    \"birthdate\": \"1940-01-01\",\n    \"updated_at\": 1570000000,\n    \"nationalities\": [\"US\", \"DE\", \"SA\"],\n    \"iss\": \"https://example.com/issuer\",\n    \"iat\": 1683000000,\n    \"exp\": 1883000000\n  },\n  \"non_sd_list\": [\n    \"given_name\",\n    \"family_name\",\n    \"nationalities\"\n  ]\n}\n
"},{"location":"features/SelectiveDisclosureJWTs/#output","title":"Output","text":"
\"eyJ0eXAiOiAiSldUIiwgImFsZyI6ICJFZERTQSIsICJraWQiOiAiZGlkOnNvdjpXcFZKdHhLVndHUWRScFFQOGl3Slp5I2tleS0xIn0.eyJfc2QiOiBbIkR0a21ha3NkZGtHRjFKeDBDY0kxdmxRTmZMcGFnQWZ1N3p4VnBGRWJXeXciLCAiSlJLb1E0QXVHaU1INWJIanNmNVV4YmJFeDh2YzFHcUtvX0l3TXE3Nl9xbyIsICJNTTh0TlVLNUstR1lWd0swX01kN0k4MzExTTgwVi13Z0hRYWZvRkoxS09JIiwgIlBaM1VDQmdadVRMMDJkV0pxSVY4elUtSWhnalJNX1NTS3dQdTk3MURmLTQiLCAiX294WGNuSW5Yai1SV3BMVHNISU5YaHFrRVAwODkwUFJjNDBISWE1NElJMCIsICJhdnRLVW5Sdnc1clV0TnZfUnAwUll1dUdkR0RzcnJPYWJfVjR1Y05RRWRvIiwgInByRXZJbzBseTVtNTVsRUpTQUdTVzMxWGdVTElOalo5ZkxiRG81U1pCX0UiXSwgImdpdmVuX25hbWUiOiAiSm9obiIsICJmYW1pbHlfbmFtZSI6ICJEb2UiLCAibmF0aW9uYWxpdGllcyI6IFt7Ii4uLiI6ICJPdU1wcEhpYzEySjYzWTBIY2Ffd1BVeDJCTGdUQVdZQjJpdXpMY3lvcU5JIn0sIHsiLi4uIjogIlIxczlaU3NYeVV0T2QyODdEYy1DTVYyMEdvREF3WUVHV3c4ZkVKd1BNMjAifSwgeyIuLi4iOiAid0lJbjdhQlNDVkFZcUF1Rks3Nmpra3FjVGFvb3YzcUhKbzU5WjdKWHpnUSJ9XSwgImlzcyI6ICJodHRwczovL2V4YW1wbGUuY29tL2lzc3VlciIsICJpYXQiOiAxNjgzMDAwMDAwLCAiZXhwIjogMTg4MzAwMDAwMCwgIl9zZF9hbGciOiAic2hhLTI1NiJ9.cIsuGTIPfpRs_Z49nZcn7L6NUgxQumMGQpu8K6rBtv-YRiFyySUgthQI8KZe1xKyn5Wc8zJnRcWbFki2Vzw6Cw~WyJmWURNM1FQcnZicnZ6YlN4elJsUHFnIiwgIlNBIl0~WyI0UGc2SmZ0UnRXdGFPcDNZX2tscmZRIiwgIkRFIl0~WyJBcDh1VHgxbVhlYUgxeTJRRlVjbWV3IiwgIlVTIl0~WyJ4dkRYMDBmalpmZXJpTmlQb2Q1MXFRIiwgInVwZGF0ZWRfYXQiLCAxNTcwMDAwMDAwXQ~WyJYOTlzM19MaXhCY29yX2hudFJFWmNnIiwgInN1YiIsICJ1c2VyXzQyIl0~WyIxODVTak1hM1k3QlFiWUpabVE3U0NRIiwgInBob25lX251bWJlcl92ZXJpZmllZCIsIHRydWVd~WyJRN1FGaUpvZkhLSWZGV0kxZ0Vaal93IiwgInBob25lX251bWJlciIsICIrMS0yMDItNTU1LTAxMDEiXQ~WyJOeWtVcmJYN1BjVE1ubVRkUWVxZXl3IiwgImVtYWlsIiwgImpvaG5kb2VAZXhhbXBsZS5jb20iXQ~WyJlemJwQ2lnVlhrY205RlluVjNQMGJ3IiwgImJpcnRoZGF0ZSIsICIxOTQwLTAxLTAxIl0~WyJvd3ROX3I5Z040MzZKVnJFRWhQU05BIiwgInN0cmVldF9hZGRyZXNzIiwgIjEyMyBNYWluIFN0Il0~WyJLQXktZ0VaWmRiUnNHV1dNVXg5amZnIiwgInJlZ2lvbiIsICJBbnlzdGF0ZSJd~WyJPNnl0anM2SU9HMHpDQktwa0tzU1pBIiwgImxvY2FsaXR5IiwgIkFueXRvd24iXQ~WyI0Nzg5aG5GSjhFNTRsLW91RjRaN1V3IiwgImNvdW50cnkiLCAiVVMiXQ~WyIyaDR3N0FuaDFOOC15ZlpGc2FGVHRBIiwgImFkZHJlc3MiLCB7Il9zZCI6IFsiTXhKRDV5Vm9QQzFIQnhPRmVRa21TQ1E0dVJrYmNrellza1Z5RzVwMXZ5SSIsICJVYkxmVWlpdDJTOFhlX2pYbS15RHBHZXN0ZDNZOGJZczVGaVJpbVBtMHdvIiwgImhsQzJEYVBwT2t0eHZyeUFlN3U2YnBuM09IZ193Qk5heExiS3lPRDVMdkEiLCAia2NkLVJNaC1PaGFZS1FPZ2JaajhmNUppOXNLb2hyYnlhYzNSdXRqcHNNYyJdfV0~\"\n

The sd_jwt_sign() method:

"},{"location":"features/SelectiveDisclosureJWTs/#verifying-sd-jwts","title":"Verifying SD-JWTs","text":""},{"location":"features/SelectiveDisclosureJWTs/#example-input-to-walletsd-jwtverify-endpoint","title":"Example input to /wallet/sd-jwt/verify endpoint","text":"

Using the output from the /wallet/sd-jwt/sign example above, we have decided to reveal only two of the selectively disclosable claims (sub and updated_at), achieved by including only the disclosures for those claims. We have also included a key binding JWT following the disclosures.

{\n  \"sd_jwt\": \"eyJ0eXAiOiAiSldUIiwgImFsZyI6ICJFZERTQSIsICJraWQiOiAiZGlkOnNvdjpXcFZKdHhLVndHUWRScFFQOGl3Slp5I2tleS0xIn0.eyJfc2QiOiBbIkR0a21ha3NkZGtHRjFKeDBDY0kxdmxRTmZMcGFnQWZ1N3p4VnBGRWJXeXciLCAiSlJLb1E0QXVHaU1INWJIanNmNVV4YmJFeDh2YzFHcUtvX0l3TXE3Nl9xbyIsICJNTTh0TlVLNUstR1lWd0swX01kN0k4MzExTTgwVi13Z0hRYWZvRkoxS09JIiwgIlBaM1VDQmdadVRMMDJkV0pxSVY4elUtSWhnalJNX1NTS3dQdTk3MURmLTQiLCAiX294WGNuSW5Yai1SV3BMVHNISU5YaHFrRVAwODkwUFJjNDBISWE1NElJMCIsICJhdnRLVW5Sdnc1clV0TnZfUnAwUll1dUdkR0RzcnJPYWJfVjR1Y05RRWRvIiwgInByRXZJbzBseTVtNTVsRUpTQUdTVzMxWGdVTElOalo5ZkxiRG81U1pCX0UiXSwgImdpdmVuX25hbWUiOiAiSm9obiIsICJmYW1pbHlfbmFtZSI6ICJEb2UiLCAibmF0aW9uYWxpdGllcyI6IFt7Ii4uLiI6ICJPdU1wcEhpYzEySjYzWTBIY2Ffd1BVeDJCTGdUQVdZQjJpdXpMY3lvcU5JIn0sIHsiLi4uIjogIlIxczlaU3NYeVV0T2QyODdEYy1DTVYyMEdvREF3WUVHV3c4ZkVKd1BNMjAifSwgeyIuLi4iOiAid0lJbjdhQlNDVkFZcUF1Rks3Nmpra3FjVGFvb3YzcUhKbzU5WjdKWHpnUSJ9XSwgImlzcyI6ICJodHRwczovL2V4YW1wbGUuY29tL2lzc3VlciIsICJpYXQiOiAxNjgzMDAwMDAwLCAiZXhwIjogMTg4MzAwMDAwMCwgIl9zZF9hbGciOiAic2hhLTI1NiJ9.cIsuGTIPfpRs_Z49nZcn7L6NUgxQumMGQpu8K6rBtv-YRiFyySUgthQI8KZe1xKyn5Wc8zJnRcWbFki2Vzw6Cw~WyJ4dkRYMDBmalpmZXJpTmlQb2Q1MXFRIiwgInVwZGF0ZWRfYXQiLCAxNTcwMDAwMDAwXQ~WyJYOTlzM19MaXhCY29yX2hudFJFWmNnIiwgInN1YiIsICJ1c2VyXzQyIl0~eyJhbGciOiAiRWREU0EiLCAidHlwIjogImtiK2p3dCIsICJraWQiOiAiZGlkOnNvdjpXcFZKdHhLVndHUWRScFFQOGl3Slp5I2tleS0xIn0.eyJub25jZSI6ICIxMjM0NTY3ODkwIiwgImF1ZCI6ICJodHRwczovL2V4YW1wbGUuY29tL3ZlcmlmaWVyIiwgImlhdCI6IDE2ODgxNjA0ODN9.i55VeR7bNt7T8HWJcfj6jSLH3Q7vFk8N0t7Tb5FZHKmiHyLrg0IPAuK5uKr3_4SkjuGt1_iNl8Wr3atWBtXMDA\"\n}\n
"},{"location":"features/SelectiveDisclosureJWTs/#verify-output","title":"Verify Output","text":"

Note that attributes in the non_sd_list (given_name, family_name, and nationalities), as well as essential verification data (iss, iat, exp), are visible directly within the payload. The disclosures include only the values for the sub and updated_at claims, since those are the only selectively disclosable claims that the holder presented. The corresponding hashes for those disclosures appear in the payload[\"_sd\"] list.

{\n  \"headers\": {\n    \"typ\": \"JWT\",\n    \"alg\": \"EdDSA\",\n    \"kid\": \"did:sov:WpVJtxKVwGQdRpQP8iwJZy#key-1\"\n  },\n  \"payload\": {\n    \"_sd\": [\n      \"DtkmaksddkGF1Jx0CcI1vlQNfLpagAfu7zxVpFEbWyw\",\n      \"JRKoQ4AuGiMH5bHjsf5UxbbEx8vc1GqKo_IwMq76_qo\",\n      \"MM8tNUK5K-GYVwK0_Md7I8311M80V-wgHQafoFJ1KOI\",\n      \"PZ3UCBgZuTL02dWJqIV8zU-IhgjRM_SSKwPu971Df-4\",\n      \"_oxXcnInXj-RWpLTsHINXhqkEP0890PRc40HIa54II0\",\n      \"avtKUnRvw5rUtNv_Rp0RYuuGdGDsrrOab_V4ucNQEdo\",\n      \"prEvIo0ly5m55lEJSAGSW31XgULINjZ9fLbDo5SZB_E\"\n    ],\n    \"given_name\": \"John\",\n    \"family_name\": \"Doe\",\n    \"nationalities\": [\n      {\n        \"...\": \"OuMppHic12J63Y0Hca_wPUx2BLgTAWYB2iuzLcyoqNI\"\n      },\n      {\n        \"...\": \"R1s9ZSsXyUtOd287Dc-CMV20GoDAwYEGWw8fEJwPM20\"\n      },\n      {\n        \"...\": \"wIIn7aBSCVAYqAuFK76jkkqcTaoov3qHJo59Z7JXzgQ\"\n      }\n    ],\n    \"iss\": \"https://example.com/issuer\",\n    \"iat\": 1683000000,\n    \"exp\": 1883000000,\n    \"_sd_alg\": \"sha-256\"\n  },\n  \"valid\": true,\n  \"kid\": \"did:sov:WpVJtxKVwGQdRpQP8iwJZy#key-1\",\n  \"disclosures\": [\n    [\n      \"xvDX00fjZferiNiPod51qQ\",\n      \"updated_at\",\n      1570000000\n    ],\n    [\n      \"X99s3_LixBcor_hntREZcg\",\n      \"sub\",\n      \"user_42\"\n    ]\n  ]\n}\n

The sd_jwt_verify() method:

"},{"location":"features/SupportedRFCs/","title":"Aries AIP and RFCs Supported in Aries Cloud Agent Python","text":"

This document provides a summary of the adherence of ACA-Py to the Aries Interop Profiles, and an overview of the ACA-Py feature set. This document is manually updated and as such, may not be up to date with the most recent release of ACA-Py or the repository main branch. Reminders (and PRs!) to update this page are welcome! If you have any questions, please contact us on the #aries channel on Hyperledger Discord or through an issue in this repo.

Last Update: 2024-03-05, Release 0.12.0rc2

The checklist version of this document was created as a joint effort between Northern Block, Animo Solutions and the Ontario government, on behalf of the Ontario government.

"},{"location":"features/SupportedRFCs/#aip-support-and-interoperability","title":"AIP Support and Interoperability","text":"

See the Aries Agent Test Harness and the Aries Interoperability Status for daily interoperability test run results between ACA-Py and other Aries Frameworks and Agents.

AIP Version Supported Notes AIP 1.0 Fully supported. AIP 2.0 Fully supported, with a couple of very minor exceptions noted below.

A summary of the Aries Interop Profiles and Aries RFCs supported in ACA-Py can be found later in this document.

"},{"location":"features/SupportedRFCs/#platform-support","title":"Platform Support","text":"Platform Supported Notes Server Kubernetes BC Gov has extensive experience running ACA-Py on Red Hat's OpenShift Kubernetes Distribution. Docker Official docker images are published to the GitHub container repository at ghcr.io/hyperledger/aries-cloudagent-python. Desktop Could be run as a local service on the computer iOS Android Browser"},{"location":"features/SupportedRFCs/#agent-types","title":"Agent Types","text":"Role Supported Notes Issuer Holder Verifier Mediator Service See the aries-mediator-service, a pre-configured, production ready Aries Mediator Service based on a released version of ACA-Py. Mediator Client Indy Transaction Author Indy Transaction Endorser Indy Endorser Service See the aries-endorser-service, a pre-configured, production ready Aries Endorser Service based on a released version of ACA-Py."},{"location":"features/SupportedRFCs/#credential-types","title":"Credential Types","text":"Credential Type Supported Notes Hyperledger AnonCreds Includes full issue VC, present proof, and revoke VC support. W3C Verifiable Credentials Data Model Supports JSON-LD Data Integrity Proof Credentials using the Ed25519Signature2018, BbsBlsSignature2020 and BbsBlsSignatureProof2020 signature suites.Supports the DIF Presentation Exchange data format for presentation requests and presentation submissions.Work currently underway to add support for Hyperledger AnonCreds in W3C VC JSON-LD Format"},{"location":"features/SupportedRFCs/#did-methods","title":"DID Methods","text":"Method Supported Notes \"unqualified\" Pre-DID standard identifiers. Used either in a peer-to-peer context, or as an alternate form of a did:sov DID published on an Indy network. did:sov did:web Resolution only did:key did:peer Algorithms 2/3 and 4 Universal Resolver A plug in from SICPA is available that can be added to an ACA-Py installation to support a universal resolver capability, providing support for most DID methods in the W3C DID Method Registry."},{"location":"features/SupportedRFCs/#secure-storage-types","title":"Secure Storage Types","text":"Secure Storage Types Supported Notes Aries Askar Recommended - Aries Askar provides equivalent/evolved secure storage and cryptography support to the \"indy-wallet\" part of the Indy SDK. When using Askar (via the --wallet-type askar startup parameter), other functionality is handled by CredX (AnonCreds) and Indy VDR (Indy ledger interactions). Aries Askar-AnonCreds Recommended - When using Askar/AnonCreds (via the --wallet-type askar-anoncreds startup parameter), other functionality is handled by AnonCreds RS (AnonCreds) and Indy VDR (Indy ledger interactions).This wallet-type will eventually be the same as askar when we have fully integrated the AnonCreds RS library into ACA-Py. Indy SDK Deprecated Full support for the features of the \"indy-wallet\" secure storage capabilities found in the Indy SDK.

New installations of ACA-Py should NOT use the Indy SDK. Existing deployments using the Indy SDK should transition to Aries Askar and related components as soon as possible.

"},{"location":"features/SupportedRFCs/#miscellaneous-features","title":"Miscellaneous Features","text":"Feature Supported Notes ACA-Py Plugins The ACA-Py Plugins repository contains a growing set of plugins that are maintained and (mostly) tested against new releases of ACA-Py. Multi use invitations Invitations using public did Implicit pickup of messages in role of mediator Revocable AnonCreds Credentials Multi-Tenancy Documentation Multi-Tenant Management The Traction open source project from BC Gov is a layer on top of ACA-Py that enables the easy management of ACA-Py tenants, with an Administrative UI (\"The Innkeeper\") and a Tenant UI for using ACA-Py in a web UI (setting up, issuing, holding and verifying credentials) Connection-less (non OOB protocol / AIP 1.0) Only for issue credential and present proof Connection-less (OOB protocol / AIP 2.0) Only for present proof Signed Attachments Used for OOB Multi Indy ledger support (with automatic detection) Support added in the 0.7.3 Release. Persistence of mediated messages Plugins in the ACA-Py Plugins repository are available for persistent queue support using Redis and Kafka. Without persistent queue support, messages are stored in an in-memory queue and so are subject to loss in the case of a sudden termination of an ACA-Py process. The in-memory queue is properly handled in the case of a graceful shutdown of an ACA-Py process (e.g. processing of the queue completes and no new messages are accepted). Storage Import & Export Supported by directly interacting with the Aries Askar (e.g., no Admin API endpoint available for wallet import & export). Aries Askar support includes the ability to import storage exported from the Indy SDK's \"indy-wallet\" component. Documentation for migrating from Indy SDK storage to Askar can be found in the Indy SDK to Askar Migration Guide. SD-JWTs Signing and verifying SD-JWTs is supported"},{"location":"features/SupportedRFCs/#supported-rfcs","title":"Supported RFCs","text":""},{"location":"features/SupportedRFCs/#aip-10","title":"AIP 1.0","text":"

All RFCs listed in AIP 1.0 are fully supported in ACA-Py. The following table provides notes about the implementation of specific RFCs.

RFC Supported Notes 0025-didcomm-transports ACA-Py currently supports HTTP and WebSockets for both inbound and outbound messaging. Transports are pluggable and an agent instance can use multiple inbound and outbound transports. 0160-connection-protocol The agent supports Connection/DID exchange initiated from both plaintext invitations and public DIDs that enable bypassing the invitation message."},{"location":"features/SupportedRFCs/#aip-20","title":"AIP 2.0","text":"

All RFCs listed in AIP 2.0 (including the sub-targets) are fully supported in ACA-Py EXCEPT as noted in the table below.

RFC Supported Notes 0587-encryption-envelope-v2 Supporting the DIDComm v2 encryption envelope does not make sense until DIDComm v2 is supported. 0317-please-ack An investigation was done into supporting please-ack and a number of complications were found. As a result, we expect that please-ack will be dropped from AIP 2.0. It has not been implemented by any Aries frameworks or deployments.

There is a PR to the Aries RFCs repository to remove those RFCs from AIP 2.0. If that PR is merged, the RFCs will be removed from the table above.

"},{"location":"features/SupportedRFCs/#other-supported-rfcs","title":"Other Supported RFCs","text":"RFC Supported Notes 0031-discover-features Rarely (never?) used, and in implementing the V2 version of the protocol, the V1 version was found to be incomplete and was updated as part of Release 0.7.3 0028-introduce 00509-action-menu"},{"location":"features/UsingOpenAPI/","title":"Aries Cloud Agent-Python (ACA-Py) - OpenAPI Code Generation Considerations","text":"

ACA-Py provides an OpenAPI-documented REST interface for administering the agent's internal state and initiating communication with connected agents.

The running agent provides a Swagger User Interface that can be browsed and used to test various scenarios manually (see the Admin API Readme for details). However, it is often desirable to produce native language interfaces rather than coding Controllers using HTTP primitives. This is possible using several public code generation (codegen) tools. This page provides some suggestions based on experience with these tools when trying to generate Typescript wrappers. The information should be useful to those trying to generate other languages. Updates to this page based on experience are encouraged.

"},{"location":"features/UsingOpenAPI/#aca-py-openapi-raw-output-characteristics","title":"ACA-Py, OpenAPI Raw Output Characteristics","text":"

ACA-Py uses aiohttp_apispec tags in code to produce the OpenAPI spec file at runtime dependent on what features have been loaded. How these tags are created is documented in the API Standard Behavior section of the Admin API Readme. The OpenAPI spec is available in raw, unformatted form from a running ACA-Py instance using a route of http://<acapy host and port>/api/docs/swagger.json or from the browser Swagger User Interface directly.

The ACA-Py Admin API evolves across releases. To track these changes and ensure conformance with the OpenAPI specification, we provide a tool located at scripts/generate-open-api-spec. This tool starts ACA-Py, retrieves the swagger.json file, and runs codegen tools to generate specifications in both Swagger and OpenAPI formats with json language output. The output of this tool enables comparison with the checked-in open-api/swagger.json and open-api/openapi.json, and also serves as a useful resource for identifying any non-conformance to the OpenAPI specification. At the moment, validation is turned off via the open-api/openAPIJSON.config file, so warning messages are printed for non-conformance, but the json is still output. Most of the warnings reported by generate-open-api-spec relate to missing operationId fields which results in manufactured method names being created by codegen tools. At the moment, aiohttp_apispec does not support adding operationId annotations via tags.

The generate-open-api-spec tool was initially created to help identify issues with method parameters not being sorted, which resulted in somewhat random ordering each time a codegen operation was performed. This is relevant for languages that do not support named parameters, such as Javascript. It is recommended that generate-open-api-spec be run prior to each release, and the resulting open-api/openapi.json file checked in, to allow tracking of API changes over time. At the moment, this process is not automated as part of the release pipeline.

"},{"location":"features/UsingOpenAPI/#generating-language-wrappers-for-aca-py","title":"Generating Language Wrappers for ACA-Py","text":"

There are inevitably differences around best practice for method naming based on coding language and organization standards.

Best practice for generating ACA-Py language wrappers is to obtain the raw OpenAPI file from a configured/running ACA-Py instance and then post-process it with a merge utility to match routes and insert desired operationId fields. This allows the greatest flexibility in conforming to external naming requirements.

Two major open-source code generation tools are Swagger and OpenAPI Tools. Which of these to use can be very dependent on language support required and preference for the style of code generated.

OpenAPI Tools was found to offer some nice features when generating Typescript. It creates separate files for each class and allows the use of a .openapi-generator-ignore file to override generation if there is a spec file issue that needs to be maintained manually.

If generating code for languages that do not support named parameters, it is recommended to specify the useSingleRequestParameter or equivalent in your code generator of choice. The reason is that, as mentioned previously, there have been instances where parameters were not sorted when output into the raw ACA-Py API spec file, and this approach helps remove that risk.

Another suggestion for code generation is to keep modelPropertyNaming set to original. Although it is tempting to enable marshaling into standard naming formats such as camelCase, the reality is that the models represent what is sent on the wire and documented in the Aries Protocol RFCs. It has proven handy to see code references correspond directly with protocol RFCs when debugging. It will also correspond directly with what the model shows when looking at the ACA-Py Swagger UI in a browser, if you need to try something out manually before coding. One final point: it has occasionally been discovered that code generation tools don't get the marshaling correct in all circumstances when the model name format is changed.
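
Putting these suggestions together, a sketch of fetching the spec from a local agent and generating a Typescript client might look like the following (the host/port, output directory, and generator flags are assumptions that vary by deployment and generator version):

# Fetch the raw OpenAPI spec from a running agent, then generate a client\ncurl -s \"http://localhost:8031/api/docs/swagger.json\" -o swagger.json\nnpx @openapitools/openapi-generator-cli generate \\\n  -i swagger.json -g typescript-fetch -o ./client \\\n  --additional-properties=useSingleRequestParameter=true,modelPropertyNaming=original\n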

"},{"location":"features/UsingOpenAPI/#existing-language-wrappers-for-aca-py","title":"Existing Language Wrappers for ACA-Py","text":""},{"location":"features/UsingOpenAPI/#python","title":"Python","text":""},{"location":"features/UsingOpenAPI/#go","title":"Go","text":""},{"location":"features/UsingOpenAPI/#java","title":"Java","text":""},{"location":"features/devcontainer/","title":"ACA-Py Development with Dev Container","text":"

The following guide will get you up and running and developing/debugging ACA-Py as quickly as possible. We provide a devcontainer and will use VS Code to illustrate.

By no means is ACA-Py limited to these tools; they are merely examples.

For information on running demos and tests using provided shell scripts, see DevReadMe readme.

"},{"location":"features/devcontainer/#caveats","title":"Caveats","text":"

The primary use case for this devcontainer is for developing, debugging and unit testing (pytest) the aries_cloudagent source code.

There are limitations to running this devcontainer; for example, all networking is confined to the container. The container has docker-in-docker, which allows running demos, building docker images, and running docker compose, all within the container.

"},{"location":"features/devcontainer/#files","title":"Files","text":"

The .devcontainer folder contains the devcontainer.json file, which defines this container. We use a Dockerfile and post-install.sh to build and configure the container run image. The Dockerfile is simple but in place to simplify image enhancements (e.g. adding poetry to the image). The post-install.sh will install some additional development libraries (including libraries for BDD support).

"},{"location":"features/devcontainer/#devcontainer","title":"Devcontainer","text":"

What are Development Containers?

A Development Container (or Dev Container for short) allows you to use a container as a full-featured development environment. It can be used to run an application, to separate tools, libraries, or runtimes needed for working with a codebase, and to aid in continuous integration and testing. Dev containers can be run locally or remotely, in a private or public cloud.

see https://containers.dev.

In this guide, we will use Docker and Visual Studio Code with the Dev Containers Extension installed, please set your machine up with those. As of writing, we used the following:

"},{"location":"features/devcontainer/#open-aca-py-in-the-devcontainer","title":"Open ACA-Py in the devcontainer","text":"

To open ACA-Py in a devcontainer, we open the root of this repository. We can open in 2 ways:

  1. Open Visual Studio Code, open the Command Palette, and run Dev Containers: Open Folder in Container...
  2. Open Visual Studio Code and File|Open Folder..., you should be prompted to Reopen in Container.

NOTE follow any prompts to install Python Extension or reload window for Pylance when first building the container.

ADDITIONAL NOTE we advise that after each time you rebuild the container that you also perform: Developer: Reload Window as some extensions seem to require this in order to work as expected.

"},{"location":"features/devcontainer/#devcontainerjson","title":"devcontainer.json","text":"

When the .devcontainer/devcontainer.json is opened, you will see it building... it is building a Python 3.9 image (bash shell) and loading it with all the ACA-Py requirements (and black). We also load a few Visual Studio settings (for running Pytests and formatting with Flake and Black).

"},{"location":"features/devcontainer/#poetry","title":"Poetry","text":"

The Python libraries / dependencies are installed using poetry. For the devcontainer, we DO NOT use virtual environments. This means you will not see or need venv prompts in the terminals, and you will not need to run tasks through poetry (i.e. poetry run black .). If you need to add new dependencies, you will need to add the dependency via poetry AND rebuild your devcontainer.

In VS Code, open a Terminal, you should be able to run the following commands:

python -m aries_cloudagent -v\ncd aries_cloudagent\nruff check .\nblack . --check\npoetry --version\n

The first command should show you that the aries_cloudagent module is loaded (ACA-Py). The others are examples of the code quality checks that ACA-Py runs on commits (if you have pre-commit installed) and on Pull Requests.

When running ruff check . in the terminal, you may see error: Failed to initialize cache at /.ruff_cache: Permission denied (os error 13) - that's ok. If there are actual ruff errors, you should see something like:

error: Failed to initialize cache at /.ruff_cache: Permission denied (os error 13)\nadmin/base_server.py:7:7: D101 Missing docstring in public class\nFound 1 error.\n
"},{"location":"features/devcontainer/#extensions","title":"extensions","text":"

We have added Black formatter and Ruff extensions. Although we have added launch settings for both ruff and black, you can also use the extension commands from the command palette.

More importantly, these extensions now run on document save, so files will be formatted and checked automatically. We advise that after each time you rebuild the container you also perform Developer: Reload Window to ensure the extensions are loaded correctly.

"},{"location":"features/devcontainer/#running-docker-in-docker-demos","title":"Running docker-in-docker demos","text":"

Start by running a von-network inside your dev container, or connect to a hosted ledger (in which case you will need to adjust the ledger configurations).

git clone https://github.com/bcgov/von-network\ncd von-network\n./manage build\n./manage start\ncd ..\n

If you want revocation, start up a tails server in your dev container, or connect to a hosted tails server (once again adjusting the configurations as needed).

git clone https://github.com/bcgov/indy-tails-server.git\ncd indy-tails-server/docker\n./manage build\n./manage start\ncd ../..\n
# open a terminal in VS Code...\ncd demo\n./run_demo faber\n# open a second terminal in VS Code...\ncd demo\n./run_demo alice\n# follow the script...\n
"},{"location":"features/devcontainer/#further-reading-and-links","title":"Further Reading and Links","text":""},{"location":"features/devcontainer/#aca-py-debugging","title":"ACA-Py Debugging","text":"

To better illustrate debugging pytests and ACA-Py runtime code, let's add some run/debug configurations to VS Code. If you have your own launch.json and settings.json, please cut and paste what you want/need.

cp -R .vscode-sample .vscode\n

This will add a launch.json, settings.json and multiple ACA-Py configuration files for developing with different scenarios.

Multiple agents are provided to demonstrate launching several agents in a single debug session. Any of the config files and the launch file can be changed and customized to meet your needs. They are all set up to run on different ports so they don't interfere with each other. Running the debug session from inside the dev container allows you to contact other services such as a local ledger or tails server using localhost, while still being able to access the swagger admin api through your browser.

For all the agents, if you want to use a ledger (von-network) other than localhost, you will need to change the genesis-url config. If you don't want to support revocation, remove or comment out the tails-server-base-url config. If you want to use a non-localhost tails server, you will need to change the url.

"},{"location":"features/devcontainer/#faber","title":"Faber","text":""},{"location":"features/devcontainer/#alice","title":"Alice","text":""},{"location":"features/devcontainer/#endorser","title":"Endorser","text":""},{"location":"features/devcontainer/#author","title":"Author","text":""},{"location":"features/devcontainer/#multitenant-admin","title":"Multitenant-Admin","text":""},{"location":"features/devcontainer/#try-running-faber-and-alice-at-the-same-time-and-add-break-points-and-recreate-the-demo","title":"Try running Faber and Alice at the same time and add break points and recreate the demo","text":"

To run your ACA-Py code in debug mode, go to the Run and Debug view, select the agent(s) you want to start and click Start Debugging (F5).

This will start your source code as a running ACA-Py instance; all configuration is in the *.yml files. These are just sample configurations. Note that we are not using a database and are joining a local VON Network (by default, http://localhost:9000). You could change this to another ledger such as http://test.bcovrin.vonx.io. These are purposefully very simple configurations.

For example, open aries_cloudagent/admin/server.py and set a breakpoint in async def status_handler(self, request: web.BaseRequest):, then call GET /status in the Admin Console and hit your breakpoint.

"},{"location":"features/devcontainer/#pytest","title":"Pytest","text":"

Pytest is installed and almost ready; however, we must build the test list. In the Command Palette, Test: Refresh Tests will scan and find the tests.

See Python Testing for more details, and Test Commands for usage.

WARNING: our pytests include coverage, which will prevent the debugger from working. One way around this would be to have a .vscode/settings.json that says not to use coverage (see above). This will allow you to set breakpoints in the pytest and code under test and use commands such as Test: Debug Tests in Current File to start debugging.
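
For example, a minimal .vscode/settings.json entry could disable coverage for test runs; this sketch assumes coverage is injected via pytest-cov, whose --no-cov flag turns it off:

{\n  \"python.testing.pytestArgs\": [\n    \"--no-cov\"\n  ]\n}\n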

WARNING: the project configuration in pyproject.toml includes performing ruff checks when we run pytest. Including ruff does not play nicely with the Testing view. In order to have our pytests discoverable AND available in the Testing view, we create a .pytest.ini when we build the devcontainer. This file will not be committed to the repo, nor does it impact ./scripts/run_tests, but it will affect any pytest commands you run manually outside of the devcontainer. Just be aware that the file will stay on your file system after you shut down the devcontainer.

"},{"location":"features/devcontainer/#next-steps","title":"Next Steps","text":"

At this point, you now have a development environment where you can add pytests, add ACA-Py code, and run and debug it all. Be aware that there are limitations when combining the devcontainer with other docker networks. You may need to adjust other docker-compose files not to start their own networks, and you may need to reference containers using host.docker.internal. This isn't a panacea but should get you going in the right direction and provide you with some development tools.

"},{"location":"gettingStarted/","title":"Becoming an Indy/Aries Developer","text":"

This guide is to get you from (pretty much) zero to developing code for issuing (and verifying) credentials with your own Aries agent. On the way, you'll look at Hyperledger Indy and how it works, find out about the architecture and components of an Aries agent and its underlying messaging protocols. Scan the list of topics below and jump in as soon as you hit a topic you don't know.

Note that the guidance here includes not only links to look at, but also recommendations about certain material you should not look at, even though you might naturally gravitate to it. That's because the material is out of date and will take you down some unnecessary rabbit holes. Keep your eyes on the goal - developing with Aries to interact with other agents to (amongst other things) connect, issue, hold, present and verify verifiable credentials.

Want to help with this guide? Please add issues or submit a pull request to improve the document. Point out things that are missing, things to improve and especially things that are wrong.

"},{"location":"gettingStarted/AgentConnections/","title":"Establishing a connection between Aries Agents","text":"

Use an ACA-Py issuer/verifier to establish a connection with an Aries mobile wallet. Run the Traction AnonCreds Workshop. Get your own (temporary -- it will be gone in a few weeks!) Aries Cloud Agent Python-based issuer/verifier agent. Connect to the wallet on your mobile phone, issue a credential and then present it back. Lots to learn, without ever leaving your browser!

"},{"location":"gettingStarted/AriesAgentArchitecture/","title":"Aries Cloud Agent Internals: Agent and Controller","text":"

This section talks in particular about the architecture of this Aries cloud agent implementation. An instance of an Aries agent is actually made up of two parts - the agent itself and a controller.

The agent handles all of the core Aries functionality such as interacting with other agents, managing secure storage, sending event notifications to, and receiving directions from, the controller. The controller provides the business logic that defines how that particular agent instance behaves--how to respond to events in the agent, and when to trigger the agent to initiate events. The controller might be a web or native user interface for a person or it might be coded business rules driven by an enterprise system.

Between the two is a simple interface. The agent sends event notifications to the controller and the controller sends administrative messages to the agent. The controller registers a webhook with the agent, and the event notifications are delivered as HTTP callbacks; in turn, the agent exposes a REST API to the controller for all of the administrative messages it is configured to handle. Each of the DIDComm protocols supported by the agent adds a set of administrative messages for the controller to use in responding to events. The Aries cloud agent includes an OpenAPI (aka Swagger) user interface for a developer to use to explore the API for a specific agent.
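
As an illustrative sketch of that interface (not a complete controller - the ports, webhook URL and use of aiohttp here are assumptions for the example), a minimal Python controller receives webhook events from the agent and calls back into the agent's admin REST API:

from aiohttp import ClientSession, web\n\nADMIN_API = 'http://localhost:8031'  # assumed admin API address\n\nasync def handle_webhook(request):\n    # ACA-Py POSTs events to <webhook-url>/topic/{topic}/ on the controller\n    topic = request.match_info['topic']\n    event = await request.json()\n    print(f'Agent event on topic {topic}: {event}')\n    return web.Response(status=200)\n\nasync def receive_invitation(invitation: dict) -> dict:\n    # Business logic responds by calling the agent's admin REST API\n    async with ClientSession() as session:\n        async with session.post(\n            f'{ADMIN_API}/connections/receive-invitation', json=invitation\n        ) as resp:\n            return await resp.json()\n\napp = web.Application()\napp.add_routes([web.post('/topic/{topic}/', handle_webhook)])\nweb.run_app(app, port=8022)  # start the agent with --webhook-url http://localhost:8022\n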

As such, the agent is just a configured dependency in an Aries cloud agent deployment. Thus, the vast majority of Aries developers will focus on building controllers (business logic) and perhaps some custom plugins (protocols, as we'll discuss soon) for the agent. Only a relatively small group of Aries cloud agent maintainers will focus on adding and maintaining the agent dependency.

Want more details about the agent and controller internals? Take a look at the Aries cloud agent deployment model document.

Back to the Aries Developer - Getting Started Guide.

"},{"location":"gettingStarted/AriesBasics/","title":"What is Aries?","text":"

Hyperledger Aries provides a shared, reusable, interoperable tool kit designed for initiatives and solutions focused on creating, transmitting and storing verifiable digital credentials. It is infrastructure for blockchain-rooted, peer-to-peer interactions. It includes a shared cryptographic wallet for blockchain clients as well as a communications protocol for allowing off-ledger interaction between those clients.

A Hyperledger Aries agent (such as the one in this repository):

The concepts and features that make up the Aries project are documented in the aries-rfcs - but don't dive in there yet! We'll get to the features and concepts to be found there with a guided tour of the key RFCs. The Aries Working Group meets weekly to expand the design and components of Aries.

The Aries Cloud Agent Python currently only supports Hyperledger Indy-based verifiable credentials and public ledgers. Longer term (as we'll see later in this guide), protocols will be extended or added to support other verifiable credential implementations and public ledgers.

Back to the Aries Developer - Getting Started Guide.

"},{"location":"gettingStarted/AriesBigPicture/","title":"Aries Agents in context: The Big Picture","text":"

Aries agents can be used in a lot of places. This classic Indy Architecture picture shows several agents - the four around the outside (on a phone, a tablet, a laptop and an enterprise server) are referred to as \"edge agents\", and the many cloud agents in the blue circle.

The agents in the picture share many attributes:

While there can be many other agent setups, the picture above shows the most common ones - edge agents for people, edge agents for organizations and cloud agents for routing messages (although cloud agents could be edge agents. Sigh...). A significant emerging use case missing from that picture is agents embedded within/associated with IoT devices. In the common IoT case, IoT device agents are just variants of other edge agents, connected to the rest of the ecosystem through a cloud agent. All the same principles apply.

Misleading in the picture is that (almost) all agents connect directly to the Ledger network. In this picture it's the Sovrin ledger, but that could be any Indy network (e.g. a set of nodes running indy-node software) and, in future, ledgers from other providers. That implies most agents embed the ledger SDK (e.g. indy-sdk) and make calls to the ledger SDK to interact with the ledger and other SDK controlled resources (e.g. secure storage). Thus, unlike what is implied in the picture, edge agents (commonly) do not call a cloud agent to interact with the ledger - they do it directly. Super small IoT devices are an instance of an exception to that - lacking compute/storage resources and/or connectivity, they might communicate with a cloud agent that would communicate with the ledger.

While Aries agents currently only support Indy-based ledgers, the intention is to add support for other ledgers.

The (most common) purpose of cloud agents is to enable secure and privacy preserving routing of messages between edge agents. Rather than messages going directly from edge agent to edge agent (which is often impossible - for example sending to a mobile agent), messages sent from edge agent to edge agent are routed through a sequence of cloud agents. Some of those cloud agents might be controlled by the sender, some by the receiver and others might be gateways owned by agent vendors (called \"Agencies\"). In all cases, an edge agent tells routing agents \"here's how to send messages to me\", so a routing agent sending a message only has to know how to send a peer-to-peer message. While quite complicated, the protocols used by the agents largely take care of this complexity, and most developers don't have to know much about it.

Note the many caveats in this section - \"most common\", \"commonly\", etc. There are many small building blocks available in Aries and underlying components that can be combined in infinite ways. We recommend not worrying about the alternate use cases for now. Focus on understanding the common use cases while remembering that other configurations are possible.

We also recommend not digging into all the layers described here. Just as you don't have to know how TCP/IP works to write a web app, you don't need to know how indy-node or indy-sdk work to be able to build your first Aries-based application. Later in this guide we'll cover the starting points you do need to know.

Back to the Aries Developer - Getting Started Guide.

"},{"location":"gettingStarted/AriesDeveloperDemos/","title":"Developer Demos and Samples of Aries Agent","text":"

Here are some demos that developers can use to get up to speed on Aries. You don't have to be a developer to use these. If you can use docker and JSON, then that's enough to give these a try.

"},{"location":"gettingStarted/AriesDeveloperDemos/#open-api-demo","title":"Open API demo","text":"

This demo uses agents (and an Indy ledger), but doesn't implement a controller at all. Instead it uses the OpenAPI (aka Swagger) user interface to let you be the controller to connect agents, issue a credential and then prove that credential.

Collaborating Agents OpenAPI Demo

"},{"location":"gettingStarted/AriesDeveloperDemos/#python-controller-demo","title":"Python Controller demo","text":"

Run this demo to see a couple of simple Python controller implementations for Alice and Faber. Like the previous demo, this shows the agents connecting, Faber issuing a credential to Alice and then requesting a proof based on the credential. Running the demo is simple, but there's a lot for a developer to learn from the code.

Python-based Alice/Faber Demo

"},{"location":"gettingStarted/AriesDeveloperDemos/#mobile-app-and-web-sample-bc-gov-showcase","title":"Mobile App and Web Sample - BC Gov Showcase","text":"

Try out the BC Gov Showcase to download a production Wallet for holding Verifiable Credentials, and then use your new wallet to get and present credentials in some sample scenarios. The end-to-end verifiable credential experience in 30 minutes or less.

"},{"location":"gettingStarted/AriesMessaging/","title":"An overview of Aries messaging","text":"

Aries Agents communicate with each other via a message mechanism called DIDComm (DID Communication). DIDComm enables secure, asynchronous, end-to-end encrypted messaging between agents, with messages (usually) routed through some configuration of intermediary agents. Aries agents use (an early instance of) the did:peer DID method, which uses DIDs that are not published to a public ledger, but only shared privately between the communicating parties - usually just two agents.

Given the underlying secure messaging layer (routing and encryption covered later in the \"Deeper Dive\" sections), DIDComm protocols define standard sets of messages to accomplish a task. For example:

Each protocol has a specification that defines the protocol's messages, one or more roles for the different participants, and a state machine that defines the state transitions triggered by the messages. For example, in the connection protocol, the messages are \"invitation\", \"connectionRequest\" and \"connectionResponse\", the roles are \"inviter\" and \"invitee\", and the states are \"invited\", \"requested\" and \"connected\". Each participant in an instance of a protocol tracks the state based on the messages they've seen.
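
As a simple illustration, using the message and state names from the paragraph above (plus a hypothetical start state), a protocol's state machine can be modeled as a transition table:

# Illustrative sketch: state transitions for the connection protocol.\n# Maps (current_state, received_message) -> next_state.\nTRANSITIONS = {\n    ('start', 'invitation'): 'invited',\n    ('invited', 'connectionRequest'): 'requested',\n    ('requested', 'connectionResponse'): 'connected',\n}\n\ndef advance(state: str, message_type: str) -> str:\n    # Each participant tracks state based on the messages they have seen\n    if (state, message_type) not in TRANSITIONS:\n        raise ValueError(f'{message_type} is not valid in state {state}')\n    return TRANSITIONS[(state, message_type)]\n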

Code for protocols is implemented as externalized modules from the core agent code so that they can be included (or not) in an agent deployment. The protocol code must include the definition of a state object for the protocol, handlers for the protocol messages, and the events and administrative messages that are available to the controller to inject business logic into the running of the protocol. Each administrative message becomes part of the REST API exposed by the agent instance.
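
In ACA-Py, each protocol message gets a handler with a common shape. The sketch below is a simplified illustration based on ACA-Py's BaseHandler pattern; the message class, topic name and payload are hypothetical examples, not a drop-in plugin:

from aries_cloudagent.messaging.base_handler import (\n    BaseHandler,\n    BaseResponder,\n    RequestContext,\n)\n\nclass PingHandler(BaseHandler):\n    '''Handler for a hypothetical ping message in a custom protocol.'''\n\n    async def handle(self, context: RequestContext, responder: BaseResponder):\n        # context.message is the parsed inbound message; a real handler\n        # would also update the protocol's state object here\n        self._logger.debug('Received ping: %s', context.message)\n        # Surface the event to the controller as a webhook\n        await responder.send_webhook(\n            'ping_received', {'thread_id': context.message._thread_id}\n        )\n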

Developers building Aries agents for a particular use case will generally focus on building controllers. They must understand the protocols that they are going to need, including the events the controller will receive, and the protocol's administrative messages exposed via the REST API. From time to time, such Aries agent developers might need to implement their own protocols.

Back to the Aries Developer - Getting Started Guide.

"},{"location":"gettingStarted/AriesRoutingExample/","title":"Aries Routing - an example","text":"

In this section, we'll walk through an example of complex routing in Aries, outlining some of the possibilities that can be implemented.

We'll start with the Alice and Bob example from the Cross Domain Messaging Aries RFC.

What are the DIDs involved, what's in their DIDDocs, and what communications are happening between the agents as the connections are made?

"},{"location":"gettingStarted/AriesRoutingExample/#the-scenario","title":"The Scenario","text":"

Bob and Alice want to establish a connection so that they can communicate. Bob uses an Agency endpoint (https://agents-r-us.ca), labelled as 9, and has an agent used for routing, labelled as 3. We'll also focus on Bob's messages from his main iPhone, labelled as 4. We'll ignore Bob's other agents (5 and 6) and we won't worry about Alice's configuration (agents 1, 2 and 8). While the process below is all about Bob, Alice and her agents are doing the same interactions within her domain.

"},{"location":"gettingStarted/AriesRoutingExample/#all-the-dids","title":"All the DIDs","text":"

A DID and DIDDoc are generated by each participant in each relationship. For Bob's agents (iPhone and Routing), that includes:

That's a lot more than just the Bob and Alice relationship we usually think about!

"},{"location":"gettingStarted/AriesRoutingExample/#diddoc-data","title":"DIDDoc Data","text":"

From a routing perspective the important information in the DIDDoc is the following (as defined in the DIDDoc Conventions Aries RFC):

Let's look at the did-communication service data in the DIDDocs generated by Bob's iPhone and Routing agents, listed above:

The null serviceEndpoint for Bob's iPhone is worth a comment. Mobile apps work by sending requests to servers, but cannot be accessed directly from a server. A DIDComm mechanism (Transports Return Route) enables a server to send messages to a Mobile agent by putting the messages into the response to a request from the mobile agent. While not formalized in an Aries RFC (yet), cloud agents can use mobile platforms' (Apple and Google) notification mechanisms to trigger a user interface event.

"},{"location":"gettingStarted/AriesRoutingExample/#preparing-bobs-diddoc-for-alice","title":"Preparing Bob's DIDDoc for Alice","text":"

Given that background, let's go through the sequence of events and messages that occur in building a DIDDoc for Bob's edge agent to send to Alice's edge agent. We'll start the sequence with all of the Agents in place as the bootstrapping of the Agency, Routing Agent and Bob's iPhone is trickier than we need to go through here. We'll call that an \"exercise left for the reader\".

We'll start the process with Alice sending an out of band connection invitation message to Bob, e.g. through a QR code or a link in an email. Here's one possible sequence for creating the DIDDoc. Note that there are other ways this could be done:

Note: Instead of using the DID Bob created, the Agency and Routing Agent might use the public key used to encrypt the messages for their internal routing table look-up for where to send a message. In that case, Bob and the Routing Agent share the public key, instead of the DID, with their respective upstream routers.

With the DIDDoc ready, Bob uses the path provided in the invitation to send a connection-request message to Alice with the new DID and DIDDoc. Alice now knows how to get any DIDComm message to Bob in a secure, end-to-end encrypted manner. Subsequently, when Alice sends messages to Bob's agent, she uses the information in the DIDDoc to securely send the message to the Agency endpoint, it is sent through to the Routing Agent and on to Bob's iPhone agent for processing. Now Bob has the information he needs to securely send any DIDComm message to Alice in a secure, end-to-end encrypted manner.

At this time, there are no specific DIDComm protocols for the \"set up the routing\" messages between the agents in Bob's domain (Agency, Routing and iPhone). Those could be proprietary to each agent provider (since it's possible one vendor would write the code for all of those agents), but it's likely they will eventually be specified as open standard DIDComm protocols.

Based on the DIDDoc that Bob has sent Alice, for her to send a DIDComm message to Bob, Alice must:

"},{"location":"gettingStarted/ConnectIndyNetwork/","title":"Connecting to an Indy Network","text":"

To be completed.

"},{"location":"gettingStarted/CredentialRevocation/","title":"Credential Revocation in ACA-Py","text":""},{"location":"gettingStarted/CredentialRevocation/#overview","title":"Overview","text":"

Revocation is perhaps the most difficult aspect of verifiable credentials to manage. This is true in AnonCreds, particularly in the management of AnonCreds revocation registries (RevRegs). Through experience in deploying use cases with ACA-Py, we have found that it is very difficult for the controller (the application code) to manage revocation registries, and as such, we have changed the implementation in ACA-Py so that it handles almost all of the work in revoking credentials. The only thing the controller writer has to do is track the minimum information necessary to implement the business rules around revocation, such as whose credentials should be revoked, and how close to real time revocations should be published.

Here is a summary of all of the AnonCreds revocation activities performed by issuers. After this, we'll provide a (much shorter) list of what an ACA-Py issuer controller has to do. For those interested, there is a more complete overview of AnonCreds revocation, including all of the roles, and some details of the cryptography behind the approach:

Since managing RevRegs is really hard for an ACA-Py controller, we have tried to minimize what an ACA-Py Issuer controller has to do, leaving everything else to be handled by ACA-Py. Of the items in the previous list, here is what an ACA-Py issuer controller does:

That is the minimum amount of tracking the controller must do while still being able to execute the business rules around revoking credentials.
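
For example, a controller might capture the identifiers it needs for later revocation from the issue-credential webhook events. This is an illustrative sketch; the field names follow the v1 issue-credential exchange record, so verify them against your version's webhook payloads:

# Illustrative: persist the identifiers needed later to revoke a credential\nrevocation_index = {}\n\ndef on_issue_credential_event(payload: dict):\n    # For revocable credentials, the exchange record carries the\n    # revocation registry id and the credential's index within it\n    if payload.get('state') == 'credential_issued':\n        revocation_index[payload['credential_exchange_id']] = {\n            'rev_reg_id': payload.get('revoc_reg_id'),\n            'cred_rev_id': payload.get('revocation_id'),\n        }\n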

From experience, we've added two extra features to deal with unexpected conditions:

"},{"location":"gettingStarted/CredentialRevocation/#using-aca-py-revocation","title":"Using ACA-Py Revocation","text":"

The following are the ACA-Py steps and APIs involved in handling credential revocation.

To try these out, use the ACA-Py Alice/Faber demo with tails server support enabled. You will need the URL of a running instance of https://github.com/bcgov/indy-tails-server.

Include the command line parameter --tails-server-base-url <indy-tails-server url>

  1. Publish credential definition

    The credential definition is created. All required revocation collateral is also created and managed, including the revocation registry definition, entry, and tails file.

    POST /credential-definitions\n{\n  \"schema_id\": schema_id,\n  \"support_revocation\": true,\n  # Only needed if support_revocation is true. Defaults to 100\n  \"revocation_registry_size\": size_int,\n  \"tag\": cred_def_tag # Optional\n\n}\nResponse:\n{\n  \"credential_definition_id\": \"credential_definition_id\"\n}\n
  2. Issue credential

    This endpoint manages revocation data. If new revocation registry data is required, it is automatically managed in the background.

    POST /issue-credential/send-offer\n{\n    \"cred_def_id\": credential_definition_id,\n    \"revoc_reg_id\": revocation_registry_id\n    \"auto_remove\": False, # We need the credential exchange record when revoking\n    ...\n}\nResponse\n{\n    \"credential_exchange_id\": credential_exchange_id\n}\n
  3. Revoking credential

    POST /revocation/revoke\n{\n    \"rev_reg_id\": <revocation_registry_id>\n    \"cred_rev_id\": <credential_revocation_id>,\n    \"publish\": <true|false>\n}\n

     If publish=false, you must use /issue-credential/publish-revocations to publish pending revocations in batches (a controller-side sketch of this call follows the list below). Revocations are not written to the ledger until this is called.

  4. When asking for proof, specify the time span when the credential is NOT revoked

     POST /present-proof/send-request\n {\n   \"connection_id\": ...,\n   \"proof_request\": {\n     \"requested_attributes\": [\n       {\n         \"name\": ...\n         \"restrictions\": ...,\n         ...\n         \"non_revoked\": # Optional, override the global one when specified\n         {\n           \"from\": <seconds from Unix Epoch> # Optional, default is 0\n           \"to\": <seconds from Unix Epoch>\n         }\n       },\n       ...\n     ],\n     \"requested_predicates\": [\n       {\n         \"name\": ...\n         ...\n         \"non_revoked\": # Optional, override the global one when specified\n         {\n           \"from\": <seconds from Unix Epoch> # Optional, default is 0\n           \"to\": <seconds from Unix Epoch>\n         }\n       },\n       ...\n     ],\n     \"non_revoked\": # Optional, only check revocation if specified\n     {\n       \"from\": <seconds from Unix Epoch> # Optional, default is 0\n       \"to\": <seconds from Unix Epoch>\n     }\n   }\n }\n
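
As mentioned under step 3 above, pending revocations can be published in batches. A controller-side sketch of that call follows; the rrid2crid body field (mapping a rev_reg_id to a list of cred_rev_ids, with an empty map publishing everything pending) is our reading of the admin API schema, so verify it in your agent's Swagger UI:

import requests\n\nADMIN_API = 'http://localhost:8031'  # assumed admin API address\n\ndef publish_pending_revocations(rrid2crid=None):\n    # Publish pending revocations, optionally limited to specific\n    # registries/credentials via the rrid2crid map\n    resp = requests.post(\n        f'{ADMIN_API}/issue-credential/publish-revocations',\n        json={'rrid2crid': rrid2crid or {}},\n    )\n    resp.raise_for_status()\n    return resp.json()\n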
"},{"location":"gettingStarted/CredentialRevocation/#revocation-notification","title":"Revocation Notification","text":"

ACA-Py supports Revocation Notification v1.0.

Note: The optional ~please_ack is not currently supported.

"},{"location":"gettingStarted/CredentialRevocation/#issuer-role","title":"Issuer Role","text":"

To notify connections to which credentials have been issued, include the following attributes in the request body when revoking the credential (step 3 above):

Your request might look something like:

POST /revocation/revoke\n{\n    \"rev_reg_id\": <revocation_registry_id>\n    \"cred_rev_id\": <credential_revocation_id>,\n    \"publish\": <true|false>,\n    \"notify\": true,\n    \"connection_id\": <connection id>,\n    \"thread_id\": <thread id>,\n    \"comment\": \"optional comment\"\n}\n
"},{"location":"gettingStarted/CredentialRevocation/#holder-role","title":"Holder Role","text":"

On receipt of a revocation notification, an event with topic acapy::revocation-notification::received and payload containing the thread ID and comment is emitted on the event bus. This can be handled in plugins to further customize notification handling.
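
A minimal plugin sketch subscribing to that topic (using ACA-Py's event bus; the module layout and handler body are up to your plugin) might look like:

import re\n\nfrom aries_cloudagent.config.injection_context import InjectionContext\nfrom aries_cloudagent.core.event_bus import Event, EventBus\nfrom aries_cloudagent.core.profile import Profile\n\nasync def setup(context: InjectionContext):\n    # Standard ACA-Py plugin entry point: register an event handler\n    event_bus = context.inject(EventBus)\n    event_bus.subscribe(\n        re.compile('acapy::revocation-notification::received'),\n        on_revocation_notification,\n    )\n\nasync def on_revocation_notification(profile: Profile, event: Event):\n    # The payload contains the thread ID and comment, per the section above\n    print('Revocation notification received:', event.payload)\n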

If the argument --monitor-revocation-notification is used on startup, a webhook with the topic revocation-notification and a payload containing the thread ID and comment is emitted to registered webhook urls.

"},{"location":"gettingStarted/CredentialRevocation/#manually-creating-revocation-registries","title":"Manually Creating Revocation Registries","text":"

NOTE: This capability is deprecated and will likely be removed entirely in an upcoming release of ACA-Py.

The process for creating revocation registries is completely automated - when you create a Credential Definition with revocation enabled, a revocation registry is automatically created (in fact 2 registries are created), and when a registry fills up, a new one is automatically created.

However, the ACA-Py admin API supports endpoints to explicitly create a new revocation registry, if you desire.

There are several endpoints that must be called, and they must be called in this order:

  1. Create the revocation registry: POST /revocation/create-registry

     You need to provide the credential definition id and the size of the registry.

  2. Fix the tails file URI: PATCH /revocation/registry/{rev_reg_id}

     Here you need to provide the full URI that will be written to the ledger, for example:

{\n  \"tails_public_uri\": \"http://host.docker.internal:6543/VDKEEMMSRTEqK4m7iiq5ZL:4:VDKEEMMSRTEqK4m7iiq5ZL:3:CL:8:faber.agent.degree_schema:CL_ACCUM:3cb5c439-928c-483c-a9a8-629c307e6b2d\"\n}\n
  3. Post the revocation registry definition to the ledger: POST /revocation/registry/{rev_reg_id}/definition

     If you are an author (i.e. have a DID with restricted ledger write access), then this transaction may need to go through an endorser.

  4. Write the tails file: PUT /revocation/registry/{rev_reg_id}/tails-file

     The tails server will check that the registry definition is already written to the ledger.

  5. Post the initial accumulator value to the ledger: POST /revocation/registry/{rev_reg_id}/entry

     If you are an author (i.e. have a DID with restricted ledger write access), then this transaction may need to go through an endorser.

     This operation MUST be performed on the new revocation registry definition BEFORE any revocation operations are performed.
"},{"location":"gettingStarted/CredentialRevocation/#revocation-registry-rotation","title":"Revocation Registry Rotation","text":"

From time to time an Issuer may want to issue credentials from a new Revocation Registry. That can be done by changing the Credential Definition, but that could impact verifiers. Revocation Registries go through a series of state changes: init, generated, posted, active, full, decommissioned. When issuing revocable credentials, the work is done with the active registry record. There are always 2 active registry records: one for tracking revocation until it is full, and the second to act as a \"hot swap\" in case issuance is done when the primary is full and being replaced. This ensures that there is always an active registry. When rotating, all registry records (except records in init state) are decommissioned and a new pair of active registry records are created.

Issuers can rotate their Credential Definition Revocation Registry records with a simple call: POST /revocation/active-registry/{cred_def_id}/rotate

It is advised that Issuers ensure the active registry is ready by calling GET /revocation/active-registry/{cred_def_id} after rotation and before issuance (if possible).
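
A rotate-then-verify sequence from the controller could look like this sketch (the admin API address is assumed; the endpoints are those named above):

import requests\n\nADMIN_API = 'http://localhost:8031'  # assumed admin API address\n\ndef rotate_and_verify(cred_def_id: str) -> dict:\n    # Decommission the current registries and create a fresh active pair\n    resp = requests.post(\n        f'{ADMIN_API}/revocation/active-registry/{cred_def_id}/rotate'\n    )\n    resp.raise_for_status()\n    # Confirm the new active registry is ready before issuing again\n    check = requests.get(f'{ADMIN_API}/revocation/active-registry/{cred_def_id}')\n    check.raise_for_status()\n    return check.json()\n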

"},{"location":"gettingStarted/DIDcommMsgs/","title":"Deeper Dive: DIDComm Messaging","text":"

DIDComm peer-to-peer messages are asynchronous messages that one agent sends to another - for example, Faber would send to Alice. In between, there may be other agents and message processing, but at the edges, Faber appears to be messaging directly with Alice using encryption based on the DIDs and DIDDocs that the two shared when establishing a connection. The messages are JSON-LD-friendly messages with a \"type\" that defines the namespace, protocol, protocol version and type of the message, an \"id\" that is a GUID for the message, and additional fields as required by the message type. The namespace is currently defined to be a public DID that should be globally resolvable to a protocol specification. Currently, \"core\" messages use a DID that is not yet globally resolvable - Daniel Hardman has the keys associated with the DID.
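
For illustration, a trust ping message shows the shape described above (the DID in the type is the one currently used for \"core\" messages):

# A DIDComm message: '@type' carries the namespace (a DID), protocol,\n# version and message type; '@id' is a GUID for the message\ntrust_ping = {\n    '@type': 'did:sov:BzCbsNYhMrjHiqZDTUASHg;spec/trust_ping/1.0/ping',\n    '@id': '518be002-de8e-456e-b3d5-8fe472477a86',\n    'response_requested': True,\n}\n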

Link: Message Types

As protocols are executed, the data associated with the protocol is stored in the (currently named) wallet of the agent. The data primarily consists of the state object for that instance of the protocol, and any artifacts of running the protocol. For example, when establishing a connection, the metadata associated with the connection (DIDs, DID Documents and private keys) is stored in the agent's wallet. Likewise, ledger data (DIDs, schemas, credential definitions, etc.) and credentials are cached in the wallet. This is taken care of by the Aries agent and the protocols configured into the agent.

"},{"location":"gettingStarted/DIDcommMsgs/#message-decorators","title":"Message Decorators","text":"

In addition to protocol specific data elements in messages, messages can include \"decorators\", standardized message elements that define cross-cutting behavior. The most common example is the \"thread\" decorator, which is used to link the messages in a protocol instance. As messages go back and forth between agents to complete an instance of a protocol (e.g. issuing a credential), the thread decorator data elements let the agents know to which protocol instance the message belongs. Other currently defined examples of decorators include attachments, localization, tracing and timing. Decorators are often processed by the core of the agent, but some are processed by the protocol message handlers. For example, the thread decorator is processed to retrieve the protocol state object for that instance (thread) of the protocol before control is passed to the protocol message handler.
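
For example, a message in an issue-credential exchange might carry a ~thread decorator pointing back at the first message of the protocol instance (the GUIDs here are hypothetical):

# The ~thread decorator ties this message to a protocol instance (thread):\n# 'thid' is the '@id' of the first message in the thread\nrequest_credential = {\n    '@type': 'did:sov:BzCbsNYhMrjHiqZDTUASHg;spec/issue-credential/1.0/request-credential',\n    '@id': 'b2c79a5e-92f1-4a23-9b0f-0d3a6f1f4a11',  # hypothetical GUID\n    '~thread': {'thid': '518be002-de8e-456e-b3d5-8fe472477a86'},\n}\n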

"},{"location":"gettingStarted/DecentralizedIdentityDemos/","title":"Decentralized Identity Use Case Demos","text":"

The following are some demos that you can go through to see verifiable credentials in action. For each of the demos, we've included some guidance on what you should get out of the demo - and where you should stop exploring the demos. Later on in this guide we have some command line demos built on current generation code for developers wanting to look at what's going on under the hood.

"},{"location":"gettingStarted/DecentralizedIdentityDemos/#bc-gov-showcase","title":"BC Gov Showcase","text":"

Try out the BC Gov Showcase to download a production Wallet for holding Verifiable Credentials, and then use your new wallet to get and present credentials in some sample scenarios. The end-to-end verifiable credential experience in 30 minutes or less.

"},{"location":"gettingStarted/DecentralizedIdentityDemos/#traction-anoncreds-workshop","title":"Traction AnonCreds Workshop","text":"

Now that you have a wallet, how about being an issuer, and experience what is needed on that side of an exchange? To do that, try the Traction AnonCreds Workshop. Get your own (temporary -- it will be gone in a few weeks!) Aries Cloud Agent Python-based issuer/verifier agent. Connect to the wallet on your mobile phone, issue a credential and then present it back. Lots to learn, without ever leaving your browser!

"},{"location":"gettingStarted/DecentralizedIdentityDemos/#more-demos-please","title":"More demos, please","text":"

Interested in seeing your demos/use cases added to this list? Submit an issue or a PR and we'll see about including it in this list.

"},{"location":"gettingStarted/IndyAriesDevOptions/","title":"What should I work on? Options for Aries/Indy Developers","text":"

Now that you know the basics of the Indy/Aries eco-system, what do you want to work on? There are many projects at different levels of the eco-system you could choose to work on, and many ways to contribute to the community.

This is an important summary for newcomers, as often the temptation is to start at a level far below where you plan to focus your attention. Too often devs coming into the community start at \"the blockchain\"; at indy-node (the Indy public ledger) or the indy-sdk. That is far below where the majority of developers will work and is not really that helpful if what you really want to do is build decentralized identity applications.

In the following, we go through the layers from the top of the stack to the bottom. Our expectation is that the majority of developers will work at the application level, and there will be fewer contributing developers each layer down you go. This is not to dissuade anyone from contributing at the lower levels, but rather to say that if you are not going to contribute at the lower levels, you don't need to know everything about them. It's much like web development - you don't need to know TCP/IP to build web apps.

"},{"location":"gettingStarted/IndyAriesDevOptions/#building-decentralized-identity-applications","title":"Building Decentralized Identity Applications","text":"

If you just want to build enterprise applications on top of the decentralized identity-related Hyperledger projects, you can start with building cloud-based controller apps using any language you want, and deploying your code with an instance of the code in this repository (aries-cloudagent-python).

If you want to build a mobile agent, there are open source options available, including Aries-MobileAgent-Xamarin (aka \"Aries MAX\"), which is built on Aries Framework .NET, and Aries Mobile Agent React Native, which is built on Aries Framework JavaScript.

As a developer building applications that use/embed Aries agents, you should join the Aries Working Group's weekly calls and watch the aries-rfcs repo to see what protocols are being added and extended. In some cases, you may need to create your own protocols to be added to this repository, and if you are looking for interoperability, you should specify those protocols in an open way, involving the community.

Note that if building apps is what you want to do, you don't need to do a deep dive into the Aries SDK, the Indy SDK or the Indy Node public ledger. You need to know the concepts, but it's not a requirement that you know the code bases intimately.

"},{"location":"gettingStarted/IndyAriesDevOptions/#contributing-to-aries-cloudagent-python","title":"Contributing to aries-cloudagent-python","text":"

Of course as you build applications using aries-cloudagent-python, you will no doubt find deficiencies in the code and features you want added. Contributions to this repo will always be welcome.

"},{"location":"gettingStarted/IndyAriesDevOptions/#supporting-additional-ledgers","title":"Supporting Additional Ledgers","text":"

aries-cloudagent-python currently supports only Hyperledger Indy-based public ledgers and verifiable credentials exchange. A goal of Hyperledger Aries is to be ledger-agnostic, and to support other ledgers. We're experimenting with adding support for other ledgers, and would welcome assistance in doing that.

"},{"location":"gettingStarted/IndyAriesDevOptions/#other-agent-frameworks","title":"Other Agent Frameworks","text":"

Although controllers for an aries-cloudagent-python instance can be written in any language, there is definitely a place for functionality equivalent (and better) to what is in this repo in other languages. Use the example provided by aries-cloudagent-python, evolve it using a different language, and as you discover better ways to do things, discuss and share those improvements in the broader Aries community so that this and other codebases improve.

"},{"location":"gettingStarted/IndyAriesDevOptions/#improving-aries-sdk","title":"Improving Aries SDK","text":"

This code base and other Aries agent implementations currently embed the indy-sdk. However, much of the code in the indy-sdk is being migrated into a variety of Aries language specific repositories. How this migration is to be done is still being decided, but it makes sense that the agent-type things be moved to Aries repositories. A number of language specific Aries SDK repos have been created and are being populated.

"},{"location":"gettingStarted/IndyAriesDevOptions/#improving-the-indy-sdk","title":"Improving the Indy SDK","text":"

Dropping down a level from Aries and into Indy, the indy-sdk needs to continue to evolve. The code base is robust, of high quality and well thought out, but it needs to continue to add new capabilities and improve existing features. The indy-sdk is implemented in Rust, to produce a C-callable library that can be used by client libraries built in a variety of languages.

"},{"location":"gettingStarted/IndyAriesDevOptions/#improving-indy-node","title":"Improving Indy Node","text":"

If you are interested in getting into the public ledger part of Indy, particularly if you are going to be a Sovrin Steward, you should take a deep look into indy-node. Like the indy-sdk, indy-node is robust, of high quality and is well thought out. As the network grows, use cases change and new cryptographic primitives move into the mainstream, indy-node capabilities will need to evolve. indy-node is coded in Python.

"},{"location":"gettingStarted/IndyAriesDevOptions/#working-in-cryptography","title":"Working in Cryptography","text":"

Finally, at the deepest level, and core to all of the projects is the cryptography in Hyperledger Ursa. If you are a cryptographer, that's where you want to be - and we want you there.

"},{"location":"gettingStarted/IndyBasics/","title":"Indy, Verifiable Credentials and Decentralized Identity Basics","text":"

NOTE: If you are a developer building apps on top of Aries and Indy, you DO NOT need to know the nuts and bolts of Indy to build applications. You need to know about verifiable credentials and the concepts of self-sovereign identity. But as an app developer, you don't need to do the Indy getting started pieces. Aries takes care of those details for you. The introduction linked here should be sufficient.

If you are new to Indy and verifiable credentials and want to learn the core concepts, this link provides a solid foundation in the goals and purpose of Indy, including verifiable credentials, DIDs, decentralized/self-sovereign identity, the Sovrin Foundation and more. The document is the content of the Indy chapter of the Hyperledger edX Blockchain for Business course (which you could also go through).

Feel free to do the demo that is referenced in the material, but we recommend that you not dig into that codebase. It's pretty old now - almost a year! We've got much more relevant examples later in this guide.

As well, don't use the guidance in the course to dive into the content about \"Getting Started\" with Indy. Come back here as this content is far more relevant to the current state of Indy and Aries.

"},{"location":"gettingStarted/IndyBasics/#tldr","title":"tl;dr","text":"

Indy provides an implementation of the basic functions required to implement a network for self-sovereign identity (SSI) - a ledger, client SDKs for interacting with the ledger, DIDs, and capabilities for issuing, holding and proving verifiable credentials.

Back to the Aries Developer - Getting Started Guide.

"},{"location":"gettingStarted/IssuingAnonCredsCredentials/","title":"Issuing AnonCreds Credentials","text":"

Become an issuer, and define, publish and issue verifiable credentials to a mobile wallet. Run the Traction AnonCreds Workshop. Get your own (temporary -- it will be gone in a few weeks!) Aries Cloud Agent Python-based issuer/verifier agent. Connect to the wallet on your mobile phone, issue a credential and then present it back. Lots to learn, without ever leaving your browser!

"},{"location":"gettingStarted/PresentingAnonCredsProofs/","title":"Presenting AnonCreds Proofs","text":"

Become a verifier, and construct a presentation request, send the request to a mobile wallet, get a presentation derived from AnonCreds verifiable credentials and verify the presentation. Run the Traction AnonCreds Workshop. Get your own (temporary -- it will be gone in a few weeks!) Aries Cloud Agent Python-based issuer/verifier agent. Connect to the wallet on your mobile phone, issue a credential and then present it back. Lots to learn, without ever leaving your browser!

"},{"location":"gettingStarted/RoutingEncryption/","title":"Deeper Dive: DIDComm Message Routing and Encryption","text":"

Many Aries edge agents do not directly receive messages from a peer edge agent - they have agents in between that route messages to them. This is done for many reasons, such as:

Thus, when a DIDComm message is sent from one edge agent to another, it is routed per the instructions of the receiver and for the needs of the sender. For example, in the following picture, Alice might be told by Bob to send messages to his phone (agent 4) via agents 9 and 3, and Alice might always send out messages via agent 2.

The following looks at how those requirements are met with mediators (for example, agents 9 and 3) and relays (agent 2).

"},{"location":"gettingStarted/RoutingEncryption/#inbound-routing-mediators","title":"Inbound Routing - Mediators","text":"

To tell a sender how to get a message to it, an agent puts into the DIDDoc for that sender a service endpoint for the recipient (with an encryption key) and an ordered list (possibly empty) of routing keys (called \"mediators\") to use when sending the message. To send the message, the sender must:

Note that when an agent uses mediators, it is its responsibility to notify any mediators that need to know of the new relationship that has been formed using the connection protocol, and of the routing needs of that relationship - where to send messages that arrive destined for a given verkey. Mediator agents maintain what amounts to a routing table, so that when they receive a forward message for a given verkey, they know where it should go.

Link: DIDDoc conventions for inbound routing

"},{"location":"gettingStarted/RoutingEncryption/#relays","title":"Relays","text":"

Inbound routing described above covers mediators for the receiver that the sender must know about. In addition, either the sender or the receiver may also have relays they use for outbound messages. Relays are routing agents not known to other parties, but that participate in message routing. For example, an enterprise agent might send all outbound traffic to a single gateway in the organization. When sending to a relay, the sender just wraps the message in another \"forward\" message envelope.
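
A representative forward message envelope (per the Cross Domain Messaging RFC) looks like this; the placeholder values are illustrative:

# The relay only sees 'to' (where to deliver) and 'msg' (the packed,\n# encrypted inner message) - it cannot read the payload itself\nforward = {\n    '@type': 'did:sov:BzCbsNYhMrjHiqZDTUASHg;spec/routing/1.0/forward',\n    'to': '<recipient verkey>',\n    'msg': '<packed (encrypted) inner message>',\n}\n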

Link: Mediators and Relays

"},{"location":"gettingStarted/RoutingEncryption/#message-encryption","title":"Message Encryption","text":"

The DIDComm encryption handling is handled within the Aries agent, and is not really something a developer building applications using an agent needs to worry about. Further, within an Aries agent, the handling of the encryption is left to libraries - ultimately calling dependencies from Hyperledger Ursa. To encrypt a message, the agent code calls a pack() function to handle the encryption, and to decrypt a message, the agent code calls a corresponding unpack() function. The \"wire messages\" (as originally called) are described in detail here, including variations for sender-authenticated and anonymous encryption. Wire messages were meant to indicate the handling of a message from one agent directly to another, versus the higher level concept of routing a message from an edge agent to a peer edge agent.
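
Conceptually, the round trip looks like the sketch below. The pack() function here is a stand-in stub (the real implementation comes from the agent's crypto libraries), shown only to illustrate the sender-authenticated vs. anonymous variations:

from typing import List, Optional\n\ndef pack(message: str, to_verkeys: List[str], from_verkey: Optional[str] = None) -> bytes:\n    '''Stub standing in for the agent's pack() dependency (no real encryption).'''\n    return message.encode()  # the real pack() encrypts for to_verkeys\n\n# Authenticated ('authcrypt') packing: the recipient learns the sender's\n# verkey. Omitting from_verkey gives anonymous ('anoncrypt') packing.\nwire_message = pack(\n    message='{...DIDComm message JSON...}',\n    to_verkeys=['<recipient verkey>'],\n    from_verkey='<sender verkey>',\n)\n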

Much thought has also gone into repudiable and non-repudiable messaging, as described here.

"},{"location":"gettingStarted/YourOwnAriesAgent/","title":"Creating Your Own Aries Agent","text":"

Use the \"next steps\" in the Traction AnonCreds Workshop and create your own controller. The Aries ACA-Py Controllers repository has some samples to get you started.

"},{"location":"testing/AgentTracing/","title":"Using Tracing in ACA-PY","text":"

The aca-py agent supports message tracing, according to the Tracing RFC.

Tracing can be enabled globally, for all messages/events, or it can be enabled on an exchange-by-exchange basis.

Tracing is configured globally for the agent.

"},{"location":"testing/AgentTracing/#aca-py-configuration","title":"ACA-PY Configuration","text":"

The following options can be specified when starting the aca-py agent:

  --trace               Generate tracing events.\n  --trace-target <trace-target>\n                        Target for trace events (\"log\", \"message\", or http\n                        endpoint).\n  --trace-tag <trace-tag>\n                        Tag to be included when logging events.\n  --trace-label <trace-label>\n                        Label (agent name) used logging events.\n

The --trace option enables tracing globally for the agent; the other options configure the trace destination and content (the default is log).

Tracing can be enabled on an exchange-by-exchange basis, by including { ... \"trace\": True, ...} in the JSON payload to the API call (for credential and proof exchanges).
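
For example, to trace a single credential exchange (a sketch: the admin API address and offer fields shown are assumptions):

import requests\n\nADMIN_API = 'http://localhost:8031'  # assumed admin API address\n\n# Adding 'trace': True to the exchange payload enables tracing for\n# just this exchange, even if global tracing is off\noffer = {\n    'connection_id': '<connection id>',\n    'cred_def_id': '<credential definition id>',\n    'trace': True,\n    # ... remaining credential offer fields ...\n}\nresp = requests.post(f'{ADMIN_API}/issue-credential/send-offer', json=offer)\nresp.raise_for_status()\n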

"},{"location":"testing/AgentTracing/#enabling-tracing-in-the-alicefaber-demo","title":"Enabling Tracing in the Alice/Faber Demo","text":"

The run_demo script supports the following parameters and environment variables.

Environment variables:

TRACE_ENABLED          Flag to enable tracing\n\nTRACE_TARGET_URL       Host:port of endpoint to log trace events (e.g. logstash:9700)\n\nDOCKER_NET             Docker network to join (must be used if ELK stack is running in docker)\n\nTRACE_TAG              Tag to be included in all logged trace events\n

Parameters:

--trace-log            Enables tracing to the standard log output\n                       (sets TRACE_ENABLED, TRACE_TARGET, TRACE_TAG)\n\n--trace-http           Enables tracing to an HTTP endpoint (specified by TRACE_TARGET_URL)\n                       (sets TRACE_ENABLED, TRACE_TARGET, TRACE_TAG)\n

When running the Faber controller, tracing can be enabled using the T menu option:

Faber      | Connected\n    (1) Issue Credential\n    (2) Send Proof Request\n    (3) Send Message\n    (T) Toggle tracing on credential/proof exchange\n    (X) Exit?\n[1/2/3/T/X] t\n\n>>> Credential/Proof Exchange Tracing is ON\n    (1) Issue Credential\n    (2) Send Proof Request\n    (3) Send Message\n    (T) Toggle tracing on credential/proof exchange\n    (X) Exit?\n\n[1/2/3/T/X] t\n\n>>> Credential/Proof Exchange Tracing is OFF\n    (1) Issue Credential\n    (2) Send Proof Request\n    (3) Send Message\n    (T) Toggle tracing on credential/proof exchange\n    (X) Exit?\n\n[1/2/3/T/X]\n

When Exchange Tracing is ON, all exchanges will include tracing.

"},{"location":"testing/AgentTracing/#logging-trace-events-to-an-elk-stack","title":"Logging Trace Events to an ELK Stack","text":"

You can use the ELK stack in the ELK Stack sub-directory as a target for trace events; just start the ELK stack using the docker-compose file, and then, in two separate bash shells, start up the demo as follows:

DOCKER_NET=elknet TRACE_TARGET_URL=logstash:9700 ./run_demo faber --trace-http\n
DOCKER_NET=elknet TRACE_TARGET_URL=logstash:9700 ./run_demo alice --trace-http\n
"},{"location":"testing/AgentTracing/#hooking-into-event-messaging","title":"Hooking into event messaging","text":"

ACA-Py supports sending events to webhooks, which allows the demo agents to display them in the CLI. To also send them to another endpoint, use the --webhook-url option, which requires the WEBHOOK_URL environment variable. To configure an endpoint running on the docker host system on port 8888, use the following:

WEBHOOK_URL=host.docker.internal:8888 ./run_demo faber --webhook-url\n
"},{"location":"testing/INTEGRATION-TESTS/","title":"Integration Tests for Aca-py using Behave","text":"

Integration tests for aca-py are implemented using Behave functional tests to drive aca-py agents based on the alice/faber demo framework.

If you are new to the ACA-Py integration test suite, this video from ACA-Py Maintainer @ianco describes the Integration Tests in ACA-Py, how to run them and how to add more tests. See also the video at the end of this document about running Aries Agent Test Harness tests before you submit your pull requests.

"},{"location":"testing/INTEGRATION-TESTS/#getting-started","title":"Getting Started","text":"

To run the aca-py Behave tests, open a bash shell and run the following:

git clone https://github.com/bcgov/von-network\ncd von-network\n./manage build\n./manage start\ncd ..\ngit clone https://github.com/bcgov/indy-tails-server.git\ncd indy-tails-server/docker\n./manage build\n./manage start\ncd ../..\ngit clone https://github.com/hyperledger/aries-cloudagent-python\ncd aries-cloudagent-python/demo\n./run_bdd -t ~@taa_required\n

Note that an Indy ledger and tails server are both required (these can also be specified using environment variables).

Note also that some tests require a ledger with TAA enabled; how to run these tests is described below.

By default the test suite runs using a default (SQLite) wallet; to run the tests using postgres, run the following:

# run the above commands, up to cd aries-cloudagent-python/demo\ndocker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d -p 5432:5432 postgres:10\nACAPY_ARG_FILE=postgres-indy-args.yml ./run_bdd\n

To run the tests against the back-end askar libraries (as opposed to indy-sdk), run the following:

BDD_EXTRA_AGENT_ARGS=\"{\\\"wallet-type\\\":\\\"askar\\\"}\" ./run_bdd -t ~@taa_required\n

(Note that wallet-type is currently the only extra argument supported.)

You can run individual tests by specifying the tag(s):

./run_bdd -t @T001-AIP10-RFC0037\n
"},{"location":"testing/INTEGRATION-TESTS/#running-integration-tests-which-require-taa","title":"Running Integration Tests which require TAA","text":"

To run a local von-network with TAA enabled, run the following:

git clone https://github.com/bcgov/von-network\ncd von-network\n./manage build\n./manage start --taa-sample --logs\n

You can then run the TAA-enabled tests as follows:

./run_bdd -t @taa_required\n

or:

BDD_EXTRA_AGENT_ARGS=\"{\\\"wallet-type\\\":\\\"askar\\\"}\" ./run_bdd -t @taa_required\n

The agents run on a pre-defined set of ports; however, occasionally your local system may already be using one of these ports. (For example, macOS recently decided to use 8021 for the ftp proxy service.)

To override the default port settings:

AGENT_PORT_OVERRIDE=8030 ./run_bdd -t <some tags>\n

(Note that since the tests run multiple agents, you require up to 60 available ports.)

"},{"location":"testing/INTEGRATION-TESTS/#aca-py-integration-tests-vs-aries-agent-test-harness-aath","title":"Aca-py Integration Tests vs Aries Agent Test Harness (AATH)","text":"

Aca-py Behave tests are based on the interoperability tests that are implemented in the Aries Agent Test Harness (AATH). Both use Behave (Gherkin) to execute tests against a running aca-py agent (or, in the case of AATH, against any compatible Aries agent); however, the aca-py integration tests focus on aca-py specific features.

AATH:

Aca-py integration tests:

"},{"location":"testing/INTEGRATION-TESTS/#configuration-driven-tests","title":"Configuration-driven Tests","text":"

Aca-py integration tests use the same configuration approach as AATH, documented here.

In addition to support for external schemas, credential data, etc., the aca-py integration tests support configuration of the aca-py agents that are used to run the tests. For example:

Scenario Outline: Present Proof where the prover does not propose a presentation of the proof and is acknowledged\n  Given \"3\" agents\n     | name  | role     | capabilities        |\n     | Acme  | issuer   | <Acme_capabilities> |\n     | Faber | verifier | <Acme_capabilities> |\n     | Bob   | prover   | <Bob_capabilities>  |\n  And \"<issuer>\" and \"Bob\" have an existing connection\n  And \"Bob\" has an issued <Schema_name> credential <Credential_data> from <issuer>\n  ...\n\n  Examples:\n     | issuer | Acme_capabilities        | Bob_capabilities | Schema_name    | Credential_data          | Proof_request  |\n     | Acme   | --public-did             |                  | driverslicense | Data_DL_NormalizedValues | DL_age_over_19 |\n     | Faber  | --public-did  --mediator | --mediator       | driverslicense | Data_DL_NormalizedValues | DL_age_over_19 |\n

In the above example, the test will run twice using the parameters specified in the \"Examples\" section. The Acme, Faber and Bob agents will be started for the test and then shut down when the test is completed.

The agent's \"capabilities\" are specified using the same command-line parameters that are supported for the Alice/Faber demo agents.

"},{"location":"testing/INTEGRATION-TESTS/#global-configuration-for-all-aca-py-agents-under-test","title":"Global Configuration for All Aca-py Agents Under Test","text":"

You can specify parameters that are applied to all aca-py agents using the ACAPY_ARG_FILE environment variable, for example:

ACAPY_ARG_FILE=postgres-indy-args.yml ./run_bdd\n

... will apply the parameters in the postgres-indy-args.yml file (which just happens to configure a postgres wallet) to all agents under test.

Or the following:

ACAPY_ARG_FILE=askar-indy-args.yml ./run_bdd\n

... will run all the tests against an askar wallet (the new shared components, which replace indy-sdk).

Any aca-py argument can be included in the yml file, and order-of-precedence applies (see https://pypi.org/project/ConfigArgParse/).

"},{"location":"testing/INTEGRATION-TESTS/#specifying-environment-parameters-when-running-integration-tests","title":"Specifying Environment Parameters when Running Integration Tests","text":"

Aca-py integration tests support the following environment-driven configuration:

"},{"location":"testing/INTEGRATION-TESTS/#running-specific-test-scenarios","title":"Running specific test scenarios","text":"

Behave tests are tagged using the same standard tags as used in AATH.

To run a specific set of Aca-py integration tests (or exclude specific tests):

./run_bdd -t tag1 -t ~tag2\n

(All command line parameters are passed to the behave command, so all parameters supported by behave can be used.)

"},{"location":"testing/INTEGRATION-TESTS/#aries-agent-test-harness-aca-py-tests","title":"Aries Agent Test Harness ACA-Py Tests","text":"

This video is a presentation by Aries Cloud Agent Python (ACA-Py) developer @ianco about using the Aries Agent Test Harness for local pre-release testing of ACA-Py. Have a big change that you want to test with other Aries Frameworks? Follow this guidance to run AATH tests with your under-development branch of ACA-Py.

"},{"location":"testing/Logging/","title":"Logging docs","text":"

ACA-Py supports multiple logging configurations.

"},{"location":"testing/Logging/#log-level","title":"Log level","text":"

ACA-Py's logging is based on python's logging lib. Log levels DEBUG, INFO and WARNING are available. Other log levels fall back to WARNING.

"},{"location":"testing/Logging/#per-tenant-logging","title":"Per Tenant Logging","text":"

ACA-Py supports writing log messages to a file, with the wallet_id as the tenant identifier for each message. To enable this, both multitenant mode (--multitenant) and the log file option (--log-file) are required. If --multitenant and --log-file are not both passed when starting up ACA-Py, then it will use the default_logging_config.ini config (backward compatible) and will not log at a per-tenant level.
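
Conceptually, each record is stamped with the tenant's wallet_id so that a formatter can reference %(wallet_id)s (as in the config examples below). A minimal, illustrative sketch of one way to inject such a field with a logging.Filter -- not ACA-Py's actual implementation:

import logging\n\nclass WalletIdFilter(logging.Filter):\n    \"\"\"Illustrative: attach a wallet_id attribute for use by %(wallet_id)s formatters.\"\"\"\n\n    def __init__(self, wallet_id: str):\n        super().__init__()\n        self.wallet_id = wallet_id\n\n    def filter(self, record: logging.LogRecord) -> bool:\n        record.wallet_id = self.wallet_id\n        return True\n\nlogger = logging.getLogger(\"acapy.tenant\")\nhandler = logging.StreamHandler()\nhandler.setFormatter(logging.Formatter(\"%(asctime)s %(wallet_id)s %(levelname)s %(message)s\"))\nlogger.addHandler(handler)\nlogger.addFilter(WalletIdFilter(\"wallet-1234\"))\nlogger.warning(\"per-tenant log line\")\n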

"},{"location":"testing/Logging/#command-line-arguments","title":"Command Line Arguments","text":"

Example:

./bin/aca-py start --log-level debug --log-file acapy.log --log-config aries_cloudagent.config:default_per_tenant_logging_config.ini\n\n./bin/aca-py start --log-level debug --log-file --multitenant --log-config ./aries_cloudagent/config/default_per_tenant_logging_config.yml\n
"},{"location":"testing/Logging/#environment-variables","title":"Environment Variables","text":"

The log level can be configured using the environment variable ACAPY_LOG_LEVEL. The log file can be set by ACAPY_LOG_FILE. The log config can be set by ACAPY_LOG_CONFIG.

Example:

ACAPY_LOG_LEVEL=info ACAPY_LOG_FILE=./acapy.log ACAPY_LOG_CONFIG=./acapy_log.ini ./bin/aca-py start\n
"},{"location":"testing/Logging/#acapy-config-file","title":"Acapy Config File","text":"

The following parameters can be used in a configuration file, like this:

log-level: WARNING\ndebug-connections: false\ndebug-presentations: false\n

Warning: debug-connections and debug-presentations must not be used in a production environment, as they also log credential claim values. Both parameters are independent of the log level, meaning that even if log-level is set to WARNING, connections and presentations will be logged as if at the DEBUG log level.

"},{"location":"testing/Logging/#log-config-file","title":"Log config file","text":"

The path to the config file is provided via --log-config.

Find an example in default_logging_config.ini.

You can find a more detailed description in the logging documentation.

For per-tenant logging, find an example in default_per_tenant_logging_config.ini, which sets up TimedRotatingFileMultiProcessHandler and StreamHandler handlers. The custom TimedRotatingFileMultiProcessHandler handler supports cleaning up logs by time, maintaining backup logs, and a custom JSON formatter for logs. Its arguments, such as the file name, when, interval and backupCount, can be passed as args=('acapy.log', 'd', 7, 1,) (also shown below). Note: a backupCount of 0 means all backup log files will be retained and never deleted. More details about these attributes can be found here

[loggers]\nkeys=root\n\n[handlers]\nkeys=stream_handler, timed_file_handler\n\n[formatters]\nkeys=formatter\n\n[logger_root]\nlevel=ERROR\nhandlers=stream_handler, timed_file_handler\n\n[handler_stream_handler]\nclass=StreamHandler\nlevel=DEBUG\nformatter=formatter\nargs=(sys.stderr,)\n\n[handler_timed_file_handler]\nclass=logging.handlers.TimedRotatingFileMultiProcessHandler\nlevel=DEBUG\nformatter=formatter\nargs=('acapy.log', 'd', 7, 1,)\n\n[formatter_formatter]\nformat=%(asctime)s %(wallet_id)s %(levelname)s %(pathname)s:%(lineno)d %(message)s\n
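
An ini file like the one above is applied with Python's logging.config.fileConfig; a minimal sketch (the path is illustrative, and the custom TimedRotatingFileMultiProcessHandler class must be importable for this particular config to load):

import logging\nimport logging.config\n\n# Assumes the custom handler class referenced in the ini file is on the import path\nlogging.config.fileConfig(\"default_per_tenant_logging_config.ini\", disable_existing_loggers=False)\nlogging.getLogger(__name__).error(\"configured from ini\")\n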

For DictConfig (dict logging config file), find an example in default_per_tenant_logging_config.yml with the same attributes as the default_per_tenant_logging_config.ini file.

version: 1\nformatters:\n  default:\n    format: '%(asctime)s %(wallet_id)s %(levelname)s %(pathname)s:%(lineno)d %(message)s'\nhandlers:\n  console:\n    class: logging.StreamHandler\n    level: DEBUG\n    formatter: default\n    stream: ext://sys.stderr\n  rotating_file:\n    class: logging.handlers.TimedRotatingFileMultiProcessHandler\n    level: DEBUG\n    filename: 'acapy.log'\n    when: 'd'\n    interval: 7\n    backupCount: 1\n    formatter: default\nroot:\n  level: INFO\n  handlers:\n    - console\n    - rotating_file\n
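
A DictConfig file like the one above is typically applied by parsing the YAML into a dict and passing it to logging.config.dictConfig; a minimal sketch, assuming PyYAML is installed and the custom handler class is importable:

import logging.config\n\nimport yaml  # assumption: PyYAML is installed\n\nwith open(\"default_per_tenant_logging_config.yml\") as f:\n    logging.config.dictConfig(yaml.safe_load(f))\n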
"},{"location":"testing/Troubleshooting/","title":"Troubleshooting Aries Cloud Agent Python","text":"

This document contains some troubleshooting information that contributors to the community think may be helpful. Most of the content here assumes the reader has gotten started with ACA-Py and has arrived here because of an issue that came up in their use of ACA-Py.

Contributions (via pull request) to this document are welcome. Topics added here will mostly come from reported issues that contributors think would be helpful to the larger community.

"},{"location":"testing/Troubleshooting/#table-of-contents","title":"Table of Contents","text":""},{"location":"testing/Troubleshooting/#unable-to-connect-to-ledger","title":"Unable to Connect to Ledger","text":"

The most common issue hit by first-time users is getting an error on startup \"unable to connect to ledger\". Here is a list of things to check when you see that error.

"},{"location":"testing/Troubleshooting/#local-ledger-running","title":"Local ledger running?","text":"

Unless you specify via startup parameters or environment variables that you are using a public Hyperledger Indy ledger, ACA-Py assumes that you are running a local ledger -- an instance of von-network. If that is the case, have you started your local ledger, and did it start up properly? Things to check:

"},{"location":"testing/Troubleshooting/#any-firewalls","title":"Any Firewalls","text":"

Do you have any firewalls in play that might be blocking the ports used by the ledger, notably 9701-9708? To access a ledger, the ACA-Py instance must be able to reach those ports on the ledger, regardless of whether the ledger is local or remote.
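
For a quick, illustrative reachability check of those ports from the machine running ACA-Py (the host and port range here assume a default local von-network):

import socket\n\n# Assumption: a local von-network exposes its nodes on ports 9701-9708\nfor port in range(9701, 9709):\n    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:\n        sock.settimeout(2)\n        status = \"open\" if sock.connect_ex((\"localhost\", port)) == 0 else \"blocked or closed\"\n        print(f\"port {port}: {status}\")\n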

"},{"location":"testing/Troubleshooting/#damaged-unpublishable-revocation-registry","title":"Damaged, Unpublishable Revocation Registry","text":"

We have discovered that in the ACA-Py AnonCreds implementation, it is possible to get into a state where the publishing of updates to a Revocation Registry (RevReg) is impossible. This can happen when ACA-Py starts to publish an update to the RevReg, but the write transaction to the Hyperledger Indy ledger fails for some reason. When a credential revocation is published, ACA-Py (via indy-sdk or askar/credx) updates the revocation state in the wallet as well as on the ledger. The revocation state is dependent on the previous revocation state, so if the ledger and wallet are mismatched, the publish will fail. (Andrew's PR #1804 (merged) should mitigate this, but probably won't completely eliminate it.)

For example, in a case we've seen, the write RevRegEntry transaction failed at the ledger because there was a problem with accepting the TAA (Transaction Author Agreement). Once the error occurred, the RevReg state held by the ACA-Py agent and the RevReg state on the ledger were different. Even after the ability to write to the ledger was restored, the RevReg could still not be published because of the differences in the RevReg state. Such a situation can now be corrected, as follows:

To address this issue, some new endpoints were added to ACA-Py in Release 0.7.4, as follows:

Note that there is (currently) a backlog item to prevent the wallet and ledger from getting out of sync (e.g. don't update the ACA-Py RevReg state if the ledger write fails), but even after that change is made, this ability will be retained for use if needed.

We originally ran into this due to the TAA acceptance getting lost when switching to multi-ledger (as described here). Note that this is one way this \"out of sync\" scenario can occur, but there may be others.

We added an integration test that demonstrates/tests this issue here.

To run the scenario either manually or using the integration tests, you can do the following:

"},{"location":"testing/UnitTests/","title":"ACA-Py Unit Tests","text":"

The following covers the Unit Testing framework in ACA-Py, how to run the tests, and how to add unit tests.

This video is a presentation of the material covered in this document by developer @shaangill025.

"},{"location":"testing/UnitTests/#running-unit-tests-in-aca-py","title":"Running unit tests in ACA-Py","text":""},{"location":"testing/UnitTests/#pytest","title":"Pytest","text":"

Example: aries_cloudagent/core/tests/test_event_bus.py

import re\n\nimport pytest\nfrom asynctest import mock as async_mock  # imports the snippet needs; paths follow ACA-Py test conventions\n\nfrom aries_cloudagent.core.event_bus import Event, EventBus\n\n\n@pytest.fixture\ndef event_bus():\n    yield EventBus()\n\n\n@pytest.fixture\ndef profile():\n    yield async_mock.MagicMock()\n\n\n@pytest.fixture\ndef event():\n    event = Event(topic=\"anything\", payload=\"payload\")\n    yield event\n\n\nclass MockProcessor:\n    def __init__(self):\n        self.profile = None\n        self.event = None\n\n    async def __call__(self, profile, event):\n        self.profile = profile\n        self.event = event\n\n\n@pytest.fixture\ndef processor():\n    yield MockProcessor()\n
def test_sub_unsub(event_bus: EventBus, processor):\n    \"\"\"Test subscribe and unsubscribe.\"\"\"\n    event_bus.subscribe(re.compile(\".*\"), processor)\n    assert event_bus.topic_patterns_to_subscribers\n    assert event_bus.topic_patterns_to_subscribers[re.compile(\".*\")] == [processor]\n    event_bus.unsubscribe(re.compile(\".*\"), processor)\n    assert not event_bus.topic_patterns_to_subscribers\n

From aries_cloudagent/core/event_bus.py

from typing import Callable, Dict, List, Pattern\n\nclass EventBus:\n    def __init__(self):\n        self.topic_patterns_to_subscribers: Dict[Pattern, List[Callable]] = {}\n\n    def subscribe(self, pattern: Pattern, processor: Callable):\n        if pattern not in self.topic_patterns_to_subscribers:\n            self.topic_patterns_to_subscribers[pattern] = []\n        self.topic_patterns_to_subscribers[pattern].append(processor)\n\n    def unsubscribe(self, pattern: Pattern, processor: Callable):\n        if pattern in self.topic_patterns_to_subscribers:\n            try:\n                index = self.topic_patterns_to_subscribers[pattern].index(processor)\n            except ValueError:\n                return\n            del self.topic_patterns_to_subscribers[pattern][index]\n            if not self.topic_patterns_to_subscribers[pattern]:\n                del self.topic_patterns_to_subscribers[pattern]\n
@pytest.mark.asyncio\nasync def test_sub_notify(event_bus: EventBus, profile, event, processor):\n    \"\"\"Test subscriber receives event.\"\"\"\n    event_bus.subscribe(re.compile(\".*\"), processor)\n    await event_bus.notify(profile, event)\n    assert processor.profile == profile\n    assert processor.event == event\n
async def notify(self, profile: \"Profile\", event: Event):\n    partials = []\n    for pattern, subscribers in self.topic_patterns_to_subscribers.items():\n        match = pattern.match(event.topic)\n\n        if not match:\n            continue\n\n        for subscriber in subscribers:\n            partials.append(\n                partial(\n                    subscriber,\n                    profile,\n                    event.with_metadata(EventMetadata(pattern, match)),\n                )\n            )\n\n    for processor in partials:\n        try:\n            await processor()\n        except Exception:\n            LOGGER.exception(\"Error occurred while processing event\")\n
"},{"location":"testing/UnitTests/#asynctest","title":"asynctest","text":"

From: aries_cloudagent/protocols/didexchange/v1_0/tests/test_manager.py

class TestDidExchangeManager(AsyncTestCase, TestConfig):\n    async def setUp(self):\n        self.responder = MockResponder()\n\n        self.oob_mock = async_mock.MagicMock(\n            clean_finished_oob_record=async_mock.AsyncMock(return_value=None)\n        )\n\n        self.route_manager = async_mock.MagicMock(RouteManager)\n        ...\n        self.profile = InMemoryProfile.test_profile(\n            {\n                \"default_endpoint\": \"http://aries.ca/endpoint\",\n                \"default_label\": \"This guy\",\n                \"additional_endpoints\": [\"http://aries.ca/another-endpoint\"],\n                \"debug.auto_accept_invites\": True,\n                \"debug.auto_accept_requests\": True,\n                \"multitenant.enabled\": True,\n                \"wallet.id\": True,\n            },\n            bind={\n                BaseResponder: self.responder,\n                OobMessageProcessor: self.oob_mock,\n                RouteManager: self.route_manager,\n                ...\n            },\n        )\n        ...\n\n    async def test_receive_invitation_no_auto_accept(self):\n        async with self.profile.session() as session:\n            mediation_record = MediationRecord(\n                role=MediationRecord.ROLE_CLIENT,\n                state=MediationRecord.STATE_GRANTED,\n                connection_id=self.test_mediator_conn_id,\n                routing_keys=self.test_mediator_routing_keys,\n                endpoint=self.test_mediator_endpoint,\n            )\n            await mediation_record.save(session)\n            with async_mock.patch.object(\n                self.multitenant_mgr, \"get_default_mediator\"\n            ) as mock_get_default_mediator:\n                mock_get_default_mediator.return_value = mediation_record\n                invi_rec = await self.oob_manager.create_invitation(\n                    my_endpoint=\"testendpoint\",\n                    hs_protos=[HSProto.RFC23],\n                )\n\n                invitee_record = await self.manager.receive_invitation(\n                    invi_rec.invitation,\n                    auto_accept=False,\n                )\n                assert invitee_record.state == ConnRecord.State.INVITATION.rfc23\n
async def receive_invitation(\n    self,\n    invitation: OOBInvitationMessage,\n    their_public_did: Optional[str] = None,\n    auto_accept: Optional[bool] = None,\n    alias: Optional[str] = None,\n    mediation_id: Optional[str] = None,\n) -> ConnRecord:\n    ...\n    accept = (\n        ConnRecord.ACCEPT_AUTO\n        if (\n            auto_accept\n            or (\n                auto_accept is None\n                and self.profile.settings.get(\"debug.auto_accept_invites\")\n            )\n        )\n        else ConnRecord.ACCEPT_MANUAL\n    )\n    service_item = invitation.services[0]\n    # Create connection record\n    conn_rec = ConnRecord(\n        invitation_key=(\n            DIDKey.from_did(service_item.recipient_keys[0]).public_key_b58\n            if isinstance(service_item, OOBService)\n            else None\n        ),\n        invitation_msg_id=invitation._id,\n        their_label=invitation.label,\n        their_role=ConnRecord.Role.RESPONDER.rfc23,\n        state=ConnRecord.State.INVITATION.rfc23,\n        accept=accept,\n        alias=alias,\n        their_public_did=their_public_did,\n        connection_protocol=DIDX_PROTO,\n    )\n\n    async with self.profile.session() as session:\n        await conn_rec.save(\n            session,\n            reason=\"Created new connection record from invitation\",\n            log_params={\n                \"invitation\": invitation,\n                \"their_role\": ConnRecord.Role.RESPONDER.rfc23,\n            },\n        )\n\n        # Save the invitation for later processing\n        ...\n\n    return conn_rec\n
"},{"location":"testing/UnitTests/#other-details","title":"Other details","text":"
  with self.assertRaises(DIDXManagerError) as ctx:\n     ...\n  assert \" ... error ...\" in str(ctx.exception)\n
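
For pytest-style tests, the equivalent pattern uses pytest.raises; a minimal, illustrative sketch (assuming DIDXManagerError is imported from the module under test):

import pytest\n\ndef test_error_message():\n    with pytest.raises(DIDXManagerError) as excinfo:\n        ...  # call the code expected to raise\n    assert \" ... error ...\" in str(excinfo.value)\n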
"}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Hyperledger Aries Cloud Agent - Python","text":"

An easy to use Aries agent for building SSI services using any language that supports sending/receiving HTTP requests.

Full access to an organized set of all of the ACA-Py documents is available at https://aca-py.org. Check it out! It's much easier to navigate than this GitHub repo for reading the documentation.

"},{"location":"#overview","title":"Overview","text":"

Hyperledger Aries Cloud Agent Python (ACA-Py) is a foundation for building Verifiable Credential (VC) ecosystems. It operates in the second and third layers of the Trust Over IP framework (PDF) using DIDComm messaging and Hyperledger Aries protocols. The \"cloud\" in the name means that ACA-Py runs on servers (cloud, enterprise, IoT devices, and so forth), and is not designed to run on mobile devices.

ACA-Py is built on the Aries concepts and features that make up Aries Interop Profile (AIP) 2.0. ACA-Py\u2019s supported Aries protocols include, most importantly, protocols for issuing, verifying, and holding verifiable credentials using both Hyperledger AnonCreds verifiable credential format, and the W3C Standard Verifiable Credential Data Model format using JSON-LD with LD-Signatures and BBS+ Signatures. Coming soon -- issuing and presenting Hyperledger AnonCreds verifiable credentials using the W3C Standard Verifiable Credential Data Model format.

To use ACA-Py you create a business logic controller that \"talks to\" an ACA-Py instance (sending HTTP requests and receiving webhook notifications), and ACA-Py handles the Aries and DIDComm protocols and related functionality. Your controller can be built in any language that supports making and receiving HTTP requests; knowledge of Python is not needed. Together, this means you can focus on building VC solutions using familiar web development technologies, instead of having to learn the nuts and bolts of low-level cryptography and Trust over IP-type Aries protocols.

This checklist-style overview document provides a full list of the features in ACA-Py. The following is a list of some of the core features needed for a production deployment, with a link to detailed information about the capability.

"},{"location":"#multi-tenant","title":"Multi-Tenant","text":"

ACA-Py supports \"multi-tenant\" scenarios. In these scenarios, one (scalable) instance of ACA-Py uses one database instance, and are together capable of managing separate secure storage (for private keys, DIDs, credentials, etc.) for many different actors. This enables (for example) an \"issuer-as-a-service\", where an enterprise may have many VC issuers, each with different identifiers, using the same instance of ACA-Py to interact with VC holders as required. Likewise, an ACA-Py instance could be a \"cloud wallet\" for many holders (e.g. people or organizations) that, for whatever reason, cannot use a mobile device for a wallet. Learn more about multi-tenant deployments here.

"},{"location":"#mediator-service","title":"Mediator Service","text":"

Startup options allow the use of an ACA-Py as an Aries mediator using core Aries protocols to coordinate its mediation role. Such an ACA-Py instance receives, stores and forwards messages to Aries agents that (for example) lack an addressable endpoint on the Internet such as a mobile wallet. A live instance of a public mediator based on ACA-Py is available here from Indicio Technologies. Learn more about deploying a mediator here. See the Aries Mediator Service for a \"best practices\" configuration of an Aries mediator.

"},{"location":"#indy-transaction-endorsing","title":"Indy Transaction Endorsing","text":"

ACA-Py supports a Transaction Endorsement protocol, for agents that don't have write access to an Indy ledger. Endorser support is documented here.

"},{"location":"#scaled-deployments","title":"Scaled Deployments","text":"

ACA-Py supports deployments in scaled environments, such as Kubernetes, where ACA-Py and its storage components can be horizontally scaled as needed to handle the load.

"},{"location":"#vc-api-endpoints","title":"VC-API Endpoints","text":"

A set of endpoints conforming to the vc-api specification are included to manage w3c credentials and presentations. They are documented here and a postman demo is available here.

"},{"location":"#example-uses","title":"Example Uses","text":"

The business logic you use with ACA-Py is limited only by your imagination. Possible applications include:

"},{"location":"#getting-started","title":"Getting Started","text":"

For those new to SSI, Aries and ACA-Py, there are a couple of Linux Foundation edX courses that provide a good starting point.

The latter is the most useful for developers wanting to get a solid basis in using ACA-Py and other Aries Frameworks.

Also included here is a much more concise (but less maintained) Getting Started Guide that will take you from knowing next to nothing about decentralized identity to developing Aries-based business apps and services. You\u2019ll run an Indy ledger (with no ramp-up time), ACA-Py apps and developer-oriented demos. The guide has a table of contents so you can skip the parts you already know.

"},{"location":"#understanding-the-architecture","title":"Understanding the Architecture","text":"

There is an architectural deep dive webinar presented by the ACA-Py team, and slides from the webinar are also available. The picture below gives a quick overview of the architecture, showing an instance of ACA-Py, a controller and the interfaces between the controller and ACA-Py, and the external paths to other agents and public ledgers on the Internet.

You can extend ACA-Py using plug-ins, which can be loaded at runtime. Plug-ins are mentioned in the webinar and are described in more detail here. An ever-expanding set of ACA-Py plugins can be found in the Aries ACA-Py Plugins repository. Check them out -- it might have the very plugin you need!

"},{"location":"#installation-and-usage","title":"Installation and Usage","text":"

Use the \"install and go\" page for developers if you are comfortable with Trust over IP and Aries concepts. ACA-Py can be run with Docker without installation (highly recommended), or can be installed from PyPi. In the /demo directory there is a full set of demos for developers to use in getting started, and the demo read me is a great starting point for developers to use an \"in-browser\" approach to run a zero-install example. The Read the Docs overview is also a way to understand the internal modules and APIs that make up an ACA-Py instance.

If you would like to develop on ACA-Py locally, note that we use Poetry for dependency management and packaging. If you are unfamiliar with Poetry, please see our cheat sheet

"},{"location":"#about-the-aca-py-admin-api","title":"About the ACA-Py Admin API","text":"

The overview of ACA-Py\u2019s API is a great starting place for learning about the ACA-Py API when you are starting to build your own controller.

An ACA-Py instance puts together an OpenAPI-documented REST interface based on the protocols that are loaded. This is used by a controller application (written in any language) to manage the behavior of the agent. The controller can initiate actions (e.g. issuing a credential) and can respond to agent events (e.g. sending a presentation request after a connection is accepted). Agent events are delivered to the controller as webhooks to a configured URL.
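
As a minimal, illustrative sketch of the webhook side of a controller (ports and paths are assumptions; ACA-Py POSTs events to <webhook-url>/topic/<topic>/ when started with --webhook-url), using aiohttp:

from aiohttp import web\n\nasync def handle_webhook(request: web.Request):\n    topic = request.match_info[\"topic\"]\n    payload = await request.json()\n    # e.g. react to \"connections\" events by sending a presentation request\n    print(f\"webhook topic={topic} state={payload.get('state')}\")\n    return web.Response(status=200)\n\napp = web.Application()\napp.add_routes([web.post(\"/webhooks/topic/{topic}/\", handle_webhook)])\n# Start ACA-Py with --webhook-url http://localhost:8022/webhooks\nweb.run_app(app, port=8022)\n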

Technical note: the administrative API exposed by the agent for the controller to use must be protected with an API key (using the --admin-api-key command line arg) or deliberately left unsecured using the --admin-insecure-mode command line arg. The latter should not be used other than in development if the API is not otherwise secured.

"},{"location":"#troubleshooting","title":"Troubleshooting","text":"

There are a number of resources for getting help with ACA-Py and troubleshooting any problems you might run into. The Troubleshooting document contains some guidance about issues that have been experienced in the past. Feel free to submit PRs to supplement the troubleshooting document! Searching the ACA-Py GitHub issues may uncover challenges you are having that others have experienced, often with solutions. As well, there is the \"aries-cloudagent-python\" channel on the Hyperledger Discord chat server (invitation here).

"},{"location":"#credit","title":"Credit","text":"

The initial implementation of ACA-Py was developed by the Government of British Columbia\u2019s Digital Trust Team in Canada. To learn more about what\u2019s happening with decentralized identity and digital trust in British Columbia, check out the BC Digital Trust website.

See the MAINTAINERS.md file for a list of the current ACA-Py maintainers, and the guidelines for becoming a Maintainer. We'd love to have you join the team if you are willing and able to carry out the duties of a Maintainer.

"},{"location":"#contributing","title":"Contributing","text":"

Pull requests are welcome! Please read our contributions guide and submit your PRs. We enforce developer certificate of origin (DCO) commit signing \u2014\u00a0guidance on this is available. We also welcome issues submitted about problems you encounter in using ACA-Py.

"},{"location":"#license","title":"License","text":"

Apache License Version 2.0

"},{"location":"CHANGELOG/","title":"Aries Cloud Agent Python Changelog","text":""},{"location":"CHANGELOG/#0120rc2","title":"0.12.0rc2","text":""},{"location":"CHANGELOG/#march-5-2024","title":"March 5, 2024","text":"

Release 0.12.0 is a relatively large release, but currently with no breaking changes. We expect there will be breaking changes (at least in the handling of endorsement) before the 0.12.0 release is finalized, hence the minor version update.

The rc0 release candidate introduced a regression via [PR #2705] that has been reverted in rc1 and later via [PR #2789]. Further investigation is needed to determine how to accomplish the goal of [PR #2705] (\"feat: inject profile\") without the regression. The rc2 and later releases address a regression related to the sending of a revocation notification from the issuer to the holder of a newly revoked credential, fixed in [PR #2814].

Much progress has been made on did:peer support in this release, with the handling of inbound DID Peer 1 added, and inbound and outbound support for DID Peer 2 and 4. The goal of that work is to eliminate the remaining places where \"unqualified\" DIDs are used, and to enable \"connection reuse\" in the Out of Band protocol when using DID Peer 2 and 4 DIDs. Work continues in supporting ledger agnostic AnonCreds, and the new Hyperledger AnonCreds Rust library. Attention was also given in the release to the handling of JSON-LD Data Integrity Verifiable Credentials, with more expected before the release is finalized. In addition to those updates, there were fixes and improvements across the codebase.

The most visible change in this release is the re-organization of the ACA-Py documentation, moving the vast majority of the documents to the folders within the docs folder -- a long overdue change that will allow us to soon publish the documents on https://aca-py.org directly from the ACA-Py repository, rather than from the separate aries-acapy-docs currently being used.

A big developer improvement is a revamping of the test handling to eliminate ~2500 warnings that were previously generated in the test suite. Nice job @ff137!

"},{"location":"CHANGELOG/#0120rc2-breaking-changes","title":"0.12.0rc2 Breaking Changes","text":"

There are no breaking changes in 0.12.0rc2.

"},{"location":"CHANGELOG/#0120rc2-categorized-list-of-pull-requests","title":"0.12.0rc2 Categorized List of Pull Requests","text":""},{"location":"CHANGELOG/#0110","title":"0.11.0","text":""},{"location":"CHANGELOG/#november-24-2023","title":"November 24, 2023","text":"

Release 0.11.0 is a relatively large release of new features, fixes, and internal updates. 0.11.0 is planned to be the last significant update before we begin the transition to using the ledger agnostic AnonCreds Rust in a release that is expected to bring Admin/Controller API changes. We plan to do patches to the 0.11.x branch while the transition is made to using [Anoncreds Rust].

An important addition to ACA-Py is support for signing and verifying SD-JWT verifiable credentials. We expect this to be the first of the changes to extend ACA-Py to support OpenID4VC protocols.

This release and Release 0.10.5 contain a high priority fix to correct an issue with the handling of the JSON-LD presentation verifications, where the status of the verification of the presentation.proof in the Verifiable Presentation was not included when determining the verification value (true or false) of the overall presentation. A forthcoming security advisory will cover the details. Anyone using JSON-LD presentations is recommended to upgrade to one of these versions of ACA-Py as soon as possible.

In the CI/CD realm, substantial changes were applied to the source base in switching from:

These are necessary and important modernization changes, with the latter two triggering many (largely mechanical) changes to the codebase.

"},{"location":"CHANGELOG/#0110-breaking-changes","title":"0.11.0 Breaking Changes","text":"

In addition to the impacts of the change for developers in switching from pip to Poetry, the only significant breaking change is the (overdue) transition of ACA-Py to always use the new DIDComm message type prefix, changing the DID Message prefix from the old hardcoded did:sov:BzCbsNYhMrjHiqZDTUASHg;spec to the new hardcoded https://didcomm.org value, and using the new DIDComm MIME type in place of the old. The vast majority of (perhaps all) Aries deployments have long since been updated to accept both values, so this change just forces the use of the newer value in sending messages. In updating this, we retained the old configuration parameters most deployments were using (--emit-new-didcomm-prefix and --emit-new-didcomm-mime-type) but updated the code to set the configuration parameters to true even if the parameters were not set. See [PR #2517].

The JSON-LD verifiable credential handling of JSON-LD contexts has been updated to pre-load the base contexts into the repository code so they are not fetched at run time. This is a security best practice for JSON-LD, and prevents errors in production when, from time to time, the JSON-LD contexts are unavailable because of outages of the web servers where they are hosted. See [PR #2587].

A Problem Report message is now sent when a request for a credential is received and there is no associated Credential Exchange Record. This may happen, for example, if an issuer decides to delete a Credential Exchange Record that has not been answered for a long time, and the holder responds after the delete. See [PR #2577].

"},{"location":"CHANGELOG/#0110-categorized-list-of-pull-requests","title":"0.11.0 Categorized List of Pull Requests","text":""},{"location":"CHANGELOG/#2289-migrate-to-poetry-2436-gavinok","title":"2289 Migrate to Poetry #2436 Gavinok","text":""},{"location":"CHANGELOG/#0105","title":"0.10.5","text":""},{"location":"CHANGELOG/#november-21-2023","title":"November 21, 2023","text":"

Release 0.10.5 is a high priority patch release to correct an issue with the handling of the JSON-LD presentation verifications, where the status of the verification of the presentation.proof in the Verifiable Presentation was not included when determining the verification value (true or false) of the overall presentation. A forthcoming security advisory will cover the details.

Anyone using JSON-LD presentations is recommended to upgrade to this version of ACA-Py as soon as possible.

"},{"location":"CHANGELOG/#0105-categorized-list-of-pull-requests","title":"0.10.5 Categorized List of Pull Requests","text":""},{"location":"CHANGELOG/#0104","title":"0.10.4","text":""},{"location":"CHANGELOG/#october-9-2023","title":"October 9, 2023","text":"

Release 0.10.4 is a patch release to correct an issue with the handling of did:key routing keys in some mediator scenarios, notably with the use of [Aries Framework Kotlin]. See the details in the PR and [Issue #2531 Routing for agents behind a aca-py based mediator is broken].

Thanks to codespree for raising the issue and providing the fix.

Aries Framework Kotlin

"},{"location":"CHANGELOG/#0104-categorized-list-of-pull-requests","title":"0.10.4 Categorized List of Pull Requests","text":""},{"location":"CHANGELOG/#0103","title":"0.10.3","text":""},{"location":"CHANGELOG/#september-29-2023","title":"September 29, 2023","text":"

Release 0.10.3 is a patch release to add an upgrade process for very old versions of Aries Cloud Agent Python (circa 0.5.2). If you have a long-time deployment of an issuer that uses revocation, this release could correct internal data (tags in secure storage) related to revocation registries. Details about the triggering problem can be found in [Issue #2485].

The upgrade is applied by running the following command for the ACA-Py instance to be upgraded:

./scripts/run_docker upgrade --force-upgrade --named-tag fix_issue_rev_reg

"},{"location":"CHANGELOG/#0103-categorized-list-of-pull-requests","title":"0.10.3 Categorized List of Pull Requests","text":""},{"location":"CHANGELOG/#0102","title":"0.10.2","text":""},{"location":"CHANGELOG/#september-22-2023","title":"September 22, 2023","text":"

Release 0.10.2 is a patch release for 0.10.1 that addresses three specific regressions found in deploying Release 0.10.1. The regressions are to fix:

"},{"location":"CHANGELOG/#0102-categorized-list-of-pull-requests","title":"0.10.2 Categorized List of Pull Requests","text":""},{"location":"CHANGELOG/#0101","title":"0.10.1","text":""},{"location":"CHANGELOG/#august-29-2023","title":"August 29, 2023","text":"

Release 0.10.1 contains a breaking change, an important fix for a regression introduced in 0.8.2 that impacts certain deployments, and a number of fixes and updates. Included in the updates is a significant internal reorganization of the DID and connection management code that was done to enable more flexible uses of different DID Methods, such as being able to use did:web DIDs for DIDComm messaging connections. The work also paves the way for coming updates related to support for did:peer DIDs for DIDComm. For details on the change see [PR #2409], which includes some of the best pull request documentation ever created.

Release 0.10.1 has the same contents as 0.10.0. An error on PyPi prevented the 0.10.0 release from being properly uploaded because of an existing file of the same name. We immediately released 0.10.1 as a replacement.

The regression fix is for ACA-Py deployments that use multi-use invitations but do NOT use the --auto-accept-connection-requests flag/processing. A change in 0.8.2 (PR [#2223]) suppressed an extra webhook event firing during the processing after receiving a connection request. An unexpected side effect of that change was that the subsequent webhook event also did not fire, and as a result, the controller did not get any event signalling a new connection request had been received via the multi-use invitation. The update in this release ensures the proper event fires and the controller receives the webhook.

See below for the breaking changes and a categorized list of the pull requests included in this release.

Updates in the CI/CD area include adding the publishing of a nightly container image that includes any changes in the main branch since the last nightly was published. This allows getting the \"latest and greatest\" code via a container image vs. having to install ACA-Py from the repository. In addition, Snyk scanning was added to the CI pipeline, and Indy SDK tests were removed from the pipeline.

"},{"location":"CHANGELOG/#0101-breaking-changes","title":"0.10.1 Breaking Changes","text":"

[#2352] is a breaking change related to the storage of presentation exchange records in ACA-Py. In previous releases, presentation exchange protocol state data records were retained in ACA-Py secure storage after the completion of protocol instances. With this release, the default behavior changes to deleting those records, unless the --preserve-exchange-records flag is set in the configuration. This extends the use of that flag, which previously applied only to issue credential records. The extension matches the initial intention of the flag -- that it cover both issue credential and present proof exchanges. The \"best practice\" for ACA-Py is that the controller (business logic) store any long-lasting business information needed for the service that is using the Aries Agent, and ACA-Py storage should be used only for data necessary for the operation of the agent. In particular, protocol state data should be held in ACA-Py only as long as the protocol is running (as it is needed by ACA-Py), and once a protocol instance completes, the controller should extract and store the business information from the protocol state before it is deleted from ACA-Py storage.

"},{"location":"CHANGELOG/#0100-categorized-list-of-pull-requests","title":"0.10.0 Categorized List of Pull Requests","text":""},{"location":"CHANGELOG/#0100","title":"0.10.0","text":""},{"location":"CHANGELOG/#august-29-2023_1","title":"August 29, 2023","text":"

Release 0.10.1 has the same contents as 0.10.0. An error on PyPi prevented the 0.10.0 release from being properly uploaded because of an existing file of the same name. We immediately released 0.10.1 as a replacement.

"},{"location":"CHANGELOG/#090","title":"0.9.0","text":""},{"location":"CHANGELOG/#july-24-2023","title":"July 24, 2023","text":"

Release 0.9.0 is an important upgrade that changes (PR [#2302]) the dependency on the now archived Hyperledger Ursa project to its updated, improved replacement, AnonCreds CL-Signatures. This important change is ONLY available when using Aries Askar as the wallet type, which brings in both [Indy VDR] and the CL-Signatures via the latest version of CredX from the indy-shared-rs repository. The update is NOT available to those that are using the Indy SDK. All new deployments of ACA-Py SHOULD use Aries Askar. Further, we strongly recommend that all deployments using the Indy SDK with ACA-Py upgrade their installation to use Aries Askar and the related components using the migration scripts available. An Indy SDK to Askar migration document was added to the aca-py.org documentation site, and a deprecation warning was added to the ACA-Py startup.

The second big change in this release is that we have upgraded the primary Python version from 3.6 to 3.9 (PR [#2247]). In this case, primary means that Python 3.9 is used to run the unit and integration tests on all Pull Requests. We also do nightly runs of the main branch using Python 3.10. As of this release we have dropped Python 3.6, 3.7 and 3.8, and introduced new dependencies that are not supported in those versions of Python. For those that use the published ACA-Py container images, the upgrade should be easily handled. If you are pulling ACA-Py into your own image, or a non-containerized environment, this is a breaking change that you will need to address.

Please see the next section for all breaking changes, and the subsequent section for a categorized list of all pull requests in this release.

"},{"location":"CHANGELOG/#breaking-changes","title":"Breaking Changes","text":"

In addition to the breaking Python 3.6 to 3.9 upgrade, there are two other breaking changes that may impact some deployments.

[#2034] allows for additional flexibility in using public DIDs in invitations, and adds a restriction that \"implicit\" invitations must be proactively enabled using a flag (--requests-through-public-did). Previously, such requests would always be accepted if --auto-accept was enabled, which could lead to unexpected connections being established.

[#2170] is a change to improve message handling in the face of delivery errors when using a persistent queue implementation such as the ACA-Py Redis Plugin. If you are using the Redis plugin, you MUST upgrade to Redis Plugin Release 0.1.0 in conjunction with deploying this ACA-Py release. For those using their own persistent queue solution, see the PR [#2170] comments for information about changes you might need to make to your deployment.

"},{"location":"CHANGELOG/#categorized-list-of-pull-requests","title":"Categorized List of Pull Requests","text":""},{"location":"CHANGELOG/#082","title":"0.8.2","text":""},{"location":"CHANGELOG/#june-29-2023","title":"June 29, 2023","text":"

Release 0.8.2 contains a number of minor fixes and updates to ACA-Py, including the correction of a regression in Release 0.8.0 related to the use of plugins (see [#2255]). Highlights include making it easier to use tracing in a development environment to collect detailed performance information about what is going on within ACA-Py.

This release pulls in indy-shared-rs Release 3.3 which fixes a serious issue in AnonCreds verification, as described in issue [#2036], where the verification of a presentation with multiple revocable credentials fails when using Aries Askar and the other shared components. This issue occurs only when using Aries Askar and indy-credx Release 3.3.

An important new feature in this release is the ability to set some instance configuration settings at the tenant level of a multi-tenant deployment. See PR [#2233].

There are no breaking changes in this release.

"},{"location":"CHANGELOG/#categorized-list-of-pull-requests_1","title":"Categorized List of Pull Requests","text":""},{"location":"CHANGELOG/#081","title":"0.8.1","text":""},{"location":"CHANGELOG/#april-5-2023","title":"April 5, 2023","text":"

Version 0.8.1 is an urgent update to Release 0.8.0 to address an inability to execute the upgrade command. The upgrade command is needed for 0.8.0 Pull Request [#2116] - \"UPGRADE: Fix multi-use invitation performance\", which is useful for (at least) deployments of ACA-Py as a mediator. In the release, the upgrade process is revamped, and documented in Upgrading ACA-Py.

Key points about upgrading for those with production, pre-0.8.1 ACA-Py deployments:

"},{"location":"CHANGELOG/#postgres-support-with-aries-askar","title":"Postgres Support with Aries Askar","text":"

Recent changes to Aries Askar have resulted in Askar supporting Postgres version 11 and greater. If you are on Postgres 10 or earlier and want to upgrade to use Askar, you must first migrate your database to Postgres 11 or later.

We have also noted that in some container orchestration environments, such as Red Hat's OpenShift and possibly other Kubernetes distributions, Askar using Postgres versions greater than 14 does not install correctly. Please monitor [Issue #2199] for an update to this limitation. We have found that Postgres 15 does install correctly in other environments (such as in docker compose setups).

"},{"location":"CHANGELOG/#categorized-list-of-pull-requests_2","title":"Categorized List of Pull Requests","text":""},{"location":"CHANGELOG/#080","title":"0.8.0","text":""},{"location":"CHANGELOG/#march-14-2023","title":"March 14, 2023","text":"

0.8.0 is a breaking change that contains all updates since release 0.7.5. It extends the previously tagged 1.0.0-rc1 release because it is not clear when the 1.0.0 release will be finalized. Many of the PRs in this release were previously included in the 1.0.0-rc1 release. The categorized list of PRs separates those that are new from those in the 1.0.0-rc1 release candidate.

There are not a lot of new Aries Framework features in this release, as the focus has been on cleanup and optimization. The biggest addition is the inclusion with ACA-Py of a universal resolver interface, allowing an instance to have both local resolvers for some DID Methods and a call out to an external universal resolver for other DID Methods. Another significant new capability is full support for Hyperledger Indy transaction endorsement for Authors and Endorsers. A new repo aries-endorser-service has been created that is a pre-configured instance of ACA-Py for use as an Endorser service.

A recently completed feature that is outside of ACA-Py is a script to migrate existing ACA-Py storage from Indy SDK format to Aries Askar format. This enables existing deployments to switch to using the newer Aries Askar components. For details see the converter in the aries-acapy-tools repository.

"},{"location":"CHANGELOG/#container-publishing-updated","title":"Container Publishing Updated","text":"

With this release, a new automated process publishes container images in the Hyperledger container image repository. New images for the release are automatically published by the GitHub Actions workflows: publish.yml and publish-indy.yml. The actions are triggered when a release is tagged, so no manual action is needed. The images are published in the Hyperledger Package Repository under aries-cloudagent-python, and a link to the packages is added to the repository's main page (under \"Packages\"). Additional information about the container image publication process can be found in the document Container Images and Github Actions.

The ACA-Py container images are based on Python 3.6 and 3.9 slim-bullseye images, and are designed to support linux/386 (x86), linux/amd64 (x64), and linux/arm64. However, for this release, the publication of multi-architecture containers is disabled. We are working to enable that through the updating of some dependencies that lack that capability. There are two flavors of image built for each Python version. One contains only the Indy/Aries shared libraries (Aries Askar, Indy VDR and Indy Shared RS, supporting only the use of --wallet-type askar). The other (labelled indy) contains the Indy/Aries shared libraries and the Indy SDK (considered deprecated). For new deployments, we recommend using the Python 3.9 shared library images. For existing deployments, we recommend migrating to those images.

Those currently using the container images published by BC Gov on Docker Hub should change to use those published to the Hyperledger Package Repository under aries-cloudagent-python.

"},{"location":"CHANGELOG/#breaking-changes-and-upgrades","title":"Breaking Changes and Upgrades","text":""},{"location":"CHANGELOG/#pr-2034-implicit-connections","title":"PR #2034 -- Implicit connections","text":"

The breaking change impacts existing deployments that support implicit connections -- those initiated by another agent using a Public DID for this instance instead of an explicit invitation. Such deployments need to add the configuration parameter --requests-through-public-did to continue to support that feature. The use case is that an ACA-Py instance publishes a public DID on a ledger with a DIDComm service in the DIDDoc. Other agents resolve that DID, and attempt to establish a connection with the ACA-Py instance using the service endpoint. This is called an \"implicit\" connection in RFC 0023 DID Exchange.

"},{"location":"CHANGELOG/#pr-1913-unrevealed-attributes-in-presentations","title":"PR #1913 -- Unrevealed attributes in presentations","text":"

Updates the handling of \"unrevealed attributes\" during verification of AnonCreds presentations, allowing them to be used in a presentation, with additional data that can be checked if for unrevealed attributes. As few implementations of Aries wallets support unrevealed attributes in an AnonCreds presentation, this is unlikely to impact any deployments.

"},{"location":"CHANGELOG/#pr-2145-update-webhook-message-to-terse-form-by-default-added-startup-flag-debug-webhooks-for-full-form","title":"PR #2145 - Update webhook message to terse form by default, added startup flag --debug-webhooks for full form","text":"

The default behavior in ACA-Py has been to keep the full text of all messages in the protocol state object, and to include the full protocol state object in the webhooks sent to the controller. When the messages include a very large object (repeated in each message), the webhook may become too big to be passed via HTTP. For example, issuing a credential with a photo as one of the claims may result in a number of copies of the photo in the protocol state object and hence very large webhooks. This change reduces the size of the webhook message by eliminating redundant data in the protocol state of the \"Issue Credential\" message by default, and adds a new parameter to restore the old behavior.

"},{"location":"CHANGELOG/#upgrade-pr-2116-upgrade-fix-multi-use-invitation-performance","title":"UPGRADE PR #2116 - UPGRADE: Fix multi-use invitation performance","text":"

The way multiuse invitations were handled in previous versions of ACA-Py caused performance to degrade over time. An update was made to add state into the tag names, which eliminated the need to scan the tags when querying storage for the invitation.

If you are using multiuse invitations in your existing (pre-0.8.0) deployment of ACA-Py, you can run an upgrade to apply this change. To run the upgrade from previous versions, use the following command using the 0.8.0 version of ACA-Py, adding your wallet settings:

aca-py upgrade <other wallet config settings> --from-version=v0.7.5 --upgrade-config-path ./upgrade.yml

"},{"location":"CHANGELOG/#categorized-list-of-pull-requests_3","title":"Categorized List of Pull Requests","text":""},{"location":"CHANGELOG/#075","title":"0.7.5","text":""},{"location":"CHANGELOG/#october-26-2022","title":"October 26, 2022","text":"

0.7.5 is a patch release, primarily to add PR #1881 DID Exchange in ACA-Py 0.7.4 with explicit invitations and without auto-accept broken. A couple of other PRs were added to the release, as listed below, and in Milestone 0.7.5.

"},{"location":"CHANGELOG/#list-of-pull-requests","title":"List of Pull Requests","text":""},{"location":"CHANGELOG/#074","title":"0.7.4","text":""},{"location":"CHANGELOG/#june-30-2022","title":"June 30, 2022","text":"

Existing multitenant JWTs invalidated when a new JWT is generated: If you have a pre-existing implementation with existing Admin API authorization JWTs, invoking the endpoint to get a JWT now invalidates the existing JWT. Previously an identical JWT would be created. Please see this comment on PR #1725 for more details.

0.7.4 is a significant release focused on stability and production deployments. As the \"patch\" release number indicates, there were no breaking changes in the Admin API, but a huge volume of updates and improvements. Highlights of this release include:

In addition, there are a significant number of general enhancements, bug fixes, documentation updates and code management improvements.

This release is a reflection of the many groups stressing ACA-Py in production environments, reporting issues and the resulting solutions. We also have a very large number of contributors to ACA-Py, with this release having PRs from 22 different individuals. A big thank you to all of those using ACA-Py, raising issues and providing solutions.

"},{"location":"CHANGELOG/#major-enhancements","title":"Major Enhancements","text":"

A lot of work has been put into this release related to performance and load testing, with significant updates being made to the key \"shared component\" ACA-Py dependencies (Aries Askar, Indy VDR) and Indy Shared RS (including CredX). We now recommend using those components (by using --wallet-type askar in the ACA-Py startup parameters) for new ACA-Py deployments. A wallet migration tool from indy-sdk storage to Askar storage is still needed before migrating existing deployments to Askar. A big thanks to those creating/reporting on stress test scenarios, and especially the team at LISSI for creating the aries-cloudagent-loadgenerator to make load testing so easy! And of course to the core ACA-Py team for addressing the findings.

The largest enhancement is in the area of the endorsing of Hyperledger Indy ledger transactions, enabling an instance of ACA-Py to act as an Endorser for Indy authors needing endorsements to write objects to an Indy ledger. We're working on an Aries Endorser Service based on the new capabilities in ACA-Py -- an Endorser service that can be easily operated by an organization, ideally with a controller starter kit supporting a basic human and automated approvals business workflow. Contributions welcome!

A focus towards the end of the 0.7.4 development and release cycle was on the handling of AnonCreds revocation in ACA-Py. Most importantly, a production issue was uncovered whereby an ACA-Py issuer's local Revocation Registry data could get out of sync with what was published on an Indy ledger, resulting in an inability to publish new RevRegEntry transactions -- making new revocations impossible. As a result, we have added some new endpoints to enable an update to the RevReg storage such that RevRegEntry transactions can again be published to the ledger. Other changes were added related to revocation in general and to the handling of tails files in particular.

The team has worked a lot on evolving the persistent queue (PQ) approach available in ACA-Py. We have landed on a design for the queues for inbound and outbound messages using a default in-memory implementation, and the ability to replace the default method with implementations created via an ACA-Py plugin. There are two concrete, out-of-the-box external persistent queuing solutions available for Redis and Kafka. Those ACA-Py persistent queue implementation repositories will soon be migrated to the Aries project within the Hyperledger Foundation's GitHub organization. Anyone else can implement their own queuing plugin as long as it uses the same interface.

Several new ways to control ACA-Py configurations were added, including new startup parameters, Admin API parameters to control instances of protocols, and additional web hook notifications.

A number of fixes were made to the Credential Exchange protocols, both for V1 and V2, and for both AnonCreds and W3C format VCs. Nothing new was added, and there were no changes in the APIs.

As well there were a number of internal fixes, dependency updates, documentation and demo changes, developer tools and release management updates. All the usual stuff needed for a healthy, growing codebase.

"},{"location":"CHANGELOG/#categorized-list-of-pull-requests_4","title":"Categorized List of Pull Requests","text":""},{"location":"CHANGELOG/#073","title":"0.7.3","text":""},{"location":"CHANGELOG/#january-10-2022","title":"January 10, 2022","text":"

This release includes some new AIP 2.0 features (Revocation Notification and Discover Features 2.0), a major new feature for those using Indy ledgers (multi-ledger support), a new \"version upgrade\" process that automates updating data in secure storage required after a new release, and a fix for a critical bug in some mediator scenarios. The release also includes several new pieces of documentation (upgrade processing, storage database information and logging) and some other documentation updates that make the ACA-Py Read The Docs site useful again. And of course, some recent bug fixes and cleanups are included.

There is a BREAKING CHANGE for those deploying ACA-Py with an external outbound queue implementation (see PR #1501). As far as we know, there is only one organization that has such an implementation and they were involved in the creation of this PR, so we are not making this release a minor or major update. However, anyone else using an external queue should be aware of the impact of this PR that is included in the release.

For those that have an existing deployment of ACA-Py with long-lasting connection records, an upgrade is needed to use RFC 434 Out of Band and the \"reuse connection\" as the invitee. In PR #1453 (details below) a performance improvement was made when finding a connection for reuse. The new approach (adding a tag to the connection to enable searching) applies only to connections made using this ACA-Py release and later, and \"as-is\" connections made using earlier releases of ACA-Py will not be found as reuse candidates. A new \"Upgrade deployment\" capability (#1557, described below) must be executed to update your deployment to add tags for all existing connections.

The Supported RFCs document has been updated to reflect the addition of the AIP 2.0 RFCs for which support was added.

The following is an annotated list of PRs in the release, including a link to each PR.

"},{"location":"CHANGELOG/#072","title":"0.7.2","text":""},{"location":"CHANGELOG/#november-15-2021","title":"November 15, 2021","text":"

A mostly maintenance release with some key updates and cleanups based on community deployments and discovery. With usage in the field increasing, we're cleaning up edge cases and issues related to volume deployments.

The most significant new feature for users of Indy ledgers is a simplified approach for transaction authors getting their transactions signed by an endorser. Transaction author controllers now do almost nothing other than configuring their instance to use an Endorser, and ACA-Py takes care of the rest. Documentation of that feature is here.

"},{"location":"CHANGELOG/#071","title":"0.7.1","text":""},{"location":"CHANGELOG/#august-31-2021","title":"August 31, 2021","text":"

A relatively minor maintenance release to address issues found since the 0.7.0 release. Includes some cleanups of JSON-LD Verifiable Credentials and Verifiable Presentations.

"},{"location":"CHANGELOG/#070","title":"0.7.0","text":""},{"location":"CHANGELOG/#july-14-2021","title":"July 14, 2021","text":"

Another significant release, this version adds support for multiple new protocols, credential formats, and extension methods.

"},{"location":"CHANGELOG/#060","title":"0.6.0","text":""},{"location":"CHANGELOG/#february-25-2021","title":"February 25, 2021","text":"

This is a significant release of ACA-Py with several new features, as well as changes to the internal architecture in order to set the groundwork for using the new shared component libraries: indy-vdr, indy-credx, and aries-askar.

"},{"location":"CHANGELOG/#mediator-support","title":"Mediator support","text":"

While ACA-Py previously had support for a basic routing protocol, it was never fully developed or used in practice. Starting with this release, inbound and outbound connections can be established through a mediator agent using the Aries Mediator Coordination Protocol. This work was initially contributed by Adam Burdett and Daniel Bluhm of Indicio on behalf of SICPA. Read more about mediation support.

"},{"location":"CHANGELOG/#multi-tenancy-support","title":"Multi-Tenancy support","text":"

Started by BMW and completed by Animo Solutions and Anon Solutions on behalf of SICPA, this feature allows for a single ACA-Py instance to host multiple wallet instances. This can greatly reduce the resources required when many identities are being handled. Read more about multi-tenancy support.

"},{"location":"CHANGELOG/#new-connection-protocols","title":"New connection protocol(s)","text":"

In addition to the Aries 0160 Connections RFC, ACA-Py now supports the Aries DID Exchange Protocol for connection establishment and reuse, as well as the Aries Out-of-Band Protocol for representing connection invitations and other pre-connection requests.

"},{"location":"CHANGELOG/#issue-credential-v2","title":"Issue-Credential v2","text":"

This release includes an initial implementation of the Aries Issue Credential v2 protocol.

"},{"location":"CHANGELOG/#notable-changes-for-administrators","title":"Notable changes for administrators","text":""},{"location":"CHANGELOG/#notable-changes-for-plugin-writers","title":"Notable changes for plugin writers","text":"

The following are breaking changes to the internal APIs which may impact Python code extensions.

async with profile.session() as session:\n    storage = session.inject(BaseStorage)\n
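
As a sketch of what the new pattern looks like in plugin code (a minimal example, assuming a profile object is in scope, e.g. from the admin request context; the record type and value are hypothetical):

from aries_cloudagent.storage.base import BaseStorage\nfrom aries_cloudagent.storage.record import StorageRecord\n\nasync def save_plugin_record(profile):\n    # sessions are now opened from the profile; dependencies are injected from the session\n    async with profile.session() as session:\n        storage = session.inject(BaseStorage)\n        # add a hypothetical plugin-defined record to secure storage\n        await storage.add_record(StorageRecord(type=\"plugin_example\", value=\"some value\"))\n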

"},{"location":"CHANGELOG/#056","title":"0.5.6","text":""},{"location":"CHANGELOG/#october-19-2020","title":"October 19, 2020","text":""},{"location":"CHANGELOG/#055","title":"0.5.5","text":""},{"location":"CHANGELOG/#october-9-2020","title":"October 9, 2020","text":""},{"location":"CHANGELOG/#054","title":"0.5.4","text":""},{"location":"CHANGELOG/#august-24-2020","title":"August 24, 2020","text":""},{"location":"CHANGELOG/#053","title":"0.5.3","text":""},{"location":"CHANGELOG/#july-23-2020","title":"July 23, 2020","text":""},{"location":"CHANGELOG/#052","title":"0.5.2","text":""},{"location":"CHANGELOG/#june-26-2020","title":"June 26, 2020","text":""},{"location":"CHANGELOG/#051","title":"0.5.1","text":""},{"location":"CHANGELOG/#april-23-2020","title":"April 23, 2020","text":""},{"location":"CHANGELOG/#050","title":"0.5.0","text":""},{"location":"CHANGELOG/#april-21-2020","title":"April 21, 2020","text":""},{"location":"CHANGELOG/#045","title":"0.4.5","text":""},{"location":"CHANGELOG/#march-3-2020","title":"March 3, 2020","text":""},{"location":"CHANGELOG/#044","title":"0.4.4","text":""},{"location":"CHANGELOG/#february-28-2020","title":"February 28, 2020","text":""},{"location":"CHANGELOG/#043","title":"0.4.3","text":""},{"location":"CHANGELOG/#february-26-2020","title":"February 26, 2020","text":""},{"location":"CHANGELOG/#042","title":"0.4.2","text":""},{"location":"CHANGELOG/#february-8-2020","title":"February 8, 2020","text":""},{"location":"CHANGELOG/#041","title":"0.4.1","text":""},{"location":"CHANGELOG/#january-31-2020","title":"January 31, 2020","text":""},{"location":"CHANGELOG/#040","title":"0.4.0","text":""},{"location":"CHANGELOG/#december-10-2019","title":"December 10, 2019","text":""},{"location":"CHANGELOG/#035","title":"0.3.5","text":""},{"location":"CHANGELOG/#november-1-2019","title":"November 1, 2019","text":""},{"location":"CHANGELOG/#034","title":"0.3.4","text":""},{"location":"CHANGELOG/#october-23-2019","title":"October 23, 2019","text":""},{"location":"CHANGELOG/#033","title":"0.3.3","text":""},{"location":"CHANGELOG/#september-27-2019","title":"September 27, 2019","text":""},{"location":"CHANGELOG/#032","title":"0.3.2","text":""},{"location":"CHANGELOG/#september-3-2019","title":"September 3, 2019","text":""},{"location":"CHANGELOG/#031","title":"0.3.1","text":""},{"location":"CHANGELOG/#august-15-2019","title":"August 15, 2019","text":""},{"location":"CHANGELOG/#030","title":"0.3.0","text":""},{"location":"CHANGELOG/#august-9-2019","title":"August 9, 2019","text":""},{"location":"CHANGELOG/#021","title":"0.2.1","text":""},{"location":"CHANGELOG/#july-16-2019","title":"July 16, 2019","text":""},{"location":"CHANGELOG/#020","title":"0.2.0","text":""},{"location":"CHANGELOG/#july-16-2019_1","title":"July 16, 2019","text":"

This is the first PyPI release. The history begins with the transfer of aca-py from bcgov to hyperledger.

"},{"location":"CODE_OF_CONDUCT/","title":"Hyperledger Code of Conduct","text":"

Hyperledger is a collaborative project at The Linux Foundation. It is an open-source and open community project where participants choose to work together, and in that process experience differences in language, location, nationality, and experience. In such a diverse environment, misunderstandings and disagreements happen, which in most cases can be resolved informally. In rare cases, however, behavior can intimidate, harass, or otherwise disrupt one or more people in the community, which Hyperledger will not tolerate.

A Code of Conduct is useful to define accepted and acceptable behaviors and to promote high standards of professional practice. It also provides a benchmark for self-evaluation and acts as a vehicle to strengthen the identity of the organization.

This code (CoC) applies to any member of the Hyperledger community \u2013 developers, participants in meetings, teleconferences, mailing lists, conferences or functions, etc. Note that this code complements rather than replaces legal rights and obligations pertaining to any particular situation.

"},{"location":"CODE_OF_CONDUCT/#statement-of-intent","title":"Statement of Intent","text":"

Hyperledger is committed to maintaining a positive work environment. This commitment calls for a workplace where participants at all levels behave according to the rules of the following code. A foundational concept of this code is that we all share responsibility for our work environment.

"},{"location":"CODE_OF_CONDUCT/#code","title":"Code","text":"
  1. Treat each other with respect, professionalism, fairness, and sensitivity to our many differences and strengths, including in situations of high pressure and urgency.

  2. Never harass or bully anyone verbally, physically or sexually.

  3. Never discriminate on the basis of personal characteristics or group membership.

  4. Communicate constructively and avoid demeaning or insulting behavior or language.

  5. Seek, accept, and offer objective work criticism, and acknowledge properly the contributions of others.

  6. Be honest about your own qualifications, and about any circumstances that might lead to conflicts of interest.

  7. Respect the privacy of others and the confidentiality of data you access.

  8. With respect to cultural differences, be conservative in what you do and liberal in what you accept from others, but not to the point of accepting disrespectful, unprofessional, unfair, or unwelcome behavior or advances.

  9. Promote the rules of this Code and take action (especially if you are in a leadership position) to bring the discussion back to a more civil level whenever inappropriate behaviors are observed.

  10. Stay on topic: Make sure that you are posting to the correct channel and avoid off-topic discussions. Remember when you update an issue or respond to an email you are potentially sending to a large number of people.

  11. Step down considerately: Members of every project come and go, and Hyperledger is no different. When you leave or disengage from the project, in whole or in part, we ask that you do so in a way that minimizes disruption to the project. This means you should tell people you are leaving and take the proper steps to ensure that others can pick up where you left off.

"},{"location":"CODE_OF_CONDUCT/#glossary","title":"Glossary","text":""},{"location":"CODE_OF_CONDUCT/#demeaning-behavior","title":"Demeaning Behavior","text":"

is acting in a way that reduces another person's dignity, sense of self-worth or respect within the community.

"},{"location":"CODE_OF_CONDUCT/#discrimination","title":"Discrimination","text":"

is the prejudicial treatment of an individual based on criteria such as: physical appearance, race, ethnic origin, genetic differences, national or social origin, name, religion, gender, sexual orientation, family or health situation, pregnancy, disability, age, education, wealth, domicile, political view, morals, employment, or union activity.

"},{"location":"CODE_OF_CONDUCT/#insulting-behavior","title":"Insulting Behavior","text":"

is treating another person with scorn or disrespect.

"},{"location":"CODE_OF_CONDUCT/#acknowledgement","title":"Acknowledgement","text":"

is a record of the origin(s) and author(s) of a contribution.

"},{"location":"CODE_OF_CONDUCT/#harassment","title":"Harassment","text":"

is any conduct, verbal or physical, that has the intent or effect of interfering with an individual, or that creates an intimidating, hostile, or offensive environment.

"},{"location":"CODE_OF_CONDUCT/#leadership-position","title":"Leadership Position","text":"

includes group Chairs, project maintainers, staff members, and Board members.

"},{"location":"CODE_OF_CONDUCT/#participant","title":"Participant","text":"

includes the following persons:

"},{"location":"CODE_OF_CONDUCT/#respect","title":"Respect","text":"

is the genuine consideration you have for someone (if only because of their status as participant in Hyperledger, like yourself), and that you show by treating them in a polite and kind way.

"},{"location":"CODE_OF_CONDUCT/#sexual-harassment","title":"Sexual Harassment","text":"

includes visual displays of degrading sexual images, sexually suggestive conduct, offensive remarks of a sexual nature, requests for sexual favors, unwelcome physical contact, and sexual assault.

"},{"location":"CODE_OF_CONDUCT/#unwelcome-behavior","title":"Unwelcome Behavior","text":"

Hard to define? Some questions to ask yourself are:

"},{"location":"CODE_OF_CONDUCT/#unwelcome-sexual-advance","title":"Unwelcome Sexual Advance","text":"

includes requests for sexual favors, and other verbal or physical conduct of a sexual nature, where:

"},{"location":"CODE_OF_CONDUCT/#workplace-bullying","title":"Workplace Bullying","text":"

is a tendency of individuals or groups to use persistent aggressive or unreasonable behavior (e.g. verbal or written abuse, offensive conduct or any interference which undermines or impedes work) against a co-worker or any professional relations.

"},{"location":"CODE_OF_CONDUCT/#work-environment","title":"Work Environment","text":"

is the set of all available means of collaboration, including, but not limited to messages to mailing lists, private correspondence, Web pages, chat channels, phone and video teleconferences, and any kind of face-to-face meetings or discussions.

"},{"location":"CODE_OF_CONDUCT/#incident-procedure","title":"Incident Procedure","text":"

To report incidents or to appeal reports of incidents, send email to Mike Dolan (mdolan@linuxfoundation.org) or Angela Brown (angela@linuxfoundation.org). Please include any available relevant information, including links to any publicly accessible material relating to the matter. Every effort will be made to ensure a safe and collegial environment in which to collaborate on matters relating to the Project. In order to protect the community, the Project reserves the right to take appropriate action, potentially including the removal of an individual from any and all participation in the project. The Project will work towards an equitable resolution in the event of a misunderstanding.

"},{"location":"CODE_OF_CONDUCT/#credits","title":"Credits","text":"

This code is based on the W3C\u2019s Code of Ethics and Professional Conduct with some additions from the Cloud Foundry\u2018s Code of Conduct.

"},{"location":"CONTRIBUTING/","title":"How to contribute","text":"

You are encouraged to contribute to the repository by forking and submitting a pull request.

For significant changes, please open an issue first to discuss the proposed changes to avoid re-work.

(If you are new to GitHub, you might start with a basic tutorial and check out a more detailed guide to pull requests.)

Pull requests will be evaluated by the repository guardians on a schedule and if deemed beneficial will be committed to the main branch. Pull requests should have a descriptive name, include a summary of all changes made in the pull request description, and include unit tests that provide good coverage of the feature or fix. A Continuous Integration (CI) pipeline is executed on all PRs before review and contributors are expected to address all CI issues identified. Where appropriate, PRs that impact the end-user and developer demos in the repo should include updates or extensions to those demos to cover the new capabilities.

If you would like to propose a significant change, please open an issue first to discuss the work with the community.

Contributions are made pursuant to the Developer's Certificate of Origin, available at https://developercertificate.org, and licensed under the Apache License, version 2.0 (Apache-2.0).

"},{"location":"CONTRIBUTING/#development-tools","title":"Development Tools","text":""},{"location":"CONTRIBUTING/#pre-commit","title":"Pre-commit","text":"

A configuration for pre-commit is included in this repository. This is an optional tool to help contributors commit code that follows the formatting requirements enforced by the CI pipeline. Additionally, it can be used to help contributors write descriptive commit messages that can be parsed by changelog generators.

On each commit, pre-commit hooks will run to verify that the committed code complies with ruff and is formatted with black. To install the ruff and black checks:

pre-commit install\n

To install the commit message linter:

pre-commit install --hook-type commit-msg\n
"},{"location":"MAINTAINERS/","title":"Maintainers","text":""},{"location":"MAINTAINERS/#maintainer-scopes-github-roles-and-github-teams","title":"Maintainer Scopes, GitHub Roles and GitHub Teams","text":"

Maintainers are assigned the following scopes in this repository:

Scope | Definition | GitHub Role | GitHub Team
--- | --- | --- | ---
Admin |  | Admin | aries-admins
Maintainer | The GitHub Maintain role | Maintain | aries-cloudagent-python committers
Triage | The GitHub Triage role | Triage | aries triage
Read | The GitHub Read role | Read | Aries Contributors
Read | The GitHub Read role | Read | TOC
Read | The GitHub Read role | Read | aries-framework-go-ext committers

"},{"location":"MAINTAINERS/#active-maintainers","title":"Active Maintainers","text":"

GitHub ID | Name | Scope | LFID | Discord ID | Email | Company Affiliation
--- | --- | --- | --- | --- | --- | ---
andrewwhitehead | Andrew Whitehead | Admin |  |  | cywolf@gmail.com | BC Gov
dbluhm | Daniel Bluhm | Admin |  |  | daniel@indicio.tech | Indicio PBC
dhh1128 | Daniel Hardman | Admin |  |  | daniel.hardman@gmail.com | Provident
shaangill025 | Shaanjot Gill | Maintainer |  |  | gill.shaanjots@gmail.com | BC Gov
swcurran | Stephen Curran | Admin |  |  | swcurran@cloudcompass.ca | BC Gov
TelegramSam | Sam Curren | Maintainer |  |  | telegramsam@gmail.com | Indicio PBC
TimoGlastra | Timo Glastra | Admin |  |  | timo@animo.id | Animo Solutions
WadeBarnes | Wade Barnes | Admin |  |  | wade@neoterictech.ca | BC Gov
usingtechnology | Jason Sherman | Maintainer |  |  | tools@usingtechnolo.gy | BC Gov

"},{"location":"MAINTAINERS/#emeritus-maintainers","title":"Emeritus Maintainers","text":"

Name | GitHub ID | Scope | LFID | Discord ID | Email | Company Affiliation
--- | --- | --- | --- | --- | --- | ---

"},{"location":"MAINTAINERS/#the-duties-of-a-maintainer","title":"The Duties of a Maintainer","text":"

Maintainers are expected to perform the following duties for this repository. The duties are listed in more or less priority order:

"},{"location":"MAINTAINERS/#becoming-a-maintainer","title":"Becoming a Maintainer","text":"

This community welcomes contributions. Interested contributors are encouraged to progress to become maintainers. To become a maintainer, the following steps occur, roughly in order.

"},{"location":"MAINTAINERS/#removing-maintainers","title":"Removing Maintainers","text":"

Being a maintainer is not a status symbol or a title to be carried indefinitely. It will occasionally be necessary and appropriate to move a maintainer to emeritus status. This can occur in the following situations:

The process to move a maintainer from active to emeritus status is comparable to the process for adding a maintainer, outlined above. In the case of voluntary resignation, the Pull Request can be merged following a maintainer PR approval. If the removal is for any other reason, the following steps SHOULD be followed:

Returning to active status from emeritus status uses the same steps as adding a new maintainer. Note that the emeritus maintainer already has the 5 required significant changes as there is no contribution time horizon for those.

"},{"location":"PUBLISHING/","title":"How to Publish a New Version","text":"

The code to be published should be in the main branch. Make sure that all the PRs to go into the release are merged, and decide on the release tag: should it be a release candidate or the final tag, and should it be a major, minor, or patch release, per semver rules?

Once ready to do a release, create a local branch that includes the following updates:

  1. Create a PR branch from an updated main branch.

  2. Update the CHANGELOG.md to add the new release. Only create a new section when working on the first release candidate for a new release. When transitioning from one release candidate to the next, or to an official release, just update the title and date of the change log section.

  3. Include details of the merged PRs included in this release. General process to follow:

  4. Gather the set of PRs since the last release and put them into a list. A good tool to use for this is the github-changelog-generator. Steps:

  5. Create a read-only GitHub token for your account on this page: https://github.com/settings/tokens with a scope of repo / public_repo.
  6. Use a command like the following, adjusting the tag parameters as appropriate. docker run -it --rm -v \"$(pwd)\":/usr/local/src/your-app githubchangeloggenerator/github-changelog-generator --user hyperledger --project aries-cloudagent-python --output 0.11.0rc2.md --since-tag 0.10.4 --future-release 0.11.1rc2 --release-branch main --token <your-token>
  7. In the generated file, use only the PR list -- we don't include the list of closed issues in the Change Log.

In some cases, the approach above fails because of too many API calls. An alternate approach to getting the list of PRs in the right format is to use OpenAI ChatGPT.

Prepare the following ChatGPT request. Don't hit enter yet -- you have to add the data.

Generate from this the github pull request number, the github id of the author and the title of the pull request in a tab-delimited list

Get a list of the merged PRs since the last release by displaying the PR list in the GitHub UI, highlighting/copying the PRs and pasting them below the ChatGPT request, one page after another. Hit <Enter>, let the AI magic work, and you should have a list of the PRs in a nice table with a Copy link that you should click.

Once you have that, open this Google Sheet and highlight the A1 cell and paste in the ChatGPT data. A formula in column E will have the properly formatted changelog entries. Double check the list with the GitHub UI to make sure that ChatGPT isn't messing with you and you have the needed data.

If using ChatGPT doesn't appeal to you, try this scary sed/command line approach:

/Approved/d\n/updated /d\n/^$/d\n/^ [0-9]/d\ns/was merged.*//\n/^@/d\ns# by \\(.*\\) # [\\1](https://github.com/\\1)#\ns/^ //\ns#  \\#\\([0-9]*\\)# [\\#\\1](https://github.com/hyperledger/aries-cloudagent-python/pull/\\1) #\ns/  / /g\n/^Version/d\n/tasks done/d\ns/^/- /\n
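
And if neither appeals, here is a rough Python approximation of the same filtering (a sketch of my own, not a drop-in replacement for the sed script; it assumes the copied PR list arrives on stdin):

import re\nimport sys\n\nfor raw in sys.stdin:\n    line = raw.rstrip(\"\\n\")\n    # drop the GitHub UI noise lines the sed script deletes\n    if (not line or re.search(r\"Approved|updated |tasks done\", line)\n            or re.match(r\"^( [0-9]|@|Version)\", line)):\n        continue\n    line = re.sub(r\"was merged.*\", \"\", line)\n    # link the author: \" by someuser \" -> \" [someuser](https://github.com/someuser)\"\n    line = re.sub(r\" by (.*) \", r\" [\\\\1](https://github.com/\\\\1)\", line)\n    line = re.sub(r\"^ \", \"\", line)\n    # link the PR number: \"  #123\" -> \" [#123](.../pull/123) \"\n    line = re.sub(\n        r\"  #([0-9]*)\",\n        r\" [#\\\\1](https://github.com/hyperledger/aries-cloudagent-python/pull/\\\\1) \",\n        line,\n    )\n    line = line.replace(\"  \", \" \")\n    print(\"- \" + line)\n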

Once you have the list of PRs:

Additional information about the container image publication process can be found in the document Container Images and Github Actions.

  1. Update the ACA-Py Read The Docs site by building the new \"latest\" (main branch) and activating and building the new release. Appropriate permissions are required to publish the new documentation version.

  2. Update the https://aca-py.org website with the latest documentation by creating a PR and tag of the latest documentation from this site. Details are provided in the aries-acapy-docs repository.

"},{"location":"SECURITY/","title":"Hyperledger Security Policy","text":""},{"location":"SECURITY/#reporting-a-security-bug","title":"Reporting a Security Bug","text":"

If you think you have discovered a security issue in any of the Hyperledger projects, we'd love to hear from you. We take all security bugs seriously; if one is confirmed upon investigation, we will patch it within a reasonable amount of time and release a public security bulletin discussing the impact and crediting the discoverer.

There are two ways to report a security bug. The easiest is to email a description of the flaw and any related information (e.g. reproduction steps, version) to security at hyperledger dot org.

The other way is to file a confidential security bug in our JIRA bug tracking system. Be sure to set the \u201cSecurity Level\u201d to \u201cSecurity issue\u201d.

The process by which the Hyperledger Security Team handles security bugs is documented further in our Defect Response page on our wiki.

"},{"location":"UpdateRTD/","title":"Managing Aries Cloud Agent Python Read The Docs Documentation","text":"

This document describes how to maintain the Read The Docs documentation that is generated from the ACA-Py code base. As the structure of the ACA-Py code evolves, the RTD files need to be regenerated and possibly updated, as described here.

"},{"location":"UpdateRTD/#generating-aca-py-read-the-docs-rtd-documentation","title":"Generating ACA-Py Read The Docs (RTD) documentation","text":""},{"location":"UpdateRTD/#before-you-start","title":"Before you start","text":"

To generate and view the RTD documentation locally for testing, you must install Sphinx and the Sphinx RTD theme. Follow the instructions on the respective pages to install and verify the installation on your system.

"},{"location":"UpdateRTD/#generate-module-files","title":"Generate Module Files","text":"

To rebuild the project and settings from scratch (you'll need to move the generated index file up a level):

rm -rf generated; sphinx-apidoc -f -M -o ./generated ../aries_cloudagent/ $(find ../aries_cloudagent/ -name '*tests*')

Note that the find command is used to exclude the test Python files from the RTD documentation.

Check the git status in your repo to see if the generator updates, adds or removes any existing RTD modules.

"},{"location":"UpdateRTD/#reviewing-the-files-locally","title":"Reviewing the files locally","text":"

To auto-generate the module documentation locally run:

sphinx-build -b html -a -E -c ./ ./ ./_build\n

Once generated, go into the _build folder and open index.html in a browser. Note that the _build is .gitignore'd and so will not be part of a git push.

"},{"location":"UpdateRTD/#look-for-errors","title":"Look for Errors","text":"

This is the hard part: looking for errors in the docstrings added by devs. Some tips:

Other than that, please investigate and fix things that you find. If there are fixes, it's usually to adhere to the rules around processing docstrings, and especially around JSON samples.

"},{"location":"UpdateRTD/#checking-for-missing-modules","title":"Checking for missing modules","text":"

The file index.rst in the ACA-Py docs folder drives the RTD generation. It picks up all the modules in the source code, starting from the root ../aries_cloudagent folder. However, some modules are not picked up automatically from the root and have to be manually added to index.rst. To do that:

If any are missing, you likely need to add them to the index.rst file in the toctree section of the file. You will see there are already several instances of that, notably \"connections\" and \"protocols\".

"},{"location":"UpdateRTD/#updating-the-readthedocsorg-site","title":"Updating the readthedocs.org site","text":"

The RTD documentation is not currently auto-generated, so a manual re-generation of the documentation is still required.

TODO: Automate this when new tags are applied to the repository.

"},{"location":"aca-py.org/","title":"Welcome!","text":"

Welcome to the Aries Cloud Agent Python documentation site. On this site you will find documentation for recent releases of ACA-Py. You'll find a few of the older versions of ACA-Py (pre-0.8.0), all versions since 0.8.0, and the main branch, which is the latest and greatest.

All of the documentation here is extracted from the Aries Cloud Agent Python repository. If you want to contribute to the documentation, please start there.

Ready to go? Scan the tabs in the page header to find the documentation you need now!

"},{"location":"aca-py.org/#code-internals-documentation","title":"Code Internals Documentation","text":"

In addition to this documentation site, the ACA-Py community also maintains an ACA-Py internals documentation site. The internals documentation consists of the docstrings extracted from the ACA-Py Python code and covers all of the (non-test) modules in the codebase. Check it out on the Aries Cloud Agent-Python ReadTheDocs site. As with this site, the ReadTheDocs documentation is version specific.

Got questions?

"},{"location":"assets/","title":"Assets Folder for Documentation","text":"

Put any assets (images, source for images, videos, etc.) in this folder to be referenced in the various documents for this repo.

"},{"location":"assets/#plantuml-source-and-images","title":"Plantuml Source and Images","text":"

Plantuml diagrams are stored in this folder in source form in files ending in .puml and are generated manually using the ./genPlantuml script. The script uses a docker image from docker-hub and can be run without downloading any dependencies.

If you don't want to use the script, download plantuml and a command line utility and use that for the plantuml generation. I preferred not to have any dependencies (other than docker) and couldn't find a nice way to run plantuml headless from the command line.

"},{"location":"assets/#to-do","title":"To Do","text":"

It would be better to use a local Dockerfile vs. one found on Docker Hub. The one I did find was simple and straightforward.

I couldn't tell if the SVG generation was working, so I just went with PNG. I'm not sure which would be better.

"},{"location":"demo/","title":"Aries Cloud Agent Python (ACA-Py) Demos","text":"

There are several demos available for ACA-Py, mostly (but not only) aimed at developers learning how to deploy an instance of the agent and an ACA-Py controller to implement an application.

"},{"location":"demo/#table-of-contents","title":"Table of Contents","text":""},{"location":"demo/#the-alicefaber-python-demo","title":"The Alice/Faber Python demo","text":"

The Alice/Faber demo is the (in)famous first verifiable credentials demo. Alice, a former student of Faber College (\"Knowledge is Good\"), connects with the College, is issued a credential about her degree and then is asked by the College for a proof. There are a variety of ways of running the demo. The easiest is in your browser using a site (\"Play with VON\") that lets you run docker containers without installing anything. Alternatively, you can run locally on docker (our recommendation), or using python on your local machine. Each approach is covered below.

"},{"location":"demo/#running-in-a-browser","title":"Running in a Browser","text":"

In your browser, go to the docker playground service Play with Docker. On the title screen, click \"Start\". On the next screen, click (in the left menu) \"+Add a new instance\". That will start up a terminal in your browser. Run the following commands to start the Faber agent:

git clone https://github.com/hyperledger/aries-cloudagent-python\ncd aries-cloudagent-python/demo\nLEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber\n

Now to start Alice's agent. Click the \"+Add a new instance\" button again to open another terminal session. Run the following commands to start Alice's agent:

git clone https://github.com/hyperledger/aries-cloudagent-python\ncd aries-cloudagent-python/demo\nLEDGER_URL=http://test.bcovrin.vonx.io ./run_demo alice\n

Alice's agent is now running.

Jump to the Follow the Script section below for further instructions.

"},{"location":"demo/#running-in-docker","title":"Running in Docker","text":"

Running the demo in docker requires having a von-network (a Hyperledger Indy public ledger sandbox) instance running in docker locally. See the VON Network Tutorial for guidance on starting and stopping your own local Hyperledger Indy instance.

Open three bash shells. For Windows users, git-bash is highly recommended. bash is the default shell in Linux and Mac terminal sessions.

In the first terminal window, start von-network by following the Building and Starting instructions.

In the second terminal, change directory into the demo directory of your clone of the Aries Cloud Agent Python repository. Start the faber agent by issuing the following command:

  ./run_demo faber\n

In the third terminal, change directory into the demo directory of your clone of the Aries Cloud Agent Python repository. Start the alice agent by issuing the following command:

  ./run_demo alice\n

Jump to the Follow the Script section below for further instructions.

"},{"location":"demo/#running-locally","title":"Running Locally","text":"

The following is an approach to running the Alice and Faber demo using Python3 running on a bare machine. There are other ways to run the components, but this covers the general approach.

We don't recommend this approach if you are just trying this demo, as you will likely run into issues with the specific setup of your machine.

"},{"location":"demo/#installing-prerequisites","title":"Installing Prerequisites","text":"

We assume you have a running Python 3 environment. To install the prerequisites specific to running the agent/controller examples in your Python environment, run the following command from this repo's demo folder. The precise command to run may vary based on your Python environment setup.

pip3 install -r demo/requirements.txt\n

While that process will include the installation of the Indy python prerequisite, you still have to build and install the libindy code for your platform. Follow the installation instructions in the indy-sdk repo for your platform.

"},{"location":"demo/#start-a-local-indy-ledger","title":"Start a local Indy ledger","text":"

Start a local von-network Hyperledger Indy network running in Docker by following the VON Network Building and Starting instructions.

We strongly recommend you use Docker for the local Indy network until you really, really need to know the details of running an Indy Node instance on a bare machine.

"},{"location":"demo/#genesis-file-handling","title":"Genesis File handling","text":"

Assuming you followed our advice and are using a VON Network instance of Hyperledger Indy, you can ignore this section. If you started the Indy ledger without using VON Network, this information might be helpful.

An Aries agent (or other client) connecting to an Indy ledger must know the contents of the genesis file for the ledger. The genesis file lets the agent/client know the IP addresses of the initial nodes of the ledger, and the agent/client sends ledger requests to those IP addresses. When using the indy-sdk ledger, look for the instructions in that repo for how to find/update the ledger genesis file, and note the path to that file on your local system.

The environment variable GENESIS_FILE is used to let the Aries demo agents know the location of the genesis file. Use the path to that file as the value of the GENESIS_FILE environment variable in the instructions below. You might want to copy that file to be local to the demo so the path is shorter.

"},{"location":"demo/#run-a-local-postgres-instance","title":"Run a local Postgres instance","text":"

The demo uses a postgres database for wallet persistence. Use the Docker Hub certified postgres image to start up a postgres instance to be used for the wallet storage:

docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d -p 5432:5432 postgres -c 'log_statement=all' -c 'logging_collector=on' -c 'log_destination=stderr'\n
"},{"location":"demo/#optional-run-a-von-network-ledger-browser","title":"Optional: Run a von-network ledger browser","text":"

If you followed our advice and are using a VON Network instance of Hyperledger Indy, you can ignore this section, as you already have a Ledger browser running, accessible on http://localhost:9000.

If you started the Indy ledger without using VON Network, and you want to be able to browse your local ledger as you run the demo, clone the von-network repo, go into the root of the cloned instance and run the following command, replacing the /path/to/local-genesis.txt with a path to the same genesis file as was used in starting the ledger.

GENESIS_FILE=/path/to/local-genesis.txt PORT=9000 REGISTER_NEW_DIDS=true python -m server.server\n
"},{"location":"demo/#run-the-alice-and-faber-controllersagents","title":"Run the Alice and Faber Controllers/Agents","text":"

With the rest of the pieces running, you can run the Alice and Faber controllers and agents. To do so, cd into the demo folder of your clone of this repo in two terminal windows.

If you are using a VON Network instance of Hyperledger Indy, run the following commands:

DEFAULT_POSTGRES=true python3 -m runners.faber --port 8020\n
DEFAULT_POSTGRES=true python3 -m runners.alice --port 8030\n

If you started the Indy ledger without using VON Network, use the following commands, replacing the /path/to/local-genesis.txt with the one for your configuration.

GENESIS_FILE=/path/to/local-genesis.txt DEFAULT_POSTGRES=true python3 -m runners.faber --port 8020\n
GENESIS_FILE=/path/to/local-genesis.txt DEFAULT_POSTGRES=true python3 -m runners.alice --port 8030\n

Note that Alice and Faber will each use 5 ports, e.g., using the parameter ... --port 8020 actually uses ports 8020 through 8024. Feel free to use different ports if you want.

Everything running? See the Follow the Script section below for further instructions.

If the demo fails with an error that references the genesis file, a timeout connecting to the Indy Pool, or an Indy 307 error, it's likely a problem with the genesis file handling. Things to check:

"},{"location":"demo/#follow-the-script","title":"Follow The Script","text":"

With both the Alice and Faber agents started, go to the Faber terminal window. The Faber agent has created and displayed an invitation. Copy this invitation and paste it at the Alice prompt. The agents will connect and then show a menu of options:

Faber:

    (1) Issue Credential\n    (2) Send Proof Request\n    (3) Send Message\n    (4) Create New Invitation\n    (T) Toggle tracing on credential/proof exchange\n    (X) Exit?\n

Alice:

    (3) Send Message\n    (4) Input New Invitation\n    (X) Exit?\n
"},{"location":"demo/#exchanging-messages","title":"Exchanging Messages","text":"

Feel free to use the \"3\" option to send messages back and forth between the agents. Fun, eh? Those are secure, end-to-end encrypted messages.

"},{"location":"demo/#issuing-and-proving-credentials","title":"Issuing and Proving Credentials","text":"

When ready to test the credentials exchange protocols, go to the Faber prompt, enter \"1\" to send a credential, and then \"2\" to request a proof.

You don't need to do anything with Alice's agent - her agent is implemented to automatically receive credentials and respond to proof requests.

Note there is an option \"2a\" to initiate a connectionless proof - you can execute this option but it will only work end-to-end when connecting to Faber from a mobile agent.
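
For the curious: a connectionless proof request is simply a presentation request created without a connection id, which the controller then delivers out-of-band, for example as a QR code. A rough sketch against the admin API (a hypothetical example, assuming faber's admin API is on port 8021 in insecure mode, and borrowing the request shape used elsewhere in the demo):

import requests\n\nADMIN_URL = \"http://localhost:8021\"  # assumed: faber's admin API, insecure mode\n\nindy_proof_request = {\n    \"name\": \"Proof of Education\",\n    \"version\": \"1.0\",\n    \"requested_attributes\": {\n        \"0_name_uuid\": {\"name\": \"name\", \"restrictions\": [{\"schema_name\": \"degree schema\"}]}\n    },\n    \"requested_predicates\": {},\n}\n\n# create the request with no connection_id; the returned record holds the\n# pres_request message that must be delivered to the holder out-of-band\nresp = requests.post(\n    f\"{ADMIN_URL}/present-proof-2.0/create-request\",\n    json={\"presentation_request\": {\"indy\": indy_proof_request}},\n)\nprint(resp.json()[\"pres_ex_id\"])\n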

"},{"location":"demo/#additional-options-in-the-alicefaber-demo","title":"Additional Options in the Alice/Faber demo","text":"

You can enable support for various ACA-Py features by providing additional command-line arguments when starting up alice or faber.

Note that when the controller starts up the agent, it prints out the ACA-Py startup command with all parameters - you can inspect this command to see what parameters are provided in each case. For more details on the parameters, just start ACA-Py with the --help parameter, for example:

./scripts/run_docker start --help\n
"},{"location":"demo/#revocation","title":"Revocation","text":"

To enable support for revoking credentials, run the faber demo with the --revocation option:

./run_demo faber --revocation\n

Note that you don't specify this option with alice because it's only applicable for the credential issuer (who has to enable revocation when creating a credential definition, and explicitly revoke credentials as appropriate; alice doesn't have to do anything special when revocation is enabled).

You need to run an AnonCreds revocation registry tails server in order to support revocation - the details are described in the Alice gets a Phone demo instructions.

Faber will set up support for revocation automatically, and you will see an extra option in faber's menu to revoke a credential:

    (1) Issue Credential\n    (2) Send Proof Request\n    (3) Send Message\n    (4) Create New Invitation\n    (5) Revoke Credential\n    (6) Publish Revocations\n    (7) Rotate Revocation Registry\n    (8) List Revocation Registries\n    (T) Toggle tracing on credential/proof exchange\n    (X) Exit?\n

When you issue a credential, make a note of the `Revocation registry ID` and `Credential revocation ID`:

Faber      | Revocation registry ID: WGmUNAdH2ZfeGvacFoMVVP:4:WGmUNAdH2ZfeGvacFoMVVP:3:CL:38:Faber.Agent.degree_schema:CL_ACCUM:15ca49ed-1250-4608-9e8f-c0d52d7260c3\nFaber      | Credential revocation ID: 1\n

When you revoke a credential you will need to provide those values:

[1/2/3/4/5/6/7/8/T/X] 5\n\nEnter revocation registry ID: WGmUNAdH2ZfeGvacFoMVVP:4:WGmUNAdH2ZfeGvacFoMVVP:3:CL:38:Faber.Agent.degree_schema:CL_ACCUM:15ca49ed-1250-4608-9e8f-c0d52d7260c3\nEnter credential revocation ID: 1\nPublish now? [Y/N]: y\n
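
The menu option is a thin wrapper over the admin API; a minimal sketch of the same revoke-and-publish step done directly (assuming faber's admin API is on port 8021 in insecure mode, with the IDs noted above):

import requests\n\nADMIN_URL = \"http://localhost:8021\"  # assumed: faber's admin API, insecure mode\n\n# the values noted from the issuance log\nrev_reg_id = \"WGmUNAdH2ZfeGvacFoMVVP:4:WGmUNAdH2ZfeGvacFoMVVP:3:CL:38:Faber.Agent.degree_schema:CL_ACCUM:15ca49ed-1250-4608-9e8f-c0d52d7260c3\"\ncred_rev_id = \"1\"\n\n# revoke the credential and publish the revocation to the ledger in one call\nresp = requests.post(\n    f\"{ADMIN_URL}/revocation/revoke\",\n    json={\"rev_reg_id\": rev_reg_id, \"cred_rev_id\": cred_rev_id, \"publish\": True},\n)\nresp.raise_for_status()\n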

Note that you need to publish the revocation information to the ledger. Once you've revoked a credential, any proof which uses this credential will fail to verify.

Rotating the revocation registry will decommission any \"ready\" registry records and create 2 new registry records. You can watch in the logs as the records are created and transition to 'active'. There should always be 2 'active' revocation registries - one working and one for hot-swap. Note that revocation information can still be published from decommissioned registries.

You can also list the created registries, filtering by current state: 'init', 'generated', 'posted', 'active', 'full', 'decommissioned'.

### DID Exchange

You can enable DID Exchange using the `--did-exchange` parameter for the `alice` and `faber` demos.

This will use the new DID Exchange protocol when establishing connections between the agents, rather than the older Connection protocol. There is no other effect on the operation of the agents.

With DID Exchange, you can also enable use of the inviter's public DID for invitations, multi-use invitations, and connection re-use:

- `--public-did-connections` - use the inviter's public DID in invitations, and allow use of implicit invitations
- `--reuse-connections` - support connection re-use (the invitee will reuse an existing connection if it uses the same DID as the new invitation)
- `--multi-use-invitations` - the inviter will issue multi-use invitations

### Endorser

This is described in [Endorser.md](Endorser.md).

### Run Indy-SDK Backend

This runs using the older (and not recommended) indy-sdk libraries instead of [Aries Askar](https://github.com/hyperledger/aries-askar):

```bash
./run_demo faber --wallet-type indy
```

"},{"location":"demo/#mediation","title":"Mediation","text":"

To enable mediation, run the alice or faber demo with the --mediation option:

./run_demo faber --mediation\n

This will start up a \"mediator\" agent with Alice or Faber and automatically set the alice/faber connection to use the mediator.

"},{"location":"demo/#multi-ledger","title":"Multi-ledger","text":"

To enable multiple ledger mode, run the alice or faber demo with the --multi-ledger option:

./run_demo faber --multi-ledger\n

The configuration file for setting up multiple ledgers (for the demo) can be found at ./demo/multiple_ledger_config.yml.

"},{"location":"demo/#multi-tenancy","title":"Multi-tenancy","text":"

To enable support for multi-tenancy, run the alice or faber demo with the --multitenant option:

./run_demo faber --multitenant\n

(This option can be used with both (or either) alice and/or faber.)

You will see an additional menu option to create new sub-wallets (or they can be considered to be \"virtual agents\").

Faber:

    (1) Issue Credential\n    (2) Send Proof Request\n    (3) Send Message\n    (4) Create New Invitation\n    (W) Create and/or Enable Wallet\n    (T) Toggle tracing on credential/proof exchange\n    (X) Exit?\n

Alice:

    (3) Send Message\n    (4) Input New Invitation\n    (W) Create and/or Enable Wallet\n    (X) Exit?\n

When you create a new wallet, you just need to provide the wallet name. (If you provide the name of an existing wallet then the controller will \"activate\" that wallet and make it the current wallet.)

[1/2/3/4/W/T/X] w\n\nEnter wallet name: new_wallet_12\n\nFaber      | Register or switch to wallet new_wallet_12\nFaber      | Created new profile\nFaber      | Profile backend: indy\nFaber      | Profile name: new_wallet_12\nFaber      | No public DID\n... etc\n

Note that faber will create a public DID for this wallet, and will create a schema and credential definition.

Once you have created a new wallet, you must establish a connection between alice and faber (remember that this is a new \"virtual agent\" and doesn't know anything about connections established for other \"agents\").

In faber, create a new invitation:

[1/2/3/4/W/T/X] 4\n\n(... creates a new invitation ...)\n

In alice, accept the invitation:

[1/2/3/4/W/T/X] 4\n\n(... enter the new invitation string ...)\n

You can inspect the additional multi-tenancy admin APIs (i.e. the \"agency API\") by opening either agent's swagger page in your browser.


Note that with multi-tenancy enabled:

Documentation on ACA-Py's multi-tenancy support can be found here.
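
For reference, the \"W\" menu option corresponds to ACA-Py's multitenancy admin API; a minimal sketch of creating a sub-wallet directly (assuming the base agent's admin API is on port 8021 in insecure mode; the wallet name and key are hypothetical):

import requests\n\nADMIN_URL = \"http://localhost:8021\"  # assumed: the base (agency) admin API\n\n# create a new sub-wallet; the response includes an auth token for that wallet\nresp = requests.post(\n    f\"{ADMIN_URL}/multitenancy/wallet\",\n    json={\n        \"wallet_name\": \"new_wallet_12\",\n        \"wallet_key\": \"MySecretKey123\",\n        \"wallet_type\": \"askar\",\n        \"label\": \"New sub-wallet\",\n    },\n)\ntoken = resp.json()[\"token\"]\n\n# subsequent admin calls on behalf of the sub-wallet pass the token as a Bearer header\nheaders = {\"Authorization\": f\"Bearer {token}\"}\nprint(requests.get(f\"{ADMIN_URL}/connections\", headers=headers).json())\n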

"},{"location":"demo/#multi-tenancy-with-mediation","title":"Multi-tenancy with Mediation!!!","text":"

There are two options for configuring mediation with multi-tenancy, documented here.

This demo implements option #2 - each sub-wallet is configured with a separate connection to the mediator.

Run the demo (Alice or Faber) specifying both options:

./run_demo faber --multitenant --mediation\n

This works exactly like vanilla multi-tenancy, except that all connections are mediated.

"},{"location":"demo/#other-environment-settings","title":"Other Environment Settings","text":"

The agents run on a pre-defined set of ports, however occasionally your local system may already be using one of these ports. (For example MacOS recently decided to use 8021 for the ftp proxy service.)

To override the default port settings:

AGENT_PORT_OVERRIDE=8010 ./run_demo faber\n

(The agent requires up to 10 available ports.)

To pass extra arguments to the agent (for example):

DEMO_EXTRA_AGENT_ARGS=\"[\\\"--emit-did-peer-2\\\"]\" ./run_demo faber --did-exchange --reuse-connections\n
"},{"location":"demo/#learning-about-the-alicefaber-code","title":"Learning about the Alice/Faber code","text":"

These Alice and Faber scripts (in the demo/runners folder) implement the controller and run the agent as a sub-process (see the documentation for aca-py). The controller publishes a REST service to receive web hook callbacks from their agent. Note that this architecture, running the agent as a sub-process, is a variation on the documented architecture of running the controller and agent as separate processes/containers.
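
As a minimal sketch of the webhook side of a controller (my own example, not the demo code; the port is arbitrary, and ACA-Py posts events to <webhook-url>/topic/{topic}/):

from aiohttp import web\n\nasync def handle_webhook(request):\n    topic = request.match_info[\"topic\"]\n    payload = await request.json()\n    # react to agent events, e.g. connection or credential exchange state changes\n    print(f\"webhook {topic}: state={payload.get('state')}\")\n    return web.Response(status=200)\n\napp = web.Application()\n# with the agent started using --webhook-url http://localhost:8022/webhooks\napp.add_routes([web.post(\"/webhooks/topic/{topic}/\", handle_webhook)])\nweb.run_app(app, port=8022)\n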

The controllers for this demo can be found in the alice.py and faber.py files. Alice and Faber are instances of the agent class found in agent.py.

"},{"location":"demo/#openapi-swagger-demo","title":"OpenAPI (Swagger) Demo","text":"

Developing an ACA-Py controller is much like developing a web app that uses a REST API. As you develop, you will want an easy way to test out the behaviour of the API. That's where the industry-standard OpenAPI (aka Swagger) UI comes in. ACA-Py (optionally) exposes an OpenAPI UI that you can use to learn the ins and outs of the API. This Aries OpenAPI demo shows how you can use the OpenAPI UI with an ACA-Py agent by walking through connecting, issuing a credential, and presenting a proof.
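
Anything you can do in the OpenAPI UI can also be scripted; for example, a quick sketch of the same calls from Python (assuming the faber demo's admin API on port 8021 in insecure mode):

import requests\n\nADMIN_URL = \"http://localhost:8021\"  # assumed: faber's admin API, insecure mode\n\n# the same GET endpoints the OpenAPI UI exercises\nstatus = requests.get(f\"{ADMIN_URL}/status\").json()\nconnections = requests.get(f\"{ADMIN_URL}/connections\").json()\nprint(status.get(\"label\"), \"has\", len(connections[\"results\"]), \"connections\")\n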

"},{"location":"demo/#performance-demo","title":"Performance Demo","text":"

Another example in the demo/runners folder is performance.py, which is used to test the performance of interacting agents. The script starts up agents for Alice and Faber, initializes them, and then runs through an interaction some number of times. In this case, Faber issues a credential to Alice 300 times.

To run the demo, make sure that you shut down any running Alice/Faber agents. Then, follow the same steps to start the Alice/Faber demo, but:

The script starts both agents, runs the performance test, spits out performance results and shuts down the agents. Note that this is just one demonstration of how performance metrics tracking can be done with ACA-Py.

A second version of the performance test can be run by adding the parameter --routing to the invocation above. The parameter triggers the example to run with Alice using a routing agent such that all messages pass through the routing agent between Alice and Faber. This is a good, simple example of how routing can be implemented with DIDComm agents.

You can also run the demo against a postgres database using the following:

./run_demo performance --arg-file demo/postgres-indy-args.yml\n

(Obviously, you need to be running a postgres database - the command to start postgres is in the yml file provided above.)

You can tweak the number of credentials issued using the --count and --batch parameters, and you can run against an Askar database using the --wallet-type askar option (or against indy-sdk using --wallet-type indy).

An example full set of options is:

./run_demo performance --arg-file demo/postgres-indy-args.yml -c 10000 -b 10 --wallet-type askar\n

Or:

./run_demo performance --arg-file demo/postgres-indy-args.yml -c 10000 -b 10 --wallet-type indy\n
"},{"location":"demo/#coding-challenge-adding-acme","title":"Coding Challenge: Adding ACME","text":"

Now that you have a solid foundation in using ACA-Py, it's time for a coding challenge. In this challenge, we extend the Alice-Faber command line demo by adding in ACME Corp, a place where Alice wants to work. The demo adds:

The framework for the code is in the acme.py file, but the code is incomplete. Using the knowledge you gained from running the demo and viewing the alice.py and faber.py code, fill in the blanks for the code. When you are ready to test your work:

All done? Checkout how we added the missing code segments here.

"},{"location":"demo/AcmeDemoWorkshop/","title":"Acme Controller Workshop","text":"

In this workshop we will add some functionality to a third participant in the Alice/Faber drama - namely, Acme Inc. After completing her education at Faber College, Alice is going to apply for a job at Acme Inc. To do this she must provide proof of education (once she has completed the interview and other non-Indy tasks), and then Acme will issue her an employment credential.

Note that an updated Acme controller is available here: https://github.com/ianco/aries-cloudagent-python/tree/acme_workshop/demo if you just want to skip ahead. There is also an alternate solution with some additional functionality available here: https://github.com/ianco/aries-cloudagent-python/tree/agent_workshop/demo

"},{"location":"demo/AcmeDemoWorkshop/#preview-of-the-acme-controller","title":"Preview of the Acme Controller","text":"

There is already a skeleton of the Acme controller in place; you can run it as follows. (Note that beyond establishing a connection, it doesn't actually do anything yet.)

To run the Acme controller template, first run Alice and Faber so that Alice can prove her education experience:

Open 2 bash shells, and in each run:

git clone https://github.com/hyperledger/aries-cloudagent-python.git\ncd aries-cloudagent-python/demo\n

In one shell run Faber:

LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber\n

... and in the second shell run Alice:

LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo alice\n

When Faber has produced an invitation, copy it over to Alice.

Then, in the Faber shell, select option 1 to issue a credential to Alice. (You can select option 2 if you like, to confirm via proof.)

Then, in the Faber shell, enter X to exit the controller, and then run the Acme controller:

LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo acme\n

In the Alice shell, select option 4 (to enter a new invitation) and then copy over Acme's invitation once it's available.

Then, in the Acme shell, you can select option 2 and then option 1, which don't do anything ... yet!!!

"},{"location":"demo/AcmeDemoWorkshop/#asking-alice-for-a-proof-of-education","title":"Asking Alice for a Proof of Education","text":"

In the Acme code acme.py we are going to add code to issue a proof request to Alice, and then validate the received proof.

First, add the following import statements and constants that we will need near the top of acme.py:

import random\n\nfrom datetime import date\nfrom uuid import uuid4\n
TAILS_FILE_COUNT = int(os.getenv(\"TAILS_FILE_COUNT\", 100))\nCRED_PREVIEW_TYPE = \"https://didcomm.org/issue-credential/2.0/credential-preview\"\n

Next locate the code that is triggered by option 2:

            elif option == \"2\":\n                log_status(\"#20 Request proof of degree from alice\")\n                # TODO presentation requests\n

Replace the # TODO comment with the following code:

                req_attrs = [\n                    {\n                        \"name\": \"name\",\n                        \"restrictions\": [{\"schema_name\": \"degree schema\"}]\n                    },\n                    {\n                        \"name\": \"date\",\n                        \"restrictions\": [{\"schema_name\": \"degree schema\"}]\n                    },\n                    {\n                        \"name\": \"degree\",\n                        \"restrictions\": [{\"schema_name\": \"degree schema\"}]\n                    }\n                ]\n                req_preds = []\n                indy_proof_request = {\n                    \"name\": \"Proof of Education\",\n                    \"version\": \"1.0\",\n                    \"nonce\": str(uuid4().int),\n                    \"requested_attributes\": {\n                        f\"0_{req_attr['name']}_uuid\": req_attr\n                        for req_attr in req_attrs\n                    },\n                    \"requested_predicates\": {}\n                }\n                proof_request_web_request = {\n                    \"connection_id\": agent.connection_id,\n                    \"presentation_request\": {\"indy\": indy_proof_request},\n                }\n                # this sends the request to our agent, which forwards it to Alice\n                # (based on the connection_id)\n                await agent.admin_POST(\n                    \"/present-proof-2.0/send-request\",\n                    proof_request_web_request\n                )\n

Now we need to handle receipt of the proof. Locate the code that handles received proofs (this is in a webhook callback):

        if state == \"presentation-received\":\n            # TODO handle received presentations\n            pass\n

then replace the # TODO comment and the pass statement:

            log_status(\"#27 Process the proof provided by X\")\n            log_status(\"#28 Check if proof is valid\")\n            proof = await self.admin_POST(\n                f\"/present-proof-2.0/records/{pres_ex_id}/verify-presentation\"\n            )\n            self.log(\"Proof = \", proof[\"verified\"])\n\n            # if presentation is a degree schema (proof of education),\n            # check values received\n            pres_req = message[\"by_format\"][\"pres_request\"][\"indy\"]\n            pres = message[\"by_format\"][\"pres\"][\"indy\"]\n            is_proof_of_education = (\n                pres_req[\"name\"] == \"Proof of Education\"\n            )\n            if is_proof_of_education:\n                log_status(\"#28.1 Received proof of education, check claims\")\n                for (referent, attr_spec) in pres_req[\"requested_attributes\"].items():\n                    if referent in pres['requested_proof']['revealed_attrs']:\n                        self.log(\n                            f\"{attr_spec['name']}: \"\n                            f\"{pres['requested_proof']['revealed_attrs'][referent]['raw']}\"\n                        )\n                    else:\n                        self.log(\n                            f\"{attr_spec['name']}: \"\n                            \"(attribute not revealed)\"\n                        )\n                for id_spec in pres[\"identifiers\"]:\n                    # just print out the schema/cred def id's of presented claims\n                    self.log(f\"schema_id: {id_spec['schema_id']}\")\n                    self.log(f\"cred_def_id {id_spec['cred_def_id']}\")\n                # TODO placeholder for the next step\n            else:\n                # in case there are any other kinds of proofs received\n                self.log(\"#28.1 Received \", pres_req[\"name\"])\n

Right now this just verifies the proof received and prints out the attributes it reveals, but in \"real life\" your application could do something useful with this information.

Now you can run the Faber/Alice/Acme script from the \"Preview of the Acme Controller\" section above, and you should see Acme receive a proof from Alice!

"},{"location":"demo/AcmeDemoWorkshop/#issuing-alice-a-work-credential","title":"Issuing Alice a Work Credential","text":"

Now we can issue a work credential to Alice!

There are two options for this. We can (a) add code under option 1 to issue the credential, or (b) we can automatically issue this credential on receipt of the education proof.

We're going to do option (a), but you can try to implement option (b) as homework. You have most of the information you need from the proof response!
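
If you do attempt option (b), one rough sketch (a suggestion, not the workshop solution) is to send the offer right where the education proof checks out -- that is, at the # TODO placeholder in the proof-handling webhook shown earlier -- assuming the controller has stashed the credential definition id (e.g. as self.cred_def_id) and set up self.cred_attrs as in option (a) below:

            # replaces the \"# TODO placeholder for the next step\" in the webhook above\n            cred_def_id = self.cred_def_id  # assumes the controller stashed this at startup\n            cred_preview = {\n                \"@type\": CRED_PREVIEW_TYPE,\n                \"attributes\": [\n                    {\"name\": n, \"value\": v}\n                    for (n, v) in self.cred_attrs[cred_def_id].items()\n                ],\n            }\n            offer_request = {\n                \"connection_id\": self.connection_id,\n                \"comment\": f\"Offer on cred def id {cred_def_id}\",\n                \"credential_preview\": cred_preview,\n                \"filter\": {\"indy\": {\"cred_def_id\": cred_def_id}},\n            }\n            await self.admin_POST(\"/issue-credential-2.0/send-offer\", offer_request)\n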

First, though, we need to register a schema and credential definition. Find this code:

        # acme_schema_name = \"employee id schema\"\n        # acme_schema_attrs = [\"employee_id\", \"name\", \"date\", \"position\"]\n        await acme_agent.initialize(\n            the_agent=agent,\n            # schema_name=acme_schema_name,\n            # schema_attrs=acme_schema_attrs,\n        )\n\n        # TODO publish schema and cred def\n

... and uncomment the code lines. Replace the # TODO comment with the following code:

        with log_timer(\"Publish schema and cred def duration:\"):\n            # define schema\n            version = format(\n                \"%d.%d.%d\"\n                % (\n                    random.randint(1, 101),\n                    random.randint(1, 101),\n                    random.randint(1, 101),\n                )\n            )\n            # register schema and cred def\n            (schema_id, cred_def_id) = await agent.register_schema_and_creddef(\n                \"employee id schema\",\n                version,\n                [\"employee_id\", \"name\", \"date\", \"position\"],\n                support_revocation=False,\n                revocation_registry_size=TAILS_FILE_COUNT,\n            )\n

For option (1) we want to replace the # TODO comment here:

            elif option == \"1\":\n                log_status(\"#13 Issue credential offer to X\")\n                # TODO credential offers\n

with the following code:

                agent.cred_attrs[cred_def_id] = {\n                    \"employee_id\": \"ACME0009\",\n                    \"name\": \"Alice Smith\",\n                    \"date\": date.isoformat(date.today()),\n                    \"position\": \"CEO\"\n                }\n                cred_preview = {\n                    \"@type\": CRED_PREVIEW_TYPE,\n                    \"attributes\": [\n                        {\"name\": n, \"value\": v}\n                        for (n, v) in agent.cred_attrs[cred_def_id].items()\n                    ],\n                }\n                offer_request = {\n                    \"connection_id\": agent.connection_id,\n                    \"comment\": f\"Offer on cred def id {cred_def_id}\",\n                    \"credential_preview\": cred_preview,\n                    \"filter\": {\"indy\": {\"cred_def_id\": cred_def_id}},\n                }\n                await agent.admin_POST(\n                    \"/issue-credential-2.0/send-offer\", offer_request\n                )\n

... and then locate the code that handles the credential request callback:

        if state == \"request-received\":\n            # TODO issue credentials based on offer preview in cred ex record\n            pass\n

... and replace the # TODO comment and pass statement with the following code to issue the credential as Acme offered it:

            # issue credentials based on offer preview in cred ex record\n            if not message.get(\"auto_issue\"):\n                await self.admin_POST(\n                    f\"/issue-credential-2.0/records/{cred_ex_id}/issue\",\n                    {\"comment\": f\"Issuing credential, exchange {cred_ex_id}\"},\n                )\n

Now you can run the Faber/Alice/Acme steps again. You should be able to receive a proof and then issue a credential to Alice.

"},{"location":"demo/AliceGetsAPhone/","title":"Alice Gets a Mobile Agent!","text":"

In this demo, we'll again use our familiar Faber ACA-Py agent to issue credentials to Alice, but this time Alice will use a mobile wallet. To do this we need to run the Faber agent on a publicly accessible port, and Alice will need a compatible mobile wallet. We'll provide pointers to where you can get them.

This demo also introduces revocation of credentials.

"},{"location":"demo/AliceGetsAPhone/#contents","title":"Contents","text":""},{"location":"demo/AliceGetsAPhone/#getting-started","title":"Getting Started","text":"

This demo can be run on your local machine or on Play with Docker (PWD), and will demonstrate credential exchange and proof exchange as well as revocation with a mobile agent. Both approaches (running locally and on PWD) are described below; for the most part the commands are the same, but there are a couple of different parameters you need to provide when starting up.

If you are not familiar with how revocation is currently implemented in Hyperledger Indy, this article provides a good background on the technique. A challenge with the current implementation is the need for the prover (the agent creating the proof) to download the tails files associated with the credentials it holds.

"},{"location":"demo/AliceGetsAPhone/#get-a-mobile-agent","title":"Get a mobile agent","text":"

Of course for this, you need to have a mobile agent. To find, install and setup a compatible mobile agent, follow the instructions here.

"},{"location":"demo/AliceGetsAPhone/#running-locally-in-docker","title":"Running Locally in Docker","text":"

Open a new bash shell and in a project directory run the following:

git clone https://github.com/hyperledger/aries-cloudagent-python.git\ncd aries-cloudagent-python/demo\n

We'll come back to this in a minute, when we start the faber agent!

There are a couple of extra steps you need to take to prepare to run the Faber agent locally:

"},{"location":"demo/AliceGetsAPhone/#install-ngrok-and-jq","title":"Install ngrok and jq","text":"

ngrok is used to expose public endpoints for services running locally on your computer.

jq is a command-line JSON processor that is used to automatically detect the endpoints exposed by ngrok.

You can install ngrok from here

You can download jq releases here

"},{"location":"demo/AliceGetsAPhone/#expose-services-publicly-using-ngrok","title":"Expose services publicly using ngrok","text":"

Note that this is only required when running docker on your local machine. When you run on PWD a public endpoint for your agent is exposed automatically.

Since the mobile agent will need some way to communicate with the agent running on your local machine in docker, we will need to create a publicly accessible url for some services on your machine. The easiest way to do this is with ngrok. Once ngrok is installed, create a tunnel to your local machine:

ngrok http 8020\n

This service is used for your local aca-py agent - it is the endpoint that is advertised for other Aries agents to connect to.

You will see something like this:

Forwarding                    http://abc123.ngrok.io -> http://localhost:8020\nForwarding                    https://abc123.ngrok.io -> http://localhost:8020\n

This creates a public url for port 8020 on your local machine.
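
If you are scripting your setup, you can also read the forwarding address programmatically: ngrok exposes a local inspection API, by default at http://localhost:4040. A small sketch in Python using the requests library:

import requests\n\n# ask ngrok's local inspection API for the public https forwarding url\ntunnels = requests.get(\"http://localhost:4040/api/tunnels\").json()[\"tunnels\"]\nhttps_url = next(t[\"public_url\"] for t in tunnels if t[\"public_url\"].startswith(\"https\"))\nprint(https_url)  # e.g. https://abc123.ngrok.io\n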

Note that an ngrok process is created automatically for your tails server.

Keep this process running as we'll come back to it in a moment.

"},{"location":"demo/AliceGetsAPhone/#running-in-play-with-docker","title":"Running in Play With Docker","text":"

To run the necessary terminal sessions in your browser, go to the Docker playground service Play with Docker. Don't know about Play with Docker? Check this out to learn more.

Open a new bash shell and in a project directory run the following:

git clone https://github.com/hyperledger/aries-cloudagent-python.git\ncd aries-cloudagent-python/demo\n

We'll come back to this in a minute, when we start the faber agent!

"},{"location":"demo/AliceGetsAPhone/#run-an-instance-of-indy-tails-server","title":"Run an instance of indy-tails-server","text":"

For revocation to function, we need another component running that is used to store what are called tails files.

If you are not running with revocation enabled you can skip this step.

"},{"location":"demo/AliceGetsAPhone/#running-locally-in-a-bash-shell","title":"Running locally in a bash shell?","text":"

Open a new bash shell, and in a project directory, run:

git clone https://github.com/bcgov/indy-tails-server.git\ncd indy-tails-server/docker\n./manage build\n./manage start\n

This will run the required components for the tails server to function and make a tails server available on port 6543.

This will also automatically start an ngrok server that will expose a public url for your tails server - this is required to support mobile agents. The docker output will look something like this:

ngrok-tails-server_1  | t=2020-05-13T22:51:14+0000 lvl=info msg=\"started tunnel\" obj=tunnels name=\"command_line (http)\" addr=http://tails-server:6543 url=http://c5789aa0.ngrok.io\nngrok-tails-server_1  | t=2020-05-13T22:51:14+0000 lvl=info msg=\"started tunnel\" obj=tunnels name=command_line addr=http://tails-server:6543 url=https://c5789aa0.ngrok.io\n

Note the server name in the url=https://c5789aa0.ngrok.io parameter (https://c5789aa0.ngrok.io) - this is the external url for your tails server. Make sure you use the https url!

"},{"location":"demo/AliceGetsAPhone/#running-in-play-with-docker_1","title":"Running in Play with Docker?","text":"

Run the same steps on PWD as you would run locally (see above). Open a new shell (click on \"ADD NEW INSTANCE\") to run the tails server.

Note that with Play with Docker it can be challenging to capture the information you need from the log file as it scrolls by; you can try leaving off the --events option when you run the Faber agent to reduce the quantity of information logged to the screen.

"},{"location":"demo/AliceGetsAPhone/#run-faber-with-extra-parameters","title":"Run faber With Extra Parameters","text":""},{"location":"demo/AliceGetsAPhone/#running-locally-in-a-bash-shell_1","title":"Running locally in a bash shell?","text":"

If you are running in a local bash shell, navigate to the demo directory in your fork/clone of the Aries Cloud Agent Python repository and run:

TAILS_NETWORK=docker_tails-server LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber --aip 10 --revocation --events\n

(Note that we have to start faber with --aip 10 for compatibility with mobile clients.)

The TAILS_NETWORK parameter lets the demo script know how to connect to the tails server (which should be running in a separate shell on the same machine).

"},{"location":"demo/AliceGetsAPhone/#running-in-play-with-docker_2","title":"Running in Play with Docker?","text":"

If you are running in Play with Docker, navigate to the demo folder in the clone of Aries Cloud Agent Python and run the following:

PUBLIC_TAILS_URL=https://c4f7fbb85911.ngrok.io LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber --aip 10 --revocation --events\n

The PUBLIC_TAILS_URL parameter lets the demo script know how to connect to the tails server. This can be running in another PWD session, or even on your local machine - the ngrok endpoint is public and will map to the correct location.

Use the ngrok url for the tails server that you noted earlier.

Note that you must use the https url for the tails server endpoint.

Note: you may want to leave off the --events option when you run the Faber agent if you find you are getting too much logging output.

"},{"location":"demo/AliceGetsAPhone/#waiting-for-the-faber-agent-to-start","title":"Waiting for the Faber agent to start ...","text":"

The Preparing agent image... step on the first run takes a bit of time, so while we wait, let's look at the details of the commands. Running Faber is similar to the instructions in the Aries OpenAPI Demo \"Play with Docker\" section, except:

As part of its startup process, the agent will publish a revocation registry to the ledger.

Click here to view screenshot of the revocation registry on the ledger"},{"location":"demo/AliceGetsAPhone/#accept-the-invitation","title":"Accept the Invitation","text":"

When the Faber agent starts up it automatically creates an invitation and generates a QR code on the screen. On your mobile app, select \"SCAN CODE\" (or equivalent) and point your camera at the generated QR code. The mobile agent should automatically capture the code and ask you to confirm the connection. Confirm it.

Click here to view screenshot

The mobile agent will give you feedback on the connection process, something like \"A connection was added to your wallet\".

Click here to view screenshot Click here to view screenshot

Switch your browser back to Play with Docker. You should see that the connection has been established, and there is a prompt for what actions you want to take, e.g. \"Issue Credential\", \"Send Proof Request\" and so on.

Tip: If your screen is too small to display the QR code (this can happen in Play With Docker because the shell is only given a small portion of the browser) you can copy the invitation url to a site like https://www.the-qrcode-generator.com/ to convert it into a QR code that you can scan. Make sure you select the URL option, and copy the invitation_url, which will look something like:

https://abfde260.ngrok.io?c_i=eyJAdHlwZSI6ICJkaWQ6c292OkJ6Q2JzTlloTXJqSGlxWkRUVUFTSGc7c3BlYy9jb25uZWN0aW9ucy8xLjAvaW52aXRhdGlvbiIsICJAaWQiOiAiZjI2ZjA2YTItNWU1Mi00YTA5LWEwMDctOTNkODBiZTYyNGJlIiwgInJlY2lwaWVudEtleXMiOiBbIjlQRFE2alNXMWZwZkM5UllRWGhCc3ZBaVJrQmVKRlVhVmI0QnRQSFdWbTFXIl0sICJsYWJlbCI6ICJGYWJlci5BZ2VudCIsICJzZXJ2aWNlRW5kcG9pbnQiOiAiaHR0cHM6Ly9hYmZkZTI2MC5uZ3Jvay5pbyJ9\n

Or this:

http://ip10-0-121-4-bquqo816b480a4bfn3kg-8020.direct.play-with-docker.com?c_i=eyJAdHlwZSI6ICJkaWQ6c292OkJ6Q2JzTlloTXJqSGlxWkRUVUFTSGc7c3BlYy9jb25uZWN0aW9ucy8xLjAvaW52aXRhdGlvbiIsICJAaWQiOiAiZWI2MTI4NDUtYmU1OC00YTNiLTk2MGUtZmE3NDUzMGEwNzkyIiwgInJlY2lwaWVudEtleXMiOiBbIkFacEdoMlpIOTJVNnRFRTlmYk13Z3BqQkp3TEUzRFJIY1dCbmg4Y2FqdzNiIl0sICJzZXJ2aWNlRW5kcG9pbnQiOiAiaHR0cDovL2lwMTAtMC0xMjEtNC1icXVxbzgxNmI0ODBhNGJmbjNrZy04MDIwLmRpcmVjdC5wbGF5LXdpdGgtdm9uLnZvbnguaW8iLCAibGFiZWwiOiAiRmFiZXIuQWdlbnQifQ==\n

Note that this will use the ngrok endpoint if you are running locally, or your PWD endpoint if you are running on PWD.
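
As an aside, the c_i query parameter in those urls is just a base64url-encoded JSON invitation, so you can inspect one yourself. A minimal sketch using only the Python standard library (paste in your own invitation_url):

import base64\nimport json\nfrom urllib.parse import parse_qs, urlparse\n\ninvitation_url = \"https://abfde260.ngrok.io?c_i=...\"  # replace with your invitation_url\nc_i = parse_qs(urlparse(invitation_url).query)[\"c_i\"][0]\n# restore any stripped base64 padding before decoding\ninvitation = json.loads(base64.urlsafe_b64decode(c_i + \"=\" * (-len(c_i) % 4)))\nprint(json.dumps(invitation, indent=2))  # label, recipientKeys, serviceEndpoint, ...\n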

"},{"location":"demo/AliceGetsAPhone/#issue-a-credential","title":"Issue a Credential","text":"

We will use the Faber console to issue a credential. This could be done using the Swagger API as we have done in the connection process. We'll leave that as an exercise for the user.

In the Faber console, select option 1 to send a credential to the mobile agent.

Click here to view screenshot

The Faber agent outputs details to the console; e.g.,

Faber      | Credential: state = credential-issued, cred_ex_id = ba3089d6-92da-4cb7-9062-7f24066b2a2a\nFaber      | Revocation registry ID: CMqNjZ8e59jDuBYcquce4D:4:CMqNjZ8e59jDuBYcquce4D:3:CL:50:faber.agent.degree_schema:CL_ACCUM:4f4fb2e4-3a59-45b1-8921-578d005a7ff6\nFaber      | Credential revocation ID: 1\nFaber      | Credential: state = done, cred_ex_id = ba3089d6-92da-4cb7-9062-7f24066b2a2a\n

The revocation registry id and credential revocation id only appear if revocation is active. If you are doing revocation, you will need the revocation registry id later, so we recommend that you copy it now and paste it into a text file or someplace else you can access later. If you don't write it down, you can get the id from the Admin API using the GET /revocation/active-registry/{cred_def_id} endpoint, passing in the credential definition id (which you can get from the GET /credential-definitions/created endpoint).
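
That lookup can also be scripted against the Admin API using the two endpoints just mentioned. A hedged sketch in Python (the port 8021 admin address is an assumption based on the demo defaults; adjust ADMIN_URL for your setup):

import requests\n\nADMIN_URL = \"http://localhost:8021\"  # assumption: default Faber admin port in the demo\n\n# look up the credential definition id, then its active revocation registry\ncred_def_id = requests.get(\n    f\"{ADMIN_URL}/credential-definitions/created\"\n).json()[\"credential_definition_ids\"][0]\nregistry = requests.get(f\"{ADMIN_URL}/revocation/active-registry/{cred_def_id}\").json()\nprint(registry[\"result\"][\"revoc_reg_id\"])\n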

"},{"location":"demo/AliceGetsAPhone/#accept-the-credential","title":"Accept the Credential","text":"

The credential offer should automatically show up in the mobile agent. Accept the offered credential following the instructions provided by the mobile agent. That will look something like this:

Click here to view screenshot Click here to view screenshot Click here to view screenshot"},{"location":"demo/AliceGetsAPhone/#issue-a-presentation-request","title":"Issue a Presentation Request","text":"

We will use the Faber console to ask the mobile agent for a proof. This could be done using the Swagger API, but we'll leave that as an exercise for the user.

In the Faber console, select option 2 to send a proof request to the mobile agent.

Click here to view screenshot"},{"location":"demo/AliceGetsAPhone/#present-the-proof","title":"Present the Proof","text":"

The presentation (proof) request should automatically show up in the mobile agent. Follow the instructions provided by the mobile agent to prepare and send the proof back to Faber. That will look something like this:

Click here to view screenshot Click here to view screenshot Click here to view screenshot

If the mobile agent is able to successfully prepare and send the proof, you can go back to the Play with Docker terminal to see the status of the proof.

The process should \"just work\" for the non-revocation use case. If you are using revocation, your results may vary. As of writing this, we get failures on the wallet side with some mobile wallets, and on the Faber side with others (an error in the Indy SDK). As the results improve, we'll update this. Please let us know through GitHub issues if you have any problems running this.

"},{"location":"demo/AliceGetsAPhone/#review-the-proof","title":"Review the Proof","text":"

In the Faber console window, the proof should be received as validated.

Click here to view screenshot"},{"location":"demo/AliceGetsAPhone/#revoke-the-credential-and-send-another-proof-request","title":"Revoke the Credential and Send Another Proof Request","text":"

If you have enabled revocation, you can try revoking the credential and publishing its pending revoked status (faber options 5 and 6). For the revocation step, you will need the revocation registry identifier and the credential revocation identifier (which is 1 for the first credential you issued), as the Faber agent logged them to the console at credential issuance.

Once that is done, try sending another proof request and see what happens! Experiment with immediate and pending publication. Note that immediate publication also publishes any pending revocations on its revocation registry.

Click here to view screenshot"},{"location":"demo/AliceGetsAPhone/#send-a-connectionless-proof-request","title":"Send a Connectionless Proof Request","text":"

A connectionless proof request works the same way as a regular proof request, however it does not require a connection to be established between the Verifier and Holder/Prover.

This is supported in the Faber demo, however note that it will only work when running Faber on the Docker playground service Play with Docker. (This is because the Faber agent and controller both need to be exposed to the mobile agent.)

If you have gone through the above steps, you can delete the Faber connection in your mobile agent (however do not delete the credential that Faber issued to you).

Then in the faber demo, select option 2a - Faber will display a QR code which you can scan with your mobile agent. You will see the same proof request displayed in your mobile agent, which you can respond to.

Behind the scenes, the Faber controller delivers the proof request information (linked from the url encoded in the QR code) directly to your mobile agent, without establishing an agent-to-agent connection first. If you are interested in the underlying mechanics, you can review the faber.py code in the repository.

"},{"location":"demo/AliceGetsAPhone/#conclusion","title":"Conclusion","text":"

That\u2019s the Faber-Mobile Alice demo. Feel free to play with the Swagger API and experiment further to figure out what an instance of a controller has to do to make things work.

"},{"location":"demo/AliceWantsAJsonCredential/","title":"How to Issue JSON-LD Credentials using ACA-Py","text":"

ACA-Py has the capability to issue and verify both Indy and JSON-LD (W3C compliant) credentials.

The JSON-LD support is documented here - this document will provide some additional detail on how to use the demo and admin api to issue and prove JSON-LD credentials.

"},{"location":"demo/AliceWantsAJsonCredential/#setup-agents-to-issue-json-ld-credentials","title":"Setup Agents to Issue JSON-LD Credentials","text":"

Clone this repository to a directory on your local machine:

git clone https://github.com/hyperledger/aries-cloudagent-python.git\ncd aries-cloudagent-python/demo\n

Open up a second shell (so you have 2 shells open in the demo directory) and in one shell:

LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber --did-exchange --aip 20 --cred-type json-ld\n

... and in the other:

LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo alice\n

Note that you start the faber agent with AIP 2.0 options. (When you specify --cred-type json-ld, faber will set aip to 20 automatically, so the --aip option is not strictly required). Note as well the use of the LEDGER_URL. Technically, that should not be needed if we aren't doing anything with Indy ledger-based credentials. However, there must be something in the way that the Faber and Alice controllers are starting up that requires access to a ledger.

Also note that the above will only work with the /issue-credential-2.0/create-offer endpoint. If you want to use the /issue-credential-2.0/send endpoint - which automates each step of the credential exchange - you will need to include the --no-auto option when starting each of the alice and faber agents (since the alice and faber controllers also automatically respond to each step in the credential exchange).

(Alternatively, you can run the Alice and Faber agents locally; see the ./faber-local.sh and ./alice-local.sh scripts in the demo directory.)

Copy the \"invitation\" json text from the Faber shell and paste into the Alice shell to establish a connection between the two agents.

(If you are running with --no-auto you will also need to call the /connections/{conn_id}/accept-invitation endpoint in alice's admin api swagger page.)

Now open up two browser windows to the Faber and Alice admin api swagger pages.

Using the Faber admin api, you have to create a DID with the appropriate DID method and key type.

Note that \"did:sov\" must be a public DID (i.e. registered on the ledger) but \"did:key\" is not.

For example, in Faber's swagger page call the /wallet/did/create endpoint with the following payload:

{\n  \"method\": \"key\",\n  \"options\": {\n    \"key_type\": \"bls12381g2\" // or ed25519\n  }\n}\n

This will return something like:

{\n  \"result\": {\n    \"did\": \"did:key:zUC71KdwBhq1FioWh53VXmyFiGpewNcg8Ld42WrSChpMzzskRWwHZfG9TJ7hPj8wzmKNrek3rW4ZkXNiHAjVchSmTr9aNUQaArK3KSkTySzjEM73FuDV62bjdAHF7EMnZ27poCE\",\n    \"verkey\": \"mV6482Amu6wJH8NeMqH3QyTjh6JU6N58A8GcirMZG7Wx1uyerzrzerA2EjnhUTmjiSLAp6CkNdpkLJ1NTS73dtcra8WUDDBZ3o455EMrkPyAtzst16RdTMsGe3ctyTxxJav\",\n    \"posture\": \"wallet_only\",\n    \"key_type\": \"bls12381g2\",\n    \"method\": \"key\"\n  }\n}\n

You do not create a schema or cred def for a JSON-LD credential (these are only required for \"indy\" credentials).

You will need to create a DID as above for Alice as well (/wallet/did/create etc ...).
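
Both DID creations can be scripted as well. A minimal sketch calling /wallet/did/create on each agent's admin API (the admin ports 8021 for Faber and 8031 for Alice are assumptions based on the demo defaults):

import requests\n\npayload = {\"method\": \"key\", \"options\": {\"key_type\": \"bls12381g2\"}}\n\n# assumption: demo default admin ports - Faber on 8021, Alice on 8031\nfor name, admin_url in [(\"faber\", \"http://localhost:8021\"), (\"alice\", \"http://localhost:8031\")]:\n    result = requests.post(f\"{admin_url}/wallet/did/create\", json=payload).json()[\"result\"]\n    print(name, result[\"did\"])\n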

Congratulations, you are now ready to start issuing JSON-LD credentials!

To issue a credential, use the /issue-credential-2.0/send-offer endpoint. (You can also use the /issue-credential-2.0/send endpoint if, as mentioned above, you have included the --no-auto option when starting both of the agents.)

You can test with this example payload (just replace the \"connection_id\", \"issuer\" key, \"credentialSubject.id\" and \"proofType\" with appropriate values):

{\n  \"connection_id\": \"4fba2ce5-b411-4ecf-aa1b-ec66f3f6c903\",\n  \"filter\": {\n    \"ld_proof\": {\n      \"credential\": {\n        \"@context\": [\n          \"https://www.w3.org/2018/credentials/v1\",\n          \"https://www.w3.org/2018/credentials/examples/v1\"\n        ],\n        \"type\": [\"VerifiableCredential\", \"UniversityDegreeCredential\"],\n        \"issuer\": \"did:key:zUC71KdwBhq1FioWh53VXmyFiGpewNcg8Ld42WrSChpMzzskRWwHZfG9TJ7hPj8wzmKNrek3rW4ZkXNiHAjVchSmTr9aNUQaArK3KSkTySzjEM73FuDV62bjdAHF7EMnZ27poCE\",\n        \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n        \"credentialSubject\": {\n          \"id\": \"did:key:aksdkajshdkajhsdkjahsdkjahsdj\",\n          \"givenName\": \"Sally\",\n          \"familyName\": \"Student\",\n          \"degree\": {\n            \"type\": \"BachelorDegree\",\n            \"degreeType\": \"Undergraduate\",\n            \"name\": \"Bachelor of Science and Arts\"\n          },\n          \"college\": \"Faber College\"\n        }\n      },\n      \"options\": {\n        \"proofType\": \"BbsBlsSignature2020\"\n      }\n    }\n  }\n}\n

Note that if you have the \"auto\" settings on, this is all you need to do. Otherwise you need to call the /send-request, /store, etc endpoints to complete the protocol.
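
For reference, the manual flow on Alice's (holder's) side can be driven with a couple of admin API calls. A hedged sketch (assumes Alice's admin API on the demo default port 8031 and a single credential exchange record in the expected state):

import requests\n\nALICE_ADMIN = \"http://localhost:8031\"  # assumption: default Alice admin port in the demo\n\n# find the credential exchange record for the received offer\nrecords = requests.get(f\"{ALICE_ADMIN}/issue-credential-2.0/records\").json()[\"results\"]\ncred_ex_id = records[0][\"cred_ex_record\"][\"cred_ex_id\"]\n\n# respond to the offer, then store the credential once it has been received\nrequests.post(f\"{ALICE_ADMIN}/issue-credential-2.0/records/{cred_ex_id}/send-request\")\nrequests.post(f\"{ALICE_ADMIN}/issue-credential-2.0/records/{cred_ex_id}/store\")\n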

To see the issued credential, call the /credentials/w3c endpoint on Alice's admin api - this will return something like:

{\n  \"results\": [\n    {\n      \"contexts\": [\n        \"https://w3id.org/security/bbs/v1\",\n        \"https://www.w3.org/2018/credentials/examples/v1\",\n        \"https://www.w3.org/2018/credentials/v1\"\n      ],\n      \"types\": [\n        \"UniversityDegreeCredential\",\n        \"VerifiableCredential\"\n      ],\n      \"schema_ids\": [],\n      \"issuer_id\": \"did:key:zUC71KdwBhq1FioWh53VXmyFiGpewNcg8Ld42WrSChpMzzskRWwHZfG9TJ7hPj8wzmKNrek3rW4ZkXNiHAjVchSmTr9aNUQaArK3KSkTySzjEM73FuDV62bjdAHF7EMnZ27poCE\",\n      \"subject_ids\": [],\n      \"proof_types\": [\n        \"BbsBlsSignature2020\"\n      ],\n      \"cred_value\": {\n        \"@context\": [\n          \"https://www.w3.org/2018/credentials/v1\",\n          \"https://www.w3.org/2018/credentials/examples/v1\",\n          \"https://w3id.org/security/bbs/v1\"\n        ],\n        \"type\": [\n          \"VerifiableCredential\",\n          \"UniversityDegreeCredential\"\n        ],\n        \"issuer\": \"did:key:zUC71Kd...poCE\",\n        \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n        \"credentialSubject\": {\n          \"id\": \"did:key:aksdkajshdkajhsdkjahsdkjahsdj\",\n          \"givenName\": \"Sally\",\n          \"familyName\": \"Student\",\n          \"degree\": {\n            \"type\": \"BachelorDegree\",\n            \"degreeType\": \"Undergraduate\",\n            \"name\": \"Bachelor of Science and Arts\"\n          },\n          \"college\": \"Faber College\"\n        },\n        \"proof\": {\n          \"type\": \"BbsBlsSignature2020\",\n          \"proofPurpose\": \"assertionMethod\",\n          \"verificationMethod\": \"did:key:zUC71Kd...poCE#zUC71Kd...poCE\",\n          \"created\": \"2021-05-19T16:19:44.458170\",\n          \"proofValue\": \"g0weLyw2Q+niQ4pGfiXB...tL9C9ORhy9Q==\"\n        }\n      },\n      \"cred_tags\": {},\n      \"record_id\": \"365ab87b12f74b2db784fdd4db8419f5\"\n    }\n  ]\n}\n

If you don't see the credential in your wallet, look up the credential exchange record (in alice's admin api - /issue-credential-2.0/records) and check the state. If the state is credential-received, then the credential has been received but not stored, in this case just call the /store endpoint for this credential exchange.

"},{"location":"demo/AliceWantsAJsonCredential/#building-more-realistic-json-ld-credentials","title":"Building More Realistic JSON-LD Credentials","text":"

The above example uses the https://www.w3.org/2018/credentials/examples/v1 context, which should never be used in a real application.

To build credentials in real life, you first determine which attributes you need and then include the appropriate contexts.

"},{"location":"demo/AliceWantsAJsonCredential/#context-schemaorg","title":"Context schema.org","text":"

You can use attributes defined on schema.org, although this is NOT RECOMMENDED (it is included here for illustrative purposes only), because individual attributes can't be validated (see the comment later on).

You first include https://schema.org in the @context block of the credential as follows:

\"@context\": [\n  \"https://www.w3.org/2018/credentials/v1\",\n  \"https://schema.org\"\n],\n

Then you review the attributes and objects defined by https://schema.org and decide what you need to include in your credential.

For example, to issue a credential with givenName, familyName and alumniOf attributes, submit the following:

{\n  \"connection_id\": \"ad35a4d8-c84b-4a4f-a83f-1afbf134b8b9\",\n  \"filter\": {\n    \"ld_proof\": {\n      \"credential\": {\n        \"@context\": [\n          \"https://www.w3.org/2018/credentials/v1\",\n          \"https://schema.org\"\n        ],\n        \"type\": [\"VerifiableCredential\", \"Person\"],\n        \"issuer\": \"did:key:zUC71pj2gpDLfcZ9DE1bMtjZGWCSLhkQsUCaKjqXtCftGkz27894pEX9VvGNiFsaV67gqv2TEPQ2aDaDDdTDNp42LfDdK1LaWSBCfzsQEyaiR1zjZm1RtoRu1ZM6v6vz4TiqDgU\",\n        \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n        \"credentialSubject\": {\n          \"id\": \"did:key:aksdkajshdkajhsdkjahsdkjahsdj\",\n          \"givenName\": \"Sally\",\n          \"familyName\": \"Student\",\n          \"alumniOf\": \"Example University\"\n        }\n      },\n      \"options\": {\n        \"proofType\": \"BbsBlsSignature2020\"\n      }\n    }\n  }\n}\n

Note that with https://schema.org, if you include attributes that aren't defined by any context, you will not get an error. For example you can try replacing the credentialSubject in the above with:

\"credentialSubject\": {\n  \"id\": \"did:key:aksdkajshdkajhsdkjahsdkjahsdj\",\n  \"givenName\": \"Sally\",\n  \"familyName\": \"Student\",\n  \"alumniOf\": \"Example University\",\n  \"someUndefinedAttribute\": \"the value of the attribute\"\n}\n

... and you might expect the credential issuance to fail; however, https://schema.org defines a @vocab from which, by default, all terms derive (see here), so the undefined attribute is accepted and issuance succeeds.

You can include more complex schemas, for example to use the schema.org Person schema (which includes givenName and familyName):

{\n  \"connection_id\": \"ad35a4d8-c84b-4a4f-a83f-1afbf134b8b9\",\n  \"filter\": {\n    \"ld_proof\": {\n      \"credential\": {\n        \"@context\": [\n          \"https://www.w3.org/2018/credentials/v1\",\n          \"https://schema.org\"\n        ],\n        \"type\": [\"VerifiableCredential\", \"Person\"],\n        \"issuer\": \"did:key:zUC71pj2gpDLfcZ9DE1bMtjZGWCSLhkQsUCaKjqXtCftGkz27894pEX9VvGNiFsaV67gqv2TEPQ2aDaDDdTDNp42LfDdK1LaWSBCfzsQEyaiR1zjZm1RtoRu1ZM6v6vz4TiqDgU\",\n        \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n        \"credentialSubject\": {\n          \"id\": \"did:key:aksdkajshdkajhsdkjahsdkjahsdj\",\n          \"student\": {\n            \"type\": \"Person\",\n            \"givenName\": \"Sally\",\n            \"familyName\": \"Student\",\n            \"alumniOf\": \"Example University\"\n          }\n        }\n      },\n      \"options\": {\n        \"proofType\": \"BbsBlsSignature2020\"\n      }\n    }\n  }\n}\n
"},{"location":"demo/AliceWantsAJsonCredential/#credential-specific-contexts","title":"Credential-Specific Contexts","text":"

The recommended approach to defining credentials is to define a credential-specific vocabulary (or make use of existing ones). (Note that these can include references to https://schema.org; you just shouldn't use it directly in your credential.)

"},{"location":"demo/AliceWantsAJsonCredential/#credential-issue-example","title":"Credential Issue Example","text":"

The following example uses the W3C citizenship context to issue a PermanentResident credential (replace the connection_id, issuer and credentialSubject.id with your local values):

{\n    \"connection_id\": \"41acd909-9f45-4c69-8641-8146e0444a57\",\n    \"filter\": {\n        \"ld_proof\": {\n            \"credential\": {\n                \"@context\": [\n                    \"https://www.w3.org/2018/credentials/v1\",\n                    \"https://w3id.org/citizenship/v1\"\n                ],\n                \"type\": [\n                    \"VerifiableCredential\",\n                    \"PermanentResident\"\n                ],\n                \"id\": \"https://credential.example.com/residents/1234567890\",\n                \"issuer\": \"did:key:zUC7Dus47jW5Avcne8LLsUvJSdwspmErgehxMWqZZy8eSSNoHZ4x8wgs77sAmQtCADED5RQP1WWhvt7KFNm6GGMxdSGpKu3PX6R9a61G9VoVsiFoRf1yoK6pzhq9jtFP3e2SmU9\",\n                \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n                \"credentialSubject\": {\n                    \"type\": [\n                        \"PermanentResident\"\n                    ],\n                    \"id\": \"did:key:zUC7CXi82AXbkv4SvhxDxoufrLwQSAo79qbKiw7omCQ3c4TyciDdb9s3GTCbMvsDruSLZX6HNsjGxAr2SMLCNCCBRN5scukiZ4JV9FDPg5gccdqE9nfCU2zUcdyqRiUVnn9ZH83\",\n                    \"givenName\": \"ALICE\",\n                    \"familyName\": \"SMITH\",\n                    \"gender\": \"Female\",\n                    \"birthCountry\": \"Bahamas\",\n                    \"birthDate\": \"1958-07-17\"\n                }\n            },\n            \"options\": {\n                \"proofType\": \"BbsBlsSignature2020\"\n            }\n        }\n    }\n}\n

Copy and paste this content into Faber's /issue-credential-2.0/send-offer endpoint, and it will kick off the exchange process to issue a W3C credential to Alice.

In Alice's swagger page, submit the /credentials/records/w3c endpoint to see the issued credential.

"},{"location":"demo/AliceWantsAJsonCredential/#request-presentation-example","title":"Request Presentation Example","text":"

To request a proof, submit the following (with appropriate connection_id) to Faber's /present-proof-2.0/send-request endpoint:

{\n    \"comment\": \"string\",\n    \"connection_id\": \"41acd909-9f45-4c69-8641-8146e0444a57\",\n    \"presentation_request\": {\n        \"dif\": {\n            \"options\": {\n                \"challenge\": \"3fa85f64-5717-4562-b3fc-2c963f66afa7\",\n                \"domain\": \"4jt78h47fh47\"\n            },\n            \"presentation_definition\": {\n                \"id\": \"32f54163-7166-48f1-93d8-ff217bdb0654\",\n                \"format\": {\n                    \"ldp_vp\": {\n                        \"proof_type\": [\n                            \"BbsBlsSignature2020\"\n                        ]\n                    }\n                },\n                \"input_descriptors\": [\n                    {\n                        \"id\": \"citizenship_input_1\",\n                        \"name\": \"EU Driver's License\",\n                        \"schema\": [\n                            {\n                                \"uri\": \"https://www.w3.org/2018/credentials#VerifiableCredential\"\n                            },\n                            {\n                                \"uri\": \"https://w3id.org/citizenship#PermanentResident\"\n                            }\n                        ],\n                        \"constraints\": {\n                            \"limit_disclosure\": \"required\",\n                            \"is_holder\": [\n                                {\n                                    \"directive\": \"required\",\n                                    \"field_id\": [\n                                        \"1f44d55f-f161-4938-a659-f8026467f126\"\n                                    ]\n                                }\n                            ],\n                            \"fields\": [\n                                {\n                                    \"id\": \"1f44d55f-f161-4938-a659-f8026467f126\",\n                                    \"path\": [\n                                        \"$.credentialSubject.familyName\"\n                                    ],\n                                    \"purpose\": \"The claim must be from one of the specified issuers\",\n                                    \"filter\": {\n                                        \"const\": \"SMITH\"\n                                    }\n                                },\n                                {\n                                    \"path\": [\n                                        \"$.credentialSubject.givenName\"\n                                    ],\n                                    \"purpose\": \"The claim must be from one of the specified issuers\"\n                                }\n                            ]\n                        }\n                    }\n                ]\n            }\n        }\n    }\n}\n

Note that the is_holder property can be used by Faber to verify that the holder of the credential is the same as the subject of the attribute (familyName). Later on, the received presentation will be signed and verifiable only if is_holder with \"directive\": \"required\" is included in the presentation request.

There are several ways that Alice can respond with a presentation. The simplest will just tell ACA-Py to put the presentation together and send it to Faber - submit the following to Alice's /present-proof-2.0/records/{pres_ex_id}/send-presentation:

{\n  \"dif\": {\n  }\n}\n

There are two ways that Alice can provide some constraints to tell ACA-Py which credential(s) to include in the presentation.

Firstly, Alice can include the received presentation request in the body of the /send-presentation request, and can include additional constraints on the fields:

{\n  \"dif\": {\n    \"issuer_id\": \"did:key:zUC7Dus47jW5Avcne8LLsUvJSdwspmErgehxMWqZZy8eSSNoHZ4x8wgs77sAmQtCADED5RQP1WWhvt7KFNm6GGMxdSGpKu3PX6R9a61G9VoVsiFoRf1yoK6pzhq9jtFP3e2SmU9\",\n    \"presentation_definition\": {\n      \"format\": {\n        \"ldp_vp\": {\n          \"proof_type\": [\n            \"BbsBlsSignature2020\"\n          ]\n        }\n      },\n      \"id\": \"32f54163-7166-48f1-93d8-ff217bdb0654\",\n      \"input_descriptors\": [\n        {\n          \"id\": \"citizenship_input_1\",\n          \"name\": \"Some kind of citizenship check\",\n          \"schema\": [\n            {\n              \"uri\": \"https://www.w3.org/2018/credentials#VerifiableCredential\"\n            },\n            {\n              \"uri\": \"https://w3id.org/citizenship#PermanentResident\"\n            }\n          ],\n          \"constraints\": {\n            \"limit_disclosure\": \"required\",\n            \"is_holder\": [\n                {\n                    \"directive\": \"required\",\n                    \"field_id\": [\n                        \"1f44d55f-f161-4938-a659-f8026467f126\",\n                        \"332be361-823a-4863-b18b-c3b930c5623e\"\n                    ],\n                }\n            ],\n            \"fields\": [\n              {\n                \"id\": \"1f44d55f-f161-4938-a659-f8026467f126\",\n                \"path\": [\n                  \"$.credentialSubject.familyName\"\n                ],\n                \"purpose\": \"The claim must be from one of the specified issuers\",\n                \"filter\": {\n                  \"const\": \"SMITH\"\n                }\n              },\n              {\n                  \"id\": \"332be361-823a-4863-b18b-c3b930c5623e\",\n                  \"path\": [\n                      \"$.id\"\n                  ],\n                  \"purpose\": \"Specify the id of the credential to present\",\n                  \"filter\": {\n                      \"const\": \"https://credential.example.com/residents/1234567890\"\n                  }\n              }\n            ]\n          }\n        }\n      ]\n    }\n  }\n}\n

Note the additional constraint on \"path\": [ \"$.id\" ] - this restricts the presented credential to the one with the matching credential.id. Any credential attribute can be used; however, this presumes that the issued credentials contain a uniquely identifying attribute.

Another option is for Alice to specify the credential record_id - this is an internal value within ACA-Py:

{\n  \"dif\": {\n    \"issuer_id\": \"did:key:zUC7Dus47jW5Avcne8LLsUvJSdwspmErgehxMWqZZy8eSSNoHZ4x8wgs77sAmQtCADED5RQP1WWhvt7KFNm6GGMxdSGpKu3PX6R9a61G9VoVsiFoRf1yoK6pzhq9jtFP3e2SmU9\",\n    \"presentation_definition\": {\n      \"format\": {\n        \"ldp_vp\": {\n          \"proof_type\": [\n            \"BbsBlsSignature2020\"\n          ]\n        }\n      },\n      \"id\": \"32f54163-7166-48f1-93d8-ff217bdb0654\",\n      \"input_descriptors\": [\n        {\n          \"id\": \"citizenship_input_1\",\n          \"name\": \"Some kind of citizenship check\",\n          \"schema\": [\n            {\n              \"uri\": \"https://www.w3.org/2018/credentials#VerifiableCredential\"\n            },\n            {\n              \"uri\": \"https://w3id.org/citizenship#PermanentResident\"\n            }\n          ],\n          \"constraints\": {\n            \"limit_disclosure\": \"required\",\n            \"fields\": [\n              {\n                \"path\": [\n                  \"$.credentialSubject.familyName\"\n                ],\n                \"purpose\": \"The claim must be from one of the specified issuers\",\n                \"filter\": {\n                  \"const\": \"SMITH\"\n                }\n              }\n            ]\n          }\n        }\n      ]\n    },\n    \"record_ids\": {\n      \"citizenship_input_1\": [ \"1496316f972e40cf9b46b35971182337\" ]\n    }\n  }\n}\n
"},{"location":"demo/AliceWantsAJsonCredential/#another-credential-issue-example","title":"Another Credential Issue Example","text":"

TBD the following credential is based on the W3C Vaccination schema:

{\n  \"connection_id\": \"ad35a4d8-c84b-4a4f-a83f-1afbf134b8b9\",\n  \"filter\": {\n    \"ld_proof\": {\n      \"credential\": {\n        \"@context\": [\n          \"https://www.w3.org/2018/credentials/v1\",\n          \"https://w3id.org/vaccination/v1\"\n        ],\n        \"type\": [\"VerifiableCredential\", \"VaccinationCertificate\"],\n        \"issuer\": \"did:key:zUC71pj2gpDLfcZ9DE1bMtjZGWCSLhkQsUCaKjqXtCftGkz27894pEX9VvGNiFsaV67gqv2TEPQ2aDaDDdTDNp42LfDdK1LaWSBCfzsQEyaiR1zjZm1RtoRu1ZM6v6vz4TiqDgU\",\n        \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n        \"credentialSubject\": {\n            \"id\": \"did:key:aksdkajshdkajhsdkjahsdkjahsdj\",\n            \"type\": \"VaccinationEvent\",\n            \"batchNumber\": \"1183738569\",\n            \"administeringCentre\": \"MoH\",\n            \"healthProfessional\": \"MoH\",\n            \"countryOfVaccination\": \"NZ\",\n            \"recipient\": {\n              \"type\": \"VaccineRecipient\",\n              \"givenName\": \"JOHN\",\n              \"familyName\": \"SMITH\",\n              \"gender\": \"Male\",\n              \"birthDate\": \"1958-07-17\"\n            },\n            \"vaccine\": {\n              \"type\": \"Vaccine\",\n              \"disease\": \"COVID-19\",\n              \"atcCode\": \"J07BX03\",\n              \"medicinalProductName\": \"COVID-19 Vaccine Moderna\",\n              \"marketingAuthorizationHolder\": \"Moderna Biotech\"\n            }\n        }\n      },\n      \"options\": {\n        \"proofType\": \"BbsBlsSignature2020\"\n      }\n    }\n  }\n}\n
"},{"location":"demo/Aries-Workshop/","title":"A Hyperledger Aries/AnonCreds Workshop Using Traction Sandbox","text":""},{"location":"demo/Aries-Workshop/#introduction","title":"Introduction","text":"

Welcome! This workshop contains a sequence of four labs that gets you from nothing to issuing, receiving, holding, requesting, presenting, and verifying AnonCreds Verifiable Credentials--no technical experience required! If you just walk through the steps exactly as laid out, it only takes about 20 minutes to complete the whole process. Of course, we hope you get curious, experiment, and learn a lot more about the information provided in the labs.

To run the labs, you\u2019ll need a Hyperledger Aries agent to be able to issue and verify verifiable credentials. For that, we're providing you with your very own tenant in a BC Gov \"sandbox\" deployment of an open source tool called Traction, a managed, production-ready, multi-tenant Aries agent built on Hyperledger Aries Cloud Agent Python (ACA-Py). Sandbox in this context means that you can do whatever you want with your tenant agent, but we make no promises about the stability of the environment (but it\u2019s pretty robust, so chances are, things will work...), and on the 1st and 15th of each month, we\u2019ll reset the entire sandbox and all your work will be gone \u2014 poof! Keep that in mind as you use the Traction sandbox. We recommend you keep a notebook at your side, tracking the important learnings you want to remember. As you create code that uses your sandbox agent, make sure you create simple-to-update configurations so that after a reset, you can create a new tenant agent, recreate the objects you need (each of which will have new identifiers), update your configuration, and off you go.

The four labs in this workshop are laid out as follows:

Once you are done with the labs, there are suggestions for next steps for developers, such as experimenting with the Traction/ACA-Py OpenAPI.

Jump in!

"},{"location":"demo/Aries-Workshop/#lab-1-getting-a-traction-tenant-agent-and-mobile-wallet","title":"Lab 1: Getting a Traction Tenant Agent and Mobile Wallet","text":"

Let\u2019s start by getting your two agents \u2014 an Aries Mobile Wallet and an Aries Issuer/Verifier agent.

"},{"location":"demo/Aries-Workshop/#lab-1-steps-to-follow","title":"Lab 1: Steps to Follow","text":"
  1. Get a compatible Aries Mobile Wallet to use with your Aries Traction tenant. There are a number to choose from. We suggest that you use one of these:
    1. BC Wallet from the Government of British Columbia
    2. Orbit Wallet from Northern Block
  2. Click this Traction Sandbox link to go to the Sandbox login page to create your own Traction Tenant Aries agent. Once there, do the following:
    1. Click \"Create Request!\", fill in at least the required form fields, and click \"Submit\".
    2. Your new Traction Tenant's Wallet ID and Wallet Key will be displayed. SAVE THOSE IMMEDIATELY SO THAT YOU HAVE THEM TO ACCESS YOUR TENANT. You only get to see/save them once!
      1. You will need those each time you open your Traction Tenant agent. Putting them into a Password Manager is a great idea!
      2. We can't recover your Wallet ID and Wallet Key, so if you lose them you have to start the entire process again.
  3. Go back to the Traction Sandbox login and this time, use your Wallet ID/Key to log in to your brand new Traction Tenant agent. You might want to bookmark the site.
  4. Make your new Traction Tenant a verifiable credential issuer by:
    1. Clicking on the \"User\" (folder icon) menu (top right), and choosing \"Profile\"
    2. Clicking the \u201cBCovrin Test\u201d Action in the Endorser section.
      1. When done, you will have your own public DID (displayed on the page) that has been published on the BCovrin Test Ledger (can you find it?). Your DID will be used to publish other AnonCreds transactions so you can issue verifiable credentials.
  5. Connect from your Traction Tenant to your mobile Wallet app by:
    1. Selecting \"Connections\" and then \"Invitations\" on the left menu
    2. Click the \"Single Use Connection\" button, give the connection an alias (maybe \"My Wallet\"), and click \"Submit.\"
    3. Scan the resulting QR code with your initialized mobile Wallet and follow the prompts. Once you connect, type a quick \"Hi!\" message to the Traction Agent and you should get an automated message back.
    4. Check the Traction Tenant menu item \"Connections\u2192Connections\" to see the status of your connection \u2013 it should be active.
    5. If anything didn't work in the sequence, here are some things to try:
    6. If the Traction Tenant connection is not active, it's possible that your wallet was not able to message back to your Traction Tenant. Check your wallet internet connection.
    7. We've created a Traction Sandbox Workshop FAQ and Questions GitHub issue that you can check to see if your question is already answered, and if not, you can add your question as a comment on the issue, and we'll get back to you.

That's it--you should be ready to start issuing and receiving verifiable credentials.

"},{"location":"demo/Aries-Workshop/#lab-2-getting-ready-to-be-an-issuer","title":"Lab 2: Getting Ready To Be An Issuer","text":"

::: todo To Do: Update lab to use this schema: H7W22uhD4ueQdGaGeiCgaM:2:student id:1.0.0 :::

In this lab we will use our Traction Tenant agent to create and publish an AnonCreds Schema object (or two), and then use that Schema to create and publish a Credential Definition. All of the AnonCreds objects will be published on the BCovrin (pronounced \u201cBe Sovereign\u201d) Test network. For those new to AnonCreds:

"},{"location":"demo/Aries-Workshop/#lab-2-steps-to-follow","title":"Lab 2: Steps to Follow","text":"
  1. Log into your Traction Sandbox. You did record your Wallet ID and Key, right?
    1. If not \u2014 jump back to Lab 1 to create a new Traction Tenant and a connection to your mobile Wallet.
  2. Create a Schema:
    1. Click the menu item \u201cConfiguration\u201d and then \u201cSchema Storage\u201d.
    2. Click \u201cAdd Schema From Ledger\u201d and fill in the Schema Id with the value H7W22uhD4ueQdGaGeiCgaM:2:student id:1.0.0.
      1. By doing this, you (as the issuer) will be using a previously published schema. Click here to see the schema on the ledger.
    3. To see the details about your schema, hit the Expand (>) link, and then the subsequent > to \u201cView Raw Content.\"
  3. With the schema in place, it's time to become an issuer. To do that, you have to create a Credential Definition. Click on the \u201cCredential\u201d icon in the \u201cCredential Definition\u201d column of your schema to create the Credential Definition (CredDef) for the Schema. The \u201cTag\u201d can be any value you want \u2014 it is an issuer defined part of the identifier for the Credential Definition. Wait for the operation to complete. Click the \u201cRefresh\u201d button if needed to see the Create icon has been replaced with the identifier for your CredDef.
  4. Move to the menu item \"Configuration \u2192 Credential Definition Storage\" to see the CredDef you created. If you want, expand it to view the raw data. In this case, the raw data does not show the actual CredDef, but rather the Traction data about the CredDef. You can again use the BCovrin Test ledger browser to see your new, published CredDef.

Completed all the steps? Great! Feel free to create a second Schema and Cred Def, ideally one related to your first. That way you can try out a presentation request that pulls data from both credentials! When you create the second schema, use the \"Create Schema\" button, and add the claims you want to have in your new type of credential.

"},{"location":"demo/Aries-Workshop/#lab-3-issuing-credentials-to-a-mobile-wallet","title":"Lab 3: Issuing Credentials to a Mobile Wallet","text":"

In this lab we will use our Traction Tenant agent to issue instances of the credentials we created in Lab 2 to our Mobile Wallet we downloaded in Lab 1.

"},{"location":"demo/Aries-Workshop/#lab-3-steps-to-follow","title":"Lab 3: Steps to Follow","text":"
  1. If necessary, log into your Traction Sandbox with your Wallet ID and Key.
  2. Issue a Credential:
    1. Click the menu item \u201cIssuance\u201d and then \u201cOffer a Credential\u201d.
    2. Select the Credential Definition of the credential you want to issue.
    3. Select the Contact Name to whom you are issuing the credential\u2014the alias of the connection you made to your mobile Wallet.
    4. Click \u201cEnter Credential Value\u201d to pop up a data entry form for the attributes to populate.
      1. When you enter the date values that you want to use in predicates (e.g., \u201cOlder than 19\u201d), put the date into the following format: YYYYMMDD, e.g., 20231001. You cannot use a string date format, such as \u201cYYYY-MM-DD\u201d, if you want to use the attribute for predicate checking -- the value must be an integer (see the sketch after these steps for computing such values).
      2. We suggest you use realistic dates for Date of Birth (DOB) (e.g., 20-ish years in the past) and expiry (e.g., 3 years in the future) to make using them in predicates easier.
    5. Click \u201cSave\u201d when you are finished entering the attributes and review the information you have entered.
    6. When you are ready, click \u201cSend Offer\u201d to initiate the issuance of the credential.
  3. Receive the Credential:
    1. Open up your mobile Wallet and look for a notification about the credential offer. Where that appears may vary based on the Wallet you are using.
    2. Review the offer and then click the \u201cAccept\u201d button.
    3. Your new credential should be saved to your wallet.
  4. Review the Issuance Data:
    1. Back in your Traction Tenant, refresh the list to see the updated status of the issuance you just completed (should be \u201ccredential_issued\u201d or \u201ccredential_acked\u201d, depending on the Wallet you are using).
    2. Expand the issuance and again \u201cView Raw Content\u201d to see the data that was exchanged between the Traction Issuer and the Wallet.
  5. If you want, repeat the process for other credential types your Traction Tenant is capable of issuing.
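
As noted in the issuance steps above, date attributes destined for predicates must be integers in YYYYMMDD form. A small Python sketch for computing such values:

from datetime import date\n\ndef date_to_int(d: date) -> int:\n    \"\"\"Convert a date to the integer YYYYMMDD form used in AnonCreds predicates.\"\"\"\n    return int(d.strftime(\"%Y%m%d\"))\n\nprint(date_to_int(date(2003, 10, 1)))  # 20031001, e.g. a date of birth\nprint(date_to_int(date.today()))       # today, e.g. for a \"not expired\" check\n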

That\u2019s it! Pretty easy, eh? Of course, in a real issuer, the data would (very, very) likely not be hand-entered, but instead come from a backend system. Traction has an HTTP API (protected by the same Wallet ID and Key) that can be used from an application, to do things like this automatically. The Traction API embeds the ACA-Py API, so everything you can do in \u201cplain ACA-Py\u201d can also be done in Traction.

"},{"location":"demo/Aries-Workshop/#lab-4-requesting-and-sending-presentations","title":"Lab 4: Requesting and Sending Presentations","text":"

In this lab we will use our Traction Tenant agent as a verifier, requesting presentations, and your mobile Wallet as the holder responding with presentations that satisfy the requests. The user interface is a little rougher for this lab (you\u2019ll be dealing with JSON), but it should still be easy enough to do.

"},{"location":"demo/Aries-Workshop/#lab-4-steps-to-follow","title":"Lab 4: Steps to Follow","text":"
  1. If necessary, log into your Traction Sandbox with your Wallet ID and Key.
  2. Create and send a presentation request:
    1. Click the menu item \u201cVerification\u201d and then the button \u201cCreate Presentation Request\u201d.
    2. Select the Connection to whom you are sending the request\u2014the alias of the connection you made to your mobile Wallet.
    3. Update the example Presentation Request to match the credential that you want to request. Keep it simple for your first request\u2014it\u2019s easy to iterate in Traction to make your request more complicated. If you used the schema we suggested in Lab 2, just use the default presentation request. It should just work! If not, start from it, and:
      1. Update the value of \u201cschema_name\u201d to the name(s) of the schema for the credential(s) you issued.
      2. Update the group name(s) to something that makes sense for your credential(s) and make sure the attributes listed match your credential(s).
      3. Update (or perhaps remove) the \u201crequest_predicates\u201d JSON item, if it is not applicable to your credential.
    4. Update the optional fields (\u201cAuto Verify\u201d and \u201cOptional Comment\u201d) as you see fit. The \u201cOptional Comment\u201d goes into the list of Verifications so you can keep track of the different presentation requests you create.
    5. Click \u201cSubmit\u201d when your presentation request is ready.
  3. Respond to the Presentation Request:
    1. Open up your mobile Wallet and look for a notification about receiving a presentation request. Where that appears may vary based on the Wallet you are using.
    2. Review the information you are being asked to share, and then click the \u201cShare\u201d button to send the presentation.
  4. Review the Presentation Request Result:
    1. Back in your Traction Tenant, refresh the Verifications list to see the updated status of the presentation request you just completed. It should be something positive, like \u201cpresentation_received\u201d if all went well. It may be different depending on the Wallet you are using.
    2. If you want, expand the presentation request and \u201cView Raw Content\u201d to see the presentation request and presentation data exchanged between the Traction Verifier and the Wallet.
  5. Repeat the process, making the presentation request more complicated:
    1. From the list of presentations, use the arrow icon action to copy an existing presentation request and just re-run it, or evolve it.
    2. Ideas:
    3. Add predicates using date of birth (\u201colder than\u201d) and expiry (\u201cnot expired today\u201d).
      1. The p_value should be a relevant date \u2014 e.g., 19 (or whatever) years ago today for \u201colder than\u201d, and today for \u201cnot expired\u201d, both in the YYYYMMDD format (the integer form of the date).
      2. The p_type should be >= for \u201colder than\u201d, and <= for \u201cnot expired\u201d. See the table below for the form of the expression.
    4. Add a second credential group with a restriction for a different credential to the request, so the presentation is derived from two source credentials.
p_value     p_type    credential_data
20230527    <=        expiry_dateint
20030527    >=        dob_dateint
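
To make the table concrete: in an ACA-Py/AnonCreds proof request, a predicate is expressed as name/p_type/p_value and is read as attribute p_type p_value, which is the table's expressions written the other way around. A hedged sketch of the two predicates above (the attribute names dob_dateint and expiry_dateint and the schema_name restriction are assumptions based on the suggested Lab 2 schema):

# sketch: requested_predicates in the standard AnonCreds orientation\nrequested_predicates = {\n    \"older_than\": {\n        \"name\": \"dob_dateint\",\n        \"p_type\": \"<=\",\n        \"p_value\": 20030527,  # dob_dateint <= 20030527, i.e. 20030527 >= dob_dateint\n        \"restrictions\": [{\"schema_name\": \"student id\"}],  # assumption: Lab 2 schema\n    },\n    \"not_expired\": {\n        \"name\": \"expiry_dateint\",\n        \"p_type\": \">=\",\n        \"p_value\": 20230527,  # expiry_dateint >= 20230527, i.e. 20230527 <= expiry_dateint\n        \"restrictions\": [{\"schema_name\": \"student id\"}],\n    },\n}\n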

That completes this lab \u2014 although feel free to continue to play with all of the steps (setup, issuing and presenting). You should have a pretty solid handle on exactly what you can and can\u2019t do with AnonCreds!

"},{"location":"demo/Aries-Workshop/#whats-next","title":"What's Next","text":"

The following are a couple of things that you might want to do next--if you are a developer. Unlike the labs you have just completed, these \"next steps\" are geared towards developers, providing details about building the use of verifiable credentials (issuing, verifying) into your own application.

Want to use Traction in your own environment? Feel free! It's open source, and comes with Helm Charts for easy deployment in container-orchestrated environments. Contributions back to the project are always welcome!

"},{"location":"demo/Aries-Workshop/#whats-next-the-aca-py-openapi","title":"What\u2019s Next: The ACA-Py OpenAPI","text":"

Are you going to build an app that uses Traction or an instance of the Aries Cloud Agent Python (ACA-Py)? If so, your next step is to try out the ACA-Py OpenAPI (aka Swagger)\u2014by hand at first, and then from your application. This is a very high-level overview that assumes the reader is a developer who knows a fair amount about Aries protocols, HTTP APIs, and OpenAPI interfaces.

To access and use your Tenant's OpenAPI (aka Swagger) interface:

The ACA-Py/Traction API is pretty large, but it is reasonably well organized, and you should recognize a lot of the items from using Traction. Try some of the \u201cGET\u201d endpoints to see if you recognize the items returned.
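
For example, here is a command-line sketch of calling the GET /connections endpoint; the base URL and bearer token are hypothetical placeholders for your own tenant's values, and the authorization header assumes a tenant token:

# Hypothetical tenant API base URL and token -- substitute your own values\nBASE_URL='https://<your-traction-tenant-api>'\nTOKEN='<your-tenant-token>'\n# List the tenant's connections\ncurl -s -H \"Authorization: Bearer $TOKEN\" \"$BASE_URL/connections\"\n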

We\u2019re still working on a good demo for the OpenAPI from Traction, but this one from ACA-Py is a good outline of the process. It doesn't use your Traction Tenant, but you should get the idea about the sequence of calls to make to accomplish Aries-type activities. For example, see if you can carry out the steps of Lab 4 with your mobile agent by invoking the right sequence of OpenAPI calls.

"},{"location":"demo/Aries-Workshop/#whats-next-experiment-with-an-issuer-web-app","title":"What's Next: Experiment With an Issuer Web App","text":"

If you are challenged to use Traction or Aries Cloud Agent Python to become an issuer, you will likely be building API calls into your line of business web application. To get an idea of what that will entail, we're delighted to direct you to a very simple web app that one of your predecessors on this same journey created (and contributed!) to learn about using the Traction OpenAPI. Check out this Traction Issuance Demo and try it out yourself, with your Sandbox tenant. Once you review the code, you should have an excellent idea of how you can add these same capabilities to your line of business application.

"},{"location":"demo/AriesOpenAPIDemo/","title":"Aries OpenAPI Demo","text":"

What better way to learn about controllers than by actually being one yourself! In this demo, that\u2019s just what happens\u2014you are the controller. You have access to the full set of API endpoints exposed by an ACA-Py instance, and you will see the events coming from ACA-Py as they happen. Using that information, you'll help Alice's and Faber's agents connect, Faber's agent issue an education credential to Alice, and then ask Alice to prove she possesses the credential. Who knows why Faber needs to get the proof, but it lets us show off more protocols.

"},{"location":"demo/AriesOpenAPIDemo/#contents","title":"Contents","text":""},{"location":"demo/AriesOpenAPIDemo/#getting-started","title":"Getting Started","text":"

We will get started by opening three browser tabs that will be used throughout the lab. Two will be Swagger UIs for the Faber and Alice agents, and one will be for the public ledger (showing the Hyperledger Indy ledger). As well, we'll keep the terminal sessions where we started the demos handy, as we'll be grabbing information from them as well.

Let's start with the ledger browser. For this demo, we're going to use an open public ledger operated by the BC Government's VON Team. In your first browser tab, go to: http://test.bcovrin.vonx.io. This will be called the \"ledger tab\" in the instructions below.

For the rest of the setup, you can choose to run the terminal sessions in your browser (no local resources needed), or you can run them in Docker on your local system. Your choice; each is covered in the next two sections.

Note: In the following, when we start the agents we use several special demo settings. The command we use is this: LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber --events --no-auto --bg. In that:

"},{"location":"demo/AriesOpenAPIDemo/#running-in-a-browser","title":"Running in a Browser","text":"

To run the necessary terminal sessions in your browser, go to the Docker playground service Play with Docker. Don't know about Play with Docker? Check this out to learn more.

"},{"location":"demo/AriesOpenAPIDemo/#start-the-faber-agent","title":"Start the Faber Agent","text":"

In a browser, go to the Play with Docker home page, Login (if necessary) and click \"Start.\" On the next screen, click (in the left menu) \"+Add a new instance.\" That will start up a terminal in your browser. Run the following commands to start the Faber agent.

git clone https://github.com/hyperledger/aries-cloudagent-python\ncd aries-cloudagent-python/demo\nLEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber --events --no-auto --bg\n

Once you are back at the command prompt, we'll use docker's logging capability to see what's being written to the terminal:

docker logs -f faber\n

Once the Faber agent has started up (with the invite displayed), click the link near the top of the screen 8021. That will start an instance of the OpenAPI/Swagger user interface connected to the Faber instance. Note that the URL on the OpenAPI/Swagger instance is: http://ip....8021.direct....

Remember that the OpenAPI/Swagger browser tab with an address containing 8021 is the Faber agent.

NOTE: Hit \"Ctrl-C\" at any time to get back to the command line. When you are done with the command line, you can return to seeing the logs from the Faber agent by running docker logs -f faber

Show me a screenshot!"},{"location":"demo/AriesOpenAPIDemo/#start-the-alice-agent","title":"Start the Alice Agent","text":"

Now to start Alice's agent. Click the \"+Add a new instance\" button again to open another terminal session. Run the following commands to start Alice's agent:

git clone https://github.com/hyperledger/aries-cloudagent-python\ncd aries-cloudagent-python/demo\nLEDGER_URL=http://test.bcovrin.vonx.io ./run_demo alice --events --no-auto --bg\n

Once you are back at the command prompt, we'll use docker's logging capability to see what's being written to the terminal:

docker logs -f alice\n

You can ignore a message like WARNING: your terminal doesn't support cursor position requests (CPR).

Once the Alice agent has started up (with the invite: prompt displayed), click the link near the top of the screen 8031. That will start an instance of the OpenAPI/Swagger User Interface connected to the Alice instance. Note that the URL on the OpenAPI/Swagger instance is: http://ip....8031.direct....

NOTE: Hit \"Ctrl-C\" to get back to the command line. When you are done with the command line, you can return to seeing the logs from the Faber agent by running docker logs -f faber

Remember that the OpenAPI/Swagger browser tab with an address containing 8031 is Alice's agent.

Show me a screenshot!

You are ready to go. Skip down to the Using the OpenAPI/Swagger User Interface section.

"},{"location":"demo/AriesOpenAPIDemo/#running-in-docker","title":"Running in Docker","text":"

To run the demo on your local system, you must have git, a running Docker installation, and terminal windows running bash. Need more information about getting set up? Click here to learn more.

"},{"location":"demo/AriesOpenAPIDemo/#start-the-faber-agent_1","title":"Start the Faber Agent","text":"

To begin running the demo in Docker, open up two terminal windows, one each for Faber\u2019s and Alice\u2019s agent.

In the first terminal window, clone the ACA-Py repo, change into the demo folder and start the Faber agent:

git clone https://github.com/hyperledger/aries-cloudagent-python\ncd aries-cloudagent-python/demo\nLEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber --events --no-auto --bg\n

Once you are back at the command prompt, we'll use docker's logging capability to see what's being written to the terminal:

docker logs -f faber\n

If all goes well, the agent will show a message indicating it is running. Use the second browser tab to navigate to http://localhost:8021. You should see an OpenAPI/Swagger user interface with a (long-ish) list of API endpoints. These are the endpoints exposed by the Faber agent.

NOTE: Hit \"Ctrl-C\" to get back to the command line. When you are done with the command line, you can return to seeing the logs from the Faber agent by running docker logs -f faber

Remember that the OpenAPI/Swagger browser tab with an address containing 8021 is the Faber agent.

Show me a screenshot!"},{"location":"demo/AriesOpenAPIDemo/#start-the-alice-agent_1","title":"Start the Alice Agent","text":"

To start Alice's agent, open up a second terminal window and in it, change to the same demo directory as where Faber's agent was started above. Once there, start Alice's agent:

LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo alice --events --no-auto --bg\n

Once you are back at the command prompt, we'll use docker's logging capability to see what's being written to the terminal:

docker logs -f alice\n

You can ignore a message like WARNING: your terminal doesn't support cursor position requests (CPR) that may appear.

If all goes well, the agent will show a message indicating it is running. Open a third browser tab and navigate to http://localhost:8031. Again, you should see the OpenAPI/Swagger user interface with a list of API endpoints, this time the endpoints for Alice\u2019s agent.

NOTE: Hit \"Ctrl-C\" to get back to the command line. When you are done with the command line, you can return to seeing the logs from the Alice agent by running docker logs -f alice

Remember that the OpenAPI/Swagger browser tab with an address containing 8031 is Alice's agent.

Show me a screenshot!"},{"location":"demo/AriesOpenAPIDemo/#restarting-the-docker-containers","title":"Restarting the Docker Containers","text":"

When you complete the entire demo (not now!!), you will need to stop the two agents. To do that, get to the command line by hitting Ctrl-C and run:

docker stop faber\ndocker stop alice\n
"},{"location":"demo/AriesOpenAPIDemo/#using-the-openapiswagger-user-interface","title":"Using the OpenAPI/Swagger User Interface","text":"

Try to organize what you see on your screen to include both the Alice and Faber OpenAPI/Swagger tabs, and both (Alice and Faber) terminal sessions, all at the same time. After you execute an API call in one of the browser tabs, you will see a webhook event from the ACA-Py instance in the terminal window of the other agent. That's a controller's life. See an event, process it, send a response.

From time to time you will want to see what's happening on the ledger, so keep that tab handy as well. Also, if you make an error with one of the commands (e.g., bad data or improperly structured JSON), you will see the errors in the terminals.

In the instructions that follow, we\u2019ll let you know if you need to be in the Faber, Alice or Indy browser tab. We\u2019ll leave it to you to track which is which.

Using the OpenAPI/Swagger user interface is pretty simple. In the steps below, we\u2019ll indicate what API endpoint you need to use, such as POST /connections/create-invitation. That means you must:

  1. scroll to and find that endpoint;
  2. click on the endpoint name to expand its section of the UI;
  3. click on the Try it out button;
  4. fill in any data necessary to run the command;
  5. click Execute;
  6. check the response to see if the request worked.

So, the mechanical steps are easy. It\u2019s the fourth step from the list above that can be tricky: supplying the right data and, where JSON is involved, getting the syntax correct. Braces and quotes can be a pain! When steps don\u2019t work, start your debugging by looking at your JSON.

Enough with the preliminaries, let\u2019s get started!

"},{"location":"demo/AriesOpenAPIDemo/#establishing-a-connection","title":"Establishing a Connection","text":"

We\u2019ll start the demo by establishing a connection between the Alice and Faber agents. We\u2019re starting there to demonstrate that you can use agents without having a ledger. We won\u2019t be using the Indy public ledger at all for this step. Since the agents communicate using DIDComm messaging and connect by exchanging pairwise DIDs and DIDDocs based on (an early version of) the did:peer DID method, a public ledger is not needed.

"},{"location":"demo/AriesOpenAPIDemo/#use-the-faber-agent-to-create-an-invitation","title":"Use the Faber Agent to Create an Invitation","text":"

In the Faber browser tab, navigate to the POST /connections/create-invitation endpoint. Replace the sample body with an empty JSON object ({}) and execute the call. If successful, you should see a connection id, an invitation, and the invitation URL. The connection ids will be different on each run.

Hint: set an Alias on the Invitation; this makes it easier to find the Connection later on.
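
If you prefer the command line, a minimal sketch of the equivalent call (assuming the demo's Faber admin API on localhost port 8021, with no admin API key set) is:

# Create an invitation; the optional alias query parameter makes the connection easier to find later\ncurl -s -X POST 'http://localhost:8021/connections/create-invitation?alias=Alice' \\\n  -H 'Content-Type: application/json' -d '{}'\n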

Show me a screenshot - Create Invitation Request Show me a screenshot - Create Invitation Response"},{"location":"demo/AriesOpenAPIDemo/#copy-the-invitation-created-by-the-faber-agent","title":"Copy the Invitation created by the Faber Agent","text":"

Copy the entire block of the invitation object, from the curly brackets {}, excluding the trailing comma.

Show me a screenshot - Create Invitation Response

Before switching over to the Alice browser tab, scroll to and execute the GET /connections endpoint to see the list of Faber's connections. You should see a connection with a connection_id that is identical to the one in the invitation you just created, and whose state is invitation.

Show me a screenshot - Faber Connection Status"},{"location":"demo/AriesOpenAPIDemo/#use-the-alice-agent-to-receive-fabers-invitation","title":"Use the Alice Agent to Receive Faber's Invitation","text":"

Switch to the Alice browser tab and get ready to execute the POST /connections/receive-invitation endpoint. Select all of the pre-populated text and replace it with the invitation object from the Faber tab. When you click Execute you should get back a connection response with a connection Id, an invitation key, and the state of the connection, which should be invitation.

Hint: set an Alias on the Invitation; this makes it easier to find the Connection later on.

Show me a screenshot - Receive Invitation Request Show me a screenshot - Receive Invitation Response

A key observation to make here: the \"copy and paste\" we are doing from Faber's agent to Alice's agent is what is called an \"out of band\" message. Because we don't yet have a DIDComm connection between the two agents, we have to convey the invitation in plaintext (we can't encrypt it; there is no channel) using some mechanism other than DIDComm. With mobile agents, that's where QR codes often come in. Once we have the invitation in the receiver's agent, we can get back to using DIDComm.

"},{"location":"demo/AriesOpenAPIDemo/#tell-alices-agent-to-accept-the-invitation","title":"Tell Alice's Agent to Accept the Invitation","text":"

At this point Alice has simply stored the invitation in her wallet. You can see the status using the GET /connections endpoint.

Show me a screenshot

To complete a connection with Faber, she must accept the invitation and send a corresponding connection request to Faber. Find the connection_id in the connection response from the previous POST /connections/receive-invitation endpoint call. You may note that the same data was sent to the controller as an event from ACA-Py and is visible in the terminal. Scroll to the POST /connections/{conn_id}/accept-invitation endpoint and paste the connection_id in the id parameter field (you will have to click the Try it out button to see the available URL parameters). The response from clicking Execute should show that the connection has a state of request.
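
For reference, here is a command-line sketch of the receive and accept steps, assuming Alice's admin API is on localhost port 8031 and that you have saved Faber's invitation object to a file named invitation.json:

# Receive the invitation copied from Faber\ncurl -s -X POST 'http://localhost:8031/connections/receive-invitation' \\\n  -H 'Content-Type: application/json' -d @invitation.json\n# Accept it, substituting the connection_id returned by the previous call\ncurl -s -X POST 'http://localhost:8031/connections/<conn_id>/accept-invitation'\n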

Show me a screenshot - Accept Invitation Request Show me a screenshot - Accept Invitation Response"},{"location":"demo/AriesOpenAPIDemo/#the-faber-agent-gets-the-request","title":"The Faber Agent Gets the Request","text":"

In the Faber terminal session, an event (a web service callback from ACA-Py to the controller) has been received about the request from Alice. Copy the connection_id from the event for the next step.

Show me the event

Note that the connection ID held by Alice is different from the one held by Faber. That makes sense, as both independently created connection objects, each with a unique, self-generated GUID.

"},{"location":"demo/AriesOpenAPIDemo/#the-faber-agent-completes-the-connection","title":"The Faber Agent Completes the Connection","text":"

To complete the connection process, Faber will respond to the connection request from Alice. Scroll to the POST /connections/{conn_id}/accept-request endpoint and paste the connection_id you previously copied into the id parameter field (you will have to click the Try it out button to see the available URL parameters). The response from clicking the Execute button should show that the connection has a state of response, which indicates that Faber has accepted Alice's connection request.
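
A command-line sketch of the same step, assuming Faber's admin API is on localhost port 8021:

# Accept Alice's connection request, using the connection_id copied from the event\ncurl -s -X POST 'http://localhost:8021/connections/<conn_id>/accept-request'\n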

Show me a screenshot - Accept Connection Request Show me a screenshot - Accept Connection Response"},{"location":"demo/AriesOpenAPIDemo/#review-the-connection-status-in-alices-agent","title":"Review the Connection Status in Alice's Agent","text":"

Switch over to the Alice browser tab.

Scroll to and execute GET /connections to see a list of Alice's connections, and the information tracked about each connection. You should see the one connection Alice\u2019s agent has, that it is with the Faber agent, and that its state is active.

Show me a screenshot - Alice Connection Status

As with Faber's side of the connection, Alice received a notification that Faber had accepted her connection request.

Show me the event"},{"location":"demo/AriesOpenAPIDemo/#review-the-connection-status-in-fabers-agent","title":"Review the Connection Status in Faber's Agent","text":"

You are connected! Switch to the Faber browser tab and run the same GET /connections endpoint to see Faber's view of the connection. Its state is also active. Note the connection_id, you\u2019ll need it later in the tutorial.

Show me a screenshot - Faber Connection Status"},{"location":"demo/AriesOpenAPIDemo/#basic-messaging-between-agents","title":"Basic Messaging Between Agents","text":"

Once you have a connection between two agents, you have a channel to exchange secure, encrypted messages. In fact these underlying encrypted messages (similar to envelopes in a postal system) enable the delivery of messages that form the higher level protocols, such as issuing Credentials and providing Proofs. So, let's send a couple of messages that contain the simplest of content\u2014text. For this we will use the Basic Message protocol, Aries RFC 0095.

"},{"location":"demo/AriesOpenAPIDemo/#sending-a-message-from-alice-to-faber","title":"Sending a message from Alice to Faber","text":"

On Alice's swagger page, scroll to the POST /connections/{conn_id}/send-message endpoint. Click on Try it Out and enter a message in the body provided (for example {\"content\": \"Hello Faber\"}). Enter the connection id of Alice's connection in the field provided. Then click on Execute.
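
The same call as a command-line sketch, with a placeholder connection id and Alice's admin API assumed on localhost port 8031:

# Send a basic message (RFC 0095) from Alice to Faber over the connection\ncurl -s -X POST 'http://localhost:8031/connections/<conn_id>/send-message' \\\n  -H 'Content-Type: application/json' -d '{\"content\": \"Hello Faber\"}'\n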

Show me a screenshot"},{"location":"demo/AriesOpenAPIDemo/#receiving-a-basic-message-faber","title":"Receiving a Basic Message (Faber)","text":"

How does Faber know that a message was sent? If you take a look at Faber's console window, you can see that Faber's agent has raised an Event that the message was received:

Show me a screenshot

Faber's controller application can take whatever action is necessary to process this message. It could trigger some application code, or it might just be something the Faber application needs to display to its user (for example a reminder about some action the user needs to take).

"},{"location":"demo/AriesOpenAPIDemo/#alices-agent-verifies-that-faber-has-received-the-message","title":"Alice's Agent Verifies that Faber has Received the Message","text":"

How does Alice get feedback that Faber has received the message? The same way - when Faber's agent acknowledges receipt of the message, Alice's agent raises an Event to let the Alice controller know:

Show me a screenshot

Again, Alice's agent can take whatever action is necessary, possibly just flagging the message as having been received.

"},{"location":"demo/AriesOpenAPIDemo/#preparing-to-issue-a-credential","title":"Preparing to Issue a Credential","text":"

The next thing we want to do in the demo is have the Faber agent issue a credential to Alice\u2019s agent. To this point, we have not used the Indy ledger at all. Establishing the connection and messaging has been done with pairwise DIDs based on the did:peer method. Verifiable credentials must be rooted in a public DID ledger to enable the presentation of proofs.

Before the Faber agent can issue a credential, it must register a DID on the Indy public ledger, publish a schema, and create a credential definition. In the \u201creal world\u201d, the Faber agent would do this before connecting with any other agents. And, since we are using the handy \"./run_demo faber\" (and \"./run_demo alice\") scripts to start up our agents, the Faber version of the script has already:

  1. registered a public DID and stored it on the ledger;
  2. created a schema and registered it on the ledger;
  3. created a credential definition and registered it on the ledger.

The schema and credential definition could also be created through this swagger interface.

We don't cover the details of those actions in this tutorial, but there are other materials available that go through these details.

To Do: Add a link to directions for doing this manually, and to where in the controller Python code this is done.

"},{"location":"demo/AriesOpenAPIDemo/#confirming-your-schema-and-credential-definition","title":"Confirming your Schema and Credential Definition","text":"

You can confirm the schema and credential definition were published by going back to the Indy ledger browser tab and using Faber's public DID. You may have saved that from a previous step; if not, here is an API call you can make to get that information. On Faber's swagger page, scroll to the GET /wallet/did/public endpoint, click on Try it Out and Execute, and you will see Faber's public DID.

Show me a screenshot

On the ledger browser of the BCovrin ledger, click the Domain page, refresh, and paste the Faber public DID into the Filter: field:

Show me a screenshot

The ledger browser should refresh and display the four (4) transactions on the ledger related to this DID:

Show me the ledger transactions

You can also look up the Schema and Credential Definition information using Faber's swagger page. Use the GET /schemas/created endpoint to get a list of schemas, including the one schema_id that the Faber agent has defined. Keep this section of the Swagger page expanded as we'll need to copy the Id as part of starting the issue credential protocol coming next.

Show me a screenshot

Likewise use the GET /credential-definitions/created endpoint to get the list of the one (in this case) credential definition id created by Faber. Keep this section of the Swagger page expanded as we'll also need to copy the Id as part of starting the issue credential protocol coming next.

Show me a screenshot

Hint: Remember how the schema and credential definitions were created for you as Faber started up? To do it yourself, use the POST versions of these endpoints. Now you know!
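
For reference, here are command-line sketches of the three lookups used in this section, assuming Faber's admin API is on localhost port 8021:

# Faber's public DID\ncurl -s 'http://localhost:8021/wallet/did/public'\n# The schema(s) Faber has created\ncurl -s 'http://localhost:8021/schemas/created'\n# The credential definition(s) Faber has created\ncurl -s 'http://localhost:8021/credential-definitions/created'\n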

"},{"location":"demo/AriesOpenAPIDemo/#notes","title":"Notes","text":"

The one-time setup work for issuing a credential is complete\u2014creating a DID, schema, and credential definition. We can now issue 1 or 1 million credentials without having to do those steps again. Astute readers might note that we did not set up a revocation registry, so we cannot revoke the credentials we issue with that credential definition. You can\u2019t have everything in an \"easy\" tutorial!

"},{"location":"demo/AriesOpenAPIDemo/#issuing-a-credential","title":"Issuing a Credential","text":"

Triggering the issuance of a credential from the Faber agent to Alice\u2019s agent is done with another API call. In the Faber browser tab, scroll down to the POST /issue-credential-2.0/send endpoint and get ready to (but don\u2019t yet) execute the request. Before executing it, you need to update most of the data elements in the JSON. We now cover how to update all the fields.

"},{"location":"demo/AriesOpenAPIDemo/#faber-preparing-to-issue-a-credential","title":"Faber - Preparing to Issue a Credential","text":"

First, get the connection Id for Faber's connection with Alice. You can copy that from the Faber terminal (the last received event includes it), or scroll up on the Faber swagger tab to the GET /connections API endpoint, execute it, copy the connection_id value, and paste it into the same field in the issue credential JSON.

Click here to see a screenshot

For the following fields, scroll on Faber's Swagger page to the listed endpoint, execute (if necessary), copy the response value and paste as the values of the following JSON items:

Paste those values into the filter section's indy subsection. Remove the \"dif\" subsection of the filter section within the JSON, and specify the remaining indy filter criteria as follows:

Finally, set the remaining values as follows:

- auto_remove: set to true (no quotes); see the note below
- comment: set to any string. It's intended to let Alice know something about the credential being offered.
- trace: set to false (no quotes). It's for troubleshooting, performance profiling, and/or diagnostics.

By setting auto_remove to true, ACA-Py will automatically remove the credential exchange record after the protocol completes. When implementing a controller, this is the likely setting to use to reduce agent storage usage, but implies if a record of the issuance of the credential is needed, the controller must save it somewhere else. For example, Faber College might extend their Student Information System, where they track all their students, to record when credentials are issued to students, and the Ids of the issued credentials.
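
Putting the fields above together, here is a hedged skeleton of the request body; the ids are placeholders to substitute, only one attribute is shown (the full list is covered in the next section), and Faber's admin API is assumed on localhost port 8021:

# Skeleton of the POST /issue-credential-2.0/send call (placeholder values)\ncurl -s -X POST 'http://localhost:8021/issue-credential-2.0/send' \\\n  -H 'Content-Type: application/json' \\\n  -d '{\n    \"connection_id\": \"<conn_id>\",\n    \"auto_remove\": true,\n    \"comment\": \"Your degree credential from Faber College\",\n    \"trace\": false,\n    \"credential_preview\": {\n      \"@type\": \"issue-credential/2.0/credential-preview\",\n      \"attributes\": [{\"name\": \"name\", \"value\": \"Alice Smith\"}]\n    },\n    \"filter\": {\n      \"indy\": {\n        \"schema_id\": \"<schema_id>\",\n        \"cred_def_id\": \"<cred_def_id>\"\n      }\n    }\n  }'\n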

"},{"location":"demo/AriesOpenAPIDemo/#faber-issuing-the-credential","title":"Faber - Issuing the Credential","text":"

Finally, we need to put the data values for the credential_preview section into the JSON. Copy the following and paste it between the square brackets of the attributes item, replacing what is there. Feel free to change the attribute value items, but don't change the labels or names:

      {\n        \"name\": \"name\",\n        \"value\": \"Alice Smith\"\n      },\n      {\n        \"name\": \"timestamp\",\n        \"value\": \"1234567890\"\n      },\n      {\n        \"name\": \"date\",\n        \"value\": \"2018-05-28\"\n      },\n      {\n        \"name\": \"degree\",\n        \"value\": \"Maths\"\n      },\n      {\n        \"name\": \"birthdate_dateint\",\n        \"value\": \"19640101\"\n      }\n

(Note that the birthdate above is used to present later on to pass an \"age proof\".)

OK, finally, you are ready to click Execute. The request should work, but if it doesn\u2019t - check your JSON! Did you get all the quotes and commas right?

Show me a screenshot - credential offer

To confirm the issuance worked, scroll up on the Faber Swagger page to the issue-credential v2.0 section and execute the GET /issue-credential-2.0/records endpoint. You should see a lot of information about the exchange just initiated.

"},{"location":"demo/AriesOpenAPIDemo/#alice-receives-credential","title":"Alice Receives Credential","text":"

Let\u2019s look at it from Alice\u2019s side. Alice's agent source code automatically handles credential offers by immediately responding with a credential request. Scroll back in the Alice terminal to where the credential issuance started. If you've followed the full script, that is just after where we used the basic message protocol to send text messages between Alice and Faber.

Alice's agent first received a notification of a Credential Offer, to which it responded with a Credential Request. Faber received the Credential Request and responded in turn with an Issue Credential message. Scroll down through the events from ACA-Py to the controller to see the notifications of those messages. Make sure you scroll all the way to the bottom of the terminal so you can continue with the process.

Show me a screenshot - issue credential"},{"location":"demo/AriesOpenAPIDemo/#alice-stores-credential-in-her-wallet","title":"Alice Stores Credential in her Wallet","text":"

We can check (via Alice's Swagger interface) the issue credential status by hitting the GET /issue-credential-2.0/records endpoint. Note that within the results, the cred_ex_record just received has a state of credential-received, but not yet done. Let's address that.

Show me a screenshot - check credential exchange status

First, we need the cred_ex_id from the API call response above, or from the event in the terminal; use the endpoint POST /issue-credential-2.0/records/{cred_ex_id}/store to tell Alice's ACA-Py instance to store the credential in agent storage (aka the Indy Wallet). Note that in the JSON for that endpoint we can provide a credential Id to store in the wallet by setting a value in the credential_id string. A real controller might use the cred_ex_id for that, or use something else that makes sense in the agent's business scenario (but the agent generates a random credential identifier by default).
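
A command-line sketch of the store call; the cred_ex_id comes from the response or event above, and the credential_id value here is a hypothetical example:

# Store the received credential in Alice's wallet, supplying an optional credential_id\ncurl -s -X POST 'http://localhost:8031/issue-credential-2.0/records/<cred_ex_id>/store' \\\n  -H 'Content-Type: application/json' -d '{\"credential_id\": \"alice-degree-credential\"}'\n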

Show me a screenshot - store credential

Now, in Alice\u2019s swagger browser tab, find the credentials section and within that, execute the GET /credentials endpoint. There should be a list of credentials held by Alice, with just a single entry, the credential issued from the Faber agent. Note that the element referent is the value of the credential_id element used in other calls. referent is the name returned in the indy-sdk call to get the set of credentials for the wallet and ACA-Py code does not change it in the response.

"},{"location":"demo/AriesOpenAPIDemo/#faber-receives-acknowledgment-that-the-credential-was-received","title":"Faber Receives Acknowledgment that the Credential was Received","text":"

On the Faber side, we can see by scanning back in the terminal that it received events notifying it that the credential was issued and accepted.

Show me Faber's event activity

Note that once the credential processing completed, Faber's agent deleted the credential exchange record from its wallet. This can be confirmed by executing the endpoint GET /issue-credential-2.0/records.

Show me a screenshot

You\u2019ve done it, issued a credential! w00t!

"},{"location":"demo/AriesOpenAPIDemo/#issue-credential-notes","title":"Issue Credential Notes","text":"

Those that know something about the Indy process for issuing a credential and the DIDComm Issue Credential protocol know that there are multiple steps to issuing credentials: a back and forth between the issuer and the holder to (at least) offer, request, and issue the credential. All of those messages happened, but the two agents took care of those details rather than bothering the controller (you, in this case) with managing the back and forth.

"},{"location":"demo/AriesOpenAPIDemo/#bonus-points","title":"Bonus Points","text":"

If you would like to perform all of the issuance steps manually on the Faber agent side, use a sequence of the other /issue-credential-2.0/ messages. Use the GET /issue-credential-2.0/records to both check the credential exchange state as you progress through the protocol and to find some of the data you\u2019ll need in executing the sequence of requests.

The following table lists endpoints that you need to call (\"REST service\") and callbacks that your agent will receive (\"callback\") that you need to respond to. See the detailed API docs.

Protocol Step | Faber (Issuer) | Alice (Holder) | Notes
--- | --- | --- | ---
Send Credential Offer | POST /issue-credential-2.0/send-offer | | REST service
Receive Offer | | /issue_credential_v2_0/ | callback
Send Credential Request | | POST /issue-credential-2.0/records/{cred_ex_id}/send-request | REST service
Receive Request | /issue_credential_v2_0/ | | callback
Issue Credential | POST /issue-credential-2.0/records/{cred_ex_id}/issue | | REST service
Receive Credential | | /issue_credential_v2_0/ | callback
Store Credential | | POST /issue-credential-2.0/records/{cred_ex_id}/store | REST service
Receive Acknowledgement | /issue_credential_v2_0/ | | callback
Store Credential Id | | application function |
"},{"location":"demo/AriesOpenAPIDemo/#requestingpresenting-a-proof","title":"Requesting/Presenting a Proof","text":"

Alice now has her Faber credential. Let\u2019s have the Faber agent send a request for a presentation (a proof) using that credential. This should be pretty easy for you at this point.

"},{"location":"demo/AriesOpenAPIDemo/#faber-sends-a-proof-request","title":"Faber sends a Proof Request","text":"

From the Faber browser tab, get ready to execute the POST /present-proof-2.0/send-request endpoint. After hitting Try it out, erase the data in the block labelled \"Edit Value Model\", replacing it with the text below. Once that is done, replace each instance of cred_def_id in the JSON (there are four instances) and connection_id with the values found using the same techniques we've used earlier in this tutorial. Both can be found by scrolling back a little in the Faber terminal, or you can execute API endpoints we've already covered. You can also change the value of the comment item to whatever you want.

{\n  \"comment\": \"This is a comment about the reason for the proof\",\n  \"connection_id\": \"e469e0f3-2b4d-4b12-9ac7-293f23e8a816\",\n  \"presentation_request\": {\n    \"indy\": {\n      \"name\": \"Proof of Education\",\n      \"version\": \"1.0\",\n      \"requested_attributes\": {\n        \"0_name_uuid\": {\n          \"name\": \"name\",\n          \"restrictions\": [\n            {\n              \"cred_def_id\": \"SsX9siFWXJyCAmXnHY514N:3:CL:8:faber.agent.degree_schema\"\n            }\n          ]\n        },\n        \"0_date_uuid\": {\n          \"name\": \"date\",\n          \"restrictions\": [\n            {\n              \"cred_def_id\": \"SsX9siFWXJyCAmXnHY514N:3:CL:8:faber.agent.degree_schema\"\n            }\n          ]\n        },\n        \"0_degree_uuid\": {\n          \"name\": \"degree\",\n          \"restrictions\": [\n            {\n              \"cred_def_id\": \"SsX9siFWXJyCAmXnHY514N:3:CL:8:faber.agent.degree_schema\"\n            }\n          ]\n        },\n        \"0_self_attested_thing_uuid\": {\n          \"name\": \"self_attested_thing\"\n        }\n      },\n      \"requested_predicates\": {\n        \"0_age_GE_uuid\": {\n          \"name\": \"birthdate_dateint\",\n          \"p_type\": \"<=\",\n          \"p_value\": 20030101,\n          \"restrictions\": [\n            {\n              \"cred_def_id\": \"SsX9siFWXJyCAmXnHY514N:3:CL:8:faber.agent.degree_schema\"\n            }\n          ]\n        }\n      }\n    }\n  }\n}\n

(Note that the birthdate requested above is used as an \"age proof\", the calculation is something like now() - years(18), and the presented birthdate must be on or before this date. You can see the calculation in action in the faber.py demo code.)

Notice that the proof request is using a predicate to check if Alice is older than 18 without asking for her age. Not sure what this has to do with her education level! Click Execute and cross your fingers. If the request fails check your JSON!

Show me a screenshot - send proof request"},{"location":"demo/AriesOpenAPIDemo/#alice-responding-to-the-proof-request","title":"Alice - Responding to the Proof Request","text":"

As before, Alice receives a webhook event from her agent telling her she has received a Proof Request. In our scenario, the ACA-Py instance automatically selects a matching credential and responds with a Proof.

Show me Alice's event activity

In a real scenario, for example if Alice had a mobile agent on her smartphone, the agent would prompt Alice whether she wanted to respond or not.

"},{"location":"demo/AriesOpenAPIDemo/#faber-verifying-the-proof","title":"Faber - Verifying the Proof","text":"

Note that in the response, the state is request-sent. That is because when the HTTP response was generated (immediately after sending the request), Alice's agent had not yet responded to the request. We\u2019ll have to do another request to verify the presentation worked. Copy the value of the pres_ex_id field from the event in the Faber terminal and use it in executing the GET /present-proof-2.0/records/{pres_ex_id} endpoint. That should return a result showing the state as done and verified as true. Proof positive!
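
A command-line sketch of that check, using the pres_ex_id copied from the Faber terminal:

# Fetch the presentation exchange record; look for state done and verified true\ncurl -s 'http://localhost:8021/present-proof-2.0/records/<pres_ex_id>'\n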

You can see some of Faber's activity below:

Show me Faber's event activity"},{"location":"demo/AriesOpenAPIDemo/#present-proof-notes","title":"Present Proof Notes","text":"

As with the issue credential process, the agents handled some of the presentation steps without bothering the controller. In this case, Alice's agent processed the presentation request automatically through its handler for the present_proof_v2_0 event, and her wallet contained exactly one credential that satisfied the presentation request from the Faber agent. Similarly, the Faber agent's handler for the event responded automatically, so on receipt of the presentation, it verified the presentation and updated the status accordingly.

"},{"location":"demo/AriesOpenAPIDemo/#bonus-points_1","title":"Bonus Points","text":"

If you would like to perform all of the proof request/response steps manually, you can call all of the individual /present-proof-2.0 messages.

The following table lists endpoints that you need to call (\"REST service\") and callbacks that your agent will receive (\"callback\") that you need to respond to. See the detailed API docs.

Protocol Step | Faber (Verifier) | Alice (Holder/Prover) | Notes
--- | --- | --- | ---
Send Proof Request | POST /present-proof-2.0/send-request | | REST service
Receive Proof Request | | /present_proof_v2_0 | callback (webhook)
Find Credentials | | GET /present-proof-2.0/records/{pres_ex_id}/credentials | REST service
Select Credentials | | application or user function |
Send Proof | | POST /present-proof-2.0/records/{pres_ex_id}/send-presentation | REST service
Receive Proof | /present_proof_v2_0 | | callback (webhook)
Validate Proof | POST /present-proof-2.0/records/{pres_ex_id}/verify-presentation | | REST service
Save Proof | | | application data
"},{"location":"demo/AriesOpenAPIDemo/#conclusion","title":"Conclusion","text":"

That\u2019s the OpenAPI-based tutorial. Feel free to play with the API and learn how it works. More importantly, as you implement a controller, use the OpenAPI user interface to test out the calls you will be using as you go. The list of API calls is grouped by protocol and if you are familiar with the protocols (Aries RFCs) the API call names should be pretty obvious.

One limitation of you being the controller is that you don't see the events from the agent the way a controller program sees them. For example, you, as Alice's controller, are not notified when Faber initiates the sending of a credential. Some of those things show up in the terminal as messages, but others you just have to know have happened based on a successful API call.

"},{"location":"demo/AriesPostmanDemo/","title":"Aries Postman Demo","text":"

In these demos we will use Postman as our controller client.

"},{"location":"demo/AriesPostmanDemo/#contents","title":"Contents","text":""},{"location":"demo/AriesPostmanDemo/#getting-started","title":"Getting Started","text":"

Welcome to the Postman demo. This is an addition to the available OpenAPI demo, providing a set of collections to test and demonstrate various ACA-Py functionalities.

"},{"location":"demo/AriesPostmanDemo/#installing-postman","title":"Installing Postman","text":"

Download, install, and launch Postman.

"},{"location":"demo/AriesPostmanDemo/#creating-a-workspace","title":"Creating a workspace","text":"

Create a new Postman workspace labeled \"acapy-demo\".

"},{"location":"demo/AriesPostmanDemo/#importing-the-environment","title":"Importing the environment","text":"

In the environment tab on the left, click the import button. You can paste this link, which points to the environment file in the ACA-Py repository.

Make sure you have the environment set as your active environment.

"},{"location":"demo/AriesPostmanDemo/#importing-the-collections","title":"Importing the collections","text":"

In the collections tab on the left, click the import button.

The following collections are available:

"},{"location":"demo/AriesPostmanDemo/#postman-basics","title":"Postman basics","text":"

Once you are set up, you will be ready to run Postman requests. The order of the requests is important, since some values are saved dynamically as environment variables for subsequent calls.

You have your environment where you define variables to be accessed by your collections.

Each collection consists of a series of requests which can be configured independently.

"},{"location":"demo/AriesPostmanDemo/#experimenting-with-the-vc-api-endpoints","title":"Experimenting with the vc-api endpoints","text":"

Make sure you have a demo agent available. You can use the following command to deploy one:

LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber --bg\n

When running for the first time, please allow some time for the images to build.

"},{"location":"demo/AriesPostmanDemo/#register-new-dids","title":"Register new dids","text":"

The first 2 requests for this collection will create 2 did:keys. We will use those in subsequent calls to issue Ed25519Signature2020 and BbsBlsSignature2020 credentials. Run the 2 did creation requests. These requests will use the /wallet/did/create endpoint.
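
A command-line sketch of the same did:key creation the Postman requests perform, assuming the demo agent's admin API is on localhost port 8021 (for the BBS+ did:key, the key_type would be bls12381g2):

# Create an Ed25519 did:key via the /wallet/did/create endpoint\ncurl -s -X POST 'http://localhost:8021/wallet/did/create' \\\n  -H 'Content-Type: application/json' \\\n  -d '{\"method\": \"key\", \"options\": {\"key_type\": \"ed25519\"}}'\n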

"},{"location":"demo/AriesPostmanDemo/#issue-credentials","title":"Issue credentials","text":"

For issuing, you must input a W3C-compliant JSON-LD credential and issuance options in your request body. The issuer field must be a registered DID from the agent's wallet. The signature suite will be derived from the DID method.

{\n    \"credential\":   { \n        \"@context\": [\n            \"https://www.w3.org/2018/credentials/v1\"\n        ],\n        \"type\": [\n            \"VerifiableCredential\"\n        ],\n        \"issuer\": \"did:example:123\",\n        \"issuanceDate\": \"2022-05-01T00:00:00Z\",\n        \"credentialSubject\": {\n            \"id\": \"did:example:123\"\n        }\n    },\n    \"options\": {}\n}\n

Some examples have been pre-configured in the collection. Run the requests and inspect the results. Experiment with different credentials.

"},{"location":"demo/AriesPostmanDemo/#store-and-retrieve-credentials","title":"Store and retrieve credentials","text":"

Your last issued credential will be stored as an environment variable for subsequent calls, such as storing, verifying and including in a presentation.

Try running the store credential request, then retrieve the credential with the list and fetch requests. Try going back and forth between the issuance endpoints and the storage endpoints to store multiple different credentials.

"},{"location":"demo/AriesPostmanDemo/#verify-credentials","title":"Verify credentials","text":"

You can verify your last issued credential with this endpoint or any issued credential you provide to it.

"},{"location":"demo/AriesPostmanDemo/#prove-a-presentation","title":"Prove a presentation","text":"

Proving a presentation is an action where a holder will prove ownership of a credential by signing or demonstrating authority over the document.

"},{"location":"demo/AriesPostmanDemo/#verify-a-presentation","title":"Verify a presentation","text":"

The final request is to verify a presentation.

"},{"location":"demo/Endorser/","title":"Endorser Demo","text":"

There are two ways to run the alice/faber demo with endorser support enabled.

"},{"location":"demo/Endorser/#run-faber-as-an-author-with-a-dedicated-endorser-agent","title":"Run Faber as an Author, with a dedicated Endorser agent","text":"

This approach runs Faber as an un-privileged agent, and starts a dedicated Endorser Agent in a sub-process (an instance of ACA-Py) to endorse Faber's transactions.

Start a VON Network instance and a Tails server:

Start up Faber as Author (note the tails file size override, to allow testing of the revocation registry roll-over):

TAILS_FILE_COUNT=5 ./run_demo faber --endorser-role author --revocation\n

Start up Alice as normal:

./run_demo alice\n

You can run all of Faber's functions as normal - if you watch the console you will see that all ledger operations go through the endorser workflow.

If you issue more than 5 credentials, you will see Faber creating a new revocation registry (including endorser operations).

"},{"location":"demo/Endorser/#run-alice-as-an-author-and-faber-as-an-endorser","title":"Run Alice as an Author and Faber as an Endorser","text":"

This approach sets up the endorser roles to allow manual testing using the agents' swagger pages:

Start a VON Network and a Tails server using the instructions above.

Start up Faber as Endorser:

TAILS_FILE_COUNT=5 ./run_demo faber --endorser-role endorser --revocation\n

Start up Alice as Author:

TAILS_FILE_COUNT=5 ./run_demo alice --endorser-role author --revocation\n

Copy the invitation from Faber to Alice to complete the connection.

Then in the Alice shell, select option \"D\" and copy Faber's DID (it is the DID displayed on faber agent startup).

This starts up the ACA-Py agents with the endorser role set (via the new command-line args) and sets up the connection between the 2 agents with appropriate configuration.

Then, in the Alice swagger page you can create a schema and cred def, and all the endorser steps will happen automatically. You don't need to specify a connection id or explicitly request endorsement (ACA-Py does it all automatically based on the startup args).

If you check the endorser transaction records in either Alice or Faber, you can see that the endorser protocol executes automatically and that the appropriate endorsements were applied before the transactions were written to the ledger.

"},{"location":"demo/ReusingAConnection/","title":"Reusing a Connection","text":"

The Aries RFC 0434 Out of Band protocol enables the concept of reusing a connection such that when using RFC 0023 DID Exchange to establish a connection with an agent with which you already have a connection, you can reuse the existing connection instead of creating a new one. This is something you couldn't do with the older RFC 0160 Connection Protocol that we used in the early days of Aries. It was a pain, and made for a lousy user experience, as on every visit to an existing contact, the invitee got a new connection.

The requirements on your invitations (such as in the example below) are:

Example invitation:

{\n    \"@type\": \"https://didcomm.org/out-of-band/1.1/invitation\",\n    \"@id\": \"77489d63-caff-41fe-a4c1-ec7e2ff00695\",\n    \"label\": \"faber.agent\",\n    \"handshake_protocols\": [\n        \"https://didcomm.org/didexchange/1.0\"\n    ],\n    \"services\": [\n        \"did:sov:4JiUsoK85pVkkB1bAPzFaP\"\n    ]\n}\n

Here's the flow that demonstrates where reuse helps. For simplicity, we'll use the terms \"Issuer\" and \"Wallet\" in this example, but it applies to any connection between any two agents (the inviter and the invitee) that establish connections with one another.

The RFC 0434 Out of Band protocol requirement that enables the reuse message to be used by the invitee (the Wallet in the flow above) is that the service in the invitation MUST be a resolvable DID that is the same in all of the invitations. In the example invitation above, the DID is a did:sov DID that is resolvable on a public Hyperledger Indy network. The DID could also be a Peer DID of type 2 or 4, which encode the entire DIDDoc contents into the DID identifier (thus they are \"resolvable DIDs\"). What cannot be used are the old \"unqualified\" DIDs that were commonly used in Aries prior to 2024, and Peer DIDs of type 1. Both of those DID types include both an identifier and a DIDDoc in the services item of the Out of Band invitation. As noted in the Out of Band specification, reuse cannot be used with such DID types even if the contents are the same.

The use of connection reuse can be demonstrated with the Alice / Faber demos as follows. We assume you are already somewhat familiar with your options for running the Alice Faber Demo (e.g., locally or in a browser). Follow those instructions up to the point where you are about to start the Faber and Alice agents.

  1. On a command line, run Faber with these parameters: ./run_demo faber --reuse-connections --public-did-connections --events.
  2. On a second command line, run Alice as normal, perhaps with the events option: ./run_demo alice --reuse-connections --events
  3. Copy the invitation from the Faber terminal and paste it into the Alice terminal at the prompt.
  4. Verify that the connection was established.
  5. If you want, go to the Alice OpenAPI screen (port 8031, path api/docs), and then use the GET Connections to see that Alice has one connection to Faber.
  6. In the Faber terminal, type 4 to get a prompt for a new connection. This will generate a new invitation with the same public DID.
  7. In the Alice terminal, type 4 to get a prompt for a new connection, and paste the new invitation.
  8. Note from the webhook events in the Faber terminal that the reuse message is received from Alice, and as a result, no new connection was created.
  9. Execute the GET Connections endpoint on the Alice OpenAPI screen again to confirm that there is still just one established connection.
  10. Try running the demo again without the --reuse-connections parameter and compare the services value in the new invitation vs. what was generated in Steps 3 and 7. It is not a DID, but rather a one time use, inline DIDDoc item.

While in the demo Faber's invitation uses the same DID they publish as an issuer (and use in creating the schema and Cred Def for the demo), Faber could use any resolvable (not inline) DID, including Peer DIDs of type 2 or 4, as long as the DID is the same in every invitation. It is the fact that the DID is always the same that tells the invitee that they can reuse an existing connection.

For example, to run faber with connection reuse using a non-public DID:

./run_demo faber --reuse-connections --events\n

To run faber using a did_peer and reusable connections:

DEMO_EXTRA_AGENT_ARGS=\"[\\\"--emit-did-peer-2\\\"]\" ./run_demo faber --reuse-connections --events\n

To run this demo using a multi-use invitation (from Faber):

DEMO_EXTRA_AGENT_ARGS=\"[\\\"--emit-did-peer-2\\\"]\" ./run_demo faber --reuse-connections --multi-use-invitations --events\n
"},{"location":"deploying/AnonCredsWalletType/","title":"AnonCreds-RS Support","text":"

A new wallet type has been added to ACA-Py to support the new anoncreds-rs library:

--wallet-type askar-anoncreds\n

When ACA-Py is run with this wallet type, it will run with an Askar format wallet (and Askar libraries) but will use anoncreds-rs instead of credx.

There is a new package under aries_cloudagent/anoncreds with code that supports the new library.

There are new endpoints (under /anoncreds) for creating a Schema and Credential Definition. However, the new anoncreds code is integrated into the existing Credential and Presentation endpoints (V2.0 endpoints only).
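
For example, here is a hedged sketch of creating a schema via the new endpoint; the body shape follows the AnonCreds interface, and the issuer DID, schema name, and attribute names are placeholders:

# Create a schema via the new /anoncreds/schema endpoint (askar-anoncreds wallet assumed,\n# admin API on localhost port 8021)\ncurl -s -X POST 'http://localhost:8021/anoncreds/schema' \\\n  -H 'Content-Type: application/json' \\\n  -d '{\n    \"schema\": {\n      \"attrNames\": [\"name\", \"degree\"],\n      \"issuerId\": \"<issuer_did>\",\n      \"name\": \"degree_schema\",\n      \"version\": \"1.0\"\n    },\n    \"options\": {}\n  }'\n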

Within the protocols, there are new handler libraries to support the new anoncreds format (these are in parallel to the existing indy libraries).

The existing indy code is in:

aries_cloudagent/protocols/issue_credential/v2_0/formats/indy/handler.py\naries_cloudagent/protocols/indy/anoncreds/pres_exch_handler.py\naries_cloudagent/protocols/present_proof/v2_0/formats/indy/handler.py\n

The new anoncreds code is in:

aries_cloudagent/protocols/issue_credential/v2_0/formats/anoncreds/handler.py\naries_cloudagent/protocols/present_proof/anoncreds/pres_exch_handler.py\naries_cloudagent/protocols/present_proof/v2_0/formats/anoncreds/handler.py\n

The Indy handler checks to see if the wallet type is askar-anoncreds and if so delegates the calls to the anoncreds handler, for example:

        # Temporary shim while the new anoncreds library integration is in progress\n        wallet_type = profile.settings.get_value(\"wallet.type\")\n        if wallet_type == \"askar-anoncreds\":\n            self.anoncreds_handler = AnonCredsPresExchangeHandler(profile)\n

... and then:

        # Temporary shim while the new anoncreds library integration is in progress\n        if self.anoncreds_handler:\n            return self.anoncreds_handler.get_format_identifier(message_type)\n

To run the alice/faber demo using the new anoncreds library, start the demo with:

--wallet-type askar-anoncreds\n

There are no anoncreds-specific integration tests; for the new anoncreds functionality, the agents within the integration tests are started with:

--wallet-type askar-anoncreds\n

Everything should just work!!!

Theoretically, the Aries Agent Test Harness (ATH) should work with anoncreds as well, by setting the wallet type (see https://github.com/hyperledger/aries-agent-test-harness#extra-backchannel-specific-parameters).

"},{"location":"deploying/AnonCredsWalletType/#revocation-new-in-anoncreds","title":"Revocation (new in anoncreds)","text":"

The changes are significant. Notably:

The Tails File changes are minimal -- nothing about the file itself changed. What changed:

"},{"location":"deploying/AnonCredsWalletType/#outstanding-work","title":"Outstanding work","text":""},{"location":"deploying/AnonCredsWalletType/#retiring-old-indy-and-askar-credx-code","title":"Retiring old Indy and Askar (credx) Code","text":"

The main changes for the Credential and Presentation support are in the following two files:

aries_cloudagent/protocols/issue_credential/v2_0/messages/cred_format.py\naries_cloudagent/protocols/present_proof/v2_0/messages/pres_format.py\n

The INDY handler just needs to be re-pointed to the new anoncreds handler, and then all the old Indy code can be retired.

The new code is already in place (in comments). For example for the Credential handler:

        To make the switch from indy to anoncreds replace the above with the following\n        INDY = FormatSpec(\n            \"hlindy/\",\n            DeferLoad(\n                \"aries_cloudagent.protocols.present_proof.v2_0\"\n                \".formats.anoncreds.handler.AnonCredsPresExchangeHandler\"\n            ),\n        )\n

There is a bunch of duplicated code, i.e. the new anoncreds code was added either as new classes (as above) or as new methods within an existing class.

Some new methods were added within the Ledger class.

New unit tests were added - in some cases as methods within existing test classes, and in some cases as new classes (whichever was easiest at the time).

"},{"location":"deploying/ContainerImagesAndGithubActions/","title":"Container Images and Github Actions","text":"

Aries Cloud Agent - Python is most frequently deployed using containers. From the first release of ACA-Py up through 0.7.4, much of the community has built their Aries stack using the container images graciously provided by BC Gov and hosted through their bcgovimages docker hub account. These images have been critical to the adoption of not only ACA-Py but also Hyperledger Aries and SSI more generally.

Recognizing how critical these images are to the success of ACA-Py and consistent with Hyperledger's commitment to open collaboration, container images are now built and published directly from the Aries Cloud Agent - Python project repository and made available through the Github Packages Container Registry.

"},{"location":"deploying/ContainerImagesAndGithubActions/#image","title":"Image","text":"

This project builds and publishes the ghcr.io/hyperledger/aries-cloudagent-python image. Multiple variants are available; see Tags.

"},{"location":"deploying/ContainerImagesAndGithubActions/#tags","title":"Tags","text":"

ACA-Py is a foundation for building decentralized identity applications; to this end, there are multiple variants of ACA-Py built to suit the needs of a variety of environments and workflows. There are currently two main variants:

These two image variants are largely distinguished by the providers used for Indy Network and AnonCreds support. The Standard variant is recommended for new projects. Migration from an Indy-based image (whether the new Indy image variant or the original BC Gov images) to the Standard image is outside of the scope of this document.

The ACA-Py images built by this project are tagged to indicate which of the above variants they are. Other tags may also be generated for use by developers.

Below is a table of all generated images and their tags:

Tag | Variant | Example | Description\n--- | --- | --- | ---\npy3.9-X.Y.Z | Standard | py3.9-0.7.4 | Standard image variant built on Python 3.9 for ACA-Py version X.Y.Z\npy3.10-X.Y.Z | Standard | py3.10-0.7.4 | Standard image variant built on Python 3.10 for ACA-Py version X.Y.Z\npy3.9-indy-A.B.C-X.Y.Z | Indy | py3.9-indy-1.16.0-0.7.4 | Indy image variant built on Python 3.9 for ACA-Py version X.Y.Z and Indy SDK Version A.B.C\npy3.10-indy-A.B.C-X.Y.Z | Indy | py3.10-indy-1.16.0-0.7.4 | Indy image variant built on Python 3.10 for ACA-Py version X.Y.Z and Indy SDK Version A.B.C"},{"location":"deploying/ContainerImagesAndGithubActions/#image-comparison","title":"Image Comparison","text":"

There are several key differences that should be noted between the two image variants, and between them and the BC Gov ACA-Py images.

"},{"location":"deploying/ContainerImagesAndGithubActions/#github-actions","title":"Github Actions","text":""},{"location":"deploying/Databases/","title":"Databases","text":"

Your wallet stores secret keys, connections, and other information. You have different choices for where to store this information: the wallet supports two different databases, SQLite and PostgreSQL.

"},{"location":"deploying/Databases/#sqlite","title":"SQLite","text":"

If the wallet is configured the default way (e.g. as in demo-args.yaml), without explicit wallet-storage, an SQLite database file is used.

# demo-args.yaml\nwallet-type: indy\nwallet-name: wallet\nwallet-key: wallet-password\n

For this configuration, a folder called wallet will be created which contains a file called sqlite.db.

"},{"location":"deploying/Databases/#postgresql","title":"PostgreSQL","text":"

The wallet can be configured to use PostgreSQL as storage.

# demo-args.yaml\nwallet-type: indy\nwallet-name: wallet\nwallet-key: wallet-password\n\nwallet-storage-type: postgres_storage\nwallet-storage-config: \"{\\\"url\\\":\\\"db:5432\\\",\\\"wallet_scheme\\\":\\\"DatabasePerWallet\\\"}\"\nwallet-storage-creds: \"{\\\"account\\\":\\\"postgres\\\",\\\"password\\\":\\\"mysecretpassword\\\",\\\"admin_account\\\":\\\"postgres\\\",\\\"admin_password\\\":\\\"mysecretpassword\\\"}\"\n

In this case, the hostname for the database is db and the port is 5432.

A docker-compose file could look like this:

# docker-compose.yml\nversion: '3'\nservices:\n  # acapy ...\n  # database\n  db:\n    image: postgres:10\n    environment:\n      POSTGRES_PASSWORD: mysecretpassword\n      POSTGRES_USER: postgres\n      POSTGRES_DB: postgres\n    ports:\n      - \"5432:5432\"\n
"},{"location":"deploying/IndySDKtoAskarMigration/","title":"Migrating from Indy SDK to Askar","text":"

This document summarizes why the Indy SDK is being deprecated, its replacement (Aries Askar and the "shared components"), how to use Aries Askar in a new ACA-Py deployment, and the migration process for an ACA-Py instance already deployed using the Indy SDK.

"},{"location":"deploying/IndySDKtoAskarMigration/#the-time-has-come-archiving-indy-sdk","title":"The Time Has Come! Archiving Indy SDK","text":"

Yes, it\u2019s time. Indy SDK needs to be archived! In this article we\u2019ll explain why this change is needed, why Aries Askar is a faster, better replacement, and how to transition your Indy SDK-based ACA-Py deployment to Askar as soon as possible.

"},{"location":"deploying/IndySDKtoAskarMigration/#history-of-indy-sdk","title":"History of Indy SDK","text":"

Indy SDK has been the basis of Hyperledger Indy and Hyperledger Aries clients accessing Indy networks for a long time. It has done an excellent job at exactly what you might imagine: being the SDK that enables clients to leverage the capabilities of a Hyperledger Indy ledger.

Its continued use has been all the more remarkable given that the last published release of the Indy SDK was in 2020. This speaks to the quality of the implementation \u2014 it just kept getting used, doing what it was supposed to do, and without major bugs, vulnerabilities or demands for new features.

However, the architecture of Indy SDK has critical bottlenecks. Most notably, as load increases, Indy SDK performance drops. And with Indy-based ecosystems flourishing and loads exponentially increasing, this means the Aries/Indy community needed to make a change.

"},{"location":"deploying/IndySDKtoAskarMigration/#aries-askar-and-the-shared-components","title":"Aries Askar and the Shared Components","text":"

The replacement for the Indy SDK is a set of four components, each replacing a part of Indy SDK. (In retrospect, Indy SDK ought to have been split up this way from the start.)

The components are:

  1. Aries Askar: the replacement for the \u201cindy-wallet\u201d part of Indy SDK. Askar is a key management service, handling the creation and use of private keys managed by Aries agents. It\u2019s also the secure storage for DIDs, verifiable credentials, and data used by issuers of verifiable credentials for signing. As the Aries moniker indicates, Askar is suitable for use with any Aries agent, and for managing any keys, whether for use with Indy or any other Verifiable Data Registry (VDR).
  2. Indy VDR: the interface to publishing to and retrieving data from Hyperledger Indy networks. Indy VDR is scoped at the appropriate level for any client application using Hyperledger Indy networks.
  3. CredX: a Rust implementation of AnonCreds that evolved from the Indy SDK implementation. CredX is within the indy-shared-rs repository. It has significant performance enhancements over the version in the Indy SDK, particularly for Issuers.
  4. Hyperledger AnonCreds: a newer implementation of AnonCreds that is \u201cledger-agnostic\u201d \u2014 it can be used with Hyperledger Indy and any other suitable verifiable data registry.

In ACA-Py, we are currently using CredX, but will be moving to Hyperledger AnonCreds soon.

If you\u2019re involved in the community, you\u2019ll know we\u2019ve been planning this replacement for almost three years. The first release of the Aries Askar and related components was in 2021. At the end of 2022 there was a concerted effort to eliminate the Indy SDK by creating migration scripts, and removing the Indy SDK from various tools in the community (the Indy CLI, the Indy Test Automation pipeline, and so on). This step is to finish the task.

"},{"location":"deploying/IndySDKtoAskarMigration/#performance","title":"Performance","text":"

What\u2019s the performance and stability of the replacement? In short, it\u2019s dramatically better. Overall Aries Askar performance is faster, and as the load increases the performance remains constant. Combined with added flexibility and modularization, the community is very positive about the change.

"},{"location":"deploying/IndySDKtoAskarMigration/#new-aca-py-deployments","title":"New ACA-Py Deployments","text":"

If you are new to ACA-Py, the instructions are easy. Use Aries Askar and the shared components from the start. To do that, simply make sure that you are using the --wallet-type askar configuration parameter. You will automatically be using all of the shared components.
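
For example (a minimal sketch; all arguments other than the wallet type are placeholders for your own configuration):

aca-py start \\\n    --wallet-type askar \\\n    # ... the remainder of your startup arguments\n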

As of release 0.9.0, you will get a deprecation warning when you start ACA-Py with the Indy SDK. Switch to Aries Askar to eliminate that warning.

"},{"location":"deploying/IndySDKtoAskarMigration/#migrating-existing-indy-sdk-aca-py-deployments-to-askar","title":"Migrating Existing Indy SDK ACA-Py Deployments to Askar","text":"

If you have an existing deployment, then in addition to changing the --wallet-type configuration setting, your database must be migrated from the Indy SDK format to the Aries Askar format. To facilitate the migration, an Indy SDK to Askar migration script has been published in the aries-acapy-tools repository. There is plenty of information in that repository about the migration tool and how to use it. The following is a summary of the steps you will have to perform. Of course, all deployments are a little (or a lot!) different, and your exact steps will depend on where and how you have deployed ACA-Py.

Note that in these steps you will have to take your ACA-Py instance offline, so scheduling the maintenance must be a part of your migration plan. You will also want to script the entire process so that downtime and risk of manual mistakes are minimized.

We hope that you have one or two test environments (e.g., Dev and Test) to run through these steps before upgrading your production deployment. As well, it is good if you can make a copy of your production database and test the migration on the real (copy) database before the actual upgrade.

askar-upgrade \\\n  --strategy dbpw \\\n  --uri postgres://<username>:<password>@<hostname>:<port>/<dbname> \\\n  --wallet-name <wallet name> \\\n  --wallet-key <wallet key>\n

It is very important that the Askar Upgrade script has direct access to the database. In our very first upgrade attempt, we ran the Askar Upgrade script from a container running outside of our container orchestration platform (OpenShift) using port forwarding. The script ran EXTREMELY slowly, taking literally hours before we finally stopped it. Once we ran the script inside the OpenShift environment, it ran (for the same database) in about 7 minutes. The entire app downtime was less than 20 minutes.

"},{"location":"deploying/IndySDKtoAskarMigration/#questions","title":"Questions?","text":"

If you have questions, comments, or suggestions about the upgrade process, please use the Aries Cloud Agent Python channel on Hyperledger Discord, or submit a GitHub issue to the ACA-Py repository.

"},{"location":"deploying/Poetry/","title":"Poetry Cheat Sheet for Developers","text":""},{"location":"deploying/Poetry/#introduction-to-poetry","title":"Introduction to Poetry","text":"

Poetry is a dependency management and packaging tool for Python that aims to simplify and enhance the development process. It offers features for managing dependencies, virtual environments, and building and publishing Python packages.

"},{"location":"deploying/Poetry/#virtual-environments-with-poetry","title":"Virtual Environments with Poetry","text":"

Poetry manages virtual environments for your projects to ensure clean and isolated development environments.

"},{"location":"deploying/Poetry/#creating-a-virtual-environment","title":"Creating a Virtual Environment","text":"
poetry install\n
"},{"location":"deploying/Poetry/#activating-the-virtual-environment","title":"Activating the Virtual Environment","text":"
poetry shell\n

Alternatively, you can source the environment settings into the current shell:

source $(poetry env info --path)/bin/activate\n

For PowerShell users, this would be:

& ((poetry env info --path) + \"\\Scripts\\activate.ps1\")\n
"},{"location":"deploying/Poetry/#deactivating-the-virtual-environment","title":"Deactivating the Virtual Environment","text":"

When using poetry shell

exit\n

When using the activate script

deactivate\n
"},{"location":"deploying/Poetry/#dependency-management","title":"Dependency Management","text":"

Poetry uses the pyproject.toml file to manage dependencies. Add new dependencies to this file and update existing ones as needed.

"},{"location":"deploying/Poetry/#adding-a-dependency","title":"Adding a Dependency","text":"
poetry add package-name\n
"},{"location":"deploying/Poetry/#adding-a-development-dependency","title":"Adding a Development Dependency","text":"
poetry add --dev package-name\n
"},{"location":"deploying/Poetry/#removing-a-dependency","title":"Removing a Dependency","text":"
poetry remove package-name\n
"},{"location":"deploying/Poetry/#updating-dependencies","title":"Updating Dependencies","text":"
poetry update\n
"},{"location":"deploying/Poetry/#running-tasks-with-poetry","title":"Running Tasks with Poetry","text":"

Poetry provides a way to run scripts and commands without activating the virtual environment explicitly.

"},{"location":"deploying/Poetry/#running-a-command","title":"Running a Command","text":"
poetry run command-name\n
"},{"location":"deploying/Poetry/#running-a-script","title":"Running a Script","text":"
poetry run python script.py\n
"},{"location":"deploying/Poetry/#building-and-publishing-with-poetry","title":"Building and Publishing with Poetry","text":"

Poetry streamlines the process of building and publishing Python packages.

"},{"location":"deploying/Poetry/#building-the-package","title":"Building the Package","text":"
poetry build\n
"},{"location":"deploying/Poetry/#publishing-the-package","title":"Publishing the Package","text":"
poetry publish\n
"},{"location":"deploying/Poetry/#using-extras","title":"Using Extras","text":"

Extras allow you to specify additional dependencies based on project requirements.

"},{"location":"deploying/Poetry/#installing-with-extras","title":"Installing with Extras","text":"
poetry install -E extras-name\n

For example:

poetry install -E \"askar bbs indy\"\n
"},{"location":"deploying/Poetry/#managing-development-dependencies","title":"Managing Development Dependencies","text":"

Development dependencies are useful for tasks like testing, linting, and documentation generation.

"},{"location":"deploying/Poetry/#installing-development-dependencies","title":"Installing Development Dependencies","text":"
poetry install --with dev\n
"},{"location":"deploying/Poetry/#additional-resources","title":"Additional Resources","text":""},{"location":"deploying/RedisPlugins/","title":"ACA-Py Redis Plugins","text":""},{"location":"deploying/RedisPlugins/#aries-acapy-plugin-redis-events-redis_queue","title":"aries-acapy-plugin-redis-events redis_queue","text":"

It provides a mechanism to persist both inbound and outbound messages using Redis, deliver messages and webhooks, and dispatch events.

More details can be found here.

"},{"location":"deploying/RedisPlugins/#redis-queue-configuration-yaml","title":"Redis Queue configuration yaml","text":"
redis_queue:\n  connection: \n    connection_url: \"redis://default:test1234@172.28.0.103:6379\"\n\n  ### For Inbound ###\n  inbound:\n    acapy_inbound_topic: \"acapy_inbound\"\n    acapy_direct_resp_topic: \"acapy_inbound_direct_resp\"\n\n  ### For Outbound ###\n  outbound:\n    acapy_outbound_topic: \"acapy_outbound\"\n    mediator_mode: false\n\n  ### For Event ###\n  event:\n    event_topic_maps:\n      ^acapy::webhook::(.*)$: acapy-webhook-$wallet_id\n      ^acapy::record::([^:]*)::([^:]*)$: acapy-record-with-state-$wallet_id\n      ^acapy::record::([^:])?: acapy-record-$wallet_id\n      acapy::basicmessage::received: acapy-basicmessage-received\n      acapy::problem_report: acapy-problem_report\n      acapy::ping::received: acapy-ping-received\n      acapy::ping::response_received: acapy-ping-response_received\n      acapy::actionmenu::received: acapy-actionmenu-received\n      acapy::actionmenu::get-active-menu: acapy-actionmenu-get-active-menu\n      acapy::actionmenu::perform-menu-action: acapy-actionmenu-perform-menu-action\n      acapy::keylist::updated: acapy-keylist-updated\n      acapy::revocation-notification::received: acapy-revocation-notification-received\n      acapy::revocation-notification-v2::received: acapy-revocation-notification-v2-received\n      acapy::forward::received: acapy-forward-received\n    event_webhook_topic_maps:\n      acapy::basicmessage::received: basicmessages\n      acapy::problem_report: problem_report\n      acapy::ping::received: ping\n      acapy::ping::response_received: ping\n      acapy::actionmenu::received: actionmenu\n      acapy::actionmenu::get-active-menu: get-active-menu\n      acapy::actionmenu::perform-menu-action: perform-menu-action\n      acapy::keylist::updated: keylist\n    deliver_webhook: true\n
"},{"location":"deploying/RedisPlugins/#redis-plugin-usage","title":"Redis Plugin Usage","text":""},{"location":"deploying/RedisPlugins/#redis-plugin-with-docker","title":"Redis Plugin With Docker","text":"

Running the plugin with Docker is simple. An example docker-compose.yml file is available which launches both ACA-Py (with the Redis plugin) and an accompanying Redis cluster.

docker-compose up --build -d\n

More details can be found here.

"},{"location":"deploying/RedisPlugins/#without-docker","title":"Without Docker","text":"

Installation

pip install git+https://github.com/bcgov/aries-acapy-plugin-redis-events.git\n

Start up ACA-Py with the redis_queue plugin loaded:

docker network create --subnet=172.28.0.0/24 `network_name`\nexport REDIS_PASSWORD=\" ... As specified in redis_cluster.conf ... \"\nexport NETWORK_NAME=\"`network_name`\"\naca-py start \\\n    --plugin redis_queue.v1_0.events \\\n    --plugin-config plugins-config.yaml \\\n    -it redis_queue.v1_0.inbound redis 0 -ot redis_queue.v1_0.outbound\n    # ... the remainder of your startup arguments\n

Regardless of the options above, you will need to start up a deliverer and a relay/mediator service as a bridge to receive inbound messages. Consider the following when building your docker-compose file, which should also start up your Redis cluster:

Both relay and mediator demos are also available.

"},{"location":"deploying/RedisPlugins/#aries-acapy-cache-redis-redis_cache","title":"aries-acapy-cache-redis redis_cache","text":"

ACA-Py uses a modular cache layer to store key-value pairs of data. The purpose of this plugin is to allow ACA-Py to use Redis as the storage medium for its caching needs.

More details can be found here.

"},{"location":"deploying/RedisPlugins/#redis-cache-plugin-configuration-yaml","title":"Redis Cache Plugin configuration yaml","text":"
redis_cache:\n  connection: \"redis://default:test1234@172.28.0.103:6379\"\n  max_connection: 50\n  credentials:\n    username: \"default\"\n    password: \"test1234\"\n  ssl:\n    cacerts: ./ca.crt\n
"},{"location":"deploying/RedisPlugins/#redis-cache-usage","title":"Redis Cache Usage","text":""},{"location":"deploying/RedisPlugins/#redis-cache-using-docker","title":"Redis Cache Using Docker","text":""},{"location":"deploying/RedisPlugins/#redis-cache-without-docker","title":"Redis Cache Without Docker","text":"

Installation

pip install git+https://github.com/Indicio-tech/aries-acapy-cache-redis.git\n

Start up ACA-Py with the redis_cache plugin loaded:

aca-py start \\\n    --plugin acapy_cache_redis.v0_1 \\\n    --plugin-config plugins-config.yaml \\\n    # ... the remainder of your startup arguments\n

or

aca-py start \\\n    --plugin acapy_cache_redis.v0_1 \\\n    --plugin-config-value \"redis_cache.connection=redis://redis-host:6379/0\" \\\n    --plugin-config-value \"redis_cache.max_connections=90\" \\\n    --plugin-config-value \"redis_cache.credentials.username=username\" \\\n    --plugin-config-value \"redis_cache.credentials.password=password\" \\\n    # ... the remainder of your startup arguments\n
"},{"location":"deploying/RedisPlugins/#redis-cluster","title":"Redis Cluster","text":"

If you start up a Redis cluster and an ACA-Py agent loaded with the redis_queue plugin, the redis_cache plugin, or both, then during the initialization of the plugin, an instance of redis.asyncio.RedisCluster will be bound onto the root_profile. Other plugins will have access to this Redis client. This is done for efficiency and to avoid duplication of resources.

"},{"location":"deploying/UpgradingACA-Py/","title":"Upgrading ACA-Py Data","text":"

Some releases of ACA-Py may be improved by, or even require, an upgrade when moving to a new version. Such changes are documented in the CHANGELOG.md, and those with ACA-Py deployments should take note of those upgrades. This document summarizes the upgrade system in ACA-Py.

"},{"location":"deploying/UpgradingACA-Py/#version-information-and-automatic-upgrades","title":"Version Information and Automatic Upgrades","text":"

The file version.py contains the current version of a running instance of ACA-Py. In addition, a record is made in the ACA-Py secure storage (database) about the \"most recently upgraded\" version. When deploying a new version of ACA-Py, the version.py value will be higher than the version in secure storage. When that happens, an upgrade is executed, and on successful completion, the version is updated in secure storage to match what is in version.py.

Upgrades are defined in the Upgrade Definition YML file. For a given version listed in the file, the corresponding entry defines the actions required when upgrading from a previous version. If a version is not listed in the file, there is no upgrade defined for that version from its immediate predecessor version.

Once an upgrade is identified as needed, the process is:

"},{"location":"deploying/UpgradingACA-Py/#forced-offline-upgrades","title":"Forced Offline Upgrades","text":"

In some cases, it may be necessary to do an offline upgrade, where ACA-Py is taken off line temporarily, the database upgraded explicitly, and then ACA-Py re-deployed as normal. As yet, we do not have any use cases for this, but those deploying ACA-Py should be aware of this possibility. For example, we may at some point need an upgrade that MUST NOT be executed by more than one ACA-Py instance. In that case, a \"normal\" upgrade could be dangerous for deployments on container orchestration platforms like Kubernetes.

If the Maintainers of ACA-Py recognize a case where ACA-Py must be upgraded while offline, a new Upgrade feature will be added that will prevent the \"auto upgrade\" process from executing. See Issue 2201 and Pull Request 2204 for the status of that feature.

Those deploying ACA-Py upgrades for production installations (forced offline or not) should check in each CHANGELOG.md release entry about what upgrades (if any) will be run when upgrading to that version, and consider how they want those upgrades to run in their ACA-Py installation. In most cases, simply deploying the new version should be OK. If the number of records to be upgraded is high (such as a \"resave connections\" upgrade to a deployment with many, many connections), you may want to do a test upgrade offline first, to see if there is likely to be a service disruption during the upgrade. Plan accordingly!

"},{"location":"deploying/UpgradingACA-Py/#tagged-upgrades","title":"Tagged upgrades","text":"

Upgrades are defined in the Upgrade Definition YML file; in addition to specifying upgrade actions by version, they can also be specified by named tags. Unlike version-based upgrades, where all applicable version-based actions are performed in sorted order of versions, with named tags only the actions corresponding to the provided tags are performed. Note: --force-upgrade is required when running a named-tag-based upgrade (i.e., when providing --named-tag).

Tags are specified in YML file as below:

fix_issue_rev_reg:\n  fix_issue_rev_reg_records: true\n

Example:

 ./scripts/run_docker upgrade --force-upgrade --named-tag fix_issue_rev_reg\n\n# In case, running multiple tags [say test1 & test2]:\n ./scripts/run_docker upgrade --force-upgrade --named-tag test1 --named-tag test2\n
"},{"location":"deploying/UpgradingACA-Py/#subwallet-upgrades","title":"Subwallet upgrades","text":"

With multitenancy enabled, there is a subwallet associated with each tenant profile, so those subwallets need to be upgraded in addition to the base wallet associated with the root profile.

There are two options for performing such upgrades (see the sketch after these notes):

This will apply the upgrade steps to all subwallets (tenant profiles) and the base wallet (root profile).

This will apply the upgrade steps to the specified subwallets (identified by wallet id) and the base wallet.

Note: multiple specifications are allowed.
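
A sketch of the two options follows; the --upgrade-all-subwallets and --upgrade-subwallet option names are taken from the ACA-Py upgrade command and should be verified against your ACA-Py version:

# Option 1: upgrade the base wallet and all subwallets\naca-py upgrade --force-upgrade --upgrade-all-subwallets\n\n# Option 2: upgrade the base wallet and specific subwallets (option is repeatable)\naca-py upgrade --force-upgrade --upgrade-subwallet <wallet_id_1> --upgrade-subwallet <wallet_id_2>\n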

"},{"location":"deploying/UpgradingACA-Py/#exceptions","title":"Exceptions","text":"

There are a couple of upgrade exception conditions to consider, as outlined in the following sections.

"},{"location":"deploying/UpgradingACA-Py/#no-version-in-secure-storage","title":"No version in secure storage","text":"

Versions prior to ACA-Py 0.8.1 did not automatically populate the secure storage \"version\" record. That only occurred if an upgrade was explicitly executed. As of ACA-Py 0.8.1, the version record is added immediately after the secure storage database is created. If you are upgrading to ACA-Py 0.8.1 or later, and there is no version record in the secure storage, ACA-Py will assume you are running version 0.7.5, and execute the upgrades from version 0.7.5 to the current version. The choice of 0.7.5 as the default is safe because the same upgrades will be run on any version of ACA-Py up to and including 0.7.5, as can be seen in the Upgrade Definition YML file. Thus, even if you are really upgrading from (for example) 0.6.2, the same upgrades are needed as from 0.7.5 to a post-0.8.1 version.

"},{"location":"deploying/UpgradingACA-Py/#forcing-an-upgrade","title":"Forcing an upgrade","text":"

If you need to force an upgrade from a given version of ACA-Py, a pair of configuration options can be used together. If you specify "--from-version <ver>" and "--force-upgrade", the --from-version version will override what is found (or not) in secure storage, and the upgrade will be from that version to the current one. For example, if you have "0.8.1" in your "secure storage" version, and you know that the upgrade for version 0.8.1 has not been executed, you can use the parameters --from-version v0.7.5 --force-upgrade to force the upgrade the next time an ACA-Py instance starts. However, given the few upgrades defined prior to version 0.8.1, and the "no version in secure storage" handling, it is unlikely this capability will ever be needed. We expect to deprecate and remove these options in future (post-0.8.1) ACA-Py versions.
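
For example (a sketch; the upgrade options are passed alongside your normal startup arguments):

aca-py start \\\n    --from-version v0.7.5 \\\n    --force-upgrade \\\n    # ... the remainder of your startup arguments\n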

"},{"location":"deploying/deploymentModel/","title":"Deployment Model","text":""},{"location":"deploying/deploymentModel/#aries-cloud-agent-python-aca-py-deployment-model","title":"Aries Cloud Agent-Python (ACA-Py) - Deployment Model","text":"

This document is a \"concept of operations\" for an instance of an Aries cloud agent deployed from the primary artifact (a PyPi package) produced by this repo. In such a deployment there are always two components - a configured agent itself, and a controller that injects into that agent the business rules for the particular agent instance (see diagram).

The deployed agent messages with other agents via DIDComm protocols and, as events associated with those messages occur, sends webhook HTTP notifications to the controller. The agent also exposes, for the controller's exclusive use, an HTTP API covering all of the administrative handlers for those events. The controller receives the notifications from the agent, decides (with business rules - possibly by asking a person using a UI) how to respond to the event, and calls back to the agent via the HTTP API. Of course, the controller may also initiate events (e.g. messaging another agent) by calling that same API.

The following is an example of the interactions involved in creating a connection using the DIDComm "Establish Connection" protocol. The controller requests a connection invitation from the agent (via the administrative API), and receives one back. The controller provides it to another agent (perhaps by displaying it in a QR code). Shortly after, the agent receives a DIDComm "Connection Request" message, which it sends to the controller. The controller decides to accept the connection and calls the API with instructions to the agent to send a "Connection Response" message to the other agent. Since the controller always wants to know with whom a connection has been created, it also sends instructions to the agent (via the API, of course) to send a request presentation message to the new connection. And so on... During the interactions, the agent tracks the state of the connections and the state of the protocol instances (threads). Likewise, the controller may also retain state - after all, it's an application that could do anything.
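
As a sketch, the first step of that flow could look like the following, assuming a local Admin API on port 8031 and the /connections/create-invitation endpoint:

curl -X POST http://localhost:8031/connections/create-invitation\n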

Most developers will configure a "black box" instance of ACA-Py. They need to know how it works, the DIDComm protocols it supports, the events it will generate, and the administrative API it exposes. However, they don't need to drill into and maintain the ACA-Py code. Such developers will build controller applications (basically, traditional web apps) that, at their simplest, use an HTTP interface to receive notifications and send HTTP requests to the agent. It's the business logic implemented in, or accessed by, the controller that gives the deployment its personality and role.

Note: the ACA-Py agent is designed to be stateless, persisting connection and protocol state to storage (such as a Postgres database). As such, agents can be deployed to support horizontal scaling as necessary. Controllers can also be implemented to support horizontal scaling.

The sections below detail the internals of ACA-Py and its configurable elements, and the conceptual elements of a controller. There is no "Aries controller" repo to fork, as it is essentially just a web app. There are demos of using the elements in this repo, and several sample applications that you can use to get started on your own controller.

"},{"location":"deploying/deploymentModel/#aries-cloud-agent","title":"Aries Cloud Agent","text":"

Aries cloud agents implement services to manage the execution of DIDComm messaging protocols for interacting with other DIDComm agents, and expose an administrative HTTP API that supports a controller in directing how the agent should respond to messaging events. The agent relies on the controller to provide the business rules for handling the messaging events, and to initiate the execution of new DIDComm protocol instances. The internals of an ACA-Py instance are diagrammed below.

Instances of the Aries cloud agents are configured with the following sub-components:

"},{"location":"deploying/deploymentModel/#controller","title":"Controller","text":"

A controller provides the personality of an Aries cloud agent instance - the business logic (human, machine or rules driven) that drives the behaviour of the agent. The controller's "Business Logic" in a cloud agent could be built into the controller app, could be an integration back to an enterprise system, or could even be a user interface for an individual. In all cases, the business logic provides responses to agent events or initiates agent actions. A deployed controller talks to a single Aries cloud agent deployment and manages the configuration of that agent. Both can be configured and deployed to support horizontal scaling.

Generically, a controller is a web app invoked by HTTP webhook calls from its corresponding Aries cloud agent and invoking the DIDComm administration capabilities of the Aries cloud agent by calling the REST API exposed by that cloud agent. As well as responding to Aries cloud agent events, the controller initiates DIDComm protocol instances using the same REST API.

The controller and Aries cloud agent deployment MUST secure the HTTP interface between the two components. The interface provides the same HTTP integration between services as modern apps found in any enterprise today, and must be correspondingly secured.

A controller implements the following capabilities.

While there are several examples of controllers, there is no \u201ccookie cutter\u201d repository to fork and customize. A controller is just a web service that receives HTTP requests (webhooks) and sends HTTP messages to the Aries cloud agent it controls via the REST API exposed by that agent.

"},{"location":"deploying/deploymentModel/#deployment","title":"Deployment","text":"

The Aries cloud agent CI pipeline configured into the repository generates a PyPi package as an artifact. Implementers will generally have a controller repository, possibly copied from an existing controller instance, that has the code (business logic) for the controller and the configuration (transports, handlers, DIDComm protocols, etc.) for the Aries cloud agent instance. In the most common scenario, the Aries cloud agent and controller instances will be deployed based on the artifacts (e.g. container images) generated from that controller repository. With the simple HTTP-based interface between the controller and Aries cloud agent, both components can be horizontally scaled as needed, with a load balancer between the components. The configuration of the Aries cloud agent to use the Postgres wallet supports enterprise scale agent deployments.

Current examples of deployed instances of Aries cloud agent and controllers include:

"},{"location":"design/AnoncredsW3CCompatibility/","title":"Supporting AnonCreds in W3C VC/VP Formats in Aries Cloud Agent Python","text":"

This design proposes to extend Aries Cloud Agent Python (ACA-Py) to support Hyperledger AnonCreds credentials and presentations in the W3C Verifiable Credentials (VC) and Verifiable Presentations (VP) format. The aim is to transition from the legacy AnonCreds format specified in the Aries-Legacy-Method to the W3C VC format.

"},{"location":"design/AnoncredsW3CCompatibility/#overview","title":"Overview","text":"

The pre-requisites for the work are:

As of 2024-01-15, these pre-requisites have been met.

"},{"location":"design/AnoncredsW3CCompatibility/#impacts-on-aca-py","title":"Impacts on ACA-Py","text":""},{"location":"design/AnoncredsW3CCompatibility/#issuer","title":"Issuer","text":"

Issuer support needs to be added for using the RFC 0809 VC-DI attachment format when sending Issue Credential v2.0 protocol offer and issue messages, and when receiving request messages.

Related notes:

A mechanism must be defined such that an Issuer controller can use the ACA-Py Admin API to initiate the sending of an AnonCreds credential Offer using the RFC 0809 VC-DI attachment format.

A credential's encoded attributes are not included in the issued AnonCreds W3C VC format credential. To be determined how that impacts the issuing process.

"},{"location":"design/AnoncredsW3CCompatibility/#verifier","title":"Verifier","text":"

A verifier wanting a W3C VP Format presentation will send the Present Proof v2.0 request message with an RFC 0510 DIF Presentation Exchange format attachment.

If needed, the RFC 0510 DIF Presentation Exchange document will be clarified and possibly updated to enable its use for handling AnonCreds W3C VP format presentations.

An AnonCreds W3C VP format presentation does not include the encoded revealed attributes, and the encoded values must be calculated as needed. To be determined where those would be needed.

"},{"location":"design/AnoncredsW3CCompatibility/#holder","title":"Holder","text":"

A holder must support RFC 0809 VC-DI attachments when receiving Issue Credential v2.0 offer and issue messages, and when sending request messages.

On receiving an Issue Credential v2.0 offer message with an RFC 0809 VC-DI attachment, the holder MUST respond using the RFC 0809 VC-DI format on the subsequent request message.

On receiving a credential from an issuer in an RFC 0809 VC-DI attachment, the holder must process and store the credential for subsequent use in presentations.

On receiving an RFC 0510 DIF Presentation Exchange request message, a holder must include AnonCreds verifiable credentials in the search for credentials satisfying the request, and if found and selected for use, must construct the presentation using the RFC 0510 DIF Presentation Exchange presentation format, with an embedded AnonCreds W3C VP format presentation.

"},{"location":"design/AnoncredsW3CCompatibility/#issues-to-consider","title":"Issues to consider","text":""},{"location":"design/AnoncredsW3CCompatibility/#flow-chart","title":"Flow Chart","text":""},{"location":"design/AnoncredsW3CCompatibility/#key-questions","title":"Key Questions","text":""},{"location":"design/AnoncredsW3CCompatibility/#what-is-the-roadmap-for-delivery-what-will-we-build-first-then-second","title":"What is the roadmap for delivery? What will we build first, then second?","text":"

It appears that the issue and presentation sides can be approached independently, assuming that any stored AnonCreds VC can be used in an AnonCreds W3C VP format presentation.

"},{"location":"design/AnoncredsW3CCompatibility/#issue-credential","title":"Issue Credential","text":"
  1. Update Admin API endpoints to initiate an Issue Credential v2.0 protocol to issue an AnonCreds credential in W3C VC format using RFC 0809 VC-DI format attachments.
  2. Add support for the RFC 0809 VC-DI message attachment formats.
  3. Should the attachment format be made pluggable as part of this? From the maintainers: If we did make it pluggable, this would be the point where that would take place. Since these values are hard coded, it is not pluggable currently, as noted. I've been dissatisfied with how this particular piece works for a while. I think making it pluggable, if done right, could help clean it up nicely. A plugin would then define their own implementation of V20CredFormatHandler. (@dbluhm)
  4. Update the v2.0 Issue Credential protocol handler to support a \"RFC 0809 VC-DI mode\" such that when a protocol instance starts with that format, it continues with it until completion, supporting issuing AnonCreds credentials in the process. This includes both the sending and receiving of all protocol message types.
"},{"location":"design/AnoncredsW3CCompatibility/#present-proof","title":"Present Proof","text":"
  1. Adjust as needed the sending of a Present Proof request using the RFC 0510 DIF Presentation Exchange with support (to be defined) for requesting AnonCreds VCs.
  2. Adjust as needed the processing of a Present Proof request message with an RFC 0510 DIF Presentation Exchange attachment so that AnonCreds VCs can be found and used in the subsequent response.
  3. AnonCreds VCs issued as legacy or W3C VC format credentials should be usable in AnonCreds W3C VP format presentations.
  4. Update the creation of an RFC 0510 DIF Presentation Exchange presentation submission to support the use of AnonCreds VCs as the source of the VPs.
  5. Update the verifier receipt of a Present Proof v2.0 presentation message with an RFC 0510 DIF Presentation Exchange containing AnonCreds W3C VP(s) derived from AnonCreds source VCs.
"},{"location":"design/AnoncredsW3CCompatibility/#what-are-the-functions-we-are-going-to-wrap","title":"What are the functions we are going to wrap?","text":"

After thoroughly reviewing the upcoming changes from anoncreds-rs PR273, the classes (AnoncredsObjects) impacted by the changes are as follows:

W3CCredential

W3CPresentation

They will be added to __init__.py as additional exports of AnoncredsObject.

We also have to consider which classes or anoncreds objects have been modified

The classes modified according to the same PR mentioned above are:

Credential

PresentCredential

"},{"location":"design/AnoncredsW3CCompatibility/#creating-a-w3c-vc-credential-from-credential-definition-and-issuing-and-presenting-it-as-is","title":"Creating a W3C VC credential from credential definition, and issuing and presenting it as is","text":"

The issuance, presentation and verification of legacy anoncreds are implemented in this ./aries_cloudagent/anoncreds directory. Therefore, we will also start from there.

Let us navigate these implementation examples through the respective processes of the agents concerned - Issuer and Holder - as described in https://github.com/hyperledger/anoncreds-rs/blob/main/README.md. We will proceed through the following processes in comparison with the legacy anoncreds implementations, watching out for signature differences between the two. Looking at the /anoncreds/issuer.py file, from the AnonCredsIssuer class:

Create VC_DI Credential Offer

According to this DI credential offer attachment format - didcomm/w3c-di-vc-offer@v0.1,

could be the parameters for the create_offer method.

Create VC_DI Credential

NOTE: There have been some changes to the encoding of attribute values for creating a credential, so we have to adjust to them.

async def create_credential(\n        self,\n        credential_offer: dict,\n        credential_request: dict,\n        credential_values: dict,\n    ) -> str:\n...\n...\n  try:\n    credential = await asyncio.get_event_loop().run_in_executor(\n        None,\n        lambda: W3CCredential.create(\n            cred_def.raw_value,\n            cred_def_private.raw_value,\n            credential_offer,\n            credential_request,\n            raw_values,\n            None,\n            None,\n            None,\n            None,\n        ),\n    )\n...\n

Create VC_DI Credential Request

async def create_vc_di_credential_request(\n        self, credential_offer: dict, credential_definition: CredDef, holder_did: str\n    ) -> Tuple[str, str]:\n...\n...\ntry:\n  secret = await self.get_master_secret()\n  (\n      cred_req,\n      cred_req_metadata,\n  ) = await asyncio.get_event_loop().run_in_executor(\n      None,\n      W3CCredentialRequest.create,\n      None,\n      holder_did,\n      credential_definition.to_native(),\n      secret,\n      AnonCredsHolder.MASTER_SECRET_ID,\n      credential_offer,\n  )\n...\n

Create VC_DI Credential Presentation

async def create_vc_di_presentation(\n        self,\n        presentation_request: dict,\n        requested_credentials: dict,\n        schemas: Dict[str, AnonCredsSchema],\n        credential_definitions: Dict[str, CredDef],\n        rev_states: dict = None,\n    ) -> str:\n...\n...\n  try:\n    secret = await self.get_master_secret()\n    presentation = await asyncio.get_event_loop().run_in_executor(\n        None,\n        Presentation.create,\n        presentation_request,\n        present_creds,\n        self_attest,\n        secret,\n        {\n            schema_id: schema.to_native()\n            for schema_id, schema in schemas.items()\n        },\n        {\n            cred_def_id: cred_def.to_native()\n            for cred_def_id, cred_def in credential_definitions.items()\n        },\n    )\n...\n
"},{"location":"design/AnoncredsW3CCompatibility/#converting-an-already-issued-legacy-anoncreds-to-vc_di-formatvice-versa","title":"Converting an already issued legacy anoncreds to VC_DI format(vice versa)","text":"

In this case, we can use the to_w3c method of the Credential class to convert from legacy to W3C, and the to_legacy method of the W3CCredential class to convert from W3C to legacy.

We could call to_w3c method like this:

vc_di_cred = Credential.to_w3c(cred_def)\n

and for to_legacy:

legacy_cred = W3CCredential.to_legacy()\n

We don't need to pass any parameters to it, as it calls the Credential.from_w3c() method under the hood.

"},{"location":"design/AnoncredsW3CCompatibility/#format-handler-for-issue_credential-v2_0-protocol","title":"Format Handler for Issue_credential V2_0 Protocol","text":"

Keeping in mind that we are trying to create anoncreds (not another type of VC) in W3C format, what if we add protocol-level vc_di format support by adding a new format VC_DI in ./protocols/issue_credential/v2_0/messages/cred_format.py -

# /protocols/issue_credential/v2_0/messages/cred_format.py\n\nclass Format(Enum):\n    \"\"\"Attachment Format\"\"\"\n\n    INDY = FormatSpec(...)\n    LD_PROOF = FormatSpec(...)\n    VC_DI = FormatSpec(\n        \"vc_di/\",\n        CredExRecordVCDI,\n        DeferLoad(\n            \"aries_cloudagent.protocols.issue_credential.v2_0\"\n            \".formats.vc_di.handler.AnonCredsW3CFormatHandler\"\n        ),\n    )\n

And create a new CredExRecordVCDI in reference to V20CredExRecordLDProof

# /protocols/issue_credential/v2_0/models/detail/w3c.py\n\nclass CredExRecordW3C(BaseRecord):\n    \"\"\"Credential exchange W3C detail record.\"\"\"\n\n    class Meta:\n        \"\"\"CredExRecordW3C metadata.\"\"\"\n\n        schema_class = \"CredExRecordW3CSchema\"\n\n    RECORD_ID_NAME = \"cred_ex_w3c_id\"\n    RECORD_TYPE = \"w3c_cred_ex_v20\"\n    TAG_NAMES = {\"~cred_ex_id\"} if UNENCRYPTED_TAGS else {\"cred_ex_id\"}\n    RECORD_TOPIC = \"issue_credential_v2_0_w3c\"\n

Based on the proposed credential attachment format with the new Data Integrity proof in aries-rfcs 809 -

{\n  \"@id\": \"284d3996-ba85-45d9-964b-9fd5805517b6\",\n  \"@type\": \"https://didcomm.org/issue-credential/2.0/issue-credential\",\n  \"comment\": \"<some comment>\",\n  \"formats\": [\n    {\n      \"attach_id\": \"5b38af88-d36f-4f77-bb7a-2f04ab806eb8\",\n      \"format\": \"didcomm/w3c-di-vc@v0.1\"\n    }\n  ],\n  \"credentials~attach\": [\n    {\n      \"@id\": \"5b38af88-d36f-4f77-bb7a-2f04ab806eb8\",\n      \"mime-type\": \"application/ld+json\",\n      \"data\": {\n        \"base64\": \"ewogICAgICAgICAgIkBjb250ZXogWwogICAgICAg...(clipped)...RNVmR0SXFXZhWXgySkJBIgAgfQogICAgICAgIH0=\"\n      }\n    }\n  ]\n}\n

Assuming VCDIDetail and VCDIOptions are already in place, VCDIDetailSchema can be created like so:

# /protocols/issue_credential/v2_0/formats/vc_di/models/cred_detail.py\n\nclass VCDIDetailSchema(BaseModelSchema):\n    \"\"\"VC_DI verifiable credential detail schema.\"\"\"\n\n    class Meta:\n        \"\"\"Accept parameter overload.\"\"\"\n\n        unknown = INCLUDE\n        model_class = VCDIDetail\n\n    credential = fields.Nested(\n        CredentialSchema(),\n        required=True,\n        metadata={\n            \"description\": \"Detail of the VC_DI Credential to be issued\",\n            \"example\": {\n                \"@id\": \"284d3996-ba85-45d9-964b-9fd5805517b6\",\n                \"@type\": \"https://didcomm.org/issue-credential/2.0/issue-credential\",\n                \"comment\": \"<some comment>\",\n                \"formats\": [\n                    {\n                        \"attach_id\": \"5b38af88-d36f-4f77-bb7a-2f04ab806eb8\",\n                        \"format\": \"didcomm/w3c-di-vc@v0.1\"\n                    }\n                ],\n                \"credentials~attach\": [\n                    {\n                        \"@id\": \"5b38af88-d36f-4f77-bb7a-2f04ab806eb8\",\n                        \"mime-type\": \"application/ld+json\",\n                        \"data\": {\n                            \"base64\": \"ewogICAgICAgICAgIkBjb250ZXogWwogICAgICAg...(clipped)...RNVmR0SXFXZhWXgySkJBIgAgfQogICAgICAgIH0=\"\n                        }\n                    }\n                ]\n            }\n        },\n    )\n

Then create the W3C format handler with a mapping like so:

# /protocols/issue_credential/v2_0/formats/w3c/handler.py\n\nmapping = {\n            CRED_20_PROPOSAL: VCDIDetailSchema,\n            CRED_20_OFFER: VCDIDetailSchema,\n            CRED_20_REQUEST: VCDIDetailSchema,\n            CRED_20_ISSUE: VerifiableCredentialSchema,\n        }\n

Doing so would allow us to be more independent in defining a schema suited to anoncreds in W3C format. Once the proposal protocol can handle the W3C format, the rest of the flow can probably be implemented easily by adding a vc_di flag to the corresponding routes.

"},{"location":"design/AnoncredsW3CCompatibility/#admin-api-attachments","title":"Admin API Attachments","text":"

To make sure that, once an endpoint has been called to trigger the Issue Credential flow with RFC 0809 VC-DI attachment formats, the subsequent endpoints also follow this format, we can extend the ATTACHMENT_FORMAT dictionary with the proposed VC_DI format.

# Format specifications\nATTACHMENT_FORMAT = {\n    CRED_20_PROPOSAL: {\n        V20CredFormat.Format.INDY.api: \"hlindy/cred-filter@v2.0\",\n        V20CredFormat.Format.LD_PROOF.api: \"aries/ld-proof-vc-detail@v1.0\",\n        V20CredFormat.Format.VC_DI.api: \"aries/vc-di-detail@v2.0\",\n    },\n    CRED_20_OFFER: {\n        V20CredFormat.Format.INDY.api: \"hlindy/cred-abstract@v2.0\",\n        V20CredFormat.Format.LD_PROOF.api: \"aries/ld-proof-vc-detail@v1.0\",\n        V20CredFormat.Format.VC_DI.api: \"aries/vc-di-detail@v2.0\",\n    },\n    CRED_20_REQUEST: {\n        V20CredFormat.Format.INDY.api: \"hlindy/cred-req@v2.0\",\n        V20CredFormat.Format.LD_PROOF.api: \"aries/ld-proof-vc-detail@v1.0\",\n        V20CredFormat.Format.VC_DI.api: \"aries/vc-di-detail@v2.0\",\n    },\n    CRED_20_ISSUE: {\n        V20CredFormat.Format.INDY.api: \"hlindy/cred@v2.0\",\n        V20CredFormat.Format.LD_PROOF.api: \"aries/ld-proof-vc@v1.0\",\n        V20CredFormat.Format.VC_DI.api: \"aries/vc-di@v2.0\",\n    },\n}\n

This _formats_filter function takes care of keeping the attachment formats uniform across the iterations of the flow. We can see this function being called in:

The same goes for the ATTACHMENT_FORMAT of the Present Proof flow. In this case, the DIF Presentation Exchange formats in these test vectors, which are influenced by RFC 0510 DIF Presentation Exchange, will be implemented. Here, the _formats_attach function is the key for the same purpose as above. It gets called in:

"},{"location":"design/AnoncredsW3CCompatibility/#credential-exchange-admin-routes","title":"Credential Exchange Admin Routes","text":"

This route indirectly calls the _formats_filter function to create a credential proposal, which is in turn used to create a credential offer in the filter format. The request body for this route might look like this:

{\n    \"filter\": [\"vc_di\"],\n    \"comment: <some_comment>,\n    \"auto-issue\": true,\n    \"auto-remove\": true,\n    \"replacement_id\": <replacement_id>,\n    \"credential_preview\": {\n        \"@type\": \"issue-credential/2.0/credential-preview\",\n        \"attributes\": {\n            ...\n            ...\n        }\n    }\n}\n

This route indirectly calls the _format_result_with_details function to generate a cred_ex_record in the specified format, which is then returned. The request body for this route might look like this:

{\n    \"filter\": [\"vc_di\"],\n    \"comment: <some_comment>,\n    \"auto-remove\": true,\n    \"credential_preview\": {\n        \"@type\": \"issue-credential/2.0/credential-preview\",\n        \"attributes\": {\n           ...\n           ...\n        }\n    }\n}\n

The request body for this route might look like this:

{\n    \"connection_id\": <connection_id>,\n    \"filter\": [\"vc_di\"],\n    \"comment: <some_comment>,\n    \"auto-remove\": true,\n    \"replacement_id\": <replacement_id>,\n    \"credential_preview\": {\n        \"@type\": \"issue-credential/2.0/credential-preview\",\n        \"attributes\": {\n           ...\n           ...\n        }\n    }\n}\n

The request body for this route might look like this:

{\n    \"connection_id\": <connection_id>,\n    \"filter\": [\"vc_di\"],\n    \"comment: <some_comment>,\n    \"auto-issue\": true,\n    \"auto-remove\": true,\n    \"replacement_id\": <replacement_id>,\n    \"holder_did\": <holder_did>,\n    \"credential_preview\": {\n        \"@type\": \"issue-credential/2.0/credential-preview\",\n        \"attributes\": {\n           ...\n           ...\n        }\n    }\n}\n

The request body for this route might look like this:

{\n    \"connection_id\": <connection_id>,\n    \"filter\": [\"vc_di\"],\n    \"comment: <some_comment>,\n    \"auto-remove\": true,\n    \"replacement_id\": <replacement_id>,\n    \"holder_did\": <holder_did>,\n    \"credential_preview\": {\n        \"@type\": \"issue-credential/2.0/credential-preview\",\n        \"attributes\": {\n           ...\n           ...\n        }\n    }\n}\n
"},{"location":"design/AnoncredsW3CCompatibility/#presentation-admin-routes","title":"Presentation Admin Routes","text":"

The request body for this route might look like this:

{\n    ...\n    ...\n    \"connection_id\": \"<connection_id>\",\n    \"presentation_proposal\": [\"vc_di\"],\n    \"comment\": \"<some_comment>\",\n    \"auto-present\": true,\n    \"auto-remove\": true,\n    \"trace\": false\n}\n

The request body for this route might look like this:

{\n    ...\n    ...\n    \"connection_id\": \"<connection_id>\",\n    \"presentation_proposal\": [\"vc_di\"],\n    \"comment\": \"<some_comment>\",\n    \"auto-verify\": true,\n    \"auto-remove\": true,\n    \"trace\": false\n}\n

The request body for this route might look like this:

{\n    ...\n    ...\n    \"connection_id\": \"<connection_id>\",\n    \"presentation_proposal\": [\"vc_di\"],\n    \"comment\": \"<some_comment>\",\n    \"auto-verify\": true,\n    \"auto-remove\": true,\n    \"trace\": false\n}\n

The request body for this route might look like this:

{\n    \"presentation_definition\": <presentation_definition_schema>,\n    \"auto_remove\": true,\n    \"dif\": {\n        issuer_id: \"<issuer_id>\",\n        record_ids: {\n            \"<input descriptor id_1>\": [\"<record id_1>\", \"<record id_2>\"],\n            \"<input descriptor id_2>\": [\"<record id>\"],\n        }\n    },\n    \"reveal_doc\": {\n        // vc_di dict\n    }\n\n}\n
"},{"location":"design/AnoncredsW3CCompatibility/#how-a-w3c-credential-is-stored-in-the-wallet","title":"How a W3C credential is stored in the wallet","text":"

Storing a credential in the wallet is somewhat dependent on the kinds of metadata that are relevant. The metadata mapping between the W3C credential and an AnonCreds credential is not fully clear yet.

One of the questions we need to answer is whether the preferred approach is to modify the existing store credential function so that any credential type is a valid input, or whether there should be a special function just for storing W3C credentials.

We will duplicate this store_credential function and modify it:

async def store_w3c_credential(...):\n    ...\n    ...\n    try:\n        cred = W3CCredential.load(credential_data)\n    ...\n    ...\n

Question: Would it also be possible to generate the credentials on the fly to eliminate the need for storage?

Answer: I don't think it is possible to eliminate the need for storage, and notably the secure storage (encrypted at rest) supported in Askar.

"},{"location":"design/AnoncredsW3CCompatibility/#how-can-we-handle-multiple-signatures-on-a-w3c-vc-format-credential","title":"How can we handle multiple signatures on a W3C VC Format credential?","text":"

Only one of the signature types (CL) is allowed in the AnonCreds format, so if a W3C VC is converted by to_legacy(), all signature types that can't be turned into a CL signature will be dropped. This would make the conversion lossy. Similarly, an AnonCreds credential carries only the CL signature, limiting the output of to_w3c() to signature types that can be derived from the source CL signature. A possible future enhancement would be to add an extra field to the AnonCreds data structure, in which additional signatures could be stored, even if they are not used. This could eliminate the lossiness, but it adds extra complexity and may not be worth doing.

"},{"location":"design/AnoncredsW3CCompatibility/#compatibility-with-afj-how-can-we-make-sure-that-we-are-compatible","title":"Compatibility with AFJ: how can we make sure that we are compatible?","text":"

We will write a test for the Aries Agent Test Framework that issues a W3C VC instead of an AnonCreds credential, and then run that test where one of the agents is ACA-Py and the other is based on AFJ -- and vice versa. We will also write a test where a W3C VC is presented after an AnonCreds issuance, and run it with the two roles played by the two different agents. This is a simple approach, but if the tests pass, it should eliminate almost all risk of incompatibility.

"},{"location":"design/AnoncredsW3CCompatibility/#will-we-introduce-new-dependencies-and-what-is-risky-or-easy","title":"Will we introduce new dependencies, and what is risky or easy?","text":"

Any significant bugs in the Rust implementation may prevent our wrappers from working, which would also prevent progress (or at least confirmed test results) on the higher-level code.

If AFJ lags behind in delivering equivalent functionality, we may not be able to demonstrate compatibility with the test harness.

"},{"location":"design/AnoncredsW3CCompatibility/#where-should-the-new-issuance-code-go","title":"Where should the new issuance code go?","text":"

The vc directory contains code to verify VCs; is this a logical place to add the code for issuance?

"},{"location":"design/AnoncredsW3CCompatibility/#what-do-we-call-the-new-things-flexcreds-or-just-w3c_xxx","title":"What do we call the new things? Flexcreds? or just W3C_xxx","text":"

Are we defining a concept called Flexcreds, that is, a credential with a proof array from which you can generate more specific or limited credentials? If so, should this be included in the naming?

"},{"location":"design/AnoncredsW3CCompatibility/#how-can-a-wallet-retain-the-capability-to-present-only-an-anoncred-credential","title":"How can a wallet retain the capability to present ONLY an anoncred credential?","text":"

If the wallet receives a "Flexcred" credential object with an array of proofs, the wallet may wish to present ONLY the zero-knowledge AnonCreds proof.

How will wallets support that in a way that is developer-friendly to wallet devs?

"},{"location":"features/AdminAPI/","title":"ACA-Py Administration API","text":""},{"location":"features/AdminAPI/#using-the-openapi-swagger-interface","title":"Using the OpenAPI (Swagger) Interface","text":"

ACA-Py provides an OpenAPI-documented REST interface for administering the agent's internal state and initiating communication with connected agents.

To see the specifics of the supported endpoints, as well as the expected request and response formats, it is recommended to run the aca-py agent with the --admin {HOST} {PORT} and --admin-insecure-mode command line parameters. This exposes the OpenAPI UI on the provided port for interaction via a web browser. For production deployments, run the agent with --admin-api-key {KEY} and add the X-API-Key: {KEY} header to all requests instead of using the --admin-insecure-mode parameter.
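
As a sketch of what this looks like from code, the following uses the Python requests library against a hypothetical agent whose Admin API listens on localhost:8031; the /connections endpoint is a standard Admin API endpoint, while the host, port, and key value are assumptions:

import requests\n\nADMIN_URL = \"http://localhost:8031\"  # assumed Admin API host/port\nAPI_KEY = \"my-admin-api-key\"  # the value passed via --admin-api-key\n\n# When the agent is started with --admin-api-key, every request\n# must carry this header.\nheaders = {\"X-API-Key\": API_KEY}\n\n# List the agent's connections (a standard Admin API endpoint).\nresponse = requests.get(f\"{ADMIN_URL}/connections\", headers=headers)\nresponse.raise_for_status()\nprint(response.json())\n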

To invoke a specific method: scroll to and find that endpoint; click on the endpoint name to expand its section of the UI; click on the \"Try it out\" button; fill in any data necessary to run the command; click \"Execute\"; check the response to see if the request worked as expected.

The mechanical steps are easy; however, the fourth step from the list above can be tricky: supplying the right data and, where JSON is involved, getting the syntax correct (braces and quotes can be a pain). When steps don't work, start your debugging by looking at your JSON. You may also choose to use a REST client like Postman or Insomnia, which will provide syntax highlighting and other features to simplify the process.

Because API methods often initiate asynchronous processes, the JSON response provided by an endpoint is not always sufficient to determine the next action. To handle this situation, as well as events triggered by external inputs (such as new connection requests), it is necessary to implement a webhook processor, as detailed in the next section.

The combination of an OpenAPI client and webhook processor is referred to as an ACA-Py Controller and is the recommended method to define custom behaviors for your ACA-Py-based agent application.

"},{"location":"features/AdminAPI/#administration-api-webhooks","title":"Administration API Webhooks","text":"

When ACA-Py is started with the --webhook-url {URL} command line parameter, state-management records are sent to the provided URL via POST requests whenever a record is created or its state property is updated.

When a webhook is dispatched, the record topic is appended as a path component to the URL. For example, https://webhook.host.example becomes https://webhook.host.example/topic/connections when a connection record is updated. A POST request is made to the resulting URL with the body of the request comprising a serialized JSON object. The full set of properties of the current set of webhook payloads are listed below. Note that empty (null-value) properties are omitted.
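
A minimal webhook processor can be sketched with aiohttp (an assumption; any HTTP server works). It assumes the agent was started with --webhook-url pointing at this server, and registers the /topic/{topic} path pattern described above; some versions post with a trailing slash, so both routes are registered:

from aiohttp import web\n\nasync def handle_webhook(request: web.Request) -> web.Response:\n    \"\"\"The topic comes from the path; the record is the JSON body.\"\"\"\n    topic = request.match_info[\"topic\"]\n    payload = await request.json()\n    print(f\"webhook topic={topic} state={payload.get('state')}\")\n    return web.Response(status=200)\n\napp = web.Application()\napp.add_routes(\n    [\n        web.post(\"/topic/{topic}\", handle_webhook),\n        web.post(\"/topic/{topic}/\", handle_webhook),  # trailing-slash variant\n    ]\n)\n\nif __name__ == \"__main__\":\n    web.run_app(app, port=8022)  # assumed port; match your --webhook-url\n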

"},{"location":"features/AdminAPI/#webhooks-over-websocket","title":"Webhooks over WebSocket","text":"

ACA-Py's Admin API also supports delivering webhooks over WebSocket. This can be especially useful when working with scripts that interact with the Admin API but don't have a web server listening to receive webhooks in response to their actions. No additional command line parameters are required to enable WebSocket support.

Webhooks received over WebSocket contain the same data as webhooks posted over HTTP, but the structure differs in order to communicate details that would otherwise have been received as part of the HTTP request path and headers.

To open a WebSocket, connect to the /ws endpoint of the Admin API.
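
For example, a small consumer using the third-party websockets package (an assumption; any WebSocket client will do) might look like the following; the Admin API address is hypothetical, and the exact message structure should be confirmed against your agent:

import asyncio\nimport json\n\nimport websockets  # assumed dependency: pip install websockets\n\nasync def listen(url: str = \"ws://localhost:8031/ws\") -> None:\n    \"\"\"Print webhook events received over the Admin API WebSocket.\"\"\"\n    async with websockets.connect(url) as ws:\n        async for message in ws:\n            event = json.loads(message)\n            # Topic and metadata arrive in the message body rather than\n            # in the HTTP request path and headers.\n            print(event.get(\"topic\"), event)\n\nasyncio.run(listen())\n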

"},{"location":"features/AdminAPI/#pairwise-connection-record-updated-connections","title":"Pairwise Connection Record Updated (/connections)","text":""},{"location":"features/AdminAPI/#basic-message-received-basicmessages","title":"Basic Message Received (/basicmessages)","text":""},{"location":"features/AdminAPI/#forward-message-received-forward","title":"Forward Message Received (/forward)","text":"

Enable this webhook topic using the --monitor-forward command line parameter.

"},{"location":"features/AdminAPI/#credential-exchange-record-updated-issue_credential","title":"Credential Exchange Record Updated (/issue_credential)","text":""},{"location":"features/AdminAPI/#presentation-exchange-record-updated-present_proof","title":"Presentation Exchange Record Updated (/present_proof)","text":""},{"location":"features/AdminAPI/#api-standard-behavior","title":"API Standard Behavior","text":"

The best way to develop a new admin API or protocol is to follow one of the existing protocols, such as the Credential Exchange or Presentation Exchange.

The routes.py file contains the API definitions - API endpoints and payload schemas (note that these are not the Aries message schemas).

The payload schemas are defined using marshmallow and will be validated automatically when the API is executed (using middleware). (This raises a status 422 HTTP response with an error message if the schema validation fails.)

API endpoints are defined using aiohttp_apispec tags (e.g. @doc, @request_schema, @response_schema etc.) which define the input and output parameters of the endpoint. API URL paths are defined in the register() method and added to the Swagger page in the post_process_routes() method.
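
A hedged sketch of a new route following this pattern is shown below; the schema and handler names are invented for illustration, while docs, request_schema, and response_schema are the aiohttp_apispec decorators named above:

from aiohttp import web\nfrom aiohttp_apispec import docs, request_schema, response_schema\nfrom marshmallow import Schema, fields\n\nclass ExampleRequestSchema(Schema):\n    \"\"\"Request payload; middleware validation returns 422 on failure.\"\"\"\n\n    comment = fields.Str(required=False)\n\nclass ExampleResponseSchema(Schema):\n    \"\"\"Response payload.\"\"\"\n\n    result = fields.Str()\n\n@docs(tags=[\"example\"], summary=\"An illustrative endpoint\")\n@request_schema(ExampleRequestSchema())\n@response_schema(ExampleResponseSchema(), 200)\nasync def example_handler(request: web.BaseRequest):\n    body = await request.json()\n    return web.json_response({\"result\": body.get(\"comment\", \"ok\")})\n\nasync def register(app: web.Application):\n    \"\"\"Add the route, as a protocol's routes.py register() would.\"\"\"\n    app.add_routes([web.post(\"/example\", example_handler)])\n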

The APIs should return the following HTTP statuses:

...and should not return:

"},{"location":"features/AnonCredsMethods/","title":"Adding AnonCreds Methods to ACA-Py","text":"

ACA-Py was originally developed to be used with Hyperledger AnonCreds objects (Schemas, Credential Definitions and Revocation Registries) published on Hyperledger Indy networks. However, with the evolution of \"ledger-agnostic\" AnonCreds, ACA-Py supports publishing AnonCreds objects wherever you want to put them. If you want to add a new \"AnonCreds Method\" to publish AnonCreds objects to a new Verifiable Data Registry (VDR) (perhaps to your favorite blockchain, or using a web-based DID method), you'll find the details of how to do that here. We often use the term \"ledger\" for the location where AnonCreds objects are published, but here we will use \"VDR\", since a VDR does not have to be a ledger.

The information in this document was discussed on an ACA-Py Maintainers call in March 2024. You can watch the call recording by clicking here.

This is an early version of this document and we assume those reading it are quite familiar with using ACA-Py, have a good understanding of ACA-Py internals, and are Python experts. See the Questions or Comments section below for how to get help as you work through this.

"},{"location":"features/AnonCredsMethods/#create-a-plugin","title":"Create a Plugin","text":"

We recommend that if you are adding a new AnonCreds method, you do so by creating an ACA-Py plugin. See the documentation on ACA-Py plugins and use the set of plugins available in the aries-acapy-plugins repository to help you get started. When you finish your AnonCreds method, we recommend that you publish the plugin in the aries-acapy-plugins repository. If you think that the AnonCreds method you create should be part of ACA-Py core, get your plugin complete and raise the question of adding it to ACA-Py. The Maintainers will be happy to discuss the merits of the idea. No promises though.

Your AnonCreds plugin will have an initialization routine that registers your AnonCreds implementation. It registers the identifier constructs that your method will be using, and it is those identifier constructs that determine which AnonCreds Registrar and Resolver are called for any given AnonCreds object identifier. Check out this example of the registration of the \"legacy\" Indy AnonCreds method for more details.
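
A rough sketch of such an initialization routine, modeled on the legacy Indy example linked above, might look like the following; MyMethodRegistry is an invented name for your combined resolver/registrar implementation, and the exact registry API should be checked against your ACA-Py version:

from aries_cloudagent.anoncreds.registry import AnonCredsRegistry\nfrom aries_cloudagent.config.injection_context import InjectionContext\n\nfrom .registry import MyMethodRegistry  # hypothetical implementation\n\nasync def setup(context: InjectionContext):\n    \"\"\"Register the new AnonCreds method on startup.\"\"\"\n    anoncreds_registry = context.inject_or(AnonCredsRegistry)\n    if not anoncreds_registry:\n        raise ValueError(\"AnonCredsRegistry not available, aborting setup\")\n    my_registry = MyMethodRegistry()\n    await my_registry.setup(context)\n    # The registered identifier patterns determine which registrar and\n    # resolver are invoked for a given AnonCreds object identifier.\n    anoncreds_registry.register(my_registry)\n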

"},{"location":"features/AnonCredsMethods/#the-implementation","title":"The Implementation","text":"

The basic work involved in creating an AnonCreds method is the implementation of both a \"registrar\" to write AnonCreds objects to a VDR, and a \"resolver\" to read AnonCreds objects from a VDR. To do that for your new AnonCreds method, you will need to:

The links above are to a specific commit and the code may have been updated since. You might want to look at the methods in the current version of aries_cloudagent/anoncreds/base.py in the main branch.

The interface for those methods is very clean, and there are currently two implementations of the methods in the ACA-Py codebase -- the \"legacy\" Indy implementation, and the did:indy Indy implementation. There is also a did:web resolver implementation.

Models for the API are defined here.

"},{"location":"features/AnonCredsMethods/#events","title":"Events","text":"

When you create your AnonCreds method registrar, make sure that your implementations call the appropriate finish_* methods (e.g., AnonCredsIssuer.finish_schema, AnonCredsIssuer.finish_cred_def, etc.) in AnonCreds Issuer. These calls are necessary to trigger the automation of AnonCreds event handling that is done by ACA-Py, particularly around the handling of Revocation Registries. As you (should) know, when an Issuer uses ACA-Py to create a Credential Definition that supports revocation, ACA-Py automatically creates and publishes two Revocation Registries related to the Credential Definition, publishes the tails file for each, makes one active, and sets the other to be activated as soon as the active one runs out of credentials. Your AnonCreds method implementation doesn't have to do much to make that happen -- ACA-Py does it automatically -- but your implementation must call the finish_* methods to trigger ACA-Py to continue the automation. You can see the automation setup in Revocation Setup.

"},{"location":"features/AnonCredsMethods/#questions-or-comments","title":"Questions or Comments","text":"

The ACA-Py maintainers welcome questions from those new to the community who have the skills to implement a new AnonCreds method. Use the #aries-cloudagent-python channel on the Hyperledger Discord Server or open an issue in this repo to get help.

Pull Requests to the ACA-Py repository to improve this content are welcome!

"},{"location":"features/AnoncredsProofValidation/","title":"Anoncreds Proof Validation in ACA-Py","text":"

ACA-Py performs pre-validation when verifying Anoncreds presentations (proofs). Some scenarios are rejected (such as those indicative of tampering), while some attributes are removed before running the anoncreds validation (e.g., removing superfluous non-revocation timestamps). Any ACA-Py validations or presentation modifications are indicated by the \"verify_msgs\" attribute in the final presentation exchange object.

The list of possible verification messages can be found here, and consists of:

class PresVerifyMsg(str, Enum):\n    \"\"\"Credential verification codes.\"\"\"\n\n    RMV_REFERENT_NON_REVOC_INTERVAL = \"RMV_RFNT_NRI\"\n    RMV_GLOBAL_NON_REVOC_INTERVAL = \"RMV_GLB_NRI\"\n    TSTMP_OUT_NON_REVOC_INTRVAL = \"TS_OUT_NRI\"\n    CT_UNREVEALED_ATTRIBUTES = \"UNRVL_ATTR\"\n    PRES_VALUE_ERROR = \"VALUE_ERROR\"\n    PRES_VERIFY_ERROR = \"VERIFY_ERROR\"\n

If there is additional information, it will be included like this: TS_OUT_NRI::19_uuid, which means the attribute identified by 19_uuid contained a timestamp outside of the non-revocation interval (this is just a warning).

A presentation verification may include multiple messages, for example:

    ...\n    \"verified\": \"true\",\n    \"verified_msgs\": [\n        \"TS_OUT_NRI::18_uuid\",\n        \"TS_OUT_NRI::18_id_GE_uuid\",\n        \"TS_OUT_NRI::18_busid_GE_uuid\"\n    ],\n    ...\n

... or it may include a single message, for example:

    ...\n    \"verified\": \"false\",\n    \"verified_msgs\": [\n        \"VALUE_ERROR::Encoded representation mismatch for 'Preferred Name'\"\n    ],\n    ...\n

... or the verified_msgs may be null or an empty array.
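
Since each entry is a code, optionally followed by :: and a detail string, a controller can take the messages apart mechanically; a minimal sketch:

def parse_verify_msgs(msgs):\n    \"\"\"Split each verification message into a (code, detail) pair.\"\"\"\n    parsed = []\n    for msg in msgs or []:  # verified_msgs may be null or empty\n        code, _, detail = msg.partition(\"::\")\n        parsed.append((code, detail or None))\n    return parsed\n\nassert parse_verify_msgs([\"TS_OUT_NRI::18_uuid\"]) == [(\"TS_OUT_NRI\", \"18_uuid\")]\n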

"},{"location":"features/AnoncredsProofValidation/#presentation-modifications-and-warnings","title":"Presentation Modifications and Warnings","text":"

The following modifications/warnings may be made by ACA-Py, which shouldn't affect the verification of the received proof:

"},{"location":"features/AnoncredsProofValidation/#presentation-pre-validation-errors","title":"Presentation Pre-validation Errors","text":"

The following pre-verification checks are performed, which will cause the proof to fail (before calling anoncreds) and result in the following message:

VALUE_ERROR::<description of the failed validation>\n

These validations are all performed within the Indy verifier class - to see the detailed validation, look for any occurrences of raise ValueError(...) in the code.

A summary of the possible errors includes:

"},{"location":"features/AnoncredsProofValidation/#anoncreds-verification-exceptions","title":"Anoncreds Verification Exceptions","text":"

Typically, when you call the anoncreds verifier_verify_proof() method, it will return a True or False based on whether the presentation cryptographically verifies. However, in the case where anoncreds throws an exception, the exception text will be included in a verification message as follows:

VERIFY_ERROR::<the exception text>\n
"},{"location":"features/DIDMethods/","title":"DID Methods in ACA-Py","text":"

Decentralized Identifiers, or DIDs, are URIs that point to documents that describe cryptographic primitives and protocols used in decentralized identity management. DIDs include methods that describe where and how documents can be retrieved. DID methods support specific types of keys and may or may not require the holder to specify the DID itself.

ACA-Py provides a DIDMethods registry holding all the DID methods supported for storage in a wallet.

Askar and InMemory are the only wallets supporting this registry.

"},{"location":"features/DIDMethods/#registering-a-did-method","title":"Registering a DID method","text":"

By default, ACA-Py supports did:key and did:sov. Plugins can register additional DID methods to make them available to holders. Here's a snippet adding support for did:web to the registry from a plugin setup method.

WEB = DIDMethod(\n    name=\"web\",\n    key_types=[ED25519, BLS12381G2],\n    rotation=True,\n    holder_defined_did=HolderDefinedDid.REQUIRED  # did:web is not derived from key material but from a user-provided repository name\n)\n\nasync def setup(context: InjectionContext):\n    methods = context.inject(DIDMethods)\n    methods.register(WEB)\n
"},{"location":"features/DIDMethods/#creating-a-did","title":"Creating a DID","text":"

POST /wallet/did/create can be provided with parameters for any registered DID method. Here's a follow-up to the did:web method example:

{\n    \"method\": \"web\",\n    \"options\": {\n        \"did\": \"did:web:doma.in\",\n        \"key_type\": \"ed25519\"\n    }\n}\n
"},{"location":"features/DIDMethods/#resolving-dids","title":"Resolving DIDs","text":"

For specifics on how DIDs are resolved in ACA-Py, see: DID Resolution.

"},{"location":"features/DIDResolution/","title":"DID Resolution in ACA-Py","text":"

Decentralized Identifiers, or DIDs, are URIs that point to documents that describe cryptographic primitives and protocols used in decentralized identity management. DIDs include methods that describe where and how documents can be retrieved. DID resolution is the process of \"resolving\" a DID Document from a DID as dictated by the DID method.

A DID Resolver is a piece of software that implements the methods for resolving a document from a DID.

For example, given the DID did:example:1234abcd, a DID Resolver that supports did:example might return:

{\n \"@context\": \"https://www.w3.org/ns/did/v1\",\n \"id\": \"did:example:1234abcd\",\n \"verificationMethod\": [{\n  \"id\": \"did:example:1234abcd#keys-1\",\n  \"type\": \"Ed25519VerificationKey2018\",\n  \"controller\": \"did:example:1234abcd\",\n  \"publicKeyBase58\": \"H3C2AVvLMv6gmMNam3uVAjZpfkcJCwDwnZn6z3wXmqPV\"\n }],\n \"service\": [{\n  \"id\": \"did:example:1234abcd#did-communication\",\n  \"type\": \"did-communication\",\n  \"serviceEndpoint\": \"https://agent.example.com/8377464\"\n }]\n}\n

For more details on DIDs and DID Resolution, see the W3C DID Specification.

In practice, DIDs and DID Documents are used for a variety of purposes but especially to help establish connections between Agents and verify credentials.

"},{"location":"features/DIDResolution/#didresolver","title":"DIDResolver","text":"

In ACA-Py, the DIDResolver provides the interface to resolve DIDs using registered method resolvers. Method resolver registration happens on startup in a did_resolvers list. This registry enables additional resolvers to be loaded via plugin.

"},{"location":"features/DIDResolution/#example-usage","title":"Example usage","text":"
class ExampleMessageHandler:\n    async def handle(self, context: RequestContext, responder: BaseResponder):\n        \"\"\"Handle example message.\"\"\"\n        resolver = context.inject(DIDResolver)\n\n        doc: dict = await resolver.resolve(\"did:example:123\")\n        assert doc[\"id\"] == \"did:example:123\"\n\n        verification_method = await resolver.dereference(\"did:example:123#keys-1\")\n\n        # ...\n
"},{"location":"features/DIDResolution/#method-resolver-selection","title":"Method Resolver Selection","text":"

On DIDResolver.resolve or DIDResolver.dereference, the resolver interface will select the most appropriate method resolver to handle the given DID. In this selection process, method resolvers are distinguished from each other by:

The selection algorithm roughly follows the following steps:

  1. Filter out all resolvers where resolver.supports(did) returns false.
  2. Partition remaining resolvers by type with all native resolvers followed by non-native resolvers (registration order preserved within partitions).
  3. For each resolver in the resulting list, attempt to resolve the DID and return the first successful result.
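
In isolation, the selection can be sketched as follows; this is a simplification of the real DIDResolver logic, assuming each resolver exposes an async supports() check, a native flag, and an async resolve():

async def resolve_with_selection(resolvers, did: str) -> dict:\n    \"\"\"Try matching resolvers, native ones first, until one succeeds.\"\"\"\n    # 1. Keep only resolvers that claim support for this DID.\n    supporting = [r for r in resolvers if await r.supports(did)]\n    # 2. Native before non-native; sort() is stable, so registration\n    # order is preserved within each partition.\n    ordered = sorted(supporting, key=lambda r: not r.native)\n    # 3. Return the first successful resolution.\n    for resolver in ordered:\n        try:\n            return await resolver.resolve(did)\n        except Exception:\n            continue\n    raise ValueError(f\"no resolver succeeded for {did}\")\n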
"},{"location":"features/DIDResolution/#resolver-plugins","title":"Resolver Plugins","text":"

Extending ACA-Py with additional Method Resolvers should be relatively simple. Supposing that you want to resolve DIDs for the did:cool method, this should be as simple as installing a method resolver into your python environment and loading the resolver on startup. If no method resolver exists yet for did:cool, writing your own should require minimal overhead.

"},{"location":"features/DIDResolution/#writing-a-resolver-plugin","title":"Writing a resolver plugin","text":"

Method resolver plugins are composed of two primary pieces: plugin injection and resolution logic. The resolution logic dictates how a DID becomes a DID Document, following the given DID Method Specification. This logic is implemented using the BaseDIDResolver class as the base. BaseDIDResolver is an abstract base class that defines the interface that the core DIDResolver expects for Method resolvers.

The following is an example method resolver implementation. In this example, we have 2 files, one for each piece (injection and resolution). The __init__.py will be in charge of injecting the plugin, and example_resolver.py will have the logic implementation to resolve for a fabricated did:example method.

"},{"location":"features/DIDResolution/#__init-__py","title":"__init __.py","text":"

from aries_cloudagent.config.injection_context import InjectionContext\nfrom ..resolver.did_resolver import DIDResolver\n\nfrom .example_resolver import ExampleResolver\n\nasync def setup(context: InjectionContext):\n    \"\"\"Setup the plugin.\"\"\"\n    registry = context.inject(DIDResolver)\n    resolver = ExampleResolver()\n    await resolver.setup(context)\n    registry.append(resolver)\n

example_resolver.py:

import re\nfrom typing import Pattern\n\nfrom aries_cloudagent.core.profile import Profile\nfrom aries_cloudagent.resolver.base import (\n    BaseDIDResolver,\n    DIDNotFound,\n    ResolverType,\n)\n\nclass ExampleResolver(BaseDIDResolver):\n    \"\"\"ExampleResolver class.\"\"\"\n\n    def __init__(self):\n        super().__init__(ResolverType.NATIVE)\n        # Alternatively, ResolverType.NON_NATIVE\n        self._supported_did_regex = re.compile(\"^did:example:.*$\")\n\n    @property\n    def supported_did_regex(self) -> Pattern:\n        \"\"\"Return compiled regex matching supported DIDs.\"\"\"\n        return self._supported_did_regex\n\n    async def setup(self, context):\n        \"\"\"Setup the example resolver (none required).\"\"\"\n\n    async def _resolve(self, profile: Profile, did: str) -> dict:\n        \"\"\"Resolve example DIDs.\"\"\"\n        if did != \"did:example:1234abcd\":\n            raise DIDNotFound(\n                \"We only actually resolve did:example:1234abcd. Sorry!\"\n            )\n\n        return {\n            \"@context\": \"https://www.w3.org/ns/did/v1\",\n            \"id\": \"did:example:1234abcd\",\n            \"verificationMethod\": [{\n                \"id\": \"did:example:1234abcd#keys-1\",\n                \"type\": \"Ed25519VerificationKey2018\",\n                \"controller\": \"did:example:1234abcd\",\n                \"publicKeyBase58\": \"H3C2AVvLMv6gmMNam3uVAjZpfkcJCwDwnZn6z3wXmqPV\"\n            }],\n            \"service\": [{\n                \"id\": \"did:example:1234abcd#did-communication\",\n                \"type\": \"did-communication\",\n                \"serviceEndpoint\": \"https://agent.example.com/\"\n            }]\n        }\n

"},{"location":"features/DIDResolution/#errors","title":"Errors","text":"

There are three different errors associated with resolution in ACA-Py that you may encounter during development.

"},{"location":"features/DIDResolution/#using-resolver-plugins","title":"Using Resolver Plugins","text":"

In this section, the Github Resolver Plugin found here will be used as an example plugin to work with. This resolver resolves did:github DIDs.

The resolution algorithm is simple: for the github DID did:github:dbluhm, the method-specific identifier dbluhm (a GitHub username) is used to look up an index.jsonld file in the ghdid repository in that GitHub user's profile. See the GitHub DID Method Specification for more details.

To use this plugin, first install it into your project's python environment:

pip install git+https://github.com/dbluhm/acapy-resolver-github\n

Then, invoke ACA-Py as you normally do with the addition of:

$ aca-py start \\\n    --plugin acapy_resolver_github \\\n    # ... the remainder of your startup arguments\n

Or add the following to your configuration file:

plugin:\n  - acapy_resolver_github\n

The following is a fully functional Dockerfile encapsulating this setup:

FROM ghcr.io/hyperledger/aries-cloudagent-python:py3.9-0.12.0rc2\nRUN pip3 install git+https://github.com/dbluhm/acapy-resolver-github\n\nCMD [\"aca-py\", \"start\", \"-it\", \"http\", \"0.0.0.0\", \"3000\", \"-ot\", \"http\", \"-e\", \"http://localhost:3000\", \"--admin\", \"0.0.0.0\", \"3001\", \"--admin-insecure-mode\", \"--no-ledger\", \"--plugin\", \"acapy_resolver_github\"]\n

To use the above Dockerfile:

docker build -t resolver-example .\ndocker run --rm -it -p 3000:3000 -p 3001:3001 resolver-example\n

"},{"location":"features/DIDResolution/#directory-of-resolver-plugins","title":"Directory of Resolver Plugins","text":""},{"location":"features/DIDResolution/#references","title":"References","text":"

https://www.w3.org/TR/did-core/ https://w3c-ccg.github.io/did-resolution/

"},{"location":"features/DevReadMe/","title":"Developer's Read Me for Hyperledger Aries Cloud Agent - Python","text":"

See the README for details about this repository and information about how the Aries Cloud Agent - Python fits into the Aries project and relates to Indy.

"},{"location":"features/DevReadMe/#table-of-contents","title":"Table of Contents","text":""},{"location":"features/DevReadMe/#introduction","title":"Introduction","text":"

Aries Cloud Agent Python (ACA-Py) is a configurable, extensible, non-mobile Aries agent that implements an easy way for developers to build decentralized identity services that use verifiable credentials.

The information on this page assumes you are a developer with a background in decentralized identity, Aries, DID Methods, and verifiable credentials, especially AnonCreds. If you aren't familiar with those concepts and projects, please use our Getting Started Guide to learn more.

"},{"location":"features/DevReadMe/#developer-demos","title":"Developer Demos","text":"

To put ACA-Py through its paces at the command line, check out our demos page.

"},{"location":"features/DevReadMe/#running","title":"Running","text":""},{"location":"features/DevReadMe/#configuring-aca-py-command-line-parameters","title":"Configuring ACA-PY: Command Line Parameters","text":"

ACA-Py agent instances are configured through the use of command line parameters, environment variables, and/or YAML files. All of the configuration settings can be managed using any combination of the three methods (command line parameters override environment variables, which override YAML). Use the --help option to discover the available command line parameters. There are a lot of them, for better and for worse.
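
For example, a single setting could (hypothetically) be supplied in all three ways; the environment variable name follows the ACAPY_ prefix pattern visible in the --help output, and the YAML key mirrors the long option name, both of which you should confirm against --help for your version:

# 1. Command line parameter (highest precedence)\naca-py start --log-level debug ...\n\n# 2. Environment variable (assumed name, per the ACAPY_ prefix pattern)\nACAPY_LOG_LEVEL=debug aca-py start ...\n\n# 3. Entry in a YAML config file passed via --arg-file (lowest precedence)\nlog-level: debug\n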

"},{"location":"features/DevReadMe/#docker","title":"Docker","text":"

To run a docker container based on the code in the current repo, use the following commands from the root folder of the repository to check the version, list the available modes of operation, and see all of the command line parameters:

scripts/run_docker --version\nscripts/run_docker --help\nscripts/run_docker provision --help\nscripts/run_docker start --help\n
"},{"location":"features/DevReadMe/#locally-installed","title":"Locally Installed","text":"

If you installed the PyPi package, the executable aca-py should be available on your PATH.

Use the following commands from the root folder of the repository to check the version, list the available modes of operation, and see all of the command line parameters:

aca-py --version\naca-py --help\naca-py provision --help\naca-py start --help\n

If you get an error about a missing module indy (e.g. ModuleNotFoundError: No module named 'indy') when running aca-py, you will need to install the Indy libraries from the command line:

pip install python3_indy\n

Once that completes successfully, you should be able to run aca-py --version and the other examples above.

"},{"location":"features/DevReadMe/#about-aca-py-command-line-parameters","title":"About ACA-Py Command Line Parameters","text":"

ACA-Py invocations are separated into two types - initially provisioning an agent (provision) and starting a new agent process (start). This separation means that the encryption-related parameters required for provisioning do not have to be passed in when starting an agent instance. This improves security in production deployments.

When starting an agent instance, at least one inbound and one outbound transport MUST be specified.

For example:

aca-py start    --inbound-transport http 0.0.0.0 8000 \\\n                --outbound-transport http\n

or

aca-py start    --inbound-transport http 0.0.0.0 8000 \\\n                --inbound-transport ws 0.0.0.0 8001 \\\n                --outbound-transport ws \\\n                --outbound-transport http\n

ACA-Py ships with both inbound and outbound transport drivers for http and ws (websockets). Additional transport drivers can be added as pluggable implementations. See the existing implementations in the transports module for getting started on adding a new transport.

Most configuration parameters are provided to the agent at startup. Refer to the Running sections above for details on listing the available command line parameters.

"},{"location":"features/DevReadMe/#provisioning-secure-storage","title":"Provisioning Secure Storage","text":"

It is possible to provision a secure storage (sometimes called a wallet--but not the same as a mobile wallet app) before running an agent to avoid passing in the secure storage seed on every invocation of an agent (e.g. on every aca-py start ...).

aca-py provision --wallet-type askar --seed $SEED\n

For additional provision options, execute aca-py provision --help.

Additional information about secure storage options and configuration settings can be found here.

"},{"location":"features/DevReadMe/#mediation","title":"Mediation","text":"

ACA-Py can also run in mediator mode - ACA-Py can be run as a mediator (it can mediate connections for other agents), or it can connect to an external mediator to mediate its own connections. See the docs on mediation for more info.

"},{"location":"features/DevReadMe/#multi-tenancy","title":"Multi-tenancy","text":"

ACA-Py can also be started in multi-tenant mode. This allows the agent to serve multiple tenants, each with their own wallet. See the docs on multi-tenancy for more info.

"},{"location":"features/DevReadMe/#json-ld-credentials","title":"JSON-LD Credentials","text":"

ACA-Py can issue W3C Verifiable Credentials using Linked Data Proofs. See the docs on JSON-LD Credentials for more info.

"},{"location":"features/DevReadMe/#developing","title":"Developing","text":""},{"location":"features/DevReadMe/#prerequisites","title":"Prerequisites","text":"

Docker must be installed to run software locally and to run the test suite.

"},{"location":"features/DevReadMe/#running-in-a-dev-container","title":"Running In A Dev Container","text":"

The dev container environment is a great way to deploy agents quickly with code changes and an interactive debug session. Detailed information can be found in the Docs On Devcontainers. It is specific to VS Code, so if you prefer another code editor or IDE you will need to figure it out on your own, but it is highly recommended to give this a try.

One thing to be aware of is that, unlike the demo, none of the steps are automated. You will need to create public DIDs, connections, and all the other steps yourself. Using the demo to study the flow, and then reproducing the steps in your dev container debug session, is a great way to learn how everything works.

"},{"location":"features/DevReadMe/#running-locally","title":"Running Locally","text":"

Another way to develop locally is by using the provided Docker scripts to run the ACA-Py software.

./scripts/run_docker start <args>\n

For example:

./scripts/run_docker start --inbound-transport http 0.0.0.0 10000 --outbound-transport http --debug --log-level DEBUG\n

To enable the ptvsd Python debugger for Visual Studio/VSCode use the --debug command line parameter.

Any ports you will be using from the docker container should be published using the PORTS environment variable. For example:

PORTS=\"5000:5000 8000:8000 10000:10000\" ./scripts/run_docker start --inbound-transport http 0.0.0.0 10000 --outbound-transport http --debug --log-level DEBUG\n

Refer to the previous section for instructions on how to run ACA-Py.

"},{"location":"features/DevReadMe/#logging","title":"Logging","text":"

You can find more details about logging and log levels here.

"},{"location":"features/DevReadMe/#running-tests","title":"Running Tests","text":"

To run the ACA-Py test suite, use the following script:

./scripts/run_tests\n

To run the ACA-Py test suite with ptvsd debugger enabled:

./scripts/run_tests --debug\n

To run specific tests pass parameters as defined by pytest:

./scripts/run_tests aries_cloudagent/protocols/connections\n

To run the tests including Indy SDK and related dependencies, run the script:

./scripts/run_tests_indy\n
"},{"location":"features/DevReadMe/#running-aries-agent-test-harness-tests","title":"Running Aries Agent Test Harness Tests","text":"

You can run a full suite of integration tests using the Aries Agent Test Harness (AATH).

Check out and run AATH tests as follows (this tests the aca-py main branch):

git clone https://github.com/hyperledger/aries-agent-test-harness.git\ncd aries-agent-test-harness\n./manage build -a acapy-main\n./manage run -d acapy-main -t @AcceptanceTest -t ~@wip\n

The manage script is described in detail here, including how to modify the AATH code to run the tests against your aca-py repo/branch.

"},{"location":"features/DevReadMe/#development-workflow","title":"Development Workflow","text":"

We use Ruff to enforce a coding style guide.

We use Black to automatically format code.

Please write tests for the work that you submit.

Tests should reside in a directory named tests alongside the code under test. Generally, there is one test file for each module under test. Test files must have a name starting with test_ to be automatically picked up by the test runner.

There are some good examples of various test scenarios for you to work from including mocking external imports and working with async code so take a look around!

The test suite also displays the current code coverage after each run so you can see how much of your work is covered by tests. Use your best judgement for how much coverage is sufficient.

Please also refer to the contributing guidelines and code of conduct.

"},{"location":"features/DevReadMe/#publishing-releases","title":"Publishing Releases","text":"

The publishing document provides information on tagging a release and publishing the release artifacts to PyPi.

"},{"location":"features/DevReadMe/#dynamic-injection-of-services","title":"Dynamic Injection of Services","text":"

The Agent employs a dynamic injection system whereby providers of base classes are registered with the RequestContext instance, currently within conductor.py. Message handlers and services request an instance of the selected implementation using context.inject(BaseClass); for instance the wallet instance may be injected using wallet = context.inject(BaseWallet). The inject method normally throws an exception if no implementation of the base class is provided, but can be called with required=False for optional dependencies (in which case a value of None may be returned).

Providers are registered with either context.injector.bind_instance(BaseClass, instance) for previously-constructed (singleton) object instances, or context.injector.bind_provider(BaseClass, provider) for dynamic providers. In some cases it may be desirable to write a custom provider which switches implementations based on configuration settings, such as the wallet provider.

The BaseProvider classes in the config.provider module include ClassProvider, which can perform dynamic module inclusion when given the combined module and class name as a string (for instance aries_cloudagent.wallet.indy.IndyWallet). ClassProvider accepts additional positional and keyword arguments to be passed into the class constructor. Any of these arguments may be an instance of ClassProvider.Inject(BaseClass), allowing dynamic injection of dependencies when the class instance is instantiated.
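
Putting those pieces together, a hedged sketch of binding and injecting follows; the classes referenced are those named above, and the construction details are illustrative rather than what the conductor actually does:

from aries_cloudagent.config.injection_context import InjectionContext\nfrom aries_cloudagent.config.provider import ClassProvider\nfrom aries_cloudagent.wallet.base import BaseWallet\n\ncontext = InjectionContext()\n\n# Dynamic provider: the class is loaded from its dotted path on demand;\n# constructor arguments could include ClassProvider.Inject(...) entries.\ncontext.injector.bind_provider(\n    BaseWallet,\n    ClassProvider(\"aries_cloudagent.wallet.indy.IndyWallet\"),\n)\n\n# A handler or service then requests the selected implementation:\nwallet = context.inject(BaseWallet)\n\n# Optional dependency: returns None instead of raising when unbound.\nmaybe_wallet = context.inject(BaseWallet, required=False)\n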

"},{"location":"features/Endorser/","title":"Transaction Endorser Support","text":"

ACA-Py supports an Endorser Protocol that allows an unprivileged agent (an \"Author\") to request that another agent (the \"Endorser\") sign its transactions so it can write them to the ledger. This is required on Indy ledgers, where new agents will typically be granted only \"Author\" privileges.

Transaction Endorsement is built into the protocols for Schema, Credential Definition and Revocation, and endorsements can be explicitly requested, or ACA-Py can be configured to automate the endorsement workflow.

"},{"location":"features/Endorser/#setting-up-connections-between-authors-and-endorsers","title":"Setting up Connections between Authors and Endorsers","text":"

Since endorsement involves message exchange between two agents, these agents must establish and configure a connection before any endorsements can be provided or requested.

Once the connection is established and active, the \"role\" (either Author or Endorser) is attached to the connection using the /transactions/{conn_id}/set-endorser-role endpoint. Authors must additionally configure the DID of the Endorser, as this is required when the Author signs the transaction (prior to sending it to the Endorser for endorsement); this is done using the /transactions/{conn_id}/set-endorser-info endpoint.
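
As an illustration, a controller for the Author agent might drive these two endpoints as below; the query parameter names (transaction_my_job, endorser_did) are assumptions to verify against your agent's OpenAPI page:

import requests\n\nADMIN = \"http://localhost:8031\"  # assumed Author agent Admin API\nconn_id = \"REPLACE-WITH-ENDORSER-CONNECTION-ID\"\n\n# Attach the Author role to our side of the connection.\nrequests.post(\n    f\"{ADMIN}/transactions/{conn_id}/set-endorser-role\",\n    params={\"transaction_my_job\": \"TRANSACTION_AUTHOR\"},\n).raise_for_status()\n\n# Authors must also record the Endorser's DID for transaction signing.\nrequests.post(\n    f\"{ADMIN}/transactions/{conn_id}/set-endorser-info\",\n    params={\"endorser_did\": \"V4SGRU86Z58d6TV7PBUe6f\"},  # example DID\n).raise_for_status()\n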

"},{"location":"features/Endorser/#requesting-transaction-endorsement","title":"Requesting Transaction Endorsement","text":"

Transaction Endorsement is built into the protocols for Schema, Credential Definition and Revocation. When executing one of the endpoints that will trigger a ledger write, an endorsement protocol can be explicitly requested by specifying the connection_id (of the Endorser connection) and create_transaction_for_endorser.

(Note that endorsement requests can be automated, see the section on \"Configuring ACA-Py\" below.)

If transaction endorsement is requested, then ACA-Py will create a transaction record (this will be returned by the endpoint, rather than the Schema, Cred Def, etc) and the following endpoints must be invoked:

| Protocol Step | Author | Endorser |
| --- | --- | --- |
| Request Endorsement | /transactions/create-request | |
| Endorse Transaction | | /transactions/{tran_id}/endorse |
| Write Transaction | /transactions/{tran_id}/write | |
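
A sketch of the Author's side of this flow; the tran_id query parameter for create-request is an assumption to verify against the OpenAPI definition:

import requests\n\nADMIN = \"http://localhost:8031\"  # assumed Author agent Admin API\n\n# Transaction record id returned by the ledger-write endpoint when\n# create_transaction_for_endorser was requested.\ntran_id = \"REPLACE-WITH-TRANSACTION-ID\"\n\n# 1. Author asks the Endorser to endorse the transaction.\nrequests.post(\n    f\"{ADMIN}/transactions/create-request\",\n    params={\"tran_id\": tran_id},\n).raise_for_status()\n\n# 2. (On the Endorser's agent) POST /transactions/{tran_id}/endorse\n\n# 3. Author writes the endorsed transaction to the ledger.\nrequests.post(f\"{ADMIN}/transactions/{tran_id}/write\").raise_for_status()\n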

Additional endpoints allow the Endorser to reject the endorsement request, or for the Author to re-submit or cancel a request.

Webhooks will be triggered to notify each ACA-Py agent of any transaction requests, endorsements, etc., allowing the controller to react to the event; alternatively, the process can be automated via command-line parameters (see below).

"},{"location":"features/Endorser/#configuring-aca-py-for-auto-or-manual-endorsement","title":"Configuring ACA-Py for Auto or Manual Endorsement","text":"

The following start-up parameters are supported by ACA-Py:

Endorsement:\n  --endorser-protocol-role <endorser-role>\n                        Specify the role ('author' or 'endorser') which this agent will participate. Authors will request transaction endorsement from an Endorser. Endorsers will endorse transactions from\n                        Authors, and may write their own transactions to the ledger. If no role (or 'none') is specified then the endorsement protocol will not be used and this agent will write transactions to\n                        the ledger directly. [env var: ACAPY_ENDORSER_ROLE]\n  --endorser-public-did <endorser-public-did>\n                        For transaction Authors, specify the public DID of the Endorser agent who will be endorsing transactions. Note this requires that the connection be made using the Endorser's public\n                        DID. [env var: ACAPY_ENDORSER_PUBLIC_DID]\n  --endorser-alias <endorser-alias>\n                        For transaction Authors, specify the alias of the Endorser connection that will be used to endorse transactions. [env var: ACAPY_ENDORSER_ALIAS]\n  --auto-request-endorsement\n                        For Authors, specify whether to automatically request endorsement for all transactions. (If not specified, the controller must invoke the request endorse operation for each\n                        transaction.) [env var: ACAPY_AUTO_REQUEST_ENDORSEMENT]\n  --auto-endorse-transactions\n                        For Endorsers, specify whether to automatically endorse any received endorsement requests. (If not specified, the controller must invoke the endorsement operation for each transaction.)\n                        [env var: ACAPY_AUTO_ENDORSE_TRANSACTIONS]\n  --auto-write-transactions\n                        For Authors, specify whether to automatically write any endorsed transactions. (If not specified, the controller must invoke the write transaction operation for each transaction.) [env\n                        var: ACAPY_AUTO_WRITE_TRANSACTIONS]\n  --auto-create-revocation-transactions\n                        For Authors, specify whether to automatically create transactions for a cred def's revocation registry. (If not specified, the controller must invoke the endpoints required to create\n                        the revocation registry and assign to the cred def.) [env var: ACAPY_CREATE_REVOCATION_TRANSACTIONS]\n  --auto-promote-author-did\n                        For Authors, specify whether to automatically promote a DID to the wallet public DID after writing to the ledger. [env var: ACAPY_AUTO_PROMOTE_AUTHOR_DID]\n
"},{"location":"features/Endorser/#how-aca-py-handles-endorsements","title":"How Aca-py Handles Endorsements","text":"

Internally, the Endorsement functionality is implemented as a protocol, and is implemented consistently with other protocols:

The Endorser makes use of the Event Bus (links to the PR which links to a hackmd doc) to notify other protocols of any Endorser events of interest. For example, after a Credential Definition endorsement is received, the TransactionManager writes the endorsed transaction to the ledger and uses the Event Bus to notify the Credential Definition manager that it can do any required post-processing (such as writing the cred def record to the wallet, initiating the revocation registry, etc.).

The overall architecture can be illustrated as:

"},{"location":"features/Endorser/#create-credential-definition-and-revocation-registry","title":"Create Credential Definition and Revocation Registry","text":"

An example of an Endorser flow is as follows, showing how a credential definition endorsement is received and processed, and optionally kicks off the revocation registry process:

You can see that there is a standard endorser flow happening each time there is a ledger write (illustrated in the \"Endorser\" process).

At the end of each endorse sequence, the TransactionManager sends a notification via the EventBus so that any dependent processing can continue. Each Router is responsible for listening and responding to these notifications if necessary.

For example:

Using the EventBus decouples the event sequence. Any functions triggered by an event notification are typically also available directly via Admin endpoints.

"},{"location":"features/Endorser/#create-did-and-promote-to-public","title":"Create DID and Promote to Public","text":"

... and an example of creating a DID and promoting it to public (and creating an ATTRIB for the endpoint):

You can see the same endorsement processes in this sequence.

Once the DID is written, the DID can (optionally) be promoted to the public DID, which will also invoke an ATTRIB transaction to write the endpoint.

"},{"location":"features/JsonLdCredentials/","title":"JSON-LD Credentials in ACA-Py","text":"

By design, Hyperledger Aries is credential format agnostic. This means you can use it for any credential format, as long as an RFC is defined for the specific format. ACA-Py currently supports two types of credentials: Indy and JSON-LD credentials. This document describes how to use the latter by making use of W3C Verifiable Credentials using Linked Data Proofs.

"},{"location":"features/JsonLdCredentials/#table-of-contents","title":"Table of Contents","text":""},{"location":"features/JsonLdCredentials/#general-concept","title":"General Concept","text":"

The rest of this guide assumes some basic understanding of W3C Verifiable Credentials, JSON-LD and Linked Data Proofs. If you're not familiar with some of these concepts, the following resources can help you get started:

"},{"location":"features/JsonLdCredentials/#bbs","title":"BBS+","text":"

BBS+ credentials offer a lot of privacy-preserving features over non-ZKP credentials. Therefore, we recommend always using BBS+ credentials over non-ZKP credentials. To get started with BBS+ credentials, it is recommended to at least read RFC 0646: W3C Credential Exchange using BBS+ Signatures for a general overview.

Some other resources that can help you get started with BBS+ credentials:

"},{"location":"features/JsonLdCredentials/#preparing-to-issue-a-credential","title":"Preparing to Issue a Credential","text":"

Contrary to Indy credentials, JSON-LD credentials do not need a schema or credential definition to issue credentials. Everything required to issue the credential is embedded into the credential itself using Linked Data Contexts.

"},{"location":"features/JsonLdCredentials/#json-ld-context","title":"JSON-LD Context","text":"

It is required that every property key in the document can be mapped to an IRI. This means the property key must either be an IRI by default, or have the shorthand property mapped in the @context of the document. If you have properties that are not mapped to IRIs, the Issue Credential API will throw the following error:

<x> attributes dropped. Provide definitions in context to correct. [<missing-properties>]

For credentials the https://www.w3.org/2018/credentials/v1 context MUST always be the first context. In addition, when issuing BBS+ credentials the https://w3id.org/security/bbs/v1 URL MUST be present in the context. For convenience this URL will be automatically added to the @context of the credential if not present.

{\n  \"@context\": [\n    \"https://www.w3.org/2018/credentials/v1\",\n    \"https://other-contexts.com\"\n  ]\n}\n
"},{"location":"features/JsonLdCredentials/#writing-json-ld-contexts","title":"Writing JSON-LD Contexts","text":"

Writing JSON-LD contexts can be a daunting task and is out of scope of this guide. Generally you should try to make use of already existing vocabularies. Some examples are the vocabularies defined in the W3C Credentials Community Group:

Verifiable credentials have not been around that long, so there aren't that many vocabularies ready to use. If you can't use one of the existing vocabularies, it is still beneficial to lean on already-defined lower-level contexts. http://schema.org has a large registry of definitions that can be used to build new contexts. The example vocabularies linked above all make use of types from http://schema.org.

For the remainder of this guide, we will be using the example UniversityDegreeCredential type and https://www.w3.org/2018/credentials/examples/v1 context from the Verifiable Credential Data Model. You should not use this for production use cases.

"},{"location":"features/JsonLdCredentials/#signature-suite","title":"Signature Suite","text":"

Before issuing a credential you must determine a signature suite to use. ACA-Py currently supports two signature suites for issuing credentials:

Generally you should always use BbsBlsSignature2020, as it allows the holder to derive a new credential during proof presentation, meaning it doesn't have to disclose all fields and doesn't have to reveal the signature.

"},{"location":"features/JsonLdCredentials/#did-method","title":"Did Method","text":"

Besides the JSON-LD context, we need a did to use for issuing the credential. ACA-Py currently supports two did methods for issuing credentials:

"},{"location":"features/JsonLdCredentials/#didsov","title":"did:sov","text":"

When using did:sov you need to make sure to use a public did so other agents can resolve the did. It is also important that the other agent is using the same indy ledger for resolving the did. You can get the public did using the /wallet/did/public endpoint. For backwards compatibility the did is returned without the did:sov prefix. When using the did for issuance, make sure to prepend this prefix to the did (so DViYrCMPWfuLiY7LLs8giB becomes did:sov:DViYrCMPWfuLiY7LLs8giB).

"},{"location":"features/JsonLdCredentials/#didkey","title":"did:key","text":"

A did:key did is not anchored to a ledger, but embeds the key directly in the identifier part of the did. See the did:key Method Specification for more information.

You can create a did:key using the /wallet/did/create endpoint with the following body. Use ed25519 for Ed25519Signature2018, bls12381g2 for BbsBlsSignature2020.

{\n  \"method\": \"key\",\n  \"options\": {\n    \"key_type\": \"bls12381g2\" // or ed25519\n  }\n}\n

The above call will return a did that looks something like this: did:key:zUC7FsmhhifDTuYXdwYES2UpCpWwYieJRapC6oEWqyt5KfJ3ztfLzYnbWjuXQ5drYaKaho3FjxrfDB81gtAJKjbM4yAmBuNoj3YKDXqW151KkkYarpEoEVWMMcN5zPfjCrQ8Saj

"},{"location":"features/JsonLdCredentials/#issuing-credentials","title":"Issuing Credentials","text":"

Issuing JSON-LD credentials is only possible with the issue credential v2 protocol (/issue-credential-2.0).

The format used for exchanging JSON-LD credentials is defined in RFC 0593: JSON-LD Credential Attachment format. The API in ACA-Py exactly matches the formats as described in this RFC, with the most important (from the ACA-Py API perspective) being aries/ld-proof-vc-detail@v1.0. Read the RFC to see the exact properties required to construct a valid Linked Data Proof VC Detail.

All endpoints in the API use the aries/ld-proof-vc-detail@v1.0 format. We'll use /issue-credential-2.0/send as an example, but it works the same for the other endpoints. In contrast to issuing indy credentials, JSON-LD credentials do not require a credential preview. All properties should be directly embedded in the credential.

The detail should be included under the filter.ld_proof property. To issue a credential call the /issue-credential-2.0/send endpoint, with the example body below and the connection_id and issuer keys replaced. The value of issuer should be the did that you created in the Did Method paragraph above.

If you don't have auto-respond-credential-offer and auto-store-credential enabled in the ACA-Py config, you will need to call /issue-credential-2.0/records/{cred_ex_id}/send-request and /issue-credential-2.0/records/{cred_ex_id}/store to finalize the credential issuance.

See the example body
{\n  \"connection_id\": \"ddc23de9-359f-465c-b66e-f7c5a0cc9a57\",\n  \"filter\": {\n    \"ld_proof\": {\n      \"credential\": {\n        \"@context\": [\n          \"https://www.w3.org/2018/credentials/v1\",\n          \"https://www.w3.org/2018/credentials/examples/v1\"\n        ],\n        \"type\": [\"VerifiableCredential\", \"UniversityDegreeCredential\"],\n        \"issuer\": \"did:key:zUC7FsmhhifDTuYXdwYES2UpCpWwYieJRapC6oEWqyt5KfJ3ztfLzYnbWjuXQ5drYaKaho3FjxrfDB81gtAJKjbM4yAmBuNoj3YKDXqW151KkkYarpEoEVWMMcN5zPfjCrQ8Saj\",\n        \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n        \"credentialSubject\": {\n          \"degree\": {\n            \"type\": \"BachelorDegree\",\n            \"name\": \"Bachelor of Science and Arts\"\n          },\n          \"college\": \"Faber College\"\n        }\n      },\n      \"options\": {\n        \"proofType\": \"BbsBlsSignature2020\"\n      }\n    }\n  }\n}\n
"},{"location":"features/JsonLdCredentials/#retrieving-issued-credentials","title":"Retrieving Issued Credentials","text":"

After issuance, the credential should be stored inside the wallet. Because the structure of JSON-LD credentials is so different from that of indy credentials, a new endpoint has been added to retrieve W3C credentials.

Call the /credentials/w3c endpoint to retrieve all JSON-LD credentials in your wallet. See the detail below for an example response based on the issued credential from the Issuing Credentials paragraph above.

See the example response
{\n  \"results\": [\n    {\n      \"contexts\": [\n        \"https://www.w3.org/2018/credentials/examples/v1\",\n        \"https://www.w3.org/2018/credentials/v1\",\n        \"https://w3id.org/security/bbs/v1\"\n      ],\n      \"types\": [\"UniversityDegreeCredential\", \"VerifiableCredential\"],\n      \"schema_ids\": [],\n      \"issuer_id\": \"did:key:zUC7FsmhhifDTuYXdwYES2UpCpWwYieJRapC6oEWqyt5KfJ3ztfLzYnbWjuXQ5drYaKaho3FjxrfDB81gtAJKjbM4yAmBuNoj3YKDXqW151KkkYarpEoEVWMMcN5zPfjCrQ8Saj\",\n      \"subject_ids\": [],\n      \"proof_types\": [\"BbsBlsSignature2020\"],\n      \"cred_value\": {\n        \"@context\": [\n          \"https://www.w3.org/2018/credentials/v1\",\n          \"https://www.w3.org/2018/credentials/examples/v1\",\n          \"https://w3id.org/security/bbs/v1\"\n        ],\n        \"type\": [\"VerifiableCredential\", \"UniversityDegreeCredential\"],\n        \"issuer\": \"did:key:zUC7FsmhhifDTuYXdwYES2UpCpWwYieJRapC6oEWqyt5KfJ3ztfLzYnbWjuXQ5drYaKaho3FjxrfDB81gtAJKjbM4yAmBuNoj3YKDXqW151KkkYarpEoEVWMMcN5zPfjCrQ8Saj\",\n        \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n        \"credentialSubject\": {\n          \"degree\": {\n            \"type\": \"BachelorDegree\",\n            \"name\": \"Bachelor of Science and Arts\"\n          },\n          \"college\": \"Faber College\"\n        },\n        \"proof\": {\n          \"type\": \"BbsBlsSignature2020\",\n          \"proofPurpose\": \"assertionMethod\",\n          \"verificationMethod\": \"did:key:zUC7FsmhhifDTuYXdwYES2UpCpWwYieJRapC6oEWqyt5KfJ3ztfLzYnbWjuXQ5drYaKaho3FjxrfDB81gtAJKjbM4yAmBuNoj3YKDXqW151KkkYarpEoEVWMMcN5zPfjCrQ8Saj#zUC7FsmhhifDTuYXdwYES2UpCpWwYieJRapC6oEWqyt5KfJ3ztfLzYnbWjuXQ5drYaKaho3FjxrfDB81gtAJKjbM4yAmBuNoj3YKDXqW151KkkYarpEoEVWMMcN5zPfjCrQ8Saj\",\n          \"created\": \"2021-05-03T12:31:28.561945\",\n          \"proofValue\": \"iUFtRGdLLCWxKx8VD3oiFBoRMUFKhSitTzMsfImXm6OF0d8il+Z40aLz8S7m8EcXPQhRjcWWL9jkfcf1SDifD4CvxVg69NvB7hZyIIz9hwAyi3LmTm0ez4NDRCKyieBuzqKbfM2eACWn/ilhOJBm6w==\"\n        }\n      },\n      \"cred_tags\": {},\n      \"record_id\": \"541ddbce5760497d98e68917be8c05bd\"\n    }\n  ]\n}\n
"},{"location":"features/JsonLdCredentials/#present-proof","title":"Present Proof","text":"

\u26a0\ufe0f TODO: https://github.com/hyperledger/aries-cloudagent-python/pull/1125

"},{"location":"features/JsonLdCredentials/#vc-api","title":"VC-API","text":"

In order to support these functions outside of the respective DIDComm protocols, a set of endpoints conforming to the vc-api specification are available. These endpoints should be used by a controller when building an identity platform.

These endpoints include:

To learn more about using these endpoints, please refer to the available postman collection.

"},{"location":"features/Mediation/","title":"Mediation docs","text":""},{"location":"features/Mediation/#concepts","title":"Concepts","text":""},{"location":"features/Mediation/#command-line-arguments","title":"Command Line Arguments","text":"

The minimum set of arguments required to enable mediation are:

aca-py start ... \\\n    --open-mediation\n

To automate the mediation process on startup, additionally specify the following argument on the mediated agent (not the mediator):

aca-py start ... \\\n    --mediator-invitation \"<a multi-use invitation url from the mediator>\"\n

If a default mediator has already been established, then the --default-mediator-id argument can be used instead of the --mediator-invitation.

"},{"location":"features/Mediation/#didcomm-messages","title":"DIDComm Messages","text":"

See Aries RFC 0211: Coordinate Mediation Protocol.

"},{"location":"features/Mediation/#admin-api","title":"Admin API","text":""},{"location":"features/Mediation/#mediator-message-flow-overview","title":"Mediator Message Flow Overview","text":""},{"location":"features/Mediation/#using-a-mediator","title":"Using a Mediator","text":"

After establishing a connection with a mediator that has granted mediation, you can use that mediator's id for future DIDComm connections. When creating, receiving, or accepting an invitation intended to be mediated, you provide mediation_id with the desired mediator id. If using a single mediator for all future connections, you can set a default mediation id. If no mediation_id is provided, the default mediation id will be used instead.

"},{"location":"features/Multicredentials/","title":"Multi-Credentials","text":"

It is a known fact that multiple AnonCreds credentials can be combined to present a presentation proof with an \"and\" logical operator: for instance, a verifier can ask for the \"name\" claim from an eID and the \"address\" claim from a bank statement to have a single proof that is either valid or invalid. With the Present Proof Protocol v2, it is possible to have \"and\" and \"or\" logical operators for AnonCreds and/or W3C Verifiable Credentials.

With the Present Proof Protocol v2, verifiers can ask for a combination of credentials as proof. For instance, a Verifier can ask for a claim from an AnonCreds credential and a verifiable presentation from a W3C Verifiable Credential, which opens the door to Aries Cloud Agent Python being used for rather complex presentation proof requests that wouldn't be possible without support for both AnonCreds and W3C Verifiable Credentials.

Moreover, it is possible to make similar presentation proof requests using the or logical operator. For instance, a verifier can ask for either an eID in AnonCreds format or an eID in W3C Verifiable Credential format. This has the potential to solve the interoperability problem of different credential formats and ecosystems from a user point of view, by shifting the requirement of holding/accepting different credential formats from identity holders to verifiers. Here again, using Aries Cloud Agent Python as the underlying verifier agent can tackle such complex presentation proof requests, since the agent is capable of verifying both credential formats and proof types.

In the future, it may even be possible to put an mDoc as an attachment with an and or or logical operation, along with AnonCreds and/or W3C Verifiable Credentials. For this to happen, ACA-Py either needs the capability to validate mDocs internally, or to connect to third-party endpoints that validate them and return a response.

"},{"location":"features/Multiledger/","title":"Multi-ledger in ACA-Py","text":"

ACA-Py can use multiple Indy ledgers (both IndySdk and IndyVdr) for resolving a DID. For read requests, multiple ledgers are checked in parallel, dynamically, according to the logic detailed in Read Requests Ledger Selection. For write requests, dynamic allocation of the write_ledger is supported. Configurable write ledgers can be assigned using is_write in the configuration or using any of the --genesis-url, --genesis-file, and --genesis-transactions startup (ACA-Py) arguments. If no write ledger is assigned, a ConfigError is raised.

More background information including problem statement, design (algorithm) and more can be found here.

"},{"location":"features/Multiledger/#table-of-contents","title":"Table of Contents","text":""},{"location":"features/Multiledger/#usage","title":"Usage","text":"

Multi-ledger is disabled by default. You can enable support for multiple ledgers using the --genesis-transactions-list startup parameter. This parameter accepts a string which is the path to the YAML configuration file. For example:

--genesis-transactions-list ./aries_cloudagent/config/multi_ledger_config.yml

If --genesis-transactions-list is specified, then --genesis-url, --genesis-file, and --genesis-transactions should not be specified.

"},{"location":"features/Multiledger/#example-config-file","title":"Example config file","text":"
- id: localVON\n  is_production: false\n  genesis_url: \"http://host.docker.internal:9000/genesis\"\n- id: bcovrinTest\n  is_production: true\n  is_write: true\n  genesis_url: \"http://test.bcovrin.vonx.io/genesis\"\n  endorser_did: \"9QPa6tHvBHttLg6U4xvviv\"\n  endorser_alias: \"endorser_test\"\n- id: greenlightDev\n  is_production: true\n  is_write: true\n  genesis_url: \"http://test.bcovrin.vonx.io/genesis\"\n

Note: the is_write property means that the ledger is write configurable. With reference to the above config example, both the bcovrinTest and greenlightDev ledgers are write configurable (greenlightDev is no longer available, so in the above it also points to BCovrin Test). By default, on startup bcovrinTest will be the write ledger, as it is the topmost write configurable production ledger; see the selection rule under Write Requests below for more details. Using the PUT /ledger/{ledger_id}/set-write-ledger endpoint, either greenlightDev or bcovrinTest can be set as the write ledger.

Note 2: The greenlightDev ledger is no longer available, so both ledger entries in the example above and below intentionally point to the same ledger URL.

- id: localVON\n  is_production: false\n  is_write: true\n  genesis_url: \"http://host.docker.internal:9000/genesis\"\n- id: bcovrinTest\n  is_production: true\n  genesis_url: \"http://test.bcovrin.vonx.io/genesis\"\n- id: greenlightDev\n  is_production: true\n  genesis_url: \"http://test.bcovrin.vonx.io/genesis\"\n

Note: With regard to the example config above, localVON will be the write ledger: since there are no write configurable production ledgers, the topmost write configurable non-production ledger is chosen.

"},{"location":"features/Multiledger/#config-properties","title":"Config properties","text":"

For each ledger, the required properties are as follows:

For connecting to a ledger, one of the following needs to be specified:

Optional properties:

Note: Both endorser_did and endorser_alias are part of the endorser info. Whenever a write ledger is selected using PUT /ledger/{ledger_id}/set-write-ledger, the endorser info associated with that ledger in the config updates the endorser.endorser_public_did and endorser.endorser_alias profile settings, respectively.

"},{"location":"features/Multiledger/#multi-ledger-admin-api","title":"Multi-ledger Admin API","text":"

Multi-ledger related actions are grouped under the ledger topic in the SwaggerUI.

"},{"location":"features/Multiledger/#ledger-selection","title":"Ledger Selection","text":""},{"location":"features/Multiledger/#read-requests","title":"Read Requests","text":"

The following process is executed for these functions in ACA-Py:

  1. get_schema
  2. get_credential_definition
  3. get_revoc_reg_def
  4. get_revoc_reg_entry
  5. get_key_for_did
  6. get_all_endpoints_for_did
  7. get_endpoint_for_did
  8. get_nym_role
  9. get_revoc_reg_delta

If multiple ledgers are configured, the IndyLedgerRequestsExecutor service extracts the DID from the record identifier and executes the check below; otherwise, it returns the BaseLedger instance.
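
As an illustration of the parallel check, here is a rough sketch using asyncio. The helper names are hypothetical; the real IndyLedgerRequestsExecutor logic is more involved:

```python
import asyncio

async def did_on_ledger(ledger, did):
    """Hypothetical helper: return the ledger if it can resolve the DID, else None."""
    try:
        # Any of the read functions listed above could be used as the probe.
        key = await ledger.get_key_for_did(did)
        return ledger if key else None
    except Exception:
        return None

async def select_read_ledger(ledgers, did):
    # Probe all configured ledgers concurrently; keep the first (topmost) hit.
    results = await asyncio.gather(*(did_on_ledger(l, did) for l in ledgers))
    return next((ledger for ledger in results if ledger), None)
```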

"},{"location":"features/Multiledger/#for-checking-ledger-in-parallel","title":"For checking ledger in parallel","text":""},{"location":"features/Multiledger/#write-requests","title":"Write Requests","text":"

On startup, the first configured applicable ledger is assigned as the write_ledger (BaseLedger); the selection depends on the order (top-down) and on whether the ledger is production or non_production. For instance, considering this example configuration, the bcovrinTest ledger will be set as write_ledger as it is the topmost production ledger. If no production ledgers are included in the configuration, then the topmost non_production ledger is selected.
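
That selection rule can be sketched in a few lines, operating on the parsed YAML entries from the config file (an illustration, not the actual ACA-Py code):

```python
def select_write_ledger(ledger_configs):
    """Return the topmost write-configurable production ledger, else the
    topmost write-configurable non-production ledger, else None."""
    write_configs = [cfg for cfg in ledger_configs if cfg.get("is_write")]
    for cfg in write_configs:
        if cfg.get("is_production"):
            return cfg
    return write_configs[0] if write_configs else None  # None -> ConfigError in ACA-Py
```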

"},{"location":"features/Multiledger/#a-special-warning-for-taa-acceptance","title":"A Special Warning for TAA Acceptance","text":"

When you run in multi-ledger mode, ACA-Py will use the pool-name (or id) specified in the ledger configuration file for each ledger.

(When running in single-ledger mode, ACA-Py uses default as the ledger name.)

If you are running against a ledger in write mode, and the ledger requires you to accept a Transaction Author Agreement (TAA), ACA-Py stores the TAA acceptance status in the wallet in a non-secrets record, using the ledger's pool_name as a key.

This means that if you are upgrading from single-ledger to multi-ledger mode, you will need to either:

or:

Once you re-start ACA-Py, you can check the GET /ledger/taa endpoint to verify your TAA acceptance status.
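
For example (a minimal sketch; the admin URL and API key are placeholders, and the exact response fields may vary by release):

```python
import requests

resp = requests.get(
    "http://localhost:8031/ledger/taa",      # hypothetical admin URL
    headers={"x-api-key": "admin-api-key"},  # only needed if admin auth is enabled
)
result = resp.json().get("result", {})
print("TAA required:", result.get("taa_required"))
print("TAA accepted:", result.get("taa_accepted"))
```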

"},{"location":"features/Multiledger/#impact-on-other-aca-py-function","title":"Impact on other ACA-Py function","text":"

There should be no impact/change in functionality to any ACA-Py protocols.

IndySdkLedger was refactored by replacing the wallet: IndySdkWallet instance variable with profile: Profile; accordingly, ./aries_cloudagent/indy/credex/verifier, ./aries_cloudagent/indy/models/pres_preview, ./aries_cloudagent/indy/sdk/profile.py, ./aries_cloudagent/indy/sdk/verifier, and ./aries_cloudagent/indy/verifier were also updated.

Added build_and_return_get_nym_request and submit_get_nym_request helper functions to IndySdkLedger and IndyVdrLedger.

Best practices/feedback emerging from the Askar session deadlock issue and the endorser refactoring PR were also addressed here by not leaving sessions open unnecessarily and by changing context.session to context.profile.session, etc.

These changes are made here:

"},{"location":"features/Multiledger/#known-issues","title":"Known Issues","text":""},{"location":"features/Multitenancy/","title":"Multi-tenancy in ACA-Py","text":"

Most deployments of ACA-Py use a single wallet for all operations. This means all connections, credentials, keys, and everything else is stored in the same wallet and shared between all controllers of the agent. Multi-tenancy in ACA-Py allows multiple tenants to use the same ACA-Py instance with a different context. All tenants get their own encrypted wallet that only holds their own data.

This allows ACA-Py to be used for a wider range of use cases. One use case could be a company that creates a wallet for each department. Each department has full control over the actions they perform while having a shared instance for easy maintenance. Another use case could be an Issuer-Hosted Custodial Agent, for cases where it is required to host the agent on behalf of someone else.

"},{"location":"features/Multitenancy/#table-of-contents","title":"Table of Contents","text":""},{"location":"features/Multitenancy/#general-concept","title":"General Concept","text":"

When multi-tenancy is enabled in ACA-Py there is still a single agent running; however, some of the resources are shared between the tenants of the agent. Each tenant has their own wallet, with their own DIDs, connections, and credentials. Transports and most of the settings are still shared between tenants. Each wallet uses the same endpoint, so to the outside world it is not obvious that multiple tenants are using the same agent.

"},{"location":"features/Multitenancy/#base-and-sub-wallets","title":"Base and Sub Wallets","text":"

Multi-tenancy in ACA-Py makes a distinction between a base wallet and sub wallets.

The wallets used by the different tenants are called sub wallets. A sub wallet is almost identical to a wallet when multi-tenancy is disabled. This means that you can do everything with it that a single-tenant ACA-Py instance can also do.

The base wallet, however, takes on a different role and has limited functionality. Its main function is to manage the sub wallets, which can be done using the Multi-tenant Admin API. It stores all settings and information about the different sub wallets and will route incoming messages to the corresponding sub wallets. See Message Routing for more details. All other features are disabled for the base wallet. This means it cannot issue credentials, present proof, or do any of the other actions sub wallets can do. This is to keep a clear hierarchical difference between base and sub wallets. For this reason, the base wallet should generally not be provisioned using the --wallet-seed argument: not only is that unnecessary for sub wallet management operations, it would also require the corresponding DID to be correctly registered on the ledger for the service to start up correctly.

"},{"location":"features/Multitenancy/#usage","title":"Usage","text":"

Multi-tenancy is disabled by default. You can enable support for multiple wallets using the --multitenant startup parameter. To also be able to manage wallets for the tenants, the multi-tenant admin API can be enabled using the --multitenant-admin startup parameter. See Multi-tenant Admin API below for more info on the admin API.

The --jwt-secret startup parameter is required when multi-tenancy is enabled. This is used for JWT creation and verification. See Authentication below for more info.

Example:

# This enables multi-tenancy in ACA-Py\nmultitenant: true\n\n# This enables the admin API for multi-tenancy. More information below\nmultitenant-admin: true\n\n# This sets the secret used for JWT creation/verification for sub wallets\njwt-secret: Something very secret\n
"},{"location":"features/Multitenancy/#multi-tenant-admin-api","title":"Multi-tenant Admin API","text":"

The multi-tenant admin API allows you to manage wallets in ACA-Py. Only the base wallet can manage wallets, so you can't, for example, create a wallet in the context of a sub wallet (using the Authorization header as specified in Authentication).

Multi-tenancy related actions are grouped under the /multitenancy path or the multitenancy topic in the SwaggerUI. As mentioned above, the multi-tenant admin API is disabled by default, even when multi-tenancy is enabled. This is to allow for more flexible agent configuration (e.g. horizontal scaling where only a single instance exposes the admin API). To enable the multi-tenant admin API, the --multitenant-admin startup parameter can be used.

See the SwaggerUI for the exact API definition for multi-tenancy.

"},{"location":"features/Multitenancy/#managed-vs-unmanaged-mode","title":"Managed vs Unmanaged Mode","text":"

Multi-tenancy in ACA-Py is designed with two key management modes in mind.

"},{"location":"features/Multitenancy/#managed-mode","title":"Managed Mode","text":"

In managed mode, ACA-Py will manage the key for the wallet. This is the easiest configuration as it allows ACA-Py to fully control the wallet. When a message is received from another agent it can immediately unlock the wallet and process the message. The wallet key is stored encrypted in the base wallet.

"},{"location":"features/Multitenancy/#unmanaged-mode","title":"Unmanaged Mode","text":"

In unmanaged mode, ACA-Py won't manage the key for the wallet. The key is not stored in the base wallet, which means the key to unlock the wallet needs to be provided whenever the wallet is used. When a message from another agent is received, ACA-Py cannot immediately unlock the wallet and process the message. See Authentication for more info.

It is important to note that unmanaged mode doesn't provide much more security than managed mode. The key is still processed by the agent, and therefore trust is required. It could, however, provide some benefit in the case a multi-tenant agent is compromised, as the agent doesn't store the key to unlock the wallet.

Although support for unmanaged mode is mostly in place, receiving messages from other agents in unmanaged mode is not supported yet. This means unmanaged mode cannot be used yet.

"},{"location":"features/Multitenancy/#mode-usage","title":"Mode Usage","text":"

The mode used can be specified when creating a wallet using the key_management_mode parameter.

// POST /multitenancy/wallet\n{\n  // ... other params ...\n  \"key_management_mode\": \"managed\" // or \"unmanaged\"\n}\n
"},{"location":"features/Multitenancy/#message-routing","title":"Message Routing","text":"

In multi-tenant mode, when ACA-Py receives a message from another agent, it will need to determine which tenant to route the message to. Hyperledger Aries defines two types of routing methods: mediation and relaying.

See the Mediators and Relays RFC for an in-depth description of the difference between the two concepts.

"},{"location":"features/Multitenancy/#relaying","title":"Relaying","text":"

In multi-tenant mode, ACA-Py still exposes a single endpoint for each transport. This means it can't route messages to sub wallets based on the endpoint. To resolve this, the base wallet acts as a relay for all sub wallets. As can be seen in the architecture diagram above, all messages go through the base wallet. Whenever a sub wallet creates a new key or connection, it will be registered at the base wallet. This allows the base wallet to look at the recipient keys for a message and determine which wallet it needs to route to.

"},{"location":"features/Multitenancy/#mediation","title":"Mediation","text":"

ACA-Py allows messages to be routed through a mediator, and multi-tenancy can be used in combination with external mediators. The following scenarios are possible:

  1. The base wallet has a default mediator set that will be used by sub wallets.
     - Use --mediator-invitation to connect to the mediator, request mediation, and set it as the default mediator.
     - Use default-mediator-id if you're already connected to the mediator and mediation is granted (e.g. after restart).
     - When a sub wallet creates a connection or key, it will be registered at the mediator via the base wallet connection. The base wallet will still act as a relay and route the messages to the correct sub wallets.
     - Pro: Not every wallet needs to create a connection with the mediator.
     - Con: Sub wallets have no control over the mediator.
  2. A sub wallet creates a connection with the mediator and requests mediation.
     - Use mediation as you would in a non-multi-tenant agent; however, the base wallet will still act as a relay.
     - You can set the default mediator to use for connections (using the mediation API).
     - Pro: Sub wallets have control over the mediator.
     - Con: Every wallet needs to create a new connection with the mediator and request mediation.

The main tradeoff between option 1. and 2. is redundancy and control. Option 1. doesn't require every sub wallet to create a new connection with the mediator and request mediation. When all sub wallets are going to use the same mediator, this can be a huge benefit. Option 2. gives more control over the mediator being used. This could be useful if e.g. all wallets use a different mediator.

A combination of option 1. and 2. is also possible. In this case, two mediators will be used and the sub wallet mediator will forward to the base wallet mediator, which will, in turn, forward to the ACA-Py instance.

+---------------------+      +----------------------+      +--------------------+\n| Sub wallet mediator | ---> | Base wallet mediator | ---> | Multi-tenant agent |\n+---------------------+      +----------------------+      +--------------------+\n
"},{"location":"features/Multitenancy/#webhooks","title":"Webhooks","text":""},{"location":"features/Multitenancy/#webhook-urls","title":"Webhook URLs","text":"

ACA-Py makes use of webhook events to call back to the controller. Multiple webhook targets can be specified, however, in multi-tenant mode, it may be desirable to specify different webhook targets per wallet.

When creating a wallet, wallet_dispatch_type can be used to specify how webhooks for the wallet should be dispatched. The options are default (dispatch to this wallet's webhook URLs), base (dispatch to the base wallet's webhook URLs), and both.

If either default or both is specified you can set the webhook URLs specific to this wallet using the wallet.webhook_urls option.

Example:

// POST /multitenancy/wallet\n{\n  // ... other params ...\n  \"wallet_dispatch_type\": \"default\",\n  \"wallet_webhook_urls\": [\n    \"https://webhook-url.com/path\",\n    \"https://another-url.com/site\"\n  ]\n}\n
"},{"location":"features/Multitenancy/#identifying-the-wallet","title":"Identifying the wallet","text":"

When the webhook URLs of the base wallet are used, or when multiple wallets specify the same webhook URL, it can be hard to identify the wallet an event belongs to. To resolve this, each webhook event includes the wallet id the event corresponds to.

For HTTP events the wallet id is included as the x-wallet-id header. For WebSockets, the wallet id is included in the enclosing JSON object.

HTTP example:

POST <webhook-url>/{topic} [headers=x-wallet-id]\n{\n    // event payload\n}\n

WebSocket example:

{\n  \"topic\": \"{topic}\",\n  \"wallet_id\": \"{wallet_id}\",\n  \"payload\": {\n    // event payload\n  }\n}\n
"},{"location":"features/Multitenancy/#authentication","title":"Authentication","text":"

When multi-tenancy is not enabled you can authenticate with the agent using the x-api-key header. As there is only a single wallet, this provides sufficient authentication and authorization.

For sub wallets, an additional authentication method is introduced using JSON Web Tokens (JWTs). A token parameter is returned after creating a wallet or calling the get token endpoint. This token must be provided for every admin API call you want to perform for the wallet using the Bearer authorization scheme.

Example

GET /connections [headers=\"Authorization: Bearer {token}\"]\n

The Authorization header is in addition to the Admin API key. So if the admin-api-key is enabled (which should be enabled in production) both the Authorization and the x-api-key headers should be provided when making calls to a sub wallet. For calls to a base wallet, only the x-api-key should be provided.
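
Putting that together, a call on behalf of a sub wallet sends both headers. A brief sketch (the token and key values are placeholders):

```python
import requests

SUB_WALLET_TOKEN = "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9..."  # placeholder JWT

resp = requests.get(
    "http://localhost:8031/connections",  # hypothetical admin URL
    headers={
        "Authorization": f"Bearer {SUB_WALLET_TOKEN}",
        "x-api-key": "admin-api-key",     # omit for deployments without admin auth
    },
)
print(resp.json())
```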

"},{"location":"features/Multitenancy/#getting-a-token","title":"Getting a token","text":"

A token can be obtained in two ways. The first method is the token parameter from the response of the create wallet (POST /multitenancy/wallet) endpoint. The second option is using the get wallet token endpoint (POST /multitenancy/wallet/{wallet_id}/token).

"},{"location":"features/Multitenancy/#method-1-register-new-tenant","title":"Method 1: Register new tenant","text":"

This is the method you use to obtain a token when you haven't already registered a tenant. In this process you first register a tenant; an object containing your tenant token, as well as other useful information like your wallet id, is then returned to you.

Example

new_tenant='{\n  \"image_url\": \"https://aries.ca/images/sample.png\",\n  \"key_management_mode\": \"managed\",\n  \"label\": \"example-label-02\",\n  \"wallet_dispatch_type\": \"default\",\n  \"wallet_key\": \"example-encryption-key-02\",\n  \"wallet_name\": \"example-name-02\",\n  \"wallet_type\": \"askar\",\n  \"wallet_webhook_urls\": [\n    \"https://example.com/webhook\"\n  ]\n}'\n
echo $new_tenant | curl -X POST \"${ACAPY_ADMIN_URL}/multitenancy/wallet\" \\\n   -H \"Content-Type: application/json\" \\\n   -H \"X-Api-Key: $ACAPY_ADMIN_URL_API_KEY\" \\\n   -d @-\n

Response

{\n  \"settings\": {\n    \"wallet.type\": \"askar\",\n    \"wallet.name\": \"example-name-02\",\n    \"wallet.webhook_urls\": [\n      \"https://example.com/webhook\"\n    ],\n    \"wallet.dispatch_type\": \"default\",\n    \"default_label\": \"example-label-02\",\n    \"image_url\": \"https://aries.ca/images/sample.png\",\n    \"wallet.id\": \"3b64ad0d-f556-4c04-92bc-cd95bfde58cd\"\n  },\n  \"key_management_mode\": \"managed\",\n  \"updated_at\": \"2022-04-01T15:12:35.474975Z\",\n  \"wallet_id\": \"3b64ad0d-f556-4c04-92bc-cd95bfde58cd\",\n  \"created_at\": \"2022-04-01T15:12:35.474975Z\",\n  \"token\": \"eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ3YWxsZXRfaWQiOiIzYjY0YWQwZC1mNTU2LTRjMDQtOTJiYy1jZDk1YmZkZTU4Y2QifQ.A4eWbSR2M1Z6mbjcSLOlciBuUejehLyytCVyeUlxI0E\"\n}\n
"},{"location":"features/Multitenancy/#method-2-get-tenant-token","title":"Method 2: Get tenant token","text":"

This method allows you to retrieve a tenant token for an already registered tenant. To retrieve a token you will need an Admin API key (if your admin is protected with one), the wallet_key, and the wallet_id of the tenant. Note that calling the get tenant token endpoint invalidates the old token. This is useful if the old token needs to be revoked, but it does mean that you can't have multiple valid authentication tokens for the same wallet; only the most recently generated token is valid.

Example

curl -X POST \"${ACAPY_ADMIN_URL}/multitenancy/wallet/{wallet_id}/token\" \\\n   -H \"Content-Type: application/json\" \\\n   -H \"X-Api-Key: $ACAPY_ADMIN_URL_API_KEY\" \\\n   -d { \"wallet_key\": \"example-encryption-key-02\" }\n

Response

{\n  \"token\": \"eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ3YWxsZXRfaWQiOiIzYjY0YWQwZC1mNTU2LTRjMDQtOTJiYy1jZDk1YmZkZTU4Y2QifQ.A4eWbSR2M1Z6mbjcSLOlciBuUejehLyytCVyeUlxI0E\"\n}\n

In unmanaged mode, the get token endpoint also requires the wallet_key parameter to be included in the request body. The wallet key will be included in the JWT so the wallet can be unlocked when making requests to the admin API.

{\n  \"wallet_id\": \"wallet_id\",\n  // \"wallet_key\" in only present in unmanaged mode\n  \"wallet_key\": \"wallet_key\"\n}\n

In unmanaged mode, sending the wallet_key to unlock the wallet in every request is not \u201csecure\u201d but keeps it simple at the moment. Eventually, the authentication method should be pluggable, and unmanaged mode would just mean that the key to unlock the wallet is not managed by ACA-Py.

"},{"location":"features/Multitenancy/#jwt-secret","title":"JWT Secret","text":"

For deterministic JWT creation and verification between restarts and multiple instances, the same JWT secret would need to be used. Therefore a --jwt-secret param is added to the ACA-Py agent that will be used for JWT creation and verification.
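
The tokens are ordinary HS256 JWTs, so, purely as an illustration, a token can be checked against the same secret using the third-party PyJWT package:

```python
import jwt  # pip install PyJWT

JWT_SECRET = "Something very secret"  # must match the --jwt-secret given to ACA-Py
token = "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9..."  # placeholder sub wallet token

claims = jwt.decode(token, JWT_SECRET, algorithms=["HS256"])
print(claims["wallet_id"])  # the sub wallet this token is scoped to
```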

"},{"location":"features/Multitenancy/#swaggerui","title":"SwaggerUI","text":"

When using the SwaggerUI you can click the icon next to each of the endpoints or the Authorize button at the top to set the correct authentication headers. Make sure to also include the Bearer part in the input field. This won't be automatically added.

"},{"location":"features/Multitenancy/#tenant-management","title":"Tenant Management","text":"

After registering a tenant, which effectively creates a subwallet, you may need to update the tenant information or delete it. The following sections describe how to accomplish both goals.

"},{"location":"features/Multitenancy/#update-a-tenant","title":"Update a tenant","text":"

The following properties can be updated for tenants of a multitenancy wallet: image_url, label, wallet_dispatch_type, and wallet_webhook_urls. To update these properties, PUT a request JSON containing the properties you wish to update, along with the updated values, to the /multitenancy/wallet/${TENANT_WALLET_ID} admin endpoint. If the Admin API endpoint is protected, also include the Admin API key in the request header.

Example

update_tenant='{\n  \"image_url\": \"https://aries.ca/images/sample-updated.png\",\n  \"label\": \"example-label-02-updated\",\n  \"wallet_webhook_urls\": [\n    \"https://example.com/webhook/updated\"\n  ]\n}'\n
echo $update_tenant | curl  -X PUT \"${ACAPY_ADMIN_URL}/multitenancy/wallet/${TENANT_WALLET_ID}\" \\\n   -H \"Content-Type: application/json\" \\\n   -H \"x-api-key: $ACAPY_ADMIN_URL_API_KEY\" \\\n   -d @-\n

Response

{\n  \"settings\": {\n    \"wallet.type\": \"askar\",\n    \"wallet.name\": \"example-name-02\",\n    \"wallet.webhook_urls\": [\n      \"https://example.com/webhook/updated\"\n    ],\n    \"wallet.dispatch_type\": \"default\",\n    \"default_label\": \"example-label-02-updated\",\n    \"image_url\": \"https://aries.ca/images/sample-updated.png\",\n    \"wallet.id\": \"3b64ad0d-f556-4c04-92bc-cd95bfde58cd\"\n  },\n  \"key_management_mode\": \"managed\",\n  \"updated_at\": \"2022-04-01T16:23:58.642004Z\",\n  \"wallet_id\": \"3b64ad0d-f556-4c04-92bc-cd95bfde58cd\",\n  \"created_at\": \"2022-04-01T15:12:35.474975Z\"\n}\n

An Admin API Key is all that is allowed to be included in a request header during an update; including the Bearer token header will result in an Unauthorized error.

"},{"location":"features/Multitenancy/#remove-a-tenant","title":"Remove a tenant","text":"

The following information is required to delete a tenant:

Example

curl -X POST \"${ACAPY_ADMIN_URL}/multitenancy/wallet/{wallet_id}/remove\" \\\n   -H \"Content-Type: application/json\" \\\n   -H \"x-api-key: $ACAPY_ADMIN_URL_API_KEY\" \\\n   -d '{ \"wallet_key\": \"example-encryption-key-02\" }'\n

Response

{}\n
"},{"location":"features/Multitenancy/#per-tenant-settings","title":"Per tenant settings","text":"

ACA-Py startup parameters/environment variables can also be configured at a tenant/subwallet level. PR#2233 provides the ability to update the following subset of settings when creating or updating the subwallet:

| Labels | Setting |
| --- | --- |
| ACAPY_LOG_LEVEL log-level | log.level |
| ACAPY_INVITE_PUBLIC invite-public | debug.invite_public |
| ACAPY_PUBLIC_INVITES public-invites | public_invites |
| ACAPY_AUTO_ACCEPT_INVITES auto-accept-invites | debug.auto_accept_invites |
| ACAPY_AUTO_ACCEPT_REQUESTS auto-accept-requests | debug.auto_accept_requests |
| ACAPY_AUTO_PING_CONNECTION auto-ping-connection | auto_ping_connection |
| ACAPY_MONITOR_PING monitor-ping | debug.monitor_ping |
| ACAPY_AUTO_RESPOND_MESSAGES auto-respond-messages | debug.auto_respond_messages |
| ACAPY_AUTO_RESPOND_CREDENTIAL_OFFER auto-respond-credential-offer | debug.auto_respond_credential_offer |
| ACAPY_AUTO_RESPOND_CREDENTIAL_REQUEST auto-respond-credential-request | debug.auto_respond_credential_request |
| ACAPY_AUTO_VERIFY_PRESENTATION auto-verify-presentation | debug.auto_verify_presentation |
| ACAPY_NOTIFY_REVOCATION notify-revocation | revocation.notify |
| ACAPY_AUTO_REQUEST_ENDORSEMENT auto-request-endorsement | endorser.auto_request |
| ACAPY_AUTO_WRITE_TRANSACTIONS auto-write-transactions | endorser.auto_write |
| ACAPY_CREATE_REVOCATION_TRANSACTIONS auto-create-revocation-transactions | endorser.auto_create_rev_reg |
| ACAPY_ENDORSER_ROLE endorser-protocol-role | endorser.protocol_role |

An extra_settings dict field has been added to the create-wallet request schema; extra_settings can be configured in the request body as below:

Example Request

{\n    \"wallet_name\": \" ... \",\n    \"default_label\": \" ... \",\n    \"wallet_type\": \" ... \",\n    \"wallet_key\": \" ... \",\n    \"key_management_mode\": \"managed\",\n    \"wallet_webhook_urls\": [],\n    \"wallet_dispatch_type\": \"base\",\n    \"extra_settings\": {\n        \"ACAPY_LOG_LEVEL\": \"INFO\",\n        \"ACAPY_INVITE_PUBLIC\": true,\n        \"public-invites\": true\n    },\n}\n
echo $new_tenant | curl -X POST \"${ACAPY_ADMIN_URL}/multitenancy/wallet\" \\\n  -H \"Content-Type: application/json\" \\\n  -H \"X-Api-Key: $ACAPY_ADMIN_URL_API_KEY\" \\\n  -d @-\n

An extra_settings dict field has likewise been added to the update-wallet request schema.

Example Request

  {\n    \"wallet_webhook_urls\": [ ... ],\n    \"wallet_dispatch_type\": \"default\",\n    \"label\": \" ... \",\n    \"image_url\": \" ... \",\n    \"extra_settings\": {\n        \"ACAPY_LOG_LEVEL\": \"INFO\",\n        \"ACAPY_INVITE_PUBLIC\": true,\n        \"ACAPY_PUBLIC_INVITES\": false\n    },\n  }\n
  echo $update_tenant | curl  -X PUT \"${ACAPY_ADMIN_URL}/multitenancy/wallet/${WALLET_ID}\" \\\n   -H \"Content-Type: application/json\" \\\n   -H \"x-api-key: $ACAPY_ADMIN_URL_API_KEY\" \\\n   -d @-\n
"},{"location":"features/PlugIns/","title":"Deeper Dive: Aca-Py Plug-Ins","text":""},{"location":"features/PlugIns/#whats-in-a-plug-in-and-how-does-it-work","title":"What's in a Plug-In and How does it Work?","text":"

Plug-ins are loaded on Aca-Py startup based on the following parameters:

The --plugin parameter specifies a package that is loaded by Aca-Py at runtime, and extends Aca-Py by adding support for additional protocols and message types, and/or extending the Admin API with additional endpoints.

The original plug-in design (which we will call the "old" model) explicitly included message_types.py and routes.py (to add Admin APIs). Functionality was added later (we'll call this the "new" model) to allow the plug-in to include a generic setup package that can perform arbitrary initialization. The "new" model also includes support for a definition.py file that can specify plug-in version information: the major/minor plug-in version, as well as the minimum supported version (if another agent is running an older version of the plug-in).

You can discover which plug-ins are installed in an aca-py instance by calling the GET /plugins endpoint (in the "server" section). Note that this will return all loaded protocols, including the built-ins. You can call GET /status/config to inspect the Aca-Py configuration, which will include the configuration for the external plug-ins.

"},{"location":"features/PlugIns/#setup-method","title":"setup method","text":"

If a setup method is provided, it will be called. If not, the message_types.py and routes.py will be explicitly loaded.

This would be in the package/module __init__.py:

from aries_cloudagent.config.injection_context import InjectionContext\n\n\nasync def setup(context: InjectionContext):\n    pass\n

TODO: I couldn't find an implementation of a custom setup in any of the existing plug-ins, so I'm not completely sure what the best practices are for this option.
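
With that caveat, one plausible shape for a custom setup is sketched below: it resolves the ProtocolRegistry from the context and registers the plug-in's message types, roughly what Aca-Py does itself when no setup method is provided. The message-type map here is hypothetical.

```python
from aries_cloudagent.config.injection_context import InjectionContext
from aries_cloudagent.core.protocol_registry import ProtocolRegistry

# Hypothetical message-type map, as a plug-in's message_types.py might define it.
MESSAGE_TYPES = {
    "https://didcomm.org/my-protocol/1.0/ping": "my_plugin.v1_0.messages.ping.Ping",
}

async def setup(context: InjectionContext):
    # Resolve the registry and register this plug-in's message types.
    protocol_registry = context.inject(ProtocolRegistry)
    protocol_registry.register_message_types(MESSAGE_TYPES)
```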

"},{"location":"features/PlugIns/#message_typespy","title":"message_types.py","text":"

When loading a plug-in, if there is a message_types.py available, Aca-Py will check the following attributes to initialize the protocol(s):

"},{"location":"features/PlugIns/#routespy","title":"routes.py","text":"

If routes.py is available, then Aca-Py will call the following functions to initialize the Admin endpoints:

"},{"location":"features/PlugIns/#definitionpy","title":"definition.py","text":"

If definition.py is available, Aca-Py will read this package to determine protocol version information. An example follows that specifies two protocol versions:

versions = [\n    {\n        \"major_version\": 1,\n        \"minimum_minor_version\": 0,\n        \"current_minor_version\": 0,\n        \"path\": \"v1_0\",\n    },\n    {\n        \"major_version\": 2,\n        \"minimum_minor_version\": 0,\n        \"current_minor_version\": 0,\n        \"path\": \"v2_0\",\n    },\n]\n

The attributes are:

"},{"location":"features/PlugIns/#loading-aca-py-plug-ins-at-runtime","title":"Loading Aca-Py Plug-Ins at Runtime","text":"

The load sequence for a plug-in (the "Startup" participant depends on how Aca-Py is running: upgrade, provision, or start):

sequenceDiagram\n  participant Startup\n  Note right of Startup: Configuration is loaded on startup<br/>from aca-py config params\n    Startup->>+ArgParse: configure\n    ArgParse->>settings:  [\"external_plugins\"]\n    ArgParse->>settings:  [\"blocked_plugins\"]\n\n    Startup->>+Conductor: setup()\n      Note right of Conductor: Each configured plug-in is validated and loaded\n      Conductor->>DefaultContext:  build_context()\n      DefaultContext->>DefaultContext:  load_plugins()\n      DefaultContext->>+PluginRegistry:  register_package() (for built-in protocols)\n        PluginRegistry->>PluginRegistry:  register_plugin() (for each sub-package)\n      DefaultContext->>PluginRegistry:  register_plugin() (for non-protocol built-ins)\n      loop for each external plug-in\n      DefaultContext->>PluginRegistry:  register_plugin()\n      alt if a setup method is provided\n        PluginRegistry->>ExternalPlugIn:  has setup\n      else if routes and/or message_types are provided\n        PluginRegistry->>ExternalPlugIn:  has routes\n        PluginRegistry->>ExternalPlugIn:  has message_types\n      end\n      opt if definition is provided\n        PluginRegistry->>ExternalPlugIn:  definition()\n      end\n      end\n      DefaultContext->>PluginRegistry:  init_context()\n        loop for each external plug-in\n        alt if a setup method is provided\n          PluginRegistry->>ExternalPlugIn:  setup()\n        else if a setup method is NOT provided\n          PluginRegistry->>PluginRegistry:  load_protocols()\n          PluginRegistry->>PluginRegistry:  load_protocol_version()\n          PluginRegistry->>ProtocolRegistry:  register_message_types()\n          PluginRegistry->>ProtocolRegistry:  register_controllers()\n        end\n        PluginRegistry->>PluginRegistry:  register_protocol_events()\n      end\n\n      Conductor->>Conductor:  load_transports()\n\n      Note right of Conductor: If the admin server is enabled, plug-in routes are added\n      Conductor->>AdminServer:  create admin server if enabled\n\n    Startup->>Conductor: start()\n      Conductor->>Conductor:  start_transports()\n      Conductor->>AdminServer:  start()\n\n    Note right of Startup: the following represents an<br/>admin server api request\n    Startup->>AdminServer:  setup_context() (called on each request)\n      AdminServer->>PluginRegistry:  register_admin_routes()\n      loop for each external plug-in\n        PluginRegistry->>ExternalPlugIn:  routes.register() (to register endpoints)\n      end
"},{"location":"features/PlugIns/#developing-a-new-plug-in","title":"Developing a New Plug-In","text":"

When developing a new plug-in:

"},{"location":"features/PlugIns/#pip-vs-poetry-support","title":"PIP vs Poetry Support","text":"

Most Aca-Py plug-ins support installation using poetry. It is recommended that your package support installation using either pip or poetry, to provide maximum flexibility for users of your plug-in.

"},{"location":"features/PlugIns/#plug-in-demo","title":"Plug-In Demo","text":"

TBD

"},{"location":"features/PlugIns/#aca-py-plug-ins","title":"Aca-Py Plug-ins","text":"

This list was originally published in this hackmd document.

| Maintainer | Name | Features | Last Update | Link |
| --- | --- | --- | --- | --- |
| BCGov | Redis Events | Inbound/Outbound message queue | Sep 2022 | https://github.com/bcgov/aries-acapy-plugin-redis-events |
| Hyperledger | Aries Toolbox | UI for ACA-py | Aug 2022 | https://github.com/hyperledger/aries-toolbox |
| Hyperledger | Aries ACApy Plugin Toolbox | Protocol Handlers | Aug 2022 | https://github.com/hyperledger/aries-acapy-plugin-toolbox |
| Indicio | Data Transfer | Specific Data import | Aug 2022 | https://github.com/Indicio-tech/aries-acapy-plugin-data-transfer |
| Indicio | Question & Answer | Non-Aries Protocol | Aug 2022 | https://github.com/Indicio-tech/acapy-plugin-qa |
| Indicio | Acapy-plugin-pickup | Fetching Messages from Mediator | Aug 2022 | https://github.com/Indicio-tech/acapy-plugin-pickup |
| Indicio | Machine Readable GF | Governance Framework | Mar 2022 | https://github.com/Indicio-tech/mrgf |
| Indicio | Cache | Redis Cache for Scalability | Jul 2022 | https://github.com/Indicio-tech/aries-acapy-cache-redis |
| SICPA Dlab | Kafka Events | Event Bus Integration | Aug 2022 | https://github.com/sicpa-dlab/aries-acapy-plugin-kafka-events |
| SICPA Dlab | DidComm Resolver | Universal Resolver for DIDComm | Aug 2022 | https://github.com/sicpa-dlab/acapy-resolver-didcomm |
| SICPA Dlab | Universal Resolver | Multi-ledger Reading | Jul 2021 | https://github.com/sicpa-dlab/acapy-resolver-universal |
| DDX | mydata-did-protocol | | Oct 2022 | https://github.com/decentralised-dataexchange/acapy-mydata-did-protocol |
| BCGov | Basic Message Storage | Basic message storage (traction) | Dec 2022 | https://github.com/bcgov/traction/tree/develop/plugins/basicmessage_storage |
| BCGov | Multi-tenant Provider | Multi-tenant Provider (traction) | Dec 2022 | https://github.com/bcgov/traction/tree/develop/plugins/multitenant_provider |
| BCGov | Traction Innkeeper | Innkeeper (traction) | Feb 2023 | https://github.com/bcgov/traction/tree/develop/plugins/traction_innkeeper |
"},{"location":"features/PlugIns/#references","title":"References","text":"

The following links may be helpful or provide additional context for the current plug-in support. (These are links to issues or pull requests that were raised during plug-in development.)

Configuration params:

Loading plug-ins:

Versioning for plug-ins:

"},{"location":"features/SelectiveDisclosureJWTs/","title":"SD-JWT Implementation in ACA-Py","text":"

This document describes the implementation of SD-JWTs in ACA-Py according to the Selective Disclosure for JWTs (SD-JWT) Specification, which defines a mechanism for selective disclosure of individual elements of a JSON object used as the payload of a JSON Web Signature structure.

This implementation adds an important privacy-preserving feature to JWTs, since the receiver of an unencrypted JWT can view all claims within. This feature allows the holder to present only a relevant subset of the claims for a given presentation. The issuer includes plaintext claims, called disclosures, outside of the JWT. Each disclosure corresponds to a hidden claim within the JWT. When a holder prepares a presentation, they include along with the JWT only the disclosures corresponding to the claims they wish to reveal. The verifier verifies that the disclosures in fact correspond to claim values within the issuer-signed JWT. The verifier cannot view the claim values not disclosed by the holder.
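
Concretely, per the SD-JWT specification, each disclosure is the base64url encoding of a JSON array [salt, claim name, claim value], and the issuer-signed JWT carries only a hash of it (sha-256 being the default _sd_alg). A small illustration:

```python
import base64
import hashlib
import json
import secrets

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Build a disclosure for a single claim.
salt = b64url(secrets.token_bytes(16))
disclosure = b64url(json.dumps([salt, "birthdate", "1940-01-01"]).encode())

# Only this digest appears in the JWT payload's "_sd" array; the verifier
# recomputes it from the presented disclosure to check correspondence.
digest = b64url(hashlib.sha256(disclosure.encode()).digest())
print(disclosure, digest)
```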

In addition, this implementation includes an optional mechanism for key binding, which is the concept of binding an SD-JWT to a holder's public key and requiring that the holder prove possession of the corresponding private key when presenting the SD-JWT.

"},{"location":"features/SelectiveDisclosureJWTs/#issuer-instructions","title":"Issuer Instructions","text":"

The issuer determines which claims in an SD-JWT can be selectively disclosable. In this implementation, all claims at all levels of the JSON structure are by default selectively disclosable. If the issuer wishes for certain claims to always be visible, they can indicate which claims should not be selectively disclosable, as described below. Essential verification data such as iss, iat, exp, and cnf are always visible.

The issuer creates a list of JSON paths for the claims that will not be selectively disclosable. Here is an example payload:

{\n    \"birthdate\": \"1940-01-01\",\n    \"address\": {\n        \"street_address\": \"123 Main St\",\n        \"locality\": \"Anytown\",\n        \"region\": \"Anystate\",\n        \"country\": \"US\",\n    },\n    \"nationalities\": [\"US\", \"DE\", \"SA\"],\n}\n
| Attribute to access | JSON path |
| --- | --- |
| \"birthdate\" | \"birthdate\" |
| The country attribute within the address dictionary | \"address.country\" |
| The second item in the nationalities list | \"nationalities[1]\" |
| All items in the nationalities list | \"nationalities[0:2]\" |

The specification defines options for how the issuer can handle nested structures with respect to selective disclosability. As mentioned, all claims at all levels of the JSON structure are by default selectively disclosable.

"},{"location":"features/SelectiveDisclosureJWTs/#option-1-flat-sd-jwt","title":"Option 1: Flat SD-JWT","text":"

The issuer can decide to treat the address claim in the above example payload as a block that can either be disclosed completely or not at all.

The issuer lists out all the claims inside \"address\" in the non_sd_list, but not address itself:

non_sd_list = [\n    \"address.street_address\",\n    \"address.locality\",\n    \"address.region\",\n    \"address.country\",\n]\n
"},{"location":"features/SelectiveDisclosureJWTs/#option-2-structured-sd-jwt","title":"Option 2: Structured SD-JWT","text":"

The issuer may instead decide to make the address claim contents selectively disclosable individually.

The issuer lists only \"address\" in the non_sd_list.

non_sd_list = [\"address\"]\n
"},{"location":"features/SelectiveDisclosureJWTs/#option-3-sd-jwt-with-recursive-disclosures","title":"Option 3: SD-JWT with Recursive Disclosures","text":"

The issuer may also decide to make the address claim contents selectively disclosable recursively, i.e., the address claim is made selectively disclosable as well as its sub-claims.

The issuer lists neither address nor the subclaims of address in the non_sd_list, leaving all with their default selective disclosability. If all claims can be selectively disclosable, the non_sd_list need not be defined explicitly.

"},{"location":"features/SelectiveDisclosureJWTs/#walk-through-of-sd-jwt-implementation","title":"Walk-Through of SD-JWT Implementation","text":""},{"location":"features/SelectiveDisclosureJWTs/#signing-sd-jwts","title":"Signing SD-JWTs","text":""},{"location":"features/SelectiveDisclosureJWTs/#example-input-to-walletsd-jwtsign-endpoint","title":"Example input to /wallet/sd-jwt/sign endpoint","text":"
{\n  \"did\": \"WpVJtxKVwGQdRpQP8iwJZy\",\n  \"headers\": {},\n  \"payload\": {\n    \"sub\": \"user_42\",\n    \"given_name\": \"John\",\n    \"family_name\": \"Doe\",\n    \"email\": \"johndoe@example.com\",\n    \"phone_number\": \"+1-202-555-0101\",\n    \"phone_number_verified\": true,\n    \"address\": {\n      \"street_address\": \"123 Main St\",\n      \"locality\": \"Anytown\",\n      \"region\": \"Anystate\",\n      \"country\": \"US\"\n    },\n    \"birthdate\": \"1940-01-01\",\n    \"updated_at\": 1570000000,\n    \"nationalities\": [\"US\", \"DE\", \"SA\"],\n    \"iss\": \"https://example.com/issuer\",\n    \"iat\": 1683000000,\n    \"exp\": 1883000000\n  },\n  \"non_sd_list\": [\n    \"given_name\",\n    \"family_name\",\n    \"nationalities\"\n  ]\n}\n
"},{"location":"features/SelectiveDisclosureJWTs/#output","title":"Output","text":"
\"eyJ0eXAiOiAiSldUIiwgImFsZyI6ICJFZERTQSIsICJraWQiOiAiZGlkOnNvdjpXcFZKdHhLVndHUWRScFFQOGl3Slp5I2tleS0xIn0.eyJfc2QiOiBbIkR0a21ha3NkZGtHRjFKeDBDY0kxdmxRTmZMcGFnQWZ1N3p4VnBGRWJXeXciLCAiSlJLb1E0QXVHaU1INWJIanNmNVV4YmJFeDh2YzFHcUtvX0l3TXE3Nl9xbyIsICJNTTh0TlVLNUstR1lWd0swX01kN0k4MzExTTgwVi13Z0hRYWZvRkoxS09JIiwgIlBaM1VDQmdadVRMMDJkV0pxSVY4elUtSWhnalJNX1NTS3dQdTk3MURmLTQiLCAiX294WGNuSW5Yai1SV3BMVHNISU5YaHFrRVAwODkwUFJjNDBISWE1NElJMCIsICJhdnRLVW5Sdnc1clV0TnZfUnAwUll1dUdkR0RzcnJPYWJfVjR1Y05RRWRvIiwgInByRXZJbzBseTVtNTVsRUpTQUdTVzMxWGdVTElOalo5ZkxiRG81U1pCX0UiXSwgImdpdmVuX25hbWUiOiAiSm9obiIsICJmYW1pbHlfbmFtZSI6ICJEb2UiLCAibmF0aW9uYWxpdGllcyI6IFt7Ii4uLiI6ICJPdU1wcEhpYzEySjYzWTBIY2Ffd1BVeDJCTGdUQVdZQjJpdXpMY3lvcU5JIn0sIHsiLi4uIjogIlIxczlaU3NYeVV0T2QyODdEYy1DTVYyMEdvREF3WUVHV3c4ZkVKd1BNMjAifSwgeyIuLi4iOiAid0lJbjdhQlNDVkFZcUF1Rks3Nmpra3FjVGFvb3YzcUhKbzU5WjdKWHpnUSJ9XSwgImlzcyI6ICJodHRwczovL2V4YW1wbGUuY29tL2lzc3VlciIsICJpYXQiOiAxNjgzMDAwMDAwLCAiZXhwIjogMTg4MzAwMDAwMCwgIl9zZF9hbGciOiAic2hhLTI1NiJ9.cIsuGTIPfpRs_Z49nZcn7L6NUgxQumMGQpu8K6rBtv-YRiFyySUgthQI8KZe1xKyn5Wc8zJnRcWbFki2Vzw6Cw~WyJmWURNM1FQcnZicnZ6YlN4elJsUHFnIiwgIlNBIl0~WyI0UGc2SmZ0UnRXdGFPcDNZX2tscmZRIiwgIkRFIl0~WyJBcDh1VHgxbVhlYUgxeTJRRlVjbWV3IiwgIlVTIl0~WyJ4dkRYMDBmalpmZXJpTmlQb2Q1MXFRIiwgInVwZGF0ZWRfYXQiLCAxNTcwMDAwMDAwXQ~WyJYOTlzM19MaXhCY29yX2hudFJFWmNnIiwgInN1YiIsICJ1c2VyXzQyIl0~WyIxODVTak1hM1k3QlFiWUpabVE3U0NRIiwgInBob25lX251bWJlcl92ZXJpZmllZCIsIHRydWVd~WyJRN1FGaUpvZkhLSWZGV0kxZ0Vaal93IiwgInBob25lX251bWJlciIsICIrMS0yMDItNTU1LTAxMDEiXQ~WyJOeWtVcmJYN1BjVE1ubVRkUWVxZXl3IiwgImVtYWlsIiwgImpvaG5kb2VAZXhhbXBsZS5jb20iXQ~WyJlemJwQ2lnVlhrY205RlluVjNQMGJ3IiwgImJpcnRoZGF0ZSIsICIxOTQwLTAxLTAxIl0~WyJvd3ROX3I5Z040MzZKVnJFRWhQU05BIiwgInN0cmVldF9hZGRyZXNzIiwgIjEyMyBNYWluIFN0Il0~WyJLQXktZ0VaWmRiUnNHV1dNVXg5amZnIiwgInJlZ2lvbiIsICJBbnlzdGF0ZSJd~WyJPNnl0anM2SU9HMHpDQktwa0tzU1pBIiwgImxvY2FsaXR5IiwgIkFueXRvd24iXQ~WyI0Nzg5aG5GSjhFNTRsLW91RjRaN1V3IiwgImNvdW50cnkiLCAiVVMiXQ~WyIyaDR3N0FuaDFOOC15ZlpGc2FGVHRBIiwgImFkZHJlc3MiLCB7Il9zZCI6IFsiTXhKRDV5Vm9QQzFIQnhPRmVRa21TQ1E0dVJrYmNrellza1Z5RzVwMXZ5SSIsICJVYkxmVWlpdDJTOFhlX2pYbS15RHBHZXN0ZDNZOGJZczVGaVJpbVBtMHdvIiwgImhsQzJEYVBwT2t0eHZyeUFlN3U2YnBuM09IZ193Qk5heExiS3lPRDVMdkEiLCAia2NkLVJNaC1PaGFZS1FPZ2JaajhmNUppOXNLb2hyYnlhYzNSdXRqcHNNYyJdfV0~\"\n

The sd_jwt_sign() method:

"},{"location":"features/SelectiveDisclosureJWTs/#verifying-sd-jwts","title":"Verifying SD-JWTs","text":""},{"location":"features/SelectiveDisclosureJWTs/#example-input-to-walletsd-jwtverify-endpoint","title":"Example input to /wallet/sd-jwt/verify endpoint","text":"

Using the output from the /wallet/sd-jwt/sign example above, we have decided to reveal only two of the selectively disclosable claims (sub and updated_at) and achieved this by including only the disclosures for those claims. We have also included a key binding JWT following the disclosures.

{\n  \"sd_jwt\": \"eyJ0eXAiOiAiSldUIiwgImFsZyI6ICJFZERTQSIsICJraWQiOiAiZGlkOnNvdjpXcFZKdHhLVndHUWRScFFQOGl3Slp5I2tleS0xIn0.eyJfc2QiOiBbIkR0a21ha3NkZGtHRjFKeDBDY0kxdmxRTmZMcGFnQWZ1N3p4VnBGRWJXeXciLCAiSlJLb1E0QXVHaU1INWJIanNmNVV4YmJFeDh2YzFHcUtvX0l3TXE3Nl9xbyIsICJNTTh0TlVLNUstR1lWd0swX01kN0k4MzExTTgwVi13Z0hRYWZvRkoxS09JIiwgIlBaM1VDQmdadVRMMDJkV0pxSVY4elUtSWhnalJNX1NTS3dQdTk3MURmLTQiLCAiX294WGNuSW5Yai1SV3BMVHNISU5YaHFrRVAwODkwUFJjNDBISWE1NElJMCIsICJhdnRLVW5Sdnc1clV0TnZfUnAwUll1dUdkR0RzcnJPYWJfVjR1Y05RRWRvIiwgInByRXZJbzBseTVtNTVsRUpTQUdTVzMxWGdVTElOalo5ZkxiRG81U1pCX0UiXSwgImdpdmVuX25hbWUiOiAiSm9obiIsICJmYW1pbHlfbmFtZSI6ICJEb2UiLCAibmF0aW9uYWxpdGllcyI6IFt7Ii4uLiI6ICJPdU1wcEhpYzEySjYzWTBIY2Ffd1BVeDJCTGdUQVdZQjJpdXpMY3lvcU5JIn0sIHsiLi4uIjogIlIxczlaU3NYeVV0T2QyODdEYy1DTVYyMEdvREF3WUVHV3c4ZkVKd1BNMjAifSwgeyIuLi4iOiAid0lJbjdhQlNDVkFZcUF1Rks3Nmpra3FjVGFvb3YzcUhKbzU5WjdKWHpnUSJ9XSwgImlzcyI6ICJodHRwczovL2V4YW1wbGUuY29tL2lzc3VlciIsICJpYXQiOiAxNjgzMDAwMDAwLCAiZXhwIjogMTg4MzAwMDAwMCwgIl9zZF9hbGciOiAic2hhLTI1NiJ9.cIsuGTIPfpRs_Z49nZcn7L6NUgxQumMGQpu8K6rBtv-YRiFyySUgthQI8KZe1xKyn5Wc8zJnRcWbFki2Vzw6Cw~WyJ4dkRYMDBmalpmZXJpTmlQb2Q1MXFRIiwgInVwZGF0ZWRfYXQiLCAxNTcwMDAwMDAwXQ~WyJYOTlzM19MaXhCY29yX2hudFJFWmNnIiwgInN1YiIsICJ1c2VyXzQyIl0~eyJhbGciOiAiRWREU0EiLCAidHlwIjogImtiK2p3dCIsICJraWQiOiAiZGlkOnNvdjpXcFZKdHhLVndHUWRScFFQOGl3Slp5I2tleS0xIn0.eyJub25jZSI6ICIxMjM0NTY3ODkwIiwgImF1ZCI6ICJodHRwczovL2V4YW1wbGUuY29tL3ZlcmlmaWVyIiwgImlhdCI6IDE2ODgxNjA0ODN9.i55VeR7bNt7T8HWJcfj6jSLH3Q7vFk8N0t7Tb5FZHKmiHyLrg0IPAuK5uKr3_4SkjuGt1_iNl8Wr3atWBtXMDA\"\n}\n
"},{"location":"features/SelectiveDisclosureJWTs/#verify-output","title":"Verify Output","text":"

Note that attributes in the non_sd_list (given_name, family_name, and nationalities), as well as essential verification data (iss, iat, exp), are visible directly within the payload. The disclosures include only the values for the sub and updated_at claims, since those are the only selectively disclosable claims that the holder presented. The corresponding hashes for those disclosures appear in the payload[\"_sd\"] list.

{\n  \"headers\": {\n    \"typ\": \"JWT\",\n    \"alg\": \"EdDSA\",\n    \"kid\": \"did:sov:WpVJtxKVwGQdRpQP8iwJZy#key-1\"\n  },\n  \"payload\": {\n    \"_sd\": [\n      \"DtkmaksddkGF1Jx0CcI1vlQNfLpagAfu7zxVpFEbWyw\",\n      \"JRKoQ4AuGiMH5bHjsf5UxbbEx8vc1GqKo_IwMq76_qo\",\n      \"MM8tNUK5K-GYVwK0_Md7I8311M80V-wgHQafoFJ1KOI\",\n      \"PZ3UCBgZuTL02dWJqIV8zU-IhgjRM_SSKwPu971Df-4\",\n      \"_oxXcnInXj-RWpLTsHINXhqkEP0890PRc40HIa54II0\",\n      \"avtKUnRvw5rUtNv_Rp0RYuuGdGDsrrOab_V4ucNQEdo\",\n      \"prEvIo0ly5m55lEJSAGSW31XgULINjZ9fLbDo5SZB_E\"\n    ],\n    \"given_name\": \"John\",\n    \"family_name\": \"Doe\",\n    \"nationalities\": [\n      {\n        \"...\": \"OuMppHic12J63Y0Hca_wPUx2BLgTAWYB2iuzLcyoqNI\"\n      },\n      {\n        \"...\": \"R1s9ZSsXyUtOd287Dc-CMV20GoDAwYEGWw8fEJwPM20\"\n      },\n      {\n        \"...\": \"wIIn7aBSCVAYqAuFK76jkkqcTaoov3qHJo59Z7JXzgQ\"\n      }\n    ],\n    \"iss\": \"https://example.com/issuer\",\n    \"iat\": 1683000000,\n    \"exp\": 1883000000,\n    \"_sd_alg\": \"sha-256\"\n  },\n  \"valid\": true,\n  \"kid\": \"did:sov:WpVJtxKVwGQdRpQP8iwJZy#key-1\",\n  \"disclosures\": [\n    [\n      \"xvDX00fjZferiNiPod51qQ\",\n      \"updated_at\",\n      1570000000\n    ],\n    [\n      \"X99s3_LixBcor_hntREZcg\",\n      \"sub\",\n      \"user_42\"\n    ]\n  ]\n}\n

The sd_jwt_verify() method:

"},{"location":"features/SupportedRFCs/","title":"Aries AIP and RFCs Supported in Aries Cloud Agent Python","text":"

This document provides a summary of the adherence of ACA-Py to the Aries Interop Profiles, and an overview of the ACA-Py feature set. This document is manually updated and as such, may not be up to date with the most recent release of ACA-Py or the repository main branch. Reminders (and PRs!) to update this page are welcome! If you have any questions, please contact us on the #aries channel on Hyperledger Discord or through an issue in this repo.

Last Update: 2024-03-05, Release 0.12.0rc2

The checklist version of this document was created as a joint effort between Northern Block, Animo Solutions and the Ontario government, on behalf of the Ontario government.

"},{"location":"features/SupportedRFCs/#aip-support-and-interoperability","title":"AIP Support and Interoperability","text":"

See the Aries Agent Test Harness and the Aries Interoperability Status for daily interoperability test run results between ACA-Py and other Aries Frameworks and Agents.

| AIP Version | Supported | Notes |
| --- | --- | --- |
| AIP 1.0 | Yes | Fully supported. |
| AIP 2.0 | Yes | Fully supported, with a couple of very minor exceptions noted below. |

A summary of the Aries Interop Profiles and Aries RFCs supported in ACA-Py can be found later in this document.

"},{"location":"features/SupportedRFCs/#platform-support","title":"Platform Support","text":"Platform Supported Notes Server Kubernetes BC Gov has extensive experience running ACA-Py on Red Hat's OpenShift Kubernetes Distribution. Docker Official docker images are published to the GitHub container repository at ghcr.io/hyperledger/aries-cloudagent-python. Desktop Could be run as a local service on the computer iOS Android Browser"},{"location":"features/SupportedRFCs/#agent-types","title":"Agent Types","text":"Role Supported Notes Issuer Holder Verifier Mediator Service See the aries-mediator-service, a pre-configured, production ready Aries Mediator Service based on a released version of ACA-Py. Mediator Client Indy Transaction Author Indy Transaction Endorser Indy Endorser Service See the aries-endorser-service, a pre-configured, production ready Aries Endorser Service based on a released version of ACA-Py."},{"location":"features/SupportedRFCs/#credential-types","title":"Credential Types","text":"Credential Type Supported Notes Hyperledger AnonCreds Includes full issue VC, present proof, and revoke VC support. W3C Verifiable Credentials Data Model Supports JSON-LD Data Integrity Proof Credentials using the Ed25519Signature2018, BbsBlsSignature2020 and BbsBlsSignatureProof2020 signature suites.Supports the DIF Presentation Exchange data format for presentation requests and presentation submissions.Work currently underway to add support for Hyperledger AnonCreds in W3C VC JSON-LD Format"},{"location":"features/SupportedRFCs/#did-methods","title":"DID Methods","text":"Method Supported Notes \"unqualified\" Pre-DID standard identifiers. Used either in a peer-to-peer context, or as an alternate form of a did:sov DID published on an Indy network. did:sov did:web Resolution only did:key did:peer Algorithms 2/3 and 4 Universal Resolver A plug in from SICPA is available that can be added to an ACA-Py installation to support a universal resolver capability, providing support for most DID methods in the W3C DID Method Registry."},{"location":"features/SupportedRFCs/#secure-storage-types","title":"Secure Storage Types","text":"Secure Storage Types Supported Notes Aries Askar Recommended - Aries Askar provides equivalent/evolved secure storage and cryptography support to the \"indy-wallet\" part of the Indy SDK. When using Askar (via the --wallet-type askar startup parameter), other functionality is handled by CredX (AnonCreds) and Indy VDR (Indy ledger interactions). Aries Askar-AnonCreds Recommended - When using Askar/AnonCreds (via the --wallet-type askar-anoncreds startup parameter), other functionality is handled by AnonCreds RS (AnonCreds) and Indy VDR (Indy ledger interactions).This wallet-type will eventually be the same as askar when we have fully integrated the AnonCreds RS library into ACA-Py. Indy SDK Deprecated Full support for the features of the \"indy-wallet\" secure storage capabilities found in the Indy SDK.

New installations of ACA-Py should NOT use the Indy SDK. Existing deployments using the Indy SDK should transition to Aries Askar and related components as soon as possible.

"},{"location":"features/SupportedRFCs/#miscellaneous-features","title":"Miscellaneous Features","text":"Feature Supported Notes ACA-Py Plugins The ACA-Py Plugins repository contains a growing set of plugins that are maintained and (mostly) tested against new releases of ACA-Py. Multi use invitations Invitations using public did Implicit pickup of messages in role of mediator Revocable AnonCreds Credentials Multi-Tenancy Documentation Multi-Tenant Management The Traction open source project from BC Gov is a layer on top of ACA-Py that enables the easy management of ACA-Py tenants, with an Administrative UI (\"The Innkeeper\") and a Tenant UI for using ACA-Py in a web UI (setting up, issuing, holding and verifying credentials) Connection-less (non OOB protocol / AIP 1.0) Only for issue credential and present proof Connection-less (OOB protocol / AIP 2.0) Only for present proof Signed Attachments Used for OOB Multi Indy ledger support (with automatic detection) Support added in the 0.7.3 Release. Persistence of mediated messages Plugins in the ACA-Py Plugins repository are available for persistent queue support using Redis and Kafka. Without persistent queue support, messages are stored in an in-memory queue and so are subject to loss in the case of a sudden termination of an ACA-Py process. The in-memory queue is properly handled in the case of a graceful shutdown of an ACA-Py process (e.g. processing of the queue completes and no new messages are accepted). Storage Import & Export Supported by directly interacting with the Aries Askar (e.g., no Admin API endpoint available for wallet import & export). Aries Askar support includes the ability to import storage exported from the Indy SDK's \"indy-wallet\" component. Documentation for migrating from Indy SDK storage to Askar can be found in the Indy SDK to Askar Migration Guide. SD-JWTs Signing and verifying SD-JWTs is supported"},{"location":"features/SupportedRFCs/#supported-rfcs","title":"Supported RFCs","text":""},{"location":"features/SupportedRFCs/#aip-10","title":"AIP 1.0","text":"

All RFCs listed in AIP 1.0 are fully supported in ACA-Py. The following table provides notes about the implementation of specific RFCs.

| RFC | Supported | Notes |
| --- | --- | --- |
| 0025-didcomm-transports | Yes | ACA-Py currently supports HTTP and WebSockets for both inbound and outbound messaging. Transports are pluggable and an agent instance can use multiple inbound and outbound transports. |
| 0160-connection-protocol | Yes | The agent supports Connection/DID exchange initiated from both plaintext invitations and public DIDs that enable bypassing the invitation message. |
"},{"location":"features/SupportedRFCs/#aip-20","title":"AIP 2.0","text":"

All RFCs listed in AIP 2.0 (including the sub-targets) are fully supported in ACA-Py EXCEPT as noted in the table below.

| RFC | Supported | Notes |
| --- | --- | --- |
| 0587-encryption-envelope-v2 | No | Supporting the DIDComm v2 encryption envelope does not make sense until DIDComm v2 is to be supported. |
| 0317-please-ack | No | An investigation was done into supporting please-ack and a number of complications were found. As a result, we expect that please-ack will be dropped from AIP 2.0. It has not been implemented by any Aries frameworks or deployments. |

There is a PR to the Aries RFCs repository to remove those RFCs from AIP 2.0. If that PR is merged, the RFCs will be removed from the table above.

"},{"location":"features/SupportedRFCs/#other-supported-rfcs","title":"Other Supported RFCs","text":"RFC Supported Notes 0031-discover-features Rarely (never?) used, and in implementing the V2 version of the protocol, the V1 version was found to be incomplete and was updated as part of Release 0.7.3 0028-introduce 00509-action-menu"},{"location":"features/UsingOpenAPI/","title":"Aries Cloud Agent-Python (ACA-Py) - OpenAPI Code Generation Considerations","text":"

ACA-Py provides an OpenAPI-documented REST interface for administering the agent's internal state and initiating communication with connected agents.

The running agent provides a Swagger User Interface that can be browsed and used to test various scenarios manually (see the Admin API Readme for details). However, it is often desirable to produce native language interfaces rather than coding Controllers using HTTP primitives. This is possible using several public code generation (codegen) tools. This page provides some suggestions based on experience with these tools when trying to generate TypeScript wrappers. The information should be useful to those trying to generate wrappers in other languages. Updates to this page based on experience are encouraged.

"},{"location":"features/UsingOpenAPI/#aca-py-openapi-raw-output-characteristics","title":"ACA-Py, OpenAPI Raw Output Characteristics","text":"

ACA-Py uses aiohttp_apispec tags in code to produce the OpenAPI spec file at runtime dependent on what features have been loaded. How these tags are created is documented in the API Standard Behavior section of the Admin API Readme. The OpenAPI spec is available in raw, unformatted form from a running ACA-Py instance using a route of http://<acapy host and port>/api/docs/swagger.json or from the browser Swagger User Interface directly.

The ACA-Py Admin API evolves across releases. To track these changes and ensure conformance with the OpenAPI specification, we provide a tool located at scripts/generate-open-api-spec. This tool starts ACA-Py, retrieves the swagger.json file, and runs codegen tools to generate specifications in both Swagger and OpenAPI formats with JSON output. The output of this tool enables comparison with the checked-in open-api/swagger.json and open-api/openapi.json, and also serves as a useful resource for identifying any non-conformance to the OpenAPI specification. At the moment, validation is turned off via the open-api/openAPIJSON.config file, so warning messages are printed for non-conformance, but the JSON is still output. Most of the warnings reported by generate-open-api-spec relate to missing operationId fields, which result in manufactured method names being created by codegen tools. At the moment, aiohttp_apispec does not support adding operationId annotations via tags.

The generate-open-api-spec tool was initially created to help identify issues with method parameters not being sorted, resulting in somewhat random ordering each time a codegen operation was performed. This is relevant for languages that do not support named parameters, such as JavaScript. It is recommended that generate-open-api-spec be run prior to each release, and the resulting open-api/openapi.json file checked in to allow tracking of API changes over time. At the moment, this process is not automated as part of the release pipeline.

"},{"location":"features/UsingOpenAPI/#generating-language-wrappers-for-aca-py","title":"Generating Language Wrappers for ACA-Py","text":"

There are inevitably differences around best practice for method naming based on coding language and organization standards.

Best practice for generating ACA-Py language wrappers is to obtain the raw OpenAPI file from a configured/running ACA-Py instance and then post-process it with a merge utility to match routes and insert desired operationId fields. This allows the greatest flexibility in conforming to external naming requirements.
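For illustration, here is a minimal sketch of such a post-processing step in Python; the input/output file names and the route-to-operationId mapping are hypothetical examples, not part of the ACA-Py tooling:

import json

# Hypothetical mapping from (route, HTTP method) to the desired operationId.
OPERATION_IDS = {
    ("/connections", "get"): "listConnections",
    ("/connections/create-invitation", "post"): "createInvitation",
}

with open("swagger.json") as spec_file:
    spec = json.load(spec_file)

# Walk every operation in the spec and insert the desired operationId.
for route, methods in spec.get("paths", {}).items():
    for method, operation in methods.items():
        op_id = OPERATION_IDS.get((route, method))
        if op_id:
            operation["operationId"] = op_id

with open("swagger-with-ids.json", "w") as out_file:
    json.dump(spec, out_file, indent=2)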

Two major open-source code generation tools are Swagger and OpenAPI Tools. Which of these to use can be very dependent on language support required and preference for the style of code generated.

OpenAPI Tools was found to offer some nice features when generating TypeScript. It creates separate files for each class and allows the use of a .openapi-generator-ignore file to override generation if there is a spec file issue that needs to be maintained manually.

If generating code for languages that do not support named parameters, it is recommended to specify the useSingleRequestParameter or equivalent in your code generator of choice. The reason is that, as mentioned previously, there have been instances where parameters were not sorted when output into the raw ACA-Py API spec file, and this approach helps remove that risk.

Another suggestion for code generation is to keep modelPropertyNaming set to original when generating code. Although it is tempting to try to enable marshaling into standard naming formats such as camelCase, the models represent what is sent on the wire and documented in the Aries Protocol RFCs. It has proven handy to see code references correspond directly with the protocol RFCs when debugging. The names will also correspond directly with what the model shows in the ACA-Py Swagger UI in a browser if you need to try something out manually before coding. One final point: on occasion, code generation tools have been found to get the marshaling wrong when the model name format is changed.

"},{"location":"features/UsingOpenAPI/#existing-language-wrappers-for-aca-py","title":"Existing Language Wrappers for ACA-Py","text":""},{"location":"features/UsingOpenAPI/#python","title":"Python","text":""},{"location":"features/UsingOpenAPI/#go","title":"Go","text":""},{"location":"features/UsingOpenAPI/#java","title":"Java","text":""},{"location":"features/devcontainer/","title":"ACA-Py Development with Dev Container","text":"

The following guide will get you up and running and developing/debugging ACA-Py as quickly as possible. We provide a devcontainer and will use VS Code to illustrate.

By no means is ACA-Py limited to these tools; they are merely examples.

For information on running demos and tests using the provided shell scripts, see the DevReadMe readme.

"},{"location":"features/devcontainer/#caveats","title":"Caveats","text":"

The primary use case for this devcontainer is for developing, debugging and unit testing (pytest) the aries_cloudagent source code.

There are limitations to running this devcontainer; for example, all networking is confined to the container. The container has docker-in-docker, which allows running demos, building docker images, and running docker compose, all within the container.

"},{"location":"features/devcontainer/#files","title":"Files","text":"

The .devcontainer folder contains the devcontainer.json file which defines this container. We are using a Dockerfile and post-install.sh to build and configure the container run image. The Dockerfile is simple, but is in place to simplify image enhancements (e.g., adding poetry to the image). The post-install.sh will install some additional development libraries (including for BDD support).

"},{"location":"features/devcontainer/#devcontainer","title":"Devcontainer","text":"

What are Development Containers?

A Development Container (or Dev Container for short) allows you to use a container as a full-featured development environment. It can be used to run an application, to separate tools, libraries, or runtimes needed for working with a codebase, and to aid in continuous integration and testing. Dev containers can be run locally or remotely, in a private or public cloud.

see https://containers.dev.

In this guide, we will use Docker and Visual Studio Code with the Dev Containers Extension installed, please set your machine up with those. As of writing, we used the following:

"},{"location":"features/devcontainer/#open-aca-py-in-the-devcontainer","title":"Open ACA-Py in the devcontainer","text":"

To open ACA-Py in a devcontainer, we open the root of this repository. We can open in 2 ways:

  1. Open Visual Studio Code, open the Command Palette, and use Dev Containers: Open Folder in Container...
  2. Open Visual Studio Code and File|Open Folder..., you should be prompted to Reopen in Container.

NOTE follow any prompts to install Python Extension or reload window for Pylance when first building the container.

ADDITIONAL NOTE we advise that after each time you rebuild the container that you also perform: Developer: Reload Window as some extensions seem to require this in order to work as expected.

"},{"location":"features/devcontainer/#devcontainerjson","title":"devcontainer.json","text":"

When the .devcontainer/devcontainer.json is opened, you will see it building... it is building a Python 3.9 image (bash shell) and loading it with all the ACA-Py requirements (and black). We also load a few Visual Studio Code settings (for running pytest and formatting with Flake8 and Black).

"},{"location":"features/devcontainer/#poetry","title":"Poetry","text":"

The Python libraries / dependencies are installed using poetry. For the devcontainer, we DO NOT use virtual environments. This means you will not see or need venv prompts in the terminals, and you will not need to run tasks through poetry (i.e., poetry run black .). If you need to add new dependencies, you will need to add the dependency via poetry AND you should rebuild your devcontainer.

In VS Code, open a Terminal, you should be able to run the following commands:

python -m aries_cloudagent -v\ncd aries_cloudagent\nruff check .\nblack . --check\npoetry --version\n

The first command should show you that the aries_cloudagent module is loaded (ACA-Py). The others are examples of the code quality checks that ACA-Py performs on commits (if you have pre-commit installed) and Pull Requests.

When running ruff check . in the terminal, you may see error: Failed to initialize cache at /.ruff_cache: Permission denied (os error 13) - that's ok. If there are actual ruff errors, you should see something like:

error: Failed to initialize cache at /.ruff_cache: Permission denied (os error 13)\nadmin/base_server.py:7:7: D101 Missing docstring in public class\nFound 1 error.\n
"},{"location":"features/devcontainer/#extensions","title":"extensions","text":"

We have added Black formatter and Ruff extensions. Although we have added launch settings for both ruff and black, you can also use the extension commands from the command palette.

More importantly, these extensions now run on document save, so files will be formatted and checked as you work. We advise that after each time you rebuild the container, you also perform: Developer: Reload Window to ensure the extensions are loaded correctly.

"},{"location":"features/devcontainer/#running-docker-in-docker-demos","title":"Running docker-in-docker demos","text":"

Start by running a von-network inside your dev container, or connect to a hosted ledger (in which case you will need to adjust the ledger configuration).

git clone https://github.com/bcgov/von-network\ncd von-network\n./manage build\n./manage start\ncd ..\n

If you want revocation, start up a tails server in your dev container, or connect to a hosted tails server (once again, you will need to adjust the configuration).

git clone https://github.com/bcgov/indy-tails-server.git\ncd indy-tails-server/docker\n./manage build\n./manage start\ncd ../..\n
# open a terminal in VS Code...\ncd demo\n./run_demo faber\n# open a second terminal in VS Code...\ncd demo\n./run_demo alice\n# follow the script...\n
"},{"location":"features/devcontainer/#further-reading-and-links","title":"Further Reading and Links","text":""},{"location":"features/devcontainer/#aca-py-debugging","title":"ACA-Py Debugging","text":"

To better illustrate debugging pytests and ACA-Py runtime code, let's add some run/debug configurations to VS Code. If you have your own launch.json and settings.json, please cut and paste what you want/need.

cp -R .vscode-sample .vscode\n

This will add a launch.json, settings.json and multiple ACA-Py configuration files for developing with different scenarios.

Multiple agent configurations are provided to demonstrate launching multiple agents in a debug session. Any of the config files and the launch file can be changed and customized to meet your needs. They are all set up to run on different ports so they don't interfere with each other. Running the debug session from inside the dev container allows you to contact other services such as a local ledger or tails server using localhost, while still being able to access the Swagger admin API through your browser.

For all of the agents, if you want to use a ledger (von-network) other than localhost, you will need to change the genesis-url config. If you don't want to support revocation, remove or comment out the tails-server-base-url config; if you want to use a non-localhost tails server, you will need to change the URL.

"},{"location":"features/devcontainer/#faber","title":"Faber","text":""},{"location":"features/devcontainer/#alice","title":"Alice","text":""},{"location":"features/devcontainer/#endorser","title":"Endorser","text":""},{"location":"features/devcontainer/#author","title":"Author","text":""},{"location":"features/devcontainer/#multitenant-admin","title":"Multitenant-Admin","text":""},{"location":"features/devcontainer/#try-running-faber-and-alice-at-the-same-time-and-add-break-points-and-recreate-the-demo","title":"Try running Faber and Alice at the same time and add break points and recreate the demo","text":"

To run your ACA-Py code in debug mode, go to the Run and Debug view, select the agent(s) you want to start and click Start Debugging (F5).

This will start your source code as a running ACA-Py instance; all configuration is in the *.yml files. This is just a sample configuration. Note that we are not using a database and are joining a local VON Network (by default, http://localhost:9000). You could change this to another ledger such as http://test.bcovrin.vonx.io. These are purposefully very simple configurations.

For example, open aries_cloudagent/admin/server.py and set a breakpoint in async def status_handler(self, request: web.BaseRequest):, then call GET /status in the Admin Console and hit your breakpoint.

"},{"location":"features/devcontainer/#pytest","title":"Pytest","text":"

Pytest is installed and almost ready; however, we must build the test list. In the Command Palette, Test: Refresh Tests will scan and find the tests.

See Python Testing for more details, and Test Commands for usage.

WARNING: our pytests include coverage, which will prevent the debugger from working. One way around this would be to have a .vscode/settings.json that says not to use coverage (see above). This will allow you to set breakpoints in the pytest and code under test and use commands such as Test: Debug Tests in Current File to start debugging.

WARNING: the project configuration found in pyproject.toml includes performing ruff checks when we run pytest. Including ruff does not play nicely with the Testing view. In order to have our pytests discoverable AND available in the Testing view, we create a .pytest.ini when we build the devcontainer. This file will not be committed to the repo, nor does it impact ./scripts/run_tests, but it will affect manually running pytest commands locally outside of the devcontainer. Just be aware that the file will stay on your file system after you shut down the devcontainer.

"},{"location":"features/devcontainer/#next-steps","title":"Next Steps","text":"

At this point, you now have a development environment where you can add pytests, add ACA-Py code and run and debug it all. Be aware there are limitations with devcontainer and other docker networks. You may need to adjust other docker-compose files not to start their own networks, and you may need to reference containers using host.docker.internal. This isn't a panacea but should get you going in the right direction and provide you with some development tools.

"},{"location":"gettingStarted/","title":"Becoming an Indy/Aries Developer","text":"

This guide is to get you from (pretty much) zero to developing code for issuing (and verifying) credentials with your own Aries agent. On the way, you'll look at Hyperledger Indy and how it works, find out about the architecture and components of an Aries agent and its underlying messaging protocols. Scan the list of topics below and jump in as soon as you hit a topic you don't know.

Note that in the guidance here, we include not only the links to look at, but also recommendations against certain material to which you might naturally gravitate. That's because that material is out of date and will take you down unnecessary rabbit holes. Keep your eyes on the goal - developing with Aries to interact with other agents to (amongst other things) connect, issue, hold, present and verify verifiable credentials.

Want to help with this guide? Please add issues or submit a pull request to improve the document. Point out things that are missing, things to improve and especially things that are wrong.

"},{"location":"gettingStarted/AgentConnections/","title":"Establishing a connection between Aries Agents","text":"

Use an ACA-Py issuer/verifier to establish a connection with an Aries mobile wallet. Run the Traction AnonCreds Workshop. Get your own (temporary -- it will be gone in a few weeks!) Aries Cloud Agent Python-based issuer/verifier agent. Connect to the wallet on your mobile phone, issue a credential and then present it back. Lots to learn, without ever leaving your browser!

"},{"location":"gettingStarted/AriesAgentArchitecture/","title":"Aries Cloud Agent Internals: Agent and Controller","text":"

This section talks in particular about the architecture of this Aries cloud agent implementation. An instance of an Aries agent is actually made up of two parts - the agent itself and a controller.

The agent handles all of the core Aries functionality such as interacting with other agents, managing secure storage, sending event notifications to, and receiving directions from, the controller. The controller provides the business logic that defines how that particular agent instance behaves--how to respond to events in the agent, and when to trigger the agent to initiate events. The controller might be a web or native user interface for a person or it might be coded business rules driven by an enterprise system.

Between the two is a simple interface. The agent sends event notifications to the controller and the controller sends administrative messages to the agent. The controller registers a webhook with the agent so that event notifications are delivered as HTTP callbacks, and the agent exposes a REST API to the controller for all of the administrative messages it is configured to handle. Each of the DIDComm protocols supported by the agent adds a set of administrative messages for the controller to use in responding to events. The Aries cloud agent includes an OpenAPI (aka Swagger) user interface for a developer to use to explore the API of a specific agent.
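To make the interface concrete, here is a minimal controller sketch, assuming an agent started with --webhook-url http://localhost:8080/webhooks and the admin API on port 8031; the ports and the use of Flask/requests are illustrative choices, not requirements:

from flask import Flask, request
import requests

ADMIN_API = "http://localhost:8031"  # assumed admin API address

app = Flask(__name__)

# The agent delivers event notifications as HTTP callbacks to
# <webhook-url>/topic/<topic>/ for each registered webhook URL.
@app.route("/webhooks/topic/<topic>/", methods=["POST"])
def handle_event(topic):
    event = request.get_json()
    print(f"Event on topic {topic}: {event}")  # business logic goes here
    return "", 200

def list_connections():
    # An administrative message: query the agent's connections.
    return requests.get(f"{ADMIN_API}/connections").json()

if __name__ == "__main__":
    app.run(port=8080)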

As such, the agent is just a configured dependency in an Aries cloud agent deployment. Thus, the vast majority of Aries developers will focus on building controllers (business logic) and perhaps some custom plugins (protocols, as we'll discuss soon) for the agent. Only a relatively small group of Aries cloud agent maintainers will focus on adding and maintaining the agent dependency.

Want more details about the agent and controller internals? Take a look at the Aries cloud agent deployment model document.

Back to the Aries Developer - Getting Started Guide.

"},{"location":"gettingStarted/AriesBasics/","title":"What is Aries?","text":"

Hyperledger Aries provides a shared, reusable, interoperable tool kit designed for initiatives and solutions focused on creating, transmitting and storing verifiable digital credentials. It is infrastructure for blockchain-rooted, peer-to-peer interactions. It includes a shared cryptographic wallet for blockchain clients as well as a communications protocol for allowing off-ledger interaction between those clients.

A Hyperledger Aries agent (such as the one in this repository):

The concepts and features that make up the Aries project are documented in the aries-rfcs - but don't dive in there yet! We'll get to the features and concepts to be found there with a guided tour of the key RFCs. The Aries Working Group meets weekly to expand the design and components of Aries.

The Aries Cloud Agent Python currently only supports Hyperledger Indy-based verifiable credentials and public ledgers. Longer term (as we'll see later in this guide), protocols will be extended or added to support other verifiable credential implementations and public ledgers.

Back to the Aries Developer - Getting Started Guide.

"},{"location":"gettingStarted/AriesBigPicture/","title":"Aries Agents in context: The Big Picture","text":"

Aries agents can be used in a lot of places. This classic Indy Architecture picture shows five agents - the four around the outside (on a phone, a tablet, a laptop and an enterprise server) are referred to as \"edge agents\", and many cloud agents in the blue circle.

The agents in the picture share many attributes:

While there can be many other agent setups, the picture above shows the most common ones - edge agents for people, edge agents for organizations and cloud agents for routing messages (although cloud agents could be edge agents. Sigh...). A significant emerging use case missing from that picture is agents embedded within/associated with IoT devices. In the common IoT case, IoT device agents are just variants of other edge agents, connected to the rest of the ecosystem through a cloud agent. All the same principles apply.

Misleading in the picture is that (almost) all agents connect directly to the Ledger network. In this picture it's the Sovrin ledger, but that could be any Indy network (e.g. set of nodes running indy-node software) and in future, ledgers from other providers. That implies most agents embed the ledger SDK (e.g. indy-sdk) and make calls to the ledger SDK to interact with the ledger and other SDK controlled resources (e.g. secure storage). Thus, unlike what is implied in the picture, edge agents (commonly) do not call a cloud agent to interact with the ledger - they do it directly. Super small IoT devices are an exception to that - lacking compute/storage resources and/or connectivity, they might communicate with a cloud agent that would communicate with the ledger.

While Aries agents currently only support Indy-based ledgers, the intention is to add support for other ledgers.

The (most common) purpose of cloud agents is to enable secure and privacy preserving routing of messages between edge agents. Rather than messages going directly from edge agent to edge agent (which is often impossible - for example sending to a mobile agent), messages sent from edge agent to edge agent are routed through a sequence of cloud agents. Some of those cloud agents might be controlled by the sender, some by the receiver and others might be gateways owned by agent vendors (called \"Agencies\"). In all cases, an edge agent tells routing agents \"here's how to send messages to me\", so a routing agent sending a message only has to know how to send a peer-to-peer message. While quite complicated, the protocols used by the agents largely take care of this complexity, and most developers don't have to know much about it.

Note the many caveats in this section - \"most common\", \"commonly\", etc. There are many small building blocks available in Aries and underlying components that can be combined in infinite ways. We recommend not worrying about the alternate use cases for now. Focus on understanding the common use cases while remembering that other configurations are possible.

We also recommend not digging into all the layers described here. Just as you don't have to know how TCP/IP works to write a web app, you don't need to know how indy-node or indy-sdk work to be able to build your first Aries-based application. Later in this guide we'll cover the starting points you do need to know.

Back to the Aries Developer - Getting Started Guide.

"},{"location":"gettingStarted/AriesDeveloperDemos/","title":"Developer Demos and Samples of Aries Agent","text":"

Here are some demos that developers can use to get up to speed on Aries. You don't have to be a developer to use these. If you can use docker and JSON, then that's enough to give these a try.

"},{"location":"gettingStarted/AriesDeveloperDemos/#open-api-demo","title":"Open API demo","text":"

This demo uses agents (and an Indy ledger), but doesn't implement a controller at all. Instead it uses the OpenAPI (aka Swagger) user interface to let you be the controller to connect agents, issue a credential and then present a proof of that credential.

Collaborating Agents OpenAPI Demo

"},{"location":"gettingStarted/AriesDeveloperDemos/#python-controller-demo","title":"Python Controller demo","text":"

Run this demo to see a couple of simple Python controller implementations for Alice and Faber. Like the previous demo, this shows the agents connecting, Faber issuing a credential to Alice and then requesting a proof based on the credential. Running the demo is simple, but there's a lot for a developer to learn from the code.

Python-based Alice/Faber Demo

"},{"location":"gettingStarted/AriesDeveloperDemos/#mobile-app-and-web-sample-bc-gov-showcase","title":"Mobile App and Web Sample - BC Gov Showcase","text":"

Try out the BC Gov Showcase to download a production Wallet for holding Verifiable Credentials, and then use your new wallet to get and present credentials in some sample scenarios. The end-to-end verifiable credential experience in 30 minutes or less.

"},{"location":"gettingStarted/AriesMessaging/","title":"An overview of Aries messaging","text":"

Aries Agents communicate with each other via a message mechanism called DIDComm (DID Communication). DIDComm enables secure, asynchronous, end-to-end encrypted messaging between agents, with messages (usually) routed through some configuration of intermediary agents. Aries agents use (an early instance of) the did:peer DID method, which uses DIDs that are not published to a public ledger, but only shared privately between the communicating parties - usually just two agents.

Given the underlying secure messaging layer (routing and encryption covered later in the \"Deeper Dive\" sections), DIDComm protocols define standard sets of messages to accomplish a task. For example:

Each protocol has a specification that defines the protocol's messages, one or more roles for the different participants, and a state machine that defines the state transitions triggered by the messages. For example, in the connection protocol, the messages are \"invitation\", \"connectionRequest\" and \"connectionResponse\", the roles are \"inviter\" and \"invitee\", and the states are \"invited\", \"requested\" and \"connected\". Each participant in an instance of a protocol tracks the state based on the messages they've seen.
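As a purely illustrative sketch (not code from any Aries framework), a participant's state tracking for the connection protocol described above can be modeled as a transition table:

# Illustrative only: state transitions for the connection protocol,
# keyed by (current state, received message type).
TRANSITIONS = {
    (None, "invitation"): "invited",
    ("invited", "connectionRequest"): "requested",
    ("requested", "connectionResponse"): "connected",
}

def next_state(current_state, message_type):
    try:
        return TRANSITIONS[(current_state, message_type)]
    except KeyError:
        raise ValueError(f"{message_type} is not valid in state {current_state}")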

Code for protocols is implemented as externalized modules from the core agent code so that they can be included (or not) in an agent deployment. The protocol code must include the definition of a state object for the protocol, handlers for the protocol messages, and the events and administrative messages that are available to the controller to inject business logic into the running of the protocol. Each administrative message becomes part of the REST API exposed by the agent instance.

Developers building Aries agents for a particular use case will generally focus on building controllers. They must understand the protocols that they are going to need, including the events the controller will receive, and the protocol's administrative messages exposed via the REST API. From time to time, such Aries agent developers might need to implement their own protocols.

Back to the Aries Developer - Getting Started Guide.

"},{"location":"gettingStarted/AriesRoutingExample/","title":"Aries Routing - an example","text":"

In this section, we'll walk through an example of complex routing in Aries, outlining some of the possibilities that can be implemented.

We'll start with the Alice and Bob example from the Cross Domain Messaging Aries RFC.

What are the DIDs involved, what's in their DIDDocs, and what communications are happening between the agents as the connections are made?

"},{"location":"gettingStarted/AriesRoutingExample/#the-scenario","title":"The Scenario","text":"

Bob and Alice want to establish a connection so that they can communicate. Bob uses an Agency endpoint (https://agents-r-us.ca), labelled as 9 and will have an agent used for routing, labelled as 3. We'll also focus on Bob's messages from his main iPhone, labelled as 4. We'll ignore Bob's other agents (5 and 6) and we won't worry about Alice's configuration (agents 1, 2 and 8). While the process below is all about Bob, Alice and her agents are doing the same interactions within her domain.

"},{"location":"gettingStarted/AriesRoutingExample/#all-the-dids","title":"All the DIDs","text":"

A DID and DIDDoc are generated by each participant in each relationship. For Bob's agents (iPhone and Routing), that includes:

That's a lot more than just the Bob and Alice relationship we usually think about!

"},{"location":"gettingStarted/AriesRoutingExample/#diddoc-data","title":"DIDDoc Data","text":"

From a routing perspective the important information in the DIDDoc is the following (as defined in the DIDDoc Conventions Aries RFC):

Let's look at the did-communication service data in the DIDDocs generated by Bob's iPhone and Routing agents, listed above:

The null serviceEndpoint for Bob's iPhone is worth a comment. Mobile apps work by sending requests to servers, but cannot be accessed directly from a server. A DIDComm mechanism (Transports Return Route) enables a server to send messages to a Mobile agent by putting the messages into the response to a request from the mobile agent. While not formalized in an Aries RFC (yet), cloud agents can use mobile platforms' (Apple and Google) notification mechanisms to trigger a user interface event.

"},{"location":"gettingStarted/AriesRoutingExample/#preparing-bobs-diddoc-for-alice","title":"Preparing Bob's DIDDoc for Alice","text":"

Given that background, let's go through the sequence of events and messages that occur in building a DIDDoc for Bob's edge agent to send to Alice's edge agent. We'll start the sequence with all of the Agents in place as the bootstrapping of the Agency, Routing Agent and Bob's iPhone is trickier than we need to go through here. We'll call that an \"exercise left for the reader\".

We'll start the process with Alice sending an out of band connection invitation message to Bob, e.g. through a QR code or a link in an email. Here's one possible sequence for creating the DIDDoc. Note that there are other ways this could be done:

Note: Instead of using the DID Bob created, the Agency and Routing Agent might use the public key used to encrypt the messages for their internal routing table lookup of where to send a message. In that case, Bob and the Routing Agent share the public key instead of the DID with their respective upstream routers.

With the DIDDoc ready, Bob uses the path provided in the invitation to send a connection-request message to Alice with the new DID and DIDDoc. Alice now knows how to get any DIDComm message to Bob in a secure, end-to-end encrypted manner. Subsequently, when Alice sends messages to Bob's agent, she uses the information in the DIDDoc to securely send the message to the Agency endpoint, it is sent through to the Routing Agent and on to Bob's iPhone agent for processing. Now Bob has the information he needs to securely send any DIDComm message to Alice in a secure, end-to-end encrypted manner.

At this time, there are no specific DIDComm protocols for the \"set up the routing\" messages between the agents in Bob's domain (Agency, Routing and iPhone). Those could be implemented as proprietary protocols by each agent provider (since it's possible one vendor would write the code for each of those agents), but it's likely they will be specified as open standard DIDComm protocols.

Based on the DIDDoc that Bob has sent Alice, for her to send a DIDComm message to Bob, Alice must:

"},{"location":"gettingStarted/ConnectIndyNetwork/","title":"Connecting to an Indy Network","text":"

To be completed.

"},{"location":"gettingStarted/CredentialRevocation/","title":"Credential Revocation in ACA-Py","text":""},{"location":"gettingStarted/CredentialRevocation/#overview","title":"Overview","text":"

Revocation is perhaps the most difficult aspect of verifiable credentials to manage. This is true in AnonCreds, particularly in the management of AnonCreds revocation registries (RevRegs). Through experience in deploying use cases with ACA-Py, we have found that it is very difficult for the controller (the application code) to manage revocation registries, and as such, we have changed the implementation in ACA-Py so that it handles almost all of the work in revoking credentials. The only thing the controller writer has to do is track the minimum information necessary to implement the business rules around revocation, such as whose credentials should be revoked and how close to real time revocations should be published.

Here is a summary of all of the AnonCreds revocation activities performed by issuers. After this, we'll provide a (much shorter) list of what an ACA-Py issuer controller has to do. For those interested, there is a more complete overview of AnonCreds revocation, including all of the roles, and some details of the cryptography behind the approach:

Since managing RevRegs is really hard for an ACA-Py controller, we have tried to minimize what an ACA-Py Issuer controller has to do, leaving everything else to be handled by ACA-Py. Of the items in the previous list, here is what an ACA-Py issuer controller does:

That is the minimum amount of tracking the controller must do while still being able to execute the business rules around revoking credentials.

From experience, we’ve added two extra features to deal with unexpected conditions:

"},{"location":"gettingStarted/CredentialRevocation/#using-aca-py-revocation","title":"Using ACA-Py Revocation","text":"

The following are the ACA-Py steps and APIs involved in handling credential revocation.

To try these out, use the ACA-Py Alice/Faber demo with tails server support enabled. You will need to have the URL of a running instance of https://github.com/bcgov/indy-tails-server.

Include the command line parameter --tails-server-base-url <indy-tails-server url>

  1. Publish credential definition

    The credential definition is created. All required revocation collateral is also created and managed, including the revocation registry definition, entry, and tails file.

    POST /credential-definitions\n{\n  \"schema_id\": schema_id,\n  \"support_revocation\": true,\n  # Only needed if support_revocation is true. Defaults to 100\n  \"revocation_registry_size\": size_int,\n  \"tag\": cred_def_tag # Optional\n\n}\nResponse:\n{\n  \"credential_definition_id\": \"credential_definition_id\"\n}\n
  2. Issue credential

    This endpoint manages revocation data. If new revocation registry data is required, it is automatically managed in the background.

    POST /issue-credential/send-offer\n{\n    \"cred_def_id\": credential_definition_id,\n    \"revoc_reg_id\": revocation_registry_id\n    \"auto_remove\": False, # We need the credential exchange record when revoking\n    ...\n}\nResponse\n{\n    \"credential_exchange_id\": credential_exchange_id\n}\n
  3. Revoking credential

    POST /revocation/revoke\n{\n    \"rev_reg_id\": <revocation_registry_id>\n    \"cred_rev_id\": <credential_revocation_id>,\n    \"publish\": <true|false>\n}\n

    If publish=false, you must use /issue-credential/publish-revocations to publish pending revocations in batches (see the sketch following this list). Revocations are not written to the ledger until this is called.

  4. When asking for proof, specify the time span when the credential is NOT revoked

     POST /present-proof/send-request\n {\n   \"connection_id\": ...,\n   \"proof_request\": {\n     \"requested_attributes\": [\n       {\n         \"name\": ...\n         \"restrictions\": ...,\n         ...\n         \"non_revoked\": # Optional, override the global one when specified\n         {\n           \"from\": <seconds from Unix Epoch> # Optional, default is 0\n           \"to\": <seconds from Unix Epoch>\n         }\n       },\n       ...\n     ],\n     \"requested_predicates\": [\n       {\n         \"name\": ...\n         ...\n         \"non_revoked\": # Optional, override the global one when specified\n         {\n           \"from\": <seconds from Unix Epoch> # Optional, default is 0\n           \"to\": <seconds from Unix Epoch>\n         }\n       },\n       ...\n     ],\n     \"non_revoked\": # Optional, only check revocation if specified\n     {\n       \"from\": <seconds from Unix Epoch> # Optional, default is 0\n       \"to\": <seconds from Unix Epoch>\n     }\n   }\n }\n
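As referenced in step 3, pending revocations can be published in a batch. Here is a hedged controller-side sketch; the admin URL is an assumption, and the exact route and the rrid2crid body shape should be verified against your running agent's Swagger UI, as they have varied between ACA-Py releases:

import requests

ADMIN_API = "http://localhost:8031"  # assumed admin API address

def publish_pending_revocations(rev_reg_id, cred_rev_ids):
    # Batch-publish the pending revocations for one revocation registry.
    body = {"rrid2crid": {rev_reg_id: cred_rev_ids}}
    resp = requests.post(f"{ADMIN_API}/revocation/publish-revocations", json=body)
    resp.raise_for_status()
    return resp.json()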
"},{"location":"gettingStarted/CredentialRevocation/#revocation-notification","title":"Revocation Notification","text":"

ACA-Py supports Revocation Notification v1.0.

Note: The optional ~please_ack is not currently supported.

"},{"location":"gettingStarted/CredentialRevocation/#issuer-role","title":"Issuer Role","text":"

To notify connections to which credentials have been issued, during step 2 above, include the following attributes in the request body:

Your request might look something like:

POST /revocation/revoke\n{\n    \"rev_reg_id\": <revocation_registry_id>\n    \"cred_rev_id\": <credential_revocation_id>,\n    \"publish\": <true|false>,\n    \"notify\": true,\n    \"connection_id\": <connection id>,\n    \"thread_id\": <thread id>,\n    \"comment\": \"optional comment\"\n}\n
"},{"location":"gettingStarted/CredentialRevocation/#holder-role","title":"Holder Role","text":"

On receipt of a revocation notification, an event with topic acapy::revocation-notification::received and payload containing the thread ID and comment is emitted on the event bus. This can be handled in plugins to further customize notification handling.

If the argument --monitor-revocation-notification is used on startup, a webhook with the topic revocation-notification and a payload containing the thread ID and comment is emitted to registered webhook URLs.
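A minimal sketch of a controller handling that webhook, assuming the agent was started with --monitor-revocation-notification and a --webhook-url pointing at this service (the framework and port are illustrative):

from flask import Flask, request

app = Flask(__name__)

@app.route("/webhooks/topic/revocation-notification/", methods=["POST"])
def revocation_notification():
    payload = request.get_json()
    # The payload carries the thread ID and an optional comment.
    print("Revocation notice:", payload.get("thread_id"), payload.get("comment"))
    return "", 200

if __name__ == "__main__":
    app.run(port=8080)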

"},{"location":"gettingStarted/CredentialRevocation/#manually-creating-revocation-registries","title":"Manually Creating Revocation Registries","text":"

NOTE: This capability is deprecated and will likely be removed entirely in an upcoming release of ACA-Py.

The process for creating revocation registries is completely automated - when you create a Credential Definition with revocation enabled, a revocation registry is automatically created (in fact 2 registries are created), and when a registry fills up, a new one is automatically created.

However, the ACA-Py admin API supports endpoints to explicitly create a new revocation registry, if you desire.

There are several endpoints that must be called, and they must be called in this order:

  1. Create the revocation registry: POST /revocation/create-registry

     You need to provide the credential definition id and the size of the registry.

  2. Fix the tails file URI: PATCH /revocation/registry/{rev_reg_id}

     Here you need to provide the full URI that will be written to the ledger, for example:

{\n  \"tails_public_uri\": \"http://host.docker.internal:6543/VDKEEMMSRTEqK4m7iiq5ZL:4:VDKEEMMSRTEqK4m7iiq5ZL:3:CL:8:faber.agent.degree_schema:CL_ACCUM:3cb5c439-928c-483c-a9a8-629c307e6b2d\"\n}\n
  3. Post the revocation registry definition to the ledger: POST /revocation/registry/{rev_reg_id}/definition

     If you are an author (i.e. have a DID with restricted ledger write access), this transaction may need to go through an endorser.

  4. Write the tails file: PUT /revocation/registry/{rev_reg_id}/tails-file

     The tails server will check that the registry definition is already written to the ledger.

  5. Post the initial accumulator value to the ledger: POST /revocation/registry/{rev_reg_id}/entry

     If you are an author (i.e. have a DID with restricted ledger write access), this transaction may need to go through an endorser.

     This operation MUST be performed on the new revocation registry definition BEFORE any revocation operations are performed.
"},{"location":"gettingStarted/CredentialRevocation/#revocation-registry-rotation","title":"Revocation Registry Rotation","text":"

From time to time an Issuer may want to issue credentials from a new Revocation Registry. That can be done by changing the Credential Definition, but that could impact verifiers. Revocation Registries go through a series of state changes: init, generated, posted, active, full, decommissioned. When issuing revocable credentials, the work is done with the active registry record. There are always 2 active registry records: one for tracking revocation until it is full, and the second to act as a \"hot swap\" in case issuance is done when the primary is full and being replaced. This ensures that there is always an active registry. When rotating, all registry records (except records in init state) are decommissioned and a new pair of active registry records are created.

Issuers can rotate their Credential Definition Revocation Registry records with a simple call: POST /revocation/active-registry/{cred_def_id}/rotate

It is advised that Issuers ensure the active registry is ready by calling GET /revocation/active-registry/{cred_def_id} after rotation and before issuance (if possible).
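For example, a hedged controller-side helper that rotates and then confirms the new active registry (the admin URL is an assumption; the routes are as documented above):

import requests

ADMIN_API = "http://localhost:8031"  # assumed admin API address

def rotate_active_registry(cred_def_id):
    # Decommission the current registries and create a new active pair.
    requests.post(
        f"{ADMIN_API}/revocation/active-registry/{cred_def_id}/rotate"
    ).raise_for_status()
    # Confirm the replacement registry is ready before issuing again.
    return requests.get(
        f"{ADMIN_API}/revocation/active-registry/{cred_def_id}"
    ).json()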

"},{"location":"gettingStarted/DIDcommMsgs/","title":"Deeper Dive: DIDComm Messaging","text":"

DIDComm peer-to-peer messages are asynchronous messages that one agent sends to another - for example, Faber would send to Alice. In between, there may be other agents and message processing, but at the edges, Faber appears to be messaging directly with Alice using encryption based on the DIDs and DIDDocs that the two shared when establishing a connection. The messages are JSON-LD-friendly messages with a \"type\" that defines the namespace, protocol, protocol version and type of the message, an \"id\" that is a GUID for the message, and additional fields as required by the message type. The namespace is currently defined to be a public DID that should be globally resolvable to a protocol specification. Currently, \"core\" messages use a DID that is not yet globally resolvable - Daniel Hardman has the keys associated with the DID.
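As an illustrative example (the field values are made up; the DID prefix shown is the one commonly seen on \"core\" message types), a connection invitation message looks roughly like:

# Illustrative DIDComm v1 message fields (values are examples):
message = {
    # type: namespace (a public DID) / protocol / version / message type
    "@type": "did:sov:BzCbsNYhMrjHiqZDTUASHg;spec/connections/1.0/invitation",
    # id: a GUID for this message
    "@id": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
    "label": "Faber Agent",
}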

Link: Message Types

As protocols are executed, the data associated with the protocol is stored in the (currently named) wallet of the agent. The data primarily consists of the state object for that instance of the protocol, and any artifacts of running the protocol. For example, when establishing a connection, the metadata associated with the connection (DIDs, DID Documents and private keys) is stored in the agent's wallet. Likewise, ledger data (DIDs, schemas, credential definitions, etc.) and credentials are cached in the wallet. This is taken care of by the Aries agent and the protocols configured into the agent.

"},{"location":"gettingStarted/DIDcommMsgs/#message-decorators","title":"Message Decorators","text":"

In addition to protocol specific data elements in messages, messages can include \"decorators\", standardized message elements that define cross-cutting behavior. The most common example is the \"thread\" decorator, which is used to link the messages in a protocol instance. As messages go back and forth between agents to complete an instance of a protocol (e.g. issuing a credential), the thread decorator data elements let the agents know to which protocol instance the message belongs. Other currently defined examples of decorators include attachments, localization, tracing and timing. Decorators are often processed by the core of the agent, but some are processed by the protocol message handlers. For example, the thread decorator is processed to retrieve the protocol state object for that instance (thread) of the protocol before control is passed to the protocol message handler.
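For example, a hedged sketch of how the \"thread\" decorator ties a reply to its protocol instance (field values are made up):

# Illustrative only: the "~thread" decorator links this message to the
# protocol instance started by the message whose @id is the "thid".
reply = {
    "@type": "did:sov:BzCbsNYhMrjHiqZDTUASHg;spec/connections/1.0/request",
    "@id": "7d9d4bd7-0a44-4dfa-a461-b65bbefb2938",
    "~thread": {"thid": "3fa85f64-5717-4562-b3fc-2c963f66afa6"},
}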

"},{"location":"gettingStarted/DecentralizedIdentityDemos/","title":"Decentralized Identity Use Case Demos","text":"

The following are some demos that you can go through to see verifiable credentials in action. For each of the demos, we've included some guidance on what you should get out of the demo - and where you should stop exploring the demos. Later on in this guide we have some command line demos built on current generation code for developers wanting to look at what's going on under the hood.

"},{"location":"gettingStarted/DecentralizedIdentityDemos/#bc-gov-showcase","title":"BC Gov Showcase","text":"

Try out the BC Gov Showcase to download a production Wallet for holding Verifiable Credentials, and then use your new wallet to get and present credentials in some sample scenarios. The end-to-end verifiable credential experience in 30 minutes or less.

"},{"location":"gettingStarted/DecentralizedIdentityDemos/#traction-anoncreds-workshop","title":"Traction AnonCreds Workshop","text":"

Now that you have a wallet, how about being an issuer, and experience what is needed on that side of an exchange? To do that, try the Traction AnonCreds Workshop. Get your own (temporary -- it will be gone in a few weeks!) Aries Cloud Agent Python-based issuer/verifier agent. Connect to the wallet on your mobile phone, issue a credential and then present it back. Lots to learn, without ever leaving your browser!

"},{"location":"gettingStarted/DecentralizedIdentityDemos/#more-demos-please","title":"More demos, please","text":"

Interested in seeing your demos/use cases added to this list? Submit an issue or a PR and we'll see about including it in this list.

"},{"location":"gettingStarted/IndyAriesDevOptions/","title":"What should I work on? Options for Aries/Indy Developers","text":"

Now that you know the basics of the Indy/Aries eco-system, what do you want to work on? There are many projects at different levels of the eco-system you could choose to work on, and many ways to contribute to the community.

This is an important summary for newcomers, as often the temptation is to start at a level far below where you plan to focus your attention. Too often devs coming into the community start at \"the blockchain\"; at indy-node (the Indy public ledger) or the indy-sdk. That is far below where the majority of developers will work and is not really that helpful if what you really want to do is build decentralized identity applications.

In the following, we go through the layers from the top of the stack to the bottom. Our expectation is that the majority of developers will work at the application level, and there will be fewer contributing developers each layer down you go. This is not to dissuade anyone from contributing at the lower levels, but rather to say that if you are not going to contribute at the lower levels, you don't need to know everything about them. It's much like web development - you don't need to know TCP/IP to build web apps.

"},{"location":"gettingStarted/IndyAriesDevOptions/#building-decentralized-identity-applications","title":"Building Decentralized Identity Applications","text":"

If you just want to build enterprise applications on top of the decentralized identity-related Hyperledger projects, you can start with building cloud-based controller apps using any language you want, and deploying your code with an instance of the code in this repository (aries-cloudagent-python).

If you want to build a mobile agent, there are open source options available, including Aries-MobileAgent-Xamarin (aka \"Aries MAX\"), which is built on Aries Framework .NET, and Aries Mobile Agent React Native, which is built on Aries Framework JavaScript.

As a developer building applications that use/embed Aries agents, you should join the Aries Working Group's weekly calls and watch the aries-rfcs repo to see what protocols are being added and extended. In some cases, you may need to create your own protocols to be added to this repository, and if you are looking for interoperability, you should specify those protocols in an open way, involving the community.

Note that if building apps is what you want to do, you don't need to do a deep dive into the Aries SDK, the Indy SDK or the Indy Node public ledger. You need to know the concepts, but it's not a requirement that you know the code base intimately.

"},{"location":"gettingStarted/IndyAriesDevOptions/#contributing-to-aries-cloudagent-python","title":"Contributing to aries-cloudagent-python","text":"

Of course as you build applications using aries-cloudagent-python, you will no doubt find deficiencies in the code and features you want added. Contributions to this repo will always be welcome.

"},{"location":"gettingStarted/IndyAriesDevOptions/#supporting-additional-ledgers","title":"Supporting Additional Ledgers","text":"

aries-cloudagent-python currently supports only Hyperledger Indy-based public ledgers and verifiable credentials exchange. A goal of Hyperledger Aries is to be ledger-agnostic, and to support other ledgers. We're experimenting with adding support for other ledgers, and would welcome assistance in doing that.

"},{"location":"gettingStarted/IndyAriesDevOptions/#other-agent-frameworks","title":"Other Agent Frameworks","text":"

Although controllers for an aries-cloudagent-python instance can be written in any language, there is definitely a place for functionality equivalent (and better) to what is in this repo in other languages. Use the example provided by the aries-cloudagent-python, evolve that using a different language, and as you discover better ways to do things, discuss and share those improvements in the broader Aries community so that this and other codebases improve.

"},{"location":"gettingStarted/IndyAriesDevOptions/#improving-aries-sdk","title":"Improving Aries SDK","text":"

This code base and other Aries agent implementations currently embed the indy-sdk. However, much of the code in the indy-sdk is being migrated into a variety of Aries language specific repositories. How this migration is to be done is still being decided, but it makes sense that the agent-type things be moved to Aries repositories. A number of language specific Aries SDK repos have been created and are being populated.

"},{"location":"gettingStarted/IndyAriesDevOptions/#improving-the-indy-sdk","title":"Improving the Indy SDK","text":"

Dropping down a level from Aries and into Indy, the indy-sdk needs to continue to evolve. The code base is robust, of high quality and well thought out, but it needs to continue to add new capabilities and improve existing features. The indy-sdk is implemented in Rust, to produce a C-callable library that can be used by client libraries built in a variety of languages.

"},{"location":"gettingStarted/IndyAriesDevOptions/#improving-indy-node","title":"Improving Indy Node","text":"

If you are interested in getting into the public ledger part of Indy, particularly if you are going to be a Sovrin Steward, you should take a deep look into indy-node. Like the indy-sdk, indy-node is robust, of high quality and is well thought out. As the network grows, use cases change and new cryptographic primitives move into the mainstream, indy-node capabilities will need to evolve. indy-node is coded in Python.

"},{"location":"gettingStarted/IndyAriesDevOptions/#working-in-cryptography","title":"Working in Cryptography","text":"

Finally, at the deepest level, and core to all of the projects is the cryptography in Hyperledger Ursa. If you are a cryptographer, that's where you want to be - and we want you there.

"},{"location":"gettingStarted/IndyBasics/","title":"Indy, Verifiable Credentials and Decentralized Identity Basics","text":"

NOTE: If you are a developer building apps on top of Aries and Indy, you DO NOT need to know the nuts and bolts of Indy to build applications. You need to know about verifiable credentials and the concepts of self-sovereign identity. But as an app developer, you don't need to do the Indy getting started pieces. Aries takes care of those details for you. The introduction linked here should be sufficient.

If you are new to Indy and verifiable credentials and want to learn the core concepts, this link provides a solid foundation in the goals and purpose of Indy, including verifiable credentials, DIDs, decentralized/self-sovereign identity, the Sovrin Foundation and more. The document is the content of the Indy chapter of the Hyperledger edX Blockchain for Business course (which you could also go through).

Feel free to do the demo that is referenced in the material, but we recommend that you not dig into that codebase. It's pretty old now - almost a year! We've got much more relevant examples later in this guide.

As well, don't use the guidance in the course to dive into the content about \"Getting Started\" with Indy. Come back here as this content is far more relevant to the current state of Indy and Aries.

"},{"location":"gettingStarted/IndyBasics/#tldr","title":"tl;dr","text":"

Indy provides an implementation of the basic functions required to implement a network for self-sovereign identity (SSI) - a ledger, client SDKs for interacting with the ledger, DIDs, and capabilities for issuing, holding and proving verifiable credentials.

Back to the Aries Developer - Getting Started Guide.

"},{"location":"gettingStarted/IssuingAnonCredsCredentials/","title":"Issuing AnonCreds Credentials","text":"

Become an issuer, and define, publish and issue verifiable credentials to a mobile wallet. Run the Traction AnonCreds Workshop. Get your own (temporary -- it will be gone in a few weeks!) Aries Cloud Agent Python-based issuer/verifier agent. Connect to the wallet on your mobile phone, issue a credential and then present it back. Lots to learn, without ever leaving your browser!

"},{"location":"gettingStarted/PresentingAnonCredsProofs/","title":"Presenting AnonCreds Proofs","text":"

Become a verifier, and construct a presentation request, send the request to a mobile wallet, get a presentation derived from AnonCreds verifiable credentials and verify the presentation. Run the Traction AnonCreds Workshop. Get your own (temporary -- it will be gone in a few weeks!) Aries Cloud Agent Python-based issuer/verifier agent. Connect to the wallet on your mobile phone, issue a credential and then present it back. Lots to learn, without ever leaving your browser!

"},{"location":"gettingStarted/RoutingEncryption/","title":"Deeper Dive: DIDComm Message Routing and Encryption","text":"

Many Aries edge agents do not directly receive messages from a peer edge agent - they have agents in between that route messages to them. This is done for many reasons, such as:

Thus, when a DIDComm message is sent from one edge agent to another, it is routed per the instructions of the receiver and for the needs of the sender. For example, in the following picture, Alice might be told by Bob to send messages to his phone (agent 4) via agents 9 and 3, and Alice might always send out messages via agent 2.

The following looks at how those requirements are met with mediators (for example, agents 9 and 3) and relays (agent 2).

"},{"location":"gettingStarted/RoutingEncryption/#inbound-routing-mediators","title":"Inbound Routing - Mediators","text":"

To tell a sender how to get a message to it, an agent puts into the DIDDoc for that sender a service endpoint for the recipient (with an encryption key) and an ordered list (possibly empty) of routing keys (called \"mediators\") to use when sending the message. To send the message, the sender must encrypt the message for the recipient, wrap the result in a \"forward\" message for each routing key in turn, and deliver the outermost message to the service endpoint (see the sketch below).

Note that when an agent uses mediators, it is its responsibility to notify any mediators that need to know of the new relationship that has been formed using the connection protocol, and of the routing needs of that relationship - where to send messages that arrive destined for a given verkey. Mediator agents maintain what amounts to a routing table, so that when they receive a forward message for a given verkey, they know where it should go.

Link: DIDDoc conventions for inbound routing
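A conceptual sketch of the sender's side follows; pack() here stands for the encryption primitive described in the Message Encryption section below, and the wrapping order mirrors the routing keys list in the recipient's DIDDoc (this is pseudo-code for the convention, not a library API):

# Conceptual only: encrypt for the recipient, then wrap in a "forward"
# message for each routing key (mediator) before hitting the endpoint.
def prepare_outbound(message, recipient_key, routing_keys, pack):
    packed = pack(message, to_verkeys=[recipient_key])
    next_target = recipient_key
    for routing_key in routing_keys:
        forward = {
            "@type": "https://didcomm.org/routing/1.0/forward",
            "to": next_target,
            "msg": packed,
        }
        packed = pack(forward, to_verkeys=[routing_key])
        next_target = routing_key
    return packed  # POST this to the recipient's service endpoint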

"},{"location":"gettingStarted/RoutingEncryption/#relays","title":"Relays","text":"

Inbound routing described above covers mediators for the receiver that the sender must know about. In addition, either the sender or the receiver may also have relays they use for outbound messages. Relays are routing agents not known to other parties, but that participate in message routing. For example, an enterprise agent might send all outbound traffic to a single gateway in the organization. When sending to a relay, the sender just wraps the message in another \"forward\" message envelope.
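
The forward envelope itself is tiny: it names where the message should go next and carries the already-encrypted payload, which stays opaque to the relay. A sketch of its shape (the @type URI and field values are illustrative):

# Illustrative shape of a \"forward\" envelope handed to an outbound relay.\nto_relay = {\n    \"@type\": \"https://didcomm.org/routing/1.0/forward\",\n    \"to\": \"<verkey of the final recipient>\",   # where the relay should send it on to\n    \"msg\": \"<the already-encrypted message>\",  # opaque to the relay\n}\n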

Link: Mediators and Relays

"},{"location":"gettingStarted/RoutingEncryption/#message-encryption","title":"Message Encryption","text":"

The DIDComm encryption handling is handled within the Aries agent, and is not really something a developer building applications using an agent needs to worry about. Further, within an Aries agent, the handling of the encryption is left to libraries - ultimately calling dependencies from Hyperledger Ursa. To encrypt a message, the agent code calls a pack() function to handle the encryption, and to decrypt a message, the agent code calls a corresponding unpack() function. The \"wire messages\" (as originally called) are described in detail here, including variations for sender-authenticated and anonymous encryption. Wire messages were meant to indicate the handling of a message from one agent directly to another, versus the higher-level concept of routing a message from an edge agent to a peer edge agent.
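
In code terms the interface is deliberately simple. The following sketch illustrates the idea only -- the function names and signatures here are hypothetical, not the exact library API:

# Illustrative sketch -- pack()/unpack() names and signatures are not the exact library API.\ndef round_trip(pack, unpack, message, sender_vk, recipient_vk):\n    packed = pack(message, to_verkeys=[recipient_vk], from_verkey=sender_vk)  # authcrypt: sender authenticated\n    packed_anon = pack(message, to_verkeys=[recipient_vk])  # anoncrypt: sender stays anonymous\n    plaintext, from_vk, to_vk = unpack(packed)  # decrypt; from_vk identifies an authcrypt sender\n    return plaintext, from_vk, to_vk\n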

Much thought has also gone into repudiable and non-repudiable messaging, as described here.

"},{"location":"gettingStarted/YourOwnAriesAgent/","title":"Creating Your Own Aries Agent","text":"

Use the \"next steps\" in the Traction AnonCreds Workshop and create your own controller. The Aries ACA-Py Controllers repository has some samples to get you started.

"},{"location":"testing/AgentTracing/","title":"Using Tracing in ACA-PY","text":"

The aca-py agent supports message tracing, according to the Tracing RFC.

Tracing can be enabled globally, for all messages/events, or it can be enabled on an exchange-by-exchange basis.

The tracing configuration (destination, tag, label) is set globally for the agent.

"},{"location":"testing/AgentTracing/#aca-py-configuration","title":"ACA-PY Configuration","text":"

The following options can be specified when starting the aca-py agent:

  --trace               Generate tracing events.\n  --trace-target <trace-target>\n                        Target for trace events (\"log\", \"message\", or http\n                        endpoint).\n  --trace-tag <trace-tag>\n                        Tag to be included when logging events.\n  --trace-label <trace-label>\n                        Label (agent name) used logging events.\n

The --trace option enables tracing globally for the agent; the other options configure the trace destination and content (the default is log).

Tracing can be enabled on an exchange-by-exchange basis, by including { ... \"trace\": True, ...} in the JSON payload to the API call (for credential and proof exchanges).
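
For example, a controller might switch tracing on for a single credential exchange by adding the flag to the request body it sends to the admin API. A sketch (the endpoint shown and the omitted payload fields depend on your setup and protocol version):

import requests  # assumes the requests package is available\n\nADMIN_URL = \"http://localhost:8031\"  # illustrative admin API endpoint\n\npayload = {\n    # ... the usual fields for the credential (or proof) exchange ...\n    \"trace\": True,  # enable tracing for just this exchange\n}\nrequests.post(f\"{ADMIN_URL}/issue-credential-2.0/send\", json=payload)\n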

"},{"location":"testing/AgentTracing/#enabling-tracing-in-the-alicefaber-demo","title":"Enabling Tracing in the Alice/Faber Demo","text":"

The run_demo script supports the following parameters and environment variables.

Environment variables:

TRACE_ENABLED          Flag to enable tracing\n\nTRACE_TARGET_URL       Host:port of endpoint to log trace events (e.g. logstash:9700)\n\nDOCKER_NET             Docker network to join (must be used if ELK stack is running in docker)\n\nTRACE_TAG              Tag to be included in all logged trace events\n

Parameters:

--trace-log            Enables tracing to the standard log output\n                       (sets TRACE_ENABLED, TRACE_TARGET, TRACE_TAG)\n\n--trace-http           Enables tracing to an HTTP endpoint (specified by TRACE_TARGET_URL)\n                       (sets TRACE_ENABLED, TRACE_TARGET, TRACE_TAG)\n

When running the Faber controller, tracing can be enabled using the T menu option:

Faber      | Connected\n    (1) Issue Credential\n    (2) Send Proof Request\n    (3) Send Message\n    (T) Toggle tracing on credential/proof exchange\n    (X) Exit?\n[1/2/3/T/X] t\n\n>>> Credential/Proof Exchange Tracing is ON\n    (1) Issue Credential\n    (2) Send Proof Request\n    (3) Send Message\n    (T) Toggle tracing on credential/proof exchange\n    (X) Exit?\n\n[1/2/3/T/X] t\n\n>>> Credential/Proof Exchange Tracing is OFF\n    (1) Issue Credential\n    (2) Send Proof Request\n    (3) Send Message\n    (T) Toggle tracing on credential/proof exchange\n    (X) Exit?\n\n[1/2/3/T/X]\n

When Exchange Tracing is ON, all exchanges will include tracing.

"},{"location":"testing/AgentTracing/#logging-trace-events-to-an-elk-stack","title":"Logging Trace Events to an ELK Stack","text":"

You can use the ELK stack in the ELK Stack sub-directory as a target for trace events: just start the ELK stack using the docker-compose file, and then, in two separate bash shells, start up the demo as follows:

DOCKER_NET=elknet TRACE_TARGET_URL=logstash:9700 ./run_demo faber --trace-http\n
DOCKER_NET=elknet TRACE_TARGET_URL=logstash:9700 ./run_demo alice --trace-http\n
"},{"location":"testing/AgentTracing/#hooking-into-event-messaging","title":"Hooking into event messaging","text":"

ACA-Py supports sending events to webhooks, which allows the demo agents to display them in the CLI. To also send them to another endpoint, use the --webhook-url option, which requires the WEBHOOK_URL environment variable. To target an endpoint running on the docker host system on port 8888, use the following:

WEBHOOK_URL=host.docker.internal:8888 ./run_demo faber --webhook-url\n
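
A minimal listener for those events might look like the following sketch, which simply prints each event's path and body:

# Minimal webhook listener sketch for port 8888 (standard library only).\n# ACA-Py POSTs each event to a /topic/<topic>/ path under the webhook URL.\nimport json\nfrom http.server import BaseHTTPRequestHandler, HTTPServer\n\nclass WebhookHandler(BaseHTTPRequestHandler):\n    def do_POST(self):\n        length = int(self.headers.get(\"Content-Length\", 0))\n        body = self.rfile.read(length) if length else b\"{}\"\n        print(self.path, json.loads(body))\n        self.send_response(200)\n        self.end_headers()\n\nHTTPServer((\"0.0.0.0\", 8888), WebhookHandler).serve_forever()\n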
"},{"location":"testing/INTEGRATION-TESTS/","title":"Integration Tests for Aca-py using Behave","text":"

Integration tests for aca-py are implemented using Behave functional tests to drive aca-py agents based on the alice/faber demo framework.

If you are new to the ACA-Py integration test suite, this video from ACA-Py Maintainer @ianco describes the Integration Tests in ACA-Py, how to run them and how to add more tests. See also the video at the end of this document about running Aries Agent Test Harness tests before you submit your pull requests.

"},{"location":"testing/INTEGRATION-TESTS/#getting-started","title":"Getting Started","text":"

To run the aca-py Behave tests, open a bash shell and run the following:

git clone https://github.com/bcgov/von-network\ncd von-network\n./manage build\n./manage start\ncd ..\ngit clone https://github.com/bcgov/indy-tails-server.git\ncd indy-tails-server/docker\n./manage build\n./manage start\ncd ../..\ngit clone https://github.com/hyperledger/aries-cloudagent-python\ncd aries-cloudagent-python/demo\n./run_bdd -t ~@taa_required\n

Note that an Indy ledger and tails server are both required (these can also be specified using environment variables).

Note also that some tests require a ledger with TAA enabled; how to run these tests is described later.

By default the test suite runs using a default (SQLite) wallet; to run the tests using postgres, run the following:

# run the above commands, up to cd aries-cloudagent-python/demo\ndocker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d -p 5432:5432 postgres:10\nACAPY_ARG_FILE=postgres-indy-args.yml ./run_bdd\n

To run the tests against the back-end askar libraries (as opposed to indy-sdk), run the following:

BDD_EXTRA_AGENT_ARGS=\"{\\\"wallet-type\\\":\\\"askar\\\"}\" ./run_bdd -t ~@taa_required\n

(Note that wallet-type is currently the only extra argument supported.)

You can run individual tests by specifying the tag(s):

./run_bdd -t @T001-AIP10-RFC0037\n
"},{"location":"testing/INTEGRATION-TESTS/#running-integration-tests-which-require-taa","title":"Running Integration Tests which require TAA","text":"

To run a local von-network with TAA enabled, run the following:

git clone https://github.com/bcgov/von-network\ncd von-network\n./manage build\n./manage start --taa-sample --logs\n

You can then run the TAA-enabled tests as follows:

./run_bdd -t @taa_required\n

or:

BDD_EXTRA_AGENT_ARGS=\"{\\\"wallet-type\\\":\\\"askar\\\"}\" ./run_bdd -t @taa_required\n

The agents run on a pre-defined set of ports; however, occasionally your local system may already be using one of these ports. (For example, macOS recently decided to use 8021 for the ftp proxy service.)

To override the default port settings:

AGENT_PORT_OVERRIDE=8030 ./run_bdd -t <some tags>\n

(Note that since the tests run multiple agents, you may require up to 60 available ports.)

"},{"location":"testing/INTEGRATION-TESTS/#aca-py-integration-tests-vs-aries-agent-test-harness-aath","title":"Aca-py Integration Tests vs Aries Agent Test Harness (AATH)","text":"

Aca-py Behave tests are based on the interoperability tests that are implemented in the Aries Agent Test Harness (AATH). Both use Behave (Gherkin) to execute tests against a running aca-py agent (or, in the case of AATH, against any compatible Aries agent); however, the aca-py integration tests focus on aca-py-specific features.

AATH:

Aca-py integration tests:

"},{"location":"testing/INTEGRATION-TESTS/#configuration-driven-tests","title":"Configuration-driven Tests","text":"

Aca-py integration tests use the same configuration approach as AATH, documented here.

In addition to support for external schemas, credential data, etc., the aca-py integration tests support configuration of the aca-py agents that are used to run the test. For example:

Scenario Outline: Present Proof where the prover does not propose a presentation of the proof and is acknowledged\n  Given \"3\" agents\n     | name  | role     | capabilities        |\n     | Acme  | issuer   | <Acme_capabilities> |\n     | Faber | verifier | <Acme_capabilities> |\n     | Bob   | prover   | <Bob_capabilities>  |\n  And \"<issuer>\" and \"Bob\" have an existing connection\n  And \"Bob\" has an issued <Schema_name> credential <Credential_data> from <issuer>\n  ...\n\n  Examples:\n     | issuer | Acme_capabilities        | Bob_capabilities | Schema_name    | Credential_data          | Proof_request  |\n     | Acme   | --public-did             |                  | driverslicense | Data_DL_NormalizedValues | DL_age_over_19 |\n     | Faber  | --public-did  --mediator | --mediator       | driverslicense | Data_DL_NormalizedValues | DL_age_over_19 |\n

In the above example, the test will run twice using the parameters specified in the \"Examples\" section. The Acme, Faber and Bob agents will be started for the test and then shut down when the test is completed.

The agent's \"capabilities\" are specified using the same command-line parameters that are supported for the Alice/Faber demo agents.

"},{"location":"testing/INTEGRATION-TESTS/#global-configuration-for-all-aca-py-agents-under-test","title":"Global Configuration for All Aca-py Agents Under Test","text":"

You can specify parameters that are applied to all aca-py agents using the ACAPY_ARG_FILE environment variable, for example:

ACAPY_ARG_FILE=postgres-indy-args.yml ./run_bdd\n

... will apply the parameters in the postgres-indy-args.yml file (which just happens to configure a postgres wallet) to all agents under test.

Or the following:

ACAPY_ARG_FILE=askar-indy-args.yml ./run_bdd\n

... will run all the tests against an askar wallet (the new shared components, which replace indy-sdk).

Any aca-py argument can be included in the yml file, and order of precedence applies (see https://pypi.org/project/ConfigArgParse/).
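
As a sketch, an argument file like askar-indy-args.yml simply lists ACA-Py command-line options (with the leading -- dropped) as YAML keys, for example:

# Illustrative arg-file content -- keys mirror ACA-Py command-line options\nwallet-type: askar\n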

"},{"location":"testing/INTEGRATION-TESTS/#specifying-environment-parameters-when-running-integration-tests","title":"Specifying Environment Parameters when Running Integration Tests","text":"

Aca-py integration tests support the following environment-driven configuration:

"},{"location":"testing/INTEGRATION-TESTS/#running-specific-test-scenarios","title":"Running specific test scenarios","text":"

Behave tests are tagged using the same standard tags as used in AATH.

To run a specific set of Aca-py integration tests (or exclude specific tests):

./run_bdd -t tag1 -t ~tag2\n

(All command line parameters are passed to the behave command, so all parameters supported by behave can be used.)

"},{"location":"testing/INTEGRATION-TESTS/#aries-agent-test-harness-aca-py-tests","title":"Aries Agent Test Harness ACA-Py Tests","text":"

This video is a presentation by Aries Cloud Agent Python (ACA-Py) developer @ianco about using the Aries Agent Test Harness for local pre-release testing of ACA-Py. Have a big change that you want to test with other Aries Frameworks? Follow this guidance to run AATH tests with your under-development branch of ACA-Py.

"},{"location":"testing/Logging/","title":"Logging docs","text":"

ACA-Py supports multiple logging configurations.

"},{"location":"testing/Logging/#log-level","title":"Log level","text":"

ACA-Py's logging is based on Python's standard logging library. Log levels DEBUG, INFO and WARNING are available. Other log levels fall back to WARNING.

"},{"location":"testing/Logging/#per-tenant-logging","title":"Per Tenant Logging","text":"

ACA-Py supports writing log messages to a file, with the wallet_id as the tenant identifier for each entry. To enable this, both multitenant mode (--multitenant) and the log file option (--log-file) are required. If --multitenant and --log-file are not both passed when starting ACA-Py, it will use the default_logging_config.ini config (backward compatible) and not log at a per-tenant level.

"},{"location":"testing/Logging/#command-line-arguments","title":"Command Line Arguments","text":"

Example:

./bin/aca-py start --log-level debug --log-file acapy.log --log-config aries_cloudagent.config:default_per_tenant_logging_config.ini\n\n./bin/aca-py start --log-level debug --log-file --multitenant --log-config ./aries_cloudagent/config/default_per_tenant_logging_config.yml\n
"},{"location":"testing/Logging/#environment-variables","title":"Environment Variables","text":"

The log level can be configured using the environment variable ACAPY_LOG_LEVEL. The log file can be set by ACAPY_LOG_FILE. The log config can be set by ACAPY_LOG_CONFIG.

Example:

ACAPY_LOG_LEVEL=info ACAPY_LOG_FILE=./acapy.log ACAPY_LOG_CONFIG=./acapy_log.ini ./bin/aca-py start\n
"},{"location":"testing/Logging/#acapy-config-file","title":"Acapy Config File","text":"

The following parameters can be used in a configuration file, like this:

log-level: WARNING\ndebug-connections: false\ndebug-presentations: false\n

Warning: debug-connections and debug-presentations must not be used in a production environment, as they also log credential claim values. Both parameters are independent of the log level: even if log-level is set to WARNING, connections and presentations will be logged as if at DEBUG level.

"},{"location":"testing/Logging/#log-config-file","title":"Log config file","text":"

The path to the log config file is provided via --log-config.

Find an example in default_logging_config.ini.

You can find a more detailed description in the logging documentation.

For per-tenant logging, find an example in default_per_tenant_logging_config.ini, which sets up TimedRotatingFileMultiProcessHandler and StreamHandler handlers. The custom TimedRotatingFileMultiProcessHandler handler supports cleaning up logs by time, maintaining backup logs, and a custom JSON formatter for logs. Its arguments, such as file name, when, interval and backupCount, can be passed as args=('acapy.log', 'd', 7, 1,) (also shown below). Note: a backupCount of 0 means all backup log files will be retained and never deleted. More details about these attributes can be found here.

[loggers]\nkeys=root\n\n[handlers]\nkeys=stream_handler, timed_file_handler\n\n[formatters]\nkeys=formatter\n\n[logger_root]\nlevel=ERROR\nhandlers=stream_handler, timed_file_handler\n\n[handler_stream_handler]\nclass=StreamHandler\nlevel=DEBUG\nformatter=formatter\nargs=(sys.stderr,)\n\n[handler_timed_file_handler]\nclass=logging.handlers.TimedRotatingFileMultiProcessHandler\nlevel=DEBUG\nformatter=formatter\nargs=('acapy.log', 'd', 7, 1,)\n\n[formatter_formatter]\nformat=%(asctime)s %(wallet_id)s %(levelname)s %(pathname)s:%(lineno)d %(message)s\n

For DictConfig (dict logging config file), find an example in default_per_tenant_logging_config.yml with the same attributes as the default_per_tenant_logging_config.ini file.

version: 1\nformatters:\n  default:\n    format: '%(asctime)s %(wallet_id)s %(levelname)s %(pathname)s:%(lineno)d %(message)s'\nhandlers:\n  console:\n    class: logging.StreamHandler\n    level: DEBUG\n    formatter: default\n    stream: ext://sys.stderr\n  rotating_file:\n    class: logging.handlers.TimedRotatingFileMultiProcessHandler\n    level: DEBUG\n    filename: 'acapy.log'\n    when: 'd'\n    interval: 7\n    backupCount: 1\n    formatter: default\nroot:\n  level: INFO\n  handlers:\n    - console\n    - rotating_file\n
"},{"location":"testing/Troubleshooting/","title":"Troubleshooting Aries Cloud Agent Python","text":"

This document contains some troubleshooting information that contributors to the community think may be helpful. Most of the content here assumes the reader has gotten started with ACA-Py and has arrived here because of an issue that came up in their use of ACA-Py.

Contributions (via pull request) to this document are welcome. Topics added here will mostly come from reported issues that contributors think would be helpful to the larger community.

"},{"location":"testing/Troubleshooting/#table-of-contents","title":"Table of Contents","text":""},{"location":"testing/Troubleshooting/#unable-to-connect-to-ledger","title":"Unable to Connect to Ledger","text":"

The most common issue hit by first-time users is getting an \"unable to connect to ledger\" error on startup. Here is a list of things to check when you see that error.

"},{"location":"testing/Troubleshooting/#local-ledger-running","title":"Local ledger running?","text":"

Unless you specify via startup parameters or environment variables that you are using a public Hyperledger Indy ledger, ACA-Py assumes that you are running a local ledger -- an instance of von-network. If that is the case, have you started your local ledger, and did it start up properly? Things to check:

"},{"location":"testing/Troubleshooting/#any-firewalls","title":"Any Firewalls","text":"

Do you have any firewalls in play that might be blocking the ports that are used by the ledger, notably 9701-9708? To access a ledger, the ACA-Py instance must be able to get to those ports of the ledger, regardless of whether the ledger is local or remote.
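
A quick way to confirm those ports are reachable from the machine running ACA-Py is a small check like the following sketch (substitute your ledger's host for localhost):

# Check that the Indy ledger ports (9701-9708) are reachable.\nimport socket\n\nLEDGER_HOST = \"localhost\"  # substitute your ledger's host here\n\nfor port in range(9701, 9709):\n    with socket.socket() as s:\n        s.settimeout(2)\n        status = \"open\" if s.connect_ex((LEDGER_HOST, port)) == 0 else \"blocked/closed\"\n        print(f\"port {port}: {status}\")\n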

"},{"location":"testing/Troubleshooting/#damaged-unpublishable-revocation-registry","title":"Damaged, Unpublishable Revocation Registry","text":"

We have discovered that in the ACA-Py AnonCreds implementation, it is possible to get into a state where the publishing of updates to a Revocation Registry (RevReg) is impossible. This can happen when ACA-Py starts to publish an update to the RevReg, but the write transaction to the Hyperledger Indy ledger fails for some reason. When a credential revocation is published, aca-py (via indy-sdk or askar/credx) updates the revocation state in the wallet as well as on the ledger. The revocation state is dependent on whatever the previous revocation state is/was, so if the ledger and wallet are mismatched, the publish will fail. (Andrew's PR #1804 (merged) should mitigate this, but probably won't completely eliminate it.)

For example, in a case we've seen, the write RevRegEntry transaction failed at the ledger because there was a problem with accepting the TAA (Transaction Author Agreement). Once the error occurred, the RevReg state held by the ACA-Py agent and the RevReg state on the ledger were different. Even after the ability to write to the ledger was restored, the RevReg could still not be published because of the differences in the RevReg state. Such a situation can now be corrected, as follows:

To address this issue, some new endpoints were added to ACA-Py in Release 0.7.4, as follows:

Note that there is (currently) a backlog item to prevent the wallet and ledger from getting out of sync (e.g. don't update the ACA-Py RevReg state if the ledger write fails), but even after that change is made, this ability will be retained for use if needed.

We originally ran into this due to the TAA acceptance getting lost when switching to multi-ledger (as described here). Note that this is one way this \"out of sync\" scenario can occur, but there may be others.

We added an integration test that demonstrates/tests this issue here.

To run the scenario either manually or using the integration tests, you can do the following:

"},{"location":"testing/UnitTests/","title":"ACA-Py Unit Tests","text":"

The following covers the Unit Testing framework in ACA-Py, how to run the tests, and how to add unit tests.

This video is a presentation of the material covered in this document by developer @shaangill025.

"},{"location":"testing/UnitTests/#running-unit-tests-in-aca-py","title":"Running unit tests in ACA-Py","text":""},{"location":"testing/UnitTests/#pytest","title":"Pytest","text":"

Example: aries_cloudagent/core/tests/test_event_bus.py

@pytest.fixture\ndef event_bus():\n    yield EventBus()\n\n\n@pytest.fixture\ndef profile():\n    yield async_mock.MagicMock()\n\n\n@pytest.fixture\ndef event():\n    event = Event(topic=\"anything\", payload=\"payload\")\n    yield event\n\nclass MockProcessor:\n    def __init__(self):\n        self.profile = None\n        self.event = None\n\n    async def __call__(self, profile, event):\n        self.profile = profile\n        self.event = event\n\n\n@pytest.fixture\ndef processor():\n    yield MockProcessor()\n
def test_sub_unsub(event_bus: EventBus, processor):\n    \"\"\"Test subscribe and unsubscribe.\"\"\"\n    event_bus.subscribe(re.compile(\".*\"), processor)\n    assert event_bus.topic_patterns_to_subscribers\n    assert event_bus.topic_patterns_to_subscribers[re.compile(\".*\")] == [processor]\n    event_bus.unsubscribe(re.compile(\".*\"), processor)\n    assert not event_bus.topic_patterns_to_subscribers\n

From aries_cloudagent/core/event_bus.py

class EventBus:\n    def __init__(self):\n        self.topic_patterns_to_subscribers: Dict[Pattern, List[Callable]] = {}\n\n    def subscribe(self, pattern: Pattern, processor: Callable):\n        if pattern not in self.topic_patterns_to_subscribers:\n            self.topic_patterns_to_subscribers[pattern] = []\n        self.topic_patterns_to_subscribers[pattern].append(processor)\n\n    def unsubscribe(self, pattern: Pattern, processor: Callable):\n        if pattern in self.topic_patterns_to_subscribers:\n            try:\n                index = self.topic_patterns_to_subscribers[pattern].index(processor)\n            except ValueError:\n                return\n            del self.topic_patterns_to_subscribers[pattern][index]\n            if not self.topic_patterns_to_subscribers[pattern]:\n                del self.topic_patterns_to_subscribers[pattern]\n
@pytest.mark.asyncio\nasync def test_sub_notify(event_bus: EventBus, profile, event, processor):\n    \"\"\"Test subscriber receives event.\"\"\"\n    event_bus.subscribe(re.compile(\".*\"), processor)\n    await event_bus.notify(profile, event)\n    assert processor.profile == profile\n    assert processor.event == event\n
async def notify(self, profile: \"Profile\", event: Event):\n    partials = []\n    for pattern, subscribers in self.topic_patterns_to_subscribers.items():\n        match = pattern.match(event.topic)\n\n        if not match:\n            continue\n\n        for subscriber in subscribers:\n            partials.append(\n                partial(\n                    subscriber,\n                    profile,\n                    event.with_metadata(EventMetadata(pattern, match)),\n                )\n            )\n\n    for processor in partials:\n        try:\n            await processor()\n        except Exception:\n            LOGGER.exception(\"Error occurred while processing event\")\n
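
Putting the pieces together, here is a minimal, self-contained usage sketch of the event bus outside the test fixtures (a MagicMock stands in for a real ACA-Py profile):

import asyncio\nimport re\nfrom unittest import mock\n\nfrom aries_cloudagent.core.event_bus import Event, EventBus\n\nasync def main():\n    bus = EventBus()\n\n    async def printer(profile, event):\n        # Subscribers are called with (profile, event), as in MockProcessor above.\n        print(f\"{event.topic}: {event.payload}\")\n\n    bus.subscribe(re.compile(\"^demo::.*\"), printer)\n    await bus.notify(mock.MagicMock(), Event(topic=\"demo::hello\", payload={\"ok\": True}))\n\nasyncio.run(main())\n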
"},{"location":"testing/UnitTests/#asynctest","title":"asynctest","text":"

From: aries_cloudagent/protocols/didexchange/v1_0/tests/test_manager.py

class TestDidExchangeManager(AsyncTestCase, TestConfig):\n    async def setUp(self):\n        self.responder = MockResponder()\n\n        self.oob_mock = async_mock.MagicMock(\n            clean_finished_oob_record=async_mock.AsyncMock(return_value=None)\n        )\n\n        self.route_manager = async_mock.MagicMock(RouteManager)\n        ...\n        self.profile = InMemoryProfile.test_profile(\n            {\n                \"default_endpoint\": \"http://aries.ca/endpoint\",\n                \"default_label\": \"This guy\",\n                \"additional_endpoints\": [\"http://aries.ca/another-endpoint\"],\n                \"debug.auto_accept_invites\": True,\n                \"debug.auto_accept_requests\": True,\n                \"multitenant.enabled\": True,\n                \"wallet.id\": True,\n            },\n            bind={\n                BaseResponder: self.responder,\n                OobMessageProcessor: self.oob_mock,\n                RouteManager: self.route_manager,\n                ...\n            },\n        )\n        ...\n\n    async def test_receive_invitation_no_auto_accept(self):\n        async with self.profile.session() as session:\n            mediation_record = MediationRecord(\n                role=MediationRecord.ROLE_CLIENT,\n                state=MediationRecord.STATE_GRANTED,\n                connection_id=self.test_mediator_conn_id,\n                routing_keys=self.test_mediator_routing_keys,\n                endpoint=self.test_mediator_endpoint,\n            )\n            await mediation_record.save(session)\n            with async_mock.patch.object(\n                self.multitenant_mgr, \"get_default_mediator\"\n            ) as mock_get_default_mediator:\n                mock_get_default_mediator.return_value = mediation_record\n                invi_rec = await self.oob_manager.create_invitation(\n                    my_endpoint=\"testendpoint\",\n                    hs_protos=[HSProto.RFC23],\n                )\n\n                invitee_record = await self.manager.receive_invitation(\n                    invi_rec.invitation,\n                    auto_accept=False,\n                )\n                assert invitee_record.state == ConnRecord.State.INVITATION.rfc23\n
async def receive_invitation(\n    self,\n    invitation: OOBInvitationMessage,\n    their_public_did: Optional[str] = None,\n    auto_accept: Optional[bool] = None,\n    alias: Optional[str] = None,\n    mediation_id: Optional[str] = None,\n) -> ConnRecord:\n    ...\n    accept = (\n        ConnRecord.ACCEPT_AUTO\n        if (\n            auto_accept\n            or (\n                auto_accept is None\n                and self.profile.settings.get(\"debug.auto_accept_invites\")\n            )\n        )\n        else ConnRecord.ACCEPT_MANUAL\n    )\n    service_item = invitation.services[0]\n    # Create connection record\n    conn_rec = ConnRecord(\n        invitation_key=(\n            DIDKey.from_did(service_item.recipient_keys[0]).public_key_b58\n            if isinstance(service_item, OOBService)\n            else None\n        ),\n        invitation_msg_id=invitation._id,\n        their_label=invitation.label,\n        their_role=ConnRecord.Role.RESPONDER.rfc23,\n        state=ConnRecord.State.INVITATION.rfc23,\n        accept=accept,\n        alias=alias,\n        their_public_did=their_public_did,\n        connection_protocol=DIDX_PROTO,\n    )\n\n    async with self.profile.session() as session:\n        await conn_rec.save(\n            session,\n            reason=\"Created new connection record from invitation\",\n            log_params={\n                \"invitation\": invitation,\n                \"their_role\": ConnRecord.Role.RESPONDER.rfc23,\n            },\n        )\n\n        # Save the invitation for later processing\n        ...\n\n    return conn_rec\n
"},{"location":"testing/UnitTests/#other-details","title":"Other details","text":"
  with self.assertRaises(DIDXManagerError) as ctx:\n     ...\n  assert \" ... error ...\" in str(ctx.exception)\n
"}]} \ No newline at end of file diff --git a/main/sitemap.xml.gz b/main/sitemap.xml.gz index 3b23ac0316..b88fd7e40d 100644 Binary files a/main/sitemap.xml.gz and b/main/sitemap.xml.gz differ