From e4f85f321c647d1bf92b209dffd9e771a8a4d153 Mon Sep 17 00:00:00 2001
From: "github-actions[bot]"
Enter revocation registry ID: WGmUNAdH2ZfeGvacFoMVVP:4:WGmUNAdH2ZfeGvacFoMVVP:3:CL:38:Faber.Agent.degree_schema:CL_ACCUM:15ca49ed-1250-4608-9e8f-c0d52d7260c3 Enter credential revocation ID: 1 Publish now? [Y/N]: y
Note that you need to Publish the revocation information to the ledger. Once you've revoked a credential any proof which uses this credential will fail to verify. \n\nRotating the revocation registry will decommission any \"ready\" registry records and create 2 new registry records. You can view in the logs as the records are created and transition to 'active'. There should always be 2 'active' revocation registries - one working and one for hot-swap. Note that revocation information can still be published from decommissioned registries.\n\nYou can also list the created registries, filtering by current state: 'init', 'generated', 'posted', 'active', 'full', 'decommissioned'.\n\n### DID Exchange\n\nYou can enable DID Exchange using the `--did-exchange` parameter for the `alice` and `faber` demos.\n\nThis will use the new DID Exchange protocol when establishing connections between the agents, rather than the older Connection protocol. There is no other affect on the operation of the agents.\n\nWith DID Exchange, you can also enable use of the inviter's public DID for invitations, multi-use invitations, connection re-use, and use of qualified DIDs:\n\n- `--public-did-connections` - use the inviter's public DID in invitations, and allow use of implicit invitations\n- `--reuse-connections` - support connection re-use (invitee will reuse an existing connection if it uses the same DID as in the new invitation)\n- `--multi-use-invitations` - inviter will issue multi-use invitations\n- `--emit-did-peer-4` - participants will prefer use of did:peer:4 for their pairwise connection DIDs\n- `--emit-did-peer-2` - participants will prefer use of did:peer:2 for their pairwise connection DIDs\n\n### Endorser\n\nThis is described in [Endorser.md](Endorser.md)\n\n### Mediation\n\nTo enable mediation, run the `alice` or `faber` demo with the `--mediation` option:\n\n```bash\n./run_demo faber --mediation\n
This will start up a \"mediator\" agent with Alice or Faber and automatically set the alice/faber connection to use the mediator.
"},{"location":"demo/#multi-ledger","title":"Multi-ledger","text":"To enable multiple ledger mode, run the alice
or faber
demo with the --multi-ledger
option:
./run_demo faber --multi-ledger\n
The configuration file for setting up multiple ledgers (for the demo) can be found at ./demo/multiple_ledger_config.yml
.
To enable support for multi-tenancy, run the alice
or faber
demo with the --multitenant
option:
./run_demo faber --multitenant\n
(This option can be used with both (or either) alice
and/or faber
.)
You will see an additional menu option to create new sub-wallets (or they can be considered to be \"virtual agents\").
Faber:
(1) Issue Credential\n (1a) Set Credential Type (indy)\n (2) Send Proof Request\n (3) Send Message\n (4) Create New Invitation\n (W) Create and/or Enable Wallet\n (T) Toggle tracing on credential/proof exchange\n (X) Exit?\n
Alice:
(3) Send Message\n (4) Input New Invitation\n (W) Create and/or Enable Wallet\n (X) Exit?\n
When you create a new wallet, you just need to provide the wallet name. (If you provide the name of an existing wallet then the controller will \"activate\" that wallet and make it the current wallet.)
[1/2/3/4/W/T/X] w\n\nEnter wallet name: new_wallet_12\n\nFaber | Register or switch to wallet new_wallet_12\nFaber | Created new profile\nFaber | Profile backend: indy\nFaber | Profile name: new_wallet_12\nFaber | No public DID\n... etc\n
Note that faber
will create a public DID for this wallet, and will create a schema and credential definition.
Once you have created a new wallet, you must establish a connection between alice
and faber
(remember that this is a new \"virtual agent\" and doesn't know anything about connections established for other \"agents\").
In faber, create a new invitation:
[1/2/3/4/W/T/X] 4\n\n(... creates a new invitation ...)\n
In alice, accept the invitation:
[1/2/3/4/W/T/X] 4\n\n(... enter the new invitation string ...)\n
You can inspect the additional multi-tenancy admin API's (i.e. the \"agency API\" by opening either agent's swagger page in your browser:
Show me a screenshot - multi-tenancy via admin APINote that with multi-tenancy enabled:
Documentation on ACA-Py's multi-tenancy support can be found here.
"},{"location":"demo/#multi-tenancy-with-mediation","title":"Multi-tenancy with Mediation!!!","text":"There are two options for configuring mediation with multi-tenancy, documented here.
This demo implements option #2 - each sub-wallet is configured with a separate connection to the mediator.
Run the demo (Alice or Faber) specifying both options:
./run_demo faber --multitenant --mediation\n
This works exactly as the vanilla multi-tenancy, except that all connections are mediated.
"},{"location":"demo/#other-environment-settings","title":"Other Environment Settings","text":"The agents run on a pre-defined set of ports, however occasionally your local system may already be using one of these ports. (For example MacOS recently decided to use 8021 for the ftp proxy service.)
To override the default port settings:
AGENT_PORT_OVERRIDE=8010 ./run_demo faber\n
(The agent requires up to 10 available ports.)
To pass extra arguments to the agent (for example):
DEMO_EXTRA_AGENT_ARGS=\"[\\\"--emit-did-peer-2\\\"]\" ./run_demo faber --did-exchange --reuse-connections\n
Additionally, separating the build and run functionalities in the script allows for smoother development and debugging processes. With the mounting of volumes from the host into the Docker container, code changes can be automatically reloaded without the need to repeatedly build the demo.
Build Command:
./demo/run_demo build alice --wallet-type askar-anoncreds --events\n
Run Command:
./demo/run_demo run alice --wallet-type askar-anoncreds --events\n
"},{"location":"demo/#learning-about-the-alicefaber-code","title":"Learning about the Alice/Faber code","text":"These Alice and Faber scripts (in the demo/runners
folder) implement the controller and run the agent as a sub-process (see the documentation for aca-py
). The controller publishes a REST service to receive web hook callbacks from their agent. Note that this architecture, running the agent as a sub-process, is a variation on the documented architecture of running the controller and agent as separate processes/containers.
The controllers for this demo can be found in the alice.py and faber.py files. Alice and Faber are instances of the agent class found in agent.py.
"},{"location":"demo/#openapi-swagger-demo","title":"OpenAPI (Swagger) Demo","text":"Developing an ACA-Py controller is much like developing a web app that uses a REST API. As you develop, you will want an easy way to test out the behaviour of the API. That's where the industry-standard OpenAPI (aka Swagger) UI comes in. ACA-Py (optionally) exposes an OpenAPI UI in ACA-Py that you can use to learn the ins and outs of the API. This ACA-Py OpenAPI demo shows how you can use the OpenAPI UI with an ACA-Py agent by walking through the connecting, issuing a credential, and presenting a proof sequence.
"},{"location":"demo/#performance-demo","title":"Performance Demo","text":"Another example in the demo/runners
folder is performance.py, that is used to test out the performance of interacting agents. The script starts up agents for Alice and Faber, initializes them, and then runs through an interaction some number of times. In this case, Faber issues a credential to Alice 300 times.
To run the demo, make sure that you shut down any running Alice/Faber agents. Then, follow the same steps to start the Alice/Faber demo, but:
faber
) with performance
.alice
) at all.The script starts both agents, runs the performance test, spits out performance results and shuts down the agents. Note that this is just one demonstration of how performance metrics tracking can be done with ACA-Py.
A second version of the performance test can be run by adding the parameter --routing
to the invocation above. The parameter triggers the example to run with Alice using a routing agent such that all messages pass through the routing agent between Alice and Faber. This is a good, simple example of how routing can be implemented with DIDComm agents.
You can also run the demo against a postgres database using the following:
./run_demo performance --arg-file demo/postgres-indy-args.yml\n
(Obviously you need to be running a postgres database - the command to start postgres is in the yml file provided above.)
You can tweak the number of credentials issued using the --count
and --batch
parameters, and you can run against an Askar database using the --wallet-type askar
option (or run using indy-sdk using --wallet-type indy
).
An example full set of options is:
./run_demo performance --arg-file demo/postgres-indy-args.yml -c 10000 -b 10 --wallet-type askar\n
Or:
./run_demo performance --arg-file demo/postgres-indy-args.yml -c 10000 -b 10 --wallet-type indy\n
"},{"location":"demo/#coding-challenge-adding-acme","title":"Coding Challenge: Adding ACME","text":"Now that you have a solid foundation in using ACA-Py, time for a coding challenge. In this challenge, we extend the Alice-Faber command line demo by adding in ACME Corp, a place where Alice wants to work. The demo adds:
The framework for the code is in the acme.py file, but the code is incomplete. Using the knowledge you gained from running demo and viewing the alice.py and faber.py code, fill in the blanks for the code. When you are ready to test your work:
All done? Checkout how we added the missing code segments here.
"},{"location":"demo/ACA-Py-Workshop/","title":"ACA-Py and AnonCreds Workshop Using Traction Sandbox","text":""},{"location":"demo/ACA-Py-Workshop/#introduction","title":"Introduction","text":"Welcome! This workshop contains a sequence of four labs that gets you from nothing to issuing, receiving, holding, requesting, presenting, and verifying AnonCreds Verifiable Credentials--no technical experience required! If you just walk through the steps exactly as laid out, it only takes about 20 minutes to complete the whole process. Of course, we hope you get curious, experiment, and learn a lot more about the information provided in the labs.
To run the labs, you\u2019ll need an ACA-Py agent to be able to issue and verify verifiable credentials. For that, we're providing your with your very own tenant in a BC Gov \"sandbox\" deployment of an open source tool called Traction, a managed, production-ready, multi-tenant decentralized trust agent built on ACA-Py. Sandbox in this context means that you can do whatever you want with your tenant agent, but we make no promises about the stability of the environment (but it\u2019s pretty robust, so chances are, things will work...), **and on the 1st and 15th of each month, we\u2019ll reset the entire sandbox and all your work will be gone \u2014 poof! **Keep that in mind, as you use the Traction sandbox. We recommend you keep a notebook at your side, tracking the important learnings you want to remember. As you create code that uses your sandbox agent make sure you create simple-to-update configurations so that after a reset, you can create a new tenant agent, recreate the objects you need (each of which will have new identifiers), update your configuration, and off you go.
The four labs in this workshop are laid out as follows:
Once you are done the labs, there are suggestions for next steps for developers, such as experimenting with the Traction/ACA-Py
Jump in!
"},{"location":"demo/ACA-Py-Workshop/#lab-1-getting-a-traction-tenant-agent-and-mobile-wallet","title":"Lab 1: Getting a Traction Tenant Agent and Mobile Wallet","text":"Let\u2019s start by getting your two agents \u2014 an Aries Mobile Wallet and an Aries Issuer/Verifier agent.
"},{"location":"demo/ACA-Py-Workshop/#lab-1-steps-to-follow","title":"Lab 1: Steps to Follow","text":"Action
in the Endorser section.active
.active
, it's possible that your wallet was not able to message back to your Traction Tenant. Check your wallet internet connection.That's it--you should be ready to start issuing and receiving verifiable credentials.
"},{"location":"demo/ACA-Py-Workshop/#lab-2-getting-ready-to-be-an-issuer","title":"Lab 2: Getting Ready To Be An Issuer","text":"In this lab we will use our Traction Tenant agent to create and publish an AnonCreds Schema object (or two), and then use that Schema to create and publish a Credential Definition. All of the AnonCreds objects will be published on the BCovrin (pronounced \u201cBe Sovereign\u201d) Test network. For those new to AnonCreds:
claims
) in a credential. An issuer often publishes their own schema, but they may also use one published by someone else. For example, a group of universities all might use the schema published by the \"Association of Universities and Colleges\" to which they belong.CredDef
) is published by the issuer, linking together Issuer's DID with the schema upon which the credentials will be issued, and containing the public key material needed to verify presentations of the credential. Revocation Registries are also linked to the Credential Definition, enabling an issuer to revoke credentials when necessary.Schema Id
with the value H7W22uhD4ueQdGaGeiCgaM:2:student id:1.0.0
.>
) link, and then the subsequent >
to \u201cView Raw Content.\"Completed all the steps? Great! Feel free to create a second Schema and Cred Def, ideally one related to your first. That way you can try out a presentation request that pulls data from both credentials! When you create the second schema, use the \"Create Schema\" button, and add the claims you want to have in your new type of credential.
"},{"location":"demo/ACA-Py-Workshop/#lab-3-issuing-credentials-to-a-mobile-wallet","title":"Lab 3: Issuing Credentials to a Mobile Wallet","text":"In this lab we will use our Traction Tenant agent to issue instances of the credentials we created in Lab 2 to our Mobile Wallet we downloaded in Lab 1.
"},{"location":"demo/ACA-Py-Workshop/#lab-3-steps-to-follow","title":"Lab 3: Steps to Follow","text":"YYYYMMDD
, e.g., 20231001
. You cannot use a string date format, such as \u201cYYYY-MM-DD\u201d if you want to use the attribute for predicate checking -- the value must be an integer.That\u2019s it! Pretty easy, eh? Of course, in a real issuer, the data would (very, very) likely not be hand-entered, but instead come from a backend system. Traction has an HTTP API (protected by the same Wallet ID and Key) that can be used from an application, to do things like this automatically. The Traction API embeds the ACA-Py API, so everything you can do in \u201cplain ACA-Py\u201d can also be done in Traction.
"},{"location":"demo/ACA-Py-Workshop/#lab-4-requesting-and-sending-presentations","title":"Lab 4: Requesting and Sending Presentations","text":"In this lab we will use our Traction Tenant agent as a verifier, requesting presentations, and your mobile Wallet as the holder responding with presentations that satisfy the requests. The user interface is a little rougher for this lab (you\u2019ll be dealing with JSON), but it should still be easy enough to do.
"},{"location":"demo/ACA-Py-Workshop/#lab-4-steps-to-follow","title":"Lab 4: Steps to Follow","text":"p_value
should be a relevant date \u2014 e.g., 19 (or whatever) years ago today for \u201colder than\u201d, and today for \u201cnot expired\u201d, both in the YYYYMMDD
format (the integer form of the date).p_type
should be >=
for the \u201colder than\u201d, and =<
for \u201cnot expired\u201d. See the table below for the form of the expression form.That completes this lab \u2014 although feel free to continue to play with all of the steps (setup, issuing and presenting). You should have a pretty solid handle on exactly what you can and can\u2019t do with AnonCreds!
"},{"location":"demo/ACA-Py-Workshop/#whats-next","title":"What's Next","text":"The following are a couple of things that you might want to do next--if you are a developer. Unlike the labs you have just completed, these \"next steps\" are geared towards developers, providing details about building the use of verifiable credentials (issuing, verifying) into your own application.
Want to use Traction in your own environment? Feel free! It's open source, and comes with Helm Charts for easy deployment in container-orchestrated environments. Contributions back to the project are always welcome!
"},{"location":"demo/ACA-Py-Workshop/#whats-next-the-aca-py-openapi","title":"What\u2019s Next: The ACA-Py OpenAPI","text":"Are you going to build an app that uses Traction or an instance of ACA-Py? If so, your next step is to try out the ACA-Py OpenAPI (aka Swagger)\u2014by hand at first, and then from your application. This is a VERY high level overview, assuming a developer is following this, and knows a bunch about Aries protocols, using HTTP APIs, and using OpenAPI interfaces.
To access and use your Tenant's OpenAPI (aka Swagger) interface:
The ACA-Py/Traction API is pretty large, but it is reasonably well organized, and you should recognize from the Traction API a lot of the items. Try some of the \u201cGET\u201d endpoints to see if you recognize the items.
We\u2019re still working on a good demo for the OpenAPI from Traction, but this one from ACA-Py is a good outline of the process. It doesn't use your Traction Tenant, but you should get the idea about the sequence of calls to make to accomplish Aries-type activities. For example, see if you can carry out the steps to do the Lab 4 with your mobile agent by invoking the right sequence of OpenAPI calls.
"},{"location":"demo/ACA-Py-Workshop/#whats-next-experiment-with-an-issuer-web-app","title":"What's Next: Experiment With an Issuer Web App","text":"If you are challenged to use Traction or ACA-Py to become an issuer, you will likely be building API calls into your Line of Business web application. To get an idea of what that will entail, we're delighted to direct you to a very simple Web App that one of your predecessors on this same journey created (and contributed!) to learn more about using the Traction OpenAPI in a very simple Web App. Checkout this Traction Issuance Demo and try it out yourself, with your Sandbox tenant. Once you review the code, you should have an excellent idea of how you can add these same capabilities to your line of business application.
"},{"location":"demo/AcmeDemoWorkshop/","title":"Acme Controller Workshop","text":"In this workshop we will add some functionality to a third participant in the Alice/Faber drama - namely, Acme Inc. After completing her education at Faber College, Alice is going to apply for a job at Acme Inc. To do this she must provide proof of education (once she has completed the interview and other non-Indy tasks), and then Acme will issue her an employment credential.
Note that an updated Acme controller is available here: https://github.com/ianco/aries-cloudagent-python/tree/acme_workshop/demo if you just want to skip ahead ... There is also an alternate solution with some additional functionality available here: https://github.com/ianco/aries-cloudagent-python/tree/agent_workshop/demo
"},{"location":"demo/AcmeDemoWorkshop/#preview-of-the-acme-controller","title":"Preview of the Acme Controller","text":"There is already a skeleton of the Acme controller in place, you can run it as follows. (Note that beyond establishing a connection it doesn't actually do anything yet.)
To run the Acme controller template, first run Alice and Faber so that Alice can prove her education experience:
Open 2 bash shells, and in each run:
git clone https://github.com/openwallet-foundation/acapy.git\ncd acapy/demo\n
In one shell run Faber:
LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber\n
... and in the second shell run Alice:
LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo alice\n
When Faber has produced an invitation, copy it over to Alice.
Then, in the Faber shell, select option 1
to issue a credential to Alice. (You can select option 2
if you like, to confirm via proof.)
Then, in the Faber shell, enter X
to exit the controller, and then run the Acme controller:
LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo acme\n
In the Alice shell, select option 4
(to enter a new invitation) and then copy over Acme's invitation once it's available.
Then, in the Acme shell, you can select option 2
and then option 1
, which don't do anything ... yet!!!
In the Acme code acme.py
we are going to add code to issue a proof request to Alice, and then validate the received proof.
First the following import statements and constants that we will need near the top of acme.py:
import random\n\nfrom datetime import date\nfrom uuid import uuid4\n
TAILS_FILE_COUNT = int(os.getenv(\"TAILS_FILE_COUNT\", 100))\nCRED_PREVIEW_TYPE = \"https://didcomm.org/issue-credential/2.0/credential-preview\"\n
Next locate the code that is triggered by option 2
:
elif option == \"2\":\n log_status(\"#20 Request proof of degree from alice\")\n # TODO presentation requests\n
Replace the # TODO
comment with the following code:
req_attrs = [\n {\n \"name\": \"name\",\n \"restrictions\": [{\"schema_name\": \"degree schema\"}]\n },\n {\n \"name\": \"date\",\n \"restrictions\": [{\"schema_name\": \"degree schema\"}]\n },\n {\n \"name\": \"degree\",\n \"restrictions\": [{\"schema_name\": \"degree schema\"}]\n }\n ]\n req_preds = []\n indy_proof_request = {\n \"name\": \"Proof of Education\",\n \"version\": \"1.0\",\n \"nonce\": str(uuid4().int),\n \"requested_attributes\": {\n f\"0_{req_attr['name']}_uuid\": req_attr\n for req_attr in req_attrs\n },\n \"requested_predicates\": {}\n }\n proof_request_web_request = {\n \"connection_id\": agent.connection_id,\n \"presentation_request\": {\"indy\": indy_proof_request},\n }\n # this sends the request to our agent, which forwards it to Alice\n # (based on the connection_id)\n await agent.admin_POST(\n \"/present-proof-2.0/send-request\",\n proof_request_web_request\n )\n
Now we need to handle receipt of the proof. Locate the code that handles received proofs (this is in a webhook callback):
if state == \"presentation-received\":\n # TODO handle received presentations\n pass\n
then replace the # TODO
comment and the pass
statement:
log_status(\"#27 Process the proof provided by X\")\n log_status(\"#28 Check if proof is valid\")\n proof = await self.admin_POST(\n f\"/present-proof-2.0/records/{pres_ex_id}/verify-presentation\"\n )\n self.log(\"Proof = \", proof[\"verified\"])\n\n # if presentation is a degree schema (proof of education),\n # check values received\n pres_req = message[\"by_format\"][\"pres_request\"][\"indy\"]\n pres = message[\"by_format\"][\"pres\"][\"indy\"]\n is_proof_of_education = (\n pres_req[\"name\"] == \"Proof of Education\"\n )\n if is_proof_of_education:\n log_status(\"#28.1 Received proof of education, check claims\")\n for (referent, attr_spec) in pres_req[\"requested_attributes\"].items():\n if referent in pres['requested_proof']['revealed_attrs']:\n self.log(\n f\"{attr_spec['name']}: \"\n f\"{pres['requested_proof']['revealed_attrs'][referent]['raw']}\"\n )\n else:\n self.log(\n f\"{attr_spec['name']}: \"\n \"(attribute not revealed)\"\n )\n for id_spec in pres[\"identifiers\"]:\n # just print out the schema/cred def id's of presented claims\n self.log(f\"schema_id: {id_spec['schema_id']}\")\n self.log(f\"cred_def_id {id_spec['cred_def_id']}\")\n # TODO placeholder for the next step\n else:\n # in case there are any other kinds of proofs received\n self.log(\"#28.1 Received \", pres_req[\"name\"])\n
Right now this just verifies the proof received and prints out the attributes it reveals, but in \"real life\" your application could do something useful with this information.
Now you can run the Faber/Alice/Acme script from the \"Preview of the Acme Controller\" section above, and you should see Acme receive a proof from Alice!
"},{"location":"demo/AcmeDemoWorkshop/#issuing-alice-a-work-credential","title":"Issuing Alice a Work Credential","text":"Now we can issue a work credential to Alice!
There are two options for this. We can (a) add code under option 1
to issue the credential, or (b) we can automatically issue this credential on receipt of the education proof.
We're going to do option (a), but you can try to implement option (b) as homework. You have most of the information you need from the proof response!
First though we need to register a schema and credential definition. Find this code:
# acme_schema_name = \"employee id schema\"\n # acme_schema_attrs = [\"employee_id\", \"name\", \"date\", \"position\"]\n await acme_agent.initialize(\n the_agent=agent,\n # schema_name=acme_schema_name,\n # schema_attrs=acme_schema_attrs,\n )\n\n # TODO publish schema and cred def\n
... and uncomment the code lines. Replace the # TODO
comment with the following code:
with log_timer(\"Publish schema and cred def duration:\"):\n # define schema\n version = format(\n \"%d.%d.%d\"\n % (\n random.randint(1, 101),\n random.randint(1, 101),\n random.randint(1, 101),\n )\n )\n # register schema and cred def\n (schema_id, cred_def_id) = await agent.register_schema_and_creddef(\n \"employee id schema\",\n version,\n [\"employee_id\", \"name\", \"date\", \"position\"],\n support_revocation=False,\n revocation_registry_size=TAILS_FILE_COUNT,\n )\n
For option (1) we want to replace the # TODO
comment here:
elif option == \"1\":\n log_status(\"#13 Issue credential offer to X\")\n # TODO credential offers\n
with the following code:
agent.cred_attrs[cred_def_id] = {\n \"employee_id\": \"ACME0009\",\n \"name\": \"Alice Smith\",\n \"date\": date.isoformat(date.today()),\n \"position\": \"CEO\"\n }\n cred_preview = {\n \"@type\": CRED_PREVIEW_TYPE,\n \"attributes\": [\n {\"name\": n, \"value\": v}\n for (n, v) in agent.cred_attrs[cred_def_id].items()\n ],\n }\n offer_request = {\n \"connection_id\": agent.connection_id,\n \"comment\": f\"Offer on cred def id {cred_def_id}\",\n \"credential_preview\": cred_preview,\n \"filter\": {\"indy\": {\"cred_def_id\": cred_def_id}},\n }\n await agent.admin_POST(\n \"/issue-credential-2.0/send-offer\", offer_request\n )\n
... and then locate the code that handles the credential request callback:
if state == \"request-received\":\n # TODO issue credentials based on offer preview in cred ex record\n pass\n
... and replace the # TODO
comment and pass
statement with the following code to issue the credential as Acme offered it:
# issue credentials based on offer preview in cred ex record\n if not message.get(\"auto_issue\"):\n await self.admin_POST(\n f\"/issue-credential-2.0/records/{cred_ex_id}/issue\",\n {\"comment\": f\"Issuing credential, exchange {cred_ex_id}\"},\n )\n
Now you can run the Faber/Alice/Acme steps again. You should be able to receive a proof and then issue a credential to Alice.
"},{"location":"demo/AliceGetsAPhone/","title":"Alice Gets a Mobile Agent!","text":"In this demo, we'll again use our familiar Faber ACA-Py agent to issue credentials to Alice, but this time Alice will use a mobile wallet. To do this we need to run the Faber agent on a publicly accessible port, and Alice will need a compatible mobile wallet. We'll provide pointers to where you can get them.
This demo also introduces revocation of credentials.
"},{"location":"demo/AliceGetsAPhone/#contents","title":"Contents","text":"faber
With Extra ParametersThis demo can be run on your local machine or on Play with Docker (PWD), and will demonstrate credential exchange and proof exchange as well as revocation with a mobile agent. Both approaches (running locally and on PWD) will be described, for the most part the commands are the same, but there are a couple of different parameters you need to provide when starting up.
If you are not familiar with how revocation is currently implemented in Hyperledger Indy, this article provides a good background on the technique. A challenge with revocation as it is currently implemented in Hyperledger Indy is the need for the prover (the agent creating the proof) to download tails files associated with the credentials it holds.
"},{"location":"demo/AliceGetsAPhone/#get-a-mobile-agent","title":"Get a mobile agent","text":"Of course for this, you need to have a mobile agent. To find, install and setup a compatible mobile agent, follow the instructions here.
"},{"location":"demo/AliceGetsAPhone/#running-locally-in-docker","title":"Running Locally in Docker","text":"Open a new bash shell and in a project directory run the following:
git clone https://github.com/openwallet-foundation/acapy.git\ncd acapy/demo\n
We'll come back to this in a minute, when we start the faber
agent!
There are a couple of extra steps you need to take to prepare to run the Faber agent locally:
"},{"location":"demo/AliceGetsAPhone/#install-ngrok-and-jq","title":"Install ngrok and jq","text":"ngrok is used to expose public endpoints for services running locally on your computer.
jq is a json parser that is used to automatically detect the endpoints exposed by ngrok.
You can install ngrok from here
You can download jq releases here
"},{"location":"demo/AliceGetsAPhone/#expose-services-publicly-using-ngrok","title":"Expose services publicly using ngrok","text":"Note that this is only required when running docker on your local machine. When you run on PWD a public endpoint for your agent is exposed automatically.
Since the mobile agent will need some way to communicate with the agent running on your local machine in docker, we will need to create a publicly accessible url for some services on your machine. The easiest way to do this is with ngrok. Once ngrok is installed, create a tunnel to your local machine:
ngrok http 8020\n
This service is used for your local aca-py agent - it is the endpoint that is advertised for other Aries agents to connect to.
You will see something like this:
Forwarding http://abc123.ngrok.io -> http://localhost:8020\nForwarding https://abc123.ngrok.io -> http://localhost:8020\n
This creates a public url for ports 8020 on your local machine.
Note that an ngrok process is created automatically for your tails server.
Keep this process running as we'll come back to it in a moment.
"},{"location":"demo/AliceGetsAPhone/#running-in-play-with-docker","title":"Running in Play With Docker","text":"To run the necessary terminal sessions in your browser, go to the Docker playground service Play with Docker. Don't know about Play with Docker? Check this out to learn more.
Open a new bash shell and in a project directory run the following:
git clone https://github.com/openwallet-foundation/acapy.git\ncd acapy/demo\n
We'll come back to this in a minute, when we start the faber
agent!
For revocation to function, we need another component running that is used to store what are called tails files.
If you are not running with revocation enabled you can skip this step.
"},{"location":"demo/AliceGetsAPhone/#running-locally-in-a-bash-shell","title":"Running locally in a bash shell?","text":"Open a new bash shell, and in a project directory, run:
git clone https://github.com/bcgov/indy-tails-server.git\ncd indy-tails-server/docker\n./manage build\n./manage start\n
This will run the required components for the tails server to function and make a tails server available on port 6543.
This will also automatically start an ngrok server that will expose a public url for your tails server - this is required to support mobile agents. The docker output will look something like this:
ngrok-tails-server_1 | t=2020-05-13T22:51:14+0000 lvl=info msg=\"started tunnel\" obj=tunnels name=\"command_line (http)\" addr=http://tails-server:6543 url=http://c5789aa0.ngrok.io\nngrok-tails-server_1 | t=2020-05-13T22:51:14+0000 lvl=info msg=\"started tunnel\" obj=tunnels name=command_line addr=http://tails-server:6543 url=https://c5789aa0.ngrok.io\n
Note the server name in the url=https://c5789aa0.ngrok.io
parameter (https://c5789aa0.ngrok.io
) - this is the external url for your tails server. Make sure you use the https
url!
Run the same steps on PWD as you would run locally (see above). Open a new shell (click on \"ADD NEW INSTANCE\") to run the tails server.
Note that with Play with Docker it can be challenging to capture the information you need from the log file as it scrolls by, you can try leaving off the --events
option when you run the Faber agent to reduce the quantity of information logged to the screen.
faber
With Extra Parameters","text":""},{"location":"demo/AliceGetsAPhone/#running-locally-in-a-bash-shell_1","title":"Running locally in a bash shell?","text":"If you are running in a local bash shell, navigate to the demo
directory in your fork/clone of the ACA-Py repository and run:
TAILS_NETWORK=docker_tails-server LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber --aip 10 --revocation --events\n
(Note that we have to start faber with --aip 10
for compatibility with mobile clients.)
The TAILS_NETWORK
parameter lets the demo script know how to connect to the tails server (which should be running in a separate shell on the same machine).
If you are running in Play with Docker, navigate to the demo
folder in the clone of ACA-Py and run the following:
PUBLIC_TAILS_URL=https://c4f7fbb85911.ngrok.io LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber --aip 10 --revocation --events\n
The PUBLIC_TAILS_URL
parameter lets the demo script know how to connect to the tails server. This can be running in another PWD session, or even on your local machine - the ngrok endpoint is public and will map to the correct location.
Use the ngrok url for the tails server that you noted earlier.
*Note that you must use the https
url for the tails server endpoint.
*Note - you may want to leave off the --events
option when you run the Faber agent, if you are finding you are getting too much logging output.
The Preparing agent image...
step on the first run takes a bit of time, so while we wait, let's look at the details of the commands. Running Faber is similar to the instructions in the Aries OpenAPI Demo \"Play with Docker\" section, except:
TAILS_NETWORK
parameter tells the ./run_demo
script how to connect to the tails server and determine the public ngrok endpoint.PUBLIC_TAILS_URL
environment variable is the address of your tails server (must be https
).--revocation
parameter to the ./run-demo
script activates the ACA-Py revocation issuance.As part of its startup process, the agent will publish a revocation registry to the ledger.
Click here to view screenshot of the revocation registry on the ledger"},{"location":"demo/AliceGetsAPhone/#accept-the-invitation","title":"Accept the Invitation","text":"When the Faber agent starts up it automatically creates an invitation and generates a QR code on the screen. On your mobile app, select \"SCAN CODE\" (or equivalent) and point your camera at the generated QR code. The mobile agent should automatically capture the code and ask you to confirm the connection. Confirm it.
Click here to view screenshotThe mobile agent will give you feedback on the connection process, something like \"A connection was added to your wallet\".
Click here to view screenshot Click here to view screenshotSwitch your browser back to Play with Docker. You should see that the connection has been established, and there is a prompt for what actions you want to take, e.g. \"Issue Credential\", \"Send Proof Request\" and so on.
Tip: If your screen is too small to display the QR code (this can happen in Play With Docker because the shell is only given a small portion of the browser) you can copy the invitation url to a site like https://www.the-qrcode-generator.com/ to convert the invitation url into a QR code that you can scan. Make sure you select the URL
option, and copy the invitation_url
, which will look something like:
https://abfde260.ngrok.io?c_i=eyJAdHlwZSI6ICJkaWQ6c292OkJ6Q2JzTlloTXJqSGlxWkRUVUFTSGc7c3BlYy9jb25uZWN0aW9ucy8xLjAvaW52aXRhdGlvbiIsICJAaWQiOiAiZjI2ZjA2YTItNWU1Mi00YTA5LWEwMDctOTNkODBiZTYyNGJlIiwgInJlY2lwaWVudEtleXMiOiBbIjlQRFE2alNXMWZwZkM5UllRWGhCc3ZBaVJrQmVKRlVhVmI0QnRQSFdWbTFXIl0sICJsYWJlbCI6ICJGYWJlci5BZ2VudCIsICJzZXJ2aWNlRW5kcG9pbnQiOiAiaHR0cHM6Ly9hYmZkZTI2MC5uZ3Jvay5pbyJ9\n
Or this:
http://ip10-0-121-4-bquqo816b480a4bfn3kg-8020.direct.play-with-docker.com?c_i=eyJAdHlwZSI6ICJkaWQ6c292OkJ6Q2JzTlloTXJqSGlxWkRUVUFTSGc7c3BlYy9jb25uZWN0aW9ucy8xLjAvaW52aXRhdGlvbiIsICJAaWQiOiAiZWI2MTI4NDUtYmU1OC00YTNiLTk2MGUtZmE3NDUzMGEwNzkyIiwgInJlY2lwaWVudEtleXMiOiBbIkFacEdoMlpIOTJVNnRFRTlmYk13Z3BqQkp3TEUzRFJIY1dCbmg4Y2FqdzNiIl0sICJzZXJ2aWNlRW5kcG9pbnQiOiAiaHR0cDovL2lwMTAtMC0xMjEtNC1icXVxbzgxNmI0ODBhNGJmbjNrZy04MDIwLmRpcmVjdC5wbGF5LXdpdGgtdm9uLnZvbnguaW8iLCAibGFiZWwiOiAiRmFiZXIuQWdlbnQifQ==\n
Note that this will use the ngrok endpoint if you are running locally, or your PWD endpoint if you are running on PWD.
"},{"location":"demo/AliceGetsAPhone/#issue-a-credential","title":"Issue a Credential","text":"We will use the Faber console to issue a credential. This could be done using the Swagger API as we have done in the connection process. We'll leave that as an exercise to the user.
In the Faber console, select option 1
to send a credential to the mobile agent.
The Faber agent outputs details to the console; e.g.,
Faber | Credential: state = credential-issued, cred_ex_id = ba3089d6-92da-4cb7-9062-7f24066b2a2a\nFaber | Revocation registry ID: CMqNjZ8e59jDuBYcquce4D:4:CMqNjZ8e59jDuBYcquce4D:3:CL:50:faber.agent.degree_schema:CL_ACCUM:4f4fb2e4-3a59-45b1-8921-578d005a7ff6\nFaber | Credential revocation ID: 1\nFaber | Credential: state = done, cred_ex_id = ba3089d6-92da-4cb7-9062-7f24066b2a2a\n
The revocation registry id and credential revocation id only appear if revocation is active. If you are doing revocation, you to need the Revocation registry id
later, so we recommend that you copy it it now and paste it into a text file or some place that you can access later. If you don't write it down, you can get the Id from the Admin API using the GET /revocation/active-registry/{cred_def_id}
endpoint, and passing in the credential definition Id (which you can get from the GET /credential-definitions/created
endpoint).
The credential offer should automatically show up in the mobile agent. Accept the offered credential following the instructions provided by the mobile agent. That will look something like this:
Click here to view screenshot Click here to view screenshot Click here to view screenshot"},{"location":"demo/AliceGetsAPhone/#issue-a-presentation-request","title":"Issue a Presentation Request","text":"We will use the Faber console to ask mobile agent for a proof. This could be done using the Swagger API, but we'll leave that as an exercise to the user.
In the Faber console, select option 2
to send a proof request to the mobile agent.
The presentation (proof) request should automatically show up in the mobile agent. Follow the instructions provided by the mobile agent to prepare and send the proof back to Faber. That will look something like this:
Click here to view screenshot Click here to view screenshot Click here to view screenshotIf the mobile agent is able to successfully prepare and send the proof, you can go back to the Play with Docker terminal to see the status of the proof.
The process should \"just work\" for the non-revocation use case. If you are using revocation, your results may vary. As of writing this, we get failures on the wallet side with some mobile wallets, and on the Faber side with others (an error in the Indy SDK). As the results improve, we'll update this. Please let us know through GitHub issues if you have any problems running this.
"},{"location":"demo/AliceGetsAPhone/#review-the-proof","title":"Review the Proof","text":"In the Faber console window, the proof should be received as validated.
Click here to view screenshot"},{"location":"demo/AliceGetsAPhone/#revoke-the-credential-and-send-another-proof-request","title":"Revoke the Credential and Send Another Proof Request","text":"If you have enabled revocation, you can try revoking the credential and publishing its pending revoked status (faber
options 5
and 6
). For the revocation step, You will need the revocation registry identifier and the credential revocation identifier (which is 1 for the first credential you issued), as the Faber agent logged them to the console at credential issue.
Once that is done, try sending another proof request and see what happens! Experiment with immediate and pending publication. Note that immediate publication also publishes any pending revocations on its revocation registry.
Click here to view screenshot"},{"location":"demo/AliceGetsAPhone/#send-a-connectionless-proof-request","title":"Send a Connectionless Proof Request","text":"A connectionless proof request works the same way as a regular proof request, however it does not require a connection to be established between the Verifier and Holder/Prover.
This is supported in the Faber demo, however note that it will only work when running Faber on the Docker playground service Play with Docker. (This is because both the Faber agent and controller both need to be exposed to the mobile agent.)
If you have gone through the above steps, you can delete the Faber connection in your mobile agent (however do not delete the credential that Faber issued to you).
Then in the faber demo, select option 2a
- Faber will display a QR code which you can scan with your mobile agent. You will see the same proof request displayed in your mobile agent, which you can respond to.
Behind the scenes, the Faber controller delivers the proof request information (linked from the url encoded in the QR code) directly to your mobile agent, without establishing and agent-to-agent connection first. If you are interested in the underlying mechanics, you can review the faber.py
code in the repository.
That\u2019s the Faber-Mobile Alice demo. Feel free to play with the Swagger API and experiment further and figure out what an instance of a controller has to do to make things work.
"},{"location":"demo/AliceWantsAJsonCredential/","title":"How to Issue JSON-LD Credentials using ACA-Py","text":"ACA-Py has the capability to issue and verify both Indy and JSON-LD (W3C compliant) credentials.
The JSON-LD support is documented here - this document will provide some additional detail in how to use the demo and admin api to issue and prove JSON-LD credentials.
"},{"location":"demo/AliceWantsAJsonCredential/#setup-agents-to-issue-json-ld-credentials","title":"Setup Agents to Issue JSON-LD Credentials","text":"Clone this repository to a directory on your local:
git clone https://github.com/openwallet-foundation/acapy.git\ncd acapy/demo\n
Open up a second shell (so you have 2 shells open in the demo
directory) and in one shell:
LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber --did-exchange --aip 20 --cred-type json-ld\n
... and in the other:
LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo alice\n
Note that you start the faber
agent with AIP2.0 options. (When you specify --cred-type json-ld
faber will set aip to 20
automatically, so the --aip
option is not strictly required). Note as well the use of the LEDGER_URL
. Technically, that should not be needed if we aren't doing anything with an Indy ledger-based credentials. However, there must be something in the way that the Faber and Alice controllers are starting up that requires access to a ledger.
Also note that the above will only work with the /issue-credential-2.0/create-offer
endpoint. If you want to use the /issue-credential-2.0/send
endpoint - which automates each step of the credential exchange - you will need to include the --no-auto
option when starting each of the alice and faber agents (since the alice and faber controllers also automatically respond to each step in the credential exchange).
(Alternately you can run run Alice and Faber agents locally, see the ./faber-local.sh
and ./alice-local.sh
scripts in the demo
directory.)
Copy the \"invitation\" json text from the Faber shell and paste into the Alice shell to establish a connection between the two agents.
(If you are running with --no-auto
you will also need to call the /connections/{conn_id}/accept-invitation
endpoint in alice's admin api swagger page.)
Now open up two browser windows to the Faber and Alice admin api swagger pages.
Using the Faber admin api, you have to create a DID with the appropriate:
Note that \"did:sov\" must be a public DID (i.e. registered on the ledger) but \"did:key\" is not.
For example, in Faber's swagger page call the /wallet/did/create
endpoint with the following payload:
{\n \"method\": \"key\",\n \"options\": {\n \"key_type\": \"bls12381g2\" // or ed25519\n }\n}\n
This will return something like:
{\n \"result\": {\n \"did\": \"did:key:zUC71KdwBhq1FioWh53VXmyFiGpewNcg8Ld42WrSChpMzzskRWwHZfG9TJ7hPj8wzmKNrek3rW4ZkXNiHAjVchSmTr9aNUQaArK3KSkTySzjEM73FuDV62bjdAHF7EMnZ27poCE\",\n \"verkey\": \"mV6482Amu6wJH8NeMqH3QyTjh6JU6N58A8GcirMZG7Wx1uyerzrzerA2EjnhUTmjiSLAp6CkNdpkLJ1NTS73dtcra8WUDDBZ3o455EMrkPyAtzst16RdTMsGe3ctyTxxJav\",\n \"posture\": \"wallet_only\",\n \"key_type\": \"bls12381g2\",\n \"method\": \"key\"\n }\n}\n
You do not create a schema or cred def for a JSON-LD credential (these are only required for \"indy\" credentials).
You will need to create a DID as above for Alice as well (/wallet/did/create
etc ...).
Congratulations, you are now ready to start issuing JSON-LD credentials!
connection_id
into the examples below.issuer
).credentialSubject.id
- this is required for Alice to sign the proof (the credentialSubject.id
is not required, but then the provided presentation can't be verified).To issue a credential, use the /issue-credential-2.0/send-offer
endpoint. (You can also use the /issue-credential-2.0/send
) endpoint, if, as mentioned above, you have included the --no-auto
when starting both of the agents.)
You can test with this example payload (just replace the \"connection_id\", \"issuer\" key, \"credentialSubject.id\" and \"proofType\" with appropriate values:
{\n \"connection_id\": \"4fba2ce5-b411-4ecf-aa1b-ec66f3f6c903\",\n \"filter\": {\n \"ld_proof\": {\n \"credential\": {\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://www.w3.org/2018/credentials/examples/v1\"\n ],\n \"type\": [\"VerifiableCredential\", \"UniversityDegreeCredential\"],\n \"issuer\": \"did:key:zUC71KdwBhq1FioWh53VXmyFiGpewNcg8Ld42WrSChpMzzskRWwHZfG9TJ7hPj8wzmKNrek3rW4ZkXNiHAjVchSmTr9aNUQaArK3KSkTySzjEM73FuDV62bjdAHF7EMnZ27poCE\",\n \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n \"credentialSubject\": {\n \"id\": \"did:key:aksdkajshdkajhsdkjahsdkjahsdj\",\n \"givenName\": \"Sally\",\n \"familyName\": \"Student\",\n \"degree\": {\n \"type\": \"BachelorDegree\",\n \"degreeType\": \"Undergraduate\",\n \"name\": \"Bachelor of Science and Arts\"\n },\n \"college\": \"Faber College\"\n }\n },\n \"options\": {\n \"proofType\": \"BbsBlsSignature2020\"\n }\n }\n }\n}\n
Note that if you have the \"auto\" settings on, this is all you need to do. Otherwise you need to call the /send-request
, /store
, etc endpoints to complete the protocol.
To see the issued credential, call the /credentials/w3c
endpoint on Alice's admin api - this will return something like:
{\n \"results\": [\n {\n \"contexts\": [\n \"https://w3id.org/security/bbs/v1\",\n \"https://www.w3.org/2018/credentials/examples/v1\",\n \"https://www.w3.org/2018/credentials/v1\"\n ],\n \"types\": [\n \"UniversityDegreeCredential\",\n \"VerifiableCredential\"\n ],\n \"schema_ids\": [],\n \"issuer_id\": \"did:key:zUC71KdwBhq1FioWh53VXmyFiGpewNcg8Ld42WrSChpMzzskRWwHZfG9TJ7hPj8wzmKNrek3rW4ZkXNiHAjVchSmTr9aNUQaArK3KSkTySzjEM73FuDV62bjdAHF7EMnZ27poCE\",\n \"subject_ids\": [],\n \"proof_types\": [\n \"BbsBlsSignature2020\"\n ],\n \"cred_value\": {\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://www.w3.org/2018/credentials/examples/v1\",\n \"https://w3id.org/security/bbs/v1\"\n ],\n \"type\": [\n \"VerifiableCredential\",\n \"UniversityDegreeCredential\"\n ],\n \"issuer\": \"did:key:zUC71Kd...poCE\",\n \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n \"credentialSubject\": {\n \"id\": \"did:key:aksdkajshdkajhsdkjahsdkjahsdj\",\n \"givenName\": \"Sally\",\n \"familyName\": \"Student\",\n \"degree\": {\n \"type\": \"BachelorDegree\",\n \"degreeType\": \"Undergraduate\",\n \"name\": \"Bachelor of Science and Arts\"\n },\n \"college\": \"Faber College\"\n },\n \"proof\": {\n \"type\": \"BbsBlsSignature2020\",\n \"proofPurpose\": \"assertionMethod\",\n \"verificationMethod\": \"did:key:zUC71Kd...poCE#zUC71Kd...poCE\",\n \"created\": \"2021-05-19T16:19:44.458170\",\n \"proofValue\": \"g0weLyw2Q+niQ4pGfiXB...tL9C9ORhy9Q==\"\n }\n },\n \"cred_tags\": {},\n \"record_id\": \"365ab87b12f74b2db784fdd4db8419f5\"\n }\n ]\n}\n
If you don't see the credential in your wallet, look up the credential exchange record (in alice's admin api - /issue-credential-2.0/records
) and check the state. If the state is credential-received
, then the credential has been received but not stored, in this case just call the /store
endpoint for this credential exchange.
The above example uses the https://www.w3.org/2018/credentials/examples/v1
context, which should never be used in a real application.
To build credentials in real life, you first determine which attributes you need and then include the appropriate contexts.
"},{"location":"demo/AliceWantsAJsonCredential/#context-schemaorg","title":"Context schema.org","text":"You can use attributes defined on schema.org. Although this is NOT RECOMMENDED (included here for illustrative purposes only) - individual attributes can't be validated (see the comment later on).
You first include https://schema.org
in the @context
block of the credential as follows:
\"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://schema.org\"\n],\n
Then you review the attributes and objects defined by https://schema.org
and decide what you need to include in your credential.
For example to issue a credential with givenName, familyName and alumniOf attributes, submit the following:
{\n \"connection_id\": \"ad35a4d8-c84b-4a4f-a83f-1afbf134b8b9\",\n \"filter\": {\n \"ld_proof\": {\n \"credential\": {\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://schema.org\"\n ],\n \"type\": [\"VerifiableCredential\", \"Person\"],\n \"issuer\": \"did:key:zUC71pj2gpDLfcZ9DE1bMtjZGWCSLhkQsUCaKjqXtCftGkz27894pEX9VvGNiFsaV67gqv2TEPQ2aDaDDdTDNp42LfDdK1LaWSBCfzsQEyaiR1zjZm1RtoRu1ZM6v6vz4TiqDgU\",\n \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n \"credentialSubject\": {\n \"id\": \"did:key:aksdkajshdkajhsdkjahsdkjahsdj\",\n \"givenName\": \"Sally\",\n \"familyName\": \"Student\",\n \"alumniOf\": \"Example University\"\n }\n },\n \"options\": {\n \"proofType\": \"BbsBlsSignature2020\"\n }\n }\n }\n}\n
Note that with https://schema.org
, if you include attributes that aren't defined by any context, you will not get an error. For example you can try replacing the credentialSubject
in the above with:
\"credentialSubject\": {\n \"id\": \"did:key:aksdkajshdkajhsdkjahsdkjahsdj\",\n \"givenName\": \"Sally\",\n \"familyName\": \"Student\",\n \"alumniOf\": \"Example University\",\n \"someUndefinedAttribute\": \"the value of the attribute\"\n}\n
... and the credential issuance should fail, however https://schema.org
defines a @vocab
that by default all terms derive from (see here).
You can include more complex schemas, for example to use the schema.org Person schema (which includes givenName
and familyName
):
{\n \"connection_id\": \"ad35a4d8-c84b-4a4f-a83f-1afbf134b8b9\",\n \"filter\": {\n \"ld_proof\": {\n \"credential\": {\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://schema.org\"\n ],\n \"type\": [\"VerifiableCredential\", \"Person\"],\n \"issuer\": \"did:key:zUC71pj2gpDLfcZ9DE1bMtjZGWCSLhkQsUCaKjqXtCftGkz27894pEX9VvGNiFsaV67gqv2TEPQ2aDaDDdTDNp42LfDdK1LaWSBCfzsQEyaiR1zjZm1RtoRu1ZM6v6vz4TiqDgU\",\n \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n \"credentialSubject\": {\n \"id\": \"did:key:aksdkajshdkajhsdkjahsdkjahsdj\",\n \"student\": {\n \"type\": \"Person\",\n \"givenName\": \"Sally\",\n \"familyName\": \"Student\",\n \"alumniOf\": \"Example University\"\n }\n }\n },\n \"options\": {\n \"proofType\": \"BbsBlsSignature2020\"\n }\n }\n }\n}\n
"},{"location":"demo/AliceWantsAJsonCredential/#credential-specific-contexts","title":"Credential-Specific Contexts","text":"The recommended approach to defining credentials is to define a credential-specific vocabulary (or make use of existing ones). (Note that these can include references to https://schema.org
, you just shouldn't use this directly in your credential.)
The following example uses the W3C citizenship context to issue a PermanentResident credential (replace the connection_id
, issuer
and credentialSubject.id
with your local values):
{\n \"connection_id\": \"41acd909-9f45-4c69-8641-8146e0444a57\",\n \"filter\": {\n \"ld_proof\": {\n \"credential\": {\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://w3id.org/citizenship/v1\"\n ],\n \"type\": [\n \"VerifiableCredential\",\n \"PermanentResident\"\n ],\n \"id\": \"https://credential.example.com/residents/1234567890\",\n \"issuer\": \"did:key:zUC7Dus47jW5Avcne8LLsUvJSdwspmErgehxMWqZZy8eSSNoHZ4x8wgs77sAmQtCADED5RQP1WWhvt7KFNm6GGMxdSGpKu3PX6R9a61G9VoVsiFoRf1yoK6pzhq9jtFP3e2SmU9\",\n \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n \"credentialSubject\": {\n \"type\": [\n \"PermanentResident\"\n ],\n \"id\": \"did:key:zUC7CXi82AXbkv4SvhxDxoufrLwQSAo79qbKiw7omCQ3c4TyciDdb9s3GTCbMvsDruSLZX6HNsjGxAr2SMLCNCCBRN5scukiZ4JV9FDPg5gccdqE9nfCU2zUcdyqRiUVnn9ZH83\",\n \"givenName\": \"ALICE\",\n \"familyName\": \"SMITH\",\n \"gender\": \"Female\",\n \"birthCountry\": \"Bahamas\",\n \"birthDate\": \"1958-07-17\"\n }\n },\n \"options\": {\n \"proofType\": \"BbsBlsSignature2020\"\n }\n }\n }\n}\n
Copy and paste this content into Faber's /issue-credential-2.0/send-offer
endpoint, and it will kick off the exchange process to issue a W3C credential to Alice.
In Alice's swagger page, submit the /credentials/records/w3c
endpoint to see the issued credential.
To request a proof, submit the following (with appropriate connection_id
) to Faber's /present-proof-2.0/send-request
endpoint:
{\n \"comment\": \"string\",\n \"connection_id\": \"41acd909-9f45-4c69-8641-8146e0444a57\",\n \"presentation_request\": {\n \"dif\": {\n \"options\": {\n \"challenge\": \"3fa85f64-5717-4562-b3fc-2c963f66afa7\",\n \"domain\": \"4jt78h47fh47\"\n },\n \"presentation_definition\": {\n \"id\": \"32f54163-7166-48f1-93d8-ff217bdb0654\",\n \"format\": {\n \"ldp_vp\": {\n \"proof_type\": [\n \"BbsBlsSignature2020\"\n ]\n }\n },\n \"input_descriptors\": [\n {\n \"id\": \"citizenship_input_1\",\n \"name\": \"EU Driver's License\",\n \"schema\": [\n {\n \"uri\": \"https://www.w3.org/2018/credentials#VerifiableCredential\"\n },\n {\n \"uri\": \"https://w3id.org/citizenship#PermanentResident\"\n }\n ],\n \"constraints\": {\n \"limit_disclosure\": \"required\",\n \"is_holder\": [\n {\n \"directive\": \"required\",\n \"field_id\": [\n \"1f44d55f-f161-4938-a659-f8026467f126\"\n ]\n }\n ],\n \"fields\": [\n {\n \"id\": \"1f44d55f-f161-4938-a659-f8026467f126\",\n \"path\": [\n \"$.credentialSubject.familyName\"\n ],\n \"purpose\": \"The claim must be from one of the specified issuers\",\n \"filter\": {\n \"const\": \"SMITH\"\n }\n },\n {\n \"path\": [\n \"$.credentialSubject.givenName\"\n ],\n \"purpose\": \"The claim must be from one of the specified issuers\"\n }\n ]\n }\n }\n ]\n }\n }\n }\n}\n
Note that the is_holder
property can be used by Faber to verify that the holder of credential is the same as the subject of the attribute (familyName
). Later on, the received presentation will be signed and verifiable only if is_holder
with \"directive\": \"required\"
is included in the presentation request.
There are several ways that Alice can respond with a presentation. The simplest will just tell ACA-Py to put the presentation together and send it to Faber - submit the following to Alice's /present-proof-2.0/records/{pres_ex_id}/send-presentation
:
{\n \"dif\": {\n }\n}\n
There are two ways that Alice can provide some constraints to tell ACA-Py which credential(s) to include in the presentation.
Firstly, Alice can include the received presentation request in the body to the /send-presentation
endpoint, and can include additional constraints on the fields:
{\n \"dif\": {\n \"issuer_id\": \"did:key:zUC7Dus47jW5Avcne8LLsUvJSdwspmErgehxMWqZZy8eSSNoHZ4x8wgs77sAmQtCADED5RQP1WWhvt7KFNm6GGMxdSGpKu3PX6R9a61G9VoVsiFoRf1yoK6pzhq9jtFP3e2SmU9\",\n \"presentation_definition\": {\n \"format\": {\n \"ldp_vp\": {\n \"proof_type\": [\n \"BbsBlsSignature2020\"\n ]\n }\n },\n \"id\": \"32f54163-7166-48f1-93d8-ff217bdb0654\",\n \"input_descriptors\": [\n {\n \"id\": \"citizenship_input_1\",\n \"name\": \"Some kind of citizenship check\",\n \"schema\": [\n {\n \"uri\": \"https://www.w3.org/2018/credentials#VerifiableCredential\"\n },\n {\n \"uri\": \"https://w3id.org/citizenship#PermanentResident\"\n }\n ],\n \"constraints\": {\n \"limit_disclosure\": \"required\",\n \"is_holder\": [\n {\n \"directive\": \"required\",\n \"field_id\": [\n \"1f44d55f-f161-4938-a659-f8026467f126\",\n \"332be361-823a-4863-b18b-c3b930c5623e\"\n ],\n }\n ],\n \"fields\": [\n {\n \"id\": \"1f44d55f-f161-4938-a659-f8026467f126\",\n \"path\": [\n \"$.credentialSubject.familyName\"\n ],\n \"purpose\": \"The claim must be from one of the specified issuers\",\n \"filter\": {\n \"const\": \"SMITH\"\n }\n },\n {\n \"id\": \"332be361-823a-4863-b18b-c3b930c5623e\",\n \"path\": [\n \"$.id\"\n ],\n \"purpose\": \"Specify the id of the credential to present\",\n \"filter\": {\n \"const\": \"https://credential.example.com/residents/1234567890\"\n }\n }\n ]\n }\n }\n ]\n }\n }\n}\n
Note the additional constraint on \"path\": [ \"$.id\" ]
- this restricts the presented credential to the one with the matching credential.id
. Any credential attribute can be used; however, this presumes that the issued credentials contain a uniquely identifying attribute.
Another option is for Alice to specify the credential record_id
- this is an internal value within ACA-Py:
{\n \"dif\": {\n \"issuer_id\": \"did:key:zUC7Dus47jW5Avcne8LLsUvJSdwspmErgehxMWqZZy8eSSNoHZ4x8wgs77sAmQtCADED5RQP1WWhvt7KFNm6GGMxdSGpKu3PX6R9a61G9VoVsiFoRf1yoK6pzhq9jtFP3e2SmU9\",\n \"presentation_definition\": {\n \"format\": {\n \"ldp_vp\": {\n \"proof_type\": [\n \"BbsBlsSignature2020\"\n ]\n }\n },\n \"id\": \"32f54163-7166-48f1-93d8-ff217bdb0654\",\n \"input_descriptors\": [\n {\n \"id\": \"citizenship_input_1\",\n \"name\": \"Some kind of citizenship check\",\n \"schema\": [\n {\n \"uri\": \"https://www.w3.org/2018/credentials#VerifiableCredential\"\n },\n {\n \"uri\": \"https://w3id.org/citizenship#PermanentResident\"\n }\n ],\n \"constraints\": {\n \"limit_disclosure\": \"required\",\n \"fields\": [\n {\n \"path\": [\n \"$.credentialSubject.familyName\"\n ],\n \"purpose\": \"The claim must be from one of the specified issuers\",\n \"filter\": {\n \"const\": \"SMITH\"\n }\n }\n ]\n }\n }\n ]\n },\n \"record_ids\": {\n \"citizenship_input_1\": [ \"1496316f972e40cf9b46b35971182337\" ]\n }\n }\n}\n
"},{"location":"demo/AliceWantsAJsonCredential/#another-credential-issue-example","title":"Another Credential Issue Example","text":"TBD the following credential is based on the W3C Vaccination schema:
{\n \"connection_id\": \"ad35a4d8-c84b-4a4f-a83f-1afbf134b8b9\",\n \"filter\": {\n \"ld_proof\": {\n \"credential\": {\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://w3id.org/vaccination/v1\"\n ],\n \"type\": [\"VerifiableCredential\", \"VaccinationCertificate\"],\n \"issuer\": \"did:key:zUC71pj2gpDLfcZ9DE1bMtjZGWCSLhkQsUCaKjqXtCftGkz27894pEX9VvGNiFsaV67gqv2TEPQ2aDaDDdTDNp42LfDdK1LaWSBCfzsQEyaiR1zjZm1RtoRu1ZM6v6vz4TiqDgU\",\n \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n \"credentialSubject\": {\n \"id\": \"did:key:aksdkajshdkajhsdkjahsdkjahsdj\",\n \"type\": \"VaccinationEvent\",\n \"batchNumber\": \"1183738569\",\n \"administeringCentre\": \"MoH\",\n \"healthProfessional\": \"MoH\",\n \"countryOfVaccination\": \"NZ\",\n \"recipient\": {\n \"type\": \"VaccineRecipient\",\n \"givenName\": \"JOHN\",\n \"familyName\": \"SMITH\",\n \"gender\": \"Male\",\n \"birthDate\": \"1958-07-17\"\n },\n \"vaccine\": {\n \"type\": \"Vaccine\",\n \"disease\": \"COVID-19\",\n \"atcCode\": \"J07BX03\",\n \"medicinalProductName\": \"COVID-19 Vaccine Moderna\",\n \"marketingAuthorizationHolder\": \"Moderna Biotech\"\n }\n }\n },\n \"options\": {\n \"proofType\": \"BbsBlsSignature2020\"\n }\n }\n }\n}\n
"},{"location":"demo/Endorser/","title":"Endorser Demo","text":"There are two ways to run the alice/faber demo with endorser support enabled.
"},{"location":"demo/Endorser/#run-faber-as-an-author-with-a-dedicated-endorser-agent","title":"Run Faber as an Author, with a dedicated Endorser agent","text":"This approach runs Faber as an un-privileged agent, and starts a dedicated Endorser Agent in a sub-process (an instance of ACA-Py) to endorse Faber's transactions.
Start a VON Network instance and a Tails server. Use the --logs option if you want to use the same terminal for running both VON Network and the Tails server. When you are finished with VON Network, follow the Stopping And Removing a VON Network instructions.
Start up Faber as Author (note the tails file size override, to allow testing of the revocation registry roll-over):
TAILS_FILE_COUNT=5 ./run_demo faber --endorser-role author --revocation\n
Start up Alice as normal:
./run_demo alice\n
You can run all of Faber's functions as normal - if you watch the console you will see that all ledger operations go through the endorser workflow.
If you issue more than 5 credentials, you will see Faber creating a new revocation registry (including endorser operations).
"},{"location":"demo/Endorser/#run-alice-as-an-author-and-faber-as-an-endorser","title":"Run Alice as an Author and Faber as an Endorser","text":"This approach sets up the endorser roles to allow manual testing using the agents' swagger pages:
Start a VON Network and a Tails server using the instructions above.
Start up Faber as Endorser:
TAILS_FILE_COUNT=5 ./run_demo faber --endorser-role endorser --revocation\n
Start up Alice as Author:
TAILS_FILE_COUNT=5 ./run_demo alice --endorser-role author --revocation\n
Copy the invitation from Faber to Alice to complete the connection.
Then in the Alice shell, select option \"D\" and copy Faber's DID (it is the DID displayed on faber agent startup).
This starts up the ACA-Py agents with the endorser role set (via the new command-line args) and sets up the connection between the 2 agents with appropriate configuration.
Then, in the Alice swagger page you can create a schema and cred def, and all the endorser steps will happen automatically. You don't need to specify a connection id or explicitly request endorsement (ACA-Py does it all automatically based on the startup args).
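As a sketch, creating a schema from the command line could look like the following (this assumes Alice's admin API is on port 8031, as in the demo; the schema name, version and attributes are arbitrary examples):
curl -s -X POST \"http://localhost:8031/schemas\" -H \"Content-Type: application/json\" -d '{\"schema_name\": \"prefs\", \"schema_version\": \"1.0\", \"attributes\": [\"score\"]}'\n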
If you check the endorser transaction records in either Alice or Faber, you can see that the endorser protocol executes automatically and that the appropriate endorsements were applied before the transactions were written to the ledger.
"},{"location":"demo/OpenAPIDemo/","title":"Aries OpenAPI Demo","text":"What better way to learn about controllers than by actually being one yourself! In this demo, that\u2019s just what happens\u2014you are the controller. You have access to the full set of API endpoints exposed by an ACA-Py instance, and you will see the events coming from ACA-Py as they happen. Using that information, you'll help Alice's and Faber's agents connect, Faber's agent issue an education credential to Alice, and then ask Alice to prove she possesses the credential. Who knows why Faber needs to get the proof, but it lets us show off more protocols.
"},{"location":"demo/OpenAPIDemo/#contents","title":"Contents","text":"We will get started by opening three browser tabs that will be used throughout the lab. Two will be Swagger UIs for the Faber and Alice agent and one for the public ledger (showing the Hyperledger Indy ledger). As well, we'll keep the terminal sessions where we started the demos handy, as we'll be grabbing information from them as well.
Let's start with the ledger browser. For this demo, we're going to use an open public ledger operated by the BC Government's VON Team. In your first browser tab, go to: http://test.bcovrin.vonx.io. This will be called the \"ledger tab\" in the instructions below.
For the rest of the set up, you can choose to run the terminal sessions in your browser (no local resources needed), or you can run them in Docker on your local system. Your choice; each is covered in the next two sections.
Note: In the following, when we start the agents we use several special demo settings. The command we use is this: LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber --events --no-auto --bg
. In that:
- The LEDGER_URL environment variable informs the agent what ledger to use.
- The --events option indicates that we want the controller to display the webhook events from ACA-Py in the log displayed on the terminal.
- The --no-auto option indicates that we don't want the ACA-Py agent to automatically handle some events such as connecting. We want the controller (you!) to handle each step of the protocol.
- The --bg option indicates that the docker container will run in the background, so accidentally hitting Ctrl-C won't stop the process.
To run the necessary terminal sessions in your browser, go to the Docker playground service Play with Docker. Don't know about Play with Docker? Check this out to learn more.
"},{"location":"demo/OpenAPIDemo/#start-the-faber-agent","title":"Start the Faber Agent","text":"In a browser, go to the Play with Docker home page, Login (if necessary) and click \"Start.\" On the next screen, click (in the left menu) \"+Add a new instance.\" That will start up a terminal in your browser. Run the following commands to start the Faber agent.
git clone https://github.com/openwallet-foundation/acapy\ncd acapy/demo\nLEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber --events --no-auto --bg\n
Once you are back at the command prompt, we'll use docker's logging capability to see what's being written to the terminal:
docker logs -f faber\n
Once the Faber agent has started up (with the invite displayed), click the link near the top of the screen 8021
. That will start an instance of the OpenAPI/Swagger user interface connected to the Faber instance. Note that the URL on the OpenAPI/Swagger instance is: http://ip....8021.direct...
.
Remember that the OpenAPI/Swagger browser tab with an address containing 8021 is the Faber agent.
NOTE: Hit \"Ctrl-C\" at any time to get back to the command line. When you are done with the command line, you can return to seeing the logs from the Faber agent by running docker logs -f faber
Now to start Alice's agent. Click the \"+Add a new instance\" button again to open another terminal session. Run the following commands to start Alice's agent:
git clone https://github.com/openwallet-foundation/acapy\ncd acapy/demo\nLEDGER_URL=http://test.bcovrin.vonx.io ./run_demo alice --events --no-auto --bg\n
Once you are back at the command prompt, we'll use docker's logging capability to see what's being written to the terminal:
docker logs -f alice\n
You can ignore a message like WARNING: your terminal doesn't support cursor position requests (CPR).
Once the Alice agent has started up (with the invite:
prompt displayed), click the link near the top of the screen 8031
. That will start an instance of the OpenAPI/Swagger User Interface connected to the Alice instance. Note that the URL on the OpenAPI/Swagger instance is: http://ip....8031.direct...
.
NOTE: Hit \"Ctrl-C\" to get back to the command line. When you are done with the command line, you can return to seeing the logs from the Faber agent by running docker logs -f faber
Remember that the OpenAPI/Swagger browser tab with an address containing 8031 is Alice's agent.
Show me a screenshot! You are ready to go. Skip down to the Using the OpenAPI/Swagger User Interface section.
"},{"location":"demo/OpenAPIDemo/#running-in-docker","title":"Running in Docker","text":"To run the demo on your local system, you must have git, a running Docker installation, and terminal windows running bash. Need more information about getting set up? Click here to learn more.
"},{"location":"demo/OpenAPIDemo/#start-the-faber-agent_1","title":"Start the Faber Agent","text":"To begin running the demo in Docker, open up two terminal windows, one each for Faber\u2019s and Alice\u2019s agent.
In the first terminal window, clone the ACA-Py repo, change into the demo folder and start the Faber agent:
git clone https://github.com/openwallet-foundation/acapy\ncd acapy/demo\nLEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber --events --no-auto --bg\n
Once you are back at the command prompt, we'll use docker's logging capability to see what's being written to the terminal:
docker logs -f faber\n
If all goes well, the agent will show a message indicating it is running. Use the second browser tab to navigate to http://localhost:8021. You should see an OpenAPI/Swagger user interface with a (long-ish) list of API endpoints. These are the endpoints exposed by the Faber agent.
NOTE: Hit \"Ctrl-C\" to get back to the command line. When you are done with the command line, you can return to seeing the logs from the Faber agent by running docker logs -f faber
Remember that the OpenAPI/Swagger browser tab with an address containing 8021 is the Faber agent.
Show me a screenshot!"},{"location":"demo/OpenAPIDemo/#start-the-alice-agent_1","title":"Start the Alice Agent","text":"To start Alice's agent, open up a second terminal window and in it, change to the same demo
directory as where Faber's agent was started above. Once there, start Alice's agent:
LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo alice --events --no-auto --bg\n
Once you are back at the command prompt, we'll use docker's logging capability to see what's being written to the terminal:
docker logs -f alice\n
You can ignore a message like WARNING: your terminal doesn't support cursor position requests (CPR)
that may appear.
If all goes well, the agent will show a message indicating it is running. Open a third browser tab and navigate to http://localhost:8031. Again, you should see the OpenAPI/Swagger user interface with a list of API endpoints, this time the endpoints for Alice's agent.
NOTE: Hit \"Ctrl-C\" to get back to the command line. When you are done with the command line, you can return to seeing the logs from the Alice agent by running docker logs -f alice
Remember that the OpenAPI/Swagger browser tab with an address containing 8031 is Alice's agent.
Show me a screenshot!"},{"location":"demo/OpenAPIDemo/#restarting-the-docker-containers","title":"Restarting the Docker Containers","text":"When you complete the entire demo (not now!!), you can need to stop the two agents. To do that, get to the command line by hitting Ctrl-C and running:
docker stop faber\ndocker stop alice\n
"},{"location":"demo/OpenAPIDemo/#using-the-openapiswagger-user-interface","title":"Using the OpenAPI/Swagger User Interface","text":"Try to organize what you see on your screen to include both the Alice and Faber OpenAPI/Swagger tabs, and both (Alice and Faber) terminal sessions, all at the same time. After you execute an API call in one of the browser tabs, you will see a webhook event from the ACA-Py instance in the terminal window of the other agent. That's a controller's life. See an event, process it, send a response.
From time to time you will want to see what's happening on the ledger, so keep that tab handy as well. Also, if you make an error with one of the commands (e.g. bad data, improperly structured JSON), you will see the errors in the terminals.
In the instructions that follow, we'll let you know if you need to be in the Faber, Alice or Indy browser tab. We'll leave it to you to track which is which.
Using the OpenAPI/Swagger user interface is pretty simple. In the steps below, we'll indicate what API endpoint you need to use, such as POST /connections/create-invitation
. That means you must:
- scroll to and find that endpoint;
- click on the endpoint name to expand its section of the UI;
- click on the Try it out button;
- fill in any data necessary to run the command;
- click Execute;
- check the results.
So, the mechanical steps are easy. It's the fourth step in the list above that can be tricky. Supplying the right data and, where JSON is involved, getting the syntax correct - braces and quotes can be a pain. When steps don't work, start your debugging by looking at your JSON.
Enough with the preliminaries, let's get started!
"},{"location":"demo/OpenAPIDemo/#establishing-a-connection","title":"Establishing a Connection","text":"We\u2019ll start the demo by establishing a connection between the Alice and Faber agents. We\u2019re starting there to demonstrate that you can use agents without having a ledger. We won\u2019t be using the Indy public ledger at all for this step. Since the agents communicate using DIDComm messaging and connect by exchanging pairwise DIDs and DIDDocs based on (an early version of) the did:peer
DID method, a public ledger is not needed.
In the Faber browser tab, navigate to the POST /connections/create-invitation
endpoint. Replace the sample body with an empty production ({}
) and execute the call. If successful, you should see a connection id, an invitation, and the invitation URL. The connection ids will be different on each run.
Hint: set an Alias on the Invitation, this makes it easier to find the Connection later on
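For reference, the same call can be made from the command line. This is a sketch only, assuming the local setup described above with Faber's admin API on port 8021:
curl -s -X POST \"http://localhost:8021/connections/create-invitation\" -H \"Content-Type: application/json\" -d '{}'\n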
Show me a screenshot - Create Invitation Request Show me a screenshot - Create Invitation Response"},{"location":"demo/OpenAPIDemo/#copy-the-invitation-created-by-the-faber-agent","title":"Copy the Invitation created by the Faber Agent","text":"Copy the entire block of the invitation
object, from the curly brackets {}
, excluding the trailing comma.
Before switching over to the Alice browser tab, scroll to and execute the GET /connections
endpoint to see the list of Faber's connections. You should see a connection with a connection_id
that is identical to the invitation you just created, and that its state is invitation
.
Switch to the Alice browser tab and get ready to execute the POST /connections/receive-invitation
endpoint. Select all of the pre-populated text and replace it with the invitation object from the Faber tab. When you click Execute
you should get back a connection response with a connection Id, an invitation key, and the state of the connection, which should be invitation
.
Hint: set an Alias on the Invitation, this makes it easier to find the Connection later on
Show me a screenshot - Receive Invitation Request Show me a screenshot - Receive Invitation Response
A key observation to make here: the \"copy and paste\" we are doing here from Faber's agent to Alice's agent is what is called an \"out of band\" message. Because we don't yet have a DIDComm connection between the two agents, we have to convey the invitation in plaintext (we can't encrypt it - no channel) using some other mechanism than DIDComm. With mobile agents, that's where QR codes often come in. Once we have the invitation in the receiver's agent, we can get back to using DIDComm.
"},{"location":"demo/OpenAPIDemo/#tell-alices-agent-to-accept-the-invitation","title":"Tell Alice's Agent to Accept the Invitation","text":"At this point Alice has simply stored the invitation in her wallet. You can see the status using the GET /connections
endpoint.
To complete a connection with Faber, she must accept the invitation and send a corresponding connection request to Faber. Find the connection_id
in the connection response from the previous POST /connections/receive-invitation
endpoint call. You may note that the same data was sent to the controller as an event from ACA-Py and is visible in the terminal. Scroll to the POST /connections/{conn_id}/accept-invitation
endpoint and paste the connection_id
in the id
parameter field (you will have to click the Try it out
button to see the available URL parameters). The response from clicking Execute
should show that the connection has a state of request
.
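The same step from the command line, as a sketch (assuming Alice's admin API is on port 8031 and that CONN_ID holds the connection id you just found):
CONN_ID=...  # Alice's connection_id from the receive-invitation response\ncurl -s -X POST \"http://localhost:8031/connections/$CONN_ID/accept-invitation\"\n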
In the Faber terminal session, an event (a web service callback from ACA-Py to the controller) has been received about the request from Alice. Copy the connection_id
from the event for the next step.
Note that the connection ID held by Alice is different from the one held by Faber. That makes sense, as both independently created connection objects, each with a unique, self-generated GUID.
"},{"location":"demo/OpenAPIDemo/#the-faber-agent-completes-the-connection","title":"The Faber Agent Completes the Connection","text":"To complete the connection process, Faber will respond to the connection request from Alice. Scroll to the POST /connections/{conn_id}/accept-request
endpoint and paste the connection_id
you previously copied into the id
parameter field (you will have to click the Try it out
button to see the available URL parameters). The response from clicking the Execute
button should show that the connection has a state of response
, which indicates that Faber has accepted Alice's connection request.
Switch over to the Alice browser tab.
Scroll to and execute GET /connections
to see a list of Alice's connections, and the information tracked about each connection. You should see the one connection Alice's agent has, that it is with the Faber agent, and that its state is active
.
As with Faber's side of the connection, Alice received a notification that Faber had accepted her connection request.
Show me the event"},{"location":"demo/OpenAPIDemo/#review-the-connection-status-in-fabers-agent","title":"Review the Connection Status in Faber's Agent","text":"You are connected! Switch to the Faber browser tab and run the same GET /connections
endpoint to see Faber's view of the connection. Its state is also active
. Note the connection_id
, you'll need it later in the tutorial.
Once you have a connection between two agents, you have a channel to exchange secure, encrypted messages. In fact, these underlying encrypted messages (similar to envelopes in a postal system) enable the delivery of messages that form the higher level protocols, such as issuing Credentials and providing Proofs. So, let's send a couple of messages that contain the simplest of content - text. For this we will use the Basic Message protocol, Aries RFC 0095.
"},{"location":"demo/OpenAPIDemo/#sending-a-message-from-alice-to-faber","title":"Sending a message from Alice to Faber","text":"On Alice's swagger page, scroll to the POST /connections/{conn_id}/send-message
endpoint. Click on Try it Out
and enter a message in the body provided (for example {\"content\": \"Hello Faber\"}
). Enter the connection id of Alice's connection in the field provided. Then click on Execute
.
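As a sketch, the equivalent command line call (assuming Alice's admin API is on port 8031 and that CONN_ID holds Alice's connection id):
CONN_ID=...  # Alice's connection id\ncurl -s -X POST \"http://localhost:8031/connections/$CONN_ID/send-message\" -H \"Content-Type: application/json\" -d '{\"content\": \"Hello Faber\"}'\n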
How does Faber know that a message was sent? If you take a look at Faber's console window, you can see that Faber's agent has raised an Event that the message was received:
Show me a screenshot
Faber's controller application can take whatever action is necessary to process this message. It could trigger some application code, or it might just be something the Faber application needs to display to its user (for example a reminder about some action the user needs to take).
"},{"location":"demo/OpenAPIDemo/#alices-agent-verifies-that-faber-has-received-the-message","title":"Alice's Agent Verifies that Faber has Received the Message","text":"How does Alice get feedback that Faber has received the message? The same way - when Faber's agent acknowledges receipt of the message, Alice's agent raises an Event to let the Alice controller know:
Show me a screenshot
Again, Alice's agent can take whatever action is necessary, possibly just flagging the message as having been received
.
The next thing we want to do in the demo is have the Faber agent issue a credential to Alice's agent. To this point, we have not used the Indy ledger at all. Establishing the connection and messaging has been done with pairwise DIDs based on the did:peer
method. Verifiable credentials must be rooted in a public DID ledger to enable the presentation of proofs.
Before the Faber agent can issue a credential, it must register a DID on the Indy public ledger, publish a schema, and create a credential definition. In the \"real world\", the Faber agent would do this before connecting with any other agents. And, since we are using the handy \"./run_demo faber\" (and \"./run_demo alice\") scripts to start up our agents, the Faber version of the script has already:
The schema and credential definition could also be created through this swagger interface.
We don't cover the details of those actions in this tutorial, but there are other materials available that go through these details.
To Do: Add a link to directions for doing this manually, and to where in the controller Python code this is done.
"},{"location":"demo/OpenAPIDemo/#confirming-your-schema-and-credential-definition","title":"Confirming your Schema and Credential Definition","text":"You can confirm the schema and credential definition were published by going back to the Indy ledger browser tab using Faber's public DID. You may have saved that from a previous step, but if not here is an API call you can make to get that information. Using Faber's swagger page and scroll to the GET /wallet/did/public
endpoint. Click on Try it Out
and Execute
and you will see Faber's public DID.
On the ledger browser of the BCovrin ledger, click the Domain
page, refresh, and paste the Faber public DID into the Filter:
field:
The ledger browser should refresh and display the four (4) transactions on the ledger related to this DID:
You can also look up the Schema and Credential Definition information using Faber's swagger page. Use the GET /schemas/created
endpoint to get a list of schemas, including the one schema_id
that the Faber agent has defined. Keep this section of the Swagger page expanded as we'll need to copy the Id as part of starting the issue credential protocol coming next.
Likewise use the GET /credential-definitions/created
endpoint to get the list of the one (in this case) credential definition id created by Faber. Keep this section of the Swagger page expanded as we'll also need to copy the Id as part of starting the issue credential protocol coming next.
Hint: Remember how the schema and credential definitions were created for you as Faber started up? To do it yourself, use the POST
versions of these endpoints. Now you know!
The one time setup work for issuing a credential is complete\u2014creating a DID, schema and credential definition. We can now issue 1 or 1 million credentials without having to do those steps again. Astute readers might note that we did not setup a revocation registry, so we cannot revoke the credentials we issue with that credential definition. You can\u2019t have everything in an \"easy\" tutorial!
"},{"location":"demo/OpenAPIDemo/#issuing-a-credential","title":"Issuing a Credential","text":"Triggering the issuance of a credential from the Faber agent to Alice\u2019s agent is done with another API call. In the Faber browser tab, scroll down to the POST /issue-credential-2.0/send
and get ready to (but don\u2019t yet) execute the request. Before execution, you need to update most of the data elements in the JSON. We now cover how to update all the fields.
First, get the connection Id for Faber's connection with Alice. You can copy that from the Faber terminal (the last received event includes it), or scroll up on the Faber swagger tab to the GET /connections
API endpoint, execute, copy it and paste the connection_id
value into the same field in the issue credential JSON.
For the following fields, scroll on Faber's Swagger page to the listed endpoint, execute (if necessary), copy the response value and paste as the values of the following JSON items:
issuer_did
the Faber public DID (use GET /wallet/DID/public
), schema_id
the Id of the schema Faber created (use GET /schemas/created
) and,cred_def_id
the Id of the credential definition Faber created (use GET /credential-definitions/created
)into the filter
section's indy
subsection. Remove the \"dif\"
subsection of the filter
section within the JSON, and specify the remaining indy filter criteria as follows:
schema_version
: set to the last segment of the schema_id
, a three part version number that was randomly generated on startup of the Faber agent. Segments of the schema_id
are separated by \":\"s.schema_issuer_did
: set to the same the value as in issuer_did
,schema_name
: set to the second last segment of the schema_id
, in this case degree schema
Finally, set the remaining values as follows: - auto_remove
: set to true
(no quotes), see note below - comment
: set to any string. It's intended to let Alice know something about the credential being offered. - trace
: set to false
(no quotes). It's for troubleshooting, performance profiling, and/or diagnostics.
By setting auto_remove
to true, ACA-Py will automatically remove the credential exchange record after the protocol completes. When implementing a controller, this is the likely setting to use to reduce agent storage usage, but implies if a record of the issuance of the credential is needed, the controller must save it somewhere else. For example, Faber College might extend their Student Information System, where they track all their students, to record when credentials are issued to students, and the Ids of the issued credentials.
Finally, we need put into the JSON the data values for the credential_preview
section of the JSON. Copy the following and paste it between the square brackets of the attributes
item, replacing what is there. Feel free to change the attribute value
items, but don't change the labels or names:
{\n \"name\": \"name\",\n \"value\": \"Alice Smith\"\n },\n {\n \"name\": \"timestamp\",\n \"value\": \"1234567890\"\n },\n {\n \"name\": \"date\",\n \"value\": \"2018-05-28\"\n },\n {\n \"name\": \"degree\",\n \"value\": \"Maths\"\n },\n {\n \"name\": \"birthdate_dateint\",\n \"value\": \"19640101\"\n }\n
(Note that the birthdate above is used to present later on to pass an \"age proof\".)
OK, finally, you are ready to click Execute
. The request should work, but if it doesn\u2019t - check your JSON! Did you get all the quotes and commas right?
To confirm the issuance worked, scroll up on the Faber Swagger page to the issue-credential v2.0
section and execute the GET /issue-credential-2.0/records
endpoint. You should see a lot of information about the exchange just initiated.
Let\u2019s look at it from Alice\u2019s side. Alice's agent source code automatically handles credential offers by immediately responding with a credential request. Scroll back in the Alice terminal to where the credential issuance started. If you've followed the full script, that is just after where we used the basic message protocol to send text messages between Alice and Faber.
Alice's agent first received a notification of a Credential Offer, to which it responded with a Credential Request. Faber received the Credential Request and responded in turn with an Issue Credential message. Scroll down through the events from ACA-Py to the controller to see the notifications of those messages. Make sure you scroll all the way to the bottom of the terminal so you can continue with the process.
Show me a screenshot - issue credential"},{"location":"demo/OpenAPIDemo/#alice-stores-credential-in-her-wallet","title":"Alice Stores Credential in her Wallet","text":"We can check (via Alice's Swagger interface) the issue credential status by hitting the GET /issue-credential-2.0/records
endpoint. Note that within the results, the cred_ex_record
just received has a state
of credential-received
, but not yet done
. Let's address that.
First, we need the cred_ex_id
from the API call response above, or from the event in the terminal; use the endpoint POST /issue-credential-2.0/records/{cred_ex_id}/store
to tell Alice's ACA-Py instance to store the credential in agent storage (aka the Indy Wallet). Note that in the JSON for that endpoint we can provide a credential Id to store in the wallet by setting a value in the credential_id
string. A real controller might use the cred_ex_id
for that, or use something else that makes sense in the agent's business scenario (but the agent generates a random credential identifier by default).
Now, in Alice\u2019s swagger browser tab, find the credentials
section and within that, execute the GET /credentials
endpoint. There should be a list of credentials held by Alice, with just a single entry, the credential issued from the Faber agent. Note that the element referent
is the value of the credential_id
element used in other calls. referent
is the name returned in the indy-sdk
call to get the set of credentials for the wallet and ACA-Py code does not change it in the response.
On the Faber side, we can see by scanning back in the terminal that it receive events to notify that the credential was issued and accepted.
Show me Faber's event activityNote that once the credential processing completed, Faber's agent deleted the credential exchange record from its wallet. This can be confirmed by executing the endpoint GET /issue-credential-2.0/records
You\u2019ve done it, issued a credential! w00t!
"},{"location":"demo/OpenAPIDemo/#issue-credential-notes","title":"Issue Credential Notes","text":"Those that know something about the Indy process for issuing a credential and the DIDComm Issue Credential
protocol know that there multiple steps to issuing credentials, a back and forth between the issuer and the holder to (at least) offer, request and issue the credential. All of those messages happened, but the two agents took care of those details rather than bothering the controller (you, in this case) with managing the back and forth.
POST /issue-credential-2.0/send
administrative message, which handles the back and forth for the issuer automatically. We could have used the other /issue-credential-2.0/
endpoints to allow the controller to handle each step of the protocol.issue_credential_v2_0
event always responds to credential offers with corresponding credential requests.If you would like to perform all of the issuance steps manually on the Faber agent side, use a sequence of the other /issue-credential-2.0/
messages. Use the GET /issue-credential-2.0/records
to both check the credential exchange state as you progress through the protocol and to find some of the data you\u2019ll need in executing the sequence of requests.
The following table lists endpoints that you need to call (\"REST service\") and callbacks that your agent will receive (\"callback\") that your need to respond to. See the detailed API docs.
Protocol Step Faber (Issuer) Alice (Holder) Notes Send Credential OfferPOST /issue-credential-2.0/send-offer
REST service Receive Offer /issue_credential_v2_0/ callback Send Credential Request POST /issue-credential-2.0/records/{cred_ex_id}/send-request
REST service Receive Request /issue_credential_v2_0/ callback Issue Credential POST /issue-credential-2.0/records/{cred_ex_id}/issue
REST service Receive Credential /issue_credential_v2_0/ callback Store Credential POST /issue-credential-2.0/records/{cred_ex_id}/store
REST service Receive Acknowledgement /issue_credential_v2_0/ callback Store Credential Id application function"},{"location":"demo/OpenAPIDemo/#requestingpresenting-a-proof","title":"Requesting/Presenting a Proof","text":"Alice now has her Faber credential. Let\u2019s have the Faber agent send a request for a presentation (a proof) using that credential. This should be pretty easy for you at this point.
"},{"location":"demo/OpenAPIDemo/#faber-sends-a-proof-request","title":"Faber sends a Proof Request","text":"From the Faber browser tab, get ready to execute the POST /present-proof-2.0/send-request
endpoint. After hitting Try it Now
, erase the data in the block labelled \"Edit Value Model\", replacing it with the text below. Once that is done, replace in the JSON each instance of cred_def_id
(there are four instances) and connection_id
with the values found using the same techniques we've used earlier in this tutorial. Both can be found by scrolling back a little in the Faber terminal, or you can execute API endpoints we've already covered. You can also change the value of the comment
item to whatever you want.
{\n \"comment\": \"This is a comment about the reason for the proof\",\n \"connection_id\": \"e469e0f3-2b4d-4b12-9ac7-293f23e8a816\",\n \"presentation_request\": {\n \"indy\": {\n \"name\": \"Proof of Education\",\n \"version\": \"1.0\",\n \"requested_attributes\": {\n \"0_name_uuid\": {\n \"name\": \"name\",\n \"restrictions\": [\n {\n \"cred_def_id\": \"SsX9siFWXJyCAmXnHY514N:3:CL:8:faber.agent.degree_schema\"\n }\n ]\n },\n \"0_date_uuid\": {\n \"name\": \"date\",\n \"restrictions\": [\n {\n \"cred_def_id\": \"SsX9siFWXJyCAmXnHY514N:3:CL:8:faber.agent.degree_schema\"\n }\n ]\n },\n \"0_degree_uuid\": {\n \"name\": \"degree\",\n \"restrictions\": [\n {\n \"cred_def_id\": \"SsX9siFWXJyCAmXnHY514N:3:CL:8:faber.agent.degree_schema\"\n }\n ]\n },\n \"0_self_attested_thing_uuid\": {\n \"name\": \"self_attested_thing\"\n }\n },\n \"requested_predicates\": {\n \"0_age_GE_uuid\": {\n \"name\": \"birthdate_dateint\",\n \"p_type\": \"<=\",\n \"p_value\": 20030101,\n \"restrictions\": [\n {\n \"cred_def_id\": \"SsX9siFWXJyCAmXnHY514N:3:CL:8:faber.agent.degree_schema\"\n }\n ]\n }\n }\n }\n }\n}\n
(Note that the birthdate requested above is used as an \"age proof\", the calculation is something like now() - years(18)
, and the presented birthdate must be on or before this date. You can see the calculation in action in the faber.py
demo code.)
Notice that the proof request is using a predicate to check if Alice is older than 18 without asking for her age. Not sure what this has to do with her education level! Click Execute
and cross your fingers. If the request fails check your JSON!
As before, Alice receives a webhook event from her agent telling her she has received a Proof Request. In our scenario, the ACA-Py instance automatically selects a matching credential and responds with a Proof.
Show me Alice's event activityIn a real scenario, for example if Alice had a mobile agent on her smartphone, the agent would prompt Alice whether she wanted to respond or not.
"},{"location":"demo/OpenAPIDemo/#faber-verifying-the-proof","title":"Faber - Verifying the Proof","text":"Note that in the response, the state is request-sent
. That is because when the HTTP response was generated (immediately after sending the request), Alice's agent had not yet responded to the request. We\u2019ll have to do another request to verify the presentation worked. Copy the value of the pres_ex_id
field from the event in the Faber terminal and use it in executing the GET /present-proof-2.0/records/{pres_ex_id}
endpoint. That should return a result showing the state
as done
and verified
as true
. Proof positive!
You can see some of Faber's activity below:
Show me Faber's event activity"},{"location":"demo/OpenAPIDemo/#present-proof-notes","title":"Present Proof Notes","text":"As with the issue credential process, the agents handled some of the presentation steps without bothering the controller. In this case, Alice's agent processed the presentation request automatically through its handler for the present_proof_v2_0
event, and her wallet contained exactly one credential that satisfied the presentation-request from the Faber agent. Similarly, the Faber agent's handler for the event responds automatically and so on receipt of the presentation, it verified the presentation and updated the status accordingly.
If you would like to perform all of the proof request/response steps manually, you can call all of the individual /present-proof-2.0
messages.
The following table lists endpoints that you need to call (\"REST service\") and callbacks that your agent will receive (\"callback\") that you need to respond to. See the detailed API docs.
Protocol Step Faber (Verifier) Alice (Holder/Prover) Notes Send Proof RequestPOST /present-proof-2.0/send-request
REST service Receive Proof Request /present_proof_v2_0 callback (webhook) Find Credentials GET /present-proof-2.0/records/{pres_ex_id}/credentials
REST service Select Credentials application or user function Send Proof POST /present-proof-2.0/records/{pres_ex_id}/send-presentation
REST service Receive Proof /present_proof_v2_0 callback (webhook) Validate Proof POST /present-proof-2.0/records/{pres_ex_id}/verify-presentation
REST service Save Proof application data"},{"location":"demo/OpenAPIDemo/#conclusion","title":"Conclusion","text":"That\u2019s the OpenAPI-based tutorial. Feel free to play with the API and learn how it works. More importantly, as you implement a controller, use the OpenAPI user interface to test out the calls you will be using as you go. The list of API calls is grouped by protocol and if you are familiar with the protocols (Aries RFCs) the API call names should be pretty obvious.
One limitation of you being the controller is that you don't see the events from the agent that a controller program sees. For example, you, as Alice's agent, are not notified when Faber initiates the sending of a Credential. Some of those things show up in the terminal as messages, but others you just have to know have happened based on a successful API call.
"},{"location":"demo/PostmanDemo/","title":"Aries Postman Demo","text":"In these demos we will use Postman as our controller client.
"},{"location":"demo/PostmanDemo/#contents","title":"Contents","text":"Welcome to the Postman demo. This is an addition to the available OpenAPI demo, providing a set of collections to test and demonstrate various aca-py functionalities.
"},{"location":"demo/PostmanDemo/#installing-postman","title":"Installing Postman","text":"Download, install and launch postman.
"},{"location":"demo/PostmanDemo/#creating-a-workspace","title":"Creating a workspace","text":"Create a new postman workspace labeled \"acapy-demo\".
"},{"location":"demo/PostmanDemo/#importing-the-environment","title":"Importing the environment","text":"In the environment tab from the left, click the import button. You can paste this link which is the environment file in the ACA-Py repository.
Make sure you have the environment set as your active environment.
"},{"location":"demo/PostmanDemo/#importing-the-collections","title":"Importing the collections","text":"In the collections tab from the left, click the import button.
The following collections are available:
Once you are setup, you will be ready to run postman requests. The order of the request is important, since some values are saved dynamically as environment variables for subsequent calls.
You have your environment where you define variables to be accessed by your collections.
Each collection consists of a series of requests which can be configured independently.
"},{"location":"demo/PostmanDemo/#experimenting-with-the-vc-api-endpoints","title":"Experimenting with the vc-api endpoints","text":"Make sure you have a demo agent available. You can use the following command to deploy one:
LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber --bg\n
When running for the first time, please allow some time for the images to build.
"},{"location":"demo/PostmanDemo/#register-new-dids","title":"Register new dids","text":"The first 2 requests for this collection will create 2 did:keys. We will use those in subsequent calls to issue Ed25519Signature2020
and BbsBlsSignature2020
credentials. Run the 2 did creation requests. These requests will use the /wallet/did/create
endpoint.
For issuing, you must input a w3c compliant json-ld credential and issuance options in your request body. The issuer field must be a registered did from the agent's wallet. The suite will be derived from the did method.
{\n \"credential\": { \n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\"\n ],\n \"type\": [\n \"VerifiableCredential\"\n ],\n \"issuer\": \"did:example:123\",\n \"issuanceDate\": \"2022-05-01T00:00:00Z\",\n \"credentialSubject\": {\n \"id\": \"did:example:123\"\n }\n },\n \"options\": {}\n}\n
Some examples have been pre-configured in the collection. Run the requests and inspect the results. Experiment with different credentials.
"},{"location":"demo/PostmanDemo/#store-and-retrieve-credentials","title":"Store and retrieve credentials","text":"Your last issued credential will be stored as an environment variable for subsequent calls, such as storing, verifying and including in a presentation.
Try running the store credential request, then retrieve the credential with the list and fetch requests. Try going back and forth between the issuance endpoints and the storage endpoints to store multiple different credentials.
"},{"location":"demo/PostmanDemo/#verify-credentials","title":"Verify credentials","text":"You can verify your last issued credential with this endpoint or any issued credential you provide to it.
"},{"location":"demo/PostmanDemo/#prove-a-presentation","title":"Prove a presentation","text":"Proving a presentation is an action where a holder will prove ownership of a credential by signing or demonstrating authority over the document.
"},{"location":"demo/PostmanDemo/#verify-a-presentation","title":"Verify a presentation","text":"The final request is to verify a presentation.
"},{"location":"demo/ReusingAConnection/","title":"Reusing a Connection","text":"The Aries RFC 0434 Out of Band protocol enables the concept of reusing a connection such that when using RFC 0023 DID Exchange to establish a connection with an agent with which you already have a connection, you can reuse the existing connection instead of creating a new one. This is something you couldn't do a with the older RFC 0160 Connection Protocol that we used in the early days of Aries. It was a pain, and made for a lousy user experience, as on every visit to an existing contact, the invitee got a new connection.
The requirements on your invitations (such as in the example below) are:
services
item MUST be a resolvable DID.services
item MUST NOT be an inline
service.services
item is the same one in every invitation.Example invitation:
{\n \"@type\": \"https://didcomm.org/out-of-band/1.1/invitation\",\n \"@id\": \"77489d63-caff-41fe-a4c1-ec7e2ff00695\",\n \"label\": \"faber.agent\",\n \"handshake_protocols\": [\n \"https://didcomm.org/didexchange/1.0\"\n ],\n \"services\": [\n \"did:sov:4JiUsoK85pVkkB1bAPzFaP\"\n ]\n}\n
Here's the flow that demonstrates where reuse helps. For simplicity, we'll use the terms \"Issuer\" and \"Wallet\" in this example, but it applies to any connection between any two agents (the inviter and the invitee) that establish connections with one another.
request
with a response
message, and the connection is established.services
item in the invitation -- see example below) that it already has a connection to the Issuer, so instead of sending a DID Exchange request
message back to the Issuer, they send an RFC 0434 Out of Band reuse DIDComm message, and both parties know to use the existing connection.request
message, a new connection would have been established.The RFC 0434 Out of Band protocol requirement enables reuse
message by the invitee (the Wallet in the flow above) is that the service
in the invitation MUST be a resolvable DID that is the same in all of the invitations. In the example invitation above, the DID is a did:sov
DID that is resolvable on a public Hyperledger Indy network. The DID could also be a Peer DID of types 2 or 4, which encode the entire DIDDoc contents into the DID identifier (thus they are \"resolvable DIDs\"). What cannot be used is either the old \"unqualified\" DIDs that were commonly used in Aries prior to 2024, and Peer DID type 1. Both of those have DID types include both an identifier and a DIDDoc in the services
item of the Out of Band invitation. As noted in the Out of Band specification, reuse
cannot be used with such DID types even if the contents are the same.
Example invitation:
{\n \"@type\": \"https://didcomm.org/out-of-band/1.1/invitation\",\n \"@id\": \"77489d63-caff-41fe-a4c1-ec7e2ff00695\",\n \"label\": \"faber.agent\",\n \"handshake_protocols\": [\n \"https://didcomm.org/didexchange/1.0\"\n ],\n \"services\": [\n \"did:sov:4JiUsoK85pVkkB1bAPzFaP\"\n ]\n}\n
The use of connection reuse can be demonstrated with the Alice / Faber demos as follows. We assume you have already somewhat familiar with your options for running the Alice Faber Demo (e.g. locally or in a browser). Follow those instruction up to the point where you are about to start the Faber and Alice agents.
./run_demo faber --reuse-connections --public-did-connections --events
.events
option: ./run_demo alice --reuse-connections --events
8031
, path api/docs
), and then use the GET Connections
to see that Alice has one connection to Faber.4
to get a prompt for a new connection. This will generate a new invitation with the same public DID.4
to get a prompt for a new connection, and paste the new invitation.reuse
message is received from Alice, and as a result, no new connection was created.GET Connections
endpoint on the Alice OpenAPI screen to confirm that there is still just one established connection.--reuse-connections
parameter and compare the services
value in the new invitation vs. what was generated in Steps 3 and 7. It is not a DID, but rather a one time use, inline DIDDoc item.While in the demo Faber uses in the invitation the same DID they publish as an issuer (and uses in creating the schema and Cred Def for the demo), Faber could use any resolvable (not inline) DID, including DID Peer types 2 or 4 DIDs, as long as the DID is the same in every invitation. It is the fact that the DID is always the same that tells the invitee that they can reuse an existing connection.
For example, to run faber with connection reuse using a non-public DID:
./run_demo faber --reuse-connections --events\n
To run faber using a did:peer
and reusable connections:
./run_demo faber --reuse-connections --emit-did-peer-2 --events\n
To run this demo using a multi-use invitation (from Faber):
./run_demo faber --reuse-connections --emit-did-peer-2 --multi-use-invitations --events\n
"},{"location":"deploying/AnonCredsWalletType/","title":"AnonCreds-RS Support","text":"A new wallet type has been added to Aca-Py to support the new anoncreds-rs library:
--wallet-type askar-anoncreds\n
When Aca-Py is run with this wallet type it will run with an Askar format wallet (and askar libraries) but will use anoncreds-rs
instead of credx
.
There is a new package under acapy_agent/anoncreds
with code that supports the new library.
There are new endpoints (under /anoncreds
) for managing schemas, cred defs and revocation objects. However the new anoncreds code is integrated into the existing Credential and Presentation endpoints (V2.0 endpoints only).
Within the protocols, there are new handler
libraries to support the new anoncreds
format (these are in parallel to the existing indy
libraries).
The existing indy
code are in:
acapy_agent/protocols/issue_credential/v2_0/formats/indy/handler.py\nacapy_agent/protocols/indy/anoncreds/pres_exch_handler.py\nacapy_agent/protocols/present_proof/v2_0/formats/indy/handler.py\n
The new anoncreds
code is in:
acapy_agent/protocols/issue_credential/v2_0/formats/anoncreds/handler.py\nacapy_agent/protocols/present_proof/anoncreds/pres_exch_handler.py\nacapy_agent/protocols/present_proof/v2_0/formats/anoncreds/handler.py\n
The Indy handler checks to see if the wallet type is askar-anoncreds
and if so delegates the calls to the anoncreds handler, for example:
# Temporary shim while the new anoncreds library integration is in progress\n wallet_type = profile.settings.get_value(\"wallet.type\")\n if wallet_type == \"askar-anoncreds\":\n self.anoncreds_handler = AnonCredsPresExchangeHandler(profile)\n
... and then:
# Temporary shim while the new anoncreds library integration is in progress\n if self.anoncreds_handler:\n return self.anoncreds_handler.get_format_identifier(message_type)\n
To run the alice/faber demo using the new anoncreds library, start the demo with:
--wallet-type askar-anoncreds\n
There are no anoncreds-specific integration tests, for the new anoncreds functionality the agents within the integration tests are started with:
--wallet-type askar-anoncreds\n
Everything should just work!!!
Theoretically AATH should work with anoncreds as well, by setting the wallet type (see https://github.com/hyperledger/aries-agent-test-harness#extra-backchannel-specific-parameters).
"},{"location":"deploying/AnonCredsWalletType/#revocation-new-in-anoncreds","title":"Revocation (new in anoncreds)","text":"The changes are significant. Notably:
The Tails File changes are minimal -- nothing about the file itself changed. What changed:
The main changes for the Credential and Presentation support are in the following two files:
acapy_agent/protocols/issue_credential/v2_0/messages/cred_format.py\nacapy_agent/protocols/present_proof/v2_0/messages/pres_format.py\n
The INDY
handler just need to be re-pointed to the new anoncreds handler, and then all the old Indy code can be retired.
The new code is already in place (in comments). For example for the Credential handler:
To make the switch from indy to anoncreds replace the above with the following\n INDY = FormatSpec(\n \"hlindy/\",\n DeferLoad(\n \"acapy_agent.protocols.present_proof.v2_0\"\n \".formats.anoncreds.handler.AnonCredsPresExchangeHandler\"\n ),\n )\n
There is a bunch of duplicated code, i.e. the new anoncreds code was added either as new classes (as above) or as new methods within an existing class.
Some new methods were added within the Ledger class.
New unit tests were added - in some cases as methods within existing test classes, and in some cases as new classes (whichever was easiest at the time).
"},{"location":"deploying/AnoncredsControllerMigration/","title":"AnonCreds Controller Migration","text":"To upgrade an agent to use AnonCreds a controller should implement the required changes to endpoints and payloads in a way that is backwards compatible. The controller can then trigger the upgrade via the upgrade endpoint.
"},{"location":"deploying/AnoncredsControllerMigration/#step-1-endpoint-payload-and-response-changes","title":"Step 1 - Endpoint Payload and Response Changes","text":"There is endpoint and payload changes involved with creating schema, credential definition and revocation objects. Your controller will need to implement these changes for any endpoints it uses.
A good way to implement this with backwards compatibility is to get the wallet type via /settings and handle the existing endpoints when wallet.type is askar and the new anoncreds endpoints when wallet.type is askar-anoncreds. In this way the controller will handle both types of wallets in case the upgrade fails. After the upgrade is successful and stable the controller can be updated to only handle the new anoncreds endpoints.
"},{"location":"deploying/AnoncredsControllerMigration/#schemas","title":"Schemas","text":""},{"location":"deploying/AnoncredsControllerMigration/#creating-a-schema","title":"Creating a Schema:","text":"params\n - conn_id\n - create_transaction_for_endorser\n
{\n \"attributes\": [\"score\"],\n \"schema_name\": \"simple\",\n \"schema_version\": \"1.0\"\n}\n
to
{\n \"options\": {\n \"create_transaction_for_endorser\": false,\n \"endorser_connection_id\": \"3fa85f64-5717-4562-b3fc-2c963f66afa6\"\n },\n \"schema\": {\n \"attrNames\": [\"score\"],\n \"issuerId\": \"WgWxqztrNooG92RXvxSTWv\",\n \"name\": \"Example schema\",\n \"version\": \"1.0\"\n }\n}\n
Responses
Without endorsement:
{\n \"sent\": {\n \"schema_id\": \"PzmGpSeCznzfPmv9B1EBqa:2:simple:1.0\",\n \"schema\": {\n \"ver\": \"1.0\",\n \"id\": \"PzmGpSeCznzfPmv9B1EBqa:2:simple:1.0\",\n \"name\": \"simple\",\n \"version\": \"1.0\",\n \"attrNames\": [\"score\"],\n \"seqNo\": 541\n }\n },\n \"schema_id\": \"PzmGpSeCznzfPmv9B1EBqa:2:simple:1.0\",\n \"schema\": {\n \"ver\": \"1.0\",\n \"id\": \"PzmGpSeCznzfPmv9B1EBqa:2:simple:1.0\",\n \"name\": \"simple\",\n \"version\": \"1.0\",\n \"attrNames\": [\"score\"],\n \"seqNo\": 541\n }\n}\n
to
{\n \"job_id\": \"string\",\n \"registration_metadata\": {},\n \"schema_metadata\": {},\n \"schema_state\": {\n \"schema\": {\n \"attrNames\": [\"score\"],\n \"issuerId\": \"WgWxqztrNooG92RXvxSTWv\",\n \"name\": \"Example schema\",\n \"version\": \"1.0\"\n },\n \"schema_id\": \"WgWxqztrNooG92RXvxSTWv:2:schema_name:1.0\",\n \"state\": \"finished\"\n }\n}\n
With endorsement:
{\n \"sent\": {\n \"schema\": {\n \"attrNames\": [\n \"score\"\n ],\n \"id\": \"WgWxqztrNooG92RXvxSTWv:2:schema_name:1.0\",\n \"name\": \"schema_name\",\n \"seqNo\": 10,\n \"ver\": \"1.0\",\n \"version\": \"1.0\"\n },\n \"schema_id\": \"WgWxqztrNooG92RXvxSTWv:2:schema_name:1.0\"\n },\n \"txn\": {...}\n}\n
to
{\n \"job_id\": \"12cb896d648242c8b9b0fff3b870ed00\",\n \"schema_state\": {\n \"state\": \"wait\",\n \"schema_id\": \"RbyPM1EP8fKCrf28YsC1qK:2:simple:1.1\",\n \"schema\": {\n \"issuerId\": \"RbyPM1EP8fKCrf28YsC1qK\",\n \"attrNames\": [\n \"score\"\n ],\n \"name\": \"simple\",\n \"version\": \"1.1\"\n }\n },\n \"registration_metadata\": {\n \"txn\": {...}\n },\n \"schema_metadata\": {}\n}\n
"},{"location":"deploying/AnoncredsControllerMigration/#getting-schemas","title":"Getting schemas","text":"{\n \"schema\": {\n \"attrNames\": [\"score\"],\n \"id\": \"WgWxqztrNooG92RXvxSTWv:2:schema_name:1.0\",\n \"name\": \"schema_name\",\n \"seqNo\": 10,\n \"ver\": \"1.0\",\n \"version\": \"1.0\"\n }\n}\n
to
{\n \"resolution_metadata\": {},\n \"schema\": {\n \"attrNames\": [\"score\"],\n \"issuerId\": \"WgWxqztrNooG92RXvxSTWv\",\n \"name\": \"Example schema\",\n \"version\": \"1.0\"\n },\n \"schema_id\": \"WgWxqztrNooG92RXvxSTWv:2:schema_name:1.0\",\n \"schema_metadata\": {}\n}\n
"},{"location":"deploying/AnoncredsControllerMigration/#credential-definitions","title":"Credential Definitions","text":""},{"location":"deploying/AnoncredsControllerMigration/#creating-a-credential-definition","title":"Creating a credential definition","text":"params\n - conn_id\n - create_transaction_for_endorser\n
{\n \"revocation_registry_size\": 1000,\n \"schema_id\": \"WgWxqztrNooG92RXvxSTWv:2:simple:1.0\",\n \"support_revocation\": true,\n \"tag\": \"default\"\n}\n
to
{\n \"credential_definition\": {\n \"issuerId\": \"WgWxqztrNooG92RXvxSTWv\",\n \"schemaId\": \"WgWxqztrNooG92RXvxSTWv:2:schema_name:1.0\",\n \"tag\": \"default\"\n },\n \"options\": {\n \"create_transaction_for_endorser\": false,\n \"endorser_connection_id\": \"3fa85f64-5717-4562-b3fc-2c963f66afa6\",\n \"revocation_registry_size\": 1000,\n \"support_revocation\": true\n }\n}\n
Responses
Without Endorsement:
{\n \"sent\": {\n \"credential_definition_id\": \"CZGamdZoKhxiifjbdx3GHH:3:CL:558:default\"\n },\n \"credential_definition_id\": \"CZGamdZoKhxiifjbdx3GHH:3:CL:558:default\"\n}\n
to
{\n \"schema_state\": {\n \"state\": \"finished\",\n \"schema_id\": \"BpGaCdTwgEKoYWm6oPbnnj:2:simple:1.0\",\n \"schema\": {\n \"issuerId\": \"BpGaCdTwgEKoYWm6oPbnnj\",\n \"attrNames\": [\"score\"],\n \"name\": \"simple\",\n \"version\": \"1.0\"\n }\n },\n \"registration_metadata\": {},\n \"schema_metadata\": {\n \"seqNo\": 555\n }\n}\n
With Endorsement:
{\n \"sent\": {\n \"credential_definition_id\": \"WgWxqztrNooG92RXvxSTWv:3:CL:20:tag\"\n },\n \"txn\": {...}\n}\n
{\n \"job_id\": \"7082e58aa71d4817bb32c3778596b012\",\n \"credential_definition_state\": {\n \"state\": \"wait\",\n \"credential_definition_id\": \"RbyPM1EP8fKCrf28YsC1qK:3:CL:547:default\",\n \"credential_definition\": {\n \"issuerId\": \"RbyPM1EP8fKCrf28YsC1qK\",\n \"schemaId\": \"RbyPM1EP8fKCrf28YsC1qK:2:simple:1.1\",\n \"type\": \"CL\",\n \"tag\": \"default\",\n \"value\": {\n \"primary\": {...},\n \"revocation\": {...}\n }\n }\n },\n \"registration_metadata\": {\n \"txn\": {...}\n },\n \"credential_definition_metadata\": {}\n}\n
"},{"location":"deploying/AnoncredsControllerMigration/#getting-credential-definitions","title":"Getting credential definitions","text":"{\n \"credential_definition\": {\n \"id\": \"WgWxqztrNooG92RXvxSTWv:3:CL:20:tag\",\n \"schemaId\": \"20\",\n \"tag\": \"tag\",\n \"type\": \"CL\",\n \"value\": {...},\n \"revocation\": {...}\n },\n \"ver\": \"1.0\"\n }\n}\n
to
{\n \"credential_definition\": {\n \"issuerId\": \"WgWxqztrNooG92RXvxSTWv\",\n \"schemaId\": \"WgWxqztrNooG92RXvxSTWv:2:schema_name:1.0\",\n \"tag\": \"default\",\n \"type\": \"CL\",\n \"value\": {...},\n \"revocation\": {...}\n }\n },\n \"credential_definition_id\": \"WgWxqztrNooG92RXvxSTWv:3:CL:20:tag\",\n \"credential_definitions_metadata\": {},\n \"resolution_metadata\": {}\n}\n
"},{"location":"deploying/AnoncredsControllerMigration/#revocation","title":"Revocation","text":"Most of the changes with revocation endpoints only require prepending /anoncreds
to the path. There are some other subtle changes listed below.
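For example, a path change of this kind (the specific endpoint is illustrative):

POST /revocation/revoke -> POST /anoncreds/revocation/revoke\n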
{\n \"credential_definition_id\": \"WgWxqztrNooG92RXvxSTWv:3:CL:20:tag\",\n \"max_cred_num\": 1000\n}\n
params\n - conn_id\n - create_transaction_for_endorser\n
{\n \"options\": {\n \"create_transaction_for_endorser\": false,\n \"endorser_connection_id\": \"3fa85f64-5717-4562-b3fc-2c963f66afa6\"\n },\n \"revocation_registry_definition\": {\n \"credDefId\": \"WgWxqztrNooG92RXvxSTWv:2:schema_name:1.0\",\n \"issuerId\": \"WgWxqztrNooG92RXvxSTWv\",\n \"maxCredNum\": 777,\n \"tag\": \"default\"\n }\n}\n
Responses
Without endorsement:
{\n \"sent\": {\n \"revocation_registry_id\": \"CZGamdZoKhxiifjbdx3GHH:4:CL:558:default\"\n },\n \"revocation_registry_id\": \"CZGamdZoKhxiifjbdx3GHH:4:CL:558:default\"\n}\n
to
{\n \"revocation_registry_definition_state\": {\n \"state\": \"finished\",\n \"revocation_registry_definition_id\": \"BpGaCdTwgEKoYWm6oPbnnj:4:BpGaCdTwgEKoYWm6oPbnnj:3:CL:555:default:CL_ACCUM:default\",\n \"revocation_registry_definition\": {\n \"issuerId\": \"BpGaCdTwgEKoYWm6oPbnnj\",\n \"revocDefType\": \"CL_ACCUM\",\n \"credDefId\": \"BpGaCdTwgEKoYWm6oPbnnj:3:CL:555:default\",\n \"tag\": \"default\",\n \"value\": {...}\n }\n },\n \"registration_metadata\": {},\n \"revocation_registry_definition_metadata\": {\n \"seqNo\": 569\n }\n}\n
With endorsement:
{\n \"sent\": {\n \"result\": {\n \"created_at\": \"2021-12-31T23:59:59Z\",\n \"cred_def_id\": \"WgWxqztrNooG92RXvxSTWv:3:CL:20:tag\",\n \"error_msg\": \"Revocation registry undefined\",\n \"issuer_did\": \"WgWxqztrNooG92RXvxSTWv\",\n \"max_cred_num\": 1000,\n \"pending_pub\": [\n \"23\"\n ],\n \"record_id\": \"3fa85f64-5717-4562-b3fc-2c963f66afa6\",\n \"revoc_def_type\": \"CL_ACCUM\",\n \"revoc_reg_def\": {\n \"credDefId\": \"WgWxqztrNooG92RXvxSTWv:3:CL:20:tag\",\n \"id\": \"WgWxqztrNooG92RXvxSTWv:4:WgWxqztrNooG92RXvxSTWv:3:CL:20:tag:CL_ACCUM:0\",\n \"revocDefType\": \"CL_ACCUM\",\n \"tag\": \"string\",\n \"value\": {...},\n \"ver\": \"1.0\"\n },\n \"revoc_reg_entry\": {...},\n \"revoc_reg_id\": \"WgWxqztrNooG92RXvxSTWv:4:WgWxqztrNooG92RXvxSTWv:3:CL:20:tag:CL_ACCUM:0\",\n \"state\": \"active\",\n \"tag\": \"string\",\n \"tails_hash\": \"H3C2AVvLMv6gmMNam3uVAjZpfkcJCwDwnZn6z3wXmqPV\",\n \"tails_local_path\": \"string\",\n \"tails_public_uri\": \"string\",\n \"updated_at\": \"2021-12-31T23:59:59Z\"\n }\n },\n \"txn\": {...}\n}\n
to
{\n \"job_id\": \"25dac53a1fb84cb8a5bf1b4362fbca11\",\n \"revocation_registry_definition_state\": {\n \"state\": \"wait\",\n \"revocation_registry_definition_id\": \"RbyPM1EP8fKCrf28YsC1qK:4:RbyPM1EP8fKCrf28YsC1qK:3:CL:547:default:CL_ACCUM:default\",\n \"revocation_registry_definition\": {\n \"issuerId\": \"RbyPM1EP8fKCrf28YsC1qK\",\n \"revocDefType\": \"CL_ACCUM\",\n \"credDefId\": \"RbyPM1EP8fKCrf28YsC1qK:3:CL:547:default\",\n \"tag\": \"default\",\n \"value\": {...}\n }\n },\n \"registration_metadata\": {\n \"txn\": {...}\n },\n \"revocation_registry_definition_metadata\": {}\n}\n
"},{"location":"deploying/AnoncredsControllerMigration/#send-revocation-entry-or-list-to-ledger","title":"Send revocation entry or list to ledger","text":"params\n - conn_id\n - create_transaction_for_endorser\n
to
{\n \"options\": {\n \"create_transaction_for_endorser\": false,\n \"endorser_connection_id\": \"3fa85f64-5717-4562-b3fc-2c963f66afa6\"\n },\n \"rev_reg_def_id\": \"WgWxqztrNooG92RXvxSTWv:4:WgWxqztrNooG92RXvxSTWv:3:CL:20:tag:CL_ACCUM:0\"\n}\n
Responses
Without endorsement:
{\n \"sent\": {\n \"revocation_registry_id\": \"BpGaCdTwgEKoYWm6oPbnnj:4:BpGaCdTwgEKoYWm6oPbnnj:3:CL:555:default:CL_ACCUM:default\"\n },\n \"revocation_registry_id\": \"BpGaCdTwgEKoYWm6oPbnnj:4:BpGaCdTwgEKoYWm6oPbnnj:3:CL:555:default:CL_ACCUM:default\"\n}\n
to
\n
"},{"location":"deploying/AnoncredsControllerMigration/#get-current-active-registry","title":"Get current active registry:","text":"params\n - conn_id\n - create_transaction_for_endorser\n
{\n \"rrid2crid\": {\n \"additionalProp1\": [\"12345\"],\n \"additionalProp2\": [\"12345\"],\n \"additionalProp3\": [\"12345\"]\n }\n}\n
to
{\n \"options\": {\n \"create_transaction_for_endorser\": false,\n \"endorser_connection_id\": \"3fa85f64-5717-4562-b3fc-2c963f66afa6\"\n },\n \"rrid2crid\": {\n \"additionalProp1\": [\"12345\"],\n \"additionalProp2\": [\"12345\"],\n \"additionalProp3\": [\"12345\"]\n }\n}\n
The upgrade endpoint is at POST /anoncreds/wallet/upgrade.
You need to be careful doing this, as there is no way to downgrade the wallet. It is highly recommended to back up any wallets and to test the upgrade in a development environment before upgrading a production wallet.
Params: wallet_name
is the name of the wallet to upgrade. Used to prevent accidental upgrades.
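A curl sketch of invoking the upgrade (the admin URL is an assumption; wallet_name is passed as a query parameter):

curl -X POST \"http://localhost:8021/anoncreds/wallet/upgrade?wallet_name=MyWallet\"\n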
The behavior for a base wallet (standalone) or admin wallet in multitenant mode is slightly different from the behavior of a subwallet (or tenant) in multitenancy mode. However, the upgrade process is the same.
Calls to the agent will get a 503 error during the upgrade process. Any agent instance will shut down when the upgrade is complete; it is up to the deployment to start the ACA-Py agent up again. After the upgrade is complete the old endpoints will no longer be available and will result in a 400 error.
The aca-py agent will work after the restart. However, it will receive a warning for having the wrong wallet type configured. It is recommended to change the wallet-type to askar-anoncreds in the agent configuration file or start-up command.
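For example, a minimal sketch of the changed start-up command (all other arguments elided):

aca-py start \\\n --wallet-type askar-anoncreds\n # ... the remainder of your startup arguments\n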
The sub-tenant which is in the process of being upgraded will get 503 errors during the upgrade process. All other sub-tenants will continue to operate normally. After the upgrade is complete the sub-tenant will be able to use the new endpoints. The old endpoints will no longer be available and will result in a 403 error. Any aca-py agents will remain running after the upgrade, and it is not required that the aca-py agent restarts.
"},{"location":"deploying/BBSSignatures/","title":"BBS Signatures Support","text":"ACA-Py has supported BBS Signatures for some time. However, the dependency that is used (bbs
) does not support the ARM architecture, and its inclusion in the default ACA-Py artifacts means that developers using ARM-based hardware (such as Apple M1 Macs or later) cannot run ACA-Py \"out-of-the-box\". We feel that providing a better developer experience by supporting the ARM architecture is more important than BBS Signature support at this time. As such, we have removed the BBS dependency from the base ACA-Py artifacts and made it an add-on, so those using ACA-Py with BBS must take extra steps to build their own artifacts. This document describes those extra steps.
Regarding future support for BBS Signatures in ACA-Py: there is currently a lot of work going on in developing implementations and BBS-based Verifiable Credential standards. However, at the time of this release, there is not an obvious approach to an implementation to use in ACA-Py that includes ARM support. As a result, we will hold off on updating the BBS Signatures support in ACA-Py until the standards and path forward clarify. In the meantime, maintainers of ACA-Py plan to continue to do all we can to push for newer and better ZKP-based Verifiable Credential standards.
If you require BBS for your deployment, an optional \"extended\" ACA-Py image has been released (aries-cloudagent-bbs
) that includes BBS, with the caveat that it will very likely not install on ARM architecture.
If you are a contributor or are developing using a local build of ACA-Py and need BBS, the easiest way to include it is to install the optional dependency bbs
with poetry
(again with the caveat that it will very likely not install on ARM architecture). The --all-extras
flag will install the bbs
optional dependency in ACA-Py:
poetry install --all-extras\n
"},{"location":"deploying/BBSSignatures/#testing","title":"Testing","text":"WARNNG: if you do NOT have bbs
installed, you should exclude the BBS-specific integration tests by running with the tag ~@BBS, otherwise they will fail:
./run_bdd -t ~@BBS\n
See the Unit and Integration testing docs for more information on how to run tests.
"},{"location":"deploying/ContainerImagesAndGithubActions/","title":"Container Images and Github Actions","text":"ACA-Py is most frequently deployed using containers. From the first release of ACA-Py up through 0.7.4, much of the community has built their deployments using the container images graciously provided by BC Gov and hosted through their bcgovimages
docker hub account. These images have been critical to the adoption of not only ACA-Py but also decentralized trust/SSI more generally.
Recognizing how critical these images are to the success of ACA-Py and consistent with the OpenWallet Foundation's commitment to open collaboration, container images are now built and published directly from the Aries Cloud Agent - Python project repository and made available through the Github Packages Container Registry.
"},{"location":"deploying/ContainerImagesAndGithubActions/#image","title":"Image","text":"This project builds and publishes the ghcr.io/openwallet-foundation/acapy
image. Multiple variants are available; see Tags.
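For example, pulling an image might look like this (the tag shown is the example from the tags table below; substitute a current version and python):

docker pull ghcr.io/openwallet-foundation/acapy:py3.9-0.7.4\n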
ACA-Py is a foundation for building decentralized identity applications; to this end, there are multiple variants of ACA-Py built to suit the needs of a variety of environments and workflows. The available variants are described by the image tags listed below.
In the past, two image variants were published. These two variants are largely distinguished by providers for Indy Network and AnonCreds support. The Standard variant is recommended for new projects. Migration from an Indy based image (whether the new Indy image variant or the original BC Gov images) to the Standard image is outside of the scope of this document.
The ACA-Py images built by this project are tagged to indicate which of the above variants they are. Other tags may also be generated for use by developers.
Below is a table of all generated images and their tags:
Tag | Variant | Example | Description
--- | --- | --- | ---
py3.9-X.Y.Z | Standard | py3.9-0.7.4 | Standard image variant built on Python 3.9 for ACA-Py version X.Y.Z
py3.10-X.Y.Z | Standard | py3.10-0.7.4 | Standard image variant built on Python 3.10 for ACA-Py version X.Y.Z

"},{"location":"deploying/ContainerImagesAndGithubActions/#image-comparison","title":"Image Comparison","text":"There are several key differences that should be noted between the two image variants and between the BC Gov ACA-Py images.
- Standard image:
  - Does NOT include libindy
  - Default user is aries
  - Uses the container's system python environment rather than pyenv
- Indy image:
  - Built from a multi-stage build step (indy-base in the Dockerfile) which includes Indy dependencies; this could be replaced with an explicit indy-python image from the Indy SDK repo
  - Includes libindy but does NOT include the Indy CLI
  - Default user is indy
  - Uses the container's system python environment rather than pyenv
- bcgovimages/aries-cloudagent:
  - Based on von-image
  - Default user is indy
  - Includes libindy and Indy CLI
  - Uses pyenv
) - A reusable workflow that runs tests for the Standard ACA-Py variant for a given python version..github/workflows/pr-tests.yml
) - Run on pull requests; runs tests for the Standard ACA-Py variant for a \"default\" python version. Check this workflow for the current default python version in use..github/workflows/nightly-tests.yml
) - Run nightly; runs tests for the Standard ACA-Py variant for all currently supported python versions. Check this workflow for the set of currently supported versions in use..github/workflows/publish.yml
) - Run on new release published or when manually triggered; builds and pushes the Standard ACA-Py variant to the Github Container Registry..github/workflows/BDDTests.yml
) - Run on pull requests (to the openwallet-foundation fork only); runs BDD integration tests..github/workflows/format.yml
) - Run on pull requests; checks formatting of files modified by the PR..github/workflows/codeql.yml
) - Run on pull requests; performs CodeQL analysis..github/workflows/pythonpublish.yml
) - Run on release created; publishes ACA-Py python package to PyPI..github/workflows/pipaudit.yml
) - Run when manually triggered; performs pip audit.Your wallet stores secret keys, connections and other information. You have different choices to store this information. The wallet supports 2 different databases to store data, SQLite and PostgreSQL.
"},{"location":"deploying/Databases/#sqlite","title":"SQLite","text":"If the wallet is configured the default way in eg. demo-args.yaml, without explicit wallet-storage, a sqlite database file is used.
# demo-args.yaml\nwallet-type: indy\nwallet-name: wallet\nwallet-key: wallet-password\n
For this configuration, a folder called wallet will be created which contains a file called sqlite.db
.
The wallet can be configured to use PostgreSQL as storage.
# demo-args.yaml\nwallet-type: indy\nwallet-name: wallet\nwallet-key: wallet-password\n\nwallet-storage-type: postgres_storage\nwallet-storage-config: \"{\\\"url\\\":\\\"db:5432\\\",\\\"wallet_scheme\\\":\\\"DatabasePerWallet\\\"}\"\nwallet-storage-creds: \"{\\\"account\\\":\\\"postgres\\\",\\\"password\\\":\\\"mysecretpassword\\\",\\\"admin_account\\\":\\\"postgres\\\",\\\"admin_password\\\":\\\"mysecretpassword\\\"}\"\n
In this case the hostname for the database is db
on port 5432.
A docker-compose file could look like this:
# docker-compose.yml\nversion: '3'\nservices:\n # acapy ...\n # database\n db:\n image: postgres:10\n environment:\n POSTGRES_PASSWORD: mysecretpassword\n POSTGRES_USER: postgres\n POSTGRES_DB: postgres\n ports:\n - \"5432:5432\"\n
"},{"location":"deploying/IndySDKtoAskarMigration/","title":"Migrating from Indy SDK to Askar","text":"The document summarizes why the Indy SDK is being deprecated, it's replacement (Aries Askar and the \"shared components\"), how to use Aries Askar in a new ACA-Py deployment, and the migration process for an ACA-Py instance that is already deployed using the Indy SDK.
"},{"location":"deploying/IndySDKtoAskarMigration/#the-time-has-come-archiving-indy-sdk","title":"The Time Has Come! Archiving Indy SDK","text":"Yes, it\u2019s time. Indy SDK needs to be archived! In this article we\u2019ll explain why this change is needed, why Aries Askar is a faster, better replacement, and how to transition your Indy SDK-based ACA-Py deployment to Askar as soon as possible.
"},{"location":"deploying/IndySDKtoAskarMigration/#history-of-indy-sdk","title":"History of Indy SDK","text":"Indy SDK has been the basis of Hyperledger Indy and Hyperledger Aries clients accessing Indy networks for a long time. It has done an excellent job at exactly what you might imagine: being the SDK that enables clients to leverage the capabilities of a Hyperledger Indy ledger.
Its continued use has been all the more remarkable given that the last published release of the Indy SDK was in 2020. This speaks to the quality of the implementation \u2014 it just kept getting used, doing what it was supposed to do, and without major bugs, vulnerabilities or demands for new features.
However, the architecture of Indy SDK has critical bottlenecks. Most notably, as load increases, Indy SDK performance drops. And with Indy-based ecosystems flourishing and loads exponentially increasing, this means the Aries/Indy community needed to make a change.
"},{"location":"deploying/IndySDKtoAskarMigration/#aries-askar-and-the-shared-components","title":"Aries Askar and the Shared Components","text":"The replacement for the Indy SDK is a set of four components, each replacing a part of Indy SDK. (In retrospect, Indy SDK ought to have been split up this way from the start.)
The components are:
In ACA-Py, we are currently using CredX, but will be moving to Hyperledger AnonCreds soon.
If you\u2019re involved in the community, you\u2019ll know we\u2019ve been planning this replacement for almost three years. The first release of the Aries Askar and related components was in 2021. At the end of 2022 there was a concerted effort to eliminate the Indy SDK by creating migration scripts, and removing the Indy SDK from various tools in the community (the Indy CLI, the Indy Test Automation pipeline, and so on). This step is to finish the task.
"},{"location":"deploying/IndySDKtoAskarMigration/#performance","title":"Performance","text":"What\u2019s the performance and stability of the replacement? In short, it\u2019s dramatically better. Overall Aries Askar performance is faster, and as the load increases the performance remains constant. Combined with added flexibility and modularization, the community is very positive about the change.
"},{"location":"deploying/IndySDKtoAskarMigration/#new-aca-py-deployments","title":"New ACA-Py Deployments","text":"If you are new to ACA-Py, the instructions are easy. Use Aries Askar and the shared components from the start. To do that, simply make sure that you are using the --wallet-type askar
configuration parameter. You will automatically be using all of the shared components.
As of release 0.9.0, you will get a deprecation warning when you start ACA-Py with the Indy SDK. Switch to Aries Askar to eliminate that warning.
"},{"location":"deploying/IndySDKtoAskarMigration/#migrating-existing-indy-sdk-aca-py-deployments-to-askar","title":"Migrating Existing Indy SDK ACA-Py Deployments to Askar","text":"If you have an existing deployment, in changing the --wallet-type
configuration setting, your database must be migrated from the Indy SDK format to Aries Askar format. In order to facilitate the migration, an Indy SDK to Askar migration script has been published in the acapy-tools repository. There is lots of information in that repository about the migration tool and how to use it. The following is a summary of the steps you will have to perform. Of course, all deployments are a little (or a lot!) different, and your exact steps will be dependent on where and how you have deployed ACA-Py.
Note that in these steps you will have to take your ACA-Py instance offline, so scheduling the maintenance must be a part of your migration plan. You will also want to script the entire process so that downtime and risk of manual mistakes are minimized.
We hope that you have one or two test environments (e.g., Dev and Test) to run through these steps before upgrading your production deployment. As well, it is good if you can make a copy of your production database and test the migration on the real (copy) database before the actual upgrade.
askar-upgrade
script. For example:askar-upgrade \\\n --strategy dbpw \\\n --uri postgres://<username>:<password>@<hostname>:<port>/<dbname> \\\n --wallet-name <wallet name> \\\n --wallet-key <wallet key>\n
--wallet-type
configuration setting to askar
--wallet-type
change to rollback to the pre-migration state.It is very important that the Askar Upgrade script has direct access to the database. In our very first upgrade attempt, we ran the Upgrade Askar script from a container running outside of our container orchestration platform (OpenShift) using port forwarding. The script ran EXTREMELY slowly, taking literally hours to run before we finally stopped it. Once we ran the script inside the OpenShift environment, the script ran (for the same database) in about 7 minutes. The entire app downtime was less than 20 minutes.
"},{"location":"deploying/IndySDKtoAskarMigration/#questions","title":"Questions?","text":"If you have questions, comments, or suggestions about the upgrade process, please use the ACA-Py channel on OpenWallet Foundation Discord, or submit a GitHub issue to the ACA-Py repository.
"},{"location":"deploying/Poetry/","title":"Poetry Cheat Sheet for Developers","text":""},{"location":"deploying/Poetry/#introduction-to-poetry","title":"Introduction to Poetry","text":"Poetry is a dependency management and packaging tool for Python that aims to simplify and enhance the development process. It offers features for managing dependencies, virtual environments, and building and publishing Python packages.
"},{"location":"deploying/Poetry/#virtual-environments-with-poetry","title":"Virtual Environments with Poetry","text":"Poetry manages virtual environments for your projects to ensure clean and isolated development environments.
"},{"location":"deploying/Poetry/#creating-a-virtual-environment","title":"Creating a Virtual Environment","text":"poetry install\n
"},{"location":"deploying/Poetry/#activating-the-virtual-environment","title":"Activating the Virtual Environment","text":"poetry shell\n
Alternatively you can source the environment settings in the current shell
source $(poetry env info --path)/bin/activate\n
for powershell users this would be
(& ((poetry env info --path) + \"\\Scripts\\activate.ps1\")\n
"},{"location":"deploying/Poetry/#deactivating-the-virtual-environment","title":"Deactivating the Virtual Environment","text":"When using poetry shell
exit\n
When using the activate
script
deactivate\n
"},{"location":"deploying/Poetry/#dependency-management","title":"Dependency Management","text":"Poetry uses the pyproject.toml
file to manage dependencies. Add new dependencies to this file and update existing ones as needed.
poetry add package-name\n
"},{"location":"deploying/Poetry/#adding-a-development-dependency","title":"Adding a Development Dependency","text":"poetry add --dev package-name\n
"},{"location":"deploying/Poetry/#removing-a-dependency","title":"Removing a Dependency","text":"poetry remove package-name\n
"},{"location":"deploying/Poetry/#updating-dependencies","title":"Updating Dependencies","text":"poetry update\n
"},{"location":"deploying/Poetry/#running-tasks-with-poetry","title":"Running Tasks with Poetry","text":"Poetry provides a way to run scripts and commands without activating the virtual environment explicitly.
"},{"location":"deploying/Poetry/#running-a-command","title":"Running a Command","text":"poetry run command-name\n
"},{"location":"deploying/Poetry/#running-a-script","title":"Running a Script","text":"poetry run python script.py\n
"},{"location":"deploying/Poetry/#building-and-publishing-with-poetry","title":"Building and Publishing with Poetry","text":"Poetry streamlines the process of building and publishing Python packages.
"},{"location":"deploying/Poetry/#building-the-package","title":"Building the Package","text":"poetry build\n
"},{"location":"deploying/Poetry/#publishing-the-package","title":"Publishing the Package","text":"poetry publish\n
"},{"location":"deploying/Poetry/#using-extras","title":"Using Extras","text":"Extras allow you to specify additional dependencies based on project requirements.
"},{"location":"deploying/Poetry/#installing-with-extras","title":"Installing with Extras","text":"poetry install -E extras-name\n
for example
poetry install -E \"askar bbs indy\"\n
"},{"location":"deploying/Poetry/#managing-development-dependencies","title":"Managing Development Dependencies","text":"Development dependencies are useful for tasks like testing, linting, and documentation generation.
"},{"location":"deploying/Poetry/#installing-development-dependencies","title":"Installing Development Dependencies","text":"poetry install --dev\n
"},{"location":"deploying/Poetry/#additional-resources","title":"Additional Resources","text":"redis_queue
","text":"It provides a mechanism to persists both inbound and outbound messages using redis, deliver messages and webhooks, and dispatch events.
More details can be found here.
"},{"location":"deploying/RedisPlugins/#redis-queue-configuration-yaml","title":"Redis Queue configurationyaml
","text":"redis_queue:\n connection: \n connection_url: \"redis://default:test1234@172.28.0.103:6379\"\n\n ### For Inbound ###\n inbound:\n acapy_inbound_topic: \"acapy_inbound\"\n acapy_direct_resp_topic: \"acapy_inbound_direct_resp\"\n\n ### For Outbound ###\n outbound:\n acapy_outbound_topic: \"acapy_outbound\"\n mediator_mode: false\n\n ### For Event ###\n event:\n event_topic_maps:\n ^acapy::webhook::(.*)$: acapy-webhook-$wallet_id\n ^acapy::record::([^:]*)::([^:]*)$: acapy-record-with-state-$wallet_id\n ^acapy::record::([^:])?: acapy-record-$wallet_id\n acapy::basicmessage::received: acapy-basicmessage-received\n acapy::problem_report: acapy-problem_report\n acapy::ping::received: acapy-ping-received\n acapy::ping::response_received: acapy-ping-response_received\n acapy::actionmenu::received: acapy-actionmenu-received\n acapy::actionmenu::get-active-menu: acapy-actionmenu-get-active-menu\n acapy::actionmenu::perform-menu-action: acapy-actionmenu-perform-menu-action\n acapy::keylist::updated: acapy-keylist-updated\n acapy::revocation-notification::received: acapy-revocation-notification-received\n acapy::revocation-notification-v2::received: acapy-revocation-notification-v2-received\n acapy::forward::received: acapy-forward-received\n event_webhook_topic_maps:\n acapy::basicmessage::received: basicmessages\n acapy::problem_report: problem_report\n acapy::ping::received: ping\n acapy::ping::response_received: ping\n acapy::actionmenu::received: actionmenu\n acapy::actionmenu::get-active-menu: get-active-menu\n acapy::actionmenu::perform-menu-action: perform-menu-action\n acapy::keylist::updated: keylist\n deliver_webhook: true\n
redis_queue.connection.connection_url
: This is required and is expected in redis://{username}:{password}@{host}:{port}
format.redis_queue.inbound.acapy_inbound_topic
: This is the topic prefix for the inbound message queues. Recipient key of the message are also included in the complete topic name. The final topic will be in the following format acapy_inbound_{recip_key}
redis_queue.inbound.acapy_direct_resp_topic
: Queue topic name for direct responses to inbound message.redis_queue.outbound.acapy_outbound_topic
: Queue topic name for the outbound messages. Used by Deliverer service to deliver the payloads to specified endpoint.redis_queue.outbound.mediator_mode
: Set to true, if using Redis as a http bridge when setting up a mediator agent. By default, it is set to false.event.event_topic_maps
: Event topic mapevent.event_webhook_topic_maps
: Event to webhook topic mapevent.deliver_webhook
: When set to true, this will deliver webhooks to endpoints specified in admin.webhook_urls
. By default, set to true.Running the plugin with docker is simple. An example docker-compose.yml file is available which launches both ACA-Py with redis and an accompanying Redis cluster.
docker-compose up --build -d\n
More details can be found here.
"},{"location":"deploying/RedisPlugins/#without-docker","title":"Without Docker","text":"Installation
pip install git+https://github.com/openwallet-foundation/acapy-plugins.git\n
Startup ACA-Py with redis_queue
plugin loaded
docker network create --subnet=172.28.0.0/24 `network_name`\nexport REDIS_PASSWORD=\" ... As specified in redis_cluster.conf ... \"\nexport NETWORK_NAME=\"`network_name`\"\naca-py start \\\n --plugin redis_queue.v1_0.events \\\n --plugin-config plugins-config.yaml \\\n -it redis_queue.v1_0.inbound redis 0 -ot redis_queue.v1_0.outbound\n # ... the remainder of your startup arguments\n
Regardless of the options above, you will need to startup deliverer
and relay
/mediator
service as a bridge to receive inbound messages. Consider the following to build your docker-compose
file which should also start up your redis cluster:
Relay + Deliverer
relay:\n image: redis-relay\n build:\n context: ..\n dockerfile: redis_relay/Dockerfile\n ports:\n - 7001:7001\n - 80:80\n environment:\n - REDIS_SERVER_URL=redis://default:test1234@172.28.0.103:6379\n - TOPIC_PREFIX=acapy\n - STATUS_ENDPOINT_HOST=0.0.0.0\n - STATUS_ENDPOINT_PORT=7001\n - STATUS_ENDPOINT_API_KEY=test_api_key_1\n - INBOUND_TRANSPORT_CONFIG=[[\"http\", \"0.0.0.0\", \"80\"]]\n - TUNNEL_ENDPOINT=http://relay-tunnel:4040\n - WAIT_BEFORE_HOSTS=15\n - WAIT_HOSTS=redis-node-3:6379\n - WAIT_HOSTS_TIMEOUT=120\n - WAIT_SLEEP_INTERVAL=1\n - WAIT_HOST_CONNECT_TIMEOUT=60\n depends_on:\n - redis-cluster\n - relay-tunnel\n networks:\n - acapy_default\ndeliverer:\n image: redis-deliverer\n build:\n context: ..\n dockerfile: redis_deliverer/Dockerfile\n ports:\n - 7002:7002\n environment:\n - REDIS_SERVER_URL=redis://default:test1234@172.28.0.103:6379\n - TOPIC_PREFIX=acapy\n - STATUS_ENDPOINT_HOST=0.0.0.0\n - STATUS_ENDPOINT_PORT=7002\n - STATUS_ENDPOINT_API_KEY=test_api_key_2\n - WAIT_BEFORE_HOSTS=15\n - WAIT_HOSTS=redis-node-3:6379\n - WAIT_HOSTS_TIMEOUT=120\n - WAIT_SLEEP_INTERVAL=1\n - WAIT_HOST_CONNECT_TIMEOUT=60\n depends_on:\n - redis-cluster\n networks:\n - acapy_default\n
Mediator + Deliverer
mediator:\n image: acapy-redis-queue\n build:\n context: ..\n dockerfile: docker/Dockerfile\n ports:\n - 3002:3001\n depends_on:\n - deliverer\n volumes:\n - ./configs:/home/indy/configs:z\n - ./acapy-endpoint.sh:/home/indy/acapy-endpoint.sh:z\n environment:\n - WAIT_BEFORE_HOSTS=15\n - WAIT_HOSTS=redis-node-3:6379\n - WAIT_HOSTS_TIMEOUT=120\n - WAIT_SLEEP_INTERVAL=1\n - WAIT_HOST_CONNECT_TIMEOUT=60\n - TUNNEL_ENDPOINT=http://mediator-tunnel:4040\n networks:\n - acapy_default\n entrypoint: /bin/sh -c '/wait && ./acapy-endpoint.sh poetry run aca-py \"$$@\"' --\n command: start --arg-file ./configs/mediator.yml\n\ndeliverer:\n image: redis-deliverer\n build:\n context: ..\n dockerfile: redis_deliverer/Dockerfile\n depends_on:\n - redis-cluster\n ports:\n - 7002:7002\n environment:\n - REDIS_SERVER_URL=redis://default:test1234@172.28.0.103:6379\n - TOPIC_PREFIX=acapy\n - STATUS_ENDPOINT_HOST=0.0.0.0\n - STATUS_ENDPOINT_PORT=7002\n - STATUS_ENDPOINT_API_KEY=test_api_key_2\n - WAIT_BEFORE_HOSTS=15\n - WAIT_HOSTS=redis-node-3:6379\n - WAIT_HOSTS_TIMEOUT=120\n - WAIT_SLEEP_INTERVAL=1\n - WAIT_HOST_CONNECT_TIMEOUT=60\n networks:\n - acapy_default\n
Both relay and mediator demos are also available.
"},{"location":"deploying/RedisPlugins/#acapy-cache-redis-redis_cache","title":"acapy-cache-redisredis_cache
","text":"ACA-Py uses a modular cache layer to story key-value pairs of data. The purpose of this plugin is to allow ACA-Py to use Redis as the storage medium for it's caching needs.
More details can be found here.
"},{"location":"deploying/RedisPlugins/#redis-cache-plugin-configuration-yaml","title":"Redis Cache Plugin configurationyaml
","text":"redis_cache:\n connection: \"redis://default:test1234@172.28.0.103:6379\"\n max_connection: 50\n credentials:\n username: \"default\"\n password: \"test1234\"\n ssl:\n cacerts: ./ca.crt\n
redis_cache.connection
: This is required and is expected in redis://{username}:{password}@{host}:{port}
format.redis_cache.max_connection
: Maximum number of redis pool connections. Default: 50redis_cache.credentials.username
: Redis instance usernameredis_cache.credentials.password
: Redis instance passwordredis_cache.ssl.cacerts
Running the plugin with docker is simple and straight-forward. There is an example docker-compose.yml file in the root of the project that launches both ACA-Py and an accompanying Redis instance. Running it is as simple as:
docker-compose up --build -d\n
To launch ACA-Py with an accompanying redis cluster of 6 nodes (3 primaries and 3 replicas), please refer to example docker-compose.cluster.yml and run the following:
Note: Cluster requires external docker network with specified subnet
docker network create --subnet=172.28.0.0/24 `network_name`\nexport REDIS_PASSWORD=\" ... As specified in redis_cluster.conf ... \"\nexport NETWORK_NAME=\"`network_name`\"\ndocker-compose -f docker-compose.cluster.yml up --build -d\n
Installation
pip install git+https://github.com/Indicio-tech/aries-acapy-cache-redis.git\n
Startup ACA-Py with redis_cache
plugin loaded
aca-py start \\\n --plugin acapy_cache_redis.v0_1 \\\n --plugin-config plugins-config.yaml \\\n # ... the remainder of your startup arguments\n
or
aca-py start \\\n --plugin acapy_cache_redis.v0_1 \\\n --plugin-config-value \"redis_cache.connection=redis://redis-host:6379/0\" \\\n --plugin-config-value \"redis_cache.max_connections=90\" \\\n --plugin-config-value \"redis_cache.credentials.username=username\" \\\n --plugin-config-value \"redis_cache.credentials.password=password\" \\\n # ... the remainder of your startup arguments\n
"},{"location":"deploying/RedisPlugins/#redis-cluster","title":"Redis Cluster","text":"If you startup a redis cluster and an ACA-Py agent loaded with either redis_queue
or redis_cache
plugin or both, then during the initialization of the plugin, it will bind an instance of redis.asyncio.RedisCluster
(onto the root_profile
). Other plugin will have access to this redis client for it's functioning. This is done for efficiency and to avoid duplication of resources.
Some releases of ACA-Py may be improved by, or even require, an upgrade when moving to a new version. Such changes are documented in the CHANGELOG.md, and those with ACA-Py deployments should take note of those upgrades. This document summarizes the upgrade system in ACA-Py.
"},{"location":"deploying/UpgradingACA-Py/#version-information-and-automatic-upgrades","title":"Version Information and Automatic Upgrades","text":"The file version.py contains the current version of a running instance of ACA-Py. In addition, a record is made in the ACA-Py secure storage (database) about the \"most recently upgraded\" version. When deploying a new version of ACA-Py, the version.py value will be higher than the version in secure storage. When that happens, an upgrade is executed, and on successful completion, the version is updated in secure storage to match what is in version.py.
Upgrades are defined in the Upgrade Definition YML file. For a given version listed in the follow, the corresponding entry is what actions are required when upgrading from a previous version. If a version is not listed in the file, there is no upgrade defined for that version from its immediate predecessor version.
Once an upgrade is identified as needed, the process is:
In some cases, it may be necessary to do an offline upgrade, where ACA-Py is taken off line temporarily, the database upgraded explicitly, and then ACA-Py re-deployed as normal. As yet, we do not have any use cases for this, but those deploying ACA-Py should be aware of this possibility. For example, we may at some point need an upgrade that MUST NOT be executed by more than one ACA-Py instance. In that case, a \"normal\" upgrade could be dangerous for deployments on container orchestration platforms like Kubernetes.
If the Maintainers of ACA-Py recognize a case where ACA-Py must be upgraded while offline, a new Upgrade feature will be added that will prevent the \"auto upgrade\" process from executing. See Issue 2201 and Pull Request 2204 for the status of that feature.
Those deploying ACA-Py upgrades for production installations (forced offline or not) should check in each CHANGELOG.md release entry about what upgrades (if any) will be run when upgrading to that version, and consider how they want those upgrades to run in their ACA-Py installation. In most cases, simply deploying the new version should be OK. If the number of records to be upgraded is high (such as a \"resave connections\" upgrade to a deployment with many, many connections), you may want to do a test upgrade offline first, to see if there is likely to be a service disruption during the upgrade. Plan accordingly!
"},{"location":"deploying/UpgradingACA-Py/#tagged-upgrades","title":"Tagged upgrades","text":"Upgrades are defined in the Upgrade Definition YML file, in addition to specifying upgrade actions by version they can also be specified by named tags. Unlike version based upgrades where all applicable version based actions will be performed based upon sorted order of versions, with named tags only actions corresponding to provided tags will be performed. Note: --force-upgrade
is required when running name tags based upgrade (i.e. providing --named-tag
).
Tags are specified in YML file as below:
fix_issue_rev_reg:\n fix_issue_rev_reg_records: true\n
Example:
./scripts/run_docker upgrade --force-upgrade --named-tag fix_issue_rev_reg\n\n# In case, running multiple tags [say test1 & test2]:\n ./scripts/run_docker upgrade --force-upgrade --named-tag test1 --named-tag test2\n
"},{"location":"deploying/UpgradingACA-Py/#subwallet-upgrades","title":"Subwallet upgrades","text":"With multitenant enabled, there is a subwallet associated with each tenant profile, so there is a need to upgrade those sub wallets in addition to the base wallet associated with root profile.
There are 2 options to perform such upgrades:
--upgrade-all-subwallets
This will apply the upgrade steps to all sub wallets (tenant profiles) and the base wallet (root profiles).
--upgrade-subwallet
This will apply the upgrade steps to specified sub wallets (identified by wallet id) and the base wallet.
Note: multiple specifications allowed
"},{"location":"deploying/UpgradingACA-Py/#exceptions","title":"Exceptions","text":"There are a couple of upgrade exception conditions to consider, as outlined in the following sections.
"},{"location":"deploying/UpgradingACA-Py/#no-version-in-secure-storage","title":"No version in secure storage","text":"Versions prior to ACA-Py 0.8.1 did not automatically populate the secure storage \"version\" record. That only occurred if an upgrade was explicitly executed. As of ACA-Py 0.8.1, the version record is added immediately after the secure storage database is created. If you are upgrading to ACA-Py 0.8.1 or later, and there is no version record in the secure storage, ACA-Py will assume you are running version 0.7.5, and execute the upgrades from version 0.7.5 to the current version. The choice of 0.7.5 as the default is safe because the same upgrades will be run on any version of ACA-Py up to and including 0.7.5, as can be seen in the Upgrade Definition YML file. Thus, even if you are really upgrading from (for example) 0.6.2, the same upgrades are needed as from 0.7.5 to a post-0.8.1 version.
"},{"location":"deploying/UpgradingACA-Py/#forcing-an-upgrade","title":"Forcing an upgrade","text":"If you need to force an upgrade from a given version of ACA-Py, a pair of configuration options can be used together. If you specify \"--from-version <ver>
\" and \"--force-upgrade
\", the --from-version
version will override what is found (or not) in secure storage, and the upgrade will be from that version to the current one. For example, if you have \"0.8.1\" in your \"secure storage\" version, and you know that the upgrade for version 0.8.1 has not been executed, you can use the parameters --from-version v0.7.5 --force-upgrade
to force the upgrade on next starting an ACA-Py instance. However, given the few upgrades defined prior to version 0.8.1, and the \"no version in secure storage\" handling, it is unlikely this capability will ever be needed. We expect to deprecate and remove these options in future (post-0.8.1) ACA-Py versions.
This document is a \"concept of operations\" for an instance of an ACA-Py agent deployed from the primary artifact (a PyPi package) produced by this repo. In such a deployment there are always two components - a configured agent itself, and a controller that injects into that agent the business rules for the particular agent instance (see diagram).
The deployed agent messages with other agents via DIDComm protocols, and as events associated with those messages occur, sends webhook HTTP notifications to the controller. The agent also exposes for the controller's exclusive use an HTTP API covering all of the administrative handlers for those events. The controller receives the notifications from the agent, decides (with business rules - possible by asking a person using a UI) how to respond to the event and calls back to the agent via the HTTP API. Of course, the controller may also initiate events (e.g. messaging another agent) by calling that same API.
The following is an example of the interactions involved in creating a connection using the DIDComm \"Establish Connection\" protocol. The controller requests from the agent (via the administrative API) a connection invitation from the agent, and receives one back. The controller provides it to another agent (perhaps by displaying it in a QR code). Shortly after, the agent receives a DIDComm \"Connection Request\" message. The agent, sends it to the controller. The controller decides to accept the connection and calls the API with instructions to the agent to send a \"Connection Response\" message to the other agent. Since the controller always wants to know with whom a connection has been created, the controller also sends instructions to the agent (via the API, of course) to send a request presentation message to the new connection. And so on... During the interactions, the agent is tracking the state of the connections, and the state of the protocol instances (threads). Likewise, the controller may also be retaining state - after all, it's an application that could do anything.
Most developers will configure a \"black box\" instance of the ACA-Py. They need to know how it works, the DIDComm protocols it supports, the events it will generate and the administrative API it exposes. However, they don't need to drill into and maintain the ACA-Py code. Such developers will build controller applications (basically, traditional web apps) that at their simplest, use an HTTP interface to receive notification and send HTTP requests to the agent. It's the business logic implemented in, or accessed by the controller that gives the deployment its personality and role.
Note: the ACA-Py agent is designed to be stateless, persisting connection and protocol state to storage (such as Postgres database). As such, agents can be deployed to support horizontal scaling as necessary. Controllers can also be implemented to support horizontal scaling.
The sections below detail the internals of the ACA-Py and it's configurable elements, and the conceptual elements of a controller. There is no \"Aries controller\" repo to fork, as it is essentially just a web app. There are demos of using the elements in this repo, and several sample applications that you can use to get started on your on controller.
"},{"location":"deploying/deploymentModel/#aca-py","title":"ACA-Py","text":"ACA-Py implement services to manage the execution of DIDComm messaging protocols for interacting with other DIDComm agents, and exposes an administrative HTTP API that supports a controller to direct how the agent should respond to messaging events. The agent relies on the controller to provide the business rules for handling the messaging events, and to initiate the execution of new DIDComm protocol instances. The internals of an ACA-Py instance is diagramed below.
Instances of the ACA-Py agents are configured with the following sub-components:
A controller provides the personality of ACA-Py agent instance - the business logic (human, machine or rules driven) that drive the behaviour of the agent. The controller\u2019s \u201cBusiness Logic\u201d in a cloud agent could be built into the controller app, could be an integration back to an enterprise system, or even a user interface for an individual. In all cases, the business logic provide responses to agent events or initiates agent actions. A deployed controller talks to a single ACA-Py agent deployment and manages the configuration of that agent. Both can be configured and deployed to support horizontal scaling.
Generically, a controller is a web app invoked by HTTP webhook calls from its corresponding ACA-Py agent and invoking the DIDComm administration capabilities of the ACA-Py agent by calling the REST API exposed by that cloud agent. As well as responding to ACA-Py agent events, the controller initiates DIDComm protocol instances using the same REST API.
The controller and ACA-Py agent deployment MUST secure the HTTP interface between the two components. The interface provides the same HTTP integration between services as modern apps found in any enterprise today, and must be correspondingly secured.
A controller implements the following capabilities.
While there are several examples of controllers, there is no \u201ccookie cutter\u201d repository to fork and customize. A controller is just a web service that receives HTTP requests (webhooks) and sends HTTP messages to the ACA-Py agent it controls via the REST API exposed by that agent.
"},{"location":"deploying/deploymentModel/#deployment","title":"Deployment","text":"The ACA-Py agent CI pipeline configured into the repository generates a PyPi package as an artifact. Implementers will generally have a controller repository, possibly copied from an existing controller instance, that has the code (business logic) for the controller and the configuration (transports, handlers, DIDComm protocols, etc.) for the ACA-Py agent instance. In the most common scenario, the ACA-Py agent and controller instances will be deployed based on the artifacts (e.g. container images) generated from that controller repository. With the simple HTTP-based interface between the controller and ACA-Py agent, both components can be horizontally scaled as needed, with a load balancer between the components. The configuration of the ACA-Py agent to use the Postgres wallet supports enterprise scale agent deployments.
Current examples of deployed instances of ACA-Py agent and controllers include:
This design proposes to extend the ACA-PY to support Hyperledger AnonCreds credentials and presentations in the W3C Verifiable Credentials (VC) and Verifiable Presentations (VP) Format. The aim is to transition from the legacy AnonCreds format specified in Aries-Legacy-Method to the W3C VC format.
"},{"location":"design/AnoncredsW3CCompatibility/#overview","title":"Overview","text":"The pre-requisites for the work are:
As of 2024-01-15, these pre-requisites have been met.
"},{"location":"design/AnoncredsW3CCompatibility/#impacts-on-aca-py","title":"Impacts on ACA-Py","text":""},{"location":"design/AnoncredsW3CCompatibility/#issuer","title":"Issuer","text":"Issuer support needs to be added for using the RFC 0809 VC-DI attachment format when sending Issue Credential v2.0 protocoloffer
and issue
messages and when receiving request
messages.
Related notes:
A mechanism must be defined such that an Issuer controller can use the ACA-Py Admin API to initiate the sending of an AnonCreds credential Offer using the RFC 0809 VC-DI attachment format.
A credential's encoded attributes are not included in the issued AnonCreds W3C VC format credential. To be determined how that impacts the issuing process.
"},{"location":"design/AnoncredsW3CCompatibility/#verifier","title":"Verifier","text":"A verifier wanting a W3C VP Format presentation will send the Present Proof v2.0 request
message with an RFC 0510 DIF Presentation Exchange format attachment.
If needed, the RFC 0510 DIF Presentation Exchange document will be clarified and possibly updated to enable its use for handling AnonCreds W3C VP format presentations.
An AnonCreds W3C VP format presentation does not include the encoded revealed attributes, and the encoded values must be calculated as needed. To be determined where those would be needed.
"},{"location":"design/AnoncredsW3CCompatibility/#holder","title":"Holder","text":"A holder must support RFC 0809 VC-DI attachments when receiving Issue Credential v2.0 offer
and issue
messages, and when sending request
messages.
On receiving an Issue Credential v2.0 offer
message with a RFC 0809 VC-DI, the holder MUST respond using the RFC 0809 VC-DI on the subsequent request
message.
On receiving a credential from an issuer in an RFC 0809 VC-DI attachment, the holder must process and store the credential for subsequent use in presentations.
On receiving an RFC 0510 DIF Presentation Exchange request
message, a holder must include AnonCreds verifiable credentials in the search for credentials satisfying the request, and if found and selected for use, must construct the presentation using the RFC 0510 DIF Presentation Exchange presentation format, with an embedded AnonCreds W3C VP format presentation.
offer
message with an RFC 0809 VC-DI attachment.request
message with an RFC 0510 DIF Presentation Exchange format attachment that can be satisfied with AnonCreds credentials held by the holder.restrictions
and revocation
data elements conveyed?It appears that the issue and presentation sides can be approached independently, assuming that any stored AnonCreds VC can be used in an AnonCreds W3C VP format presentation.
"},{"location":"design/AnoncredsW3CCompatibility/#issue-credential","title":"Issue Credential","text":"request
message with an RFC 0510 DIF Presentation Exchange attachment so that AnonCreds VCs can found and used in the subsequent response.presentation
message with an RFC 0510 DIF Presentation Exchange containing AnonCreds W3C VP(s) derived from AnonCreds source VCs.After thoroughly reviewing upcoming changes from anoncreds-rs PR273, the classes or AnoncredsObject
impacted by changes are as follows:
W3CCredential
create
, load
)process
, to_legacy
, add_non_anoncreds_integrity_proof
, set_id
, set_subject_id
, add_context
, add_type
)schema_id
, cred_def_id
, rev_reg_id
, rev_reg_index
)create_w3c_credential
, process_w3c_credential
, _object_from_json
, _object_get_attribute
, w3c_credential_add_non_anoncreds_integrity_proof
, w3c_credential_set_id
, w3c_credential_set_subject_id
, w3c_credential_add_context
, w3c_credential_add_type
)W3CPresentation
create
, load
)verify
)create_w3c_presentation
, _object_from_json
, verify_w3c_presentation
)They will be added to __init__.py as additional exports of AnoncredsObject.
We also have to consider which classes or anoncreds objects have been modified
The classes modified according to the same PR mentioned above are:
Credential
from_w3c
)to_w3c
)credential_from_w3c
, credential_to_w3c
)PresentCredential
_get_entry
, add_attributes
, add_predicates
)The issuance, presentation and verification of legacy anoncreds are implemented in this ./acapy_agent/anoncreds directory. Therefore, we will also start from there.
Let us navigate these implementation examples through the respective processes of the concerning agents - Issuer and Holder as described in https://github.com/hyperledger/anoncreds-rs/blob/main/README.md. We will proceed through the following processes in comparison with the legacy anoncreds implementations while watching out for signature differences between the two. Looking at the /anoncreds/issuer.py file, from AnonCredsIssuer
class:
Create VC_DI Credential Offer
According to this DI credential offer attachment format - didcomm/w3c-di-vc-offer@v0.1,
could be the parameters for create_offer
method.
Create VC_DI Credential
NOTE: There has been some changes to encoding of attribute values for creating a credential, so we have to be adjust to the new changes.
async def create_credential(\n self,\n credential_offer: dict,\n credential_request: dict,\n credential_values: dict,\n ) -> str:\n...\n...\n try:\n credential = await asyncio.get_event_loop().run_in_executor(\n None,\n lambda: W3CCredential.create(\n cred_def.raw_value,\n cred_def_private.raw_value,\n credential_offer,\n credential_request,\n raw_values,\n None,\n None,\n None,\n None,\n ),\n )\n...\n
Create VC_DI Credential Request
async def create_vc_di_credential_request(\n self, credential_offer: dict, credential_definition: CredDef, holder_did: str\n ) -> Tuple[str, str]:\n...\n...\ntry:\n secret = await self.get_master_secret()\n (\n cred_req,\n cred_req_metadata,\n ) = await asyncio.get_event_loop().run_in_executor(\n None,\n W3CCredentialRequest.create,\n None,\n holder_did,\n credential_definition.to_native(),\n secret,\n AnonCredsHolder.MASTER_SECRET_ID,\n credential_offer,\n )\n...\n
Create VC_DI Credential Presentation
async def create_vc_di_presentation(\n self,\n presentation_request: dict,\n requested_credentials: dict,\n schemas: Dict[str, AnonCredsSchema],\n credential_definitions: Dict[str, CredDef],\n rev_states: dict = None,\n ) -> str:\n...\n...\n try:\n secret = await self.get_master_secret()\n presentation = await asyncio.get_event_loop().run_in_executor(\n None,\n Presentation.create,\n presentation_request,\n present_creds,\n self_attest,\n secret,\n {\n schema_id: schema.to_native()\n for schema_id, schema in schemas.items()\n },\n {\n cred_def_id: cred_def.to_native()\n for cred_def_id, cred_def in credential_definitions.items()\n },\n )\n...\n
"},{"location":"design/AnoncredsW3CCompatibility/#converting-an-already-issued-legacy-anoncreds-to-vc_di-formatvice-versa","title":"Converting an already issued legacy anoncreds to VC_DI format(vice versa)","text":"In this case, we can use to_w3c
method of Credential
class to convert from legacy to w3c and to_legacy
method of W3CCredential
class to convert from w3c to legacy.
We could call to_w3c
method like this:
vc_di_cred = Credential.to_w3c(cred_def)\n
and for to_legacy
:
legacy_cred = W3CCredential.to_legacy()\n
We don't need to input any parameters to it as it in turn calls Credential.from_w3c()
method under the hood.
Keeping in mind that we are trying to create anoncreds(not another type of VC) in w3c format, what if we add a protocol-level vc_di format support by adding a new format VC_DI
in ./protocols/issue_credential/v2_0/messages/cred_format.py
-
# /protocols/issue_credential/v2_0/messages/cred_format.py\n\nclass Format(Enum):\n \u201c\u201d\u201dAttachment Format\u201d\u201d\u201d\n INDY = FormatSpec(...)\n LD_PROOF = FormatSpec(...)\n VC_DI = FormatSpec(\n \u201cvc_di/\u201d,\n CredExRecordVCDI,\n DeferLoad(\n \u201cacapy_agent.protocols.issue_credential.v2_0\u201d\n \u201c.formats.vc_di.handler.AnonCredsW3CFormatHandler\u201d\n ),\n )\n
And create a new CredExRecordVCDI in reference to V20CredExRecordLDProof
# /protocols/issue_credential/v2_0/models/detail/w3c.py\n\nclass CredExRecordW3C(BaseRecord):\n \"\"\"Credential exchange W3C detail record.\"\"\"\n\n class Meta:\n \"\"\"CredExRecordW3C metadata.\"\"\"\n\n schema_class = \"CredExRecordW3CSchema\"\n\n RECORD_ID_NAME = \"cred_ex_w3c_id\"\n RECORD_TYPE = \"w3c_cred_ex_v20\"\n TAG_NAMES = {\"~cred_ex_id\"} if UNENCRYPTED_TAGS else {\"cred_ex_id\"}\n RECORD_TOPIC = \"issue_credential_v2_0_w3c\"\n
Based on the proposed credential attachment format with the new Data Integrity proof in aries-rfcs 809 -
{\n \"@id\": \"284d3996-ba85-45d9-964b-9fd5805517b6\",\n \"@type\": \"https://didcomm.org/issue-credential/2.0/issue-credential\",\n \"comment\": \"<some comment>\",\n \"formats\": [\n {\n \"attach_id\": \"5b38af88-d36f-4f77-bb7a-2f04ab806eb8\",\n \"format\": \"didcomm/w3c-di-vc@v0.1\"\n }\n ],\n \"credentials~attach\": [\n {\n \"@id\": \"5b38af88-d36f-4f77-bb7a-2f04ab806eb8\",\n \"mime-type\": \"application/ld+json\",\n \"data\": {\n \"base64\": \"ewogICAgICAgICAgIkBjb250ZXogWwogICAgICAg...(clipped)...RNVmR0SXFXZhWXgySkJBIgAgfQogICAgICAgIH0=\"\n }\n }\n ]\n}\n
Assuming VCDIDetail
and VCDIOptions
are already in place, VCDIDetailSchema
can be created like so:
# /protocols/issue_credential/v2_0/formats/vc_di/models/cred_detail.py\n\nclass VCDIDetailSchema(BaseModelSchema):\n \"\"\"VC_DI verifiable credential detail schema.\"\"\"\n\n class Meta:\n \"\"\"Accept parameter overload.\"\"\"\n\n unknown = INCLUDE\n model_class = VCDIDetail\n\n credential = fields.Nested(\n CredentialSchema(),\n required=True,\n metadata={\n \"description\": \"Detail of the VC_DI Credential to be issued\",\n \"example\": {\n \"@id\": \"284d3996-ba85-45d9-964b-9fd5805517b6\",\n \"@type\": \"https://didcomm.org/issue-credential/2.0/issue-credential\",\n \"comment\": \"<some comment>\",\n \"formats\": [\n {\n \"attach_id\": \"5b38af88-d36f-4f77-bb7a-2f04ab806eb8\",\n \"format\": \"didcomm/w3c-di-vc@v0.1\"\n }\n ],\n \"credentials~attach\": [\n {\n \"@id\": \"5b38af88-d36f-4f77-bb7a-2f04ab806eb8\",\n \"mime-type\": \"application/ld+json\",\n \"data\": {\n \"base64\": \"ewogICAgICAgICAgIkBjb250ZXogWwogICAgICAg...(clipped)...RNVmR0SXFXZhWXgySkJBIgAgfQogICAgICAgIH0=\"\n }\n }\n ]\n }\n },\n )\n
Then create a W3C format handler with a mapping like so:
# /protocols/issue_credential/v2_0/formats/w3c/handler.py\n\nmapping = {\n CRED_20_PROPOSAL: VCDIDetailSchema,\n CRED_20_OFFER: VCDIDetailSchema,\n CRED_20_REQUEST: VCDIDetailSchema,\n CRED_20_ISSUE: VerifiableCredentialSchema,\n }\n
Doing so would give us more independence in defining a schema suited to AnonCreds in W3C format. Once the proposal protocol can handle the W3C format, the rest of the flow can likely be implemented by adding a vc_di flag to the corresponding routes.
To make sure that once an endpoint has been called to trigger the Issue Credential flow with the RFC 0809 W3C_DI attachment format, the subsequent endpoints also follow this format, we can extend this ATTACHMENT_FORMAT dictionary with the proposed VC_DI format.
# Format specifications\nATTACHMENT_FORMAT = {\n CRED_20_PROPOSAL: {\n V20CredFormat.Format.INDY.api: \"hlindy/cred-filter@v2.0\",\n V20CredFormat.Format.LD_PROOF.api: \"aries/ld-proof-vc-detail@v1.0\",\n V20CredFormat.Format.VC_DI.api: \"aries/vc-di-detail@v2.0\",\n },\n CRED_20_OFFER: {\n V20CredFormat.Format.INDY.api: \"hlindy/cred-abstract@v2.0\",\n V20CredFormat.Format.LD_PROOF.api: \"aries/ld-proof-vc-detail@v1.0\",\n V20CredFormat.Format.VC_DI.api: \"aries/vc-di-detail@v2.0\",\n },\n CRED_20_REQUEST: {\n V20CredFormat.Format.INDY.api: \"hlindy/cred-req@v2.0\",\n V20CredFormat.Format.LD_PROOF.api: \"aries/ld-proof-vc-detail@v1.0\",\n V20CredFormat.Format.VC_DI.api: \"aries/vc-di-detail@v2.0\",\n },\n CRED_20_ISSUE: {\n V20CredFormat.Format.INDY.api: \"hlindy/cred@v2.0\",\n V20CredFormat.Format.LD_PROOF.api: \"aries/ld-proof-vc@v1.0\",\n V20CredFormat.Format.VC_DI.api: \"aries/vc-di@v2.0\",\n },\n}\n
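For illustration, a helper that builds the format entries and filter attachments from this table might look roughly like the following sketch. This is hypothetical and simplified: the import paths and the exact signature of the real helper in the v2.0 routes code should be checked against the source; V20CredFormat and AttachDecorator are the existing message primitives assumed here.

```python
# Hypothetical sketch of a format-filter helper driven by ATTACHMENT_FORMAT.
# Import paths are assumptions based on the module layout described above.
from acapy_agent.messaging.decorators.attach_decorator import AttachDecorator
from acapy_agent.protocols.issue_credential.v2_0.message_types import (
    ATTACHMENT_FORMAT,
    CRED_20_PROPOSAL,
)
from acapy_agent.protocols.issue_credential.v2_0.messages.cred_format import (
    V20CredFormat,
)


def _formats_filters(filt_spec: dict) -> dict:
    """Break out format entries and filter attachments for v2.0 messages."""
    if not filt_spec:
        return {}
    return {
        "formats": [
            V20CredFormat(
                attach_id=fmt_api,
                format_=ATTACHMENT_FORMAT[CRED_20_PROPOSAL][fmt_api],
            )
            for fmt_api in filt_spec
        ],
        "filters_attach": [
            AttachDecorator.data_base64(filt_by_fmt, ident=fmt_api)
            for fmt_api, filt_by_fmt in filt_spec.items()
        ],
    }
```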
This _formats_filters function keeps the attachment formats consistent across each step of the flow. We can see this function gets called in:
/issue-credential-2.0/send-offer
route (in addition to other offer routes)
/issue-credential-2.0/send-request route
/issue-credential-2.0/create route
/issue-credential-2.0/send
route

The same goes for the ATTACHMENT_FORMAT of the Present Proof flow. In this case, the DIF Presentation Exchange formats in these test vectors, influenced by RFC 0510 DIF Presentation Exchange, will be implemented. Here, the _formats_attach function serves the same purpose as above. It gets called in:
/present-proof-2.0/send-proposal
route
/present-proof-2.0/create-request route
/present-proof-2.0/send-request
route

This route indirectly calls the _formats_filters function to create a credential proposal, which is in turn used to create a credential offer in the filter format. The request body for this route might look like this:
{\n \"filter\": [\"vc_di\"],\n \"comment: <some_comment>,\n \"auto-issue\": true,\n \"auto-remove\": true,\n \"replacement_id\": <replacement_id>,\n \"credential_preview\": {\n \"@type\": \"issue-credential/2.0/credential-preview\",\n \"attributes\": {\n ...\n ...\n }\n }\n}\n
This route indirectly calls _format_result_with_details
function to generate a cred_ex_record in the specified format, which is then returned. The request body for this route might look like this:
{\n \"filter\": [\"vc_di\"],\n \"comment: <some_comment>,\n \"auto-remove\": true,\n \"credential_preview\": {\n \"@type\": \"issue-credential/2.0/credential-preview\",\n \"attributes\": {\n ...\n ...\n }\n }\n}\n
The request body for this route might look like this:
{\n \"connection_id\": <connection_id>,\n \"filter\": [\"vc_di\"],\n \"comment: <some_comment>,\n \"auto-remove\": true,\n \"replacement_id\": <replacement_id>,\n \"credential_preview\": {\n \"@type\": \"issue-credential/2.0/credential-preview\",\n \"attributes\": {\n ...\n ...\n }\n }\n}\n
The request body for this route might look like this:
{\n \"connection_id\": <connection_id>,\n \"filter\": [\"vc_di\"],\n \"comment: <some_comment>,\n \"auto-issue\": true,\n \"auto-remove\": true,\n \"replacement_id\": <replacement_id>,\n \"holder_did\": <holder_did>,\n \"credential_preview\": {\n \"@type\": \"issue-credential/2.0/credential-preview\",\n \"attributes\": {\n ...\n ...\n }\n }\n}\n
The request body for this route might look like this:
{\n \"connection_id\": <connection_id>,\n \"filter\": [\"vc_di\"],\n \"comment: <some_comment>,\n \"auto-remove\": true,\n \"replacement_id\": <replacement_id>,\n \"holder_did\": <holder_did>,\n \"credential_preview\": {\n \"@type\": \"issue-credential/2.0/credential-preview\",\n \"attributes\": {\n ...\n ...\n }\n }\n}\n
"},{"location":"design/AnoncredsW3CCompatibility/#presentation-admin-routes","title":"Presentation Admin Routes","text":"The request body for this route might look like this:
{\n ...\n ...\n \"connection_id\": <connection_id>,\n \"presentation_proposal\": [\"vc_di\"],\n \"comment\": \"<some_comment>\",\n \"auto-present\": true,\n \"auto-remove\": true,\n \"trace\": false\n}\n
The request body for this route might look like this:
{\n ...\n ...\n \"connection_id\": <connection_id>,\n \"presentation_proposal\": [\"vc_di\"],\n \"comment\": \"<some_comment>\",\n \"auto-verify\": true,\n \"auto-remove\": true,\n \"trace\": false\n}\n
The request body for this route might look like this:
{\n ...\n ...\n \"connection_id\": <connection_id>,\n \"presentation_proposal\": [\"vc_di\"],\n \"comment\": \"<some_comment>\",\n \"auto-verify\": true,\n \"auto-remove\": true,\n \"trace\": false\n}\n
The request body for this route might look like this:
{\n \"presentation_definition\": <presentation_definition_schema>,\n \"auto_remove\": true,\n \"dif\": {\n issuer_id: \"<issuer_id>\",\n record_ids: {\n \"<input descriptor id_1>\": [\"<record id_1>\", \"<record id_2>\"],\n \"<input descriptor id_2>\": [\"<record id>\"],\n }\n },\n \"reveal_doc\": {\n // vc_di dict\n }\n\n}\n
"},{"location":"design/AnoncredsW3CCompatibility/#how-a-w3c-credential-is-stored-in-the-wallet","title":"How a W3C credential is stored in the wallet","text":"Storing a credential in the wallet is somewhat dependent on the kinds of metadata that are relevant. The metadata mapping between the W3C credential and an AnonCreds credential is not fully clear yet.
One of the questions we need to answer is whether the preferred approach is to modify the existing store credential function so that any credential type is a valid input, or whether there should be a special function just for storing W3C credentials.
We will duplicate this store_credential function and modify it:
async def store_w3c_credential(...):\n ...\n ...\n try:\n cred = W3CCredential.load(credential_data)\n ...\n ...\n
Question: Would it also be possible to generate the credentials on the fly to eliminate the need for storage?
Answer: I don't think it is possible to eliminate the need for storage, and notably the secure storage (encrypted at rest) supported in Askar.
"},{"location":"design/AnoncredsW3CCompatibility/#how-can-we-handle-multiple-signatures-on-a-w3c-vc-format-credential","title":"How can we handle multiple signatures on a W3C VC Format credential?","text":"Only one of the signature types (CL) is allowed in the AnonCreds format, so if a W3C VC is created by to_legacy()
, all signature types that can't be turned into a CL signature will be dropped. This would make the conversion lossy. Similarly, an AnonCreds credential carries only the CL signature, limiting the output of to_w3c() to signature types that can be derived from the source CL signature. A possible future enhancement would be to add an extra field to the AnonCreds data structure, in which additional signatures could be stored, even if they are not used. This could eliminate the lossiness, but it adds extra complexity and may not be worth doing.
We will write a test for the Aries Agent Test Framework that issues a W3C VC instead of an AnonCreds credential, and then run that test where one of the agents is ACA-Py and the other is based on AFJ -- and vice versa. We will also write a test where a W3C VC is presented after an AnonCreds issuance, and run it with the two roles played by the two different agents. This is a simple approach, but if the tests pass, it should eliminate almost all risk of incompatibility.
"},{"location":"design/AnoncredsW3CCompatibility/#will-we-introduce-new-dependencies-and-what-is-risky-or-easy","title":"Will we introduce new dependencies, and what is risky or easy?","text":"Any significant bugs in the Rust implementation may prevent our wrappers from working, which would also prevent progress (or at least confirmed test results) on the higher-level code.
If AFJ lags behind in delivering equivalent functionality, we may not be able to demonstrate compatibility with the test harness.
"},{"location":"design/AnoncredsW3CCompatibility/#where-should-the-new-issuance-code-go","title":"Where should the new issuance code go?","text":"So the vc directory contains code to verify vc's, is this a logical place to add the code for issuance?
"},{"location":"design/AnoncredsW3CCompatibility/#what-do-we-call-the-new-things-flexcreds-or-just-w3c_xxx","title":"What do we call the new things? Flexcreds? or just W3C_xxx","text":"Are we defining a concept called Flexcreds that is a credential with a proof array that you can generate more specific or limited credentials from? If so should this be included in the naming?
If the wallet receives a \"Flexcred\" credential object with an array of proofs, the wallet may wish to present ONLY the more zero-knowledge AnonCreds proof.
How will wallets support that in a way that is developer-friendly to wallet devs?
presentation
message of the Present Proof v2.0 protocol.

To isolate an upgrade process and trigger it via API, the following pattern was designed to handle multitenant scenarios. It includes an is_upgrading record in the wallet (DB) and a middleware that prevents requests during the upgrade process.
"},{"location":"design/UpgradeViaApi/#flow","title":"Flow","text":"The diagram below describes the sequence of events for the anoncreds upgrade process which it was designed, but the architecture can be used for any upgrade process.
sequenceDiagram\n participant A1 as Agent 1\n participant M1 as Middleware\n participant IAS1 as IsAnoncredsSingleton Set\n participant UIPS1 as UpgradeInProgressSingleton Set\n participant W as Wallet (DB)\n participant UIPS2 as UpgradeInProgressSingleton Set\n participant IAS2 as IsAnoncredsSingleton Set\n participant M2 as Middleware\n participant A2 as Agent 2\n\n Note over A1,A2: Start upgrade for non-anoncreds wallet\n A1->>M1: POST /anoncreds/wallet/upgrade\n M1-->>IAS1: check if wallet is in set\n IAS1-->>M1: wallet is not in set\n M1-->>UIPS1: check if wallet is in set\n UIPS1-->>M1: wallet is not in set\n M1->>A1: OK\n A1-->>W: Add is_upgrading = anoncreds_in_progress record\n A1->>A1: Upgrade wallet\n A1-->>UIPS1: Add wallet to set\n\n Note over A1,A2: Attempted Requests During Upgrade\n\n Note over A1: Attempted Request\n A1->>M1: GET /any-endpoint\n M1-->>IAS1: check if wallet is in set\n IAS1-->>M1: wallet is not in set\n M1-->>UIPS1: check if wallet is in set\n UIPS1-->>M1: wallet is in set\n M1->>A1: 503 Service Unavailable\n\n Note over A2: Attempted Request\n A2->>M2: GET /any-endpoint\n M2-->>IAS2: check if wallet is in set\n IAS2-->>M2: wallet is not in set\n M2-->>UIPS2: check if wallet is in set\n UIPS2-->>M2: wallet is not in set\n A2-->>W: Query is_upgrading = anoncreds_in_progress record\n W-->>A2: record = anoncreds_in_progress\n A2->>A2: Loop until upgrade is finished in separate process\n A2-->>UIPS2: Add wallet to set\n M2->>A2: 503 Service Unavailable\n\n Note over A1,A2: Agent Restart During Upgrade\n A1-->>W: Get is_upgrading record for wallet or all subwallets\n W-->>A1: \n A1->>A1: Resume upgrade if in progress\n A1-->>UIPS1: Add wallet to set\n\n Note over A2: Same as Agent 1\n\n Note over A1,A2: Upgrade Completes\n\n Note over A1: Finish Upgrade\n A1-->>W: set is_upgrading = anoncreds_finished\n A1-->>UIPS1: Remove wallet from set\n A1-->>IAS1: Add wallet to set\n A1->>A1: update subwallet or restart\n\n Note over A2: Detect Upgrade Complete\n A2-->>W: Check is_upgrading = anoncreds_finished\n W-->>A2: record = anoncreds_in_progress\n A2->>A2: Wait 1 second\n A2-->>W: Check is_upgrading = anoncreds_finished\n W-->>A2: record = anoncreds_finished\n A2-->>UIPS2: Remove wallet from set\n A2-->>IAS2: Add wallet to set\n A2->>A2: update subwallet or restart\n\n Note over A1,A2: Restarted Agents After Upgrade\n\n A1-->>W: Get is_upgrading record for wallet or all subwallets\n W-->>A1: \n A1->>IAS1: Add wallet to set if record = anoncreds_finished\n\n Note over A2: Same as Agent 1\n\n Note over A1,A2: Attempted Requests After Upgrade\n\n Note over A1: Attempted Request\n A1->>M1: GET /any-endpoint\n M1-->>IAS1: check if wallet is in set\n IAS1-->>M1: wallet is in set\n M1-->>A1: OK\n\n Note over A2: Same as Agent 1
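To make the request guard concrete, a middleware along the lines of the diagram might look like the sketch below. This is illustrative, not the exact ACA-Py source: the singleton classes are those listed under "Example" below, and the way the wallet is identified from the request is an assumption.

```python
# Hypothetical sketch of the upgrade-guard middleware from the diagram above.
from aiohttp import web

from acapy_agent.wallet.singletons import (  # assumed import path
    IsAnoncredsSingleton,
    UpgradeInProgressSingleton,
)


@web.middleware
async def upgrade_middleware(request: web.Request, handler):
    """Return 503 for wallets that are mid-upgrade."""
    wallet_id = request["context"].profile.name  # assumed wallet identifier

    # Fast path: wallet already known to be upgraded.
    if wallet_id in IsAnoncredsSingleton().wallets:
        return await handler(request)

    # Wallet is currently upgrading: reject, as in the diagram.
    if wallet_id in UpgradeInProgressSingleton().wallets:
        raise web.HTTPServiceUnavailable(reason="Wallet upgrade in progress")

    return await handler(request)
```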
"},{"location":"design/UpgradeViaApi/#example","title":"Example","text":"An example of the implementation can be found via the anoncreds upgrade components.
acapy_agent/wallet/routes.py
in the upgrade_anoncreds
controller
wallet/anoncreds_upgrade.py
admin/server.py
in the upgrade_middleware
function
wallet/singletons.py
core/conductor.py
in the check_for_wallet_upgrades_in_progress
functionACA-Py provides an OpenAPI-documented REST interface for administering the agent's internal state and initiating communication with connected agents.
To see the specifics of the supported endpoints, as well as the expected request and response formats, it is recommended to run the aca-py
agent with the --admin {HOST} {PORT}
and --admin-insecure-mode
command line parameters. This exposes the OpenAPI UI on the provided port for interaction via a web browser. For production deployments, run the agent with --admin-api-key {KEY}
and add the X-API-Key: {KEY}
header to all requests instead of using the --admin-insecure-mode
parameter.
To invoke a specific method:
The mechanical steps are easy; however, the fourth step from the list above can be tricky. Supplying the right data and, where JSON is involved, getting the syntax correct\u2014braces and quotes can be a pain. When steps don't work, start your debugging by looking at your JSON. You may also choose to use a REST client like Postman or Insomnia, which will provide syntax highlighting and other features to simplify the process.
Because API methods often initiate asynchronous processes, the JSON response provided by an endpoint is not always sufficient to determine the next action. To handle this situation, as well as events triggered by external inputs (such as new connection requests), it is necessary to implement a webhook processor, as detailed in the next section.
The combination of an OpenAPI client and webhook processor is referred to as an ACA-Py Controller and is the recommended method to define custom behaviors for your ACA-Py-based agent application.
"},{"location":"features/AdminAPI/#administration-api-webhooks","title":"Administration API Webhooks","text":"When ACA-Py is started with the --webhook-url {URL}
command line parameter, state-management records are sent to the provided URL via POST requests whenever a record is created or its state
property is updated.
When a webhook is dispatched, the record topic
is appended as a path component to the URL. For example, https://webhook.host.example
becomes https://webhook.host.example/topic/connections
when a connection record is updated. A POST request is made to the resulting URL with the body of the request comprising a serialized JSON object. The full set of properties of the current set of webhook payloads are listed below. Note that empty (null-value) properties are omitted.
ACA-Py's Admin API also supports delivering webhooks over WebSocket. This can be especially useful when working with scripts that interact with the Admin API but don't have a web server listening to receive webhooks in response to its actions. No additional command line parameters are required to enable WebSocket support.
Webhooks received over WebSocket will contain the same data as webhooks posted over http but the structure differs in order to communicate details that would have been received as part of the HTTP request path and headers.
topic
: The topic of the webhook, such as connections
or basicmessages
payload
: The payload of the webhook; this is the data usually received in the request body when webhooks are delivered over HTTPwallet_id
: If using multitenancy, this is the wallet ID of the subwallet that emitted the webhook. This value will be omitted if not using multitenancy.To open a WebSocket, connect to the /ws
endpoint of the Admin API.
/connections
)","text":"connection_id
: the unique connection identifier
state
: init
/ invitation
/ request
/ response
/ active
/ error
/ inactive
my_did
: the DID this agent is using in the connectiontheir_did
: the DID the other agent in the connection is usingtheir_label
: a connection label provided by the other agenttheir_role
: a role assigned to the other agent in the connectioninbound_connection_id
: a connection identifier for the related inbound routing connectioninitiator
: self
/ external
/ multiuse
invitation_key
: a verification key used to identify the source connection invitationrequest_id
: the @id
property from the connection request messagerouting_state
: none
/ request
/ active
/ error
accept
: manual
/ auto
error_msg
: the most recent error messageinvitation_mode
: once
/ multi
alias
: a local alias for the connection record/basicmessages
)","text":"connection_id
: the identifier of the related pairwise connectionmessage_id
: the @id
of the incoming agent messagecontent
: the contents of the agent messagestate
: received
/forward
)","text":"Enable using --monitor-forward
.
connection_id
: the identifier of the connection associated with the recipient keyrecipient_key
: the recipient key of the forward message (to
field of the forward message)status
: The delivery status of the received forward message. Possible values:sent_to_session
: Message is sent directly to the connection over an active transport sessionsent_to_external_queue
: Message is sent to an external queue. No information is known on the delivery of the messagequeued_for_delivery
: Message is queued for delivery using outbound transport (recipient connection has an endpoint)waiting_for_pickup
: The connection has no reachable endpoint. Need to wait for the recipient to connect with return routing for deliveryundeliverable
: The connection has no reachable endpoint, and the internal queue for messages is not enabled (--enable-undelivered-queue
)./issue_credential
)","text":"credential_exchange_id
: the unique identifier of the credential exchangeconnection_id
: the identifier of the related pairwise connectionthread_id
: the thread ID of the previously received credential proposal or offerparent_thread_id
: the parent thread ID of the previously received credential proposal or offerinitiator
: issue-credential exchange initiator self
/ external
state
: proposal_sent
/ proposal_received
/ offer_sent
/ offer_received
/ request_sent
/ request_received
/ issued
/ credential_received
/ credential_acked
credential_definition_id
: the ledger identifier of the related credential definitionschema_id
: the ledger identifier of the related credential schemacredential_proposal_dict
: the credential proposal messagecredential_offer
: (Indy) credential offercredential_request
: (Indy) credential requestcredential_request_metadata
: (Indy) credential request metadatacredential_id
: the wallet identifier of the stored credentialraw_credential
: the credential record as receivedcredential
: the credential record as stored in the walletauto_offer
: (boolean) whether to automatically offer the credentialauto_issue
: (boolean) whether to automatically issue the credentialerror_msg
: the previous error message/present_proof
)","text":"presentation_exchange_id
: the unique identifier of the presentation exchangeconnection_id
: the identifier of the related pairwise connectionthread_id
: the thread ID of the previously received presentation proposal or offerinitiator
: present-proof exchange initiator: self
/ external
state
: proposal_sent
/ proposal_received
/ request_sent
/ request_received
/ presentation_sent
/ presentation_received
/ verified
presentation_proposal_dict
: the presentation proposal messagepresentation_request
: (Indy) presentation request (also known as proof request)presentation
: (Indy) presentation (also known as proof)verified
: (string) whether the presentation is verified: true
or false
auto_present
: (boolean) prover choice to auto-present proof as verifier requestserror_msg
: the previous error messageThe best way to develop a new admin API or protocol is to follow one of the existing protocols, such as the Credential Exchange or Presentation Exchange.
The routes.py
file contains the API definitions - API endpoints and payload schemas (note that these are not the Aries message schemas).
The payload schemas are defined using marshmallow and will be validated automatically when the API is executed (using middleware). (This raises a status 422
HTTP response with an error message if the schema validation fails.)
API endpoints are defined using aiohttp_apispec tags (e.g. @doc
, @request_schema
, @response_schema
etc.) which define the input and output parameters of the endpoint. API URL paths are defined in the register()
method and added to the Swagger page in the post_process_routes()
method.
The APIs should return the following HTTP status:
...and should not return:
ACA-Py was originally developed to be used with Hyperledger AnonCreds objects (Schemas, Credential Definitions and Revocation Registries) published on Hyperledger Indy networks. However, with the evolution of \"ledger-agnostic\" AnonCreds, ACA-Py supports publishing AnonCreds objects wherever you want to put them. If you want to add a new \"AnonCreds Methods\" to publish AnonCreds objects to a new Verifiable Data Registry (VDR) (perhaps to your favorite blockchain, or using a web-based DID method), you'll find the details of how to do that here. We often using the term \"ledger\" for the location where AnonCreds objects are published, but here will use \"VDR\", since a VDR does not have to be a ledger.
The information in this document was discussed on an ACA-Py Maintainers call in March 2024. You can watch the call recording by clicking here.
This is an early version of this document and we assume those reading it are quite familiar with using ACA-Py, have a good understanding of ACA-Py internals, and are Python experts. See the Questions or Comments section below for how to get help as you work through this.
"},{"location":"features/AnonCredsMethods/#create-a-plugin","title":"Create a Plugin","text":"We recommend that if you are adding a new AnonCreds method, you do so by creating an ACA-Py plugin. See the documentation on ACA-Py plugins and use the set of plugins available in the aries-acapy-plugins repository to help you get started. When you finish your AnonCreds method, we recommend that you publish the plugin in the aries-acapy-plugins repository. If you think that the AnonCreds method you create should be part of ACA-Py core, get your plugin complete and raise the question of adding it to ACA-Py. The Maintainers will be happy to discuss the merits of the idea. No promises though.
Your AnonCreds plugin will have an initialization routine that will register your AnonCreds implementation. It will be registering the identifiers that your method will be using such. It will be the identifier constructs that will trigger the appropriate AnonCreds Registrar and Resolver that will be called for any given AnonCreds object identifier. Check out this example of the registration of the \"legacy\" Indy AnonCreds method for more details.
"},{"location":"features/AnonCredsMethods/#the-implementation","title":"The Implementation","text":"The basic work involved in creating an AnonCreds method is the implementation of both a \"registrar\" to write AnonCreds objects to a VDR, and a \"resolver\" to read AnonCreds objects from a VDR. To do that for your new AnonCreds method, you will need to:
BaseAnonCredsResolver
- hereBaseAnonCredsRegistrar
- hereThe links above are to a specific commit and the code may have been updated since. You might want to look at the methods in the current version of acapy_agent/anoncreds/base.py in the main
branch.
The interface for those methods are very clean, and there are currently two implementations of the methods in the ACA-Py codebase -- the \"legacy\" Indy implementation, and the did:indy Indy implementation. There is also a did:web resolver implementation.
Models for the API are defined here
"},{"location":"features/AnonCredsMethods/#events","title":"Events","text":"When you create your AnonCreds method registrar, make sure that your implementations call appropriate finish_*
event (e.g., AnonCredsIssuer.finish_schema
, AnonCredsIssuer.finish_cred_def
, etc.) in AnonCreds Issuer. The calls are necessary to trigger the automation of AnonCreds event creation that is done by ACA-Py, particularly around the handling of Revocation Registries. As you (should) know, when an Issuer uses ACA-Py to create a Credential Definition that supports revocation, ACA-Py automatically creates and publishes two Revocation Registries related to the Credential Definition, publishes the tails file for each, makes one active, and sets the other to be activated as soon as the active one runs out of credentials. Your AnonCreds method implementation doesn't have to do much to make that happen -- ACA-Py does it automatically -- but your implementation must call the finish_*
to make trigger ACA-Py to continue the automation. You can see in Revocation Setup the automation setup.
The ACA-Py maintainers welcome questions from those new to the community that have the skills to implement a new AnonCreds method. Use the #aca-py
channel on the OpenWallet Foundation Discord Server or open an issue in this repo to get help.
Pull Requests to the ACA-Py repository to improve this content are welcome!
"},{"location":"features/AnoncredsProofValidation/","title":"AnonCreds Proof Validation in ACA-Py","text":"ACA-Py performs pre-validation when verifying AnonCreds presentations (proofs). Some scenarios are rejected (such as those indicative of tampering), while some attributes are removed before running the AnonCreds validation (e.g., removing superfluous non-revocation timestamps). Any ACA-Py validations or presentation modifications are indicated by the \"verify_msgs\" attribute in the final presentation exchange object.
The list of possible verification messages can be found here, and consists of:
class PresVerifyMsg(str, Enum):\n \"\"\"Credential verification codes.\"\"\"\n\n RMV_REFERENT_NON_REVOC_INTERVAL = \"RMV_RFNT_NRI\"\n RMV_GLOBAL_NON_REVOC_INTERVAL = \"RMV_GLB_NRI\"\n TSTMP_OUT_NON_REVOC_INTRVAL = \"TS_OUT_NRI\"\n CT_UNREVEALED_ATTRIBUTES = \"UNRVL_ATTR\"\n PRES_VALUE_ERROR = \"VALUE_ERROR\"\n PRES_VERIFY_ERROR = \"VERIFY_ERROR\"\n
If there is additional information, it will be included like this: TS_OUT_NRI::19_uuid
(which means the attribute identified by 19_uuid
contained a timestamp outside of the non-revocation interval (this is just a warning)).
A presentation verification may include multiple messages, for example:
...\n \"verified\": \"true\",\n \"verified_msgs\": [\n \"TS_OUT_NRI::18_uuid\",\n \"TS_OUT_NRI::18_id_GE_uuid\",\n \"TS_OUT_NRI::18_busid_GE_uuid\"\n ],\n ...\n
... or it may include a single message, for example:
...\n \"verified\": \"false\",\n \"verified_msgs\": [\n \"VALUE_ERROR::Encoded representation mismatch for 'Preferred Name'\"\n ],\n ...\n
... or the verified_msgs
may be null or an empty array.
The following modifications/warnings may be made by ACA-Py, which shouldn't affect the verification of the received proof:
The following pre-verification checks are performed, which will cause the proof to fail (before calling anoncreds) and result in the following message:
VALUE_ERROR::<description of the failed validation>\n
These validations are all performed within the Indy verifier class - to see the detailed validation, look for any occurrences of raise ValueError(...)
in the code.
A summary of the possible errors includes:
Typically, when you call the anoncreds verifier_verify_proof()
method, it will return a True
or False
based on whether the presentation cryptographically verifies. However, in the case where anoncreds throws an exception, the exception text will be included in a verification message as follows:
VERIFY_ERROR::<the exception text>\n
"},{"location":"features/DIDMethods/","title":"DID Methods in ACA-Py","text":"Decentralized Identifiers, or DIDs, are URIs that point to documents that describe cryptographic primitives and protocols used in decentralized identity management. DIDs include methods that describe where and how documents can be retrieved. DID methods support specific types of keys and may or may not require the holder to specify the DID itself.
ACA-Py provides a DIDMethods
registry holding all the DID methods supported for storage in a wallet
Askar and InMemory are the only wallets supporting this registry.
"},{"location":"features/DIDMethods/#registering-a-did-method","title":"Registering a DID method","text":"By default, ACA-Py supports did:key
and did:sov
. Plugins can register DID additional methods to make them available to holders. Here's a snippet adding support for did:web
to the registry from a plugin setup
method.
WEB = DIDMethod(\n name=\"web\",\n key_types=[ED25519, BLS12381G2],\n rotation=True,\n holder_defined_did=HolderDefinedDid.REQUIRED # did:web is not derived from key material but from a user-provided repository name\n)\n\nasync def setup(context: InjectionContext):\n methods = context.inject(DIDMethods)\n methods.register(WEB)\n
"},{"location":"features/DIDMethods/#creating-a-did","title":"Creating a DID","text":"POST /wallet/did/create
can be provided with parameters for any registered DID method. Here's a follow-up to the did:web
method example:
{\n \"method\": \"web\",\n \"options\": {\n \"did\": \"did:web:doma.in\",\n \"key_type\": \"ed25519\"\n }\n}\n
"},{"location":"features/DIDMethods/#resolving-dids","title":"Resolving DIDs","text":"For specifics on how DIDs are resolved in ACA-Py, see: DID Resolution.
"},{"location":"features/DIDResolution/","title":"DID Resolution in ACA-Py","text":"Decentralized Identifiers, or DIDs, are URIs that point to documents that describe cryptographic primitives and protocols used in decentralized identity management. DIDs include methods that describe where and how documents can be retrieved. DID resolution is the process of \"resolving\" a DID Document from a DID as dictated by the DID method.
A DID Resolver is a piece of software that implements the methods for resolving a document from a DID.
For example, given the DID did:example:1234abcd
, a DID Resolver that supports did:example
might return:
{\n \"@context\": \"https://www.w3.org/ns/did/v1\",\n \"id\": \"did:example:1234abcd\",\n \"verificationMethod\": [{\n \"id\": \"did:example:1234abcd#keys-1\",\n \"type\": \"Ed25519VerificationKey2018\",\n \"controller\": \"did:example:1234abcd\",\n \"publicKeyBase58\": \"H3C2AVvLMv6gmMNam3uVAjZpfkcJCwDwnZn6z3wXmqPV\"\n }],\n \"service\": [{\n \"id\": \"did:example:1234abcd#did-communication\",\n \"type\": \"did-communication\",\n \"serviceEndpoint\": \"https://agent.example.com/8377464\"\n }]\n}\n
For more details on DIDs and DID Resolution, see the W3C DID Specification.
In practice, DIDs and DID Documents are used for a variety of purposes but especially to help establish connections between Agents and verify credentials.
"},{"location":"features/DIDResolution/#didresolver","title":"DIDResolver
","text":"In ACA-Py, the DIDResolver
provides the interface to resolve DIDs using registered method resolvers. Method resolver registration happens on startup in a did_resolvers
list. This registry enables additional resolvers to be loaded via plugin.
class ExampleMessageHandler:\n async def handle(context: RequestContext, responder: BaseResponder):\n \"\"\"Handle example message.\"\"\"\n resolver = await context.inject(DIDResolver)\n\n doc: dict = await resolver.resolve(\"did:example:123\")\n assert doc[\"id\"] == \"did:example:123\"\n\n verification_method = await resolver.dereference(\"did:example:123#keys-1\")\n\n # ...\n
"},{"location":"features/DIDResolution/#method-resolver-selection","title":"Method Resolver Selection","text":"On DIDResolver.resolve
or DIDResolver.dereference
, the resolver interface will select the most appropriate method resolver to handle the given DID. In this selection process, method resolvers are distinguished from each other by:
supports
method or a supported_did_regex
method. These methods are used to determine whether the given DID can be handled by the method resolver.The selection algorithm roughly follows the following steps:
resolver.supports(did)
returns false
.Extending ACA-Py with additional Method Resolvers should be relatively simple. Supposing that you want to resolve DIDs for the did:cool
method, this should be as simple as installing a method resolver into your python environment and loading the resolver on startup. If no method resolver exists yet for did:cool
, writing your own should require minimal overhead.
Method resolver plugins are composed of two primary pieces: plugin injection and resolution logic. The resolution logic dictates how a DID becomes a DID Document, following the given DID Method Specification. This logic is implemented using the BaseDIDResolver
class as the base. BaseDIDResolver
is an abstract base class that defines the interface that the core DIDResolver
expects for Method resolvers.
The following is an example method resolver implementation. In this example, we have 2 files, one for each piece (injection and resolution). The __init__.py
will be in charge of injecting the plugin, and example_resolver.py
will have the logic implementation to resolve for a fabricated did:example
method.
__init __.py
","text":"```python= from aries_cloudagent.config.injection_context import InjectionContext from ..resolver.did_resolver import DIDResolver
from .example_resolver import ExampleResolver
async def setup(context: InjectionContext): \"\"\"Setup the plugin.\"\"\" registry = context.inject(DIDResolver) resolver = ExampleResolver() await resolver.setup(context) registry.append(resolver)
#### `example_resolver.py`\n\n```python=\nimport re\nfrom typing import Pattern\nfrom aries_cloudagent.resolver.base import BaseDIDResolver, ResolverType\n\nclass ExampleResolver(BaseDIDResolver):\n \"\"\"ExampleResolver class.\"\"\"\n\n def __init__(self):\n super().__init__(ResolverType.NATIVE)\n # Alternatively, ResolverType.NON_NATIVE\n self._supported_did_regex = re.compile(\"^did:example:.*$\")\n\n @property\n def supported_did_regex(self) -> Pattern:\n \"\"\"Return compiled regex matching supported DIDs.\"\"\"\n return self._supported_did_regex\n\n async def setup(self, context):\n \"\"\"Setup the example resolver (none required).\"\"\"\n\n async def _resolve(self, profile: Profile, did: str) -> dict:\n \"\"\"Resolve example DIDs.\"\"\"\n if did != \"did:example:1234abcd\":\n raise DIDNotFound(\n \"We only actually resolve did:example:1234abcd. Sorry!\"\n )\n\n return {\n \"@context\": \"https://www.w3.org/ns/did/v1\",\n \"id\": \"did:example:1234abcd\",\n \"verificationMethod\": [{\n \"id\": \"did:example:1234abcd#keys-1\",\n \"type\": \"Ed25519VerificationKey2018\",\n \"controller\": \"did:example:1234abcd\",\n \"publicKeyBase58\": \"H3C2AVvLMv6gmMNam3uVAjZpfkcJCwDwnZn6z3wXmqPV\"\n }],\n \"service\": [{\n \"id\": \"did:example:1234abcd#did-communication\",\n \"type\": \"did-communication\",\n \"serviceEndpoint\": \"https://agent.example.com/\"\n }]\n }\n
"},{"location":"features/DIDResolution/#errors","title":"Errors","text":"There are 3 different errors associated with resolution in ACA-Py that could be used for development purposes.
In this section, the Github Resolver Plugin found here will be used as an example plugin to work with. This resolver resolves did:github
DIDs.
The resolution algorithm is simple: for the github DID did:github:dbluhm
, the method specific identifier dbluhm
(a GitHub username) is used to lookup an index.jsonld
file in the ghdid
repository in that GitHub users profile. See GitHub DID Method Specification for more details.
To use this plugin, first install it into your project's python environment:
pip install git+https://github.com/dbluhm/acapy-resolver-github\n
Then, invoke ACA-Py as you normally do with the addition of:
$ aca-py start \\\n --plugin acapy_resolver_github \\\n # ... the remainder of your startup arguments\n
Or add the following to your configuration file:
plugin:\n - acapy_resolver_github\n
The following is a fully functional Dockerfile encapsulating this setup:
```dockerfile= FROM ghcr.io/openwallet-foundation/acapy:py3.9-0.12.1 RUN pip3 install git+https://github.com/dbluhm/acapy-resolver-github
CMD [\"aca-py\", \"start\", \"-it\", \"http\", \"0.0.0.0\", \"3000\", \"-ot\", \"http\", \"-e\", \"http://localhost:3000\", \"--admin\", \"0.0.0.0\", \"3001\", \"--admin-insecure-mode\", \"--no-ledger\", \"--plugin\", \"acapy_resolver_github\"]
To use the above dockerfile:\n\n```shell\ndocker build -t resolver-example .\ndocker run --rm -it -p 3000:3000 -p 3001:3001 resolver-example\n
"},{"location":"features/DIDResolution/#directory-of-resolver-plugins","title":"Directory of Resolver Plugins","text":"https://www.w3.org/TR/did-core/ https://w3c-ccg.github.io/did-resolution/
"},{"location":"features/DevReadMe/","title":"Developer's Read Me for ACA-Py","text":"See the README for details about this repository and information about how the Aries Cloud Agent - Python fits into the Aries project and relates to Indy.
"},{"location":"features/DevReadMe/#table-of-contents","title":"Table of Contents","text":"ACA-Py is a configurable, extensible, non-mobile Aries agent that implements an easy way for developers to build decentralized identity services that use verifiable credentials.
The information on this page assumes you are developer with a background in decentralized identity, Aries, DID Methods, and verifiable credentials, especially AnonCreds. If you aren't familiar with those concepts and projects, please use our Getting Started Guide to learn more.
"},{"location":"features/DevReadMe/#developer-demos","title":"Developer Demos","text":"To put ACA-Py through its paces at the command line, checkout our demos page.
"},{"location":"features/DevReadMe/#running","title":"Running","text":""},{"location":"features/DevReadMe/#configuring-aca-py-environment-variables","title":"Configuring ACA-PY: Environment Variables","text":"All CLI parameters in ACA-PY have equivalent environment variables. To convert a CLI argument to an environment variable:
Basic Conversion: Convert the CLI argument to uppercase and prefix it with ACAPY_
. For example, --admin
becomes ACAPY_ADMIN
.
Multiple Parameters: Arguments that take multiple parameters, such as --admin 0.0.0.0 11000
, should be wrapped in an array. For example, ACAPY_ADMIN=\"[0.0.0.0, 11000]\"
-it <module> <host> <port>
, which can be repeated, must be wrapped inside another array and string escaped. For example, instead of: -it http 0.0.0.0 11000 ws 0.0.0.0 8023
use: ACAPY_INBOUND_TRANSPORT=[[\\\"http\\\",\\\"0.0.0.0\\\",\\\"11000\\\"],[\\\"ws\\\",\\\"0.0.0.0\\\",\\\"8023\\\"]]
For a comprehensive list of all arguments, argument groups, CLI args, and their environment variable equivalents, please see the argparse.py file.
"},{"location":"features/DevReadMe/#configuring-aca-py-command-line-parameters","title":"Configuring ACA-PY: Command Line Parameters","text":"ACA-Py agent instances are configured through the use of command line parameters, environment variables and/or YAML files. All of the configurations settings can be managed using any combination of the three methods (command line parameters override environment variables override YAML). Use the --help
option to discover the available command line parameters. There are a lot of them--for good and bad.
To run a docker container based on the code in the current repo, use the following commands from the root folder of the repository to check the version, list the available modes of operation, and see all of the command line parameters:
scripts/run_docker --version\nscripts/run_docker --help\nscripts/run_docker provision --help\nscripts/run_docker start --help\n
"},{"location":"features/DevReadMe/#locally-installed","title":"Locally Installed","text":"If you installed the PyPi package, the executable aca-py
should be available on your PATH.
Use the following commands from the root folder of the repository to check the version, list the available modes of operation, and see all of the command line parameters:
aca-py --version\naca-py --help\naca-py provision --help\naca-py start --help\n
If you get an error about a missing module indy
(e.g. ModuleNotFoundError: No module named 'indy'
) when running aca-py
, you will need to install the Indy libraries from the command line:
pip install python3_indy\n
Once that completes successfully, you should be able to run aca-py --version
and the other examples above.
ACA-Py invocations are separated into two types - initially provisioning an agent (provision
) and starting a new agent process (start
). This separation enables not having to pass in some encryption-related parameters required for provisioning when starting an agent instance. This improves security in production deployments.
When starting an agent instance, at least one inbound and one outbound transport MUST be specified.
For example:
aca-py start --inbound-transport http 0.0.0.0 8000 \\\n --outbound-transport http\n
or
aca-py start --inbound-transport http 0.0.0.0 8000 \\\n --inbound-transport ws 0.0.0.0 8001 \\\n --outbound-transport ws \\\n --outbound-transport http\n
ACA-Py ships with both inbound and outbound transport drivers for http
and ws
(websockets). Additional transport drivers can be added as pluggable implementations. See the existing implementations in the transports module for getting started on adding a new transport.
Most configuration parameters are provided to the agent at startup. Refer to the Running
sections above for details on listing the available command line parameters.
It is possible to provision a secure storage (sometimes called a wallet--but not the same as a mobile wallet app) before running an agent to avoid passing in the secure storage seed on every invocation of an agent (e.g. on every aca-py start ...
).
aca-py provision --wallet-type askar --seed $SEED\n
For additional provision
options, execute aca-py provision --help
.
Additional information about secure storage options and configuration settings can be found here.
"},{"location":"features/DevReadMe/#mediation","title":"Mediation","text":"ACA-Py can also run in mediator mode - ACA-Py can be run as a mediator (it can mediate connections for other agents), or it can connect to an external mediator to mediate its own connections. See the docs on mediation for more info.
"},{"location":"features/DevReadMe/#multi-tenancy","title":"Multi-tenancy","text":"ACA-Py can also be started in multi-tenant mode. This allows the agent to serve multiple tenants, that each have their own wallet. See the docs on multi-tenancy for more info.
"},{"location":"features/DevReadMe/#json-ld-credentials","title":"JSON-LD Credentials","text":"ACA-Py can issue W3C Verifiable Credentials using Linked Data Proofs. See the docs on JSON-LD Credentials for more info.
"},{"location":"features/DevReadMe/#developing","title":"Developing","text":""},{"location":"features/DevReadMe/#prerequisites","title":"Prerequisites","text":"Docker must be installed to run software locally and to run the test suite.
"},{"location":"features/DevReadMe/#running-in-a-dev-container","title":"Running In A Dev Container","text":"The dev container environment is a great way to deploy agents quickly with code changes and an interactive debug session. Detailed information can be found in the Docs On Devcontainers. It is specific for vscode, so if you prefer another code editor or IDE you will need to figure it out on your own, but it is highly recommended to give this a try.
One thing to be aware of is, unlike the demo, none of the steps are automated. You will need to create public dids, connections and all the other steps yourself. Using the demo and studying the flow and then copying them with your dev container debug session is a great way to learn how everything works.
"},{"location":"features/DevReadMe/#running-locally","title":"Running Locally","text":"Another way to develop locally is by using the provided Docker scripts to run the ACA-Py software.
./scripts/run_docker start <args>\n
For example:
./scripts/run_docker start --inbound-transport http 0.0.0.0 10000 --outbound-transport http --debug --log-level DEBUG\n
To enable the Debug Adapter Protocol using the debugpy implementation for Python 3 Python debugger for Visual Studio/VSCode use the --debug
command line parameter.
When debugging an agent running within a docker container, you will need to set the DAP_HOST environment variable (defaults to localhost
) to 0.0.0.0
to allow forwarding from within your docker container.
Note that you may still find references to PTVSD, the deprecated implementation of DAP. PTVSD_HOST and PTVSD_PORT are interchangeable with DAP_HOST and DAP_PORT.
Example:
ENV_VARS=\"DAP_HOST=0.0.0.0\" scripts/run_docker provision --log-level debug --wallet-type askar --wallet-name $(whoami) --wallet-key mysecretkey --endpoint http://localhost:8080 --no-ledger --debug\n
Any ports you will be using from the docker container should be published using the PORTS
environment variable. For example:
PORTS=\"5000:5000 8000:8000 10000:10000\" ./scripts/run_docker start --inbound-transport http 0.0.0.0 10000 --outbound-transport http --debug --log-level DEBUG\n
Refer to the previous section for instructions on how to run ACA-Py.
"},{"location":"features/DevReadMe/#logging","title":"Logging","text":"You can find more details about logging and log levels here.
"},{"location":"features/DevReadMe/#running-tests","title":"Running Tests","text":"To run the ACA-Py test suite, use the following script:
./scripts/run_tests\n
To run the ACA-Py test suite with ptvsd debugger enabled:
./scripts/run_tests --debug\n
To run specific tests pass parameters as defined by pytest:
./scripts/run_tests aries_cloudagent/protocols/connections\n
"},{"location":"features/DevReadMe/#running-aries-agent-test-harness-tests","title":"Running Aries Agent Test Harness Tests","text":"You can run a full suite of integration tests using the Aries Agent Test Harness (AATH).
Check out and run AATH tests as follows (this tests the aca-py main
branch):
git clone https://github.com/hyperledger/aries-agent-test-harness.git\ncd aries-agent-test-harness\n./manage build -a acapy-main\n./manage run -d acapy-main -t @AcceptanceTest -t ~@wip\n
The manage
script is described in detail here, including how to modify the AATH code to run the tests against your aca-py repo/branch.
We use Ruff to enforce a coding style guide.
Please write tests for the work that you submit.
Tests should reside in a directory named tests
alongside the code under test. Generally, there is one test file for each file module under test. Test files must have a name starting with test_
to be automatically picked up the test runner.
There are some good examples of various test scenarios for you to work from including mocking external imports and working with async code so take a look around!
The test suite also displays the current code coverage after each run so you can see how much of your work is covered by tests. Use your best judgement for how much coverage is sufficient.
Please also refer to the contributing guidelines and code of conduct.
"},{"location":"features/DevReadMe/#publishing-releases","title":"Publishing Releases","text":"The publishing document provides information on tagging a release and publishing the release artifacts to PyPi.
"},{"location":"features/DevReadMe/#dynamic-injection-of-services","title":"Dynamic Injection of Services","text":"The Agent employs a dynamic injection system whereby providers of base classes are registered with the RequestContext
instance, currently within conductor.py
. Message handlers and services request an instance of the selected implementation using context.inject(BaseClass)
; for instance the wallet instance may be injected using wallet = context.inject(BaseWallet)
. The inject
method normally throws an exception if no implementation of the base class is provided, but can be called with required=False
for optional dependencies (in which case a value of None
may be returned).
Providers are registered with either context.injector.bind_instance(BaseClass, instance)
for previously-constructed (singleton) object instances, or context.injector.bind_provider(BaseClass, provider)
for dynamic providers. In some cases it may be desirable to write a custom provider which switches implementations based on configuration settings, such as the wallet provider.
The BaseProvider
classes in the config.provider
module include ClassProvider
, which can perform dynamic module inclusion when given the combined module and class name as a string (for instance aries_cloudagent.wallet.indy.IndyWallet
). ClassProvider
accepts additional positional and keyword arguments to be passed into the class constructor. Any of these arguments may be an instance of ClassProvider.Inject(BaseClass)
, allowing dynamic injection of dependencies when the class instance is instantiated.
ACA-Py supports an Endorser Protocol, that allows an un-privileged agent (an \"Author\") to request another agent (the \"Endorser\") to sign their transactions so they can write these transactions to the ledger. This is required on Indy ledgers, where new agents will typically be granted only \"Author\" privileges.
Transaction Endorsement is built into the protocols for Schema, Credential Definition and Revocation, and endorsements can be explicitly requested, or ACA-Py can be configured to automate the endorsement workflow.
"},{"location":"features/Endorser/#setting-up-connections-between-authors-and-endorsers","title":"Setting up Connections between Authors and Endorsers","text":"Since endorsement involves message exchange between two agents, these agents must establish and configure a connection before any endorsements can be provided or requested.
Once the connection is established and active
, the \"role\" (either Author or Endorser) is attached to the connection using the /transactions/{conn_id}/set-endorser-role
endpoint. For Authors, they must additionally configure the DID of the Endorser as this is required when the Author signs the transaction (prior to sending to the Endorser for endorsement) - this is done using the /transactions/{conn_id}/set-endorser-info
endpoint.
Transaction Endorsement is built into the protocols for Schema, Credential Definition and Revocation. When executing one of the endpoints that will trigger a ledger write, an endorsement protocol can be explicitly requested by specifying the connection_id
(of the Endorser connection) and create_transaction_for_endorser
.
(Note that endorsement requests can be automated, see the section on \"Configuring ACA-Py\" below.)
If transaction endorsement is requested, then ACA-Py will create a transaction record (this will be returned by the endpoint, rather than the Schema, Cred Def, etc) and the following endpoints must be invoked:
Protocol Step Author Endorser Request Endorsement/transactions/create-request
Endorse Transaction /transactions/{tran_id}/endorse
Write Transaction /transactions/{tran_id}/write
Additional endpoints allow the Endorser to reject the endorsement request, or for the Author to re-submit or cancel a request.
Web hooks will be triggered to notify each ACA-Py agent of any transaction request, endorsements, etc to allow the controller to react to the event, or the process can be automated via command-line parameters (see below).
"},{"location":"features/Endorser/#configuring-aca-py-for-auto-or-manual-endorsement","title":"Configuring ACA-Py for Auto or Manual Endorsement","text":"The following start-up parameters are supported by ACA-Py:
Endorsement:\n --endorser-protocol-role <endorser-role>\n Specify the role ('author' or 'endorser') which this agent will participate. Authors will request transaction endorsement from an Endorser. Endorsers will endorse transactions from\n Authors, and may write their own transactions to the ledger. If no role (or 'none') is specified then the endorsement protocol will not be used and this agent will write transactions to\n the ledger directly. [env var: ACAPY_ENDORSER_ROLE]\n --endorser-public-did <endorser-public-did>\n For transaction Authors, specify the public DID of the Endorser agent who will be endorsing transactions. Note this requires that the connection be made using the Endorser's public\n DID. [env var: ACAPY_ENDORSER_PUBLIC_DID]\n --endorser-alias <endorser-alias>\n For transaction Authors, specify the alias of the Endorser connection that will be used to endorse transactions. [env var: ACAPY_ENDORSER_ALIAS]\n --auto-request-endorsement\n For Authors, specify whether to automatically request endorsement for all transactions. (If not specified, the controller must invoke the request endorse operation for each\n transaction.) [env var: ACAPY_AUTO_REQUEST_ENDORSEMENT]\n --auto-endorse-transactions\n For Endorsers, specify whether to automatically endorse any received endorsement requests. (If not specified, the controller must invoke the endorsement operation for each transaction.)\n [env var: ACAPY_AUTO_ENDORSE_TRANSACTIONS]\n --auto-write-transactions\n For Authors, specify whether to automatically write any endorsed transactions. (If not specified, the controller must invoke the write transaction operation for each transaction.) [env\n var: ACAPY_AUTO_WRITE_TRANSACTIONS]\n --auto-create-revocation-transactions\n For Authors, specify whether to automatically create transactions for a cred def's revocation registry. (If not specified, the controller must invoke the endpoints required to create\n the revocation registry and assign to the cred def.) [env var: ACAPY_CREATE_REVOCATION_TRANSACTIONS]\n --auto-promote-author-did\n For Authors, specify whether to automatically promote a DID to the wallet public DID after writing to the ledger. [env var: ACAPY_AUTO_PROMOTE_AUTHOR_DID]\n
"},{"location":"features/Endorser/#how-aca-py-handles-endorsements","title":"How Aca-py Handles Endorsements","text":"Internally, the Endorsement functionality is implemented as a protocol, and is implemented consistently with other protocols:
The Endorser makes use of the Event Bus (links to the PR which links to a hackmd doc) to notify other protocols of any Endorser events of interest. For example, after a Credential Definition endorsement is received, the TransactionManager writes the endorsed transaction to the ledger and uses the Event Bus to notify the Credential Definition manager that it can do any required post-processing (such as writing the cred def record to the wallet, initiating the revocation registry, etc.).
The overall architecture can be illustrated as:
"},{"location":"features/Endorser/#create-credential-definition-and-revocation-registry","title":"Create Credential Definition and Revocation Registry","text":"An example of an Endorser flow is as follows, showing how a credential definition endorsement is received and processed, and optionally kicks off the revocation registry process:
You can see that there is a standard endorser flow happening each time there is a ledger write (illustrated in the \"Endorser\" process).
At the end of each endorse sequence, the TransactionManager sends a notification via the EventBus so that any dependant processing can continue. Each Router is responsible for listening and responding to these notifications if necessary.
For example:
Using the EventBus decouples the event sequence. Any functions triggered by an event notification are typically also available directly via Admin endpoints.
"},{"location":"features/Endorser/#create-did-and-promote-to-public","title":"Create DID and Promote to Public","text":"... and an example of creating a DID and promoting it to public (and creating an ATTRIB for the endpoint:
You can see the same endorsement processes in this sequence.
Once the DID is written, the DID can (optionally) be promoted to the public DID, which will also invoke an ATTRIB transaction to write the endpoint.
"},{"location":"features/JsonLdCredentials/","title":"JSON-LD Credentials in ACA-Py","text":"By design ACA-Py is credential format agnostic. This means you can use it for any credential format, as long as an RFC is defined for the specific credential format. ACA-Py currently supports two types of credentials, AnonCreds and JSON-LD credentials. This document describes how to use the latter by making use of W3C Verifiable Credentials using Linked Data Proofs.
"},{"location":"features/JsonLdCredentials/#table-of-contents","title":"Table of Contents","text":"did:sov
did:key
The rest of this guide assumes some basic understanding of W3C Verifiable Credentials, JSON-LD and Linked Data Proofs. If you're not familiar with some of these concepts, the following resources can help you get started:
BBS+ credentials offer a lot of privacy-preserving features over non-ZKP credentials, so we recommend always using BBS+ credentials over non-ZKP credentials. To get started with BBS+ credentials, it is recommended to at least read RFC 0646: W3C Credential Exchange using BBS+ Signatures for a general overview.
Some other resources that can help you get started with BBS+ credentials:
In contrast to Indy credentials, JSON-LD credentials do not need a schema or credential definition to issue credentials. Everything required to issue the credential is embedded into the credential itself using Linked Data Contexts.
"},{"location":"features/JsonLdCredentials/#json-ld-context","title":"JSON-LD Context","text":"It is required that every property key in the document can be mapped to an IRI. This means the property key must either be an IRI by default, or have the shorthand property mapped in the @context
of the document. If you have properties that are not mapped to IRIs, the Issue Credential API will throw the following error:
<x> attributes dropped. Provide definitions in context to correct. [<missing-properties>]
For credentials the https://www.w3.org/2018/credentials/v1
context MUST always be the first context. In addition, when issuing BBS+ credentials the https://w3id.org/security/bbs/v1
URL MUST be present in the context. For convenience this URL will be automatically added to the @context
of the credential if not present.
{\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://other-contexts.com\"\n ]\n}\n
"},{"location":"features/JsonLdCredentials/#writing-json-ld-contexts","title":"Writing JSON-LD Contexts","text":"Writing JSON-LD contexts can be a daunting task and is out of scope of this guide. Generally you should try to make use of already existing vocabularies. Some examples are the vocabularies defined in the W3C Credentials Community Group:
Verifiable credentials have not been around that long, so there aren't many ready-to-use vocabularies. If you can't use one of the existing vocabularies, it is still beneficial to lean on already defined lower level contexts. http://schema.org has a large registry of definitions that can be used to build new contexts. The example vocabularies linked above all make use of types from http://schema.org.
For the remainder of this guide, we will be using the example UniversityDegreeCredential
type and https://www.w3.org/2018/credentials/examples/v1
context from the Verifiable Credential Data Model. You should not use this for production use cases.
Before issuing a credential you must determine a signature suite to use. ACA-Py currently supports three signature suites for issuing credentials:
Ed25519Signature2018
- Very well supported. No zero knowledge proofs or selective disclosure.Ed25519Signature2020
- Updated version of 2018 suite.BbsBlsSignature2020
- Newer, but supports zero knowledge proofs and selective disclosure.Generally you should always use BbsBlsSignature2020
as it allows the holder to derive a new credential during proving, meaning it doesn't have to disclose all fields and doesn't have to reveal the signature.
Besides the JSON-LD context, we need a DID to use for issuing the credential. ACA-Py currently supports two did methods for issuing credentials:
did:sov
- Can only be used for Ed25519Signature2018
signature suite.did:key
- Can be used for both Ed25519Signature2018
and BbsBlsSignature2020
signature suites.did:sov
","text":"When using did:sov
you need to make sure to use a public did so other agents can resolve the did. It is also important the other agent is using the same indy ledger for resolving the did. You can get the public did using the /wallet/did/public
endpoint. For backwards compatibility the did is returned without did:sov
prefix. When using the did for issuance make sure this prepend this to the did. (so DViYrCMPWfuLiY7LLs8giB
becomes did:sov:DViYrCMPWfuLiY7LLs8giB
)
did:key
","text":"A did:key
did is not anchored to a ledger, but embeds the key directly in the identifier part of the did. See the did:key Method Specification for more information.
You can create a did:key
using the /wallet/did/create
endpoint with the following body. Use ed25519
for Ed25519Signature2018
, bls12381g2
for BbsBlsSignature2020
.
{\n \"method\": \"key\",\n \"options\": {\n \"key_type\": \"bls12381g2\" // or ed25519\n }\n}\n
The above call will return a did that looks something like this: did:key:zUC7FsmhhifDTuYXdwYES2UpCpWwYieJRapC6oEWqyt5KfJ3ztfLzYnbWjuXQ5drYaKaho3FjxrfDB81gtAJKjbM4yAmBuNoj3YKDXqW151KkkYarpEoEVWMMcN5zPfjCrQ8Saj
Issuing JSON-LD credentials is only possible with the issue credential v2 protocol (/issue-credential-2.0
)
The format used for exchanging JSON-LD credentials is defined in RFC 0593: JSON-LD Credential Attachment format. The API in ACA-Py exactly matches the formats as described in this RFC, with the most important (from the ACA-Py API perspective) being aries/ld-proof-vc-detail@v1.0
. Read the RFC to see the exact properties required to construct a valid Linked Data Proof VC Detail.
All endpoints in API use the aries/ld-proof-vc-detail@v1.0
. We'll use the /issue-credential-2.0/send
as an example, but it works the same for the other endpoints. In contrary to issuing indy credentials, JSON-LD credentials do not require a credential preview. All properties should be directly embedded in the credentials.
The detail should be included under the filter.ld_proof
property. To issue a credential call the /issue-credential-2.0/send
endpoint, with the example body below and the connection_id
and issuer
keys replaced. The value of issuer
should be the did that you created in the Did Method paragraph above.
If you don't have auto-respond-credential-offer
and auto-store-credential
enabled in the ACA-Py config, you will need to call /issue-credential-2.0/records/{cred_ex_id}/send-request
and /issue-credential-2.0/records/{cred_ex_id}/store
to finalize the credential issuance.
{\n \"connection_id\": \"ddc23de9-359f-465c-b66e-f7c5a0cc9a57\",\n \"filter\": {\n \"ld_proof\": {\n \"credential\": {\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://www.w3.org/2018/credentials/examples/v1\"\n ],\n \"type\": [\"VerifiableCredential\", \"UniversityDegreeCredential\"],\n \"issuer\": \"did:key:zUC7FsmhhifDTuYXdwYES2UpCpWwYieJRapC6oEWqyt5KfJ3ztfLzYnbWjuXQ5drYaKaho3FjxrfDB81gtAJKjbM4yAmBuNoj3YKDXqW151KkkYarpEoEVWMMcN5zPfjCrQ8Saj\",\n \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n \"credentialSubject\": {\n \"degree\": {\n \"type\": \"BachelorDegree\",\n \"name\": \"Bachelor of Science and Arts\"\n },\n \"college\": \"Faber College\"\n }\n },\n \"options\": {\n \"proofType\": \"BbsBlsSignature2020\"\n }\n }\n }\n}\n
"},{"location":"features/JsonLdCredentials/#retrieving-issued-credentials","title":"Retrieving Issued Credentials","text":"After issuing the credential, the credentials should be stored inside the wallet. Because the structure of JSON-LD credentials is so different from indy credentials a new endpoint is added to retrieve W3C credentials.
Call the /credentials/w3c
endpoint to retrieve all JSON-LD credentials in your wallet. See the detail below for an example response based on the issued credential from the Issuing Credentials paragraph above.
{\n \"results\": [\n {\n \"contexts\": [\n \"https://www.w3.org/2018/credentials/examples/v1\",\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://w3id.org/security/bbs/v1\"\n ],\n \"types\": [\"UniversityDegreeCredential\", \"VerifiableCredential\"],\n \"schema_ids\": [],\n \"issuer_id\": \"did:key:zUC7FsmhhifDTuYXdwYES2UpCpWwYieJRapC6oEWqyt5KfJ3ztfLzYnbWjuXQ5drYaKaho3FjxrfDB81gtAJKjbM4yAmBuNoj3YKDXqW151KkkYarpEoEVWMMcN5zPfjCrQ8Saj\",\n \"subject_ids\": [],\n \"proof_types\": [\"BbsBlsSignature2020\"],\n \"cred_value\": {\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://www.w3.org/2018/credentials/examples/v1\",\n \"https://w3id.org/security/bbs/v1\"\n ],\n \"type\": [\"VerifiableCredential\", \"UniversityDegreeCredential\"],\n \"issuer\": \"did:key:zUC7FsmhhifDTuYXdwYES2UpCpWwYieJRapC6oEWqyt5KfJ3ztfLzYnbWjuXQ5drYaKaho3FjxrfDB81gtAJKjbM4yAmBuNoj3YKDXqW151KkkYarpEoEVWMMcN5zPfjCrQ8Saj\",\n \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n \"credentialSubject\": {\n \"degree\": {\n \"type\": \"BachelorDegree\",\n \"name\": \"Bachelor of Science and Arts\"\n },\n \"college\": \"Faber College\"\n },\n \"proof\": {\n \"type\": \"BbsBlsSignature2020\",\n \"proofPurpose\": \"assertionMethod\",\n \"verificationMethod\": \"did:key:zUC7FsmhhifDTuYXdwYES2UpCpWwYieJRapC6oEWqyt5KfJ3ztfLzYnbWjuXQ5drYaKaho3FjxrfDB81gtAJKjbM4yAmBuNoj3YKDXqW151KkkYarpEoEVWMMcN5zPfjCrQ8Saj#zUC7FsmhhifDTuYXdwYES2UpCpWwYieJRapC6oEWqyt5KfJ3ztfLzYnbWjuXQ5drYaKaho3FjxrfDB81gtAJKjbM4yAmBuNoj3YKDXqW151KkkYarpEoEVWMMcN5zPfjCrQ8Saj\",\n \"created\": \"2021-05-03T12:31:28.561945\",\n \"proofValue\": \"iUFtRGdLLCWxKx8VD3oiFBoRMUFKhSitTzMsfImXm6OF0d8il+Z40aLz8S7m8EcXPQhRjcWWL9jkfcf1SDifD4CvxVg69NvB7hZyIIz9hwAyi3LmTm0ez4NDRCKyieBuzqKbfM2eACWn/ilhOJBm6w==\"\n }\n },\n \"cred_tags\": {},\n \"record_id\": \"541ddbce5760497d98e68917be8c05bd\"\n }\n ]\n}\n
"},{"location":"features/JsonLdCredentials/#present-proof","title":"Present Proof","text":"\u26a0\ufe0f TODO: https://github.com/openwallet-foundation/acapy/pull/1125
"},{"location":"features/JsonLdCredentials/#vc-api","title":"VC-API","text":"In order to support these functions outside of the respective DIDComm protocols, a set of endpoints conforming to the vc-api specification are available. These endpoints should be used by a controller when building an identity platform.
These endpoints include:
GET /vc/credentials -> returns a list of all stored json-ld credentials
GET /vc/credentials/{id} -> returns a json-ld credential based on its ID
POST /vc/credentials/issue -> signs a credential
POST /vc/credentials/verify -> verifies a credential
POST /vc/credentials/store -> stores an issued credential
POST /vc/presentations/prove -> proves a presentation
POST /vc/presentations/verify -> verifies a presentation

To learn more about using these endpoints, please refer to the available postman collection.
"},{"location":"features/JsonLdCredentials/#external-suite-provider","title":"External Suite Provider","text":"It is possible to extend the signature suite support, including outsourcing signing JSON-LD Credentials to some other component (KMS, HSM, etc.), using the ExternalSuiteProvider
interface. This interface can be implemented and registered via plugin. The plugged in provider will be used by ACA-Py's LDP-VC subsystem to create a LinkedDataProof
object, which is responsible for signing normalized credential values.
This interface enables taking advantage of ACA-Py's JSON-LD processing to construct and format the credential while exposing a simple interface to a plugin to make it responsible for signatures. This can also be combined with plugged in DID Methods, VerificationKeyStrategy
, and other pluggable components.
See this example project here for more details on the interface and its usage: https://github.com/dbluhm/acapy-ld-signer
"},{"location":"features/Mediation/","title":"Mediation docs","text":""},{"location":"features/Mediation/#concepts","title":"Concepts","text":"--open-mediation
- Instructs mediators to automatically grant all incoming mediation requests.--mediator-invitation
- Receive invitation, send mediation request and set as default mediator.--mediator-connections-invite
- Connect to mediator through a connection invitation. If not specified, connect using an OOB invitation.--default-mediator-id
- Set pre-existing mediator as default mediator.--clear-default-mediator
- Clear the stored default mediator.The minimum set of arguments required to enable mediation are:
aca-py start ... \\\n --open-mediation\n
To automate the mediation process on startup, additionally specify the following argument on the mediated agent (not the mediator):
aca-py start ... \\\n --mediator-invitation \"<a multi-use invitation url from the mediator>\"\n
If a default mediator has already been established, then the --default-mediator-id
argument can be used instead of the --mediator-invitation
.
See Aries RFC 0211: Coordinate Mediation Protocol.
"},{"location":"features/Mediation/#admin-api","title":"Admin API","text":"GET mediation/requests
conn_id
, state
, mediator_terms
and recipient_terms
.GET mediation/requests/{mediation_id}
DELETE mediation/requests/{mediation_id}
POST mediation/requests/{mediation_id}/grant
granted
message to client.POST mediation/requests/{mediation_id}/deny
denied
message to client.POST mediation/request/{conn_id}
GET mediation/keylists
client
for keys mediated by other agents and server
for keys mediated by this agent.POST mediation/keylists/{mediation_id}/send-keylist-update
POST mediation/keylists/{mediation_id}/send-keylist-query
GET mediation/default-mediator
(PR pending)PUT mediation/{mediation_id}/default-mediator
(PR pending)DELETE mediation/default-mediator
(PR pending)After establishing a connection with a mediator also having mediation granted, you can use that mediator id for future did_comm connections. When creating, receiving or accepting an invitation intended to be Mediated, you provide mediation_id
with the desired mediator id. if using a single mediator for all future connections, You can set a default mediation id. If no mediation_id is provided the default mediation id will be used instead.
Multiple AnonCreds credentials can be combined to present a presentation proof with an \"and\" logical operator: for instance, a verifier can ask for the \"name\" claim from an eID and the \"address\" claim from a bank statement to get a single proof that is either valid or invalid as a whole. With the Present Proof Protocol v2, it is possible to have \"and\" and \"or\" logical operators for AnonCreds and/or W3C Verifiable Credentials.
With the Present Proof Protocol v2, verifiers can ask for a combination of credentials as proof. For instance, a verifier can ask for a claim from an AnonCreds credential and a verifiable presentation from a W3C Verifiable Credential, which opens up the possibility of using ACA-Py for rather complex presentation proof requests that wouldn't be possible without support for both AnonCreds and W3C Verifiable Credentials.
Moreover, it is possible to make similar presentation proof requests using the \"or\" logical operator. For instance, a verifier can ask for either an eID in AnonCreds format or an eID in W3C Verifiable Credential format. This has the potential to solve the interoperability problem of different credential formats and ecosystems from a user point of view, by shifting the requirement of holding/accepting different credential formats from identity holders to verifiers. Here again, using ACA-Py as the underlying verifier agent can tackle such complex presentation proof requests, since the agent is capable of verifying both types of credential formats and proof types.
In the future, it may even be possible to include an mDoc as an attachment with an \"and\" or \"or\" logical operation, along with AnonCreds and/or W3C Verifiable Credentials. For this to happen, ACA-Py either needs the capability to validate mDocs internally, or to connect to third-party endpoints to validate them and get a response.
"},{"location":"features/Multiledger/","title":"Multi-ledger in ACA-Py","text":"Ability to use multiple Indy ledgers (both IndySdk and IndyVdr) for resolving a DID
by the ACA-Py agent. For read requests, checking of multiple ledgers in parallel is done dynamically according to logic detailed in Read Requests Ledger Selection. For write requests, dynamic allocation of write_ledger
is supported. Configurable write ledgers can be assigned using is_write
in the configuration or using any of the --genesis-url
, --genesis-file
, and --genesis-transactions
startup (ACA-Py) arguments. If no write ledger is assigned then a ConfigError
is raised.
More background information including problem statement, design (algorithm) and more can be found here.
"},{"location":"features/Multiledger/#table-of-contents","title":"Table of Contents","text":"Multi-ledger is disabled by default. You can enable support for multiple ledgers using the --genesis-transactions-list
startup parameter. This parameter accepts a string which is the path to the YAML
configuration file. For example:
--genesis-transactions-list ./acapy_agent/config/multi_ledger_config.yml
If --genesis-transactions-list
is specified, then --genesis-url, --genesis-file, --genesis-transactions
should not be specified.
- id: localVON\n is_production: false\n genesis_url: \"http://host.docker.internal:9000/genesis\"\n- id: bcovrinTest\n is_production: true\n is_write: true\n genesis_url: \"http://test.bcovrin.vonx.io/genesis\"\n
- id: localVON\n is_production: false\n genesis_url: \"http://host.docker.internal:9000/genesis\"\n- id: bcovrinTest\n is_production: true\n is_write: true\n genesis_url: \"http://test.bcovrin.vonx.io/genesis\"\n endorser_did: \"9QPa6tHvBHttLg6U4xvviv\"\n endorser_alias: \"endorser_test\"\n- id: greenlightDev\n is_production: true\n is_write: true\n genesis_url: \"http://test.bcovrin.vonx.io/genesis\"\n
Note: is_write
property means that the ledger is write configurable. With reference to the above config example, both bcovrinTest
and (the no longer available -- in the above its pointing to BCovrin Test as well) greenlightDev
ledgers are write configurable. By default, on startup bcovrinTest
will be the write ledger as it is the topmost write configurable production ledger, more details regarding the selection rule. Using PUT /ledger/{ledger_id}/set-write-ledger
endpoint, either greenlightDev
and bcovrinTest
can be set as the write ledger.
Note 2: The greenlightDev
ledger is no longer available, so both ledger entries in the example above and below intentionally point to the same ledger URL.
- id: localVON\n is_production: false\n is_write: true\n genesis_url: \"http://host.docker.internal:9000/genesis\"\n- id: bcovrinTest\n is_production: true\n genesis_url: \"http://test.bcovrin.vonx.io/genesis\"\n- id: greenlightDev\n is_production: true\n genesis_url: \"http://test.bcovrin.vonx.io/genesis\"\n
Note: For instance with regards to example config above, localVON
will be the write ledger, as there are no production ledgers which are configurable it will choose the topmost write configurable non production ledger.
For each ledger, the required properties are as following:
id
*: The id (or name) of the ledger, can also be used as the pool name if none providedis_production
*: Whether the ledger is a production ledger. This is used by the pool selector algorithm to know which ledger to use for certain interactions (i.e. prefer production ledgers over non-production ledgers)For connecting to ledger, one of the following needs to be specified:
genesis_file
: The path to the genesis file to use for connecting to an Indy ledger.genesis_transactions
: String of genesis transactions to use for connecting to an Indy ledger.genesis_url
: The url from which to download the genesis transactions to use for connecting to an Indy ledger.is_write
: Whether this ledger is writable. At least one write ledger must be specified, unless running in read-only mode. Multiple write ledgers can be specified in config.Optional properties:
pool_name
: name of the indy pool to be openedkeepalive
: how many seconds to keep the ledger opensocks_proxy
endorser_did
: Endorser public DID registered on the ledger, needed for supporting Endorser protocol at multi-ledger level.endorser_alias
: Endorser alias for this ledger, needed for supporting Endorser protocol at multi-ledger level.Note: Both endorser_did
and endorser_alias
are part of the endorser info. Whenever a write ledger is selected using PUT /ledger/{ledger_id}/set-write-ledger
, the endorser info associated with that ledger in the config updates the endorser.endorser_public_did
and endorser.endorser_alias
profile setting respectively.
Multi-ledger related actions are grouped under the ledger
topic in the SwaggerUI.
/ledger/config
: Returns the multiple ledger configuration currently in use/ledger/get-write-ledger
: Returns the current active/set write_ledger's
ledger_id
/ledger/get-write-ledgers
: Returns list of available write_ledger's
ledger_id
/ledger/{ledger_id}/set-write-ledger
: Set active write_ledger's
ledger_id
The following process is executed for these functions in ACA-Py:
get_schema
get_credential_definition
get_revoc_reg_def
get_revoc_reg_entry
get_key_for_did
get_all_endpoints_for_did
get_endpoint_for_did
get_nym_role
get_revoc_reg_delta
If multiple ledgers are configured then IndyLedgerRequestsExecutor
service extracts DID
from the record identifier and executes the check below, else it returns the BaseLedger
instance.
lookup_did_in_configured_ledgers
functionDID
in cache
for a corresponding applicable ledger_id
. If found, return the ledger info, else continue._get_ledger_by_did
tasks for each of the configured ledgers.applicable_prod_ledgers
and applicable_non_prod_ledgers
dictionaries, each with self_certified
and non_self_certified
inner dict which are sorted by the original order or index.self_certified
> production
> non_production
production
ledger where the DID
is self_certified
non_production
ledger where the DID
is self_certified
production
ledger where the DID
is not self_certified
non_production
ledger where the DID
is not self_certified
_get_ledger_by_did
functionGET_NYM
DID
is self certifiedlookup_did_in_configured_ledgers
On startup, the first configured applicable ledger is assigned as the write_ledger
(BaseLedger
), the selection is dependent on the order (top-down) and whether it is production
or non_production
. For instance, considering this example configuration, ledger bcovrinTest
will be set as write_ledger
as it is the topmost production
ledger. If no production
ledgers are included in configuration then the topmost non_production
ledger is selected.
When you run in multi-ledger mode, ACA-Py will use the pool-name
(or id
) specified in the ledger configuration file for each ledger.
(When running in single-ledger mode, ACA-Py uses default
as the ledger name.)
If you are running against a ledger in write
mode, and the ledger requires you to accept a Transaction Author Agreement (TAA), ACA-Py stores the TAA acceptance status in the wallet in a non-secrets record, using the ledger's pool_name
as a key.
This means that if you are upgrading from single-ledger to multi-ledger mode, you will need to either:
id
for your writable ledger to default
(in your ledgers.yaml
file)or:
Once you re-start ACA-Py, you can check the GET /ledger/taa
endpoint to verify your TAA acceptance status.
There should be no impact/change in functionality to any ACA-Py protocols.
IndySdkLedger
was refactored by replacing wallet: IndySdkWallet
instance variable with profile: Profile
and accordingly .acapy_agent/indy/credex/verifier
, .acapy_agent/indy/models/pres_preview
, .acapy_agent/indy/sdk/profile.py
, .acapy_agent/indy/sdk/verifier
, ./acapy_agent/indy/verifier
were also updated.
Added build_and_return_get_nym_request
and submit_get_nym_request
helper functions to IndySdkLedger
and IndyVdrLedger
.
Best practice/feedback emerging from Askar session deadlock
issue and endorser refactoring
PR was also addressed here by not leaving sessions open unnecessarily and changing context.session
to context.profile.session
, etc.
These changes are made here:
./acapy_agent/ledger/routes.py
./acapy_agent/messaging/credential_definitions/routes.py
./acapy_agent/messaging/schemas/routes.py
./acapy_agent/protocols/actionmenu/v1_0/routes.py
./acapy_agent/protocols/actionmenu/v1_0/util.py
./acapy_agent/protocols/basicmessage/v1_0/routes.py
./acapy_agent/protocols/coordinate_mediation/v1_0/handlers/keylist_handler.py
./acapy_agent/protocols/coordinate_mediation/v1_0/routes.py
./acapy_agent/protocols/endorse_transaction/v1_0/routes.py
./acapy_agent/protocols/introduction/v0_1/handlers/invitation_handler.py
./acapy_agent/protocols/introduction/v0_1/routes.py
./acapy_agent/protocols/issue_credential/v1_0/handlers/credential_issue_handler.py
./acapy_agent/protocols/issue_credential/v1_0/handlers/credential_offer_handler.py
./acapy_agent/protocols/issue_credential/v1_0/handlers/credential_proposal_handler.py
./acapy_agent/protocols/issue_credential/v1_0/handlers/credential_request_handler.py
./acapy_agent/protocols/issue_credential/v1_0/routes.py
./acapy_agent/protocols/issue_credential/v2_0/routes.py
./acapy_agent/protocols/present_proof/v1_0/handlers/presentation_handler.py
./acapy_agent/protocols/present_proof/v1_0/handlers/presentation_proposal_handler.py
./acapy_agent/protocols/present_proof/v1_0/handlers/presentation_request_handler.py
./acapy_agent/protocols/present_proof/v1_0/routes.py
./acapy_agent/protocols/trustping/v1_0/routes.py
./acapy_agent/resolver/routes.py
./acapy_agent/revocation/routes.py
Most deployments of ACA-Py use a single wallet for all operations. This means all connections, credentials, keys, and everything else is stored in the same wallet and shared between all controllers of the agent. Multi-tenancy in ACA-Py allows multiple tenants to use the same ACA-Py instance with a different context. All tenants get their own encrypted wallet that only holds their own data.
This allows ACA-Py to be used for a wider range of use cases. One use case could be a company that creates a wallet for each department. Each department has full control over the actions they perform while having a shared instance for easy maintenance. Another use case could be for a Issuer-Hosted Custodial Agent. Sometimes it is required to host the agent on behalf of someone else.
"},{"location":"features/Multitenancy/#table-of-contents","title":"Table of Contents","text":"When multi-tenancy is enabled in ACA-Py there is still a single agent running, however, some of the resources are now shared between the tenants of the agent. Each tenant has their own wallet, with their own DIDs, connections, and credentials. Transports and most of the settings are still shared between agents. Each wallet uses the same endpoint, so to the outside world, it is not obvious multiple tenants are using the same agent.
"},{"location":"features/Multitenancy/#base-and-sub-wallets","title":"Base and Sub Wallets","text":"Multi-tenancy in ACA-Py makes a distinction between a base wallet and sub wallets.
The wallets used by the different tenants are called sub wallets. A sub wallet is almost identical to a wallet when multi-tenancy is disabled. This means that you can do everything with it that a single-tenant ACA-Py instance can also do.
The base wallet however, takes on a different role and has limited functionality. Its main function is to manage the sub wallets, which can be done using the Multi-tenant Admin API. It stores all settings and information about the different sub wallets and will route incoming messages to the corresponding sub wallets. See Message Routing for more details. All other features are disabled for the base wallet. This means it cannot issue credentials, present proof, or do any of the other actions sub wallets can do. This is to keep a clear hierarchical difference between base and sub wallets. For this reason, the base wallet should generally not be provisioned using the --wallet-seed
argument as not only it is not necessary for sub wallet management operations, but it will also require this DID to be correctly registered on the ledger for the service to start-up correctly.
Multi-tenancy is disabled by default. You can enable support for multiple wallets using the --multitenant
startup parameter. To also be able to manage wallets for the tenants, the multi-tenant admin API can be enabled using the --multitenant-admin
startup parameter. See Multi-tenant Admin API below for more info on the admin API.
The --jwt-secret
startup parameter is required when multi-tenancy is enabled. This is used for JWT creation and verification. See Authentication below for more info.
Example:
# This enables multi-tenancy in ACA-Py\nmultitenant: true\n\n# This enables the admin API for multi-tenancy. More information below\nmultitenant-admin: true\n\n# This sets the secret used for JWT creation/verification for sub wallets\njwt-secret: Something very secret\n
"},{"location":"features/Multitenancy/#single-wallet-vs-multiple-wallets","title":"Single Wallet vs Multiple Wallets","text":"With askar wallets it's possible to have all tenant wallets in a single wallet or each have an individual wallet. The default is to have each tenant in a separate wallet. This is done to keep the wallets separate and to allow for more flexibility in the future. If you want to have all tenants in a single wallet you can set the multitenancy-config
with the value {\"wallet_type\": \"single-wallet-askar\"}
. If you want to explicitly set the wallet type for each tenant you can do so by setting the multitenancy-config
with the value {\"wallet_type\": \"basic\"}
. See .vscode-sample/multitenant-admin.yml for an example.
## Multi-tenant Admin API\n\nThe multi-tenant admin API allows you to manage wallets in ACA-Py. Only the base wallet can manage wallets, so you can't for example create a wallet in the context of sub wallet (using the `Authorization` header as specified in [Authentication](#authentication)).\n\nMulti-tenancy related actions are grouped under the `/multitenancy` path or the `multitenancy` topic in the SwaggerUI. As mentioned above, the multi-tenant admin API is disabled by default, event when multi-tenancy is enabled. This is to allow for more flexible agent configuration (e.g. horizontal scaling where only a single instance exposes the admin API). To enable the multi-tenant admin API, the `--multitenant-admin` startup parameter can be used.\n\nSee the SwaggerUI for the exact API definition for multi-tenancy.\n\n## Managed vs Unmanaged Mode\n\nMulti-tenancy in ACA-Py is designed with two key management modes in mind.\n\n### Managed Mode\n\nIn **`managed`** mode, ACA-Py will manage the key for the wallet. This is the easiest configuration as it allows ACA-Py to fully control the wallet. When a message is received from another agent it can immediately unlock the wallet and process the message. The wallet key is stored encrypted in the base wallet.\n\n### Unmanaged Mode\n\nIn **`unmanaged`** mode, ACA-Py won't manage the key for the wallet. The key is not stored in the base wallet, which means the key to unlock the wallet needs to be provided whenever the wallet is used. When a message from another agent is received, ACA-Py cannot immediately unlock the wallet and process the message. See [Authentication](#authentication) for more info.\n\nIt is important to note unmanaged mode doesn't provide a lot of security over managed mode. The key is still processed by the agent, and therefore trust is required. It could however provide some benefit in the case a multi-tenant agent is compromised, as the agent doesn't store the key to unlock the wallet.\n\n> :warning: Although support for unmanaged mode is mostly in place, the receiving of messages from other agents in unmanaged mode is not supported yet. This means unmanaged mode can not be used yet.\n\n### Mode Usage\n\nThe mode used can be specified when creating a wallet using the `key_management_mode` parameter.\n\n```jsonc\n// POST /multitenancy/wallet\n{\n // ... other params ...\n \"key_management_mode\": \"managed\" // or \"unmanaged\"\n}\n
"},{"location":"features/Multitenancy/#message-routing","title":"Message Routing","text":"In multi-tenant mode, when ACA-Py receives a message from another agent, it will need to determine which tenant to route the message to. ACA-Py defines two types of routing methods, mediation and relaying.
See the Mediators and Relays RFC for an in-depth description of the difference between the two concepts.
"},{"location":"features/Multitenancy/#relaying","title":"Relaying","text":"In multi-tenant mode, ACA-Py still exposes a single endpoint for each transport. This means it can't route messages to sub wallets based on the endpoint. To resolve this the base wallet acts as a relay for all sub wallets. As can be seen in the architecture diagram above, all messages go through the base wallet. whenever a sub wallet creates a new key or connection, it will be registered at the base wallet. This allows the base wallet to look at the recipient keys for a message and determine which wallet it needs to route to.
"},{"location":"features/Multitenancy/#mediation","title":"Mediation","text":"ACA-Py allows messages to be routed through a mediator, and multi-tenancy can be used in combination with external mediators. The following scenarios are possible:
--mediator-invitation
to connect to the mediator, request mediation, and set it as the default mediatordefault-mediator-id
if you're already connected to the mediator and mediation is granted (e.g. after restart).The main tradeoff between option 1. and 2. is redundancy and control. Option 1. doesn't require every sub wallet to create a new connection with the mediator and request mediation. When all sub wallets are going to use the same mediator, this can be a huge benefit. Option 2. gives more control over the mediator being used. This could be useful if e.g. all wallets use a different mediator.
A combination of option 1. and 2. is also possible. In this case, two mediators will be used and the sub wallet mediator will forward to the base wallet mediator, which will, in turn, forward to the ACA-Py instance.
+---------------------+ +----------------------+ +--------------------+\n| Sub wallet mediator | ---> | Base wallet mediator | ---> | Multi-tenant agent |\n+---------------------+ +----------------------+ +--------------------+\n
"},{"location":"features/Multitenancy/#webhooks","title":"Webhooks","text":""},{"location":"features/Multitenancy/#webhook-urls","title":"Webhook URLs","text":"ACA-Py makes use of webhook events to call back to the controller. Multiple webhook targets can be specified, however, in multi-tenant mode, it may be desirable to specify different webhook targets per wallet.
When creating a wallet wallet_dispatch_type
be used to specify how webhooks for the wallet should be dispatched. The options are:
default
: Dispatch only to webhooks associated with this wallet.base
: Dispatch only to webhooks associated with the base wallet.both
: Dispatch to both webhook targets.If either default
or both
is specified you can set the webhook URLs specific to this wallet using the wallet.webhook_urls
option.
Example:
// POST /multitenancy/wallet\n{\n // ... other params ...\n \"wallet_dispatch_type\": \"default\",\n \"wallet_webhook_urls\": [\n \"https://webhook-url.com/path\",\n \"https://another-url.com/site\"\n ]\n}\n
"},{"location":"features/Multitenancy/#identifying-the-wallet","title":"Identifying the wallet","text":"When the webhook URLs of the base wallet are used or when multiple wallets specify the same webhook URL it can be hard to identify the wallet an event belongs to. To resolve this each webhook event will include the wallet id the event corresponds to.
For HTTP events the wallet id is included as the x-wallet-id
header. For WebSockets, the wallet id is included in the enclosing JSON object.
HTTP example:
POST <webhook-url>/{topic} [headers=x-wallet-id]\n{\n // event payload\n}\n
WebSocket example:
{\n \"topic\": \"{topic}\",\n \"wallet_id\": \"{wallet_id}\",\n \"payload\": {\n // event payload\n }\n}\n
"},{"location":"features/Multitenancy/#authentication","title":"Authentication","text":"When multi-tenancy is not enabled you can authenticate with the agent using the x-api-key
header. As there is only a single wallet, this provides sufficient authentication and authorization.
For sub wallets, an additional authentication method is introduced using JSON Web Tokens (JWTs). A token
parameter is returned after creating a wallet or calling the get token endpoint. This token must be provided for every admin API call you want to perform for the wallet using the Bearer authorization scheme.
Example
GET /connections [headers=\"Authorization: Bearer {token}]\n
The Authorization
header is in addition to the Admin API key. So if the admin-api-key
is enabled (which should be enabled in production) both the Authorization
and the x-api-key
headers should be provided when making calls to a sub wallet. For calls to a base wallet, only the x-api-key
should be provided.
A token can be obtained in two ways. The first method is the token
parameter from the response of the create wallet (POST /multitenancy/wallet
) endpoint. The second option is using the get wallet token endpoint (POST /multitenancy/wallet/{wallet_id}/token
) endpoint.
This is the method you use to obtain a token when you haven't already registered a tenant. In this process you will first register a tenant then an object containing your tenant token
as well as other useful information like your wallet id
will be returned to you.
Example
new_tenant='{\n \"image_url\": \"https://aries.ca/images/sample.png\",\n \"key_management_mode\": \"managed\",\n \"label\": \"example-label-02\",\n \"wallet_dispatch_type\": \"default\",\n \"wallet_key\": \"example-encryption-key-02\",\n \"wallet_name\": \"example-name-02\",\n \"wallet_type\": \"askar\",\n \"wallet_webhook_urls\": [\n \"https://example.com/webhook\"\n ]\n}'\n
echo $new_tenant | curl -X POST \"${ACAPY_ADMIN_URL}/multitenancy/wallet\" \\\n -H \"Content-Type: application/json\" \\\n -H \"X-Api-Key: $ACAPY_ADMIN_URL_API_KEY\" \\\n -d @-\n
Response
{\n \"settings\": {\n \"wallet.type\": \"askar\",\n \"wallet.name\": \"example-name-02\",\n \"wallet.webhook_urls\": [\n \"https://example.com/webhook\"\n ],\n \"wallet.dispatch_type\": \"default\",\n \"default_label\": \"example-label-02\",\n \"image_url\": \"https://aries.ca/images/sample.png\",\n \"wallet.id\": \"3b64ad0d-f556-4c04-92bc-cd95bfde58cd\"\n },\n \"key_management_mode\": \"managed\",\n \"updated_at\": \"2022-04-01T15:12:35.474975Z\",\n \"wallet_id\": \"3b64ad0d-f556-4c04-92bc-cd95bfde58cd\",\n \"created_at\": \"2022-04-01T15:12:35.474975Z\",\n \"token\": \"eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ3YWxsZXRfaWQiOiIzYjY0YWQwZC1mNTU2LTRjMDQtOTJiYy1jZDk1YmZkZTU4Y2QifQ.A4eWbSR2M1Z6mbjcSLOlciBuUejehLyytCVyeUlxI0E\"\n}\n
"},{"location":"features/Multitenancy/#method-2-get-tenant-token","title":"Method 2: Get tenant token","text":"This method allows you to retrieve a tenant token
for an already registered tenant. To retrieve a token you will need an Admin API key (if your admin is protected with one), wallet_key
and the wallet_id
of the tenant. Note that calling the get tenant token endpoint will invalidate the old token. This is useful if the old token needs to be revoked, but does mean that you can't have multiple authentication tokens for the same wallet. Only the last generated token will always be valid.
Example
curl -X POST \"${ACAPY_ADMIN_URL}/multitenancy/wallet/{wallet_id}/token\" \\\n -H \"Content-Type: application/json\" \\\n -H \"X-Api-Key: $ACAPY_ADMIN_URL_API_KEY\" \\\n -d { \"wallet_key\": \"example-encryption-key-02\" }\n
Response
{\n \"token\": \"eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ3YWxsZXRfaWQiOiIzYjY0YWQwZC1mNTU2LTRjMDQtOTJiYy1jZDk1YmZkZTU4Y2QifQ.A4eWbSR2M1Z6mbjcSLOlciBuUejehLyytCVyeUlxI0E\"\n}\n
In unmanaged mode, the get token endpoint also requires the wallet_key
parameter to be included in the request body. The wallet key will be included in the JWT so the wallet can be unlocked when making requests to the admin API.
{\n \"wallet_id\": \"wallet_id\",\n // \"wallet_key\" in only present in unmanaged mode\n \"wallet_key\": \"wallet_key\"\n}\n
In unmanaged mode, sending the wallet_key
to unlock the wallet in every request is not \u201csecure\u201d but keeps it simple at the moment. Eventually, the authentication method should be pluggable, and unmanaged mode would just mean that the key to unlock the wallet is not managed by ACA-Py.
For deterministic JWT creation and verification between restarts and multiple instances, the same JWT secret would need to be used. Therefore a --jwt-secret
param is added to the ACA-Py agent that will be used for JWT creation and verification.
When using the SwaggerUI you can click the icon next to each of the endpoints or the Authorize
button at the top to set the correct authentication headers. Make sure to also include the Bearer
part in the input field. This won't be automatically added.
After registering a tenant which effectively creates a subwallet, you may need to update the tenant information or delete it. The following describes how to accomplish both goals.
"},{"location":"features/Multitenancy/#update-a-tenant","title":"Update a tenant","text":"The following properties can be updated: image_url
, label
, wallet_dispatch_type
, and wallet_webhook_urls
for tenants of a multitenancy wallet. To update these properties you will PUT
a request json containing the properties you wish to update along with the updated values to the /multitenancy/wallet/${TENANT_WALLET_ID}
admin endpoint. If the Admin API endpoint is protected, you will also include the Admin API Key in the request header.
Example
update_tenant='{\n \"image_url\": \"https://aries.ca/images/sample-updated.png\",\n \"label\": \"example-label-02-updated\",\n \"wallet_webhook_urls\": [\n \"https://example.com/webhook/updated\"\n ]\n}'\n
echo $update_tenant | curl -X PUT \"${ACAPY_ADMIN_URL}/multitenancy/wallet/${TENANT_WALLET_ID}\" \\\n -H \"Content-Type: application/json\" \\\n -H \"x-api-key: $ACAPY_ADMIN_URL_API_KEY\" \\\n -d @-\n
Response
{\n \"settings\": {\n \"wallet.type\": \"askar\",\n \"wallet.name\": \"example-name-02\",\n \"wallet.webhook_urls\": [\n \"https://example.com/webhook/updated\"\n ],\n \"wallet.dispatch_type\": \"default\",\n \"default_label\": \"example-label-02-updated\",\n \"image_url\": \"https://aries.ca/images/sample-updated.png\",\n \"wallet.id\": \"3b64ad0d-f556-4c04-92bc-cd95bfde58cd\"\n },\n \"key_management_mode\": \"managed\",\n \"updated_at\": \"2022-04-01T16:23:58.642004Z\",\n \"wallet_id\": \"3b64ad0d-f556-4c04-92bc-cd95bfde58cd\",\n \"created_at\": \"2022-04-01T15:12:35.474975Z\"\n}\n
An Admin API Key is all that is ALLOWED to be included in a request header during an update. Including the Bearer token header will result in a 404: Unauthorized error
"},{"location":"features/Multitenancy/#remove-a-tenant","title":"Remove a tenant","text":"The following information is required to delete a tenant:
Example
curl -X POST \"${ACAPY_ADMIN_URL}/multitenancy/wallet/{wallet_id}/remove\" \\\n -H \"Content-Type: application/json\" \\\n -H \"x-api-key: $ACAPY_ADMIN_URL_API_KEY\" \\\n -d '{ \"wallet_key\": \"example-encryption-key-02\" }'\n
Response
{}\n
"},{"location":"features/Multitenancy/#per-tenant-settings","title":"Per tenant settings","text":"To allow the configuring of ACA-Py startup parameters/environment variables at a tenant/subwallet level. PR#2233 will provide the ability to update the following subset of settings when creating or updating the subwallet:
Labels Setting ACAPY_LOG_LEVEL log-level log.level ACAPY_INVITE_PUBLIC invite-public debug.invite_public ACAPY_PUBLIC_INVITES public-invites public_invites ACAPY_AUTO_ACCEPT_INVITES auto-accept-invites debug.auto_accept_invites ACAPY_AUTO_ACCEPT_REQUESTS auto-accept-requests debug.auto_accept_requests ACAPY_AUTO_PING_CONNECTION auto-ping-connection auto_ping_connection ACAPY_MONITOR_PING monitor-ping debug.monitor_ping ACAPY_AUTO_RESPOND_MESSAGES auto-respond-messages debug.auto_respond_messages ACAPY_AUTO_RESPOND_CREDENTIAL_OFFER auto-respond-credential-offer debug.auto_respond_credential_offer ACAPY_AUTO_RESPOND_CREDENTIAL_REQUEST auto-respond-credential-request debug.auto_respond_credential_request ACAPY_AUTO_VERIFY_PRESENTATION auto-verify-presentation debug.auto_verify_presentation ACAPY_NOTIFY_REVOCATION notify-revocation revocation.notify ACAPY_AUTO_REQUEST_ENDORSEMENT auto-request-endorsement endorser.auto_request ACAPY_AUTO_WRITE_TRANSACTIONS auto-write-transactions endorser.auto_write ACAPY_CREATE_REVOCATION_TRANSACTIONS auto-create-revocation-transactions endorser.auto_create_rev_reg ACAPY_ENDORSER_ROLE endorser-protocol-role endorser.protocol_rolePOST /multitenancy/wallet
Added extra_settings
dict field to request schema. extra_settings
can be configured in the request body as below:
Example Request
{\n \"wallet_name\": \" ... \",\n \"default_label\": \" ... \",\n \"wallet_type\": \" ... \",\n \"wallet_key\": \" ... \",\n \"key_management_mode\": \"managed\",\n \"wallet_webhook_urls\": [],\n \"wallet_dispatch_type\": \"base\",\n \"extra_settings\": {\n \"ACAPY_LOG_LEVEL\": \"INFO\",\n \"ACAPY_INVITE_PUBLIC\": true,\n \"public-invites\": true\n },\n}\n
echo $new_tenant | curl -X POST \"${ACAPY_ADMIN_URL}/multitenancy/wallet\" \\\n -H \"Content-Type: application/json\" \\\n -H \"X-Api-Key: $ACAPY_ADMIN_URL_API_KEY\" \\\n -d @-\n
PUT /multitenancy/wallet/{wallet_id}
Added extra_settings
dict field to request schema.
Example Request
{\n \"wallet_webhook_urls\": [ ... ],\n \"wallet_dispatch_type\": \"default\",\n \"label\": \" ... \",\n \"image_url\": \" ... \",\n \"extra_settings\": {\n \"ACAPY_LOG_LEVEL\": \"INFO\",\n \"ACAPY_INVITE_PUBLIC\": true,\n \"ACAPY_PUBLIC_INVITES\": false\n },\n }\n
echo $update_tenant | curl -X PUT \"${ACAPY_ADMIN_URL}/multitenancy/wallet/${WALLET_ID}\" \\\n -H \"Content-Type: application/json\" \\\n -H \"x-api-key: $ACAPY_ADMIN_URL_API_KEY\" \\\n -d @-\n
"},{"location":"features/PlugIns/","title":"Deeper Dive: ACA-Py Plug-Ins","text":"ACA-Py plugins enable standardized extensibility without overloading the core ACA-Py code base. Plugins may be features that you create specific to your deployment, or that you deploy from the ACA-Py Plugins \"Store\". Visit the Plugins Store to find all of the open source plugins that have been contributed.
"},{"location":"features/PlugIns/#whats-in-a-plug-in-and-how-does-they-work","title":"What's in a Plug-In and How Does They Work?","text":"Plug-ins are loaded on ACA-Py startup based on the following parameters:
--plugin
- identifies the plug-in library to load--block-plugin
- identifies plug-ins (including built-ins) that are not to be loaded--plugin-config
- identify a configuration parameter for a plug-in--plugin-config-value
- identify a value for a plug-in configurationThe --plug-in
parameter specifies a package that is loaded by ACA-Py at runtime, and extends ACA-Py by adding support for additional protocols and message types, and/or extending the Admin API with additional endpoints.
The original plug-in design (which we will call the \"old\" model) explicitly included message_types.py
routes.py
(to add Admin API's). But functionality was added later (we'll call this the \"new\" model) to allow the plug-in to include a generic setup
package that could perform arbitrary initialization. The \"new\" model also includes support for a definition.py
file that can specify plug-in version information (major/minor plug-in version, as well as the minimum supported version (if another agent is running an older version of the plug-in)).
You can discover which plug-ins are installed in an ACA-Py instance by calling (in the \"server\" section) the GET /plugins
endpoint. (Note that this will return all loaded protocols, including the built-ins. You can call the GET /status/config
to inspect the ACA-Py configuration, which will include the configuration for the external plug-ins.)
If a setup method is provided, it will be called. If not, the message_types.py
and routes.py
will be explicitly loaded.
This would be in the package/module __init__.py
:
async def setup(context: InjectionContext):\n pass\n
TODO I couldn't find an implementation of a custom setup
in any of the existing plug-ins, so I'm not completely sure what are the best practices for this option.
When loading a plug-in, if there is a message_types.py
available, ACA-Py will check the following attributes to initialize the protocol(s):
MESSAGE_TYPES
- identifies message types supported by the protocolCONTROLLERS
- identifies protocol controllersIf routes.py
is available, then ACA-Py will call the following functions to initialize the Admin endpoints:
register()
- registers routes for the new Admin endpointsregister_events()
- registers any events this package will listen for/respond to
is available, ACA-Py will read this package to determine protocol version information. An example follows (this is an example that specifies two protocol versions):
versions = [\n {\n \"major_version\": 1,\n \"minimum_minor_version\": 0,\n \"current_minor_version\": 0,\n \"path\": \"v1_0\",\n },\n {\n \"major_version\": 2,\n \"minimum_minor_version\": 0,\n \"current_minor_version\": 0,\n \"path\": \"v2_0\",\n },\n]\n
The attributes are:
major_version
- specifies the protocol major versioncurrent_minor_version
- specifies the protocol minor versionminimum_minor_version
- specifies the minimum supported version (if a lower version is installed in another agent)path
- specifies the sub-path within the package for this versionThe load sequence for a plug-in (the \"Startup\" class depends on how ACA-Py is running - upgrade
, provision
or start
):
sequenceDiagram\n participant Startup\n Note right of Startup: Configuration is loaded on startup<br/>from ACA-Py config params\n Startup->>+ArgParse: configure\n ArgParse->>settings: [\"external_plugins\"]\n ArgParse->>settings: [\"blocked_plugins\"]\n\n Startup->>+Conductor: setup()\n Note right of Conductor: Each configured plug-in is validated and loaded\n Conductor->>DefaultContext: build_context()\n DefaultContext->>DefaultContext: load_plugins()\n DefaultContext->>+PluginRegistry: register_package() (for built-in protocols)\n PluginRegistry->>PluginRegistry: register_plugin() (for each sub-package)\n DefaultContext->>PluginRegistry: register_plugin() (for non-protocol built-ins)\n loop for each external plug-in\n DefaultContext->>PluginRegistry: register_plugin()\n alt if a setup method is provided\n PluginRegistry->>ExternalPlugIn: has setup\n else if routes and/or message_types are provided\n PluginRegistry->>ExternalPlugIn: has routes\n PluginRegistry->>ExternalPlugIn: has message_types\n end\n opt if definition is provided\n PluginRegistry->>ExternalPlugIn: definition()\n end\n end\n DefaultContext->>PluginRegistry: init_context()\n loop for each external plug-in\n alt if a setup method is provided\n PluginRegistry->>ExternalPlugIn: setup()\n else if a setup method is NOT provided\n PluginRegistry->>PluginRegistry: load_protocols()\n PluginRegistry->>PluginRegistry: load_protocol_version()\n PluginRegistry->>ProtocolRegistry: register_message_types()\n PluginRegistry->>ProtocolRegistry: register_controllers()\n end\n PluginRegistry->>PluginRegistry: register_protocol_events()\n end\n\n Conductor->>Conductor: load_transports()\n\n Note right of Conductor: If the admin server is enabled, plug-in routes are added\n Conductor->>AdminServer: create admin server if enabled\n\n Startup->>Conductor: start()\n Conductor->>Conductor: start_transports()\n Conductor->>AdminServer: start()\n\n Note right of Startup: the following represents an<br/>admin server api request\n Startup->>AdminServer: setup_context() (called on each request)\n AdminServer->>PluginRegistry: register_admin_routes()\n loop for each external plug-in\n PluginRegistry->>ExternalPlugIn: routes.register() (to register endpoints)\n end
"},{"location":"features/PlugIns/#developing-a-new-plug-in","title":"Developing a New Plug-In","text":"When developing a new plug-in:
definition.py
file.message_types.py
file.routes.py
file.setup.py
file to initialize the custom functionality. No guidance is currently available for this option.Most ACA-Py plug-ins provide support for installing the plug-in using poetry. It is recommended to include support in your package for installing using either pip or poetry, to provide maximum support for users of your plug-in.
"},{"location":"features/PlugIns/#plug-in-demo","title":"Plug-In Demo","text":"TBD
"},{"location":"features/PlugIns/#aca-py-plug-ins-repository","title":"ACA-Py Plug-ins Repository","text":"Checkout the \"Plugins\" tab in the ACA-Py Plugins \"Store\" to find a list of plugins that might be useful in your deployment. Instructions are included for how you can contribute your plugin to the list.
"},{"location":"features/PlugIns/#references","title":"References","text":"The following links may be helpful or provide additional context for the current plug-in support. (These are links to issues or pull requests that were raised during plug-in development.)
Configuration params:
Loading plug-ins:
Versioning for plug-ins:
In the past, ACA-Py has used \"unqualified\" DIDs by convention established early on in the Aries ecosystem, before the concept of Peer DIDs, or DIDs that existed only between peers and were not (necessarily) published to a distributed ledger, fully matured. These \"unqualified\" DIDs were effectively Indy Nyms that had not been published to an Indy network. Key material and service endpoints were communicated by embedding the DID Document for the \"DID\" in DID Exchange request and response messages.
For those familiar with the DID Core Specification, it is a stretch to refer to these unqualified DIDs as DIDs. Usage of these DIDs will be phased out, as dictated by Aries RFC 0793: Unqualified DID Transition. These DIDs will be phased out in favor of the did:peer
DID Method. ACA-Py's support for this method and it's use in DID Exchange and DID Rotation is dictated below.
When using DID Exchange as initiated by an Out-of-Band invitation:
POST /out-of-band/create-invitation
accepts two parameters (in addition to others):use_did_method
: a DID Method (options: did:peer:2
did:peer:4
) indicating that a DID of that type is created (if necessary), and used in the invitation. If a DID of the type has to be created, it is flagged as the \"invitation\" DID and used in all future invitations so that connection reuse is the default behaviour.did:peer:4
.use_did
: a complete DID, which will be used for the invitation being established. This supports the edge case of an entity wanting to use a new DID for every invitation. It is the responsibility of the controller to create the DID before passing it in.use_did_method=\"did:peer:4\"
is the default, which is created and (re)used.didexchange/1.1
. Optionally, didexchage/1.0
may also be provided, thus enabling backwards compatibility with agents that do not yet support didexchage/1.0
and use of unqualified DIDs.When receiving an OOB invitation or creating a DID Exchange request to a known Public DID:
POST /didexchange/create-request
and POST /didexchange/{conn_id}/accept-invitation
accepts two parameters (in addition to others):use_did_method
: a DID Method (options: did:peer:2
did:peer:4
) indicating that a DID of that type should be created and used for the connection.did:peer:4
.use_did
: a complete DID, which will be used for the connection being established. This supports the edge case of an entity wanting to use the same DID for more than one connection. It is the responsibility of the controller to create the DID before passing it in.did:peer:4
is created and DID Exchange 1.1 is always used.auto-accept
is used with DID Exchange, then an unqualified DID is created if DID Exchange 1.0 is being used, and a DID Peer 4 is used if DID Exchange 1.1 is used.With these changes, an existing ACA-Py installation using unqualified DIDs can upgrade to use qualified DIDs:
use_did
or use_did_method
parameter on the POST /out-of-band/create-invitation
, POST /didexchange/create-request
. and POST /didexchange/{conn_id}/accept_invitation
endpoints and specifying did:peer:2
or did_peer:4
.auto-accept
the connection.As part of the transition to qualified DIDs, existing connections may be updated to qualified DIDs using the DID Rotate protocol. This is not strictly required; since DIDComm v1 depends on recipient keys for correlating a received message back to a connection, the DID itself is mostly ignored. However, as we transition to DIDComm v2 or if it is desired to update the keys associated with a connection, DID Rotate may be used to update keys and service endpoints.
The steps to do so are:

1. Create a new DID using `POST /wallet/did/create` (or through the endpoints provided by a plugged in DID Method, if relevant). The recommended DID type is `did:peer:4`.
2. Call `POST /did-rotate/{conn_id}/rotate`, providing the created DID as the `to_did` in the body of the Admin API request.
3. On completion of the protocol, a `did_rotate` webhook will be emitted indicating success.
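A sketch of those steps using the Python requests library; the admin URL and connection ID are placeholders, and the exact request/response shapes should be checked against your ACA-Py version's OpenAPI spec:

```python
# Illustrative sketch only: rotate an existing connection to a new did:peer:4.
import requests

ADMIN_URL = "http://localhost:8031"  # assumed local admin endpoint
conn_id = "REPLACE-WITH-CONNECTION-ID"

# Step 1: create the new DID in the wallet.
did_resp = requests.post(
    f"{ADMIN_URL}/wallet/did/create",
    json={"method": "did:peer:4"},
)
new_did = did_resp.json()["result"]["did"]

# Step 2: ask the agent to rotate the connection to the new DID.
# Success is signalled asynchronously by the did_rotate webhook.
requests.post(
    f"{ADMIN_URL}/did-rotate/{conn_id}/rotate",
    json={"to_did": new_did},
)
```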
This document describes the implementation of SD-JWTs in ACA-Py according to the Selective Disclosure for JWTs (SD-JWT) Specification, which defines a mechanism for selective disclosure of individual elements of a JSON object used as the payload of a JSON Web Signature structure.
This implementation adds an important privacy-preserving feature to JWTs, since the receiver of an unencrypted JWT can view all claims within. This feature allows the holder to present only a relevant subset of the claims for a given presentation. The issuer includes plaintext claims, called disclosures, outside of the JWT. Each disclosure corresponds to a hidden claim within the JWT. When a holder prepares a presentation, they include along with the JWT only the disclosures corresponding to the claims they wish to reveal. The verifier verifies that the disclosures in fact correspond to claim values within the issuer-signed JWT. The verifier cannot view the claim values not disclosed by the holder.
In addition, this implementation includes an optional mechanism for key binding, which is the concept of binding an SD-JWT to a holder's public key and requiring that the holder prove possession of the corresponding private key when presenting the SD-JWT.
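To make the disclosure mechanism concrete, the following is a minimal sketch of the hashing relationship defined by the SD-JWT specification. The salt shown is arbitrary, and the exact digest value depends on the issuer's JSON serialization (for example, whitespace):

```python
# Sketch of the SD-JWT disclosure/digest relationship: a disclosure is the
# base64url encoding of a JSON array [salt, claim name, claim value], and the
# JWT payload's "_sd" list holds the base64url-encoded sha-256 digest of it.
import base64
import hashlib
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, per RFC 7515."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

disclosure = b64url(
    json.dumps(["2GLC42sKQveCfGfryNRN9w", "birthdate", "1940-01-01"]).encode("utf-8")
)
digest = b64url(hashlib.sha256(disclosure.encode("ascii")).digest())
print(disclosure)  # travels outside the JWT, revealed only if the holder chooses
print(digest)      # embedded in the signed payload's "_sd" list
```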
"},{"location":"features/SelectiveDisclosureJWTs/#issuer-instructions","title":"Issuer Instructions","text":"The issuer determines which claims in an SD-JWT can be selectively disclosable. In this implementation, all claims at all levels of the JSON structure are by default selectively disclosable. If the issuer wishes for certain claims to always be visible, they can indicate which claims should not be selectively disclosable, as described below. Essential verification data such as iss
, iat
, exp
, and cnf
are always visible.
The issuer creates a list of JSON paths for the claims that will not be selectively disclosable. Here is an example payload:
{\n \"birthdate\": \"1940-01-01\",\n \"address\": {\n \"street_address\": \"123 Main St\",\n \"locality\": \"Anytown\",\n \"region\": \"Anystate\",\n \"country\": \"US\",\n },\n \"nationalities\": [\"US\", \"DE\", \"SA\"],\n}\n
| Attribute to access | JSON path |
| --- | --- |
| \"birthdate\" | \"birthdate\" |
| The country attribute within the address dictionary | \"address.country\" |
| The second item in the nationalities list | \"nationalities[1]\" |
| All items in the nationalities list | \"nationalities[0:2]\" |

The specification defines options for how the issuer can handle nested structures with respect to selective disclosability. As mentioned, all claims at all levels of the JSON structure are by default selectively disclosable.
"},{"location":"features/SelectiveDisclosureJWTs/#option-1-flat-sd-jwt","title":"Option 1: Flat SD-JWT","text":"The issuer can decide to treat the address
claim in the above example payload as a block that can either be disclosed completely or not at all.
The issuer lists out all the claims inside \"address\" in the non_sd_list
, but not address
itself:
non_sd_list = [\n \"address.street_address\",\n \"address.locality\",\n \"address.region\",\n \"address.country\",\n]\n
"},{"location":"features/SelectiveDisclosureJWTs/#option-2-structured-sd-jwt","title":"Option 2: Structured SD-JWT","text":"The issuer may instead decide to make the address
claim contents selectively disclosable individually.
The issuer lists only \"address\" in the non_sd_list
.
non_sd_list = [\"address\"]\n
"},{"location":"features/SelectiveDisclosureJWTs/#option-3-sd-jwt-with-recursive-disclosures","title":"Option 3: SD-JWT with Recursive Disclosures","text":"The issuer may also decide to make the address
claim contents selectively disclosable recursively, i.e., the address
claim is made selectively disclosable as well as its sub-claims.
The issuer lists neither address
nor the subclaims of address
in the non_sd_list
, leaving all with their default selective disclosability. If all claims can be selectively disclosable, the non_sd_list
need not be defined explicitly.
/wallet/sd-jwt/sign
endpoint","text":"{\n \"did\": \"WpVJtxKVwGQdRpQP8iwJZy\",\n \"headers\": {},\n \"payload\": {\n \"sub\": \"user_42\",\n \"given_name\": \"John\",\n \"family_name\": \"Doe\",\n \"email\": \"johndoe@example.com\",\n \"phone_number\": \"+1-202-555-0101\",\n \"phone_number_verified\": true,\n \"address\": {\n \"street_address\": \"123 Main St\",\n \"locality\": \"Anytown\",\n \"region\": \"Anystate\",\n \"country\": \"US\"\n },\n \"birthdate\": \"1940-01-01\",\n \"updated_at\": 1570000000,\n \"nationalities\": [\"US\", \"DE\", \"SA\"],\n \"iss\": \"https://example.com/issuer\",\n \"iat\": 1683000000,\n \"exp\": 1883000000\n },\n \"non_sd_list\": [\n \"given_name\",\n \"family_name\",\n \"nationalities\"\n ]\n}\n
"},{"location":"features/SelectiveDisclosureJWTs/#output","title":"Output","text":"\"eyJ0eXAiOiAiSldUIiwgImFsZyI6ICJFZERTQSIsICJraWQiOiAiZGlkOnNvdjpXcFZKdHhLVndHUWRScFFQOGl3Slp5I2tleS0xIn0.eyJfc2QiOiBbIkR0a21ha3NkZGtHRjFKeDBDY0kxdmxRTmZMcGFnQWZ1N3p4VnBGRWJXeXciLCAiSlJLb1E0QXVHaU1INWJIanNmNVV4YmJFeDh2YzFHcUtvX0l3TXE3Nl9xbyIsICJNTTh0TlVLNUstR1lWd0swX01kN0k4MzExTTgwVi13Z0hRYWZvRkoxS09JIiwgIlBaM1VDQmdadVRMMDJkV0pxSVY4elUtSWhnalJNX1NTS3dQdTk3MURmLTQiLCAiX294WGNuSW5Yai1SV3BMVHNISU5YaHFrRVAwODkwUFJjNDBISWE1NElJMCIsICJhdnRLVW5Sdnc1clV0TnZfUnAwUll1dUdkR0RzcnJPYWJfVjR1Y05RRWRvIiwgInByRXZJbzBseTVtNTVsRUpTQUdTVzMxWGdVTElOalo5ZkxiRG81U1pCX0UiXSwgImdpdmVuX25hbWUiOiAiSm9obiIsICJmYW1pbHlfbmFtZSI6ICJEb2UiLCAibmF0aW9uYWxpdGllcyI6IFt7Ii4uLiI6ICJPdU1wcEhpYzEySjYzWTBIY2Ffd1BVeDJCTGdUQVdZQjJpdXpMY3lvcU5JIn0sIHsiLi4uIjogIlIxczlaU3NYeVV0T2QyODdEYy1DTVYyMEdvREF3WUVHV3c4ZkVKd1BNMjAifSwgeyIuLi4iOiAid0lJbjdhQlNDVkFZcUF1Rks3Nmpra3FjVGFvb3YzcUhKbzU5WjdKWHpnUSJ9XSwgImlzcyI6ICJodHRwczovL2V4YW1wbGUuY29tL2lzc3VlciIsICJpYXQiOiAxNjgzMDAwMDAwLCAiZXhwIjogMTg4MzAwMDAwMCwgIl9zZF9hbGciOiAic2hhLTI1NiJ9.cIsuGTIPfpRs_Z49nZcn7L6NUgxQumMGQpu8K6rBtv-YRiFyySUgthQI8KZe1xKyn5Wc8zJnRcWbFki2Vzw6Cw~WyJmWURNM1FQcnZicnZ6YlN4elJsUHFnIiwgIlNBIl0~WyI0UGc2SmZ0UnRXdGFPcDNZX2tscmZRIiwgIkRFIl0~WyJBcDh1VHgxbVhlYUgxeTJRRlVjbWV3IiwgIlVTIl0~WyJ4dkRYMDBmalpmZXJpTmlQb2Q1MXFRIiwgInVwZGF0ZWRfYXQiLCAxNTcwMDAwMDAwXQ~WyJYOTlzM19MaXhCY29yX2hudFJFWmNnIiwgInN1YiIsICJ1c2VyXzQyIl0~WyIxODVTak1hM1k3QlFiWUpabVE3U0NRIiwgInBob25lX251bWJlcl92ZXJpZmllZCIsIHRydWVd~WyJRN1FGaUpvZkhLSWZGV0kxZ0Vaal93IiwgInBob25lX251bWJlciIsICIrMS0yMDItNTU1LTAxMDEiXQ~WyJOeWtVcmJYN1BjVE1ubVRkUWVxZXl3IiwgImVtYWlsIiwgImpvaG5kb2VAZXhhbXBsZS5jb20iXQ~WyJlemJwQ2lnVlhrY205RlluVjNQMGJ3IiwgImJpcnRoZGF0ZSIsICIxOTQwLTAxLTAxIl0~WyJvd3ROX3I5Z040MzZKVnJFRWhQU05BIiwgInN0cmVldF9hZGRyZXNzIiwgIjEyMyBNYWluIFN0Il0~WyJLQXktZ0VaWmRiUnNHV1dNVXg5amZnIiwgInJlZ2lvbiIsICJBbnlzdGF0ZSJd~WyJPNnl0anM2SU9HMHpDQktwa0tzU1pBIiwgImxvY2FsaXR5IiwgIkFueXRvd24iXQ~WyI0Nzg5aG5GSjhFNTRsLW91RjRaN1V3IiwgImNvdW50cnkiLCAiVVMiXQ~WyIyaDR3N0FuaDFOOC15ZlpGc2FGVHRBIiwgImFkZHJlc3MiLCB7Il9zZCI6IFsiTXhKRDV5Vm9QQzFIQnhPRmVRa21TQ1E0dVJrYmNrellza1Z5RzVwMXZ5SSIsICJVYkxmVWlpdDJTOFhlX2pYbS15RHBHZXN0ZDNZOGJZczVGaVJpbVBtMHdvIiwgImhsQzJEYVBwT2t0eHZyeUFlN3U2YnBuM09IZ193Qk5heExiS3lPRDVMdkEiLCAia2NkLVJNaC1PaGFZS1FPZ2JaajhmNUppOXNLb2hyYnlhYzNSdXRqcHNNYyJdfV0~\"\n
The `sd_jwt_sign()` method:

- compares the `non_sd_list` against the list of JSON paths for all claims to create the list of JSON paths for selectively disclosable claims (the `sd_list`)
- sorts the `sd_list` so that the claims deepest in the structure are handled first
- uses the `sd_list` to find each selectively disclosable claim, wraps it in the `SDObj` defined by the sd-jwt Python library, and removes/replaces the original entry
- calls the `SDJWTIssuerACAPy.issue()` method, which in turn calls `SDJWTIssuerACAPy._create_signed_jws()`; the latter is redefined in order to use the ACA-Py `jwt_sign` method and creates the JWT

/wallet/sd-jwt/verify
endpoint","text":"Using the output from the /wallet/sd-jwt/sign
example above, we have decided to only reveal two of the selectively disclosable claims (`sub` and `updated_at`) and achieved this by only including the disclosures for those claims. We have also included a key binding JWT following the disclosures.
{\n \"sd_jwt\": \"eyJ0eXAiOiAiSldUIiwgImFsZyI6ICJFZERTQSIsICJraWQiOiAiZGlkOnNvdjpXcFZKdHhLVndHUWRScFFQOGl3Slp5I2tleS0xIn0.eyJfc2QiOiBbIkR0a21ha3NkZGtHRjFKeDBDY0kxdmxRTmZMcGFnQWZ1N3p4VnBGRWJXeXciLCAiSlJLb1E0QXVHaU1INWJIanNmNVV4YmJFeDh2YzFHcUtvX0l3TXE3Nl9xbyIsICJNTTh0TlVLNUstR1lWd0swX01kN0k4MzExTTgwVi13Z0hRYWZvRkoxS09JIiwgIlBaM1VDQmdadVRMMDJkV0pxSVY4elUtSWhnalJNX1NTS3dQdTk3MURmLTQiLCAiX294WGNuSW5Yai1SV3BMVHNISU5YaHFrRVAwODkwUFJjNDBISWE1NElJMCIsICJhdnRLVW5Sdnc1clV0TnZfUnAwUll1dUdkR0RzcnJPYWJfVjR1Y05RRWRvIiwgInByRXZJbzBseTVtNTVsRUpTQUdTVzMxWGdVTElOalo5ZkxiRG81U1pCX0UiXSwgImdpdmVuX25hbWUiOiAiSm9obiIsICJmYW1pbHlfbmFtZSI6ICJEb2UiLCAibmF0aW9uYWxpdGllcyI6IFt7Ii4uLiI6ICJPdU1wcEhpYzEySjYzWTBIY2Ffd1BVeDJCTGdUQVdZQjJpdXpMY3lvcU5JIn0sIHsiLi4uIjogIlIxczlaU3NYeVV0T2QyODdEYy1DTVYyMEdvREF3WUVHV3c4ZkVKd1BNMjAifSwgeyIuLi4iOiAid0lJbjdhQlNDVkFZcUF1Rks3Nmpra3FjVGFvb3YzcUhKbzU5WjdKWHpnUSJ9XSwgImlzcyI6ICJodHRwczovL2V4YW1wbGUuY29tL2lzc3VlciIsICJpYXQiOiAxNjgzMDAwMDAwLCAiZXhwIjogMTg4MzAwMDAwMCwgIl9zZF9hbGciOiAic2hhLTI1NiJ9.cIsuGTIPfpRs_Z49nZcn7L6NUgxQumMGQpu8K6rBtv-YRiFyySUgthQI8KZe1xKyn5Wc8zJnRcWbFki2Vzw6Cw~WyJ4dkRYMDBmalpmZXJpTmlQb2Q1MXFRIiwgInVwZGF0ZWRfYXQiLCAxNTcwMDAwMDAwXQ~WyJYOTlzM19MaXhCY29yX2hudFJFWmNnIiwgInN1YiIsICJ1c2VyXzQyIl0~eyJhbGciOiAiRWREU0EiLCAidHlwIjogImtiK2p3dCIsICJraWQiOiAiZGlkOnNvdjpXcFZKdHhLVndHUWRScFFQOGl3Slp5I2tleS0xIn0.eyJub25jZSI6ICIxMjM0NTY3ODkwIiwgImF1ZCI6ICJodHRwczovL2V4YW1wbGUuY29tL3ZlcmlmaWVyIiwgImlhdCI6IDE2ODgxNjA0ODN9.i55VeR7bNt7T8HWJcfj6jSLH3Q7vFk8N0t7Tb5FZHKmiHyLrg0IPAuK5uKr3_4SkjuGt1_iNl8Wr3atWBtXMDA\"\n}\n
"},{"location":"features/SelectiveDisclosureJWTs/#verify-output","title":"Verify Output","text":"Note that attributes in the non_sd_list
(given_name
, family_name
, and nationalities
), as well as essential verification data (iss
, iat
, exp
) are visible directly within the payload. The disclosures include only the values for the user
and updated_at
claims, since those are the only selectively disclosable claims that the holder presented. The corresponding hashes for those disclosures appear in the payload[\"_sd\"]
list.
{\n \"headers\": {\n \"typ\": \"JWT\",\n \"alg\": \"EdDSA\",\n \"kid\": \"did:sov:WpVJtxKVwGQdRpQP8iwJZy#key-1\"\n },\n \"payload\": {\n \"_sd\": [\n \"DtkmaksddkGF1Jx0CcI1vlQNfLpagAfu7zxVpFEbWyw\",\n \"JRKoQ4AuGiMH5bHjsf5UxbbEx8vc1GqKo_IwMq76_qo\",\n \"MM8tNUK5K-GYVwK0_Md7I8311M80V-wgHQafoFJ1KOI\",\n \"PZ3UCBgZuTL02dWJqIV8zU-IhgjRM_SSKwPu971Df-4\",\n \"_oxXcnInXj-RWpLTsHINXhqkEP0890PRc40HIa54II0\",\n \"avtKUnRvw5rUtNv_Rp0RYuuGdGDsrrOab_V4ucNQEdo\",\n \"prEvIo0ly5m55lEJSAGSW31XgULINjZ9fLbDo5SZB_E\"\n ],\n \"given_name\": \"John\",\n \"family_name\": \"Doe\",\n \"nationalities\": [\n {\n \"...\": \"OuMppHic12J63Y0Hca_wPUx2BLgTAWYB2iuzLcyoqNI\"\n },\n {\n \"...\": \"R1s9ZSsXyUtOd287Dc-CMV20GoDAwYEGWw8fEJwPM20\"\n },\n {\n \"...\": \"wIIn7aBSCVAYqAuFK76jkkqcTaoov3qHJo59Z7JXzgQ\"\n }\n ],\n \"iss\": \"https://example.com/issuer\",\n \"iat\": 1683000000,\n \"exp\": 1883000000,\n \"_sd_alg\": \"sha-256\"\n },\n \"valid\": true,\n \"kid\": \"did:sov:WpVJtxKVwGQdRpQP8iwJZy#key-1\",\n \"disclosures\": [\n [\n \"xvDX00fjZferiNiPod51qQ\",\n \"updated_at\",\n 1570000000\n ],\n [\n \"X99s3_LixBcor_hntREZcg\",\n \"sub\",\n \"user_42\"\n ]\n ]\n}\n
The `sd_jwt_verify()` method calls `SDJWTVerifierACAPy._verify_sd_jwt`, which is redefined in order to use the ACA-Py `jwt_verify` method, and which returns the verified JWT.

This document provides a summary of the adherence of ACA-Py to the Aries Interop Profiles, and an overview of the ACA-Py feature set. This document is manually updated and, as such, may not be up to date with the most recent release of ACA-Py or the repository main
branch. Reminders (and PRs!) to update this page are welcome! If you have any questions, please contact us on the #aries channel on OpenWallet Foundation Discord or through an issue in this repo.
Last Update: 2024-10-08, Release 1.0.1
The checklist version of this document was created as a joint effort between Northern Block, Animo Solutions and the Ontario government, on behalf of the Ontario government.
"},{"location":"features/SupportedRFCs/#aip-support-and-interoperability","title":"AIP Support and Interoperability","text":"See the Aries Agent Test Harness and the Aries Interoperability Status for daily interoperability test run results between ACA-Py and other decentralized trust Frameworks and Agents.
| AIP Version | Supported | Notes |
| --- | --- | --- |
| AIP 1.0 | Yes | Fully supported. Deprecation notices published |
| AIP 2.0 | Yes | Fully supported. |

A summary of the Aries Interop Profiles and Aries RFCs supported in ACA-Py can be found later in this document.
"},{"location":"features/SupportedRFCs/#platform-support","title":"Platform Support","text":"Platform Supported Notes Server Kubernetes BC Gov has extensive experience running ACA-Py on Red Hat's OpenShift Kubernetes Distribution. Docker Official docker images are published to the GitHub container repository at https://ghcr.io/openwallet-foundation/acapy. Desktop Could be run as a local service on the computer iOS Android Browser"},{"location":"features/SupportedRFCs/#agent-types","title":"Agent Types","text":"Role Supported Notes Issuer Holder Verifier Mediator Service See the aries-mediator-service, a pre-configured, production ready Aries Mediator Service based on a released version of ACA-Py. Mediator Client Indy Transaction Author Indy Transaction Endorser Indy Endorser Service See the aries-endorser-service, a pre-configured, production ready Aries Endorser Service based on a released version of ACA-Py."},{"location":"features/SupportedRFCs/#credential-types","title":"Credential Types","text":"Credential Type Supported Notes Hyperledger AnonCreds Includes full issue VC, present proof, and revoke VC support. W3C Verifiable Credentials Data Model Supports JSON-LD Data Integrity Proof Credentials using theEd25519Signature2018
, BbsBlsSignature2020
and BbsBlsSignatureProof2020
signature suites.Supports the DIF Presentation Exchange data format for presentation requests and presentation submissions.Work currently underway to add support for Hyperledger AnonCreds in W3C VC JSON-LD Format"},{"location":"features/SupportedRFCs/#did-methods","title":"DID Methods","text":"Method Supported Notes \"unqualified\" Deprecated Pre-DID standard identifiers. Used either in a peer-to-peer context, or as an alternate form of a did:sov
DID published on an Indy network. did:sov
did:web
Resolution only did:key
did:peer
Algorithms 2
/3
and 4
Universal Resolver A plug in from SICPA is available that can be added to an ACA-Py installation to support a universal resolver capability, providing support for most DID methods in the W3C DID Method Registry."},{"location":"features/SupportedRFCs/#secure-storage-types","title":"Secure Storage Types","text":"Secure Storage Types Supported Notes Aries Askar Recommended - Aries Askar provides equivalent/evolved secure storage and cryptography support to the \"indy-wallet\" part of the Indy SDK. When using Askar (via the --wallet-type askar
startup parameter), other functionality is handled by CredX (AnonCreds) and Indy VDR (Indy ledger interactions). Aries Askar-AnonCreds Recommended - When using Askar/AnonCreds (via the --wallet-type askar-anoncreds
startup parameter), other functionality is handled by AnonCreds RS (AnonCreds) and Indy VDR (Indy ledger interactions).This wallet-type
will eventually be the same as askar
when we have fully integrated the AnonCreds RS library into ACA-Py. Indy SDK Removed in ACA-Py Release 1.0.0rc5 Existing deployments using the Indy SDK MUST transition to Aries Askar and related components as soon as possible. See the Indy SDK to Askar Migration Guide for guidance.
"},{"location":"features/SupportedRFCs/#miscellaneous-features","title":"Miscellaneous Features","text":"Feature Supported Notes ACA-Py Plugins The ACA-Py Plugins repository contains a growing set of plugins that are maintained and (mostly) tested against new releases of ACA-Py. Multi use invitations Invitations using public did Invitations using peer dids supporting connection reuse Implicit pickup of messages in role of mediator Revocable AnonCreds Credentials Multi-Tenancy Documentation Multi-Tenant Management The Traction open source project from BC Gov is a layer on top of ACA-Py that enables the easy management of ACA-Py tenants, with an Administrative UI (\"The Innkeeper\") and a Tenant UI for using ACA-Py in a web UI (setting up, issuing, holding and verifying credentials) Connection-less (non OOB protocol / AIP 1.0) Only for issue credential and present proof Connection-less (OOB protocol / AIP 2.0) Only for present proof Signed Attachments Used for OOB Multi Indy ledger support (with automatic detection) Support added in the 0.7.3 Release. Persistence of mediated messages Plugins in the ACA-Py Plugins repository are available for persistent queue support using Redis and Kafka. Without persistent queue support, messages are stored in an in-memory queue and so are subject to loss in the case of a sudden termination of an ACA-Py process. The in-memory queue is properly handled in the case of a graceful shutdown of an ACA-Py process (e.g. processing of the queue completes and no new messages are accepted). Storage Import & Export Supported by directly interacting with the Aries Askar (e.g., no Admin API endpoint available for wallet import & export). Aries Askar support includes the ability to import storage exported from the Indy SDK's \"indy-wallet\" component. Documentation for migrating from Indy SDK storage to Askar can be found in the Indy SDK to Askar Migration Guide. SD-JWTs Signing and verifying SD-JWTs is supported"},{"location":"features/SupportedRFCs/#supported-rfcs","title":"Supported RFCs","text":""},{"location":"features/SupportedRFCs/#aip-10","title":"AIP 1.0","text":"All RFCs listed in AIP 1.0 are fully supported in ACA-Py, but deprecation and removal of some of the protocols has begun. The following table provides notes about the implementation of specific RFCs.
RFC Supported Notes 0025-didcomm-transports ACA-Py currently supports HTTP and WebSockets for both inbound and outbound messaging. Transports are pluggable and an agent instance can use multiple inbound and outbound transports. 0160-connection-protocol DEPRECATED In the next release, the protocol will be removed. The protocol will continue to be available as an ACA-Py plugin, but those upgrading to that pending release and continuing to use this protocol will need to include the plugin in their deployment configuration. Users SHOULD upgrade to the equivalent AIP 2.0 protocols as soon as possible. 0036-issue-credential-v1.0 DEPRECATED In the next release, the protocol will be removed. The protocol will continue to be available as an ACA-Py plugin, but those upgrading to that pending release and continuing to use this protocol will need to include the plugin in their deployment configuration. Users SHOULD upgrade to the equivalent AIP 2.0 protocols as soon as possible. 0037-present-proof-v1.0 DEPRECATED In the next release, the protocol will be removed. It will continue to be available as an ACA-Py plugin, but those upgrading to that pending release and continuing to use this protocol will need to include the plugin in their deployment configuration. Users SHOULD upgrade to the equivalent AIP 2.0 protocols as soon as possible."},{"location":"features/SupportedRFCs/#aip-20","title":"AIP 2.0","text":"All RFCs listed in AIP 2.0 (including the sub-targets) are fully supported in ACA-Py EXCEPT as noted in the table below.
RFC Supported Notes Fully Supported"},{"location":"features/SupportedRFCs/#other-supported-rfcs","title":"Other Supported RFCs","text":"RFC Supported Notes 0031-discover-features Rarely (never?) used, and in implementing the V2 version of the protocol, the V1 version was found to be incomplete and was updated as part of Release 0.7.3 0028-introduce 0509-action-menu"},{"location":"features/UsingOpenAPI/","title":"Aries Cloud Agent-Python (ACA-Py) - OpenAPI Code Generation Considerations","text":"ACA-Py provides an OpenAPI-documented REST interface for administering the agent's internal state and initiating communication with connected agents.
The running agent provides a Swagger User Interface
that can be browsed and used to test various scenarios manually (see the Admin API Readme for details). However, it is often desirable to produce native language interfaces rather than coding Controllers
using HTTP primitives. This is possible using several public code generation (codegen) tools. This page provides some suggestions based on experience with these tools when trying to generate Typescript
wrappers. The information should be useful to those trying to generate other languages. Updates to this page based on experience are encouraged.
ACA-Py uses aiohttp_apispec tags in code to produce the OpenAPI spec file at runtime dependent on what features have been loaded. How these tags are created is documented in the API Standard Behavior section of the Admin API Readme. The OpenAPI spec is available in raw, unformatted form from a running ACA-Py instance using a route of http://<acapy host and port>/api/docs/swagger.json
or from the browser Swagger User Interface
directly.
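For example, a rough sketch of fetching and saving the runtime spec (the host and port are assumptions for a local deployment):

```python
# Illustrative sketch only: grab the runtime-generated OpenAPI/Swagger spec.
import json
import requests

resp = requests.get("http://localhost:8031/api/docs/swagger.json")
spec = resp.json()

with open("swagger.json", "w") as f:
    json.dump(spec, f, indent=2, sort_keys=True)  # stable formatting for diffs

print(f"Spec contains {len(spec.get('paths', {}))} routes")
```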
The ACA-Py Admin API evolves across releases. To track these changes and ensure conformance with the OpenAPI specification, we provide a tool located at scripts/generate-open-api-spec
. This tool starts ACA-Py, retrieves the swagger.json
file, and runs codegen tools to generate specifications in both Swagger and OpenAPI formats with json
language output. The output of this tool enables comparison with the checked-in open-api/swagger.json
and open-api/openapi.json
, and also serves as a useful resource for identifying any non-conformance to the OpenAPI specification. At the moment, validation
is turned off via the open-api/openAPIJSON.config
file, so warning messages are printed for non-conformance, but the json
is still output. Most of the warnings reported by generate-open-api-spec
relate to missing operationId
fields which results in manufactured method names being created by codegen tools. At the moment, aiohttp_apispec does not support adding operationId
annotations via tags.
The generate-open-api-spec
tool was initially created to help identify issues with method parameters not being sorted, resulting in somewhat random ordering each time a codegen operation was performed. This is relevant for languages which do not have support for named parameters such as Javascript
. It is recommended that the generate-open-api-spec
is run prior to each release, and the resulting open-api/openapi.json
file checked in to allow tracking of API changes over time. At the moment, this process is not automated as part of the release pipeline.
There are inevitably differences around best practice
for method naming based on coding language and organization standards.
Best practice for generating ACA-Py language wrappers is to obtain the raw OpenAPI file from a configured/running ACA-Py instance and then post-process it with a merge utility to match routes and insert desired operationId
fields. This allows the greatest flexibility in conforming to external naming requirements.
Two major open-source code generation tools are Swagger and OpenAPI Tools. Which of these to use can be very dependent on language support required and preference for the style of code generated.
The OpenAPI Tools was found to offer some nice features when generating Typescript
. It creates separate files for each class and allows the use of a .openapi-generator-ignore
file to override generation if there is a spec file issue that needs to be maintained manually.
If generating code for languages that do not support named parameters, it is recommended to specify the useSingleRequestParameter
or equivalent in your code generator of choice. The reason is that, as mentioned previously, there have been instances where parameters were not sorted when output into the raw ACA-Py API spec file, and this approach helps remove that risk.
Another suggestion for code generation is to keep the modelPropertyNaming
set to original
when generating code. Although it is tempting to try and enable marshalling into standard naming formats such as camelCase
, the reality is that the models represent what is sent on the wire and documented in the Aries Protocol RFCS. It has proven handy to be able to see code references correspond directly with protocol RFCs when debugging. It will also correspond directly with what the model
shows when looking at the ACA-Py Swagger UI
in a browser if you need to try something out manually before coding. One final point is that on occasions, it has been discovered that the code generation tools don't always get the marshalling correct in all circumstances when changing model name format.
This document outlines a new functionality within Aries Agent that facilitates the issuance of credentials and presentations in compliance with the W3C standard.
"},{"location":"features/W3cCredentials/#table-of-contents","title":"Table of Contents","text":"did:key
The introduction of VC-DI credentials in ACA-Py facilitates the issuance of credentials and presentations in adherence to the W3C standard.
"},{"location":"features/W3cCredentials/#prerequisites","title":"Prerequisites","text":"Before utilizing this feature, it is essential to have the following:
"},{"location":"features/W3cCredentials/#verifiable-credentials-data-model","title":"Verifiable Credentials Data Model","text":"A basic understanding of the Verifiable Credentials Data Model is required. Resources for reference include:
Familiarity with the Verifiable Presentations Data Model is necessary. Relevant resources can be found at:
Understanding the DIF Presentation Format is recommended. Access resources at:
To prepare for credential issuance, the following steps must be taken:
"},{"location":"features/W3cCredentials/#vc-di-context","title":"VC-DI Context","text":"Ensure that every property key in the document is mappable to an IRI. This requires either the property key to be an IRI by default or to have the shorthand property mapped in the @context
of the document.
{\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://w3id.org/security/data-integrity/v2\",\n {\n \"@vocab\": \"https://www.w3.org/ns/credentials/issuer-dependent#\"\n }\n ]\n}\n
"},{"location":"features/W3cCredentials/#signature-suite","title":"Signature Suite","text":"Select a signature suite for use. VC-DI format currently supports EdDSA signature suites for issuing credentials.
Ed25519Signature2020
Choose a DID method for issuing the credential. VC-DI format currently supports the did:key
method.
did:key
","text":"A did:key
did is not anchored to a ledger, but embeds the key directly in the identifier part of the did. See the did:key Method Specification for more information.
You can create a did:key
using the /wallet/did/create
endpoint with the following body.
{\n \"method\": \"key\",\n \"options\": {\n \"key_type\": \"ed25519\"\n }\n}\n
"},{"location":"features/W3cCredentials/#issue-a-credential","title":"Issue a Credential","text":"The issuance of W3C credentials is facilitated through the /issue-credential-2.0/send
endpoint. This process adheres to the formats described in RFC 0809 VC-DI and utilizes didcomm
for communication between agents.
To issue a W3C credential, follow these steps:
{\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://w3id.org/security/data-integrity/v2\",\n {\n \"@vocab\": \"https://www.w3.org/ns/credentials/issuer-dependent#\"\n }\n ],\n \"type\": [\"VerifiableCredential\"],\n \"issuer\": \"did:key:z6MkqG......\",\n \"issuanceDate\": \"2023-01-01T00:00:00Z\",\n \"credentialSubject\": {\n \"id\": \"did:key:z6Mkh......\",\n \"name\": \"John Doe\"\n },\n \"proof\": {\n \"type\": \"Ed25519Signature2020\",\n \"created\": \"2023-01-01T00:00:00Z\",\n \"proofPurpose\": \"assertionMethod\",\n \"verificationMethod\": \"did:key:z6MkqG......#z6MkqG......\",\n \"proofValue\": \"eyJhbGciOiJFZERTQSJ9...\"\n }\n}\n
The format to change credential can be seen in the Demo Instruction
/issue-credential-2.0/send
endpoint to issue the credential.{\n \"auto_issue\": true,\n \"auto_remove\": false,\n \"comment\": \"Issuing a test credential\",\n \"credential_preview\": {\n \"@type\": \"https://didcomm.org/issue-credential/2.0/credential-preview\",\n \"attributes\": [\n {\"name\": \"name\", \"value\": \"John Doe\"}\n ]\n },\n \"filter\": {\n \"format\": {\n \"cred_def_id\": \"FMB5MqzuhR...\"\n }\n },\n \"trace\": false\n}\n
{\n \"state\": \"credential_issued\",\n \"credential_id\": \"12345\",\n \"thread_id\": \"abcde\",\n \"role\": \"issuer\"\n}\n
"},{"location":"features/W3cCredentials/#verify-a-credential","title":"Verify a Credential","text":"To verify a credential, follow these steps:
{\n \"verifiableCredential\": [\n {\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://w3id.org/security/data-integrity/v2\"\n ],\n \"type\": [\"VerifiableCredential\"],\n \"issuer\": \"did:key:z6MkqG......\",\n \"issuanceDate\": \"2023-01-01T00:00:00Z\",\n \"credentialSubject\": {\n \"id\": \"did:key:z6Mkh......\",\n \"name\": \"John Doe\"\n },\n \"proof\": {\n \"type\": \"Ed25519Signature2020\",\n \"created\": \"2023-01-01T00:00:00Z\",\n \"proofPurpose\": \"assertionMethod\",\n \"verificationMethod\": \"did:key:z6MkqG......#z6MkqG......\",\n \"proofValue\": \"eyJhbGciOiJFZERTQSJ9...\"\n }\n }\n ]\n}\n
/present-proof/send-request
endpoint.{\n \"presentation\": {\n \"verifiableCredential\": [\n {\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://w3id.org/security/data-integrity/v2\"\n ],\n \"type\": [\"VerifiableCredential\"],\n \"issuer\": \"did:key:z6MkqG......\",\n \"issuanceDate\": \"2023-01-01T00:00:00Z\",\n \"credentialSubject\": {\n \"id\": \"did:key:z6Mkh......\",\n \"name\": \"John Doe\"\n },\n \"proof\": {\n \"type\": \"Ed25519Signature2020\",\n \"created\": \"2023-01-01T00:00:00Z\",\n \"proofPurpose\": \"assertionMethod\",\n \"verificationMethod\": \"did:key:z6MkqG......#z6MkqG......\",\n \"proofValue\": \"eyJhbGciOiJFZERTQSJ9...\"\n }\n }\n ]\n }\n}\n
{\n \"verified\": true,\n \"presentation\": {\n \"type\": \"VerifiablePresentation\",\n \"verifiableCredential\": [\n {\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://w3id.org/security/data-integrity/v2\"\n ],\n \"type\": [\"VerifiableCredential\"],\n \"issuer\": \"did:key:z6MkqG......\",\n \"issuanceDate\": \"2023-01-01T00:00:00Z\",\n \"credentialSubject\": {\n \"id\": \"did:key:z6Mkh......\",\n \"name\": \"John Doe\"\n },\n \"proof\": {\n \"type\": \"Ed25519Signature2020\",\n \"created\": \"2023-01-01T00:00:00Z\",\n \"proofPurpose\": \"assertionMethod\",\n \"verificationMethod\": \"did:key:z6MkqG......#z6MkqG......\",\n \"proofValue\": \"eyJhbGciOiJFZERTQSJ9...\"\n }\n }\n ],\n \"proof\": {\n \"type\": \"Ed25519Signature2020\",\n \"created\": \"2023-01-01T00:00:00Z\",\n \"proofPurpose\": \"authentication\",\n \"verificationMethod\": \"did:key:z6MkqG......#z6MkqG......\",\n \"proofValue\": \"eyJhbGciOiJFZERTQSJ9...\"\n }\n }\n}\n
"},{"location":"features/W3cCredentials/#present-proof","title":"Present Proof","text":""},{"location":"features/W3cCredentials/#requesting-proof","title":"Requesting Proof","text":"To request proof, follow these steps:
{\n \"presentation_definition\": {\n \"id\": \"example-presentation-definition\",\n \"input_descriptors\": [\n {\n \"id\": \"example-input-descriptor\",\n \"schema\": [\n {\n \"uri\": \"https://www.w3.org/2018/credentials/v1\"\n }\n ],\n \"constraints\": {\n \"fields\": [\n {\n \"path\": [\"$.credentialSubject.name\"],\n \"filter\": {\n \"type\": \"string\",\n \"pattern\": \"John Doe\"\n }\n }\n ]\n }\n }\n ]\n }\n}\n
/present-proof-2.0/send-request
endpoint.{\n \"comment\": \"Requesting proof of name\",\n \"presentation_request\": {\n \"presentation_definition\": {\n \"id\": \"example-presentation-definition\",\n \"input_descriptors\": [\n {\n \"id\": \"example-input-descriptor\",\n \"schema\": [\n {\n \"uri\": \"https://www.w3.org/2018/credentials/v1\"\n }\n ],\n \"constraints\": {\n \"fields\": [\n {\n \"path\": [\"$.credentialSubject.name\"],\n \"filter\": {\n \"type\": \"string\",\n \"pattern\": \"John Doe\"\n }\n }\n ]\n }\n }\n ]\n }\n }\n}\n
{\n \"state\": \"presentation_received\",\n \"thread_id\": \"abcde\",\n \"role\": \"verifier\"\n}\n
"},{"location":"features/W3cCredentials/#presenting-proof","title":"Presenting Proof","text":"To present proof, follow these steps:
{\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://w3id.org/security/data-integrity/v2\"\n ],\n \"type\": [\"VerifiablePresentation\"],\n \"verifiableCredential\": [\n {\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://w3id.org/security/data-integrity/v2\"\n ],\n \"type\": [\"VerifiableCredential\"],\n \"issuer\": \"did:key:z6MkqG......\",\n \"issuanceDate\": \"2023-01-01T00:00:00Z\",\n \"credentialSubject\": {\n \"id\": \"did:key:z6Mkh......\",\n \"name\": \"John Doe\"\n },\n \"proof\": {\n \"type\": \"Ed25519Signature2020\",\n \"created\": \"2023-01-01T00:00:00Z\",\n \"proofPurpose\": \"assertionMethod\",\n \"verificationMethod\": \"did:key:z6MkqG......#z6MkqG......\",\n \"proofValue\": \"eyJhbGciOiJFZERTQSJ9...\"\n }\n }\n ]\n}\n
/present-proof-2.0/send-request
endpoint.{\n \"presentation\": {\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://w3id.org/security/data-integrity/v2\"\n ],\n \"type\": [\"VerifiablePresentation\"],\n \"verifiableCredential\": [\n {\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://w3id.org/security/data-integrity/v2\"\n ],\n \"type\": [\"VerifiableCredential\"],\n \"issuer\": \"did:key:z6MkqG......\",\n \"issuanceDate\": \"2023-01-01T00:00:00Z\",\n \"credentialSubject\": {\n \"id\": \"did:key:z6Mkh......\",\n \"name\": \"John Doe\"\n },\n \"proof\": {\n \"type\": \"Ed25519Signature2020\",\n \"created\": \"2023-01-01T00:00:00Z\",\n \"proofPurpose\": \"assertionMethod\",\n \"verificationMethod\": \"did:key:z6MkqG......#z6MkqG......\",\n \"proofValue\": \"eyJhbGciOiJFZERTQSJ9...\"\n }\n }\n ]\n },\n \"comment\": \"Presenting proof of name\"\n}\n
{\n \"state\": \"presentation_sent\",\n \"thread_id\": \"abcde\",\n \"role\": \"prover\"\n}\n
"},{"location":"features/W3cCredentials/#verifying-proof","title":"Verifying Proof","text":"To verify presented proof, follow these steps:
{\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://w3id.org/security/data-integrity/v2\"\n ],\n \"type\": [\"VerifiablePresentation\"],\n \"verifiableCredential\": [\n {\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://w3id.org/security/data-integrity/v2\"\n ],\n \"type\": [\"VerifiableCredential\"],\n \"issuer\": \"did:key:z6MkqG......\",\n \"issuanceDate\": \"2023-01-01T00:00:00Z\",\n \"credentialSubject\": {\n \"id\": \"did:key:z6Mkh......\",\n \"name\": \"John Doe\"\n },\n \"proof\": {\n \"type\": \"Ed25519Signature2020\",\n \"created\": \"2023-01-01T00:00:00Z\",\n \"proofPurpose\": \"assertionMethod\",\n \"verificationMethod\": \"did:key:z6MkqG......#z6MkqG......\",\n \"proofValue\": \"eyJhbGciOiJFZERTQSJ9...\"\n }\n }\n ]\n}\n
/present-proof-2.0/send-request
endpoint.{\n \"presentation\": {\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://w3id.org/security/data-integrity/v2\"\n ],\n \"type\": [\"VerifiablePresentation\"],\n \"verifiableCredential\": [\n {\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://w3id.org/security/data-integrity/v2\"\n ],\n \"type\": [\"VerifiableCredential\"],\n \"issuer\": \"did:key:z6MkqG......\",\n \"issuanceDate\": \"2023-01-01T00:00:00Z\",\n \"credentialSubject\": {\n \"id\": \"did:key:z6Mkh......\",\n \"name\": \"John Doe\"\n },\n \"proof\": {\n \"type\": \"Ed25519Signature2020\",\n \"created\": \"2023-01-01T00:00:00Z\",\n \"proofPurpose\": \"assertionMethod\",\n \"verificationMethod\": \"did:key:z6MkqG......#z6MkqG......\",\n \"proofValue\": \"eyJhbGciOiJFZERTQSJ9...\"\n }\n }\n ]\n }\n}\n
{\n \"verified\": true,\n \"presentation\": {\n \"type\": \"VerifiablePresentation\",\n \"verifiableCredential\": [\n {\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://w3id.org/security/data-integrity/v2\"\n ],\n \"type\": [\"VerifiableCredential\"],\n \"issuer\": \"did:key:z6MkqG......\",\n \"issuanceDate\": \"2023-01-01T00:00:00Z\",\n \"credentialSubject\": {\n \"id\": \"did:key:z6Mkh......\",\n \"name\": \"John Doe\"\n },\n \"proof\": {\n \"type\": \"Ed25519Signature2020\",\n \"created\": \"2023-01-01T00:00:00Z\",\n \"proofPurpose\": \"assertionMethod\",\n \"verificationMethod\": \"did:key:z6MkqG......#z6MkqG......\",\n \"proofValue\": \"eyJhbGciOiJFZERTQSJ9...\"\n }\n }\n ]\n }\n}\n
"},{"location":"features/W3cCredentials/#appendix","title":"Appendix","text":""},{"location":"features/W3cCredentials/#glossary-of-terms","title":"Glossary of Terms","text":"The following guide will get you up and running and developing/debugging ACA-Py as quickly as possible. We provide a devcontainer
and will use VS Code
to illustrate.
By no means is ACA-Py limited to these tools; they are merely examples.
For information on running demos and tests using provided shell scripts, see DevReadMe readme.
"},{"location":"features/devcontainer/#caveats","title":"Caveats","text":"The primary use case for this devcontainer
is for developing, debugging and unit testing (pytest) the aries_cloudagent source code.
There are limitations running this devcontainer, such as all networking is within this container. This container has docker-in-docker which allows running demos, building docker images, running docker compose
all within this container.
The .devcontainer
folder contains the devcontainer.json
file which defines this container. We are using a Dockerfile
and post-install.sh
to build and configure the container run image. The Dockerfile
is simple but in place for simplifying image enhancements (ex. adding poetry
to the image). The post-install.sh
will install some additional development libraries (including for BDD support).
What are Development Containers?
A Development Container (or Dev Container for short) allows you to use a container as a full-featured development environment. It can be used to run an application, to separate tools, libraries, or runtimes needed for working with a codebase, and to aid in continuous integration and testing. Dev containers can be run locally or remotely, in a private or public cloud.
see https://containers.dev.
In this guide, we will use Docker and Visual Studio Code with the Dev Containers Extension installed, please set your machine up with those. As of writing, we used the following:
To open ACA-Py in a devcontainer, we open the root of this repository. We can open in 2 ways:
Dev Containers: Open Folder in Container...
File|Open Folder...
, you should be prompted to Reopen in Container
.NOTE follow any prompts to install Python Extension
or reload window for Pylance
when first building the container.
ADDITIONAL NOTE we advise that after each time you rebuild the container that you also perform: Developer: Reload Window
as some extensions seem to require this in order to work as expected.
When the .devcontainer/devcontainer.json is opened, you will see it building... it is building a Python 3.12 image (bash shell) and loading it with all the ACA-Py requirements. We also load a few Visual Studio settings (for running Pytests and formatting with Ruff).
"},{"location":"features/devcontainer/#poetry","title":"Poetry","text":"The Python libraries / dependencies are installed using poetry
. For the devcontainer, we DO NOT use virtual environments. This means you will not see or need venv prompts in the terminals and you will not need to run tasks through poetry (ie. poetry run ruff check .
). If you need to add new dependencies, you will need to add the dependency via poetry AND you should rebuild your devcontainer.
In VS Code, open a Terminal, you should be able to run the following commands:
python -m aries_cloudagent -v\ncd aries_cloudagent\nruff check .\npoetry --version\n
The first command should show you that acapy_agent
module is loaded (ACA-Py). The others are examples of code quality checks that ACA-Py does on commits (if you have precommit
installed) and Pull Requests.
When running ruff check .
in the terminal, you may see error: Failed to initialize cache at /.ruff_cache: Permission denied (os error 13)
- that's ok. If there are actual ruff errors, you should see something like:
error: Failed to initialize cache at /.ruff_cache: Permission denied (os error 13)\nadmin/base_server.py:7:7: D101 Missing docstring in public class\nFound 1 error.\n
"},{"location":"features/devcontainer/#extensions","title":"extensions","text":"We have added Ruff extensions. Although we have added launch settings for both ruff
, you can also use the extension commands from the command palette.
ruff (format) - acapy_agent
More importantly, these extensions are now added to document save, so files will be formatted and checked. We advise that after each time you rebuild the container that you also perform: Developer: Reload Window
to ensure the extensions are loaded correctly.
Start by running a von-network inside your dev container. Or connect to a hosted ledger. You will need to adjust the ledger configurations if you do this.
git clone https://github.com/bcgov/von-network\ncd von-network\n./manage build\n./manage start\ncd ..\n
If you want to have revocation then start up a tails server in your dev container. Or connect to a hosted tails server. Once again you will need to adjust the configurations.
git clone https://github.com/bcgov/indy-tails-server.git\ncd indy-tails-server/docker\n./manage build\n./manage start\ncd ../..\n
# open a terminal in VS Code...\ncd demo\n./run_demo faber\n# open a second terminal in VS Code...\ncd demo\n./run_demo alice\n# follow the script...\n
"},{"location":"features/devcontainer/#further-reading-and-links","title":"Further Reading and Links","text":"To better illustrate debugging pytests and ACA-Py runtime code, let's add some run/debug configurations to VS Code. If you have your own launch.json
and settings.json
, please cut and paste what you want/need.
cp -R .vscode-sample .vscode\n
This will add a launch.json
, settings.json
and multiple ACA-Py configuration files for developing with different scenarios.
Having multiple agents is to demonstrate launching multiple agents in a debug session. Any of the config files and the launch file can be changed and customized to meet your needs. They are all setup to run on different ports so they don't interfere with each other. Running the debug session from inside the dev container allows you to contact other services such as a local ledger or tails server using localhost, while still being able to access the swagger admin api through your browser.
For all the agents if you want to use another ledger (von-network) other than localhost you will need to change the genesis-url
config. For all the agents if you don't want to support revocation you need to remove or comment out the tails-server-base-url
config. If you want to use a non localhost server then you will need to change the url.
./run_demo faber --endorser-role author
to see all the steps to become and endorser../run_demo faber --endorser-role author
to see all the steps to become an author. You need to uncomment the configurations for automating the connection to the endorser.
view, select the agent(s) you want to start and click Start Debugging (F5)
.
This will start your source code as a running ACA-Py instance, all configuration is in the *.yml
files. This is just a sample of a configuration. Note that we are not using a database and are joining to a local VON Network (by default, it would be http://localhost:9000
). You could change this to another ledger such as http://test.bcovrin.vonx.io
. These are purposefully very simple configurations.
For example, open aries_cloudagent/admin/server.py
and set a breakpoint in async def status_handler(self, request: web.BaseRequest):
, then call GET /status
in the Admin Console and hit your breakpoint.
Pytest is installed and almost ready; however, we must build the test list. In the Command Palette, Test: Refresh Tests
will scan and find the tests.
See Python Testing for more details, and Test Commands for usage.
WARNING: our pytests include coverage, which will prevent the debugger from working. One way around this would be to have a .vscode/settings.json
that says not to use coverage (see above). This will allow you to set breakpoints in the pytest and code under test and use commands such as Test: Debug Tests in Current File
to start debugging.
WARNING: the project configuration found in pyproject.toml
include performing ruff
checks when we run pytest
. Including ruff
does not play nice with the Testing view. In order to have our pytests discoverable AND available in the Testing view, we create a .pytest.ini
when we build the devcontainer. This file will not be committed to the repo, nor does it impact ./scripts/run_tests
but it will impact if you manually run the pytest commands locally outside of the devcontainer. Just be aware that the file will stay on your file system after you shutdown the devcontainer.
At this point, you now have a development environment where you can add pytests, add ACA-Py code and run and debug it all. Be aware there are limitations with devcontainer
and other docker networks. You may need to adjust other docker-compose files not to start their own networks, and you may need to reference containers using host.docker.internal
. This isn't a panacea but should get you going in the right direction and provide you with some development tools.
This guide is to get you from (pretty much) zero to developing code for issuing (and verifying) credentials with your own ACA-Py agent. On the way, you'll look at Hyperledger Indy and how it works, find out about the architecture and components of an ACA-Py agent and its underlying messaging protocols. Scan the list of topics below and jump in as soon as you hit a topic you don't know.
Note that in the guidance we have here, we include not only the links to look at, but we recommend that you not look at certain material to which you might naturally gravitate. That's because the material is out of date and will take you down some unnecessary rabbit holes. Keep your eyes on the goal - developing with Aries to interact with other agents to (amongst other things) connect, issue, hold, present and verify verifiable credentials.
Want to help with this guide? Please add issues or submit a pull request to improve the document. Point out things that are missing, things to improve and especially things that are wrong.
"},{"location":"gettingStarted/ACA-PyAgentArchitecture/","title":"ACA-Py Internals: Agent and Controller","text":"This section talks in particular about the architecture of ACA-Py. An instance of an ACA-Py agent is actually made up of to two parts - the agent itself and a controller.
The agent handles all of the core Aries/non-Aries functionality such as interacting with other agents, managing secure storage, sending event notifications to, and receiving directions from, the controller. The controller provides the business logic that defines how that particular agent instance behaves--how to respond to events in the agent, and when to trigger the agent to initiate events. The controller might be a web or native user interface for a person or it might be coded business rules driven by an enterprise system.
Between the two is a simple interface. The agent sends event notifications to the controller and the controller sends administrator messages to the agent. The controller registers a webhook with the agent, and the event notifications are HTTP callbacks, and the agent exposes a REST API to the controller for all of the administrative messages it is configured to handle. Each of the DIDComm protocols supported by the agent adds a set of administrative messages for the controller to use in responding to events. The Aries cloud agent includes an OpenAPI (aka Swagger) user interface for a developer to use to explore the API for a specific agent.
As such, the agent is just a configured dependency in an ACA-Py deployment. Thus, the vast majority of ACA-Py developers will focus on building controllers (business logic) and perhaps some custom plugins (protocols, as we'll discuss soon) for the agent. Only a relatively small group of ACA-Py maintainers will focus on adding and maintaining the agent dependency.
Want more details about the agent and controller internals? Take a look at the ACA_Py deployment model document.
Back to the ACA-Py Developer - Getting Started Guide.
"},{"location":"gettingStarted/ACA-PyBasics/","title":"What is ACA-Py?","text":"ACA-Py is a shared, reusable, interoperable tool kit designed for initiatives and solutions focused on creating, transmitting and storing verifiable digital credentials. It is infrastructure for trusted, decentralized, peer-to-peer interactions. It includes a shared secure storage and a key management service for clients, as well as communication protocols for trusted interaction between agents.
An ACA-Py agent (such as the one in this repository):
The some of the concepts and features that make up the ACA-Py project are documented in the aries-rfcs - but don't dive in there yet! We'll get to the features and concepts to be found there with a guided tour of the key RFCs.
Back to the ACA-Py Developer - Getting Started Guide.
"},{"location":"gettingStarted/ACA-PyBigPicture/","title":"ACA-Py Agents in context: The Big Picture","text":"ACA-Py agents can be used in a lot of places. This classic Indy Architecture picture shows five agents - the four around the outside (on a phone, a tablet, a laptop and an enterprise server) are referred to as \"edge agents\", and many cloud agents in the blue circle.
The agents in the picture shares many attributes:
While there can be many other agent setups, the picture above shows the most common ones - mobile wallets for people, edge agents for organizations and cloud agents for routing messages (although cloud agents could be edge agents. Sigh...). A significant emerging use case missing from that picture are agents embedded within/associated with IoT devices. In the common IoT case, IoT device agents are just variants of other edge agents, connected to the rest of the ecosystem through a cloud agent. All the same principles apply.
Misleading in the picture is that (almost) all agents connect directly to the verifiable data repository. In this picture it's the Sovrin ledger, but that could be any ledger (e.g. set of nodes running ledger software) or non-ledger based verifiable data repositories -- such as web servers. That implies most agents embed a verifiable data registry client (usually, a DID Resolver) that makes calls to one or more types of verifiable data registries. Thus, unlike what is implied in the picture, edge agents (commonly) do not call a cloud agent to interact with the verifiable data registry - they do it directly. Super small IoT devices might be an exception to that - lacking compute/storage resources and/or connectivity, they might communicate with a cloud agent that would communicate with the verifiable data registry.
The three most common purposes of cloud agents are verifiable credential issuers, verifiers and \"mediators\" -- agents that route messages to mobile wallets that lack a persistent endpoint. For the latter, rather than messages going directly to mobile wallet (which is often impossible - for example sending to a mobile wallet), messages intended for the agent are routed through a mediator who hold the messages until the agent picks up its messages.
We also recommend not digging into all the layers described here. Just as you don't have to know how TCP/IP works to write a web app, you don't need to know how ledgers or the various protocols work to be able to build your first ACA-Py-based application. Later in this guide we'll covering the starting point you do need to know.
Back to the ACA-Py Developer - Getting Started Guide.
"},{"location":"gettingStarted/ACA-PyDeveloperDemos/","title":"Developer Demos and Samples of ACA-Py Agent","text":"Here are some demos that developers can use to get up to speed on ACA-Py. You don't have to be a developer to use these. If you can use docker and JSON, then that's enough to give these a try.
"},{"location":"gettingStarted/ACA-PyDeveloperDemos/#open-api-demo","title":"Open API demo","text":"This demo uses agents (and an Indy ledger), but doesn't implement a controller at all. Instead it uses the OpenAPI (aka Swagger) user interface to let you be the controller to connect agents, issue a credential and then proof that credential.
Collaborating Agents OpenAPI Demo
"},{"location":"gettingStarted/ACA-PyDeveloperDemos/#python-controller-demo","title":"Python Controller demo","text":"Run this demo to see a couple of simple Python controller implementations for Alice and Faber. Like the previous demo, this shows the agents connecting, Faber issuing a credential to Alice and then requesting a proof based on the credential. Running the demo is simple, but there's a lot for a developer to learn from the code.
Python-based Alice/Faber Demo
"},{"location":"gettingStarted/ACA-PyDeveloperDemos/#mobile-app-and-web-sample-bc-gov-showcase","title":"Mobile App and Web Sample - BC Gov Showcase","text":"Try out the BC Gov Showcase to download a production Wallet for holding Verifiable Credentials, and then use your new wallet to get and present credentials in some sample scenarios. The end-to-end verifiable credential experience in 30 minutes or less.
"},{"location":"gettingStarted/ACA-PyDeveloperDemos/#indicio-developer-demo","title":"Indicio Developer Demo","text":"Minimal Aca-Py demo that can be used by developers to isolate and test features:
Indicio Aca-Py Minimal Example
"},{"location":"gettingStarted/AgentConnections/","title":"Establishing a connection between ACA-Py Agents","text":"Use an ACA-Py issuer/verifier to establish a connection with a compatible mobile wallet. Run the Traction AnonCreds Workshop. Get your own (temporary -- it will be gone in a few weeks!) ACA-Py-based issuer/verifier agent. Connect to the wallet on your mobile phone, issue a credential and then present it back. Lots to learn, without ever leaving your browser!
"},{"location":"gettingStarted/ConnectIndyNetwork/","title":"Connecting to an Indy Network","text":"To be completed.
"},{"location":"gettingStarted/CredentialRevocation/","title":"Credential Revocation in ACA-Py","text":""},{"location":"gettingStarted/CredentialRevocation/#overview","title":"Overview","text":"Revocation is perhaps the most difficult aspect of verifiable credentials to manage. This is true in AnonCreds, particularly in the management of AnonCreds revocation registries (RevRegs). Through experience in deploying use cases with ACA-Py we have found that it is very difficult for the controller (the application code) to manage revocation registries, and as such, we have changed the implementation in ACA-Py to ensure that it is handling almost all the work in revoking credentials. The only thing the controller writer has to do is track the minimum things necessary to the business rules around revocation, such as whose credentials should be revoked, and how close to real-time should revocations be published?
Here is a summary of all of the AnonCreds revocation activities performed by issuers. After this, we'll provide a (much shorter) list of what an ACA-Py issuer controller has to do. For those interested, there is a more complete overview of AnonCreds revocation, including all of the roles, and some details of the cryptography behind the approach:
Since managing RevRegs is really hard for an ACA-Py controller, we have tried to minimize what an ACA-Py Issuer controller has to do, leaving everything else to be handled by ACA-Py. Of the items in the previous list, here is what an ACA-Py issuer controller does:
That is the minimum amount of tracking the controller must do while still being able to execute the business rules around revoking credentials.
From experience, we\u2019ve added to two extra features to deal with unexpected conditions:
The following are the ACA-Py steps and APIs involved in handling credential revocation.
To try these out, use the ACA-Py Alice/Faber demo with tails server support enabled. You will need to have the URL of a running instance of https://github.com/bcgov/indy-tails-server.
Include the command line parameter --tails-server-base-url <indy-tails-server url>
Publish credential definition
Credential definition is created. All required revocation collateral is also created and managed including revocation registry definition, entry, and tails file.
POST /credential-definitions\n{\n \"schema_id\": schema_id,\n \"support_revocation\": true,\n # Only needed if support_revocation is true. Defaults to 100\n \"revocation_registry_size\": size_int,\n \"tag\": cred_def_tag # Optional\n\n}\nResponse:\n{\n \"credential_definition_id\": \"credential_definition_id\"\n}\n
Issue credential
This endpoint manages revocation data. If new revocation registry data is required, it is automatically managed in the background.
POST /issue-credential/send-offer\n{\n \"cred_def_id\": credential_definition_id,\n \"revoc_reg_id\": revocation_registry_id,\n \"auto_remove\": False, # We need the credential exchange record when revoking\n ...\n}\nResponse\n{\n \"credential_exchange_id\": credential_exchange_id\n}\n
Revoking credential
POST /revocation/revoke\n{\n \"rev_reg_id\": <revocation_registry_id>,\n \"cred_rev_id\": <credential_revocation_id>,\n \"publish\": <true|false>\n}\n
If publish=false, you must use /issue-credential/publish-revocations
to publish pending revocations in batches. Revocations are not written to the ledger until this is called.
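For example, a minimal sketch of the batch-publish call (the rrid2crid map shown here is an assumption based on recent ACA-Py admin API schemas -- confirm the exact request body on your agent's Swagger page):

POST /issue-credential/publish-revocations
{
  "rrid2crid": {
    "<revocation_registry_id>": ["<credential_revocation_id>", ...]
  }
}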
When asking for proof, specify the time span when the credential is NOT revoked
POST /present-proof/send-request\n {\n \"connection_id\": ...,\n \"proof_request\": {\n \"requested_attributes\": [\n {\n \"name\": ...\n \"restrictions\": ...,\n ...\n \"non_revoked\": # Optional, override the global one when specified\n {\n \"from\": <seconds from Unix Epoch> # Optional, default is 0\n \"to\": <seconds from Unix Epoch>\n }\n },\n ...\n ],\n \"requested_predicates\": [\n {\n \"name\": ...\n ...\n \"non_revoked\": # Optional, override the global one when specified\n {\n \"from\": <seconds from Unix Epoch> # Optional, default is 0\n \"to\": <seconds from Unix Epoch>\n }\n },\n ...\n ],\n \"non_revoked\": # Optional, only check revocation if specified\n {\n \"from\": <seconds from Unix Epoch> # Optional, default is 0\n \"to\": <seconds from Unix Epoch>\n }\n }\n }\n
ACA-Py supports Revocation Notification v1.0.
Note: The optional ~please_ack
is not currently supported.
To notify connections to which credentials have been issued, during step 2 above, include the following attributes in the request body:
notify
- A boolean value indicating whether or not a notification should be sent. If the argument --notify-revocation
is used on startup, this value defaults to true
. Otherwise, it will default to false
. This value overrides the --notify-revocation
flag; the value of notify
always takes precedence.connection_id
- Connection ID for the connection of the credential holder. This is required when notify
is true
.thread_id
- Message Thread ID of the credential exchange message that resulted in the credential now being revoked. This is required when notify
is true
.comment
- An optional comment presented to the credential holder as part of the revocation notification. This field might contain the reason for revocation or some other human readable information about the revocation.Your request might look something like:
POST /revocation/revoke\n{\n \"rev_reg_id\": <revocation_registry_id>,\n \"cred_rev_id\": <credential_revocation_id>,\n \"publish\": <true|false>,\n \"notify\": true,\n \"connection_id\": <connection id>,\n \"thread_id\": <thread id>,\n \"comment\": \"optional comment\"\n}\n
"},{"location":"gettingStarted/CredentialRevocation/#holder-role","title":"Holder Role","text":"On receipt of a revocation notification, an event with topic acapy::revocation-notification::received
and payload containing the thread ID and comment is emitted on the event bus. This can be handled in plugins to further customize notification handling.
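As a sketch, a plugin could subscribe to this topic roughly as follows (the module paths and payload keys are assumptions -- adjust them to your ACA-Py version):

import re

from acapy_agent.core.event_bus import Event, EventBus
from acapy_agent.core.profile import Profile

REVOCATION_RECEIVED = re.compile("^acapy::revocation-notification::received$")


async def setup(context):
    """Plugin entry point: register a handler on the event bus."""
    event_bus = context.inject(EventBus)
    event_bus.subscribe(REVOCATION_RECEIVED, on_revocation_notification)


async def on_revocation_notification(profile: Profile, event: Event):
    """React to a received revocation notification."""
    thread_id = event.payload.get("thread_id")  # assumed payload keys
    comment = event.payload.get("comment")
    # Custom handling goes here, e.g. alerting a user or updating app state.
    print(f"Credential revoked (thread {thread_id}): {comment}")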
If the argument --monitor-revocation-notification
is used on startup, a webhook with the topic revocation-notification
and a payload containing the thread ID and comment is emitted to registered webhook URLs.
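On the controller side, a small listener is enough to receive these webhooks. A minimal aiohttp sketch, assuming ACA-Py posts to <webhook-url>/topic/<topic>/ and that the payload is as described above:

from aiohttp import web


async def handle_webhook(request: web.Request) -> web.Response:
    topic = request.match_info["topic"]
    payload = await request.json()
    if topic == "revocation-notification":
        # Payload is assumed to carry the thread ID and comment.
        print("Revocation notification:", payload)
    return web.Response(status=200)


app = web.Application()
app.add_routes([web.post("/topic/{topic}/", handle_webhook)])

if __name__ == "__main__":
    web.run_app(app, port=8888)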
NOTE: This capability is deprecated and will likely be removed entirely in an upcoming release of ACA-Py.
The process for creating revocation registries is completely automated - when you create a Credential Definition with revocation enabled, a revocation registry is automatically created (in fact 2 registries are created), and when a registry fills up, a new one is automatically created.
However, the ACA-Py admin API supports endpoints to explicitly create a new revocation registry, if you so desire.
There are several endpoints that must be called, and they must be called in this order:
1. Create the revocation registry: POST /revocation/create-registry - you need to provide the credential definition id and the size of the registry.
2. Fix the tails file URI: PATCH /revocation/registry/{rev_reg_id} - here you need to provide the full URI that will be written to the ledger, for example:

{\n \"tails_public_uri\": \"http://host.docker.internal:6543/VDKEEMMSRTEqK4m7iiq5ZL:4:VDKEEMMSRTEqK4m7iiq5ZL:3:CL:8:faber.agent.degree_schema:CL_ACCUM:3cb5c439-928c-483c-a9a8-629c307e6b2d\"\n}\n

3. Post the revocation registry definition to the ledger: POST /revocation/registry/{rev_reg_id}/definition - if you are an author (i.e. have a DID with restricted ledger write access) then this transaction may need to go through an endorser.
4. Write the tails file: PUT /revocation/registry/{rev_reg_id}/tails-file - the tails server will check that the registry definition is already written to the ledger.
5. Post the initial accumulator value to the ledger: POST /revocation/registry/{rev_reg_id}/entry - if you are an author (i.e. have a DID with restricted ledger write access) then this transaction may need to go through an endorser.
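Putting the sequence together, here is a rough controller-side sketch using Python and requests (the admin URL, request fields, and response shapes are assumptions -- verify them against your agent's Swagger page before relying on them):

import requests

ADMIN_URL = "http://localhost:8031"  # assumed admin API base URL
CRED_DEF_ID = "<credential_definition_id>"

# 1. Create the revocation registry record
created = requests.post(
    f"{ADMIN_URL}/revocation/create-registry",
    json={"credential_definition_id": CRED_DEF_ID, "max_cred_num": 1000},
).json()
rev_reg_id = created["result"]["revoc_reg_id"]  # assumed response shape

# 2. Set the public tails file URI that will be written to the ledger
requests.patch(
    f"{ADMIN_URL}/revocation/registry/{rev_reg_id}",
    json={"tails_public_uri": f"https://tails.example.com/{rev_reg_id}"},
)

# 3. Publish the registry definition (may need to go through an endorser)
requests.post(f"{ADMIN_URL}/revocation/registry/{rev_reg_id}/definition")

# 4. Upload the tails file to the tails server
requests.put(f"{ADMIN_URL}/revocation/registry/{rev_reg_id}/tails-file")

# 5. Publish the initial accumulator entry (may need to go through an endorser)
requests.post(f"{ADMIN_URL}/revocation/registry/{rev_reg_id}/entry")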
From time to time an Issuer may want to issue credentials from a new Revocation Registry. That can be done by changing the Credential Definition, but that could impact verifiers. Revocation Registries go through a series of state changes: init
, generated
, posted
, active
, full
, decommissioned
. When issuing revocable credentials, the work is done with the active
registry record. There are always 2 active
registry records: one for tracking revocation until it is full, and the second to act as a \"hot swap\" in case issuance is done when the primary is full and being replaced. This ensures that there is always an active
registry. When rotating, all registry records (except records in init
state) are decommissioned
and a new pair of active
registry records are created.
Issuers can rotate their Credential Definition Revocation Registry records with a simple call: POST /revocation/active-registry/{cred_def_id}/rotate
It is advised that Issuers ensure the active registry is ready by calling GET /revocation/active-registry/{cred_def_id}
after rotation and before issuance (if possible).
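For example (a sketch; the admin URL is an assumption):

import requests

ADMIN_URL = "http://localhost:8031"  # assumed admin API base URL
CRED_DEF_ID = "<credential_definition_id>"

# Rotate, then confirm a new active registry is ready before issuing again.
requests.post(f"{ADMIN_URL}/revocation/active-registry/{CRED_DEF_ID}/rotate")
active = requests.get(f"{ADMIN_URL}/revocation/active-registry/{CRED_DEF_ID}").json()
print(active)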
ACA-Py Agents can communicate with each other via a message mechanism called DIDComm (DID Communication). DIDComm enables secure, asynchronous, end-to-end encrypted messaging between agents, with messages (usually) routed through some configuration of intermediary agents. ACA-Py agents use the did:peer DID method, which uses DIDs that are not published to a public verifiable data registry, but only shared privately between the communicating parties - usually just two agents.
Given the underlying secure messaging layer (routing and encryption covered later in the \"Deeper Dive\" sections), DIDComm protocols define standard sets of messages to accomplish a task. For example:
Each protocol has a specification that defines the protocol's messages, one or more roles for the different participants, and a state machine that defines the state transitions triggered by the messages. For example, in the connection protocol, the messages are \"invitation\", \"connectionRequest\" and \"connectionResponse\", the roles are \"inviter\" and \"invitee\", and the states are \"invited\", \"requested\" and \"connected\". Each participant in an instance of a protocol tracks the state based on the messages they've seen.
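As a toy illustration (not ACA-Py's actual implementation), the connection protocol's state tracking can be pictured as a transition table keyed by the current state and the message just seen:

# Illustrative only: (current state, message) -> next state for the
# connection protocol described above.
TRANSITIONS = {
    (None, "invitation"): "invited",
    ("invited", "connectionRequest"): "requested",
    ("requested", "connectionResponse"): "connected",
}


def advance(state, message):
    """Return the new protocol state after a message is sent or received."""
    return TRANSITIONS[(state, message)]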
Code for protocols are implemented as externalized modules from the core agent code so that they can be included (or not) in an agent deployment. The protocol code must include the definition of a state object for the protocol, handlers for the protocol messages, and the events and administrative messages that are available to the controller to inject business logic into the running of the protocol. Each administrative message becomes part of the REST API exposed by the agent instance.
Developers building ACA-Py agents for a particular use case will generally focus on building controllers. They must understand the protocols that they are going to need, including the events the controller will receive, and the protocol's administrative messages exposed via the REST API. From time to time, such Aries agent developers might need to implement their own protocols.
Back to the ACA-Py Developer - Getting Started Guide.
"},{"location":"gettingStarted/DIDCommRoutingExample/","title":"DIOComm Routing - an example","text":"In this example, we'll walk through an example of complex DIDComm routing, outlining some of the possibilities that can be implemented. Do realize that the vast majority of the work is already done for you if you are just using ACA-Py. You have to define the setup your agents will use, and ACA-Py will take care of all the messy details described below.
We'll start with the Alice and Bob example from the Cross Domain Messaging Aries RFC.
What are the DIDs involved, what's in their DIDDocs, and what communications are happening between the agents as the connections are made?
"},{"location":"gettingStarted/DIDCommRoutingExample/#the-scenario","title":"The Scenario","text":"Bob and Alice want to establish a connection so that they can communicate. Bob uses an Agency endpoint (https://agents-r-us.ca
), labelled as 9, and will have an agent used for routing, labelled as 3. We'll also focus on Bob's messages from his main iPhone, labelled as 4. We'll ignore Bob's other agents (5 and 6) and we won't worry about Alice's configuration (agents 1, 2 and 8). While the process below is all about Bob, Alice and her agents are doing the same interactions within her domain.
A DID and DIDDoc are generated by each participant in each relationship. For Bob's agents (iPhone and Routing), that includes:
That's a lot more than just the Bob and Alice relationship we usually think about!
"},{"location":"gettingStarted/DIDCommRoutingExample/#diddoc-data","title":"DIDDoc Data","text":"From a routing perspective the important information in the DIDDoc is the following (as defined in the DIDDoc Conventions Aries RFC):
services
of type did-communication
, including:serviceEndpoint
recipientKeys
array of referenced keys for the ultimate target(s) of the messageroutingKeys
array of referenced keys for the mediatorsLet's look at the did-communication
service data in the DIDDocs generated by Bob's iPhone and Routing agents, listed above:
The serviceEndpoint
that Bob tells Alice about is the endpoint for the Agency.
The recipientKeys
entry is a key reference for Bob's iPhone specifically for Alice.
The routingKeys
entries is a reference to the public key for the Routing Agent.
Bob and his Routing Agent:
serviceEndpoint
is empty because Bob's iPhone has no endpoint. See the note below for more on this.recipientKeys
entry is a key reference for Bob's iPhone specifically for the Routing Agent.The routingKeys
array is empty.
Bob and Agency:
serviceEndpoint
is the endpoint for Bob's Routing Agent.recipientKeys
entry is a key reference for Bob's iPhone specifically for the Agency.The routingKeys
is a single entry for the key reference for the Routing Agent key.
Bob's Routing Agent and Agency:
serviceEndpoint
is the endpoint for Bob's Routing Agent.recipientKeys
entry is a key reference for Bob's Routing Agent specifically for the Agency.routingKeys
array is empty.The null serviceEndpoint
for Bob's iPhone is worth a comment. Mobile apps work by sending requests to servers, but cannot be accessed directly from a server. A DIDComm mechanism (Transports Return Route) enables a server to send messages to a Mobile agent by putting the messages into the response to a request from the mobile agent. While not formalized in an Aries RFC (yet), cloud agents can use mobile platforms' (Apple and Google) notification mechanisms to trigger a user interface event.
Given that background, let's go through the sequence of events and messages that occur in building a DIDDoc for Bob's edge agent to send to Alice's edge agent. We'll start the sequence with all of the Agents in place as the bootstrapping of the Agency, Routing Agent and Bob's iPhone is trickier than we need to go through here. We'll call that an \"exercise left for the reader\".
We'll start the process with Alice sending an out of band connection invitation message to Bob, e.g. through a QR code or a link in an email. Here's one possible sequence for creating the DIDDoc. Note that there are other ways this could be done:
did-communication
service endpoint is set to the Agency public DID andNote: Instead of using the DID Bob created, the Agency and Routing Agent might use the public key used to encrypt the messages for their internal routing table look up for where to send a message. In that case, the Bob and the Routing Agent share the public key instead of the DID to their respective upstream routers.
With the DIDDoc ready, Bob uses the path provided in the invitation to send a connection-request
message to Alice with the new DID and DIDDoc. Alice now knows how to get any DIDComm message to Bob in a secure, end-to-end encrypted manner. Subsequently, when Alice sends messages to Bob's agent, she uses the information in the DIDDoc to securely send the message to the Agency endpoint, it is sent through to the Routing Agent and on to Bob's iPhone agent for processing. Now Bob has the information he needs to securely send any DIDComm message to Alice in a secure, end-to-end encrypted manner.
At this time, there are not specific DIDComm protocols for the \"set up the routing\" messages between the agents in Bob's domain (Agency, Routing and iPhone). Those could be implemented to be proprietary by each agent provider (since it's possible one vendor would write the code for each of those agents), but it's likely those will be specified as open standard DIDComm protocols.
Based on the DIDDoc that Bob has sent Alice, for her to send a DIDComm message to Bob, Alice must:
DIDComm peer-to-peer messages are asynchronous messages that one agent sends to another - for example, Faber would send to Alice. In between, there may be other agents and message processing, but at the edges, Faber appears to be messaging directly with Alice using encryption based on the DIDs and DIDDocs that the two shared when establishing a connection. The messages are JSON-LD-friendly messages with a \"type\" that defines the namespace, protocol, protocol version and type of the message, an \"id\" that is GUID for the message, and additional fields as required by the message type.
Link: Message Types
As protocols are executed, the data associated with the protocol is stored in the (currently named) wallet of the agent. The data primarily consists of the state object for that instance of the protocol, and any artifacts of running the protocol. For example, when establishing a connection, the metadata associated with the connection (DIDs, DID Documents and private keys) is stored in the agent's wallet. Likewise, ledger data is cached in the wallet (DIDs, schema, credential definitions, etc.) and credentials. This is taken care of by the Aries agent and the protocols configured into the agent.
"},{"location":"gettingStarted/DIDcommMsgs/#message-decorators","title":"Message Decorators","text":"In addition to protocol specific data elements in messages, messages can include \"decorators\", standardized message elements that define cross-cutting behavior. The most common example is the \"thread\" decorator, which is used to link the messages in a protocol instance. As messages go back and forth between agents to complete an instance of a protocol (e.g. issuing a credential), the thread decorator data elements let the agents know to which protocol instance the message belongs. Other currently defined examples of decorators include attachments, localization, tracing and timing. Decorators are often processed by the core of the agent, but some are processed by the protocol message handlers. For example, the thread decorator processed to retrieve the protocol state object for that instance (thread) of the protocol before control is passed to the protocol message handler.
"},{"location":"gettingStarted/DecentralizedIdentityDemos/","title":"Decentralized Identity Use Case Demos","text":"The following are some demos that you can go through to see verifiable credentials in action. For each of the demos, we've included some guidance on what you should get out of the demo - and where you should stop exploring the demos. Later on in this guide we have some command line demos built on current generation code for developers wanting to look at what's going on under the hood.
"},{"location":"gettingStarted/DecentralizedIdentityDemos/#bc-gov-showcase","title":"BC Gov Showcase","text":"Try out the BC Gov Showcase to download a production Wallet for holding Verifiable Credentials, and then use your new wallet to get and present credentials in some sample scenarios. The end-to-end verifiable credential experience in 30 minutes or less.
"},{"location":"gettingStarted/DecentralizedIdentityDemos/#traction-anoncreds-workshop","title":"Traction AnonCreds Workshop","text":"Now that you have a wallet, how about being an issuer, and experience what is needed on that side of an exchange? To do that, try the Traction AnonCreds Workshop. Get your own (temporary -- it will be gone in a few weeks!) ACA-Py-based issuer/verifier agent. Connect to the wallet on your mobile phone, issue a credential and then present it back. Lots to learn, without ever leaving your browser!
"},{"location":"gettingStarted/DecentralizedIdentityDemos/#more-demos-please","title":"More demos, please","text":"Interested in seeing your demos/use cases added to this list? Submit an issue or a PR and we'll see about including it in this list.
"},{"location":"gettingStarted/IndyACA-PyDevOptions/","title":"What should I work on? Options for ACA-Py/Indy Developers","text":"Now that you know the basics of the ACA-Py/Indy eco-system, what do you want to work on? There are many projects at different levels of the eco-system you could choose to work on, and many ways to contribute to the community.
This is an important summary for newcomers, as often the temptation is to start at a level far below where you plan to focus your attention. Too often devs coming into the community start at \"the blockchain\"; at indy-node
(the Indy public ledger) or the indy-sdk
. That is far below where the majority of developers will work and is not really that helpful if what you really want to do is build decentralized identity applications.
In the following, we go through the layers from the top of the stack to the bottom. Our expectation is that the majority of developers will work at the application level, and there will be fewer contributing developers each layer down you go. This is not to dissuade anyone from contributing at the lower levels, but rather to say if you are not going to contribute at the lower levels, you don't need to everything about it. It's much like web development - you don't need to know TCP/IP to build web apps.
"},{"location":"gettingStarted/IndyACA-PyDevOptions/#building-decentralized-identity-applications","title":"Building Decentralized Identity Applications","text":"If you just want to build enterprise applications on top of the decentralized identity-related Hyperledger projects, you can start with building cloud-based controller apps using any language you want, and deploying your code with an instance of the code in the ACA-Py repository.
If you want to build a mobile agent, there are open source options available, including Bifold Wallet, which is built on Credo-TS. Both are OpenWallet Projects.
As a developer building applications that use/embed ACA-Py agents, you should join the Aries Working Group's weekly calls and watch the aries-rfcs repo to see what protocols are being added and extended. In some cases, you may need to create your own protocols to be added to this repository, and if you are looking for interoperability, you should specify those protocols in an open way, involving the community.
Note that if building apps is what you want to do, you don't need to do a deep dive into the inner workings of ACA-Py, ledgers or mobile wallets. You need to know the concepts, but it's not a requirement that you know the code base intimately.
"},{"location":"gettingStarted/IndyACA-PyDevOptions/#contributing-to-aca-py","title":"Contributing to ACA-Py","text":"Of course as you build applications using ACA-Py, you will no doubt find deficiencies in the code and features you want added. Contributions to this repo will always be welcome.
"},{"location":"gettingStarted/IndyACA-PyDevOptions/#supporting-additional-ledgers","title":"Supporting Additional Ledgers","text":"ACA-Py currently supports a handful of public verifiable data registries and verifiable credentials exchange. A project goals to be \"ledger\"-agnostic, and to support a range of verifiable data registries. We're making it easier and easier to support other verifiable data registries, and would welcome assistance in adding new ones.
"},{"location":"gettingStarted/IndyACA-PyDevOptions/#other-agent-frameworks","title":"Other Agent Frameworks","text":"Although controllers for an ACA-Py instance can be written in any language, there is definitely a place for functionality equivalent (and better) to what is in this repo in other languages. Use the example provided by the ACA-Py demo, evolve that using a different language, and as you discover better ways to do things, discuss and share those improvements in the broader ACA-Py community so that this and other code bases improve.
"},{"location":"gettingStarted/IndyACA-PyDevOptions/#working-at-the-cryptographic-layer","title":"Working at the Cryptographic Layer","text":"Finally, at the deepest level, and core to all of the projects is the cryptography underpinning ACA-Py. If you are a cryptographer, that's where you want to be - and we want you there.
"},{"location":"gettingStarted/IndyBasics/","title":"Indy, Verifiable Credentials and Decentralized Identity Basics","text":"NOTE: If you are developer building apps on top of ACA-Py and Indy, you DO NOT need to know the nuts and bolts of Indy to build applications. You need to know about verifiable credentials and the concepts of self-sovereign identity. But as an app developer, you don't need to do the Indy getting started pieces. ACA-Py takes care of those details for you. The introduction linked here should be sufficient.
If you are new to Indy and verifiable credentials and want to learn the core concepts, this link provides a solid foundation into the goals and purpose of Indy including verifiable credentials, DIDs, decentralized/self-sovereign identity, the Sovrin Foundation and more. The document is the content of the Indy chapter of the Hyperledger edX Blockchain for Business course (which you could also go through).
Feel free to do the demo that is referenced in the material, but we recommend that you not dig into that codebase. It's pretty old now - year old! We've got much more relevant examples later in this guide.
As well, don't use the guidance in the course to dive into the content about \"Getting Started\" with Indy. Come back here as this content is far more relevant to the current state of Indy and ACA-Py.
"},{"location":"gettingStarted/IndyBasics/#tldr","title":"tl;dr","text":"Indy provides an implementation of the basic functions required to implement a network for self-sovereign identity (SSI) - a ledger, client SDKs for interacting with the ledger, DIDs, and capabilities for issuing, holding and proving verifiable credentials.
Back to the ACA-Py Developer - Getting Started Guide.
"},{"location":"gettingStarted/IssuingAnonCredsCredentials/","title":"Issuing AnonCreds Credentials","text":"Become an issuer, and define, publish and issue verifiable credentials to a mobile wallet. Run the Traction AnonCreds Workshop. Get your own (temporary -- it will be gone in a few weeks!) ACA-Py-based issuer/verifier agent. Connect to the wallet on your mobile phone, issue a credential and then present it back. Lots to learn, without ever leaving your browser!
"},{"location":"gettingStarted/PresentingAnonCredsProofs/","title":"Presenting AnonCreds Proofs","text":"Become a verifier, and construct a presentation request, send the request to a mobile wallet, get a presentation derived from AnonCreds verifiable credentials and verify the presentation. Run the Traction AnonCreds Workshop. Get your own (temporary -- it will be gone in a few weeks!) ACA-Py-based issuer/verifier agent. Connect to the wallet on your mobile phone, issue a credential and then present it back. Lots to learn, without ever leaving your browser!
"},{"location":"gettingStarted/RoutingEncryption/","title":"Deeper Dive: DIDComm Message Routing and Encryption","text":"Many Aries edge agents do not directly receive messages from a peer edge agent - they have agents in between that route messages to them. This is done for many reasons, such as:
https://agents-R-Us.ca
) that they are \"hidden in a crowd\".Thus, when a DIDComm message is sent from one edge agent to another, it is routed per the instructions of the receiver and for the needs of the sender. For example, in the following picture, Alice might be told by Bob to send messages to his phone (agent 4) via agents 9 and 3, and Alice might always send out messages via agent 2.
The following looks at how those requirements are met with mediators (for example, agents 9 and 3) and relays (agent 2).
"},{"location":"gettingStarted/RoutingEncryption/#inbound-routing-mediators","title":"Inbound Routing - Mediators","text":"To tell a sender how to get a message to it, an agent puts into the DIDDoc for that sender a service endpoint for the recipient (with an encryption key) and an ordered list (possibly empty) of routing keys (called \"mediators\") to use when sending the message. To send the message, the sender must:
Note that when an agent uses mediators, it is there responsibility to notify any mediators that need to know of the new relationship that has been formed using the connection protocol and the routing needs of that relationship - where to send messages that arrive destined for a given verkey. Mediator agents have what amounts to a routing table to know when they receive a forward message for a given verkey, where it should go.
Link: DIDDoc conventions for inbound routing
"},{"location":"gettingStarted/RoutingEncryption/#relays","title":"Relays","text":"Inbound routing described above covers mediators for the receiver that the sender must know about. In addition, either the sender or the receiver may also have relays they use for outbound messages. Relays are routing agents not known to other parties, but that participate in message routing. For example, an enterprise agent might send all outbound traffic to a single gateway in the organization. When sending to a relay, the sender just wraps the message in another \"forward\" message envelope.
Link: Mediators and Relays
"},{"location":"gettingStarted/RoutingEncryption/#message-encryption","title":"Message Encryption","text":"The DIDComm encryption handling is handling within the ACA-Py agent, and not really something a developer building applications using an agent needs to worry about. Further, within an ACA-Py agent, the handling of the encryption is left to various cryptographic libraries to handle. To encrypt a message, the agent code calls a pack()
function to handle the encryption, and to decrypt a message, the agent code calls a corresponding unpack()
function. The \"wire messages\" (as originally called) are described in detail here, including variations for sender authenticated and anonymous encrypting. Wire messages were meant to indicate the handling of a message from one agent directly to another, versus the higher level concept of routing a message from an edge agent to a peer edge agent.
Much thought has also gone into repudiable and non-repudiable messaging, as described here.
"},{"location":"gettingStarted/YourOwnACA-PyAgent/","title":"Creating Your Own Aries Agent","text":"Use the \"next steps\" in the Traction AnonCreds Workshop and create your own controller. The Aries ACA-Py Controllers repository has some samples to get you started.
"},{"location":"testing/AgentTracing/","title":"Using Tracing in ACA-PY","text":"ACA-Py supports message tracing, according to the Tracing RFC.
Tracing can be enabled globally, for all messages/events, or it can be enabled on an exchange-by-exchange basis.
Tracing is configured globally for the agent.
"},{"location":"testing/AgentTracing/#aca-py-configuration","title":"ACA-PY Configuration","text":"The following options can be specified when starting the aca-py agent:
--trace Generate tracing events.\n --trace-target <trace-target>\n Target for trace events (\"log\", \"message\", or http\n endpoint).\n --trace-tag <trace-tag>\n Tag to be included when logging events.\n --trace-label <trace-label>\n Label (agent name) used logging events.\n
The --trace
option enables tracing globally for the agent, the other options can configure the trace destination and content (default is log
).
Tracing can be enabled on an exchange-by-exchange basis, by including { ... \"trace\": True, ...}
in the JSON payload to the API call (for credential and proof exchanges).
The run_demo
script supports the following parameters and environment variables.
Environment variables:
TRACE_ENABLED Flag to enable tracing\n\nTRACE_TARGET_URL Host:port of endpoint to log trace events (e.g. logstash:9700)\n\nDOCKER_NET Docker network to join (must be used if ELK stack is running in docker)\n\nTRACE_TAG Tag to be included in all logged trace events\n
Parameters:
--trace-log Enables tracing to the standard log output\n (sets TRACE_ENABLED, TRACE_TARGET, TRACE_TAG)\n\n--trace-http Enables tracing to an HTTP endpoint (specified by TRACE_TARGET_URL)\n (sets TRACE_ENABLED, TRACE_TARGET, TRACE_TAG)\n
When running the Faber controller, tracing can be enabled using the T
menu option:
Faber | Connected\n (1) Issue Credential\n (2) Send Proof Request\n (3) Send Message\n (T) Toggle tracing on credential/proof exchange\n (X) Exit?\n[1/2/3/T/X] t\n\n>>> Credential/Proof Exchange Tracing is ON\n (1) Issue Credential\n (2) Send Proof Request\n (3) Send Message\n (T) Toggle tracing on credential/proof exchange\n (X) Exit?\n\n[1/2/3/T/X] t\n\n>>> Credential/Proof Exchange Tracing is OFF\n (1) Issue Credential\n (2) Send Proof Request\n (3) Send Message\n (T) Toggle tracing on credential/proof exchange\n (X) Exit?\n\n[1/2/3/T/X]\n
When Exchange Tracing
is ON
, all exchanges will include tracing.
You can use the ELK
stack in the ELK Stack sub-directory as a target for trace events, just start the ELK stack using the docker-compose file and then in two separate bash shells, startup the demo as follows:
DOCKER_NET=elknet TRACE_TARGET_URL=logstash:9700 ./run_demo faber --trace-http\n
DOCKER_NET=elknet TRACE_TARGET_URL=logstash:9700 ./run_demo alice --trace-http\n
"},{"location":"testing/AgentTracing/#hooking-into-event-messaging","title":"Hooking into event messaging","text":"ACA-PY supports sending events to web hooks, which allows the demo agents to display them in the CLI. To also send them to another end point, use the --webhook-url
option, which requires the WEBHOOK_URL
environment variable. Configure an end point running on the docker host system, port 8888, use the following:
WEBHOOK_URL=host.docker.internal:8888 ./run_demo faber --webhook-url\n
"},{"location":"testing/BDDTests/","title":"Integration Tests for ACA-Py using Behave","text":"Integration tests for ACA-Py are implemented using Behave functional tests to drive ACA-Py agents based on the alice/faber demo framework.
If you are new to the ACA-Py integration test suite, this video from ACA-Py Maintainer @ianco describes the Integration Tests in ACA-Py, how to run them and how to add more tests. See also the video at the end of this document about running Aries Agent Test Harness (AATH) tests before you submit your pull requests. Note that the relevant AATH tests are now run as part of the tests run when submitting a code PR for ACA-Py.
"},{"location":"testing/BDDTests/#getting-started","title":"Getting Started","text":"To run the ACA-Py Behave tests, open a bash shell run the following:
git clone https://github.com/bcgov/von-network\ncd von-network\n./manage build\n./manage start\ncd ..\ngit clone https://github.com/bcgov/indy-tails-server.git\ncd indy-tails-server/docker\n./manage build\n./manage start\ncd ../..\ngit clone \"https://github.com/openwallet-foundation/acapy\"\ncd acapy/demo\n./run_bdd -t ~@taa_required\n
Note that an Indy ledger and tails server are both required (these can also be specified using environment variables).
Note also that some tests require a ledger with Indy the \"TAA\" (Transaction Author Agreement) concept enabled, how to run these tests will be described later.
By default the test suite runs using a default (SQLite) wallet, to run the tests using postgres run the following:
# run the above commands, up to cd acapy/demo\ndocker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d -p 5432:5432 postgres:10\nACAPY_ARG_FILE=postgres-indy-args.yml ./run_bdd\n
To run the tests against the back-end askar
libraries (as opposed to indy-sdk) run the following:
BDD_EXTRA_AGENT_ARGS=\"{\\\"wallet-type\\\":\\\"askar\\\"}\" ./run_bdd -t ~@taa_required\n
(Note that wallet-type
is currently the only extra argument supported.)
You can run individual tests by specifying the tag(s):
./run_bdd -t @T001-AIP10-RFC0037\n
"},{"location":"testing/BDDTests/#running-integration-tests-which-require-taa","title":"Running Integration Tests which require TAA","text":"To run a local von-network with TAA enabled,run the following:
git clone https://github.com/bcgov/von-network\ncd von-network\n./manage build\n./manage start --taa-sample --logs\n
You can then run the TAA-enabled tests as follows:
./run_bdd -t @taa_required\n
or:
BDD_EXTRA_AGENT_ARGS=\"{\\\"wallet-type\\\":\\\"askar\\\"}\" ./run_bdd -t @taa_required\n
The agents run on a pre-defined set of ports, however occasionally your local system may already be using one of these ports. (For example MacOS recently decided to use 8021 for the ftp proxy service.)
To override the default port settings:
AGENT_PORT_OVERRIDE=8030 ./run_bdd -t <some tag>\n
(Note that since the test run multiple agents you require up to 60 available ports.)
"},{"location":"testing/BDDTests/#note-on-bbs-signatures","title":"Note on BBS Signatures","text":"ACA-Py does not come installed with the bbs
library by default therefore integration tests involving BBS signatures (tagged with @BBS) will fail unless excluded.
You can exclude BBS tests from running with the tag ~@BBS
:
run_bdd -t ~@BBS\n
If you want to run all tests including BBS tests you should include the --all-extras
flag:
run_bdd --all-extras\n
Note: The bbs
library may not install on ARM (i.e. aarch64 or arm64) architecture therefore YMMV with testing BBS Signatures on ARM based devices.
ACA-Py Behave tests are based on the interoperability tests that are implemented in the Aries Agent Test Harness (AATH). Both use Behave (Gherkin) to execute tests against a running ACA-Py agent (or in the case of AATH, against any compatible Aries agent), however the ACA-Py integration tests focus on ACA-Py specific features.
AATH:
As of around the publication of ACA-Py 1.0.0 (Summer 2024), the ACA-Py CI/CD Pipeline for code PRs includes running a useful subset of AATH tests.
ACA-Py integration tests:
ACA-Py integration tests use the same configuration approach as AATH, documented here.
In addition to support for external schemas, credential data etc, the ACA-Py integration tests support configuration of the ACA-Py agents that are used to run the test. For example:
Scenario Outline: Present Proof where the prover does not propose a presentation of the proof and is acknowledged\n Given \"3\" agents\n | name | role | capabilities |\n | Acme | issuer | <Acme_capabilities> |\n | Faber | verifier | <Acme_capabilities> |\n | Bob | prover | <Bob_capabilities> |\n And \"<issuer>\" and \"Bob\" have an existing connection\n And \"Bob\" has an issued <Schema_name> credential <Credential_data> from <issuer>\n ...\n\n Examples:\n | issuer | Acme_capabilities | Bob_capabilities | Schema_name | Credential_data | Proof_request |\n | Acme | --public-did | | driverslicense | Data_DL_NormalizedValues | DL_age_over_19 |\n | Faber | --public-did --mediator | --mediator | driverslicense | Data_DL_NormalizedValues | DL_age_over_19 |\n
In the above example, the test will run twice using the parameters specified in the \"Examples\" section. The Acme, Faber and Bob agents will be started for the test and then shut down when the test is completed.
The agent's \"capabilities\" are specified using the same command-line parameters that are supported for the Alice/Faber demo agents.
"},{"location":"testing/BDDTests/#global-configuration-for-all-aca-py-agents-under-test","title":"Global Configuration for All ACA-Py Agents Under Test","text":"You can specify parameters that are applied to all ACA-Py agents using the ACAPY_ARG_FILE
environment variable, for example:
ACAPY_ARG_FILE=postgres-indy-args.yml ./run_bdd\n
... will apply the parameters in the postgres-indy-args.yml
file (which just happens to configure a postgres wallet) to all agents under test.
Or the following:
ACAPY_ARG_FILE=askar-indy-args.yml ./run_bdd\n
... will run all the tests against an askar wallet (the new shared components, which replace indy-sdk).
Any ACA-Py argument can be included in the yml file, and order-of-precedence applies (see https://pypi.org/project/ConfigArgParse/).
"},{"location":"testing/BDDTests/#specifying-environment-parameters-when-running-integration-tests","title":"Specifying Environment Parameters when Running Integration Tests","text":"ACA-Py integration tests support the following environment-driven configuration:
LEDGER_URL
- specify the ledger urlTAILS_NETWORK
- specify the docker network the tails server is running onPUBLIC_TAILS_URL
- specify the public url of the tails serverACAPY_ARG_FILE
- specify global ACA-Py parameters (see above)Behave tests are tagged using the same standard tags as used in AATH.
To run a specific set of ACA-Py integration tests (or exclude specific tests):
./run_bdd -t tag1 -t ~tag2\n
(All command line parameters are passed to the behave
command, so all parameters supported by behave can be used.)
This video is a presentation by ACA-Py developer @ianco about using the Aries Agent Test Harness for local pre-release testing of ACA-Py. Have a big change that you want to test with other Aries Frameworks? Following this guidance to run AATH tests with your under-development branch of ACA-Py.
"},{"location":"testing/IntegrationTests/","title":"Integration Test Plan","text":"Integration testing in ACA-Py consists of 3 different levels or types.
Interoperability is extremely important in the decentralized trust/SSI community. for example, when implementing or changing features that are included in the Aries Interop Profile the developer should try to add tests to this test suite.
These tests are contained in a separate repo AATH. They use the gherkin syntax and a http back channel. Changes to the tests need to be added and merged into this repo before they will be reflected in the automatic testing workflows. There has been a lot of work to make developing and debugging tests easier. See (AATH Dev Containers)[https://github.com/hyperledger/aries-agent-test-harness/blob/main/AATH_DEV_CONTAINERS.md#dev-containers-in-aath].
The tests will then be ran for PR's and scheduled workflows for ACA-Py \u2194 ACA-Py agents. These tests are important because having them allows the AATH project to more easily test Credo-TS \u2194 ACA-Py scenarios and ensure interoperability with mobile agents interacting with ACA-Py agents.
"},{"location":"testing/IntegrationTests/#aca-py-specific-bdd-tests","title":"ACA-Py specific BDD tests","text":"These tests leverage the demo agent and also use gherkin syntax and a back channel. See README.
These tests are another tool for leveraging the demo agent and the gherkin syntax. They should not be used to test features that involve the interop profile, as they can not be used to test against other frameworks. None of the tests that are covered by the AATH tests will be ran automatically. They are here because some developers may prefer the testing strategy and can be useful for explicit testing steps and protocols not included in the interop profile.
"},{"location":"testing/IntegrationTests/#scenario-testing","title":"Scenario testing","text":"These tests utilize the minimal example agent produced by Indicio. They exist in the scenarios
directory. They are very useful for running specific test plans and checking webhooks.
ACA-Py supports multiple configurations of logging.
"},{"location":"testing/Logging/#log-level","title":"Log level","text":"ACA-Py's logging is based on python's logging lib. Log levels DEBUG
, INFO
and WARNING
are available. Other log levels fall back to WARNING
.
Supports writing of log messages to a file with wallet_id
as the tenant identifier for each. To enable this, both multitenant mode (--multitenant
) and writing to log file option (--log-file
) are required. If both --multitenant
and --log-file
are not passed when starting up ACA-Py, then it will use default_logging_config.ini
config (backward compatible) and not log at a per tenant level.
--log-level
- The log level to log on std out--log-file
- Enables writing of logs to file. The provided value becomes path to a file to log to. If no value or empty string is provided then it will try to get the path from the config file--log-config
- Specifies a custom logging configuration fileExample:
./bin/aca-py start --log-level debug --log-file acapy.log --log-config acapy_agent.config:default_per_tenant_logging_config.ini\n\n./bin/aca-py start --log-level debug --log-file --multitenant --log-config ./acapy_agent/config/default_per_tenant_logging_config.yml\n
"},{"location":"testing/Logging/#environment-variables","title":"Environment Variables","text":"The log level can be configured using the environment variable ACAPY_LOG_LEVEL
. The log file can be set by ACAPY_LOG_FILE
. The log config can be set by ACAPY_LOG_CONFIG
.
Example:
ACAPY_LOG_LEVEL=info ACAPY_LOG_FILE=./acapy.log ACAPY_LOG_CONFIG=./acapy_log.ini ./bin/aca-py start\n
"},{"location":"testing/Logging/#aca-py-config-file","title":"ACA-Py Config File","text":"Following parameters can be used in a configuration file like this.
log-level: WARNING\ndebug-connections: false\ndebug-presentations: false\n
Warning: debug-connections and debug-presentations must not be used in a production environment as they log also credential claims values. Both parameters are independent of the log level, which means: Also if log-level is set to WARNING, connections and presentations will be logged like in debug log level.
"},{"location":"testing/Logging/#log-config-file","title":"Log config file","text":"The path to config file is provided via --log-config
.
Find an example in default_logging_config.ini.
You can find more detail description in the logging documentation.
For per tenant logging, find an example in default_per_tenant_logging_config.ini, which sets up TimedRotatingFileMultiProcessHandler
and StreamHandler
handlers. Custom TimedRotatingFileMultiProcessHandler
handler supports the ability to cleanup logs by time and maintain backup logs and a custom JSON formatter for logs. The arguments for it such as file name
, when
, interval
and backupCount
can be passed as args=('acapy.log', 'd', 7, 1,)
(also shown below). Note: backupCount
of 0 will mean all backup log files will be retained and not deleted at all. More details about these attributes can be found here
[loggers]\nkeys=root\n\n[handlers]\nkeys=stream_handler, timed_file_handler\n\n[formatters]\nkeys=formatter\n\n[logger_root]\nlevel=ERROR\nhandlers=stream_handler, timed_file_handler\n\n[handler_stream_handler]\nclass=StreamHandler\nlevel=DEBUG\nformatter=formatter\nargs=(sys.stderr,)\n\n[handler_timed_file_handler]\nclass=logging.handlers.TimedRotatingFileMultiProcessHandler\nlevel=DEBUG\nformatter=formatter\nargs=('acapy.log', 'd', 7, 1,)\n\n[formatter_formatter]\nformat=%(asctime)s %(wallet_id)s %(levelname)s %(pathname)s:%(lineno)d %(message)s\n
For DictConfig
(dict
logging config file), find an example in default_per_tenant_logging_config.yml with same attributes as default_per_tenant_logging_config.ini
file.
version: 1\nformatters:\n default:\n format: '%(asctime)s %(wallet_id)s %(levelname)s %(pathname)s:%(lineno)d %(message)s'\nhandlers:\n console:\n class: logging.StreamHandler\n level: DEBUG\n formatter: default\n stream: ext://sys.stderr\n rotating_file:\n class: logging.handlers.TimedRotatingFileMultiProcessHandler\n level: DEBUG\n filename: 'acapy.log'\n when: 'd'\n interval: 7\n backupCount: 1\n formatter: default\nroot:\n level: INFO\n handlers:\n - console\n - rotating_file\n
"},{"location":"testing/Troubleshooting/","title":"Troubleshooting ACA-Py","text":"This document contains some troubleshooting information that contributors to the community think may be helpful. Most of the content here assumes the reader has gotten started with ACA-Py and has arrived here because of an issue that came up in their use of ACA-Py.
Contributions (via pull request) to this document are welcome. Topics added here will mostly come from reported issues that contributors think would be helpful to the larger community.
"},{"location":"testing/Troubleshooting/#table-of-contents","title":"Table of Contents","text":"The most common issue hit by first time users is getting an error on startup \"unable to connect to ledger\". Here are a list of things to check when you see that error.
"},{"location":"testing/Troubleshooting/#local-ledger-running","title":"Local ledger running?","text":"Unless you specify via startup parameters or environment variables that you are using a public Hyperledger Indy ledger, ACA-Py assumes that you are running a local ledger -- an instance of von-network. If that is the cause -- have you started your local ledger, and did it startup properly. Things to check:
https:/localhost:9000
) accessible? If so, can you click on and see the Genesis File?LEDGER_URL=http://test.bcovrin.vonx.io
. For example, when running the Alice-Faber demo in the demo folder, you can run (for example), the Faber agent using the command: LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber
Do you have any firewalls in play that might be blocking the ports that are used by the ledger, notably 9701-9708? To access a ledger the ACA-Py instance must be able to get to those ports of the ledger, regardless if the ledger is local or remote.
"},{"location":"testing/Troubleshooting/#damaged-unpublishable-revocation-registry","title":"Damaged, Unpublishable Revocation Registry","text":"We have discovered that in the ACA-Py AnonCreds implementation, it is possible to get into a state where the publishing of updates to a Revocation Registry (RevReg) is impossible. This can happen where ACA-Py starts to publish an update to the RevReg, but the write transaction to the Hyperledger Indy ledger fails for some reason. When a credential revocation is published, ACA-Py (via indy-sdk or askar/credx) updates the revocation state in the wallet as well as on the ledger. The revocation state is dependant on whatever the previous revocation state is/was, so if the ledger and wallet are mis-matched the publish will fail. (PR #1804 (merged) mitigates but probably doesn't completely eliminate this from happening).
For example, in case we've seen, the write RevRegEntry transaction failed at the ledger because there was a problem with accepting the TAA (Transaction Author Agreement). Once the error occurred, the RevReg state held by the ACA-Py agent, and the RevReg state on the ledger were different. Even after the ability to write to the ledger was restored, the RevReg could still not be published because of the differences in the RevReg state. Such a situation can now be corrected, as follows:
To address this issue, some new endpoints were added to ACA-Py in Release 0.7.4, as follows:
/revocation/registry/<id>/issued
- counts of the number of issued/revoked within a registry/revocation/registry/<id>/issued/details
- details of all credentials issued/revoked within a registry/revocation/registry/<id>/issued/indy_recs
- calculated rev_reg_delta from the ledger/revocation/registry/<id>/fix-revocation-entry-state
- publish an update to the RevReg state on the ledger to bring it into alignment with what is in the ACA-Py instance.apply_ledger_update
) to control whether the ledger entry actually gets published so, if you are so inclined, you can call the endpoint to see what the transaction would be, before you actually try to do a ledger update. This will return:rev_reg_delta
- same as the \".../indy_recs\" endpointaccum_calculated
- transaction to write to ledgeraccum_fixed
- If apply_ledger_update
, the transaction actually written to the ledgerNote that there is (currently) a backlog item to prevent the wallet and ledger from getting out of sync (e.g. don't update the ACA-Py RevReg state if the ledger write fails), but even after that change is made, having this ability will be retained for use if needed.
We originally ran into this due to the TAA acceptance getting lost when switching to multi-ledger (as described here. Note that this is one reason how this \"out of sync\" scenario can occur, but there may be others.
We add an integration test that demonstrates/tests this issue here.
To run the scenario either manually or using the integration tests, you can do the following:
./manage start --taa-sample --logs
./manage start --logs
./run_demo faber --revocation --taa-accept
, and then you can run through all the transactions using the Swagger page../run_bdd -t @taa_required
The following covers the Unit Testing framework in ACA-Py, how to run the tests, and how to add unit tests.
This video is a presentation of the material covered in this document.
"},{"location":"testing/UnitTests/#running-unit-tests-in-aca-py","title":"Running unit tests in ACA-Py","text":"./scripts/run_tests
./scripts/run_tests aries_cloudagent/protocols/out_of_band/v1_0/tests
Note: The bbs
library is not installed with ACA-Py by default, therefore unit tests involving BBS Signatures are disabled. To run BBS tests add the --all-extras
flag:
./scripts/run_tests --all-extras\n
Note: The bbs
library may not install on ARM (i.e. aarch64 or arm64) architecture therefore YMMV with testing BBS Signatures on ARM based devices.
Example: acapy_agent/core/tests/test_event_bus.py
@pytest.fixture\ndef event_bus():\n yield EventBus()\n\n\n@pytest.fixture\ndef profile():\n yield async_mock.MagicMock()\n\n\n@pytest.fixture\ndef event():\n event = Event(topic=\"anything\", payload=\"payload\")\n yield event\n\nclass MockProcessor:\n def __init__(self):\n self.profile = None\n self.event = None\n\n async def __call__(self, profile, event):\n self.profile = profile\n self.event = event\n\n\n@pytest.fixture\ndef processor():\n yield MockProcessor()\n
def test_sub_unsub(event_bus: EventBus, processor):\n \"\"\"Test subscribe and unsubscribe.\"\"\"\n event_bus.subscribe(re.compile(\".*\"), processor)\n assert event_bus.topic_patterns_to_subscribers\n assert event_bus.topic_patterns_to_subscribers[re.compile(\".*\")] == [processor]\n event_bus.unsubscribe(re.compile(\".*\"), processor)\n assert not event_bus.topic_patterns_to_subscribers\n
From aries_cloudagent/core/event_bus.py
class EventBus:\n def __init__(self):\n self.topic_patterns_to_subscribers: Dict[Pattern, List[Callable]] = {}\n\ndef subscribe(self, pattern: Pattern, processor: Callable):\n if pattern not in self.topic_patterns_to_subscribers:\n self.topic_patterns_to_subscribers[pattern] = []\n self.topic_patterns_to_subscribers[pattern].append(processor)\n\ndef unsubscribe(self, pattern: Pattern, processor: Callable):\n if pattern in self.topic_patterns_to_subscribers:\n try:\n index = self.topic_patterns_to_subscribers[pattern].index(processor)\n except ValueError:\n return\n del self.topic_patterns_to_subscribers[pattern][index]\n if not self.topic_patterns_to_subscribers[pattern]:\n del self.topic_patterns_to_subscribers[pattern]\n
@pytest.mark.asyncio\nasync def test_sub_notify(event_bus: EventBus, profile, event, processor):\n \"\"\"Test subscriber receives event.\"\"\"\n event_bus.subscribe(re.compile(\".*\"), processor)\n await event_bus.notify(profile, event)\n assert processor.profile == profile\n assert processor.event == event\n
async def notify(self, profile: \"Profile\", event: Event):\n partials = []\n for pattern, subscribers in self.topic_patterns_to_subscribers.items():\n match = pattern.match(event.topic)\n\n if not match:\n continue\n\n for subscriber in subscribers:\n partials.append(\n partial(\n subscriber,\n profile,\n event.with_metadata(EventMetadata(pattern, match)),\n )\n )\n\n for processor in partials:\n try:\n await processor()\n except Exception:\n LOGGER.exception(\"Error occurred while processing event\")\n
"},{"location":"testing/UnitTests/#asynctest","title":"asynctest","text":"From: acapy_agent/protocols/didexchange/v1_0/tests/test.manager.py
class TestDidExchangeManager(AsyncTestCase, TestConfig):\n async def setUp(self):\n self.responder = MockResponder()\n\n self.oob_mock = async_mock.MagicMock(\n clean_finished_oob_record=async_mock.AsyncMock(return_value=None)\n )\n\n self.route_manager = async_mock.MagicMock(RouteManager)\n ...\n self.profile = InMemoryProfile.test_profile(\n {\n \"default_endpoint\": \"http://aries.ca/endpoint\",\n \"default_label\": \"This guy\",\n \"additional_endpoints\": [\"http://aries.ca/another-endpoint\"],\n \"debug.auto_accept_invites\": True,\n \"debug.auto_accept_requests\": True,\n \"multitenant.enabled\": True,\n \"wallet.id\": True,\n },\n bind={\n BaseResponder: self.responder,\n OobMessageProcessor: self.oob_mock,\n RouteManager: self.route_manager,\n ...\n },\n )\n ...\n\n async def test_receive_invitation_no_auto_accept(self):\n async with self.profile.session() as session:\n mediation_record = MediationRecord(\n role=MediationRecord.ROLE_CLIENT,\n state=MediationRecord.STATE_GRANTED,\n connection_id=self.test_mediator_conn_id,\n routing_keys=self.test_mediator_routing_keys,\n endpoint=self.test_mediator_endpoint,\n )\n await mediation_record.save(session)\n with async_mock.patch.object(\n self.multitenant_mgr, \"get_default_mediator\"\n ) as mock_get_default_mediator:\n mock_get_default_mediator.return_value = mediation_record\n invi_rec = await self.oob_manager.create_invitation(\n my_endpoint=\"testendpoint\",\n hs_protos=[HSProto.RFC23],\n )\n\n invitee_record = await self.manager.receive_invitation(\n invi_rec.invitation,\n auto_accept=False,\n )\n assert invitee_record.state == ConnRecord.State.INVITATION.rfc23\n
async def receive_invitation(\n self,\n invitation: OOBInvitationMessage,\n their_public_did: Optional[str] = None,\n auto_accept: Optional[bool] = None,\n alias: Optional[str] = None,\n mediation_id: Optional[str] = None,\n) -> ConnRecord:\n ...\n accept = (\n ConnRecord.ACCEPT_AUTO\n if (\n auto_accept\n or (\n auto_accept is None\n and self.profile.settings.get(\"debug.auto_accept_invites\")\n )\n )\n else ConnRecord.ACCEPT_MANUAL\n )\n service_item = invitation.services[0]\n # Create connection record\n conn_rec = ConnRecord(\n invitation_key=(\n DIDKey.from_did(service_item.recipient_keys[0]).public_key_b58\n if isinstance(service_item, OOBService)\n else None\n ),\n invitation_msg_id=invitation._id,\n their_label=invitation.label,\n their_role=ConnRecord.Role.RESPONDER.rfc23,\n state=ConnRecord.State.INVITATION.rfc23,\n accept=accept,\n alias=alias,\n their_public_did=their_public_did,\n connection_protocol=DIDX_PROTO,\n )\n\n async with self.profile.session() as session:\n await conn_rec.save(\n session,\n reason=\"Created new connection record from invitation\",\n log_params={\n \"invitation\": invitation,\n \"their_role\": ConnRecord.Role.RESPONDER.rfc23,\n },\n )\n\n # Save the invitation for later processing\n ...\n\n return conn_rec\n
"},{"location":"testing/UnitTests/#other-details","title":"Other details","text":" with self.assertRaises(DIDXManagerError) as ctx:\n ...\n assert \" ... error ...\" in str(ctx.exception)\n
function.assert_called_once_with(parameters)
function.assert_called_once()
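A minimal, self-contained sketch of those two assertions against an AsyncMock (the function and names here are illustrative, not from the ACA-Py codebase):

```python
import asyncio
from unittest import mock


async def send_greeting(responder, connection_id):
    await responder.send(f"hello {connection_id}")


responder = mock.AsyncMock()
asyncio.run(send_greeting(responder, "abc123"))

# Passes: called exactly once, with exactly these parameters
responder.send.assert_called_once_with("hello abc123")
# Passes: called exactly once, parameters not checked
responder.send.assert_called_once()
```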
pytest markers are set up in setup.cfg and can be applied at the function or class level, for example @pytest.mark.askar (see the sketch below).
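A minimal illustration (the test names are invented); marked tests can then be selected with `pytest -m askar`:

```python
import pytest


@pytest.mark.askar
class TestAskarBackedBehavior:
    """Collected only when the 'askar' marker is selected."""

    def test_class_level_marker(self):
        assert True


@pytest.mark.askar
def test_function_level_marker():
    assert True
```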
Code coverage
🚨 ACA-Py is transitioning to the OpenWallet Foundation (OWF)! 🚨
We're excited to announce that the ACA-Py project has moved to the OWF's GitHub organization as the new "acapy" project.
For details on what this means for ACA-Py users, including steps for updating deployments, please follow the updates in GitHub Issue #3250. We'll keep you informed about how to update your deployment to reflect this change. Stay tuned!
An easy-to-use enterprise wallet for building decentralized trust services using any language that supports sending and receiving HTTP requests.
ACA-Py Plugins have their own store! Visit https://plugins.aca-py.org to find ready-to-use functionality to add to your ACA-Py deployment, and to learn how to build your own plugins.
"},{"location":"#overview","title":"Overview","text":"ACA-Py is a foundation for building Verifiable Credential (VC) ecosystems. It operates in the second and third layers of the Trust Over IP framework (PDF) using a variety of verifiable credential formats and protocols. ACA-Py runs on servers (cloud, enterprise, IoT devices, and so forth), and is not designed to run on mobile devices.
ACA-Py includes support for the concepts and features that make up Aries Interop Profile (AIP) 2.0. ACA-Py's supported features include, most importantly, protocols for issuing, verifying, and holding verifiable credentials using both Hyperledger AnonCreds verifiable credential format, and the W3C Standard Verifiable Credential Data Model format using JSON-LD with LD-Signatures and BBS+ Signatures. Coming soon -- issuing and presenting Hyperledger AnonCreds verifiable credentials using the W3C Standard Verifiable Credential Data Model format.
To use ACA-Py you create a business logic \"controller\" that talks to an ACA-Py instance (sending HTTP requests and receiving webhook notifications), and ACA-Py handles the various protocols and related functionality. Your controller can be built in any language that supports making and receiving HTTP requests; knowledge of Python is not needed. Together, this means you can focus on building VC solutions using familiar web development technologies, instead of having to learn the nuts and bolts of low-level cryptography and Trust over IP-type protocols.
This checklist-style overview document provides a full list of the features in ACA-Py. The following is a list of some of the core features needed for a production deployment, with a link to detailed information about the capability.
"},{"location":"#lts-releases","title":"LTS Releases","text":"The ACA-Py community provides periodic releases with new features and improvements. Certain releases are designated by the ACA-Py maintainers as long-term support (LTS) releases and listed in this document. Critical bugs and important (as determined by the ACA-Py Maintainers) fixes are backported to the active LTS releases. Each LTS release will be supported with patches for 9 months following the designation of the next LTS Release. For more details see the LTS strategy.
Current LTS releases are:
Unless specified in the Breaking Changes section of the ACA-Py CHANGELOG, all LTS patch releases can be deployed without an upgrade process from their prior release. Upgrade steps (if any) for minor/major ACA-Py releases are tested and documented in the ACA-Py CHANGELOG per release and in the project documents published at https://aca-py.org from the markdown files in this repository.
ACA-Py releases and release notes can be found on the GitHub releases page.
"},{"location":"#multi-tenant","title":"Multi-Tenant","text":"ACA-Py supports \"multi-tenant\" scenarios. In these scenarios, one (scalable) instance of ACA-Py uses one database instance, and are together capable of managing separate secure storage (for private keys, DIDs, credentials, etc.) for many different actors. This enables (for example) an \"issuer-as-a-service\", where an enterprise may have many VC issuers, each with different identifiers, using the same instance of ACA-Py to interact with VC holders as required. Likewise, an ACA-Py instance could be a \"cloud wallet\" for many holders (e.g. people or organizations) that, for whatever reason, cannot use a mobile device for a wallet. Learn more about multi-tenant deployments here.
"},{"location":"#mediator-service","title":"Mediator Service","text":"Startup options allow the use of an ACA-Py as a DIDComm mediator using core DIDComm protocols to coordinate its mediation role. Such an ACA-Py instance receives, stores and forwards messages to DIDComm agents that (for example) lack an addressable endpoint on the Internet such as a mobile wallet. A live instance of a public mediator based on ACA-Py is available here from Indicio, PBC. Learn more about deploying a mediator here. See the Aries Mediator Service for a \"best practices\" configuration of an Aries mediator.
"},{"location":"#indy-transaction-endorsing","title":"Indy Transaction Endorsing","text":"ACA-Py supports a Transaction Endorsement protocol, for agents that don't have write access to an Indy ledger. Endorser support is documented here.
"},{"location":"#scaled-deployments","title":"Scaled Deployments","text":"ACA-Py supports deployments in scaled environments such as in Kubernetes environments where ACA-Py and its storage components can be horizontally scaled as needed to handle the load.
"},{"location":"#vc-api-endpoints","title":"VC-API Endpoints","text":"A set of endpoints conforming to the vc-api specification are included to manage w3c credentials and presentations. They are documented here and a postman demo is available here.
"},{"location":"#example-uses","title":"Example Uses","text":"The business logic you use with ACA-Py is limited only by your imagination. Possible applications include:
For those new to SSI, Wallets, and ACA-Py, there are a couple of Linux Foundation edX courses that provide a good starting point.
The latter is the most useful for developers wanting to get a solid basis in using ACA-Py and other Aries Frameworks.
Also included here is a much more concise (but less maintained) Getting Started Guide that will take you from knowing next to nothing about decentralized identity to developing Aries-based business apps and services. You'll run an Indy ledger (with no ramp-up time), ACA-Py apps and developer-oriented demos. The guide has a table of contents so you can skip the parts you already know.
"},{"location":"#understanding-the-architecture","title":"Understanding the Architecture","text":"There is an architectural deep dive webinar presented by the ACA-Py team, and slides from the webinar are also available. The picture below gives a quick overview of the architecture, showing an instance of ACA-Py, a controller and the interfaces between the controller and ACA-Py, and the external paths to other agents and public ledgers on the Internet.
You can extend ACA-Py using plug-ins, which can be loaded at runtime. Plug-ins are mentioned in the webinar and are described in more detail here. An ever-expanding set of ACA-Py plugins can be found in the ACA-Py Plugins repository. Check them out -- it might already have the very plugin you need!
"},{"location":"#installation-and-usage","title":"Installation and Usage","text":"Use the \"install and go\" page for developers if you are comfortable with decentralized trust concepts. ACA-Py can be run with Docker without installation (highly recommended), or can be installed from PyPi. In the repository /demo
folder there is a full set of demos for developers to use in getting up to speed quickly. Start with the Traction Workshop to go through a complete ACA-Py-based Issuer-Holder-Verifier flow in about 20 minutes. Next, the Alice-Faber Demo is a great way for developers to try a zero-install example of how to use the ACA-Py API to operate a couple of agents. The Read the Docs overview is also a way to understand the internal modules and APIs that make up an ACA-Py instance.
If you would like to develop on ACA-Py locally, note that we use Poetry for dependency management and packaging. If you are unfamiliar with Poetry, please see our cheat sheet.
"},{"location":"#about-the-aca-py-admin-api","title":"About the ACA-Py Admin API","text":"The overview of ACA-Py\u2019s API is a great starting place for learning about the ACA-Py API when you are starting to build your own controller.
An ACA-Py instance puts together an OpenAPI-documented REST interface based on the protocols that are loaded. This is used by a controller application (written in any language) to manage the behavior of the agent. The controller can initiate actions (e.g. issuing a credential) and can respond to agent events (e.g. sending a presentation request after a connection is accepted). Agent events are delivered to the controller as webhooks to a configured URL.
Technical note: the administrative API exposed by the agent for the controller to use must be protected with an API key (using the --admin-api-key command line arg) or deliberately left unsecured using the --admin-insecure-mode command line arg. The latter should not be used outside of development unless the API is otherwise secured.
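To make the webhook side of the controller pattern concrete, here is a minimal sketch of a listener (the port and the reaction to a connection event are illustrative; ACA-Py POSTs events to `<webhook-url>/topic/<topic>/` when started with --webhook-url):

```python
from aiohttp import web


async def handle_webhook(request: web.Request):
    topic = request.match_info["topic"]
    payload = await request.json()
    # React to agent events, e.g. a connection reaching the 'active' state
    if topic == "connections" and payload.get("state") == "active":
        print(f"Connection {payload['connection_id']} is active")
    return web.Response(status=200)


app = web.Application()
app.add_routes([web.post("/webhooks/topic/{topic}/", handle_webhook)])

if __name__ == "__main__":
    # Start ACA-Py with: --webhook-url http://localhost:8022/webhooks
    web.run_app(app, port=8022)
```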
"},{"location":"#troubleshooting","title":"Troubleshooting","text":"There are a number of resources for getting help with ACA-Py and troubleshooting any problems you might run into. The Troubleshooting document contains some guidance about issues that have been experienced in the past. Feel free to submit PRs to supplement the troubleshooting document! Searching the ACA-Py GitHub issues may uncovers challenges you are having that others have experienced, often with solutions. As well, there is the \"aca-py\" channel on the OpenWallet Foundation Discord chat server (invitation here).
"},{"location":"#credit","title":"Credit","text":"The initial implementation of ACA-Py was developed by the Government of British Columbia\u2019s Digital Trust Team in Canada. To learn more about what\u2019s happening with decentralized identity and digital trust in British Columbia, checkout the BC Digital Trust website.
See the MAINTAINERS.md file for how to find a list of the current ACA-Py maintainers, and guidelines for becoming a Maintainer. We'd love to have you join the team if you are willing and able to carry out the duties of a Maintainer.
"},{"location":"#contributing","title":"Contributing","text":"Pull requests are welcome! Please read our contributions guide and submit your PRs. We enforce developer certificate of origin (DCO) commit signing \u2014\u00a0guidance on this is available. We also welcome issues submitted about problems you encounter in using ACA-Py.
"},{"location":"#license","title":"License","text":"Apache License Version 2.0
"},{"location":"CHANGELOG/","title":"Aries Cloud Agent Python Changelog","text":""},{"location":"CHANGELOG/#110rc1","title":"1.1.0rc1","text":""},{"location":"CHANGELOG/#october-15-2024","title":"October 15, 2024","text":"Release 1.1.0 is the first release of ACA-Py from the OpenWallet Foundation (OWF). The only reason for the release is to test out all of the release publishing actions now that we have moved the repo to its new home (https://github.com/openwallet-foundation/acapy). Almost all of the changes in the release are related to the move.
The move triggered some big changes for those with existing ACA-Py deployments resulting from the change in the GitHub organization (from Hyperledger to OWF) and source code name (from aries_cloudagent
to acapy_agent
). See the Release 1.1.0 breaking changes for the details.
For up to date details on what the repo move means for ACA-Py users, including steps for updating deployments, please follow the updates in GitHub Issue #3250. We'll keep you informed about the approach, timeline, and progress of the move. Stay tuned!
"},{"location":"CHANGELOG/#110rc1-deprecation-notices","title":"1.1.0rc1 Deprecation Notices","text":"The same deprecation notices from the 1.0.1 release about AIP 1.0 protocols still apply. The protocols remain in the 1.1.0 release, but will be moved out of the core and into plugins soon. Please review these notifications carefully!
"},{"location":"CHANGELOG/#110rc1-breaking-changes","title":"1.1.0rc1 Breaking Changes","text":"The only (but significant) breaking changes in 1.1.0 are related to the GitHub organization and project name changes. Specific impacts are:
aries_cloudagent
to acapy_agent
,acapy_agent
name, andacapy_agent
as the name for release container image artifacts.Anyone deploying ACA-Py should use this release to update their existing deployments. Since there are no other changes to ACA-Py, any issues found should relate back to those changes.
Please note that if and when the current LTS releases (0.11 and 0.12) have new releases, they will continue to use the aries_cloudagent
source folder, the existing locations for the PyPi and GHCR container image artifacts.
Updates related to the move and rename of the repository from the Hyperledger to OpenWallet Foundation GitHub organization
Release management pull requests:
Dependabot PRs
Release 1.0.1 will be the last release of ACA-Py from the Hyperledger organization before the repository moves to the OpenWallet Foundation (OWF). Soon after this release, the ACA-Py project and this repository will move to the OWF's GitHub organization as the new \"acapy\" project.
For details on what this means for ACA-Py users, including steps for updating deployments, please follow the updates in GitHub Issue #3250. We'll keep you informed about the approach, timeline, and progress of the move. Stay tuned!
The 1.0.1 release contains mostly internal clean ups, technical debt elimination, and a revision to the integration testing approach, incorporating the Aries Agent Test Harness tests in the ACA-Py continuous integration testing process. There are substantial enhancements in the management of keys and their use with VC-DI proofs, and web-based DID methods like did:web
. See the Wallet and Key Handling
updates in the categorized PR list below.
There are several important deprecation notices in this release in preparation for the next ACA-Py release. Please review these notifications carefully!
In an attempt to shorten the categorized list of PRs in the release, rather than listing all of the dependabot
PRs in the release, we've included a link to a list of those PRs.
ACA-Py will soon be moved from the Hyperledger GitHub organization to that of the OpenWallet Foundation. As such, there will be changes in the names and locations of the artifacts produced -- the PyPi project and the container images in the GitHub Container Registry. We will retain the ability to publish LTS releases of ACA-Py for the current LTS versions (0.11, 0.12) in the current locations. For details, guidance, timing, and progress on the move, please monitor the description of GitHub Issue #3250 that will be maintained throughout the process.
In the next ACA-Py release, we will be dropping from the core ACA-Py repository the AIP 1.0 RFC 0160 Connections, [RFC 0037 Issue Credentials v1.0] and [RFC 0037 Present Proof v1.0] DIDComm protocols. Each of the protocols will be moved to the [ACA-Py Plugins] repo. All deployers that use those protocols SHOULD update to the AIP 2.0 versions of those protocols (RFC 0434 Out of Band+RFC 0023 DID Exchange, RFC 0453 Issue Credential v2.0 and RFC 0454 Present Proof v2.0, respectively). Once the protocols are removed from ACA-Py, anyone still using those protocols MUST adjust their configuration to load those protocols from the respective plugins.
There are no breaking changes in ACA-Py Release 1.0.1.
"},{"location":"CHANGELOG/#101-categorized-list-of-pull-requests","title":"1.0.1 Categorized List of Pull Requests","text":"Wallet and Key Handling Updates
Credential Exchange Updates
OpenAPI Updates
Documentation and GHA Test Updates
Dependencies and Internal Fixes/Updates:
aries-cloudagent-bbs
Docker image #3175 rblaine95Release management pull requests:
Dependabot PRs
Release 1.0.0 is finally here! While Aries Cloud Agent Python has been used in production for several years, the maintainers have decided it is finally time to put a \"1.0\" tag on the project. The 1.0.0 release itself includes well over 100 PRs merged since Release 0.12.1. The vast majority of that work was in hardening the product in preparation for this 1.0.0 release. While there are a number of new features and a new Long Term Support (LTS) policy, the majority of the focus has been on eliminating technical debt and improving the underlying implementation. The full list of PRs in this release can be found below. here are the highlights of the release:
With the focus of the pull requests for this release on stabilizing the implementation, there were a few breaking changes:
bbs
) does not support the ARM architecture, and its inclusion in the default ACA-Py artifacts mean that developers using ARM-based hardware (such as Apple M1 Macs or later) cannot run ACA-Py \"out-of-the-box\". We feel that providing a better developer experience by supporting the ARM architecture is more important than BBS Signature support at this time. As such, we have removed the BBS dependency from the base ACA-Py artifacts and made it an add-on that those using ACA-Py with BBS must take extra steps to build into their own artifacts, as documented here.LTS Support Policy:
DIDComm and Connection Establishment updates/fixes:
Admin API, Startup, OpenAPI/Swagger Updates and Improvements:
Test and Demo updates:
Credential Exchange updates and fixes:
Upgrade Updates and Improvements:
Release management pull requests:
Documentation, code formatting, publishing process updates:
Dependencies and Internal Updates:
Dependabot PRs:
A patch release to add the verification of a linkage between an inbound message and its associated connection (if any) before processing the message. Also adds some additional cleanup/fix PRs from the main branch (see list below) that might be useful for deployments currently using Release 0.12.1 or 0.12.0.
"},{"location":"CHANGELOG/#0122-breaking-changes","title":"0.12.2 Breaking Changes","text":"There are no breaking changes in this release.
"},{"location":"CHANGELOG/#0122-categorized-list-of-pull-requests","title":"0.12.2 Categorized List of Pull Requests","text":"main
branch:Release 0.12.1 is a small patch to cleanup some edge case issues in the handling of Out of Band invitations, revocation notification webhooks, and connection querying uncovered after the 0.12.0 release. Fixes and improvements were also made to the generation of ACA-Py's OpenAPI specifications.
"},{"location":"CHANGELOG/#0121-breaking-changes","title":"0.12.1 Breaking Changes","text":"There are no breaking changes in this release.
"},{"location":"CHANGELOG/#0121-categorized-list-of-pull-requests","title":"0.12.1 Categorized List of Pull Requests","text":"Out of Band Invitations and Connection Establishment updates/fixes:
OpenAPI/Swagger updates, fixes and cleanups:
Test and Demo updates:
Credential Exchange updates and fixes:
Endorsement of Indy Transactions fixes:
Documentation publishing process updates:
Dependencies and Internal Updates:
Release management pull requests:
Release 0.12.0 is a large release with many new capabilities, feature improvements, upgrades, and bug fixes. Importantly, this release completes the ACA-Py implementation of Aries Interop Profile v2.0, and enables the elimination of unqualified DIDs. While only deprecated for now, all deployments of ACA-Py SHOULD move to using only fully qualified DIDs as soon as possible.
Much progress has been made on did:peer
support in this release, with the handling of inbound DID Peer 1 added, and inbound and outbound support for DID Peer 2 and 4. Much attention was also paid to making sure that the Peer DID and DID Exchange capabilities match those of Credo-TS (formerly Aries Framework JavaScript). The completion of that work eliminates the remaining places where \"unqualified\" DIDs were being used, and to enable the \"connection reuse\" feature in the Out of Band protocol when using DID Peer 2 and 4 DIDs in invitations. See the document Qualified DIDs for details about how to control the use of DID Peer 2 or 4 in an ACA-Py deployment, and how to eliminate the use of unqualified DIDs. Support for DID Exchange v1.1 has been added to ACA-Py, with support for DID Exchange v1.0 retained, and we've added support for DID Rotation.
Work continues towards supporting ledger agnostic AnonCreds, and the new Hyperledger AnonCreds Rust library. Some of that work is in this release, the rest will be in the next release.
Attention was given in the release to simplifying the handling of JSON-LD Data Integrity Verifiable Credentials.
An important change in this release is the re-organization of the ACA-Py documentation, moving the vast majority of the documents to the folders within the docs
folder -- a long overdue change that will allow us to soon publish the documents on https://aca-py.org directly from the ACA-Py repository, rather than from the separate aries-acapy-docs currently being used.
A big developer improvement is a revamping of the test handling to eliminate ~2500 warnings that were previously generated in the test suite. Nice job @ff137!
"},{"location":"CHANGELOG/#0120-breaking-changes","title":"0.12.0 Breaking Changes","text":"A deployment of this release that uses DID Peer 2 and 4 invitations may encounter problems interacting with agents deployed using older Aries protocols. Led by the Aries Working Group, the Aries community is encouraging the upgrade of all ecosystem deployments to accept all commonly used qualified DIDs, including DID Peer 2 and 4. See the document Qualified DIDs for more details about the transition to using only qualified DIDs. If deployments you interact with are still using unqualified DIDs, please encourage them to upgrade as soon as possible.
Specifically for those upgrading their ACA-Py instance that create Out of Band invitations with more than one handshake_protocol
, the protocol for the connection has been removed. See [Issue #2879] contains the details of this subtle breaking change.
New deprecation notices were added to ACA-Py on startup and in the OpenAPI/Swagger interface. Those added are listed below. As well, we anticipate 0.12.0 being the last ACA-Py release to include support for the previously deprecated Indy SDK.
did:sov:...
as a Protocol Doc URIhttps://didcomm.org/
.DID Handling and Connection Establishment Updates/Fixes
DID Peer and DID Resolver Updates and Fixes
AnonCreds and Ledger Agnostic AnonCreds RS Changes
Hyperledger Indy ledger related updates and fixes
JSON-LD Verifiable Credential/DIF Presentation Exchange updates
Credential Exchange (Issue, Present) Updates
Multitenancy Updates and Fixes
Other Fixes, Demo, DevContainer and Documentation Fixes
Dependencies and Internal Updates
CI/CD, Testing, and Developer Tools/Productivity Updates
Release management pull requests
A patch release to add a fix that ensures that sufficient webhook information is sent to an ACA-Py controller that is executing the AIP 2.0 Present Proof 2.0 Protocol.
"},{"location":"CHANGELOG/#0113-breaking-changes","title":"0.11.3 Breaking Changes","text":"There are no breaking changes in this release.
"},{"location":"CHANGELOG/#0113-categorized-list-of-pull-requests","title":"0.11.3 Categorized List of Pull Requests","text":"main
branch:A patch release to add the verification of a linkage between an inbound message and its associated connection (if any) before processing the message.
"},{"location":"CHANGELOG/#0112-breaking-changes","title":"0.11.2 Breaking Changes","text":"There are no breaking changes in this release.
"},{"location":"CHANGELOG/#0112-categorized-list-of-pull-requests","title":"0.11.2 Categorized List of Pull Requests","text":"main
branch:A patch release to update the aiohttp
library such that a reported serious vulnerability is addressed such that a crafted payload delivered to aiohttp
can put it in an infinite loop, which can be used for a low cost denial of service attack. CVE-2024-30251 describes the issue.
There are no breaking changes in this release. The only changed is the updated aiohttp
dependency.
Release 0.11.0 is a relatively large release of new features, fixes, and internal updates. 0.11.0 is planned to be the last significant update before we begin the transition to using the ledger agnostic AnonCreds Rust in a release that is expected to bring Admin/Controller API changes. We plan to do patches to the 0.11.x branch while the transition is made to using [Anoncreds Rust].
An important addition to ACA-Py is support for signing and verifying SD-JWT verifiable credentials. We expect this to be the first of the changes to extend ACA-Py to support OpenID4VC protocols.
This release and Release 0.10.5 contain a high priority fix to correct an issue with the handling of the JSON-LD presentation verifications, where the status of the verification of the presentation.proof
in the Verifiable Presentation was not included when determining the verification value (true
or false
) of the overall presentation. A forthcoming security advisory will cover the details. Anyone using JSON-LD presentations is recommended to upgrade to one of these versions of ACA-Py as soon as possible.
In the CI/CD realm, substantial changes were applied to the source base in switching from:
pip
to Poetry for packaging and dependency management,asynctest
to IsolatedAsyncioTestCase
and AsyncMock
objects now included in Python's builtin unittest
package for unit testing.These are necessary and important modernization changes, with the latter two triggering many (largely mechanical) changes to the codebase.
"},{"location":"CHANGELOG/#0110-breaking-changes","title":"0.11.0 Breaking Changes","text":"In addition to the impacts of the change for developers in switching from pip
to Poetry, the only significant breaking change is the (overdue) transition of ACA-Py to always use the new DIDComm message type prefix, changing the DID Message prefix from the old hardcoded did:sov:BzCbsNYhMrjHiqZDTUASHg;spec
to the new hardcoded https://didcomm.org
value, and using the new DIDComm MIME type in place of the old. The vast majority (all?) Aries deployments have long since been updated to accept both values, so this change just forces the use of the newer value in sending messages. In updating this, we retained the old configuration parameters most deployments were using (--emit-new-didcomm-prefix
and --emit-new-didcomm-mime-type
) but updated the code to set the configuration parameters to true
even if the parameters were not set. See [PR #2517].
The JSON-LD verifiable credential handling of JSON-LD contexts has been updated to pre-load the base contexts into the repository code so they are not fetched at run time. This is a security best practice for JSON-LD, and prevents errors in production when, from time to time, the JSON-LD contexts are unavailable because of outages of the web servers where they are hosted. See [PR #2587].
A Problem Report message is now sent when a request for a credential is received and there is no associated Credential Exchange Record. This may happen, for example, if an issuer decides to delete a Credential Exchange Record that has not be answered for a long time, and the holder responds after the delete. See [PR #2577].
"},{"location":"CHANGELOG/#0110-categorized-list-of-pull-requests","title":"0.11.0 Categorized List of Pull Requests","text":"Release 0.10.5 is a high priority patch release to correct an issue with the handling of the JSON-LD presentation verifications, where the status of the verification of the presentation.proof
in the Verifiable Presentation was not included when determining the verification value (true
or false
) of the overall presentation. A forthcoming security advisory will cover the details.
Anyone using JSON-LD presentations is recommended to upgrade to this version of ACA-Py as soon as possible.
"},{"location":"CHANGELOG/#0105-categorized-list-of-pull-requests","title":"0.10.5 Categorized List of Pull Requests","text":"Release 0.10.4 is a patch release to correct an issue with the handling of did:key
routing keys in some mediator scenarios, notably with the use of [Aries Framework Kotlin]. See the details in the PR and [Issue #2531 Routing for agents behind a aca-py based mediator is broken].
Thanks to codespree for raising the issue and providing the fix.
Aries Framework Kotlin
"},{"location":"CHANGELOG/#0104-categorized-list-of-pull-requests","title":"0.10.4 Categorized List of Pull Requests","text":"Release 0.10.3 is a patch release to add an upgrade process for very old versions of Aries Cloud Agent Python (circa 0.5.2). If you have a long time deployment of an issuer that uses revocation, this release could correct internal data (tags in secure storage) related to revocation registries. Details of the about the triggering problem can be found in [Issue #2485].
The upgrade is applied by running the following command for the ACA-Py instance to be upgraded:
./scripts/run_docker upgrade --force-upgrade --named-tag fix_issue_rev_reg
Release 0.10.2 is a patch release for 0.10.1 that addresses three specific regressions found in deploying Release 0.10.1. The regressions are to fix:
http
and ws
(websocket) service endpoint with the same ID cannot message that agent. A scenario is an ACA-Py issuer connecting to an Endorser with both http
and ws
service endpoints. The updates made in 0.10.1 to improve ACA-Py DID resolution did not account for this scenario and needed a tweak to work ([Issue #2474], [PR #2475]).Release 0.10.1 contains a breaking change, an important fix for a regression introduced in 0.8.2 that impacts certain deployments, and a number of fixes and updates. Included in the updates is a significant internal reorganization of the DID and connection management code that was done to enable more flexible uses of different DID Methods, such as being able to use did:web
DIDs for DIDComm messaging connections. The work also paves the way for coming updates related to support for did:peer
DIDs for DIDComm. For details on the change see [PR #2409], which includes some of the best pull request documentation ever created.
Release 0.10.1 has the same contents as 0.10.0. An error on PyPi prevented the 0.10.0 release from being properly uploaded because of an existing file of the same name. We immediately released 0.10.1 as a replacement.
The regression fix is for ACA-Py deployments that use multi-use invitations but do NOT use the --auto-accept-connection-requests
flag/processing. A change in 0.8.2 (PR [#2223]) suppressed an extra webhook event firing during the processing after receiving a connection request. An unexpected side effect of that change was that the subsequent webhook event also did not fire, and as a result, the controller did not get any event signalling a new connection request had been received via the multi-use invitation. The update in this release ensures the proper event fires and the controller receives the webhook.
See below for the breaking changes and a categorized list of the pull requests included in this release.
Updates in the CI/CD area include adding the publishing of a nightly
container image that includes any changes in the main branch since the last nightly
was published. This allows getting the \"latest and greatest\" code via a container image vs. having to install ACA-Py from the repository. In addition, Snyk scanning was added to the CI pipeline, and Indy SDK tests were removed from the pipeline.
[#2352] is a breaking change related to the storage of presentation exchange records in ACA-Py. In previous releases, presentation exchange protocol state data records were retained in ACA-Py secure storage after the completion of protocol instances. With this release the default behavior changes to deleting those records by default, unless the ----preserve-exchange-records
flag is set in the configuration. This extends the use of that flag that previously applied only to issue credential records. The extension matches the initial intention of the flag--that it cover both issue credential and present proof exchanges. The \"best practices\" for ACA-Py is that the controller (business logic) store any long-lasting business information needed for the service that is using the Aries Agent, and ACA-Py storage should be used only for data necessary for the operation of the agent. In particular, protocol state data should be held in ACA-Py only as long as the protocol is running (as it is needed by ACA-Py), and once a protocol instance completes, the controller should extract and store the business information from the protocol state before it is deleted from ACA-Py storage.
Release 0.10.1 has the same contents as 0.10.0. An error on PyPi prevented the 0.10.0 release from being properly uploaded because of an existing file of the same name. We immediately released 0.10.1 as a replacement.
"},{"location":"CHANGELOG/#090","title":"0.9.0","text":""},{"location":"CHANGELOG/#july-24-2023","title":"July 24, 2023","text":"Release 0.9.0 is an important upgrade that changes (PR [#2302]) the dependency on the now archived Hyperledger Ursa project to its updated, improved replacement, AnonCreds CL-Signatures. This important change is ONLY available when using Aries Askar as the wallet type, which brings in both [Indy VDR] and the CL-Signatures via the latest version of CredX from the indy-shared-rs repository. The update is NOT available to those that are using the Indy SDK. All new deployments of ACA-Py SHOULD use Aries Askar. Further, we strongly recommend that all deployments using the Indy SDK with ACA-Py upgrade their installation to use Aries Askar and the related components using the migration scripts available. An Indy SDK to Askar migration document added to the aca-py.org documentation site, and a deprecation warning added to the ACA-Py startup.
The second big change in this release is that we have upgraded the primary Python version from 3.6 to 3.9 (PR [#2247]). In this case, primary means that Python 3.9 is used to run the unit and integration tests on all Pull Requests. We also do nightly runs of the main branch using Python 3.10. As of this release we have dropped Python 3.6, 3.7 and 3.8, and introduced new dependencies that are not supported in those versions of Python. For those that use the published ACA-Py container images, the upgrade should be easily handled. If you are pulling ACA-Py into your own image, or a non-containerized environment, this is a breaking change that you will need to address.
Please see the next section for all breaking changes, and the subsequent section for a categorized list of all pull requests in this release.
"},{"location":"CHANGELOG/#breaking-changes","title":"Breaking Changes","text":"In addition to the breaking Python 3.6 to 3.9 upgrade, there are two other breaking changes that may impact some deployments.
[#2034] allows for additional flexibility in using public DIDs in invitations, and adds a restriction that \"implicit\" invitations must be proactively enabled using a flag (--requests-through-public-did
). Previously, such requests would always be accepted if --auto-accept
was enabled, which could lead to unexpected connections being established.
[#2170] is a change to improve message handling in the face of delivery errors when using a persistent queue implementation such as the ACA-Py Redis Plugin. If you are using the Redis plugin, you MUST upgrade to Redis Plugin Release 0.1.0 in conjunction with deploying this ACA-Py release. For those using their own persistent queue solution, see the PR [#2170] comments for information about changes you might need to make to your deployment.
"},{"location":"CHANGELOG/#categorized-list-of-pull-requests","title":"Categorized List of Pull Requests","text":"Release 0.8.2 contains a number of minor fixes and updates to ACA-Py, including the correction of a regression in Release 0.8.0 related to the use of plugins (see [#2255]). Highlights include making it easier to use tracing in a development environment to collect detailed performance information about what is going in within ACA-Py.
This release pulls in indy-shared-rs Release 3.3 which fixes a serious issue in AnonCreds verification, as described in issue [#2036], where the verification of a presentation with multiple revocable credentials fails when using Aries Askar and the other shared components. This issue occurs only when using Aries Askar and indy-credx Release 3.3.
An important new feature in this release is the ability to set some instance configuration settings at the tenant level of a multi-tenant deployment. See PR [#2233].
There are no breaking changes in this release.
"},{"location":"CHANGELOG/#categorized-list-of-pull-requests_1","title":"Categorized List of Pull Requests","text":"Version 0.8.1 is an urgent update to Release 0.8.0 to address an inability to execute the upgrade
command. The upgrade
command is needed for 0.8.0 Pull Request [#2116] - \"UPGRADE: Fix multi-use invitation performance\", which is useful for (at least) deployments of ACA-Py as a mediator. In the release, the upgrade process is revamped, and documented in Upgrading ACA-Py.
Key points about upgrading for those with production, pre-0.8.1 ACA-Py deployments:
upgrade
command. To date, there has been no need for this feature.Recent changes to Aries Askar have resulted in Askar supporting Postgres version 11 and greater. If you are on Postgres 10 or earlier and want to upgrade to use Askar, you must migrate your database to Postgres 10.
We have also noted that in some container orchestration environments such as Red Hat's OpenShift and possibly other Kubernetes distributions, Askar using Postgres versions greater than 14 do not install correctly. Please monitor [Issue #2199] for an update to this limitation. We have found that Postgres 15 does install correctly in other environments (such as in docker compose
setups).
upgrade
Command0.8.0 is a breaking change that contains all updates since release 0.7.5. It extends the previously tagged 1.0.0-rc1
release because it is not clear when the 1.0.0 release will be finalized. Many of the PRs in this release were previously included in the 1.0.0-rc1
release. The categorized list of PRs separates those that are new from those in the 1.0.0-rc1
release candidate.
There are not a lot of new Aries Framework features in this release, as the focus has been on cleanup and optimization. The biggest addition is the inclusion with ACA-Py of a universal resolver interface, allowing an instance to have both local resolvers for some DID Methods and a call out to an external universal resolver for other DID Methods. Another significant new capability is full support for Hyperledger Indy transaction endorsement for Authors and Endorsers. A new repo aries-endorser-service has been created that is a pre-configured instance of ACA-Py for use as an Endorser service.
A recently completed feature that is outside of ACA-Py is a script to migrate existing ACA-Py storage from Indy SDK format to Aries Askar format. This enables existing deployments to switch to using the newer Aries Askar components. For details see the converter in the aries-acapy-tools repository.
"},{"location":"CHANGELOG/#container-publishing-updated","title":"Container Publishing Updated","text":"With this release, a new automated process publishes container images in the Hyperledger container image repository. New images for the release are automatically published by the GitHubAction Workflows: publish.yml and publish-indy.yml. The actions are triggered when a release is tagged, so no manual action is needed. The images are published in the Hyperledger Package Repository under aries-cloudagent-python and a link to the packages added to the repositories main page (under \"Packages\"). Additional information about the container image publication process can be found in the document Container Images and Github Actions.
The ACA-Py container images are based on Python 3.6 and 3.9 slim-bullseye
images, and are designed to support linux/386 (x86)
, linux/amd64 (x64)
, and linux/arm64
. However, for this release, the publication of multi-architecture containers is disabled. We are working to enable that through the updating of some dependencies that lack that capability. There are two flavors of image built for each Python version. One contains only the Indy/Aries Shared Libraries only (Aries Askar, Indy VDR and Indy Shared RS, supporting only the use of --wallet-type askar
). The other (labelled indy
) contains the Indy/Aries shared libraries and the Indy SDK (considered deprecated). For new deployments, we recommend using the Python 3.9 Shared Library images. For existing deployments, we recommend migrating to those images.
Those currently using the container images published by BC Gov on Docker Hub should change to use those published to the Hyperledger Package Repository under aries-cloudagent-python.
"},{"location":"CHANGELOG/#breaking-changes-and-upgrades","title":"Breaking Changes and Upgrades","text":""},{"location":"CHANGELOG/#pr-2034-implicit-connections","title":"PR #2034 -- Implicit connections","text":"The break impacts existing deployments that support implicit connections, those initiated by another agent using a Public DID for this instance instead of an explicit invitation. Such deployments need to add the configuration parameter --requests-through-public-did
to continue to support that feature. The use case is that an ACA-Py instance publishes a public DID on a ledger with a DIDComm service
in the DIDDoc. Other agents resolve that DID, and attempt to establish a connection with the ACA-Py instance using the service
endpoint. This is called an \"implicit\" connection in RFC 0023 DID Exchange.
Updates the handling of \"unrevealed attributes\" during verification of AnonCreds presentations, allowing them to be used in a presentation, with additional data that can be checked if for unrevealed attributes. As few implementations of Aries wallets support unrevealed attributes in an AnonCreds presentation, this is unlikely to impact any deployments.
"},{"location":"CHANGELOG/#pr-2145-update-webhook-message-to-terse-form-by-default-added-startup-flag-debug-webhooks-for-full-form","title":"PR #2145 - Update webhook message to terse form by default, added startup flag --debug-webhooks for full form","text":"The default behavior in ACA-Py has been to keep the full text of all messages in the protocol state object, and include the full protocol state object in the webhooks sent to the controller. When the messages include an object that is very large in all the messages, the webhook may become too big to be passed via HTTP. For example, issuing a credential with a photo as one of the claims may result in a number of copies of the photo in the protocol state object and hence, very large webhooks. This change reduces the size of the webhook message by eliminating redundant data in the protocol state of the \"Issue Credential\" message as the default, and adds a new parameter to use the old behavior.
"},{"location":"CHANGELOG/#upgrade-pr-2116-upgrade-fix-multi-use-invitation-performance","title":"UPGRADE PR #2116 - UPGRADE: Fix multi-use invitation performance","text":"The way that multiuse invitations in previous versions of ACA-Py caused performance to degrade over time. An update was made to add state into the tag names that eliminated the need to scan the tags when querying storage for the invitation.
If you are using multiuse invitations in your existing (pre-0.8.0
deployment of ACA-Py, you can run an upgrade
to apply this change. To run upgrade from previous versions, use the following command using the 0.8.0
version of ACA-Py, adding you wallet settings:
aca-py upgrade <other wallet config settings> --from-version=v0.7.5 --upgrade-config-path ./upgrade.yml
Verifiable credential, presentation and revocation handling updates
Out of Band (OOB) and DID Exchange / Connection Handling / Mediator
--mediator-invitation
with OOB invitation + cleanup #1970 (shaangill025)DID Registration and Resolution related updates
Hyperledger Indy Endorser/Author Transaction Handling
Admin API Additions
Startup Command Line / Environment / YAML Parameter Updates
Internal Aries framework data handling updates
Unit, Integration, and Aries Agent Test Harness Test updates
Dependency, Python version, GitHub Actions and Container Image Changes
Demo and Documentation Updates
Release management pull requests
0.7.5 is a patch release to deal primarily to add PR #1881 DID Exchange in ACA-Py 0.7.4 with explicit invitations and without auto-accept broken. A couple of other PRs were added to the release, as listed below, and in Milestone 0.7.5.
"},{"location":"CHANGELOG/#list-of-pull-requests","title":"List of Pull Requests","text":"Existing multitenant JWTs invalidated when a new JWT is generated: If you have a pre-existing implementation with existing Admin API authorization JWTs, invoking the endpoint to get a JWT now invalidates the existing JWT. Previously an identical JWT would be created. Please see this comment on PR #1725 for more details.
0.7.4 is a significant release focused on stability and production deployments. As the \"patch\" release number indicates, there were no breaking changes in the Admin API, but a huge volume of updates and improvements. Highlights of this release include:
In addition, there are a significant number of general enhancements, bug fixes, documentation updates and code management improvements.
This release is a reflection of the many groups stressing ACA-Py in production environments, reporting issues and the resulting solutions. We also have a very large number of contributors to ACA-Py, with this release having PRs from 22 different individuals. A big thank you to all of those using ACA-Py, raising issues and providing solutions.
"},{"location":"CHANGELOG/#major-enhancements","title":"Major Enhancements","text":"A lot of work has been put into this release related to performance and load testing, with significant updates being made to the key \"shared component\" ACA-Py dependencies (Aries Askar, Indy VDR) and Indy Shared RS (including CredX). We now recommend using those components (by using --wallet-type askar
in the ACA-Py startup parameters) for new ACA-Py deployments. A wallet migration tool from indy-sdk storage to Askar storage is still needed before migrating existing deployment to Askar. A big thanks to those creating/reporting on stress test scenarios, and especially the team at LISSI for creating the aries-cloudagent-loadgenerator to make load testing so easy! And of course to the core ACA-Py team for addressing the findings.
The largest enhancement is in the area of the endorsing of Hyperledger Indy ledger transactions, enabling an instance of ACA-Py to act as an Endorser for Indy authors needing endorsements to write objects to an Indy ledger. We're working on an Aries Endorser Service based on the new capabilities in ACA-Py, an Endorser to be easily operated by an organization, ideally with a controller starter kit supporting a basic human and automated approvals business workflow. Contributions welcome!
A focus towards the end of the 0.7.4 development and release cycle was on the handling of AnonCreds revocation in ACA-Py. Most important, a production issue was uncovered where by an ACA-Py issuer's local Revocation Registry data could get out of sync with what was published on an Indy ledger, resulting in an inability to publish new RevRegEntry transactions -- making new revocations impossible. As a result, we have added some new endpoints to enable an update to the RevReg storage such that RevRegEntry transactions can again be published to the ledger. Other changes were added related to revocation in general and in the handling of tails files in particular.
The team has worked a lot on evolving the persistent queue (PQ) approach available in ACA-Py. We have landed on a design for the queues for inbound and outbound messages using a default in-memory implementation, and the ability to replace the default method with implementations created via an ACA-Py plugin. There are two concrete, out-of-the-box external persistent queuing solutions available for Redis and Kafka. Those ACA-Py persistent queue implementation repositories will soon be migrated to the Aries project within the Hyperledger Foundation's GitHub organization. Anyone else can implement their own queuing plugin as long as it uses the same interface.
Several new ways to control ACA-Py configurations were added, including new startup parameters, Admin API parameters to control instances of protocols, and additional web hook notifications.
A number of fixes were made to the Credential Exchange protocols, both for V1 and V2, and for both AnonCreds and W3C format VCs. Nothing new was added and there no changes in the APIs.
As well there were a number of internal fixes, dependency updates, documentation and demo changes, developer tools and release management updates. All the usual stuff needed for a healthy, growing codebase.
"},{"location":"CHANGELOG/#categorized-list-of-pull-requests_4","title":"Categorized List of Pull Requests","text":"Hyperledger Indy Endorser related updates:
Additions to the startup parameters, Admin API and Web Hooks
Persistent Queues
Credential Revocation and Tails File Handling
Issue Credential, Present Proof updates/fixes
Mediator updates and fixes
Multitenacy updates and fixes
Dependencies and internal code updates/fixes
transport_id
variable assignment back to outbound enqueue method #1776 (amanji)Documentation and Demo Updates
Code management and contributor/developer support updates
Release management pull requests
This release includes some new AIP 2.0 features out (Revocation Notification and Discover Features 2.0), a major new feature for those using Indy ledger (multi-ledger support), a new \"version upgrade\" process that automates updating data in secure storage required after a new release, and a fix for a critical bug in some mediator scenarios. The release also includes several new pieces of documentation (upgrade processing, storage database information and logging) and some other documentation updates that make the ACA-Py Read The Docs site useful again. And of course, some recent bug fixes and cleanups are included.
There is a BREAKING CHANGE for those deploying ACA-Py with an external outbound queue implementation (see PR #1501). As far as we know, there is only one organization that has such an implementation and they were involved in the creation of this PR, so we are not making this release a minor or major update. However, anyone else using an external queue should be aware of the impact of this PR that is included in the release.
For those that have an existing deployment of ACA-Py with long-lasting connection records, an upgrade is needed to use RFC 434 Out of Band and the \"reuse connection\" as the invitee. In PR #1453 (details below) a performance improvement was made when finding a connection for reuse. The new approach (adding a tag to the connection to enable searching) applies only to connections made using this ACA-Py release and later, and \"as-is\" connections made using earlier releases of ACA-Py will not be found as reuse candidates. A new \"Upgrade deployment\" capability (#1557, described below) must be executed to update your deployment to add tags for all existing connections.
The Supported RFCs document has been updated to reflect the addition of the AIP 2.0 RFCs for which support was added.
The following is an annotated list of PRs in the release, including a link to each PR.
--version
command line option #1589A mostly maintenance release with some key updates and cleanups based on community deployments and discovery. With usage in the field increasing, we're cleaning up edge cases and issues related to volume deployments.
The most significant new feature for users of Indy ledgers is a simplified approach for transaction authors getting their transactions signed by an endorser. Transaction author controllers now do almost nothing other than configuring their instance to use an Endorser, and ACA-Py takes care of the rest. Documentation of that feature is here.
A relatively minor maintenance release to address issues found since the 0.7.0 Release. Includes some cleanups of JSON-LD Verifiable Credentials and Verifiable Presentations
inject_or
method to dynamic injection framework to resolve typing ambiguity (#1376)Another significant release, this version adds support for multiple new protocols, credential formats, and extension methods.
This is a significant release of ACA-Py with several new features, as well as changes to the internal architecture in order to set the groundwork for using the new shared component libraries: indy-vdr, indy-credx, and aries-askar.
"},{"location":"CHANGELOG/#mediator-support","title":"Mediator support","text":"While ACA-Py had previous support for a basic routing protocol, this was never fully developed or used in practice. Starting with this release, inbound and outbound connections can be established through a mediator agent using the Aries Mediator Coordination Protocol. This work was initially contributed by Adam Burdett and Daniel Bluhm of Indicio on behalf of SICPA. Read more about mediation support.
"},{"location":"CHANGELOG/#multi-tenancy-support","title":"Multi-Tenancy support","text":"Started by BMW and completed by Animo Solutions and Anon Solutions on behalf of SICPA, this feature allows for a single ACA-Py instance to host multiple wallet instances. This can greatly reduce the resources required when many identities are being handled. Read more about multi-tenancy support.
"},{"location":"CHANGELOG/#new-connection-protocols","title":"New connection protocol(s)","text":"In addition to the Aries 0160 Connections RFC, ACA-Py now supports the Aries DID Exchange Protocol for connection establishment and reuse, as well as the Aries Out-of-Band Protocol for representing connection invitations and other pre-connection requests.
"},{"location":"CHANGELOG/#issue-credential-v2","title":"Issue-Credential v2","text":"This release includes an initial implementation of the Aries Issue Credential v2 protocol.
"},{"location":"CHANGELOG/#notable-changes-for-administrators","title":"Notable changes for administrators","text":"There are several new endpoints available for controllers as well as new startup parameters related to the multi-tenancy and mediator features, see the feature description pages above in order to make use of these features. Additional admin endpoints are introduced for the DID Exchange, Issue Credential v2, and Out-of-Band protocols.
When running aca-py start
, a new wallet will no longer be created unless the --auto-provision
argument is provided. It is recommended to always use aca-py provision
to initialize the wallet rather than relying on automatic behaviour, as this removes the need for repeatedly providing the wallet seed value (if any). This is a breaking change from previous versions.
When running aca-py provision
, an existing wallet will not be removed and re-created unless the --recreate-wallet
argument is provided. This is a breaking change from previous versions.
The logic around revocation intervals has been tightened up in accordance with Present Proof Best Practices.
The following are breaking changes to the internal APIs which may impact Python code extensions.
Manager classes generally accept a Profile
instance, where previously they accepted a RequestContext
.
Admin request handlers now receive an AdminRequestContext
as app[\"context\"]
. The current profile is available as app[\"context\"].profile
. The admin server now generates a unique context instance per request in order to facilitate multi-tenancy, rather than reusing the same instance for each handler.
In order to inject the BaseStorage
or BaseWallet
interfaces, a ProfileSession
must be used. Other interfaces can be injected at the Profile
or ProfileSession
level. This is obtained by awaiting profile.session()
for the current Profile
instance, or (preferably) using it as an async context manager:
python= async with profile.session() as session: storage = session.inject(BaseStorage)
inject
method of a context is no longer async
.https://didcomm.org
message type prefix (currently opt-in via the --emit-new-didcomm-prefix
flag) #705, #713accept-request
API method #715, #716names
#706create-proof
API method #700get-nym-role
action #671did:key:
handling in out-of-band protocol support #639names
and attribute-value specifications in present-proof protocol #587/credential/{id}
admin route #474>
, <
, <=
in addition to >=
) #457run_docker
/run_demo
scripts for Windows #357present-proof/create-request
admin endpoint for creating connectionless presentation requests #356connections/create-static
admin endpoint for creating static connections #354initiator
to connection record queries to ensure uniqueness in the case of a self-connection #161This is the first PyPI release. The history begins with the transfer of aca-py from bcgov to hyperledger.
Hyperledger is a collaborative project at The Linux Foundation. It is an open-source and open community project where participants choose to work together, and in that process experience differences in language, location, nationality, and experience. In such a diverse environment, misunderstandings and disagreements happen, which in most cases can be resolved informally. In rare cases, however, behavior can intimidate, harass, or otherwise disrupt one or more people in the community, which Hyperledger will not tolerate.
A Code of Conduct is useful to define accepted and acceptable behaviors and to promote high standards of professional practice. It also provides a benchmark for self evaluation and acts as a vehicle for better identity of the organization.
This code (CoC) applies to any member of the Hyperledger community -- developers, participants in meetings, teleconferences, mailing lists, conferences or functions, etc. Note that this code complements rather than replaces legal rights and obligations pertaining to any particular situation.
"},{"location":"CODE_OF_CONDUCT/#statement-of-intent","title":"Statement of Intent","text":"Hyperledger is committed to maintain a positive work environment. This commitment calls for a workplace where participants at all levels behave according to the rules of the following code. A foundational concept of this code is that we all share responsibility for our work environment.
"},{"location":"CODE_OF_CONDUCT/#code","title":"Code","text":"Treat each other with respect, professionalism, fairness, and sensitivity to our many differences and strengths, including in situations of high pressure and urgency.
Never harass or bully anyone verbally, physically or sexually.
Never discriminate on the basis of personal characteristics or group membership.
Communicate constructively and avoid demeaning or insulting behavior or language.
Seek, accept, and offer objective work criticism, and acknowledge properly the contributions of others.
Be honest about your own qualifications, and about any circumstances that might lead to conflicts of interest.
Respect the privacy of others and the confidentiality of data you access.
With respect to cultural differences, be conservative in what you do and liberal in what you accept from others, but not to the point of accepting disrespectful, unprofessional or unfair or unwelcome behavior or advances.
Promote the rules of this Code and take action (especially if you are in a leadership position) to bring the discussion back to a more civil level whenever inappropriate behaviors are observed.
Stay on topic: Make sure that you are posting to the correct channel and avoid off-topic discussions. Remember when you update an issue or respond to an email you are potentially sending to a large number of people.
Step down considerately: Members of every project come and go, and Hyperledger is no different. When you leave or disengage from the project, in whole or in part, we ask that you do so in a way that minimizes disruption to the project. This means you should tell people you are leaving and take the proper steps to ensure that others can pick up where you left off.
"},{"location":"CODE_OF_CONDUCT/#demeaning-behavior","title":"Demeaning Behavior","text":"is acting in a way that reduces another person's dignity, sense of self-worth or respect within the community.
"},{"location":"CODE_OF_CONDUCT/#discrimination","title":"Discrimination","text":"is the prejudicial treatment of an individual based on criteria such as: physical appearance, race, ethnic origin, genetic differences, national or social origin, name, religion, gender, sexual orientation, family or health situation, pregnancy, disability, age, education, wealth, domicile, political view, morals, employment, or union activity.
"},{"location":"CODE_OF_CONDUCT/#insulting-behavior","title":"Insulting Behavior","text":"is treating another person with scorn or disrespect.
"},{"location":"CODE_OF_CONDUCT/#acknowledgement","title":"Acknowledgement","text":"is a record of the origin(s) and author(s) of a contribution.
"},{"location":"CODE_OF_CONDUCT/#harassment","title":"Harassment","text":"is any conduct, verbal or physical, that has the intent or effect of interfering with an individual, or that creates an intimidating, hostile, or offensive environment.
"},{"location":"CODE_OF_CONDUCT/#leadership-position","title":"Leadership Position","text":"includes group Chairs, project maintainers, staff members, and Board members.
"},{"location":"CODE_OF_CONDUCT/#participant","title":"Participant","text":"includes the following persons:
"},{"location":"CODE_OF_CONDUCT/#respect","title":"Respect","text":"is the genuine consideration you have for someone (if only because of their status as participant in Hyperledger, like yourself), and that you show by treating them in a polite and kind way.
"},{"location":"CODE_OF_CONDUCT/#sexual-harassment","title":"Sexual Harassment","text":"includes visual displays of degrading sexual images, sexually suggestive conduct, offensive remarks of a sexual nature, requests for sexual favors, unwelcome physical contact, and sexual assault.
"},{"location":"CODE_OF_CONDUCT/#unwelcome-behavior","title":"Unwelcome Behavior","text":"Hard to define? Some questions to ask yourself are:
"},{"location":"CODE_OF_CONDUCT/#unwelcome-sexual-advance","title":"Unwelcome Sexual Advance","text":"includes requests for sexual favors, and other verbal or physical conduct of a sexual nature, where:
"},{"location":"CODE_OF_CONDUCT/#workplace-bullying","title":"Workplace Bullying","text":"is a tendency of individuals or groups to use persistent aggressive or unreasonable behavior (e.g. verbal or written abuse, offensive conduct or any interference which undermines or impedes work) against a co-worker or any professional relations.
"},{"location":"CODE_OF_CONDUCT/#work-environment","title":"Work Environment","text":"is the set of all available means of collaboration, including, but not limited to messages to mailing lists, private correspondence, Web pages, chat channels, phone and video teleconferences, and any kind of face-to-face meetings or discussions.
"},{"location":"CODE_OF_CONDUCT/#incident-procedure","title":"Incident Procedure","text":"To report incidents or to appeal reports of incidents, send email to Mike Dolan (mdolan@linuxfoundation.org) or Angela Brown (angela@linuxfoundation.org). Please include any available relevant information, including links to any publicly accessible material relating to the matter. Every effort will be taken to ensure a safe and collegial environment in which to collaborate on matters relating to the Project. In order to protect the community, the Project reserves the right to take appropriate action, potentially including the removal of an individual from any and all participation in the project. The Project will work towards an equitable resolution in the event of a misunderstanding.
"},{"location":"CODE_OF_CONDUCT/#credits","title":"Credits","text":"This code is based on the W3C\u2019s Code of Ethics and Professional Conduct with some additions from the Cloud Foundry\u2018s Code of Conduct.
"},{"location":"CONTRIBUTING/","title":"How to contribute","text":"You are encouraged to contribute to the repository by forking and submitting a pull request.
For significant changes, please open an issue first to discuss the proposed changes to avoid re-work.
(If you are new to GitHub, you might start with a basic tutorial and check out a more detailed guide to pull requests.)
Pull requests will be evaluated by the repository guardians on a schedule and if deemed beneficial will be committed to the main
branch. Pull requests should have a descriptive name, include a summary of all changes made in the pull request description, and include unit tests that provide good coverage of the feature or fix. A Continuous Integration (CI) pipeline is executed on all PRs before review and contributors are expected to address all CI issues identified. Where appropriate, PRs that impact the end-user and developer demos in the repo should include updates or extensions to those demos to cover the new capabilities.
If you would like to propose a significant change, please open an issue first to discuss the work with the community.
Contributions are made pursuant to the Developer's Certificate of Origin, available at https://developercertificate.org, and licensed under the Apache License, version 2.0 (Apache-2.0).
"},{"location":"CONTRIBUTING/#development-tools","title":"Development Tools","text":""},{"location":"CONTRIBUTING/#pre-commit","title":"Pre-commit","text":"A configuration for pre-commit is included in this repository. This is an optional tool to help contributors commit code that follows the formatting requirements enforced by the CI pipeline. Additionally, it can be used to help contributors write descriptive commit messages that can be parsed by changelog generators.
On each commit, pre-commit hooks will run to verify that the committed code complies with the linting and formatting rules enforced by ruff. To install the ruff checks:
pre-commit install\n
To install the commit message linter:
pre-commit install --hook-type commit-msg\n
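The commit message linter expects messages that changelog generators can parse. As an illustration only (the exact rules come from this repo's pre-commit configuration), a Conventional-Commits-style message looks like this:

```bash
# Hypothetical example: "type(scope): summary" style commit message,
# the general shape parsed by changelog generators.
git commit -m "fix(wallet): handle missing DID during key rotation"
```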
"},{"location":"LTS-Strategy/","title":"ACA-Py LTS Strategy","text":"This document defines the Long-term support (LTS) release strategy for ACA-Py. This document is inspired from the Hyperledger Fabric Release Strategy.
Long-term support definition from wikipedia.org:
Long-term support (LTS) is a product lifecycle management policy in which a stable release of computer software is maintained for a longer period of time than the standard edition.
LTS applies the tenets of reliability engineering to the software development process and software release life cycle. Long-term support extends the period of software maintenance; it also alters the type and frequency of software updates (patches) to reduce the risk, expense, and disruption of software deployment, while promoting the dependability of the software.
"},{"location":"LTS-Strategy/#motivation","title":"Motivation","text":"Many of those using ACA-Py rely upon the Docker images which are published nightly and the releases. These images contain the project dependencies/libraries which need constant security vulnerability monitoring and patching.
This is one of the factors which motivated setting up the LTS releases which requires the docker images to be scanned regularly and patching them for vulnerabilities.
In addition to this, administrators can expect the following of a LTS release:
Similarly, there are benefits to ACA-Py maintainers, code contributors, and the wider community:
ACA-Py uses the semver pattern of major, minor and patch releases major.minor.patch
e.g. 0.10.5, 0.11.1, 0.12.0, 0.12.1, 1.0.0, 1.0.1 etc. Prior to the 1.0.0 release of ACA-Py, \"major\" releases triggered only a \"minor\" version update, as permitted by the semver handling of the 0
major version indicator.
Because a new major release typically has large new features that may not yet be tried by the user community, and because deployments may lag in support of the new release, it is not expected that a new major release (such as 1.0.0
) will immediately be designated as an LTS release. Eventually, each major release (0.x.x, 1.x.x, 2.x.x etc.) will have at least one minor release designated by the ACA-Py maintainers as an "LTS release."
After an LTS release is designated, succeeding patch releases will occur as normal. When the ACA-Py maintainers decide that a new major or minor release is required, an "LTS" git branch for the most recent patch of the LTS line will be created -- likely named <minor>.lts
(e.g., 0.11.lts
, 1.1.lts
). Subsequent patches to that designated LTS release will occur from that branch -- often cherry-picked from the main
branch. There is no predefined timing for next minor/major version, with the decision based on semantic versioning considerations, such as whether API changes are needed, or deprecated capabilities need to be removed. Other considerations may also apply, for example significant upgrade steps may motivate a shift to a new major version.
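As a sketch of how an LTS patch might flow in practice (branch name from the examples above; the commit SHA is a placeholder):

```bash
# Backport a fix from main onto the designated LTS branch.
git fetch upstream
git checkout 0.11.lts                   # LTS branch, per the naming above
git cherry-pick <commit-sha-from-main>  # placeholder: the fix to backport
git push upstream 0.11.lts
```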
If a major release is not delivered for an extended period of time, the maintainers may designate a later minor release as the next LTS release, for example if 1.1
is the latest LTS release and there is no need to increment to 2.0
for several quarters, the maintainers may decide to designate 1.3
as an LTS release.
For LTS releases, 3rd digit patch releases will be provided for bug and security fixes approximately every three months based on the fixes (or lack thereof) to be applied. In order to ensure the stability of the LTS release and reduce the risk of functional regressions and bugs, significant new features and other changes occurring on the main
branch, and released in later minor or major versions will not be included in LTS patch releases.
When a new LTS release is designated, an "end-of-life" date, 9 months out, will be set for the prior LTS release. The overlap period is intended to give users a time window to upgrade their deployments. Users can expect LTS patch releases to address critical bugs and other fixes through that end-of-life date. If there are multiple, active LTS branches, ACA-Py maintainers will determine which fixes are backported to which of those branches.
"},{"location":"LTS-Strategy/#lts-to-lts-compatibility","title":"LTS to LTS Compatibility","text":"Features related to ACA-Py capabilities are documented in the Supported RFCs and features, in the ACA-Py ChangeLog, and in documents updated and added as part of each ACA-Py Release. LTS to LTS compatibility can be determined from reviewing those sources.
"},{"location":"LTS-Strategy/#upgrade-testing","title":"Upgrade Testing","text":"The ACA-Py project expects to test and provide guidance on all major/minor upgrades (e.g. 0.11 to 0.12). Other upgrade paths will not be tested and are not guaranteed to work. Consult the ChangeLog and its pointers to release-to-release upgrade information for guidance.
"},{"location":"LTS-Strategy/#prior-art-and-alternatives","title":"Prior art and alternatives","text":"While many open source projects provide LTS releases, there is no industry standard for LTS release approach. Projects use many different variants of LTS approaches to best suit their project's particular needs.
This release strategy was based on the following open source projects:
The Maintainers of this repo, defined as GitHub users with escalated privileges in the repo, are managed in the Hyperledger \"governance\" repo's access-control.yaml file. Consult that to see:
The actions covered below for becoming a maintainer and for moving a maintainer to emeritus status are made manifest through PRs to that file.
"},{"location":"MAINTAINERS/#the-duties-of-a-maintainer","title":"The Duties of a Maintainer","text":"Maintainers are expected to perform the following duties for this repository. The duties are listed in more or less priority order:
This community welcomes contributions. Interested contributors are encouraged to progress to become maintainers. To become a maintainer the following steps occur, roughly in order.
Being a maintainer is not a status symbol or a title to be carried indefinitely. It will occasionally be necessary and appropriate to move a maintainer to emeritus status. This can occur in the following situations:
The process to move a maintainer from active to emeritus status is comparable to the process for adding a maintainer, outlined above. In the case of voluntary resignation, the Pull Request can be merged following a maintainer issue approval. If the removal is for any other reason, the following steps SHOULD be followed:
Returning to active status from emeritus status uses the same steps as adding a new maintainer. Note that the emeritus maintainer already has the 5 required significant changes as there is no contribution time horizon for those.
"},{"location":"Managing-ACA-Py-Doc-Site/","title":"Managing the ACA-Py Documentation Site","text":"The ACA-Py documentation site is a MkDocs Material site generated from the Markdown files in this repository. Whenever the main
branch is updated or a release branch is (possibly temporarily) created, the publish-docs GitHub Action is fired, generating and publishing the documentation for the updated/created branch. The generation process generates the static set of HTML pages for the version in a folder in the gh-pages
branch of this repo. The static pages for each (other than the main
branch) version are not updated after creation. From time to time, some "extra" maintenance on the versions is needed, and this document describes those activities.
The generation process includes the following steps as part of the GitHub Action and mkdocs configuration.
When the GitHub Action fires, it runs a container that carries out the following steps:

1. Checks out the triggering branch: `main` or `docs-v<version>` (e.g. `docs-v1.1.0rc1`).
2. Runs a script to collect the Markdown files into the `docs` folder, and to update links that need to be changed to work on the generated site. This allows us to have links working using the GitHub UI and on the generated site.
3. Runs `mike`, which generates the mkdocs HTML pages and then captures and commits them into the `gh-pages` branch of the repository. It also adds (if needed) a reference to the new version in the site's "versions" dropdown, enabling users to pick the version of the docs they want to look at. The process uses the mkdocs.yml configuration file in generating the site.

When preparing for a release (or even just a PR to `main`) you can test the documentation site on your local clone using the following steps. The steps assume that you have installed `mkdocs` on your system. Guidance for that can be found in the MkDocs Material documentation.

1. Run the script `./scripts/prepmkdocs.sh; mkdocs`. Watch for warnings of missing documents and broken links in the startup messages. See the notes below for dealing with those issues.
2. When done, run the script `./scripts/prepmkdocs.sh clean`. This should undo the changes done by the script. You should check that there are no unexpected files changed that you don't want committed into the repo.

If there are missing documents, it may be that they are new Markdown files that have not yet been added to the mkdocs.yml navigation. Update that file to add the new files, and push the changes to the repository in a pull request. There are a few files listed below that we don't generate into the documentation site, and they can be ignored.
design/AnoncredsW3CCompatibility.md
design/UpgradeViaApi.md
features/W3cCredentials.md
If there are broken links, it is likely because there is a Markdown link that works using the GitHub UI (e.g. a relative link to a file in the repo) but doesn't on the generated site. In general there are two ways to fix these:
sed
commands so that the link differs in the GitHub UI and the generated site -- working in both. A pain, but sometimes needed...Documentation is added to the site for release candidates (RCs). When those release candidates are replaced, we want to remove their documentation version from the documentation site. In the current GitHub Action, the version documentation is created but never deleted, so the process to remove the documentation for the RC is manual. It would be nice to create a mechanism in the GitHub Action to do this automatically, but its not there yet.
To delete the documentation version, do the following:
git checkout -b gh-pages --track upstream/gh-pages
(or use whatever local branch name you want)git status
and make sure there are no changes in the branch -- e.g., new files that shouldn't be added to the gh-pages
branch. If there are any -- delete the files so they are not added.rm -rf 1.1.0rc1
versions.json
file and remove the reference to the RC release in the file.gh-pages
branch (don't PR them into main
!!).The automatic generation process from ACA-Py started with release 0.12.0. Unfortunately, we declared release 0.11.x to be an Long Term Support version and so we still need to add 0.11.x version documentation to the generated site. Here's the (lousy) process to do this. Typically, swcurran will do this and no one else needs to worry about it. But for completeness, here is the process:
git checkout -b gh-pages --track upstream/gh-pages
to checkout a local copy of the generated pages from that repo.git checkout -b gh-pages --track upstream/gh-pages
to create a local branch from which you will push a PR.versions.json
file to add the 0.11.x reference into the file.gh-pages
branch (don't PR them into main
!!).Ugly! The LTS for 0.11 ends in January 2025 and this process can be dropped.
"},{"location":"PUBLISHING/","title":"How to Publish a New Version","text":"The code to be published should be in the main
branch. Make sure that all the PRs to go in the release are merged, and decide on the release tag. Should it be a release candidate or the final tag, and should it be a major, minor or patch release, per semver rules.
Once ready to do a release, create a local branch that includes the following updates:
Create a local PR branch from an updated main
branch, e.g. \"1.1.0rc1\".
See if there are any Document Site mkdocs
changes needed. Run the script ./scripts/prepmkdocs.sh; mkdocs
. Watch the log, noting particularly if there are new documentation files that are in the docs folder and not referenced in the mkdocs navigation. If there is, update the mkdocs.yml
file as necessary. On completion of the testing, run the script ./scripts/prepmkdocs.sh clean
to undo the temporary changes to the docs. Be sure to do the last clean
step -- DO NOT MERGE THE TEMPORARY DOC CHANGES. For more details see the Managing the ACA-Py Documentation Site document.
Update the CHANGELOG.md to add the new release. Only create a new section when working on the first release candidate for a new release. When transitioning from one release candidate to the next, or to an official release, just update the title and date of the change log section.
Collect the details of the merged PRs included in this release -- a list of PR title, number, link to PR, author's github ID, and a link to the author's github account. Do not include dependabot
PRs. For those, we put a live link for the date range of the release (guidance below).
To generate the list, run the script genChangeLog.sh
command (requires you have gh and jq installed), with the date of the day before the last release. The day before is picked to make sure you get all of the changes. The script generates the list of all PRs, minus the dependabot ones, merged since the last release in the required markdown format for the ChangeLog. At the end of the list is some markdown for putting a link into the ChangeLog to see the dependabot PRs merged in the release.
Note: The output of the script is roughly what you need for the ChangeLog, but use your discretion in getting the list right, and making sure the dates for the dependabot PRs is correct. For example, when doing a follow up to an RC release, the date range in the dependabot link should be the day before the last non-RC release, which won't be generated correctly in this release.
From the root of the repository folder, run:
./scripts/genChangeLog.sh <date>\n
Leave off the date argument to get usage information.
The output should look like this -- and what you see in CHANGELOG.md:
- Only change interop testing fork on pull requests [\\#3218](https://github.com/openwallet-foundation/acapy/pull/3218) [jamshale](https://github.com/jamshale)\n - Remove the RC from the versions table [\\#3213](https://github.com/openwallet-foundation/acapy/pull/3213) [swcurran](https://github.com/swcurran)\n - Feature multikey management [\\#3246](https://github.com/openwallet-foundation/acapy/pull/3246) [PatStLouis](https://github.com/PatStLouis)\n
Once you have the list of PRs:
dependabot
PRs without listing them all, add to the end of the categorized list of PRs the two dependabot
lines of the script output (after the list of PRs). The text will look like this:- Dependabot PRs\n - [List of Dependabot PRs in this release](https://github.com/openwallet-foundation/acapy/pulls?q=is%3Apr+is%3Amerged+merged%3A2024-08-16..2024-09-16+author%3Aapp%2Fdependabot+)\n
dependabot
URL to make sure the full period between the previous non-RC release to the date of the non-RC release you are preparing.Include a PR in the list for this soon-to-be PR, initially with the \"next to be issued\" number for PRs/Issues. At the end output of the script is the highest numbered PR and issue. Your PR will be one higher than the highest of those two numbers. Note that you still might have to correct the number after you create the PR if someone sneaks an issue or PR in before you submit your PR.
Check to see if there are any other PRs that should be included in the release.
Update the ReadTheDocs in the /docs
folder by following the instructions in the ./UpdateRTD.md
file. That will likely add a number of new and modified files to the PR. Eliminate all of the errors in the generation process, either by mocking external dependencies or by fixing ACA-Py code. If necessary, create an issue with the errors and assign it to the appropriate developer. Experience has demonstrated to use that documentation generation errors should be fixed in the code.
Search across the repository for the previous version number and update it everywhere that makes sense. The CHANGELOG.md entry for the previous release is a likely exception, and the pyproject.toml
in the root MUST be updated. You can skip (although it won't hurt) to update the files in the open-api
folder as they will be automagically updated by the next step in publishing. The incremented version number MUST adhere to the Semantic Versioning Specification based on the changes since the last published release. For Release Candidates, the form of the tag is \"0.11.0rc2\". As of release 0.11.0
we have dropped the previously used -
in the release candidate version string to better follow the semver rules.
Regenerate openapi.json and swagger.json by running scripts/generate-open-api-spec
from within the acapy_agent
folder.
Command: cd acapy_agent;../scripts/generate-open-api-spec;cd ..
Folders may not be cleaned up by the script, so the following can be run, likely with sudo
-- rm -rf open-api/.build
. The folder is .gitignore
d, so there is not a danger they will be pushed, even if they are not deleted.
Double check all of these steps above, and then submit a PR from the branch. Add this new PR to CHANGELOG.md so that all the PRs are included. If there are still further changes to be merged, mark the PR as \"Draft\", repeat ALL of the steps again, and then mark this PR as ready and then wait until it is merged. It's embarrassing when you have to do a whole new release just because you missed something silly...I know!
Immediately after it is merged, create a new GitHub tag representing the version. The tag name and title of the release should be the same as the version in pyproject.toml. Use the \"Generate Release Notes\" capability to get a sequential listing of the PRs in the release, to complement the manually curated Changelog. Verify on PyPi that the version is published.
New images for the release are automatically published by the GitHubAction Workflows: publish.yml and publish-indy.yml. The actions are triggered when a release is tagged, so no manual action is needed. The images are published in the OpenWallet Foundation Package Repository under acapy and a link to the packages added to the repositories main page (under \"Packages\").
Additional information about the container image publication process can be found in the document Container Images and Github Actions.
In addition, the published documentation site https://aca-py.org should be automatically updated to include the new release via the publish-docs GitHub Action. Additional information about that process and some related maintenance activities that are needed from time to time can be found in the [Updating the ACA-Py Documentation Site] document.
When a new release is tagged, create a new branch at the same commit with the branch name in the format docs-v<version>
, for example, docs-v1.1.0rc1
. The creation of the branch triggers the execution of the publish-docs GitHub Action which generates the documentation for the new release, publishing it at https://aca-py.org. The GitHub Action also executes when the main
branch is updated via a merge, publishing an update to the main
branch documentation. Additional information about that documentation publishing process and some related maintenance activities that are needed from time to time can be found in the Managing the ACA-Py Documentation Site document.
Update the ACA-Py Read The Docs site by logging into Read The Docs administration site, building a new \"latest\" (main branch) and activating and building the new release by version ID. Appropriate permissions are required to publish the new documentation version.
If you think you have discovered a security issue in any of the Hyperledger projects, we'd love to hear from you. We will take all security bugs seriously and if confirmed upon investigation we will patch it within a reasonable amount of time and release a public security bulletin discussing the impact and credit the discoverer.
There are two ways to report a security bug. The easiest is to email a description of the flaw and any related information (e.g. reproduction steps, version) to security at hyperledger dot org.
The other way is to file a confidential security bug in our JIRA bug tracking system. Be sure to set the "Security Level" to "Security issue".
The process by which the Hyperledger Security Team handles security bugs is documented further in our Defect Response page on our wiki.
"},{"location":"UpdateRTD/","title":"Managing ACA-PyRead The Docs
Documentation","text":"This document describes how to maintain the Read The Docs
documentation that is generated from the ACA-Py code base. As the structure of the ACA-Py code evolves, the RTD files need to be regenerated and possibly updated, as described here.
To test generate and view the RTD documentation locally, you must install Sphinx and the Sphinx RTD theme. Follow the instructions on the respective pages to install and verify the installation on your system.
"},{"location":"UpdateRTD/#generate-module-files","title":"Generate Module Files","text":"To rebuild the project and settings from scratch (you'll need to move the generated index file up a level):
rm -rf generated; sphinx-apidoc -f -M -o ./generated ../acapy_agent/ $(find ../acapy_agent/ -name '*tests*')
Note that the find
command that is used to exclude any of the test
python files from the RTD documentation.
Check the git status
in your repo to see if the generator updates, adds or removes any existing RTD modules.
To auto-generate the module documentation locally run:
sphinx-build -b html -a -E -c ./ ./ ./_build\n
Once generated, go into the _build
folder and open index.html
in a browser. Note that the _build
is .gitignore
'd and so will not be part of a git push.
This is the hard part; looking for errors in docstrings added by devs. Some tips:
No module named 'async_timeout'
) can be solved by adding the module to the list of autodoc_mock_imports
in the conf.py
file in the ACA-Py docs
folder.docs/README.md
Other than that, please investigate and fix things that you find. If there are fixes, it's usually to adhere to the rules around processing docstrings, and especially around JSON samples.
"},{"location":"UpdateRTD/#checking-for-missing-modules","title":"Checking for missing modules","text":"The file index.rst
in the ACA-Py docs
folder drive the RTD generation. It picks up all the modules in the source code, starting from the root ../acapy_agent
folder. However, some modules are not picked up automatically from the root and have to be manually added to index.rst
. To do that:
ls generated | grep \"acapy_agent.[a-z]*.rst\"
If any are missing, you likely need to add them to the index.rst
file in the toctree
section of the file. You will see there are already several instances of that, notably \"connections\" and \"protocols\".
The RTD documentation is not currently auto-generated, so a manual re-generation of the documentation is still required.
TODO: Automate this when new tags are applied to the repository.
"},{"location":"aca-py.org/","title":"Welcome!","text":"Welcome to the ACA-Py documentation site! On this site you will find documentation for all of the recent releases of ACA-Py -- starting from LTS release 0.11.0.
[!NOTE] ACA-Py has recently moved to the OpenWallet Foundation. ACA-Py used to be called \"Aries Cloud Agent Python\", but in the move to OWF, we dropped the \"Aries\" part, and made the acronym the name. So ACA-Py it is!
All of the documentation here is extracted from the ACA-Py repository. If you want to contribute to the documentation, please start there.
Ready to go? Scan the tabs in the page header to find the documentation you need now!
"},{"location":"aca-py.org/#code-internals-documentation","title":"Code Internals Documentation","text":"In addition to this documentation site, the ACA-Py community also maintains an ACA-Py internals documentation site. The internals documentation consists of the docstrings
extracted from the ACA-Py Python code and covers all of the (non-test) modules in the codebase. Check it out on the ACA-Py ReadTheDocs site. As with this site, the ReadTheDocs documentation is version specific.
Got questions?
#aca-py
channel.Put any assets (images, source for images, videos, etc.) in this folder to be referenced in the various documents for this repo.
"},{"location":"assets/#plantuml-source-and-images","title":"Plantuml Source and Images","text":"Plantuml diagrams are stored in this folder in source form in files ending in .puml
and are generated manually using the ./genPlantuml
script. The script uses a docker image from docker-hub and can be run without downloading any dependencies.
If you don't want to use the script, download plantuml and a command line utility and use that for the plantuml generation. I preferred not having any dependencies used (other than docker) and couldn't find a nice way to run plantuml headless from a command line.
"},{"location":"assets/#to-do","title":"To Do","text":"It would be better to use a local Dockerfile
vs. one found on Docker Hub. The one I did find was simple and straight forward.
I couldn't tell if the svg generation was working so just went with png. Not sure which would be better.
"},{"location":"demo/","title":"ACA-Py Demos","text":"There are several demos available for ACA-Py mostly (but not only) aimed at developers learning how to deploy an instance of the agent and an ACA-Py controller to implement an application.
"},{"location":"demo/#table-of-contents","title":"Table of Contents","text":"The Alice/Faber demo is the (in)famous first verifiable credentials demo. Alice, a former student of Faber College (\"Knowledge is Good\"), connects with the College, is issued a credential about her degree and then is asked by the College for a proof. There are a variety of ways of running the demo. The easiest is in your browser using a site (\"Play with VON\") that let's you run docker containers without installing anything. Alternatively, you can run locally on docker (our recommendation), or using python on your local machine. Each approach is covered below.
"},{"location":"demo/#running-in-a-browser","title":"Running in a Browser","text":"In your browser, go to the docker playground service Play with Docker. On the title screen, click \"Start\". On the next screen, click (in the left menu) \"+Add a new instance\". That will start up a terminal in your browser. Run the following commands to start the Faber agent:
git clone https://github.com/openwallet-foundation/acapy\ncd acapy/demo\nLEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber\n
Now to start Alice's agent. Click the \"+Add a new instance\" button again to open another terminal session. Run the following commands to start Alice's agent:
git clone https://github.com/openwallet-foundation/acapy\ncd acapy/demo\nLEDGER_URL=http://test.bcovrin.vonx.io ./run_demo alice\n
Alice's agent is now running.
Jump to the Follow the Script section below for further instructions.
"},{"location":"demo/#running-in-docker","title":"Running in Docker","text":"Running the demo in docker requires having a von-network
(a Hyperledger Indy public ledger sandbox) instance running in docker locally. See the VON Network Tutorial for guidance on starting and stopping your own local Hyperledger Indy instance.
Open three bash
shells. For Windows users, git-bash
is highly recommended. bash is the default shell in Linux and Mac terminal sessions. For Mac users on the newer M\u00bd/3 Apple Silicon devices, make sure that you install Apple's Rosetta 2 software, using these installation instructions from Apple, and this even more useful guidance on how to install Rosetta 2 from the command line which amounts to running this MacOS command: softwareupdate --install-rosetta
.
In the first terminal window, start von-network
by following the Building and Starting instructions.
In the second terminal, change directory into demo
directory of your clone of the ACA-Py repository. Start the faber
agent by issuing the following command:
./run_demo faber\n
In the third terminal, change directory into demo
directory of your clone of the ACA-Py repository. Start the alice
agent by issuing the following command:
./run_demo alice\n
Jump to the Follow the Script section below for further instructions.
"},{"location":"demo/#running-locally","title":"Running Locally","text":"The following is an approach to to running the Alice and Faber demo using Python3 running on a bare machine. There are other ways to run the components, but this covers the general approach.
We don't recommend this approach if you are just trying this demo, as you will likely run into issues with the specific setup of your machine.
"},{"location":"demo/#installing-prerequisites","title":"Installing Prerequisites","text":"We assume you have a running Python 3 environment. To install the prerequisites specific to running the agent/controller examples in your Python environment, run the following command from this repo's demo
folder. The precise command to run may vary based on your Python environment setup.
pip3 install -r demo/requirements.txt\n
While that process will include the installation of the Indy python prerequisite, you still have to build and install the libindy
code for your platform. Follow the installation instructions in the indy-sdk repo for your platform.
Start a local von-network
Hyperledger Indy network running in Docker by following the VON Network Building and Starting instructions.
We strongly recommend you use Docker for the local Indy network until you really, really need to know the details of running an Indy Node instance on a bare machine.
"},{"location":"demo/#genesis-file-handling","title":"Genesis File handling","text":"Assuming you followed our advice and are using a VON Network instance of Hyperledger Indy, you can ignore this section. If you started the Indy ledger without using VON Network, this information might be helpful.
An Aries agent (or other client) connecting to an Indy ledger must know the contents of the genesis
file for the ledger. The genesis file lets the agent/client know the IP addresses of the initial nodes of the ledger, and the agent/client sends ledger requests to those IP addresses. When using the indy-sdk
ledger, look for the instructions in that repo for how to find/update the ledger genesis file, and note the path to that file on your local system.
The environment variable GENESIS_FILE
is used to let the Aries demo agents know the location of the genesis file. Use the path to that file as value of the GENESIS_FILE
environment variable in the instructions below. You might want to copy that file to be local to the demo so the path is shorter.
The demo uses the postgres database the wallet persistence. Use the Docker Hub certified postgres image to start up a postgres instance to be used for the wallet storage:
docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d -p 5432:5432 postgres -c 'log_statement=all' -c 'logging_collector=on' -c 'log_destination=stderr'\n
"},{"location":"demo/#optional-run-a-von-network-ledger-browser","title":"Optional: Run a von-network ledger browser","text":"If you followed our advice and are using a VON Network instance of Hyperledger Indy, you can ignore this section, as you already have a Ledger browser running, accessible on http://localhost:9000.
If you started the Indy ledger without using VON Network, and you want to be able to browse your local ledger as you run the demo, clone the von-network repo, go into the root of the cloned instance and run the following command, replacing the /path/to/local-genesis.txt
with a path to the same genesis file as was used in starting the ledger.
GENESIS_FILE=/path/to/local-genesis.txt PORT=9000 REGISTER_NEW_DIDS=true python -m server.server\n
"},{"location":"demo/#run-the-alice-and-faber-controllersagents","title":"Run the Alice and Faber Controllers/Agents","text":"With the rest of the pieces running, you can run the Alice and Faber controllers and agents. To do so, cd
into the demo
folder your clone of this repo in two terminal windows.
If you are using a VON Network instance of Hyperledger, run the following commands:
DEFAULT_POSTGRES=true python3 -m runners.faber --port 8020\n
DEFAULT_POSTGRES=true python3 -m runners.alice --port 8030\n
If you started the Indy ledger without using VON Network, use the following commands, replacing the /path/to/local-genesis.txt
with the one for your configuration.
GENESIS_FILE=/path/to/local-genesis.txt DEFAULT_POSTGRES=true python3 -m runners.faber --port 8020\n
GENESIS_FILE=/path/to/local-genesis.txt DEFAULT_POSTGRES=true python3 -m runners.alice --port 8030\n
Note that Alice and Faber will each use 5 ports, e.g., using the parameter ... --port 8020
actually uses ports 8020 through 8024. Feel free to use different ports if you want.
Everything running? See the Follow the Script section below for further instructions.
If the demo fails with an error that references the genesis file, a timeout connecting to the Indy Pool, or an Indy 307
error, it's likely a problem with the genesis file handling. Things to check:
indy-sdk
. Is it running properly?/path/to/local-genesis.txt
file correct in your start commands?307
error.With both the Alice and Faber agents started, go to the Faber terminal window. The Faber agent has created and displayed an invitation. Copy this invitation and paste it at the Alice prompt. The agents will connect and then show a menu of options:
Faber:
(1) Issue Credential\n (1a) Set Credential Type (indy)\n (2) Send Proof Request\n (3) Send Message\n (4) Create New Invitation\n (T) Toggle tracing on credential/proof exchange\n (X) Exit?\n
Alice:
(3) Send Message\n (4) Input New Invitation\n (X) Exit?\n
"},{"location":"demo/#exchanging-messages","title":"Exchanging Messages","text":"Feel free to use the \"3\" option to send messages back and forth between the agents. Fun, eh? Those are secure, end-to-end encrypted messages.
"},{"location":"demo/#issuing-and-proving-credentials","title":"Issuing and Proving Credentials","text":"When ready to test the credentials exchange protocols, go to the Faber prompt, enter \"1\" to send a credential, and then \"2\" to request a proof.
You don't need to do anything with Alice's agent - her agent is implemented to automatically receive credentials and respond to proof requests.
Note there is an option \"2a\" to initiate a connectionless proof - you can execute this option but it will only work end-to-end when connecting to Faber from a mobile agent.
"},{"location":"demo/#additional-options-in-the-alicefaber-demo","title":"Additional Options in the Alice/Faber demo","text":"You can enable support for various ACA-Py features by providing additional command-line arguments when starting up alice
or faber
.
Note that when the controller starts up the agent, it prints out the ACA-Py startup command with all parameters - you can inspect this command to see what parameters are provided in each case. For more details on the parameters, just start ACA-Py with the --help
parameter, for example:
./scripts/run_docker start --help\n
"},{"location":"demo/#revocation","title":"Revocation","text":"To enable support for revoking credentials, run the faber
demo with the --revocation
option:
./run_demo faber --revocation\n
Note that you don't specify this option with alice
because it's only applicable for the credential issuer
(who has to enable revocation when creating a credential definition, and explicitly revoke credentials as appropriate; alice doesn't have to do anything special when revocation is enabled).
You need to run an AnonCreds revocation registry tails server in order to support revocation - the details are described in the Alice gets a Phone demo instructions.
Faber will setup support for revocation automatically, and you will see an extra option in faber's menu to revoke a credential:
(1) Issue Credential\n (1a) Set Credential Type (indy)\n (2) Send Proof Request\n (3) Send Message\n (4) Create New Invitation\n (5) Revoke Credential\n (6) Publish Revocations\n (7) Rotate Revocation Registry\n (8) List Revocation Registries\n (T) Toggle tracing on credential/proof exchange\n (X) Exit?\n ```\n\nWhen you issue a credential, make a note of the `Revocation registry ID` and `Credential revocation ID`:\n
Faber | Revocation registry ID: WGmUNAdH2ZfeGvacFoMVVP:4:WGmUNAdH2ZfeGvacFoMVVP:3:CL:38:Faber.Agent.degree_schema:CL_ACCUM:15ca49ed-1250-4608-9e8f-c0d52d7260c3 Faber | Credential revocation ID: 1 When you revoke a credential you will need to provide those values:\n
[\u00bd/\u00be/\u215a/\u215e/T/X] 5 Enter revocation registry ID: WGmUNAdH2ZfeGvacFoMVVP:4:WGmUNAdH2ZfeGvacFoMVVP:3:CL:38:Faber.Agent.degree_schema:CL_ACCUM:15ca49ed-1250-4608-9e8f-c0d52d7260c3 Enter credential revocation ID: 1 Publish now? [Y/N]: y
Note that you need to Publish the revocation information to the ledger. Once you've revoked a credential any proof which uses this credential will fail to verify. \n\nRotating the revocation registry will decommission any \"ready\" registry records and create 2 new registry records. You can view in the logs as the records are created and transition to 'active'. There should always be 2 'active' revocation registries - one working and one for hot-swap. Note that revocation information can still be published from decommissioned registries.\n\nYou can also list the created registries, filtering by current state: 'init', 'generated', 'posted', 'active', 'full', 'decommissioned'.\n\n### DID Exchange\n\nYou can enable DID Exchange using the `--did-exchange` parameter for the `alice` and `faber` demos.\n\nThis will use the new DID Exchange protocol when establishing connections between the agents, rather than the older Connection protocol. There is no other affect on the operation of the agents.\n\nWith DID Exchange, you can also enable use of the inviter's public DID for invitations, multi-use invitations, connection re-use, and use of qualified DIDs:\n\n- `--public-did-connections` - use the inviter's public DID in invitations, and allow use of implicit invitations\n- `--reuse-connections` - support connection re-use (invitee will reuse an existing connection if it uses the same DID as in the new invitation)\n- `--multi-use-invitations` - inviter will issue multi-use invitations\n- `--emit-did-peer-4` - participants will prefer use of did:peer:4 for their pairwise connection DIDs\n- `--emit-did-peer-2` - participants will prefer use of did:peer:2 for their pairwise connection DIDs\n\n### Endorser\n\nThis is described in [Endorser.md](Endorser.md)\n\n### Mediation\n\nTo enable mediation, run the `alice` or `faber` demo with the `--mediation` option:\n\n```bash\n./run_demo faber --mediation\n
This will start up a \"mediator\" agent with Alice or Faber and automatically set the alice/faber connection to use the mediator.
"},{"location":"demo/#multi-ledger","title":"Multi-ledger","text":"To enable multiple ledger mode, run the alice
or faber
demo with the --multi-ledger
option:
./run_demo faber --multi-ledger\n
The configuration file for setting up multiple ledgers (for the demo) can be found at ./demo/multiple_ledger_config.yml
.
To enable support for multi-tenancy, run the alice
or faber
demo with the --multitenant
option:
./run_demo faber --multitenant\n
(This option can be used with both (or either) alice
and/or faber
.)
You will see an additional menu option to create new sub-wallets (or they can be considered to be \"virtual agents\").
Faber:
(1) Issue Credential\n (1a) Set Credential Type (indy)\n (2) Send Proof Request\n (3) Send Message\n (4) Create New Invitation\n (W) Create and/or Enable Wallet\n (T) Toggle tracing on credential/proof exchange\n (X) Exit?\n
Alice:
(3) Send Message\n (4) Input New Invitation\n (W) Create and/or Enable Wallet\n (X) Exit?\n
When you create a new wallet, you just need to provide the wallet name. (If you provide the name of an existing wallet then the controller will \"activate\" that wallet and make it the current wallet.)
[1/2/3/4/W/T/X] w\n\nEnter wallet name: new_wallet_12\n\nFaber | Register or switch to wallet new_wallet_12\nFaber | Created new profile\nFaber | Profile backend: indy\nFaber | Profile name: new_wallet_12\nFaber | No public DID\n... etc\n
Note that faber
will create a public DID for this wallet, and will create a schema and credential definition.
Once you have created a new wallet, you must establish a connection between alice
and faber
(remember that this is a new \"virtual agent\" and doesn't know anything about connections established for other \"agents\").
In faber, create a new invitation:
[1/2/3/4/W/T/X] 4\n\n(... creates a new invitation ...)\n
In alice, accept the invitation:
[1/2/3/4/W/T/X] 4\n\n(... enter the new invitation string ...)\n
You can inspect the additional multi-tenancy admin API's (i.e. the \"agency API\" by opening either agent's swagger page in your browser:
Show me a screenshot - multi-tenancy via admin APINote that with multi-tenancy enabled:
Documentation on ACA-Py's multi-tenancy support can be found here.
"},{"location":"demo/#multi-tenancy-with-mediation","title":"Multi-tenancy with Mediation!!!","text":"There are two options for configuring mediation with multi-tenancy, documented here.
This demo implements option #2 - each sub-wallet is configured with a separate connection to the mediator.
Run the demo (Alice or Faber) specifying both options:
./run_demo faber --multitenant --mediation\n
This works exactly as the vanilla multi-tenancy, except that all connections are mediated.
"},{"location":"demo/#other-environment-settings","title":"Other Environment Settings","text":"The agents run on a pre-defined set of ports, however occasionally your local system may already be using one of these ports. (For example MacOS recently decided to use 8021 for the ftp proxy service.)
To override the default port settings:
AGENT_PORT_OVERRIDE=8010 ./run_demo faber\n
(The agent requires up to 10 available ports.)
To pass extra arguments to the agent (for example):
DEMO_EXTRA_AGENT_ARGS=\"[\\\"--emit-did-peer-2\\\"]\" ./run_demo faber --did-exchange --reuse-connections\n
Additionally, separating the build and run functionalities in the script allows for smoother development and debugging processes. With the mounting of volumes from the host into the Docker container, code changes can be automatically reloaded without the need to repeatedly build the demo.
Build Command:
./demo/run_demo build alice --wallet-type askar-anoncreds --events\n
Run Command:
./demo/run_demo run alice --wallet-type askar-anoncreds --events\n
"},{"location":"demo/#learning-about-the-alicefaber-code","title":"Learning about the Alice/Faber code","text":"These Alice and Faber scripts (in the demo/runners
folder) implement the controller and run the agent as a sub-process (see the documentation for aca-py
). The controller publishes a REST service to receive web hook callbacks from their agent. Note that this architecture, running the agent as a sub-process, is a variation on the documented architecture of running the controller and agent as separate processes/containers.
The controllers for this demo can be found in the alice.py and faber.py files. Alice and Faber are instances of the agent class found in agent.py.
"},{"location":"demo/#openapi-swagger-demo","title":"OpenAPI (Swagger) Demo","text":"Developing an ACA-Py controller is much like developing a web app that uses a REST API. As you develop, you will want an easy way to test out the behaviour of the API. That's where the industry-standard OpenAPI (aka Swagger) UI comes in. ACA-Py (optionally) exposes an OpenAPI UI in ACA-Py that you can use to learn the ins and outs of the API. This ACA-Py OpenAPI demo shows how you can use the OpenAPI UI with an ACA-Py agent by walking through the connecting, issuing a credential, and presenting a proof sequence.
"},{"location":"demo/#performance-demo","title":"Performance Demo","text":"Another example in the demo/runners
folder is performance.py, that is used to test out the performance of interacting agents. The script starts up agents for Alice and Faber, initializes them, and then runs through an interaction some number of times. In this case, Faber issues a credential to Alice 300 times.
To run the demo, make sure that you shut down any running Alice/Faber agents. Then, follow the same steps to start the Alice/Faber demo, but:
faber
) with performance
.alice
) at all.The script starts both agents, runs the performance test, spits out performance results and shuts down the agents. Note that this is just one demonstration of how performance metrics tracking can be done with ACA-Py.
A second version of the performance test can be run by adding the parameter --routing
to the invocation above. The parameter triggers the example to run with Alice using a routing agent such that all messages pass through the routing agent between Alice and Faber. This is a good, simple example of how routing can be implemented with DIDComm agents.
You can also run the demo against a postgres database using the following:
./run_demo performance --arg-file demo/postgres-indy-args.yml\n
(Obviously you need to be running a postgres database - the command to start postgres is in the yml file provided above.)
You can tweak the number of credentials issued using the --count
and --batch
parameters, and you can run against an Askar database using the --wallet-type askar
option (or run using indy-sdk using --wallet-type indy
).
An example full set of options is:
./run_demo performance --arg-file demo/postgres-indy-args.yml -c 10000 -b 10 --wallet-type askar\n
Or:
./run_demo performance --arg-file demo/postgres-indy-args.yml -c 10000 -b 10 --wallet-type indy\n
"},{"location":"demo/#coding-challenge-adding-acme","title":"Coding Challenge: Adding ACME","text":"Now that you have a solid foundation in using ACA-Py, time for a coding challenge. In this challenge, we extend the Alice-Faber command line demo by adding in ACME Corp, a place where Alice wants to work. The demo adds:
The framework for the code is in the acme.py file, but the code is incomplete. Using the knowledge you gained from running demo and viewing the alice.py and faber.py code, fill in the blanks for the code. When you are ready to test your work:
All done? Checkout how we added the missing code segments here.
"},{"location":"demo/ACA-Py-Workshop/","title":"ACA-Py and AnonCreds Workshop Using Traction Sandbox","text":""},{"location":"demo/ACA-Py-Workshop/#introduction","title":"Introduction","text":"Welcome! This workshop contains a sequence of four labs that gets you from nothing to issuing, receiving, holding, requesting, presenting, and verifying AnonCreds Verifiable Credentials--no technical experience required! If you just walk through the steps exactly as laid out, it only takes about 20 minutes to complete the whole process. Of course, we hope you get curious, experiment, and learn a lot more about the information provided in the labs.
To run the labs, you\u2019ll need an ACA-Py agent to be able to issue and verify verifiable credentials. For that, we're providing your with your very own tenant in a BC Gov \"sandbox\" deployment of an open source tool called Traction, a managed, production-ready, multi-tenant decentralized trust agent built on ACA-Py. Sandbox in this context means that you can do whatever you want with your tenant agent, but we make no promises about the stability of the environment (but it\u2019s pretty robust, so chances are, things will work...), **and on the 1st and 15th of each month, we\u2019ll reset the entire sandbox and all your work will be gone \u2014 poof! **Keep that in mind, as you use the Traction sandbox. We recommend you keep a notebook at your side, tracking the important learnings you want to remember. As you create code that uses your sandbox agent make sure you create simple-to-update configurations so that after a reset, you can create a new tenant agent, recreate the objects you need (each of which will have new identifiers), update your configuration, and off you go.
The four labs in this workshop are laid out as follows:
Once you are done with the labs, there are suggestions for next steps for developers, such as experimenting with the Traction/ACA-Py
Jump in!
"},{"location":"demo/ACA-Py-Workshop/#lab-1-getting-a-traction-tenant-agent-and-mobile-wallet","title":"Lab 1: Getting a Traction Tenant Agent and Mobile Wallet","text":"Let\u2019s start by getting your two agents \u2014 an Aries Mobile Wallet and an Aries Issuer/Verifier agent.
"},{"location":"demo/ACA-Py-Workshop/#lab-1-steps-to-follow","title":"Lab 1: Steps to Follow","text":"Action
in the Endorser section.active
.active
, it's possible that your wallet was not able to message back to your Traction Tenant. Check your wallet's internet connection. That's it--you should be ready to start issuing and receiving verifiable credentials.
"},{"location":"demo/ACA-Py-Workshop/#lab-2-getting-ready-to-be-an-issuer","title":"Lab 2: Getting Ready To Be An Issuer","text":"In this lab we will use our Traction Tenant agent to create and publish an AnonCreds Schema object (or two), and then use that Schema to create and publish a Credential Definition. All of the AnonCreds objects will be published on the BCovrin (pronounced \u201cBe Sovereign\u201d) Test network. For those new to AnonCreds:
claims
) in a credential. An issuer often publishes their own schema, but they may also use one published by someone else. For example, a group of universities all might use the schema published by the \"Association of Universities and Colleges\" to which they belong.CredDef
) is published by the issuer, linking together Issuer's DID with the schema upon which the credentials will be issued, and containing the public key material needed to verify presentations of the credential. Revocation Registries are also linked to the Credential Definition, enabling an issuer to revoke credentials when necessary.Schema Id
with the value H7W22uhD4ueQdGaGeiCgaM:2:student id:1.0.0
.>
) link, and then the subsequent >
to \u201cView Raw Content.\"Completed all the steps? Great! Feel free to create a second Schema and Cred Def, ideally one related to your first. That way you can try out a presentation request that pulls data from both credentials! When you create the second schema, use the \"Create Schema\" button, and add the claims you want to have in your new type of credential.
"},{"location":"demo/ACA-Py-Workshop/#lab-3-issuing-credentials-to-a-mobile-wallet","title":"Lab 3: Issuing Credentials to a Mobile Wallet","text":"In this lab we will use our Traction Tenant agent to issue instances of the credentials we created in Lab 2 to our Mobile Wallet we downloaded in Lab 1.
"},{"location":"demo/ACA-Py-Workshop/#lab-3-steps-to-follow","title":"Lab 3: Steps to Follow","text":"YYYYMMDD
, e.g., 20231001
. You cannot use a string date format, such as \u201cYYYY-MM-DD\u201d if you want to use the attribute for predicate checking -- the value must be an integer.That\u2019s it! Pretty easy, eh? Of course, in a real issuer, the data would (very, very) likely not be hand-entered, but instead come from a backend system. Traction has an HTTP API (protected by the same Wallet ID and Key) that can be used from an application, to do things like this automatically. The Traction API embeds the ACA-Py API, so everything you can do in \u201cplain ACA-Py\u201d can also be done in Traction.
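Speaking of doing this from an application: the integer date format described above is easy to produce in code. Here is a minimal Python sketch (our illustration, not part of the lab; the example value is hypothetical):

```python
# A minimal sketch: converting a date to the integer YYYYMMDD form required
# for AnonCreds predicate checking.
from datetime import date

birth_date = date(2000, 7, 17)                        # hypothetical attribute value
birth_date_int = int(birth_date.strftime("%Y%m%d"))   # -> 20000717
print(birth_date_int)
```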
"},{"location":"demo/ACA-Py-Workshop/#lab-4-requesting-and-sending-presentations","title":"Lab 4: Requesting and Sending Presentations","text":"In this lab we will use our Traction Tenant agent as a verifier, requesting presentations, and your mobile Wallet as the holder responding with presentations that satisfy the requests. The user interface is a little rougher for this lab (you\u2019ll be dealing with JSON), but it should still be easy enough to do.
"},{"location":"demo/ACA-Py-Workshop/#lab-4-steps-to-follow","title":"Lab 4: Steps to Follow","text":"p_value
should be a relevant date \u2014 e.g., 19 (or whatever) years ago today for \u201colder than\u201d, and today for \u201cnot expired\u201d, both in the YYYYMMDD
format (the integer form of the date).p_type
should be >=
for the \u201colder than\u201d, and <=
for \u201cnot expired\u201d. See the table below for the form of the expression.That completes this lab \u2014 although feel free to continue to play with all of the steps (setup, issuing and presenting). You should have a pretty solid handle on exactly what you can and can\u2019t do with AnonCreds!
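One developer-flavored aside before moving on: if you are computing the p_value thresholds in code, a small Python sketch (our illustration, assuming the YYYYMMDD integer convention described above):

```python
# A sketch of computing predicate thresholds in the integer YYYYMMDD form.
from datetime import date

today = date.today()
# Threshold for an "older than 19" check: 19 years ago today.
# (Note: .replace() raises ValueError for Feb 29 in non-leap years.)
older_than_threshold = int(today.replace(year=today.year - 19).strftime("%Y%m%d"))
# Threshold for a "not expired" check: today.
not_expired_threshold = int(today.strftime("%Y%m%d"))
print(older_than_threshold, not_expired_threshold)
```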
"},{"location":"demo/ACA-Py-Workshop/#whats-next","title":"What's Next","text":"The following are a couple of things that you might want to do next--if you are a developer. Unlike the labs you have just completed, these \"next steps\" are geared towards developers, providing details about building the use of verifiable credentials (issuing, verifying) into your own application.
Want to use Traction in your own environment? Feel free! It's open source, and comes with Helm Charts for easy deployment in container-orchestrated environments. Contributions back to the project are always welcome!
"},{"location":"demo/ACA-Py-Workshop/#whats-next-the-aca-py-openapi","title":"What\u2019s Next: The ACA-Py OpenAPI","text":"Are you going to build an app that uses Traction or an instance of ACA-Py? If so, your next step is to try out the ACA-Py OpenAPI (aka Swagger)\u2014by hand at first, and then from your application. This is a VERY high level overview, assuming a developer is following this, and knows a bunch about Aries protocols, using HTTP APIs, and using OpenAPI interfaces.
To access and use your Tenant's OpenAPI (aka Swagger) interface:
The ACA-Py/Traction API is pretty large, but it is reasonably well organized, and you should recognize a lot of the items from using the Traction API. Try some of the \u201cGET\u201d endpoints to see if you recognize the items.
We\u2019re still working on a good demo for the OpenAPI from Traction, but this one from ACA-Py is a good outline of the process. It doesn't use your Traction Tenant, but you should get the idea about the sequence of calls to make to accomplish Aries-type activities. For example, see if you can carry out the steps to do Lab 4 with your mobile agent by invoking the right sequence of OpenAPI calls.
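As a first taste of calling the API from code rather than the swagger page, here is a hedged Python sketch of one \u201cGET\u201d call using the requests library. The base URL and bearer token are placeholders (assumptions, not real values); you would obtain both from your own tenant:

```python
# A sketch only: listing connections via an ACA-Py-style admin API.
# BASE_URL and TOKEN are hypothetical placeholders for your tenant's values.
import requests

BASE_URL = "https://your-traction-tenant.example.com"  # hypothetical
TOKEN = "your-bearer-token"                            # obtained via your tenant's auth flow

resp = requests.get(
    f"{BASE_URL}/connections",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
for conn in resp.json().get("results", []):
    print(conn.get("connection_id"), conn.get("state"))
```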
"},{"location":"demo/ACA-Py-Workshop/#whats-next-experiment-with-an-issuer-web-app","title":"What's Next: Experiment With an Issuer Web App","text":"If you are challenged to use Traction or ACA-Py to become an issuer, you will likely be building API calls into your Line of Business web application. To get an idea of what that will entail, we're delighted to direct you to a very simple Web App that one of your predecessors on this same journey created (and contributed!) to learn more about using the Traction OpenAPI in a very simple Web App. Checkout this Traction Issuance Demo and try it out yourself, with your Sandbox tenant. Once you review the code, you should have an excellent idea of how you can add these same capabilities to your line of business application.
"},{"location":"demo/AcmeDemoWorkshop/","title":"Acme Controller Workshop","text":"In this workshop we will add some functionality to a third participant in the Alice/Faber drama - namely, Acme Inc. After completing her education at Faber College, Alice is going to apply for a job at Acme Inc. To do this she must provide proof of education (once she has completed the interview and other non-Indy tasks), and then Acme will issue her an employment credential.
Note that an updated Acme controller is available here: https://github.com/ianco/aries-cloudagent-python/tree/acme_workshop/demo if you just want to skip ahead ... There is also an alternate solution with some additional functionality available here: https://github.com/ianco/aries-cloudagent-python/tree/agent_workshop/demo
"},{"location":"demo/AcmeDemoWorkshop/#preview-of-the-acme-controller","title":"Preview of the Acme Controller","text":"There is already a skeleton of the Acme controller in place, you can run it as follows. (Note that beyond establishing a connection it doesn't actually do anything yet.)
To run the Acme controller template, first run Alice and Faber so that Alice can prove her education experience:
Open 2 bash shells, and in each run:
git clone https://github.com/openwallet-foundation/acapy.git\ncd acapy/demo\n
In one shell run Faber:
LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber\n
... and in the second shell run Alice:
LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo alice\n
When Faber has produced an invitation, copy it over to Alice.
Then, in the Faber shell, select option 1
to issue a credential to Alice. (You can select option 2
if you like, to confirm via proof.)
Then, in the Faber shell, enter X
to exit the controller, and then run the Acme controller:
LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo acme\n
In the Alice shell, select option 4
(to enter a new invitation) and then copy over Acme's invitation once it's available.
Then, in the Acme shell, you can select option 2
and then option 1
, which don't do anything ... yet!!!
In the Acme code acme.py
we are going to add code to issue a proof request to Alice, and then validate the received proof.
First, add the following import statements and constants that we will need near the top of acme.py:
import random\n\nfrom datetime import date\nfrom uuid import uuid4\n
TAILS_FILE_COUNT = int(os.getenv(\"TAILS_FILE_COUNT\", 100))\nCRED_PREVIEW_TYPE = \"https://didcomm.org/issue-credential/2.0/credential-preview\"\n
Next locate the code that is triggered by option 2
:
elif option == \"2\":\n log_status(\"#20 Request proof of degree from alice\")\n # TODO presentation requests\n
Replace the # TODO
comment with the following code:
req_attrs = [\n {\n \"name\": \"name\",\n \"restrictions\": [{\"schema_name\": \"degree schema\"}]\n },\n {\n \"name\": \"date\",\n \"restrictions\": [{\"schema_name\": \"degree schema\"}]\n },\n {\n \"name\": \"degree\",\n \"restrictions\": [{\"schema_name\": \"degree schema\"}]\n }\n ]\n req_preds = []\n indy_proof_request = {\n \"name\": \"Proof of Education\",\n \"version\": \"1.0\",\n \"nonce\": str(uuid4().int),\n \"requested_attributes\": {\n f\"0_{req_attr['name']}_uuid\": req_attr\n for req_attr in req_attrs\n },\n \"requested_predicates\": {}\n }\n proof_request_web_request = {\n \"connection_id\": agent.connection_id,\n \"presentation_request\": {\"indy\": indy_proof_request},\n }\n # this sends the request to our agent, which forwards it to Alice\n # (based on the connection_id)\n await agent.admin_POST(\n \"/present-proof-2.0/send-request\",\n proof_request_web_request\n )\n
Now we need to handle receipt of the proof. Locate the code that handles received proofs (this is in a webhook callback):
if state == \"presentation-received\":\n # TODO handle received presentations\n pass\n
then replace the # TODO
comment and the pass
statement:
log_status(\"#27 Process the proof provided by X\")\n log_status(\"#28 Check if proof is valid\")\n proof = await self.admin_POST(\n f\"/present-proof-2.0/records/{pres_ex_id}/verify-presentation\"\n )\n self.log(\"Proof = \", proof[\"verified\"])\n\n # if presentation is a degree schema (proof of education),\n # check values received\n pres_req = message[\"by_format\"][\"pres_request\"][\"indy\"]\n pres = message[\"by_format\"][\"pres\"][\"indy\"]\n is_proof_of_education = (\n pres_req[\"name\"] == \"Proof of Education\"\n )\n if is_proof_of_education:\n log_status(\"#28.1 Received proof of education, check claims\")\n for (referent, attr_spec) in pres_req[\"requested_attributes\"].items():\n if referent in pres['requested_proof']['revealed_attrs']:\n self.log(\n f\"{attr_spec['name']}: \"\n f\"{pres['requested_proof']['revealed_attrs'][referent]['raw']}\"\n )\n else:\n self.log(\n f\"{attr_spec['name']}: \"\n \"(attribute not revealed)\"\n )\n for id_spec in pres[\"identifiers\"]:\n # just print out the schema/cred def id's of presented claims\n self.log(f\"schema_id: {id_spec['schema_id']}\")\n self.log(f\"cred_def_id {id_spec['cred_def_id']}\")\n # TODO placeholder for the next step\n else:\n # in case there are any other kinds of proofs received\n self.log(\"#28.1 Received \", pres_req[\"name\"])\n
Right now this just verifies the proof received and prints out the attributes it reveals, but in \"real life\" your application could do something useful with this information.
Now you can run the Faber/Alice/Acme script from the \"Preview of the Acme Controller\" section above, and you should see Acme receive a proof from Alice!
"},{"location":"demo/AcmeDemoWorkshop/#issuing-alice-a-work-credential","title":"Issuing Alice a Work Credential","text":"Now we can issue a work credential to Alice!
There are two options for this. We can (a) add code under option 1
to issue the credential, or (b) we can automatically issue this credential on receipt of the education proof.
We're going to do option (a), but you can try to implement option (b) as homework. You have most of the information you need from the proof response!
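If you do attempt option (b), here is a rough sketch (our illustration, reusing names from the presentation handler code earlier; treat it as a starting point, not the finished solution) of sending the offer from the webhook once the proof verifies:

```python
# Sketch for option (b): inside the presentation webhook handler, after
# verifying the proof of education, immediately offer the employment credential.
# Assumes cred_def_id and cred_preview are available here (e.g., stashed on the
# agent when they were built as in option (a)).
if is_proof_of_education and proof["verified"] == "true":  # ACA-Py returns "verified" as a string
    offer_request = {
        "connection_id": message["connection_id"],
        "comment": f"Offer on cred def id {cred_def_id}",
        "credential_preview": cred_preview,
        "filter": {"indy": {"cred_def_id": cred_def_id}},
    }
    await self.admin_POST("/issue-credential-2.0/send-offer", offer_request)
```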
First, though, we need to register a schema and credential definition. Find this code:
# acme_schema_name = \"employee id schema\"\n # acme_schema_attrs = [\"employee_id\", \"name\", \"date\", \"position\"]\n await acme_agent.initialize(\n the_agent=agent,\n # schema_name=acme_schema_name,\n # schema_attrs=acme_schema_attrs,\n )\n\n # TODO publish schema and cred def\n
... and uncomment the code lines. Replace the # TODO
comment with the following code:
with log_timer(\"Publish schema and cred def duration:\"):\n # define schema\n version = format(\n \"%d.%d.%d\"\n % (\n random.randint(1, 101),\n random.randint(1, 101),\n random.randint(1, 101),\n )\n )\n # register schema and cred def\n (schema_id, cred_def_id) = await agent.register_schema_and_creddef(\n \"employee id schema\",\n version,\n [\"employee_id\", \"name\", \"date\", \"position\"],\n support_revocation=False,\n revocation_registry_size=TAILS_FILE_COUNT,\n )\n
For option (1) we want to replace the # TODO
comment here:
elif option == \"1\":\n log_status(\"#13 Issue credential offer to X\")\n # TODO credential offers\n
with the following code:
agent.cred_attrs[cred_def_id] = {\n \"employee_id\": \"ACME0009\",\n \"name\": \"Alice Smith\",\n \"date\": date.isoformat(date.today()),\n \"position\": \"CEO\"\n }\n cred_preview = {\n \"@type\": CRED_PREVIEW_TYPE,\n \"attributes\": [\n {\"name\": n, \"value\": v}\n for (n, v) in agent.cred_attrs[cred_def_id].items()\n ],\n }\n offer_request = {\n \"connection_id\": agent.connection_id,\n \"comment\": f\"Offer on cred def id {cred_def_id}\",\n \"credential_preview\": cred_preview,\n \"filter\": {\"indy\": {\"cred_def_id\": cred_def_id}},\n }\n await agent.admin_POST(\n \"/issue-credential-2.0/send-offer\", offer_request\n )\n
... and then locate the code that handles the credential request callback:
if state == \"request-received\":\n # TODO issue credentials based on offer preview in cred ex record\n pass\n
... and replace the # TODO
comment and pass
statement with the following code to issue the credential as Acme offered it:
# issue credentials based on offer preview in cred ex record\n if not message.get(\"auto_issue\"):\n await self.admin_POST(\n f\"/issue-credential-2.0/records/{cred_ex_id}/issue\",\n {\"comment\": f\"Issuing credential, exchange {cred_ex_id}\"},\n )\n
Now you can run the Faber/Alice/Acme steps again. You should be able to receive a proof and then issue a credential to Alice.
"},{"location":"demo/AliceGetsAPhone/","title":"Alice Gets a Mobile Agent!","text":"In this demo, we'll again use our familiar Faber ACA-Py agent to issue credentials to Alice, but this time Alice will use a mobile wallet. To do this we need to run the Faber agent on a publicly accessible port, and Alice will need a compatible mobile wallet. We'll provide pointers to where you can get them.
This demo also introduces revocation of credentials.
"},{"location":"demo/AliceGetsAPhone/#contents","title":"Contents","text":"faber
With Extra ParametersThis demo can be run on your local machine or on Play with Docker (PWD), and will demonstrate credential exchange and proof exchange as well as revocation with a mobile agent. Both approaches (running locally and on PWD) will be described; for the most part the commands are the same, but there are a couple of different parameters you need to provide when starting up.
If you are not familiar with how revocation is currently implemented in Hyperledger Indy, this article provides a good background on the technique. A challenge with revocation as it is currently implemented in Hyperledger Indy is the need for the prover (the agent creating the proof) to download tails files associated with the credentials it holds.
"},{"location":"demo/AliceGetsAPhone/#get-a-mobile-agent","title":"Get a mobile agent","text":"Of course for this, you need to have a mobile agent. To find, install and setup a compatible mobile agent, follow the instructions here.
"},{"location":"demo/AliceGetsAPhone/#running-locally-in-docker","title":"Running Locally in Docker","text":"Open a new bash shell and in a project directory run the following:
git clone https://github.com/openwallet-foundation/acapy.git\ncd acapy/demo\n
We'll come back to this in a minute, when we start the faber
agent!
There are a couple of extra steps you need to take to prepare to run the Faber agent locally:
"},{"location":"demo/AliceGetsAPhone/#install-ngrok-and-jq","title":"Install ngrok and jq","text":"ngrok is used to expose public endpoints for services running locally on your computer.
jq is a json parser that is used to automatically detect the endpoints exposed by ngrok.
You can install ngrok from here
You can download jq releases here
"},{"location":"demo/AliceGetsAPhone/#expose-services-publicly-using-ngrok","title":"Expose services publicly using ngrok","text":"Note that this is only required when running docker on your local machine. When you run on PWD a public endpoint for your agent is exposed automatically.
Since the mobile agent will need some way to communicate with the agent running on your local machine in docker, we will need to create a publicly accessible url for some services on your machine. The easiest way to do this is with ngrok. Once ngrok is installed, create a tunnel to your local machine:
ngrok http 8020\n
This service is used for your local aca-py agent - it is the endpoint that is advertised for other Aries agents to connect to.
You will see something like this:
Forwarding http://abc123.ngrok.io -> http://localhost:8020\nForwarding https://abc123.ngrok.io -> http://localhost:8020\n
This creates a public url for port 8020 on your local machine.
Note that an ngrok process is created automatically for your tails server.
Keep this process running as we'll come back to it in a moment.
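If you script this kind of setup, note that ngrok exposes a local inspection API that can be queried for the public url instead of copying it off the screen. A hedged Python sketch (our illustration; the demo itself uses jq for this kind of endpoint discovery):

```python
# A sketch: discover the public ngrok url via ngrok's local inspection API
# (it listens on http://localhost:4040 by default).
import requests

tunnels = requests.get("http://localhost:4040/api/tunnels", timeout=10).json()["tunnels"]
public_url = next(t["public_url"] for t in tunnels if t["public_url"].startswith("https"))
print(public_url)  # e.g. https://abc123.ngrok.io
```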
"},{"location":"demo/AliceGetsAPhone/#running-in-play-with-docker","title":"Running in Play With Docker","text":"To run the necessary terminal sessions in your browser, go to the Docker playground service Play with Docker. Don't know about Play with Docker? Check this out to learn more.
Open a new bash shell and in a project directory run the following:
git clone https://github.com/openwallet-foundation/acapy.git\ncd acapy/demo\n
We'll come back to this in a minute, when we start the faber
agent!
For revocation to function, we need another component running that is used to store what are called tails files.
If you are not running with revocation enabled you can skip this step.
"},{"location":"demo/AliceGetsAPhone/#running-locally-in-a-bash-shell","title":"Running locally in a bash shell?","text":"Open a new bash shell, and in a project directory, run:
git clone https://github.com/bcgov/indy-tails-server.git\ncd indy-tails-server/docker\n./manage build\n./manage start\n
This will run the required components for the tails server to function and make a tails server available on port 6543.
This will also automatically start an ngrok server that will expose a public url for your tails server - this is required to support mobile agents. The docker output will look something like this:
ngrok-tails-server_1 | t=2020-05-13T22:51:14+0000 lvl=info msg=\"started tunnel\" obj=tunnels name=\"command_line (http)\" addr=http://tails-server:6543 url=http://c5789aa0.ngrok.io\nngrok-tails-server_1 | t=2020-05-13T22:51:14+0000 lvl=info msg=\"started tunnel\" obj=tunnels name=command_line addr=http://tails-server:6543 url=https://c5789aa0.ngrok.io\n
Note the server name in the url=https://c5789aa0.ngrok.io
parameter (https://c5789aa0.ngrok.io
) - this is the external url for your tails server. Make sure you use the https
url!
Run the same steps on PWD as you would run locally (see above). Open a new shell (click on \"ADD NEW INSTANCE\") to run the tails server.
Note that with Play with Docker it can be challenging to capture the information you need from the log file as it scrolls by, so you can try leaving off the --events
option when you run the Faber agent to reduce the quantity of information logged to the screen.
faber
With Extra Parameters","text":""},{"location":"demo/AliceGetsAPhone/#running-locally-in-a-bash-shell_1","title":"Running locally in a bash shell?","text":"If you are running in a local bash shell, navigate to the demo
directory in your fork/clone of the ACA-Py repository and run:
TAILS_NETWORK=docker_tails-server LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber --aip 10 --revocation --events\n
(Note that we have to start faber with --aip 10
for compatibility with mobile clients.)
The TAILS_NETWORK
parameter lets the demo script know how to connect to the tails server (which should be running in a separate shell on the same machine).
If you are running in Play with Docker, navigate to the demo
folder in the clone of ACA-Py and run the following:
PUBLIC_TAILS_URL=https://c4f7fbb85911.ngrok.io LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber --aip 10 --revocation --events\n
The PUBLIC_TAILS_URL
parameter lets the demo script know how to connect to the tails server. This can be running in another PWD session, or even on your local machine - the ngrok endpoint is public and will map to the correct location.
Use the ngrok url for the tails server that you noted earlier.
Note that you must use the https
url for the tails server endpoint.
Note - you may want to leave off the --events
option when you run the Faber agent, if you are finding you are getting too much logging output.
The Preparing agent image...
step on the first run takes a bit of time, so while we wait, let's look at the details of the commands. Running Faber is similar to the instructions in the Aries OpenAPI Demo \"Play with Docker\" section, except:
TAILS_NETWORK
parameter tells the ./run_demo
script how to connect to the tails server and determine the public ngrok endpoint.PUBLIC_TAILS_URL
environment variable is the address of your tails server (must be https
).--revocation
parameter to the ./run_demo
script activates the ACA-Py revocation issuance.As part of its startup process, the agent will publish a revocation registry to the ledger.
Click here to view screenshot of the revocation registry on the ledger"},{"location":"demo/AliceGetsAPhone/#accept-the-invitation","title":"Accept the Invitation","text":"When the Faber agent starts up it automatically creates an invitation and generates a QR code on the screen. On your mobile app, select \"SCAN CODE\" (or equivalent) and point your camera at the generated QR code. The mobile agent should automatically capture the code and ask you to confirm the connection. Confirm it.
Click here to view screenshotThe mobile agent will give you feedback on the connection process, something like \"A connection was added to your wallet\".
Click here to view screenshot Click here to view screenshotSwitch your browser back to Play with Docker. You should see that the connection has been established, and there is a prompt for what actions you want to take, e.g. \"Issue Credential\", \"Send Proof Request\" and so on.
Tip: If your screen is too small to display the QR code (this can happen in Play With Docker because the shell is only given a small portion of the browser) you can copy the invitation url to a site like https://www.the-qrcode-generator.com/ to convert the invitation url into a QR code that you can scan. Make sure you select the URL
option, and copy the invitation_url
, which will look something like:
https://abfde260.ngrok.io?c_i=eyJAdHlwZSI6ICJkaWQ6c292OkJ6Q2JzTlloTXJqSGlxWkRUVUFTSGc7c3BlYy9jb25uZWN0aW9ucy8xLjAvaW52aXRhdGlvbiIsICJAaWQiOiAiZjI2ZjA2YTItNWU1Mi00YTA5LWEwMDctOTNkODBiZTYyNGJlIiwgInJlY2lwaWVudEtleXMiOiBbIjlQRFE2alNXMWZwZkM5UllRWGhCc3ZBaVJrQmVKRlVhVmI0QnRQSFdWbTFXIl0sICJsYWJlbCI6ICJGYWJlci5BZ2VudCIsICJzZXJ2aWNlRW5kcG9pbnQiOiAiaHR0cHM6Ly9hYmZkZTI2MC5uZ3Jvay5pbyJ9\n
Or this:
http://ip10-0-121-4-bquqo816b480a4bfn3kg-8020.direct.play-with-docker.com?c_i=eyJAdHlwZSI6ICJkaWQ6c292OkJ6Q2JzTlloTXJqSGlxWkRUVUFTSGc7c3BlYy9jb25uZWN0aW9ucy8xLjAvaW52aXRhdGlvbiIsICJAaWQiOiAiZWI2MTI4NDUtYmU1OC00YTNiLTk2MGUtZmE3NDUzMGEwNzkyIiwgInJlY2lwaWVudEtleXMiOiBbIkFacEdoMlpIOTJVNnRFRTlmYk13Z3BqQkp3TEUzRFJIY1dCbmg4Y2FqdzNiIl0sICJzZXJ2aWNlRW5kcG9pbnQiOiAiaHR0cDovL2lwMTAtMC0xMjEtNC1icXVxbzgxNmI0ODBhNGJmbjNrZy04MDIwLmRpcmVjdC5wbGF5LXdpdGgtdm9uLnZvbnguaW8iLCAibGFiZWwiOiAiRmFiZXIuQWdlbnQifQ==\n
Note that this will use the ngrok endpoint if you are running locally, or your PWD endpoint if you are running on PWD.
"},{"location":"demo/AliceGetsAPhone/#issue-a-credential","title":"Issue a Credential","text":"We will use the Faber console to issue a credential. This could be done using the Swagger API as we have done in the connection process. We'll leave that as an exercise to the user.
In the Faber console, select option 1
to send a credential to the mobile agent.
The Faber agent outputs details to the console; e.g.,
Faber | Credential: state = credential-issued, cred_ex_id = ba3089d6-92da-4cb7-9062-7f24066b2a2a\nFaber | Revocation registry ID: CMqNjZ8e59jDuBYcquce4D:4:CMqNjZ8e59jDuBYcquce4D:3:CL:50:faber.agent.degree_schema:CL_ACCUM:4f4fb2e4-3a59-45b1-8921-578d005a7ff6\nFaber | Credential revocation ID: 1\nFaber | Credential: state = done, cred_ex_id = ba3089d6-92da-4cb7-9062-7f24066b2a2a\n
The revocation registry id and credential revocation id only appear if revocation is active. If you are doing revocation, you to need the Revocation registry id
later, so we recommend that you copy it it now and paste it into a text file or some place that you can access later. If you don't write it down, you can get the Id from the Admin API using the GET /revocation/active-registry/{cred_def_id}
endpoint, and passing in the credential definition Id (which you can get from the GET /credential-definitions/created
endpoint).
The credential offer should automatically show up in the mobile agent. Accept the offered credential following the instructions provided by the mobile agent. That will look something like this:
Click here to view screenshot Click here to view screenshot Click here to view screenshot"},{"location":"demo/AliceGetsAPhone/#issue-a-presentation-request","title":"Issue a Presentation Request","text":"We will use the Faber console to ask mobile agent for a proof. This could be done using the Swagger API, but we'll leave that as an exercise to the user.
In the Faber console, select option 2
to send a proof request to the mobile agent.
The presentation (proof) request should automatically show up in the mobile agent. Follow the instructions provided by the mobile agent to prepare and send the proof back to Faber. That will look something like this:
Click here to view screenshot Click here to view screenshot Click here to view screenshotIf the mobile agent is able to successfully prepare and send the proof, you can go back to the Play with Docker terminal to see the status of the proof.
The process should \"just work\" for the non-revocation use case. If you are using revocation, your results may vary. As of writing this, we get failures on the wallet side with some mobile wallets, and on the Faber side with others (an error in the Indy SDK). As the results improve, we'll update this. Please let us know through GitHub issues if you have any problems running this.
"},{"location":"demo/AliceGetsAPhone/#review-the-proof","title":"Review the Proof","text":"In the Faber console window, the proof should be received as validated.
Click here to view screenshot"},{"location":"demo/AliceGetsAPhone/#revoke-the-credential-and-send-another-proof-request","title":"Revoke the Credential and Send Another Proof Request","text":"If you have enabled revocation, you can try revoking the credential and publishing its pending revoked status (faber
options 5
and 6
). For the revocation step, You will need the revocation registry identifier and the credential revocation identifier (which is 1 for the first credential you issued), as the Faber agent logged them to the console at credential issue.
Once that is done, try sending another proof request and see what happens! Experiment with immediate and pending publication. Note that immediate publication also publishes any pending revocations on its revocation registry.
Click here to view screenshot"},{"location":"demo/AliceGetsAPhone/#send-a-connectionless-proof-request","title":"Send a Connectionless Proof Request","text":"A connectionless proof request works the same way as a regular proof request, however it does not require a connection to be established between the Verifier and Holder/Prover.
This is supported in the Faber demo, however note that it will only work when running Faber on the Docker playground service Play with Docker. (This is because both the Faber agent and controller both need to be exposed to the mobile agent.)
If you have gone through the above steps, you can delete the Faber connection in your mobile agent (however do not delete the credential that Faber issued to you).
Then in the faber demo, select option 2a
- Faber will display a QR code which you can scan with your mobile agent. You will see the same proof request displayed in your mobile agent, which you can respond to.
Behind the scenes, the Faber controller delivers the proof request information (linked from the url encoded in the QR code) directly to your mobile agent, without establishing and agent-to-agent connection first. If you are interested in the underlying mechanics, you can review the faber.py
code in the repository.
That\u2019s the Faber-Mobile Alice demo. Feel free to play with the Swagger API and experiment further and figure out what an instance of a controller has to do to make things work.
"},{"location":"demo/AliceWantsAJsonCredential/","title":"How to Issue JSON-LD Credentials using ACA-Py","text":"ACA-Py has the capability to issue and verify both Indy and JSON-LD (W3C compliant) credentials.
The JSON-LD support is documented here - this document will provide some additional detail in how to use the demo and admin api to issue and prove JSON-LD credentials.
"},{"location":"demo/AliceWantsAJsonCredential/#setup-agents-to-issue-json-ld-credentials","title":"Setup Agents to Issue JSON-LD Credentials","text":"Clone this repository to a directory on your local:
git clone https://github.com/openwallet-foundation/acapy.git\ncd acapy/demo\n
Open up a second shell (so you have 2 shells open in the demo
directory) and in one shell:
LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber --did-exchange --aip 20 --cred-type json-ld\n
... and in the other:
LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo alice\n
Note that you start the faber
agent with AIP2.0 options. (When you specify --cred-type json-ld
faber will set aip to 20
automatically, so the --aip
option is not strictly required). Note as well the use of the LEDGER_URL
. Technically, that should not be needed if we aren't doing anything with an Indy ledger-based credentials. However, there must be something in the way that the Faber and Alice controllers are starting up that requires access to a ledger.
Also note that the above will only work with the /issue-credential-2.0/create-offer
endpoint. If you want to use the /issue-credential-2.0/send
endpoint - which automates each step of the credential exchange - you will need to include the --no-auto
option when starting each of the alice and faber agents (since the alice and faber controllers also automatically respond to each step in the credential exchange).
(Alternately you can run run Alice and Faber agents locally, see the ./faber-local.sh
and ./alice-local.sh
scripts in the demo
directory.)
Copy the \"invitation\" json text from the Faber shell and paste into the Alice shell to establish a connection between the two agents.
(If you are running with --no-auto
you will also need to call the /connections/{conn_id}/accept-invitation
endpoint in alice's admin api swagger page.)
Now open up two browser windows to the Faber and Alice admin api swagger pages.
Using the Faber admin api, you have to create a DID with the appropriate:
Note that \"did:sov\" must be a public DID (i.e. registered on the ledger) but \"did:key\" is not.
For example, in Faber's swagger page call the /wallet/did/create
endpoint with the following payload:
{\n \"method\": \"key\",\n \"options\": {\n \"key_type\": \"bls12381g2\" // or ed25519\n }\n}\n
This will return something like:
{\n \"result\": {\n \"did\": \"did:key:zUC71KdwBhq1FioWh53VXmyFiGpewNcg8Ld42WrSChpMzzskRWwHZfG9TJ7hPj8wzmKNrek3rW4ZkXNiHAjVchSmTr9aNUQaArK3KSkTySzjEM73FuDV62bjdAHF7EMnZ27poCE\",\n \"verkey\": \"mV6482Amu6wJH8NeMqH3QyTjh6JU6N58A8GcirMZG7Wx1uyerzrzerA2EjnhUTmjiSLAp6CkNdpkLJ1NTS73dtcra8WUDDBZ3o455EMrkPyAtzst16RdTMsGe3ctyTxxJav\",\n \"posture\": \"wallet_only\",\n \"key_type\": \"bls12381g2\",\n \"method\": \"key\"\n }\n}\n
You do not create a schema or cred def for a JSON-LD credential (these are only required for \"indy\" credentials).
You will need to create a DID as above for Alice as well (/wallet/did/create
etc ...).
Congratulations, you are now ready to start issuing JSON-LD credentials!
connection_id
into the examples below.issuer
).credentialSubject.id
- this is required for Alice to sign the proof (the credentialSubject.id
is not required, but then the provided presentation can't be verified).To issue a credential, use the /issue-credential-2.0/send-offer
endpoint. (You can also use the /issue-credential-2.0/send
) endpoint, if, as mentioned above, you have included the --no-auto
when starting both of the agents.)
You can test with this example payload (just replace the \"connection_id\", \"issuer\" key, \"credentialSubject.id\" and \"proofType\" with appropriate values:
{\n \"connection_id\": \"4fba2ce5-b411-4ecf-aa1b-ec66f3f6c903\",\n \"filter\": {\n \"ld_proof\": {\n \"credential\": {\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://www.w3.org/2018/credentials/examples/v1\"\n ],\n \"type\": [\"VerifiableCredential\", \"UniversityDegreeCredential\"],\n \"issuer\": \"did:key:zUC71KdwBhq1FioWh53VXmyFiGpewNcg8Ld42WrSChpMzzskRWwHZfG9TJ7hPj8wzmKNrek3rW4ZkXNiHAjVchSmTr9aNUQaArK3KSkTySzjEM73FuDV62bjdAHF7EMnZ27poCE\",\n \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n \"credentialSubject\": {\n \"id\": \"did:key:aksdkajshdkajhsdkjahsdkjahsdj\",\n \"givenName\": \"Sally\",\n \"familyName\": \"Student\",\n \"degree\": {\n \"type\": \"BachelorDegree\",\n \"degreeType\": \"Undergraduate\",\n \"name\": \"Bachelor of Science and Arts\"\n },\n \"college\": \"Faber College\"\n }\n },\n \"options\": {\n \"proofType\": \"BbsBlsSignature2020\"\n }\n }\n }\n}\n
Note that if you have the \"auto\" settings on, this is all you need to do. Otherwise you need to call the /send-request
, /store
, etc endpoints to complete the protocol.
To see the issued credential, call the /credentials/w3c
endpoint on Alice's admin api - this will return something like:
{\n \"results\": [\n {\n \"contexts\": [\n \"https://w3id.org/security/bbs/v1\",\n \"https://www.w3.org/2018/credentials/examples/v1\",\n \"https://www.w3.org/2018/credentials/v1\"\n ],\n \"types\": [\n \"UniversityDegreeCredential\",\n \"VerifiableCredential\"\n ],\n \"schema_ids\": [],\n \"issuer_id\": \"did:key:zUC71KdwBhq1FioWh53VXmyFiGpewNcg8Ld42WrSChpMzzskRWwHZfG9TJ7hPj8wzmKNrek3rW4ZkXNiHAjVchSmTr9aNUQaArK3KSkTySzjEM73FuDV62bjdAHF7EMnZ27poCE\",\n \"subject_ids\": [],\n \"proof_types\": [\n \"BbsBlsSignature2020\"\n ],\n \"cred_value\": {\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://www.w3.org/2018/credentials/examples/v1\",\n \"https://w3id.org/security/bbs/v1\"\n ],\n \"type\": [\n \"VerifiableCredential\",\n \"UniversityDegreeCredential\"\n ],\n \"issuer\": \"did:key:zUC71Kd...poCE\",\n \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n \"credentialSubject\": {\n \"id\": \"did:key:aksdkajshdkajhsdkjahsdkjahsdj\",\n \"givenName\": \"Sally\",\n \"familyName\": \"Student\",\n \"degree\": {\n \"type\": \"BachelorDegree\",\n \"degreeType\": \"Undergraduate\",\n \"name\": \"Bachelor of Science and Arts\"\n },\n \"college\": \"Faber College\"\n },\n \"proof\": {\n \"type\": \"BbsBlsSignature2020\",\n \"proofPurpose\": \"assertionMethod\",\n \"verificationMethod\": \"did:key:zUC71Kd...poCE#zUC71Kd...poCE\",\n \"created\": \"2021-05-19T16:19:44.458170\",\n \"proofValue\": \"g0weLyw2Q+niQ4pGfiXB...tL9C9ORhy9Q==\"\n }\n },\n \"cred_tags\": {},\n \"record_id\": \"365ab87b12f74b2db784fdd4db8419f5\"\n }\n ]\n}\n
If you don't see the credential in your wallet, look up the credential exchange record (in alice's admin api - /issue-credential-2.0/records
) and check the state. If the state is credential-received
, then the credential has been received but not stored, in this case just call the /store
endpoint for this credential exchange.
The above example uses the https://www.w3.org/2018/credentials/examples/v1
context, which should never be used in a real application.
To build credentials in real life, you first determine which attributes you need and then include the appropriate contexts.
"},{"location":"demo/AliceWantsAJsonCredential/#context-schemaorg","title":"Context schema.org","text":"You can use attributes defined on schema.org. Although this is NOT RECOMMENDED (included here for illustrative purposes only) - individual attributes can't be validated (see the comment later on).
You first include https://schema.org
in the @context
block of the credential as follows:
\"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://schema.org\"\n],\n
Then you review the attributes and objects defined by https://schema.org
and decide what you need to include in your credential.
For example to issue a credential with givenName, familyName and alumniOf attributes, submit the following:
{\n \"connection_id\": \"ad35a4d8-c84b-4a4f-a83f-1afbf134b8b9\",\n \"filter\": {\n \"ld_proof\": {\n \"credential\": {\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://schema.org\"\n ],\n \"type\": [\"VerifiableCredential\", \"Person\"],\n \"issuer\": \"did:key:zUC71pj2gpDLfcZ9DE1bMtjZGWCSLhkQsUCaKjqXtCftGkz27894pEX9VvGNiFsaV67gqv2TEPQ2aDaDDdTDNp42LfDdK1LaWSBCfzsQEyaiR1zjZm1RtoRu1ZM6v6vz4TiqDgU\",\n \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n \"credentialSubject\": {\n \"id\": \"did:key:aksdkajshdkajhsdkjahsdkjahsdj\",\n \"givenName\": \"Sally\",\n \"familyName\": \"Student\",\n \"alumniOf\": \"Example University\"\n }\n },\n \"options\": {\n \"proofType\": \"BbsBlsSignature2020\"\n }\n }\n }\n}\n
Note that with https://schema.org
, if you include attributes that aren't defined by any context, you will not get an error. For example you can try replacing the credentialSubject
in the above with:
\"credentialSubject\": {\n \"id\": \"did:key:aksdkajshdkajhsdkjahsdkjahsdj\",\n \"givenName\": \"Sally\",\n \"familyName\": \"Student\",\n \"alumniOf\": \"Example University\",\n \"someUndefinedAttribute\": \"the value of the attribute\"\n}\n
... and the credential issuance should fail, however https://schema.org
defines a @vocab
that by default all terms derive from (see here).
You can include more complex schemas, for example to use the schema.org Person schema (which includes givenName
and familyName
):
{\n \"connection_id\": \"ad35a4d8-c84b-4a4f-a83f-1afbf134b8b9\",\n \"filter\": {\n \"ld_proof\": {\n \"credential\": {\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://schema.org\"\n ],\n \"type\": [\"VerifiableCredential\", \"Person\"],\n \"issuer\": \"did:key:zUC71pj2gpDLfcZ9DE1bMtjZGWCSLhkQsUCaKjqXtCftGkz27894pEX9VvGNiFsaV67gqv2TEPQ2aDaDDdTDNp42LfDdK1LaWSBCfzsQEyaiR1zjZm1RtoRu1ZM6v6vz4TiqDgU\",\n \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n \"credentialSubject\": {\n \"id\": \"did:key:aksdkajshdkajhsdkjahsdkjahsdj\",\n \"student\": {\n \"type\": \"Person\",\n \"givenName\": \"Sally\",\n \"familyName\": \"Student\",\n \"alumniOf\": \"Example University\"\n }\n }\n },\n \"options\": {\n \"proofType\": \"BbsBlsSignature2020\"\n }\n }\n }\n}\n
"},{"location":"demo/AliceWantsAJsonCredential/#credential-specific-contexts","title":"Credential-Specific Contexts","text":"The recommended approach to defining credentials is to define a credential-specific vocabulary (or make use of existing ones). (Note that these can include references to https://schema.org
, you just shouldn't use this directly in your credential.)
The following example uses the W3C citizenship context to issue a PermanentResident credential (replace the connection_id
, issuer
and credentialSubject.id
with your local values):
{\n \"connection_id\": \"41acd909-9f45-4c69-8641-8146e0444a57\",\n \"filter\": {\n \"ld_proof\": {\n \"credential\": {\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://w3id.org/citizenship/v1\"\n ],\n \"type\": [\n \"VerifiableCredential\",\n \"PermanentResident\"\n ],\n \"id\": \"https://credential.example.com/residents/1234567890\",\n \"issuer\": \"did:key:zUC7Dus47jW5Avcne8LLsUvJSdwspmErgehxMWqZZy8eSSNoHZ4x8wgs77sAmQtCADED5RQP1WWhvt7KFNm6GGMxdSGpKu3PX6R9a61G9VoVsiFoRf1yoK6pzhq9jtFP3e2SmU9\",\n \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n \"credentialSubject\": {\n \"type\": [\n \"PermanentResident\"\n ],\n \"id\": \"did:key:zUC7CXi82AXbkv4SvhxDxoufrLwQSAo79qbKiw7omCQ3c4TyciDdb9s3GTCbMvsDruSLZX6HNsjGxAr2SMLCNCCBRN5scukiZ4JV9FDPg5gccdqE9nfCU2zUcdyqRiUVnn9ZH83\",\n \"givenName\": \"ALICE\",\n \"familyName\": \"SMITH\",\n \"gender\": \"Female\",\n \"birthCountry\": \"Bahamas\",\n \"birthDate\": \"1958-07-17\"\n }\n },\n \"options\": {\n \"proofType\": \"BbsBlsSignature2020\"\n }\n }\n }\n}\n
Copy and paste this content into Faber's /issue-credential-2.0/send-offer
endpoint, and it will kick off the exchange process to issue a W3C credential to Alice.
In Alice's swagger page, submit the /credentials/records/w3c
endpoint to see the issued credential.
To request a proof, submit the following (with appropriate connection_id
) to Faber's /present-proof-2.0/send-request
endpoint:
{\n \"comment\": \"string\",\n \"connection_id\": \"41acd909-9f45-4c69-8641-8146e0444a57\",\n \"presentation_request\": {\n \"dif\": {\n \"options\": {\n \"challenge\": \"3fa85f64-5717-4562-b3fc-2c963f66afa7\",\n \"domain\": \"4jt78h47fh47\"\n },\n \"presentation_definition\": {\n \"id\": \"32f54163-7166-48f1-93d8-ff217bdb0654\",\n \"format\": {\n \"ldp_vp\": {\n \"proof_type\": [\n \"BbsBlsSignature2020\"\n ]\n }\n },\n \"input_descriptors\": [\n {\n \"id\": \"citizenship_input_1\",\n \"name\": \"EU Driver's License\",\n \"schema\": [\n {\n \"uri\": \"https://www.w3.org/2018/credentials#VerifiableCredential\"\n },\n {\n \"uri\": \"https://w3id.org/citizenship#PermanentResident\"\n }\n ],\n \"constraints\": {\n \"limit_disclosure\": \"required\",\n \"is_holder\": [\n {\n \"directive\": \"required\",\n \"field_id\": [\n \"1f44d55f-f161-4938-a659-f8026467f126\"\n ]\n }\n ],\n \"fields\": [\n {\n \"id\": \"1f44d55f-f161-4938-a659-f8026467f126\",\n \"path\": [\n \"$.credentialSubject.familyName\"\n ],\n \"purpose\": \"The claim must be from one of the specified issuers\",\n \"filter\": {\n \"const\": \"SMITH\"\n }\n },\n {\n \"path\": [\n \"$.credentialSubject.givenName\"\n ],\n \"purpose\": \"The claim must be from one of the specified issuers\"\n }\n ]\n }\n }\n ]\n }\n }\n }\n}\n
Note that the is_holder
property can be used by Faber to verify that the holder of credential is the same as the subject of the attribute (familyName
). Later on, the received presentation will be signed and verifiable only if is_holder
with \"directive\": \"required\"
is included in the presentation request.
There are several ways that Alice can respond with a presentation. The simplest will just tell ACA-Py to put the presentation together and send it to Faber - submit the following to Alice's /present-proof-2.0/records/{pres_ex_id}/send-presentation
:
{\n \"dif\": {\n }\n}\n
There are two ways that Alice can provide some constraints to tell ACA-Py which credential(s) to include in the presentation.
Firstly, Alice can include the received presentation request in the body to the /send-presentation
endpoint, and can include additional constraints on the fields:
{\n \"dif\": {\n \"issuer_id\": \"did:key:zUC7Dus47jW5Avcne8LLsUvJSdwspmErgehxMWqZZy8eSSNoHZ4x8wgs77sAmQtCADED5RQP1WWhvt7KFNm6GGMxdSGpKu3PX6R9a61G9VoVsiFoRf1yoK6pzhq9jtFP3e2SmU9\",\n \"presentation_definition\": {\n \"format\": {\n \"ldp_vp\": {\n \"proof_type\": [\n \"BbsBlsSignature2020\"\n ]\n }\n },\n \"id\": \"32f54163-7166-48f1-93d8-ff217bdb0654\",\n \"input_descriptors\": [\n {\n \"id\": \"citizenship_input_1\",\n \"name\": \"Some kind of citizenship check\",\n \"schema\": [\n {\n \"uri\": \"https://www.w3.org/2018/credentials#VerifiableCredential\"\n },\n {\n \"uri\": \"https://w3id.org/citizenship#PermanentResident\"\n }\n ],\n \"constraints\": {\n \"limit_disclosure\": \"required\",\n \"is_holder\": [\n {\n \"directive\": \"required\",\n \"field_id\": [\n \"1f44d55f-f161-4938-a659-f8026467f126\",\n \"332be361-823a-4863-b18b-c3b930c5623e\"\n ],\n }\n ],\n \"fields\": [\n {\n \"id\": \"1f44d55f-f161-4938-a659-f8026467f126\",\n \"path\": [\n \"$.credentialSubject.familyName\"\n ],\n \"purpose\": \"The claim must be from one of the specified issuers\",\n \"filter\": {\n \"const\": \"SMITH\"\n }\n },\n {\n \"id\": \"332be361-823a-4863-b18b-c3b930c5623e\",\n \"path\": [\n \"$.id\"\n ],\n \"purpose\": \"Specify the id of the credential to present\",\n \"filter\": {\n \"const\": \"https://credential.example.com/residents/1234567890\"\n }\n }\n ]\n }\n }\n ]\n }\n }\n}\n
Note the additional constraint on \"path\": [ \"$.id\" ]
- this restricts the presented credential to the one with the matching credential.id
. Any credential attributes can be used, however this presumes that the issued credentials contain a uniquely identifying attribute.
Another option is for Alice to specify the credential record_id
- this is an internal value within ACA-Py:
{\n \"dif\": {\n \"issuer_id\": \"did:key:zUC7Dus47jW5Avcne8LLsUvJSdwspmErgehxMWqZZy8eSSNoHZ4x8wgs77sAmQtCADED5RQP1WWhvt7KFNm6GGMxdSGpKu3PX6R9a61G9VoVsiFoRf1yoK6pzhq9jtFP3e2SmU9\",\n \"presentation_definition\": {\n \"format\": {\n \"ldp_vp\": {\n \"proof_type\": [\n \"BbsBlsSignature2020\"\n ]\n }\n },\n \"id\": \"32f54163-7166-48f1-93d8-ff217bdb0654\",\n \"input_descriptors\": [\n {\n \"id\": \"citizenship_input_1\",\n \"name\": \"Some kind of citizenship check\",\n \"schema\": [\n {\n \"uri\": \"https://www.w3.org/2018/credentials#VerifiableCredential\"\n },\n {\n \"uri\": \"https://w3id.org/citizenship#PermanentResident\"\n }\n ],\n \"constraints\": {\n \"limit_disclosure\": \"required\",\n \"fields\": [\n {\n \"path\": [\n \"$.credentialSubject.familyName\"\n ],\n \"purpose\": \"The claim must be from one of the specified issuers\",\n \"filter\": {\n \"const\": \"SMITH\"\n }\n }\n ]\n }\n }\n ]\n },\n \"record_ids\": {\n \"citizenship_input_1\": [ \"1496316f972e40cf9b46b35971182337\" ]\n }\n }\n}\n
"},{"location":"demo/AliceWantsAJsonCredential/#another-credential-issue-example","title":"Another Credential Issue Example","text":"TBD the following credential is based on the W3C Vaccination schema:
{\n \"connection_id\": \"ad35a4d8-c84b-4a4f-a83f-1afbf134b8b9\",\n \"filter\": {\n \"ld_proof\": {\n \"credential\": {\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://w3id.org/vaccination/v1\"\n ],\n \"type\": [\"VerifiableCredential\", \"VaccinationCertificate\"],\n \"issuer\": \"did:key:zUC71pj2gpDLfcZ9DE1bMtjZGWCSLhkQsUCaKjqXtCftGkz27894pEX9VvGNiFsaV67gqv2TEPQ2aDaDDdTDNp42LfDdK1LaWSBCfzsQEyaiR1zjZm1RtoRu1ZM6v6vz4TiqDgU\",\n \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n \"credentialSubject\": {\n \"id\": \"did:key:aksdkajshdkajhsdkjahsdkjahsdj\",\n \"type\": \"VaccinationEvent\",\n \"batchNumber\": \"1183738569\",\n \"administeringCentre\": \"MoH\",\n \"healthProfessional\": \"MoH\",\n \"countryOfVaccination\": \"NZ\",\n \"recipient\": {\n \"type\": \"VaccineRecipient\",\n \"givenName\": \"JOHN\",\n \"familyName\": \"SMITH\",\n \"gender\": \"Male\",\n \"birthDate\": \"1958-07-17\"\n },\n \"vaccine\": {\n \"type\": \"Vaccine\",\n \"disease\": \"COVID-19\",\n \"atcCode\": \"J07BX03\",\n \"medicinalProductName\": \"COVID-19 Vaccine Moderna\",\n \"marketingAuthorizationHolder\": \"Moderna Biotech\"\n }\n }\n },\n \"options\": {\n \"proofType\": \"BbsBlsSignature2020\"\n }\n }\n }\n}\n
"},{"location":"demo/Endorser/","title":"Endorser Demo","text":"There are two ways to run the alice/faber demo with endorser support enabled.
"},{"location":"demo/Endorser/#run-faber-as-an-author-with-a-dedicated-endorser-agent","title":"Run Faber as an Author, with a dedicated Endorser agent","text":"This approach runs Faber as an un-privileged agent, and starts a dedicated Endorser Agent in a sub-process (an instance of ACA-Py) to endorse Faber's transactions.
Start a VON Network instance and a Tails server:
--logs
option if you want to use the same terminal for running both VON Network and the Tails server. When you are finished with VON Network, follow the Stopping And Removing a VON Network instructions.Start up Faber as Author (note the tails file size override, to allow testing of the revocation registry roll-over):
TAILS_FILE_COUNT=5 ./run_demo faber --endorser-role author --revocation\n
Start up Alice as normal:
./run_demo alice\n
You can run all of Faber's functions as normal - if you watch the console you will see that all ledger operations go through the endorser workflow.
If you issue more than 5 credentials, you will see Faber creating a new revocation registry (including endorser operations).
"},{"location":"demo/Endorser/#run-alice-as-an-author-and-faber-as-an-endorser","title":"Run Alice as an Author and Faber as an Endorser","text":"This approach sets up the endorser roles to allow manual testing using the agents' swagger pages:
Start a VON Network and a Tails server using the instructions above.
Start up Faber as Endorser:
TAILS_FILE_COUNT=5 ./run_demo faber --endorser-role endorser --revocation\n
Start up Alice as Author:
TAILS_FILE_COUNT=5 ./run_demo alice --endorser-role author --revocation\n
Copy the invitation from Faber to Alice to complete the connection.
Then in the Alice shell, select option \"D\" and copy Faber's DID (it is the DID displayed on faber agent startup).
This starts up the ACA-Py agents with the endorser role set (via the new command-line args) and sets up the connection between the 2 agents with appropriate configuration.
Then, in the Alice swagger page you can create a schema and cred def, and all the endorser steps will happen automatically. You don't need to specify a connection id or explicitly request endorsement (ACA-Py does it all automatically based on the startup args).
If you check the endorser transaction records in either Alice or Faber you can see that the endorser protocol executes automatically and the appropriate endorsements were endorsed before writing the transactions to the ledger.
"},{"location":"demo/OpenAPIDemo/","title":"Aries OpenAPI Demo","text":"What better way to learn about controllers than by actually being one yourself! In this demo, that\u2019s just what happens\u2014you are the controller. You have access to the full set of API endpoints exposed by an ACA-Py instance, and you will see the events coming from ACA-Py as they happen. Using that information, you'll help Alice's and Faber's agents connect, Faber's agent issue an education credential to Alice, and then ask Alice to prove she possesses the credential. Who knows why Faber needs to get the proof, but it lets us show off more protocols.
"},{"location":"demo/OpenAPIDemo/#contents","title":"Contents","text":"We will get started by opening three browser tabs that will be used throughout the lab. Two will be Swagger UIs for the Faber and Alice agent and one for the public ledger (showing the Hyperledger Indy ledger). As well, we'll keep the terminal sessions where we started the demos handy, as we'll be grabbing information from them as well.
Let's start with the ledger browser. For this demo, we're going to use an open public ledger operated by the BC Government's VON Team. In your first browser tab, go to: http://test.bcovrin.vonx.io. This will be called the \"ledger tab\" in the instructions below.
For the rest of the set up, you can choose to run the terminal sessions in your browser (no local resources needed), or you can run it in Docker on your local system. Your choice, each is covered in the next two sections.
Note: In the following, when we start the agents we use several special demo settings. The command we use is this: LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber --events --no-auto --bg
. In that:
LEDGER_URL
environment variable informs the agent what ledger to use.--events
option indicates that we want the controller to display the webhook events from ACA-Py in the log displayed on the terminal.--no-auto
option indicates that we don't want the ACA-Py agent to automatically handle some events such as connecting. We want the controller (you!) to handle each step of the protocol.--bg
option indicates that the docker container will run in the background, so accidentally hitting Ctrl-C won't stop the process.To run the necessary terminal sessions in your browser, go to the Docker playground service Play with Docker. Don't know about Play with Docker? Check this out to learn more.
"},{"location":"demo/OpenAPIDemo/#start-the-faber-agent","title":"Start the Faber Agent","text":"In a browser, go to the Play with Docker home page, Login (if necessary) and click \"Start.\" On the next screen, click (in the left menu) \"+Add a new instance.\" That will start up a terminal in your browser. Run the following commands to start the Faber agent.
git clone https://github.com/openwallet-foundation/acapy\ncd acapy/demo\nLEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber --events --no-auto --bg\n
Once you are back at the command prompt, we'll use docker's logging capability to see what's being written to the terminal:
docker logs -f faber\n
Once the Faber agent has started up (with the invite displayed), click the link near the top of the screen 8021
. That will start an instance of the OpenAPI/Swagger user interface connected to the Faber instance. Note that the URL on the OpenAPI/Swagger instance is: http://ip....8021.direct...
.
Remember that the OpenAPI/Swagger browser tab with an address containing 8021 is the Faber agent.
NOTE: Hit \"Ctrl-C\" at any time to get back to the command line. When you are done with the command line, you can return to seeing the logs from the Faber agent by running docker logs -f faber
Now to start Alice's agent. Click the \"+Add a new instance\" button again to open another terminal session. Run the following commands to start Alice's agent:
git clone https://github.com/openwallet-foundation/acapy\ncd acapy/demo\nLEDGER_URL=http://test.bcovrin.vonx.io ./run_demo alice --events --no-auto --bg\n
Once you are back at the command prompt, we'll use docker's logging capability to see what's being written to the terminal:
docker logs -f alice\n
You can ignore a message like WARNING: your terminal doesn't support cursor position requests (CPR).
Once the Alice agent has started up (with the invite:
prompt displayed), click the link near the top of the screen 8031
. That will start an instance of the OpenAPI/Swagger User Interface connected to the Alice instance. Note that the URL on the OpenAPI/Swagger instance is: http://ip....8031.direct...
.
NOTE: Hit \"Ctrl-C\" to get back to the command line. When you are done with the command line, you can return to seeing the logs from the Faber agent by running docker logs -f faber
Remember that the OpenAPI/Swagger browser tab with an address containing 8031 is Alice's agent.
Show me a screenshot!You are ready to go. Skip down to the Using the OpenAPI/Swagger User Interface section.
"},{"location":"demo/OpenAPIDemo/#running-in-docker","title":"Running in Docker","text":"To run the demo on your local system, you must have git, a running Docker installation, and terminal windows running bash. Need more information about getting set up? Click here to learn more.
"},{"location":"demo/OpenAPIDemo/#start-the-faber-agent_1","title":"Start the Faber Agent","text":"To begin running the demo in Docker, open up two terminal windows, one each for Faber\u2019s and Alice\u2019s agent.
In the first terminal window, clone the ACA-Py repo, change into the demo folder and start the Faber agent:
git clone https://github.com/openwallet-foundation/acapy\ncd acapy/demo\nLEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber --events --no-auto --bg\n
Once you are back at the command prompt, we'll use docker's logging capability to see what's being written to the terminal:
docker logs -f faber\n
If all goes well, the agent will show a message indicating it is running. Use the second browser tab to navigate to http://localhost:8021. You should see an OpenAPI/Swagger user interface with a (long-ish) list of API endpoints. These are the endpoints exposed by the Faber agent.
NOTE: Hit \"Ctrl-C\" to get back to the command line. When you are done with the command line, you can return to seeing the logs from the Faber agent by running docker logs -f faber
Remember that the OpenAPI/Swagger browser tab with an address containing 8021 is the Faber agent.
Show me a screenshot!"},{"location":"demo/OpenAPIDemo/#start-the-alice-agent_1","title":"Start the Alice Agent","text":"To start Alice's agent, open up a second terminal window and in it, change to the same demo
directory as where Faber's agent was started above. Once there, start Alice's agent:
LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo alice --events --no-auto --bg\n
Once you are back at the command prompt, we'll use docker's logging capability to see what's being written to the terminal:
docker logs -f alice\n
You can ignore a message like `WARNING: your terminal doesn't support cursor position requests (CPR)` that may appear.
If all goes well, the agent will show a message indicating it is running. Open a third browser tab and navigate to http://localhost:8031. Again, you should see the OpenAPI/Swagger user interface with a list of API endpoints, this time the endpoints for Alice\u2019s agent.
NOTE: Hit \"Ctrl-C\" to get back to the command line. When you are done with the command line, you can return to seeing the logs from the Alice agent by running docker logs -f alice
Remember that the OpenAPI/Swagger browser tab with an address containing 8031 is Alice's agent.
Show me a screenshot!"},{"location":"demo/OpenAPIDemo/#restarting-the-docker-containers","title":"Restarting the Docker Containers","text":"When you complete the entire demo (not now!!), you can need to stop the two agents. To do that, get to the command line by hitting Ctrl-C and running:
docker stop faber\ndocker stop alice\n
"},{"location":"demo/OpenAPIDemo/#using-the-openapiswagger-user-interface","title":"Using the OpenAPI/Swagger User Interface","text":"Try to organize what you see on your screen to include both the Alice and Faber OpenAPI/Swagger tabs, and both (Alice and Faber) terminal sessions, all at the same time. After you execute an API call in one of the browser tabs, you will see a webhook event from the ACA-Py instance in the terminal window of the other agent. That's a controller's life. See an event, process it, send a response.
From time to time you will want to see what's happening on the ledger, so keep that browser tab handy as well. Likewise, if you make an error with one of the commands (e.g. bad data, improperly structured JSON), you will see the errors in the terminals.
In the instructions that follow, we\u2019ll let you know if you need to be in the Faber, Alice or Indy browser tab. We\u2019ll leave it to you to track which is which.
Using the OpenAPI/Swagger user interface is pretty simple. In the steps below, we'll indicate what API endpoint you need to use, such as `POST /connections/create-invitation`. That means you must:

- scroll to and find that endpoint;
- click on the endpoint name to expand its section of the UI;
- click on the `Try it out` button;
- fill in any data necessary to run the command;
- click `Execute`;
- check the results.

So, the mechanical steps are easy. It's the fourth step from the list above that can be tricky. Supplying the right data and, where JSON is involved, getting the syntax correct - braces and quotes can be a pain. When steps don't work, start your debugging by looking at your JSON.
Enough with the preliminaries, let\u2019s get started!
"},{"location":"demo/OpenAPIDemo/#establishing-a-connection","title":"Establishing a Connection","text":"We\u2019ll start the demo by establishing a connection between the Alice and Faber agents. We\u2019re starting there to demonstrate that you can use agents without having a ledger. We won\u2019t be using the Indy public ledger at all for this step. Since the agents communicate using DIDComm messaging and connect by exchanging pairwise DIDs and DIDDocs based on (an early version of) the did:peer
DID method, a public ledger is not needed.
In the Faber browser tab, navigate to the `POST /connections/create-invitation` endpoint. Replace the sample body with an empty production (`{}`) and execute the call. If successful, you should see a connection id, an invitation, and the invitation URL. The connection ids will be different on each run.

Hint: set an Alias on the invitation; this makes it easier to find the connection later on.
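If you prefer the command line, the same call can be made with curl against Faber's admin API. This is just a sketch, assuming the demo's Faber admin API is listening on localhost port 8021 as in this tutorial; the `alias` query parameter is optional:

```bash
# Ask Faber's agent to create a new invitation; the empty JSON body
# matches what we entered in the Swagger UI above.
curl -s -X POST "http://localhost:8021/connections/create-invitation?alias=alice" \
  -H "Content-Type: application/json" \
  -d '{}'
```

The response contains the `connection_id` and the `invitation` object used in the next step.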
### Copy the Invitation created by the Faber Agent

Copy the entire block of the `invitation` object, from the curly brackets `{}`, excluding the trailing comma.
Before switching over to the Alice browser tab, scroll to and execute the `GET /connections` endpoint to see the list of Faber's connections. You should see a connection with a `connection_id` that is identical to the invitation you just created, and that its state is `invitation`.
Switch to the Alice browser tab and get ready to execute the `POST /connections/receive-invitation` endpoint. Select all of the pre-populated text and replace it with the invitation object from the Faber tab. When you click `Execute` you should get back a connection response with a connection id, an invitation key, and the state of the connection, which should be `invitation`.

Hint: set an Alias on the invitation; this makes it easier to find the connection later on.
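As a command line sketch of the same step (assuming Alice's admin API is on localhost port 8031, and that you saved the copied invitation object to a hypothetical file named `invitation.json`):

```bash
# Hand Faber's invitation to Alice's agent; invitation.json holds the
# invitation object copied from Faber's create-invitation response.
curl -s -X POST "http://localhost:8031/connections/receive-invitation?alias=faber" \
  -H "Content-Type: application/json" \
  -d @invitation.json
```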
A key observation to make here. The "copy and paste" we are doing here from Faber's agent to Alice's agent is what is called an "out of band" message. Because we don't yet have a DIDComm connection between the two agents, we have to convey the invitation in plaintext (we can't encrypt it - no channel) using some other mechanism than DIDComm. With mobile agents, that's where QR codes often come in. Once we have the invitation in the receiver's agent, we can get back to using DIDComm.
"},{"location":"demo/OpenAPIDemo/#tell-alices-agent-to-accept-the-invitation","title":"Tell Alice's Agent to Accept the Invitation","text":"At this point Alice has simply stored the invitation in her wallet. You can see the status using the GET /connections
endpoint.
To complete a connection with Faber, she must accept the invitation and send a corresponding connection request to Faber. Find the `connection_id` in the connection response from the previous `POST /connections/receive-invitation` endpoint call. You may note that the same data was sent to the controller as an event from ACA-Py and is visible in the terminal. Scroll to the `POST /connections/{conn_id}/accept-invitation` endpoint and paste the `connection_id` in the `id` parameter field (you will have to click the `Try it out` button to see the available URL parameters). The response from clicking `Execute` should show that the connection has a state of `request`.
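The equivalent curl sketch, with a placeholder for the connection id you just found:

```bash
# Alice accepts the stored invitation; her agent sends a connection
# request to Faber. Replace the placeholder with Alice's connection_id.
CONN_ID="REPLACE-WITH-ALICE-CONNECTION-ID"
curl -s -X POST "http://localhost:8031/connections/$CONN_ID/accept-invitation"
```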
In the Faber terminal session, an event (a web service callback from ACA-Py to the controller) has been received about the request from Alice. Copy the `connection_id` from the event for the next step.
Note that the connection ID held by Alice is different from the one held by Faber. That makes sense, as both independently created connection objects, each with a unique, self-generated GUID.
"},{"location":"demo/OpenAPIDemo/#the-faber-agent-completes-the-connection","title":"The Faber Agent Completes the Connection","text":"To complete the connection process, Faber will respond to the connection request from Alice. Scroll to the POST /connections/{conn_id}/accept-request
endpoint and paste the connection_id
you previously copied into the id
parameter field (you will have to click the Try it out
button to see the available URL parameters). The response from clicking the Execute
button should show that the connection has a state of response
, which indicates that Faber has accepted Alice's connection request.
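From the command line, the sketch looks like this (note this is Faber's admin API on port 8021, using Faber's own `connection_id`):

```bash
# Faber accepts Alice's connection request, completing the connection.
CONN_ID="REPLACE-WITH-FABER-CONNECTION-ID"
curl -s -X POST "http://localhost:8021/connections/$CONN_ID/accept-request"
```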
Switch over to the Alice browser tab.
Scroll to and execute `GET /connections` to see a list of Alice's connections, and the information tracked about each connection. You should see the one connection Alice's agent has, that it is with the Faber agent, and that its state is `active`.
As with Faber's side of the connection, Alice received a notification that Faber had accepted her connection request.
Show me the event"},{"location":"demo/OpenAPIDemo/#review-the-connection-status-in-fabers-agent","title":"Review the Connection Status in Faber's Agent","text":"You are connected! Switch to the Faber browser tab and run the same GET /connections
endpoint to see Faber's view of the connection. Its state is also active
. Note the connection_id
, you\u2019ll need it later in the tutorial.
Once you have a connection between two agents, you have a channel to exchange secure, encrypted messages. In fact these underlying encrypted messages (similar to envelopes in a postal system) enable the delivery of messages that form the higher level protocols, such as issuing credentials and providing proofs. So, let's send a couple of messages that contain the simplest of content: text. For this we will use the Basic Message protocol, Aries RFC 0095.
"},{"location":"demo/OpenAPIDemo/#sending-a-message-from-alice-to-faber","title":"Sending a message from Alice to Faber","text":"On Alice's swagger page, scroll to the POST /connections/{conn_id}/send-message
endpoint. Click on Try it Out
and enter a message in the body provided (for example {\"content\": \"Hello Faber\"}
). Enter the connection id of Alice's connection in the field provided. Then click on Execute
.
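A curl sketch of the same message, assuming Alice's admin API on port 8031:

```bash
# Send a basic message from Alice to Faber over their connection.
CONN_ID="REPLACE-WITH-ALICE-CONNECTION-ID"
curl -s -X POST "http://localhost:8031/connections/$CONN_ID/send-message" \
  -H "Content-Type: application/json" \
  -d '{"content": "Hello Faber"}'
```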
How does Faber know that a message was sent? If you take a look at Faber's console window, you can see that Faber's agent has raised an Event that the message was received:
Faber's controller application can take whatever action is necessary to process this message. It could trigger some application code, or it might just be something the Faber application needs to display to its user (for example a reminder about some action the user needs to take).
"},{"location":"demo/OpenAPIDemo/#alices-agent-verifies-that-faber-has-received-the-message","title":"Alice's Agent Verifies that Faber has Received the Message","text":"How does Alice get feedback that Faber has received the message? The same way - when Faber's agent acknowledges receipt of the message, Alice's agent raises an Event to let the Alice controller know:
Show me a screenshotAgain, Alice's agent can take whatever action is necessary, possibly just flagging the message as having been received
.
The next thing we want to do in the demo is have the Faber agent issue a credential to Alice's agent. To this point, we have not used the Indy ledger at all. Establishing the connection and messaging has been done with pairwise DIDs based on the `did:peer` method. Verifiable credentials must be rooted in a public DID ledger to enable the presentation of proofs.
Before the Faber agent can issue a credential, it must register a DID on the Indy public ledger, publish a schema, and create a credential definition. In the "real world", the Faber agent would do this before connecting with any other agents. And, since we are using the handy "./run_demo faber" (and "./run_demo alice") scripts to start up our agents, the Faber version of the script has already:

- registered a public DID on the ledger;
- published a schema; and,
- created a credential definition.
The schema and credential definition could also be created through this swagger interface.
We don't cover the details of those actions in this tutorial, but there are other materials available that go through these details.
To Do: Add a link to directions for doing this manually, and to where in the controller Python code this is done.
"},{"location":"demo/OpenAPIDemo/#confirming-your-schema-and-credential-definition","title":"Confirming your Schema and Credential Definition","text":"You can confirm the schema and credential definition were published by going back to the Indy ledger browser tab using Faber's public DID. You may have saved that from a previous step, but if not here is an API call you can make to get that information. Using Faber's swagger page and scroll to the GET /wallet/did/public
endpoint. Click on Try it Out
and Execute
and you will see Faber's public DID.
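Or, as a quick command line sketch against Faber's admin API:

```bash
# Fetch Faber's public DID.
curl -s "http://localhost:8021/wallet/did/public"
```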
On the ledger browser of the BCovrin ledger, click the `Domain` page, refresh, and paste the Faber public DID into the `Filter:` field:
The ledger browser should refresh and display the four (4) transactions on the ledger related to this DID:
You can also look up the Schema and Credential Definition information using Faber's swagger page. Use the `GET /schemas/created` endpoint to get a list of schemas, including the one `schema_id` that the Faber agent has defined. Keep this section of the Swagger page expanded as we'll need to copy the Id as part of starting the issue credential protocol coming next.

Likewise use the `GET /credential-definitions/created` endpoint to get the list of the one (in this case) credential definition id created by Faber. Keep this section of the Swagger page expanded as we'll also need to copy the Id as part of starting the issue credential protocol coming next.
Hint: Remember how the schema and credential definitions were created for you as Faber started up? To do it yourself, use the `POST` versions of these endpoints. Now you know!
The one-time setup work for issuing a credential is complete: creating a DID, schema and credential definition. We can now issue 1 or 1 million credentials without having to do those steps again. Astute readers might note that we did not set up a revocation registry, so we cannot revoke the credentials we issue with that credential definition. You can't have everything in an "easy" tutorial!
"},{"location":"demo/OpenAPIDemo/#issuing-a-credential","title":"Issuing a Credential","text":"Triggering the issuance of a credential from the Faber agent to Alice\u2019s agent is done with another API call. In the Faber browser tab, scroll down to the POST /issue-credential-2.0/send
and get ready to (but don\u2019t yet) execute the request. Before execution, you need to update most of the data elements in the JSON. We now cover how to update all the fields.
First, get the connection Id for Faber's connection with Alice. You can copy that from the Faber terminal (the last received event includes it), or scroll up on the Faber swagger tab to the `GET /connections` API endpoint, execute, copy it and paste the `connection_id` value into the same field in the issue credential JSON.
For the following fields, scroll on Faber's Swagger page to the listed endpoint, execute (if necessary), copy the response value and paste as the values of the following JSON items:

- `issuer_did` the Faber public DID (use `GET /wallet/did/public`),
- `schema_id` the Id of the schema Faber created (use `GET /schemas/created`) and,
- `cred_def_id` the Id of the credential definition Faber created (use `GET /credential-definitions/created`)

into the `filter` section's `indy` subsection. Remove the `"dif"` subsection of the `filter` section within the JSON, and specify the remaining indy filter criteria as follows:
- `schema_version`: set to the last segment of the `schema_id`, a three part version number that was randomly generated on startup of the Faber agent. Segments of the `schema_id` are separated by ":"s.
- `schema_issuer_did`: set to the same value as in `issuer_did`,
- `schema_name`: set to the second last segment of the `schema_id`, in this case `degree schema`
Finally, set the remaining values as follows:

- `auto_remove`: set to `true` (no quotes), see note below
- `comment`: set to any string. It's intended to let Alice know something about the credential being offered.
- `trace`: set to `false` (no quotes). It's for troubleshooting, performance profiling, and/or diagnostics.
By setting `auto_remove` to true, ACA-Py will automatically remove the credential exchange record after the protocol completes. When implementing a controller, this is the likely setting to use to reduce agent storage usage, but it implies that if a record of the issuance of the credential is needed, the controller must save it somewhere else. For example, Faber College might extend their Student Information System, where they track all their students, to record when credentials are issued to students, and the Ids of the issued credentials.
Finally, we need to put into the JSON the data values for the `credential_preview` section of the JSON. Copy the following and paste it between the square brackets of the `attributes` item, replacing what is there. Feel free to change the attribute `value` items, but don't change the labels or names:
{\n \"name\": \"name\",\n \"value\": \"Alice Smith\"\n },\n {\n \"name\": \"timestamp\",\n \"value\": \"1234567890\"\n },\n {\n \"name\": \"date\",\n \"value\": \"2018-05-28\"\n },\n {\n \"name\": \"degree\",\n \"value\": \"Maths\"\n },\n {\n \"name\": \"birthdate_dateint\",\n \"value\": \"19640101\"\n }\n
(Note that the birthdate above is used later on in a presentation to pass an "age proof".)
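Putting all of the pieces together, the assembled request will look something like the sketch below. Every id here is a hypothetical placeholder; substitute the values you collected above (the `@type` value for the preview is the standard Issue Credential 2.0 preview type):

```bash
# A sketch of the complete issue-credential call; replace each
# REPLACE-WITH-... placeholder with the values gathered above.
curl -s -X POST "http://localhost:8021/issue-credential-2.0/send" \
  -H "Content-Type: application/json" \
  -d '{
    "connection_id": "REPLACE-WITH-FABER-CONNECTION-ID",
    "auto_remove": true,
    "comment": "Your degree credential from Faber College",
    "trace": false,
    "filter": {
      "indy": {
        "issuer_did": "REPLACE-WITH-FABER-PUBLIC-DID",
        "schema_issuer_did": "REPLACE-WITH-FABER-PUBLIC-DID",
        "schema_id": "REPLACE-WITH-SCHEMA-ID",
        "schema_name": "degree schema",
        "schema_version": "REPLACE-WITH-SCHEMA-VERSION",
        "cred_def_id": "REPLACE-WITH-CRED-DEF-ID"
      }
    },
    "credential_preview": {
      "@type": "https://didcomm.org/issue-credential/2.0/credential-preview",
      "attributes": [
        { "name": "name", "value": "Alice Smith" },
        { "name": "timestamp", "value": "1234567890" },
        { "name": "date", "value": "2018-05-28" },
        { "name": "degree", "value": "Maths" },
        { "name": "birthdate_dateint", "value": "19640101" }
      ]
    }
  }'
```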
OK, finally, you are ready to click `Execute`. The request should work, but if it doesn't - check your JSON! Did you get all the quotes and commas right?
To confirm the issuance worked, scroll up on the Faber Swagger page to the `issue-credential v2.0` section and execute the `GET /issue-credential-2.0/records` endpoint. You should see a lot of information about the exchange just initiated.
Let\u2019s look at it from Alice\u2019s side. Alice's agent source code automatically handles credential offers by immediately responding with a credential request. Scroll back in the Alice terminal to where the credential issuance started. If you've followed the full script, that is just after where we used the basic message protocol to send text messages between Alice and Faber.
Alice's agent first received a notification of a Credential Offer, to which it responded with a Credential Request. Faber received the Credential Request and responded in turn with an Issue Credential message. Scroll down through the events from ACA-Py to the controller to see the notifications of those messages. Make sure you scroll all the way to the bottom of the terminal so you can continue with the process.
### Alice Stores Credential in her Wallet

We can check (via Alice's Swagger interface) the issue credential status by hitting the `GET /issue-credential-2.0/records` endpoint. Note that within the results, the `cred_ex_record` just received has a `state` of `credential-received`, but not yet `done`. Let's address that.
First, we need the `cred_ex_id` from the API call response above, or from the event in the terminal; use the endpoint `POST /issue-credential-2.0/records/{cred_ex_id}/store` to tell Alice's ACA-Py instance to store the credential in agent storage (aka the Indy Wallet). Note that in the JSON for that endpoint we can provide a credential Id to store in the wallet by setting a value in the `credential_id` string. A real controller might use the `cred_ex_id` for that, or use something else that makes sense in the agent's business scenario (but the agent generates a random credential identifier by default).
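A curl sketch of the store call (the `cred_ex_id` and `credential_id` values are placeholders):

```bash
# Store the received credential in Alice's wallet under a chosen id.
CRED_EX_ID="REPLACE-WITH-CRED-EX-ID"
curl -s -X POST "http://localhost:8031/issue-credential-2.0/records/$CRED_EX_ID/store" \
  -H "Content-Type: application/json" \
  -d '{"credential_id": "alice-faber-degree"}'
```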
Now, in Alice's swagger browser tab, find the `credentials` section and within that, execute the `GET /credentials` endpoint. There should be a list of credentials held by Alice, with just a single entry, the credential issued from the Faber agent. Note that the element `referent` is the value of the `credential_id` element used in other calls. `referent` is the name returned in the `indy-sdk` call to get the set of credentials for the wallet, and ACA-Py code does not change it in the response.
On the Faber side, we can see by scanning back in the terminal that it received events to notify that the credential was issued and accepted.
Note that once the credential processing completed, Faber's agent deleted the credential exchange record from its wallet. This can be confirmed by executing the `GET /issue-credential-2.0/records` endpoint.
You\u2019ve done it, issued a credential! w00t!
"},{"location":"demo/OpenAPIDemo/#issue-credential-notes","title":"Issue Credential Notes","text":"Those that know something about the Indy process for issuing a credential and the DIDComm Issue Credential
protocol know that there multiple steps to issuing credentials, a back and forth between the issuer and the holder to (at least) offer, request and issue the credential. All of those messages happened, but the two agents took care of those details rather than bothering the controller (you, in this case) with managing the back and forth.
POST /issue-credential-2.0/send
administrative message, which handles the back and forth for the issuer automatically. We could have used the other /issue-credential-2.0/
endpoints to allow the controller to handle each step of the protocol.issue_credential_v2_0
event always responds to credential offers with corresponding credential requests.If you would like to perform all of the issuance steps manually on the Faber agent side, use a sequence of the other /issue-credential-2.0/
messages. Use the GET /issue-credential-2.0/records
to both check the credential exchange state as you progress through the protocol and to find some of the data you\u2019ll need in executing the sequence of requests.
The following table lists endpoints that you need to call ("REST service") and callbacks that your agent will receive ("callback") that you need to respond to. See the detailed API docs.
| Protocol Step | Faber (Issuer) | Alice (Holder) | Notes |
| --- | --- | --- | --- |
| Send Credential Offer | `POST /issue-credential-2.0/send-offer` | | REST service |
| Receive Offer | | `/issue_credential_v2_0/` | callback |
| Send Credential Request | | `POST /issue-credential-2.0/records/{cred_ex_id}/send-request` | REST service |
| Receive Request | `/issue_credential_v2_0/` | | callback |
| Issue Credential | `POST /issue-credential-2.0/records/{cred_ex_id}/issue` | | REST service |
| Receive Credential | | `/issue_credential_v2_0/` | callback |
| Store Credential | | `POST /issue-credential-2.0/records/{cred_ex_id}/store` | REST service |
| Receive Acknowledgement | `/issue_credential_v2_0/` | | callback |
| Store Credential Id | | | application function |

## Requesting/Presenting a Proof

Alice now has her Faber credential. Let's have the Faber agent send a request for a presentation (a proof) using that credential. This should be pretty easy for you at this point.
"},{"location":"demo/OpenAPIDemo/#faber-sends-a-proof-request","title":"Faber sends a Proof Request","text":"From the Faber browser tab, get ready to execute the POST /present-proof-2.0/send-request
endpoint. After hitting Try it Now
, erase the data in the block labelled \"Edit Value Model\", replacing it with the text below. Once that is done, replace in the JSON each instance of cred_def_id
(there are four instances) and connection_id
with the values found using the same techniques we've used earlier in this tutorial. Both can be found by scrolling back a little in the Faber terminal, or you can execute API endpoints we've already covered. You can also change the value of the comment
item to whatever you want.
{\n \"comment\": \"This is a comment about the reason for the proof\",\n \"connection_id\": \"e469e0f3-2b4d-4b12-9ac7-293f23e8a816\",\n \"presentation_request\": {\n \"indy\": {\n \"name\": \"Proof of Education\",\n \"version\": \"1.0\",\n \"requested_attributes\": {\n \"0_name_uuid\": {\n \"name\": \"name\",\n \"restrictions\": [\n {\n \"cred_def_id\": \"SsX9siFWXJyCAmXnHY514N:3:CL:8:faber.agent.degree_schema\"\n }\n ]\n },\n \"0_date_uuid\": {\n \"name\": \"date\",\n \"restrictions\": [\n {\n \"cred_def_id\": \"SsX9siFWXJyCAmXnHY514N:3:CL:8:faber.agent.degree_schema\"\n }\n ]\n },\n \"0_degree_uuid\": {\n \"name\": \"degree\",\n \"restrictions\": [\n {\n \"cred_def_id\": \"SsX9siFWXJyCAmXnHY514N:3:CL:8:faber.agent.degree_schema\"\n }\n ]\n },\n \"0_self_attested_thing_uuid\": {\n \"name\": \"self_attested_thing\"\n }\n },\n \"requested_predicates\": {\n \"0_age_GE_uuid\": {\n \"name\": \"birthdate_dateint\",\n \"p_type\": \"<=\",\n \"p_value\": 20030101,\n \"restrictions\": [\n {\n \"cred_def_id\": \"SsX9siFWXJyCAmXnHY514N:3:CL:8:faber.agent.degree_schema\"\n }\n ]\n }\n }\n }\n }\n}\n
(Note that the birthdate requested above is used as an "age proof"; the calculation is something like `now() - years(18)`, and the presented birthdate must be on or before this date. You can see the calculation in action in the `faber.py` demo code.)
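As a rough sketch of that calculation, the "18 years ago" threshold is just today's date, minus 18 years, expressed as a YYYYMMDD integer (GNU `date` syntax shown; the flags differ on macOS/BSD):

```bash
# Compute the dateint threshold for an "older than 18" predicate.
date --date="18 years ago" +%Y%m%d
```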
Notice that the proof request is using a predicate to check if Alice is older than 18 without asking for her age. Not sure what this has to do with her education level! Click `Execute` and cross your fingers. If the request fails check your JSON!
As before, Alice receives a webhook event from her agent telling her she has received a Proof Request. In our scenario, the ACA-Py instance automatically selects a matching credential and responds with a Proof.
In a real scenario, for example if Alice had a mobile agent on her smartphone, the agent would prompt Alice whether she wanted to respond or not.
"},{"location":"demo/OpenAPIDemo/#faber-verifying-the-proof","title":"Faber - Verifying the Proof","text":"Note that in the response, the state is request-sent
. That is because when the HTTP response was generated (immediately after sending the request), Alice's agent had not yet responded to the request. We\u2019ll have to do another request to verify the presentation worked. Copy the value of the pres_ex_id
field from the event in the Faber terminal and use it in executing the GET /present-proof-2.0/records/{pres_ex_id}
endpoint. That should return a result showing the state
as done
and verified
as true
. Proof positive!
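The same check from the command line, as a sketch with a placeholder id:

```bash
# Check the state of the presentation exchange on Faber's side.
PRES_EX_ID="REPLACE-WITH-PRES-EX-ID"
curl -s "http://localhost:8021/present-proof-2.0/records/$PRES_EX_ID"
```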
You can see some of Faber's activity below:
Show me Faber's event activity"},{"location":"demo/OpenAPIDemo/#present-proof-notes","title":"Present Proof Notes","text":"As with the issue credential process, the agents handled some of the presentation steps without bothering the controller. In this case, Alice's agent processed the presentation request automatically through its handler for the present_proof_v2_0
event, and her wallet contained exactly one credential that satisfied the presentation-request from the Faber agent. Similarly, the Faber agent's handler for the event responds automatically and so on receipt of the presentation, it verified the presentation and updated the status accordingly.
If you would like to perform all of the proof request/response steps manually, you can call all of the individual `/present-proof-2.0` messages.
The following table lists endpoints that you need to call (\"REST service\") and callbacks that your agent will receive (\"callback\") that you need to respond to. See the detailed API docs.
Protocol Step Faber (Verifier) Alice (Holder/Prover) Notes Send Proof RequestPOST /present-proof-2.0/send-request
REST service Receive Proof Request /present_proof_v2_0 callback (webhook) Find Credentials GET /present-proof-2.0/records/{pres_ex_id}/credentials
REST service Select Credentials application or user function Send Proof POST /present-proof-2.0/records/{pres_ex_id}/send-presentation
REST service Receive Proof /present_proof_v2_0 callback (webhook) Validate Proof POST /present-proof-2.0/records/{pres_ex_id}/verify-presentation
REST service Save Proof application data"},{"location":"demo/OpenAPIDemo/#conclusion","title":"Conclusion","text":"That\u2019s the OpenAPI-based tutorial. Feel free to play with the API and learn how it works. More importantly, as you implement a controller, use the OpenAPI user interface to test out the calls you will be using as you go. The list of API calls is grouped by protocol and if you are familiar with the protocols (Aries RFCs) the API call names should be pretty obvious.
One limitation of you being the controller is that you don't see the events from the agent that a controller program sees. For example, you, as Alice's agent, are not notified when Faber initiates the sending of a Credential. Some of those things show up in the terminal as messages, but others you just have to know have happened based on a successful API call.
"},{"location":"demo/PostmanDemo/","title":"Aries Postman Demo","text":"In these demos we will use Postman as our controller client.
"},{"location":"demo/PostmanDemo/#contents","title":"Contents","text":"Welcome to the Postman demo. This is an addition to the available OpenAPI demo, providing a set of collections to test and demonstrate various aca-py functionalities.
"},{"location":"demo/PostmanDemo/#installing-postman","title":"Installing Postman","text":"Download, install and launch postman.
"},{"location":"demo/PostmanDemo/#creating-a-workspace","title":"Creating a workspace","text":"Create a new postman workspace labeled \"acapy-demo\".
"},{"location":"demo/PostmanDemo/#importing-the-environment","title":"Importing the environment","text":"In the environment tab from the left, click the import button. You can paste this link which is the environment file in the ACA-Py repository.
Make sure you have the environment set as your active environment.
"},{"location":"demo/PostmanDemo/#importing-the-collections","title":"Importing the collections","text":"In the collections tab from the left, click the import button.
The following collections are available:
Once you are setup, you will be ready to run postman requests. The order of the request is important, since some values are saved dynamically as environment variables for subsequent calls.
You have your environment where you define variables to be accessed by your collections.
Each collection consists of a series of requests which can be configured independently.
"},{"location":"demo/PostmanDemo/#experimenting-with-the-vc-api-endpoints","title":"Experimenting with the vc-api endpoints","text":"Make sure you have a demo agent available. You can use the following command to deploy one:
LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber --bg\n
When running for the first time, please allow some time for the images to build.
"},{"location":"demo/PostmanDemo/#register-new-dids","title":"Register new dids","text":"The first 2 requests for this collection will create 2 did:keys. We will use those in subsequent calls to issue Ed25519Signature2020
and BbsBlsSignature2020
credentials. Run the 2 did creation requests. These requests will use the /wallet/did/create
endpoint.
For issuing, you must input a w3c compliant json-ld credential and issuance options in your request body. The issuer field must be a registered did from the agent's wallet. The suite will be derived from the did method.
{\n \"credential\": { \n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\"\n ],\n \"type\": [\n \"VerifiableCredential\"\n ],\n \"issuer\": \"did:example:123\",\n \"issuanceDate\": \"2022-05-01T00:00:00Z\",\n \"credentialSubject\": {\n \"id\": \"did:example:123\"\n }\n },\n \"options\": {}\n}\n
Some examples have been pre-configured in the collection. Run the requests and inspect the results. Experiment with different credentials.
"},{"location":"demo/PostmanDemo/#store-and-retrieve-credentials","title":"Store and retrieve credentials","text":"Your last issued credential will be stored as an environment variable for subsequent calls, such as storing, verifying and including in a presentation.
Try running the store credential request, then retrieve the credential with the list and fetch requests. Try going back and forth between the issuance endpoints and the storage endpoints to store multiple different credentials.
"},{"location":"demo/PostmanDemo/#verify-credentials","title":"Verify credentials","text":"You can verify your last issued credential with this endpoint or any issued credential you provide to it.
"},{"location":"demo/PostmanDemo/#prove-a-presentation","title":"Prove a presentation","text":"Proving a presentation is an action where a holder will prove ownership of a credential by signing or demonstrating authority over the document.
"},{"location":"demo/PostmanDemo/#verify-a-presentation","title":"Verify a presentation","text":"The final request is to verify a presentation.
"},{"location":"demo/ReusingAConnection/","title":"Reusing a Connection","text":"The Aries RFC 0434 Out of Band protocol enables the concept of reusing a connection such that when using RFC 0023 DID Exchange to establish a connection with an agent with which you already have a connection, you can reuse the existing connection instead of creating a new one. This is something you couldn't do a with the older RFC 0160 Connection Protocol that we used in the early days of Aries. It was a pain, and made for a lousy user experience, as on every visit to an existing contact, the invitee got a new connection.
The requirements on your invitations (such as in the example below) are:
services
item MUST be a resolvable DID.services
item MUST NOT be an inline
service.services
item is the same one in every invitation.Example invitation:
{\n \"@type\": \"https://didcomm.org/out-of-band/1.1/invitation\",\n \"@id\": \"77489d63-caff-41fe-a4c1-ec7e2ff00695\",\n \"label\": \"faber.agent\",\n \"handshake_protocols\": [\n \"https://didcomm.org/didexchange/1.0\"\n ],\n \"services\": [\n \"did:sov:4JiUsoK85pVkkB1bAPzFaP\"\n ]\n}\n
Here's the flow that demonstrates where reuse helps. For simplicity, we'll use the terms \"Issuer\" and \"Wallet\" in this example, but it applies to any connection between any two agents (the inviter and the invitee) that establish connections with one another.
request
with a response
message, and the connection is established.services
item in the invitation -- see example below) that it already has a connection to the Issuer, so instead of sending a DID Exchange request
message back to the Issuer, they send an RFC 0434 Out of Band reuse DIDComm message, and both parties know to use the existing connection.request
message, a new connection would have been established.The RFC 0434 Out of Band protocol requirement enables reuse
message by the invitee (the Wallet in the flow above) is that the service
in the invitation MUST be a resolvable DID that is the same in all of the invitations. In the example invitation above, the DID is a did:sov
DID that is resolvable on a public Hyperledger Indy network. The DID could also be a Peer DID of types 2 or 4, which encode the entire DIDDoc contents into the DID identifier (thus they are \"resolvable DIDs\"). What cannot be used is either the old \"unqualified\" DIDs that were commonly used in Aries prior to 2024, and Peer DID type 1. Both of those have DID types include both an identifier and a DIDDoc in the services
item of the Out of Band invitation. As noted in the Out of Band specification, reuse
cannot be used with such DID types even if the contents are the same.
Example invitation:
{\n \"@type\": \"https://didcomm.org/out-of-band/1.1/invitation\",\n \"@id\": \"77489d63-caff-41fe-a4c1-ec7e2ff00695\",\n \"label\": \"faber.agent\",\n \"handshake_protocols\": [\n \"https://didcomm.org/didexchange/1.0\"\n ],\n \"services\": [\n \"did:sov:4JiUsoK85pVkkB1bAPzFaP\"\n ]\n}\n
The use of connection reuse can be demonstrated with the Alice / Faber demos as follows. We assume you have already somewhat familiar with your options for running the Alice Faber Demo (e.g. locally or in a browser). Follow those instruction up to the point where you are about to start the Faber and Alice agents.
./run_demo faber --reuse-connections --public-did-connections --events
.events
option: ./run_demo alice --reuse-connections --events
8031
, path api/docs
), and then use the GET Connections
to see that Alice has one connection to Faber.4
to get a prompt for a new connection. This will generate a new invitation with the same public DID.4
to get a prompt for a new connection, and paste the new invitation.reuse
message is received from Alice, and as a result, no new connection was created.GET Connections
endpoint on the Alice OpenAPI screen to confirm that there is still just one established connection.--reuse-connections
parameter and compare the services
value in the new invitation vs. what was generated in Steps 3 and 7. It is not a DID, but rather a one time use, inline DIDDoc item.While in the demo Faber uses in the invitation the same DID they publish as an issuer (and uses in creating the schema and Cred Def for the demo), Faber could use any resolvable (not inline) DID, including DID Peer types 2 or 4 DIDs, as long as the DID is the same in every invitation. It is the fact that the DID is always the same that tells the invitee that they can reuse an existing connection.
For example, to run faber with connection reuse using a non-public DID:
./run_demo faber --reuse-connections --events\n
To run faber using a did:peer
and reusable connections:
./run_demo faber --reuse-connections --emit-did-peer-2 --events\n
To run this demo using a multi-use invitation (from Faber):
./run_demo faber --reuse-connections --emit-did-peer-2 --multi-use-invitations --events\n
"},{"location":"deploying/AnonCredsWalletType/","title":"AnonCreds-RS Support","text":"A new wallet type has been added to Aca-Py to support the new anoncreds-rs library:
--wallet-type askar-anoncreds\n
When Aca-Py is run with this wallet type it will run with an Askar format wallet (and askar libraries) but will use anoncreds-rs
instead of credx
.
There is a new package under acapy_agent/anoncreds
with code that supports the new library.
There are new endpoints (under /anoncreds
) for managing schemas, cred defs and revocation objects. However the new anoncreds code is integrated into the existing Credential and Presentation endpoints (V2.0 endpoints only).
Within the protocols, there are new handler
libraries to support the new anoncreds
format (these are in parallel to the existing indy
libraries).
The existing indy
code are in:
acapy_agent/protocols/issue_credential/v2_0/formats/indy/handler.py\nacapy_agent/protocols/indy/anoncreds/pres_exch_handler.py\nacapy_agent/protocols/present_proof/v2_0/formats/indy/handler.py\n
The new anoncreds
code is in:
acapy_agent/protocols/issue_credential/v2_0/formats/anoncreds/handler.py\nacapy_agent/protocols/present_proof/anoncreds/pres_exch_handler.py\nacapy_agent/protocols/present_proof/v2_0/formats/anoncreds/handler.py\n
The Indy handler checks to see if the wallet type is askar-anoncreds
and if so delegates the calls to the anoncreds handler, for example:
# Temporary shim while the new anoncreds library integration is in progress\n wallet_type = profile.settings.get_value(\"wallet.type\")\n if wallet_type == \"askar-anoncreds\":\n self.anoncreds_handler = AnonCredsPresExchangeHandler(profile)\n
... and then:
# Temporary shim while the new anoncreds library integration is in progress\n if self.anoncreds_handler:\n return self.anoncreds_handler.get_format_identifier(message_type)\n
To run the alice/faber demo using the new anoncreds library, start the demo with:
--wallet-type askar-anoncreds\n
There are no anoncreds-specific integration tests, for the new anoncreds functionality the agents within the integration tests are started with:
--wallet-type askar-anoncreds\n
Everything should just work!!!
Theoretically AATH should work with anoncreds as well, by setting the wallet type (see https://github.com/hyperledger/aries-agent-test-harness#extra-backchannel-specific-parameters).
"},{"location":"deploying/AnonCredsWalletType/#revocation-new-in-anoncreds","title":"Revocation (new in anoncreds)","text":"The changes are significant. Notably:
The Tails File changes are minimal -- nothing about the file itself changed. What changed:
The main changes for the Credential and Presentation support are in the following two files:
acapy_agent/protocols/issue_credential/v2_0/messages/cred_format.py\nacapy_agent/protocols/present_proof/v2_0/messages/pres_format.py\n
The INDY
handler just need to be re-pointed to the new anoncreds handler, and then all the old Indy code can be retired.
The new code is already in place (in comments). For example for the Credential handler:
To make the switch from indy to anoncreds replace the above with the following\n INDY = FormatSpec(\n \"hlindy/\",\n DeferLoad(\n \"acapy_agent.protocols.present_proof.v2_0\"\n \".formats.anoncreds.handler.AnonCredsPresExchangeHandler\"\n ),\n )\n
There is a bunch of duplicated code, i.e. the new anoncreds code was added either as new classes (as above) or as new methods within an existing class.
Some new methods were added within the Ledger class.
New unit tests were added - in some cases as methods within existing test classes, and in some cases as new classes (whichever was easiest at the time).
"},{"location":"deploying/AnoncredsControllerMigration/","title":"AnonCreds Controller Migration","text":"To upgrade an agent to use AnonCreds a controller should implement the required changes to endpoints and payloads in a way that is backwards compatible. The controller can then trigger the upgrade via the upgrade endpoint.
"},{"location":"deploying/AnoncredsControllerMigration/#step-1-endpoint-payload-and-response-changes","title":"Step 1 - Endpoint Payload and Response Changes","text":"There is endpoint and payload changes involved with creating schema, credential definition and revocation objects. Your controller will need to implement these changes for any endpoints it uses.
A good way to implement this with backwards compatibility is to get the wallet type via /settings and handle the existing endpoints when wallet.type is askar and the new anoncreds endpoints when wallet.type is askar-anoncreds. In this way the controller will handle both types of wallets in case the upgrade fails. After the upgrade is successful and stable the controller can be updated to only handle the new anoncreds endpoints.
"},{"location":"deploying/AnoncredsControllerMigration/#schemas","title":"Schemas","text":""},{"location":"deploying/AnoncredsControllerMigration/#creating-a-schema","title":"Creating a Schema:","text":"params\n - conn_id\n - create_transaction_for_endorser\n
{\n \"attributes\": [\"score\"],\n \"schema_name\": \"simple\",\n \"schema_version\": \"1.0\"\n}\n
to
{\n \"options\": {\n \"create_transaction_for_endorser\": false,\n \"endorser_connection_id\": \"3fa85f64-5717-4562-b3fc-2c963f66afa6\"\n },\n \"schema\": {\n \"attrNames\": [\"score\"],\n \"issuerId\": \"WgWxqztrNooG92RXvxSTWv\",\n \"name\": \"Example schema\",\n \"version\": \"1.0\"\n }\n}\n
Responses
Without endorsement:
{\n \"sent\": {\n \"schema_id\": \"PzmGpSeCznzfPmv9B1EBqa:2:simple:1.0\",\n \"schema\": {\n \"ver\": \"1.0\",\n \"id\": \"PzmGpSeCznzfPmv9B1EBqa:2:simple:1.0\",\n \"name\": \"simple\",\n \"version\": \"1.0\",\n \"attrNames\": [\"score\"],\n \"seqNo\": 541\n }\n },\n \"schema_id\": \"PzmGpSeCznzfPmv9B1EBqa:2:simple:1.0\",\n \"schema\": {\n \"ver\": \"1.0\",\n \"id\": \"PzmGpSeCznzfPmv9B1EBqa:2:simple:1.0\",\n \"name\": \"simple\",\n \"version\": \"1.0\",\n \"attrNames\": [\"score\"],\n \"seqNo\": 541\n }\n}\n
to
{\n \"job_id\": \"string\",\n \"registration_metadata\": {},\n \"schema_metadata\": {},\n \"schema_state\": {\n \"schema\": {\n \"attrNames\": [\"score\"],\n \"issuerId\": \"WgWxqztrNooG92RXvxSTWv\",\n \"name\": \"Example schema\",\n \"version\": \"1.0\"\n },\n \"schema_id\": \"WgWxqztrNooG92RXvxSTWv:2:schema_name:1.0\",\n \"state\": \"finished\"\n }\n}\n
With endorsement:
{\n \"sent\": {\n \"schema\": {\n \"attrNames\": [\n \"score\"\n ],\n \"id\": \"WgWxqztrNooG92RXvxSTWv:2:schema_name:1.0\",\n \"name\": \"schema_name\",\n \"seqNo\": 10,\n \"ver\": \"1.0\",\n \"version\": \"1.0\"\n },\n \"schema_id\": \"WgWxqztrNooG92RXvxSTWv:2:schema_name:1.0\"\n },\n \"txn\": {...}\n}\n
to
{\n \"job_id\": \"12cb896d648242c8b9b0fff3b870ed00\",\n \"schema_state\": {\n \"state\": \"wait\",\n \"schema_id\": \"RbyPM1EP8fKCrf28YsC1qK:2:simple:1.1\",\n \"schema\": {\n \"issuerId\": \"RbyPM1EP8fKCrf28YsC1qK\",\n \"attrNames\": [\n \"score\"\n ],\n \"name\": \"simple\",\n \"version\": \"1.1\"\n }\n },\n \"registration_metadata\": {\n \"txn\": {...}\n },\n \"schema_metadata\": {}\n}\n
"},{"location":"deploying/AnoncredsControllerMigration/#getting-schemas","title":"Getting schemas","text":"{\n \"schema\": {\n \"attrNames\": [\"score\"],\n \"id\": \"WgWxqztrNooG92RXvxSTWv:2:schema_name:1.0\",\n \"name\": \"schema_name\",\n \"seqNo\": 10,\n \"ver\": \"1.0\",\n \"version\": \"1.0\"\n }\n}\n
to
{\n \"resolution_metadata\": {},\n \"schema\": {\n \"attrNames\": [\"score\"],\n \"issuerId\": \"WgWxqztrNooG92RXvxSTWv\",\n \"name\": \"Example schema\",\n \"version\": \"1.0\"\n },\n \"schema_id\": \"WgWxqztrNooG92RXvxSTWv:2:schema_name:1.0\",\n \"schema_metadata\": {}\n}\n
"},{"location":"deploying/AnoncredsControllerMigration/#credential-definitions","title":"Credential Definitions","text":""},{"location":"deploying/AnoncredsControllerMigration/#creating-a-credential-definition","title":"Creating a credential definition","text":"params\n - conn_id\n - create_transaction_for_endorser\n
{\n \"revocation_registry_size\": 1000,\n \"schema_id\": \"WgWxqztrNooG92RXvxSTWv:2:simple:1.0\",\n \"support_revocation\": true,\n \"tag\": \"default\"\n}\n
to
{\n \"credential_definition\": {\n \"issuerId\": \"WgWxqztrNooG92RXvxSTWv\",\n \"schemaId\": \"WgWxqztrNooG92RXvxSTWv:2:schema_name:1.0\",\n \"tag\": \"default\"\n },\n \"options\": {\n \"create_transaction_for_endorser\": false,\n \"endorser_connection_id\": \"3fa85f64-5717-4562-b3fc-2c963f66afa6\",\n \"revocation_registry_size\": 1000,\n \"support_revocation\": true\n }\n}\n
Responses
Without Endoresment:
{\n \"sent\": {\n \"credential_definition_id\": \"CZGamdZoKhxiifjbdx3GHH:3:CL:558:default\"\n },\n \"credential_definition_id\": \"CZGamdZoKhxiifjbdx3GHH:3:CL:558:default\"\n}\n
to
{\n \"schema_state\": {\n \"state\": \"finished\",\n \"schema_id\": \"BpGaCdTwgEKoYWm6oPbnnj:2:simple:1.0\",\n \"schema\": {\n \"issuerId\": \"BpGaCdTwgEKoYWm6oPbnnj\",\n \"attrNames\": [\"score\"],\n \"name\": \"simple\",\n \"version\": \"1.0\"\n }\n },\n \"registration_metadata\": {},\n \"schema_metadata\": {\n \"seqNo\": 555\n }\n}\n
With Endorsement:
{\n \"sent\": {\n \"credential_definition_id\": \"WgWxqztrNooG92RXvxSTWv:3:CL:20:tag\"\n },\n \"txn\": {...}\n}\n
{\n \"job_id\": \"7082e58aa71d4817bb32c3778596b012\",\n \"credential_definition_state\": {\n \"state\": \"wait\",\n \"credential_definition_id\": \"RbyPM1EP8fKCrf28YsC1qK:3:CL:547:default\",\n \"credential_definition\": {\n \"issuerId\": \"RbyPM1EP8fKCrf28YsC1qK\",\n \"schemaId\": \"RbyPM1EP8fKCrf28YsC1qK:2:simple:1.1\",\n \"type\": \"CL\",\n \"tag\": \"default\",\n \"value\": {\n \"primary\": {...},\n \"revocation\": {...}\n }\n }\n },\n \"registration_metadata\": {\n \"txn\": {...}\n },\n \"credential_definition_metadata\": {}\n}\n
"},{"location":"deploying/AnoncredsControllerMigration/#getting-credential-definitions","title":"Getting credential definitions","text":"{\n \"credential_definition\": {\n \"id\": \"WgWxqztrNooG92RXvxSTWv:3:CL:20:tag\",\n \"schemaId\": \"20\",\n \"tag\": \"tag\",\n \"type\": \"CL\",\n \"value\": {...},\n \"revocation\": {...}\n },\n \"ver\": \"1.0\"\n }\n}\n
to
{\n \"credential_definition\": {\n \"issuerId\": \"WgWxqztrNooG92RXvxSTWv\",\n \"schemaId\": \"WgWxqztrNooG92RXvxSTWv:2:schema_name:1.0\",\n \"tag\": \"default\",\n \"type\": \"CL\",\n \"value\": {...},\n \"revocation\": {...}\n }\n },\n \"credential_definition_id\": \"WgWxqztrNooG92RXvxSTWv:3:CL:20:tag\",\n \"credential_definitions_metadata\": {},\n \"resolution_metadata\": {}\n}\n
"},{"location":"deploying/AnoncredsControllerMigration/#revocation","title":"Revocation","text":"Most of the changes with revocation endpoints only require prepending /anoncreds
to the path. There are some other subtle changes listed below.
{\n \"credential_definition_id\": \"WgWxqztrNooG92RXvxSTWv:3:CL:20:tag\",\n \"max_cred_num\": 1000\n}\n
params\n - conn_id\n - create_transaction_for_endorser\n
{\n \"options\": {\n \"create_transaction_for_endorser\": false,\n \"endorser_connection_id\": \"3fa85f64-5717-4562-b3fc-2c963f66afa6\"\n },\n \"revocation_registry_definition\": {\n \"credDefId\": \"WgWxqztrNooG92RXvxSTWv:2:schema_name:1.0\",\n \"issuerId\": \"WgWxqztrNooG92RXvxSTWv\",\n \"maxCredNum\": 777,\n \"tag\": \"default\"\n }\n}\n
Responses
Without endorsement:
{\n \"sent\": {\n \"revocation_registry_id\": \"CZGamdZoKhxiifjbdx3GHH:4:CL:558:default\"\n },\n \"revocation_registry_id\": \"CZGamdZoKhxiifjbdx3GHH:4:CL:558:default\"\n}\n
to
{\n \"revocation_registry_definition_state\": {\n \"state\": \"finished\",\n \"revocation_registry_definition_id\": \"BpGaCdTwgEKoYWm6oPbnnj:4:BpGaCdTwgEKoYWm6oPbnnj:3:CL:555:default:CL_ACCUM:default\",\n \"revocation_registry_definition\": {\n \"issuerId\": \"BpGaCdTwgEKoYWm6oPbnnj\",\n \"revocDefType\": \"CL_ACCUM\",\n \"credDefId\": \"BpGaCdTwgEKoYWm6oPbnnj:3:CL:555:default\",\n \"tag\": \"default\",\n \"value\": {...}\n }\n },\n \"registration_metadata\": {},\n \"revocation_registry_definition_metadata\": {\n \"seqNo\": 569\n }\n}\n
With endorsement:
{\n \"sent\": {\n \"result\": {\n \"created_at\": \"2021-12-31T23:59:59Z\",\n \"cred_def_id\": \"WgWxqztrNooG92RXvxSTWv:3:CL:20:tag\",\n \"error_msg\": \"Revocation registry undefined\",\n \"issuer_did\": \"WgWxqztrNooG92RXvxSTWv\",\n \"max_cred_num\": 1000,\n \"pending_pub\": [\n \"23\"\n ],\n \"record_id\": \"3fa85f64-5717-4562-b3fc-2c963f66afa6\",\n \"revoc_def_type\": \"CL_ACCUM\",\n \"revoc_reg_def\": {\n \"credDefId\": \"WgWxqztrNooG92RXvxSTWv:3:CL:20:tag\",\n \"id\": \"WgWxqztrNooG92RXvxSTWv:4:WgWxqztrNooG92RXvxSTWv:3:CL:20:tag:CL_ACCUM:0\",\n \"revocDefType\": \"CL_ACCUM\",\n \"tag\": \"string\",\n \"value\": {...},\n \"ver\": \"1.0\"\n },\n \"revoc_reg_entry\": {...},\n \"revoc_reg_id\": \"WgWxqztrNooG92RXvxSTWv:4:WgWxqztrNooG92RXvxSTWv:3:CL:20:tag:CL_ACCUM:0\",\n \"state\": \"active\",\n \"tag\": \"string\",\n \"tails_hash\": \"H3C2AVvLMv6gmMNam3uVAjZpfkcJCwDwnZn6z3wXmqPV\",\n \"tails_local_path\": \"string\",\n \"tails_public_uri\": \"string\",\n \"updated_at\": \"2021-12-31T23:59:59Z\"\n }\n },\n \"txn\": {...}\n}\n
to
{\n \"job_id\": \"25dac53a1fb84cb8a5bf1b4362fbca11\",\n \"revocation_registry_definition_state\": {\n \"state\": \"wait\",\n \"revocation_registry_definition_id\": \"RbyPM1EP8fKCrf28YsC1qK:4:RbyPM1EP8fKCrf28YsC1qK:3:CL:547:default:CL_ACCUM:default\",\n \"revocation_registry_definition\": {\n \"issuerId\": \"RbyPM1EP8fKCrf28YsC1qK\",\n \"revocDefType\": \"CL_ACCUM\",\n \"credDefId\": \"RbyPM1EP8fKCrf28YsC1qK:3:CL:547:default\",\n \"tag\": \"default\",\n \"value\": {...}\n }\n },\n \"registration_metadata\": {\n \"txn\": {...}\n },\n \"revocation_registry_definition_metadata\": {}\n}\n
"},{"location":"deploying/AnoncredsControllerMigration/#send-revocation-entry-or-list-to-ledger","title":"Send revocation entry or list to ledger","text":"params\n - conn_id\n - create_transaction_for_endorser\n
to
{\n \"options\": {\n \"create_transaction_for_endorser\": false,\n \"endorser_connection_id\": \"3fa85f64-5717-4562-b3fc-2c963f66afa6\"\n },\n \"rev_reg_def_id\": \"WgWxqztrNooG92RXvxSTWv:4:WgWxqztrNooG92RXvxSTWv:3:CL:20:tag:CL_ACCUM:0\"\n}\n
Responses
Without endorsement:
{\n \"sent\": {\n \"revocation_registry_id\": \"BpGaCdTwgEKoYWm6oPbnnj:4:BpGaCdTwgEKoYWm6oPbnnj:3:CL:555:default:CL_ACCUM:default\"\n },\n \"revocation_registry_id\": \"BpGaCdTwgEKoYWm6oPbnnj:4:BpGaCdTwgEKoYWm6oPbnnj:3:CL:555:default:CL_ACCUM:default\"\n}\n
to
\n
"},{"location":"deploying/AnoncredsControllerMigration/#get-current-active-registry","title":"Get current active registry:","text":"params\n - conn_id\n - create_transaction_for_endorser\n
{\n \"rrid2crid\": {\n \"additionalProp1\": [\"12345\"],\n \"additionalProp2\": [\"12345\"],\n \"additionalProp3\": [\"12345\"]\n }\n}\n
to
{\n \"options\": {\n \"create_transaction_for_endorser\": false,\n \"endorser_connection_id\": \"3fa85f64-5717-4562-b3fc-2c963f66afa6\"\n },\n \"rrid2crid\": {\n \"additionalProp1\": [\"12345\"],\n \"additionalProp2\": [\"12345\"],\n \"additionalProp3\": [\"12345\"]\n }\n}\n
The upgrade endpoint is at POST /anoncreds/wallet/upgrade.
You need to be careful doing this, as there is no way to downgrade the wallet. It is recommended highly recommended to back-up any wallets and to test the upgrade in a development environment before upgrading a production wallet.
Params: wallet_name
is the name of the wallet to upgrade. Used to prevent accidental upgrades.
The behavior for a base wallet (standalone) or admin wallet in multitenant mode is slightly different from the behavior of a subwallet (or tenant) in multitenancy mode. However, the upgrade process is the same.
The agent will get a 503 error during the upgrade process. Any agent instance will shut down when the upgrade is complete. It is up to the aca-py agent to start up again. After the upgrade is complete the old endpoints will no longer be available and result in a 400 error.
The aca-py agent will work after the restart. However, it will receive a warning for having the wrong wallet type configured. It is recommended to change the wallet-type
to askar-anoncreds
in the agent configuration file or start-up command.
The sub-tenant which is in the process of being upgraded will get a 503 error during the upgrade process. All other sub-tenants will continue to operate normally. After the upgrade is complete the sub-tenant will be able to use the new endpoints. The old endpoints will no longer be available and result in a 403 error. Any aca-py agents will remain running after the upgrade and it's not required that the aca-py agent restarts.
"},{"location":"deploying/BBSSignatures/","title":"BBS Signatures Support","text":"ACA-Py has supported BBS Signatures for some time. However, the dependency that is used (bbs
) does not support the ARM architecture, and its inclusion in the default ACA-Py artifacts mean that developers using ARM-based hardware (such as Apple M1 Macs or later) cannot run ACA-Py \"out-of-the-box\". We feel that providing a better developer experience by supporting the ARM architecture is more important than BBS Signature support at this time. As such, we have removed the BBS dependency from the base ACA-Py artifacts and made it an add-on that those using ACA-Py with BBS must take extra steps to build their own artifacts. This file describes how to do those extra steps.
Regarding future support for BBS Signatures in ACA-Py. There is currently a lot of work going on in developing implementations and BBS-based Verifiable Credential standards. However, at the time of this release, there is not an obvious approach to an implementation to use in ACA-Py that includes ARM support. As a result, we will hold off on updating the BBS Signatures support in ACA-Py until the standards and path forward clarify. In the meantime, maintainers of ACA-Py plan to continue to do all we can to push for newer and better ZKP-based Verifiable Credential standards.
If you require BBS for your deployment an optional \"extended\" ACA-Py image has been released (aries-cloudagent-bbs
) that includes BBS, with the caveat that it will very likely not install on ARM architecture.
If you are a contributor or are developing using a local build of ACA-Py and need BBS, the easiest way to include it is to install the optional dependency bbs
with poetry
(again with the caveat that it will very likely not install on ARM architecture). The --all-extras
flag will install the bbs
optional dependency in ACA-Py:
poetry install --all-extras\n
"},{"location":"deploying/BBSSignatures/#testing","title":"Testing","text":"WARNNG: if you do NOT have bbs
installed you should exclude the BBS specific integration tests from running with the tag ~@BBS
otherwise they will fail:
./run_bdd -t ~@BBS\n
See the Unit and Integration testing docs for more information on how to run tests.
"},{"location":"deploying/ContainerImagesAndGithubActions/","title":"Container Images and Github Actions","text":"ACA-Py is most frequently deployed using containers. From the first release of ACA-Py up through 0.7.4, much of the community has built their deployments using the container images graciously provided by BC Gov and hosted through their bcgovimages
docker hub account. These images have been critical to the adoption of not only ACA-Py but also decentralized trust/SSI more generally.
Recognizing how critical these images are to the success of ACA-Py and consistent with the OpenWallet Foundation's commitment to open collaboration, container images are now built and published directly from the Aries Cloud Agent - Python project repository and made available through the Github Packages Container Registry.
"},{"location":"deploying/ContainerImagesAndGithubActions/#image","title":"Image","text":"This project builds and publishes the ghcr.io/openwallet-foundation/acapy
image. Multiple variants are available; see Tags.
ACA-Py is a foundation for building decentralized identity applications; to this end, there are multiple variants of ACA-Py built to suit the needs of a variety of environments and workflows. The following variants exist:
In the past, two image variants were published. These two variants are largely distinguished by providers for Indy Network and AnonCreds support. The Standard variant is recommended for new projects. Migration from an Indy based image (whether the new Indy image variant or the original BC Gov images) to the Standard image is outside of the scope of this document.
The ACA-Py images built by this project are tagged to indicate which of the above variants it is. Other tags may also be generated for use by developers.
Below is a table of all generated images and their tags:
Tag Variant Example Description py3.9-X.Y.Z Standard py3.9-0.7.4 Standard image variant built on Python 3.9 for ACA-Py version X.Y.Z py3.10-X.Y.Z Standard py3.10-0.7.4 Standard image variant built on Python 3.10 for ACA-Py version X.Y.Z"},{"location":"deploying/ContainerImagesAndGithubActions/#image-comparison","title":"Image Comparison","text":"There are several key differences that should be noted between the two image variants and between the BC Gov ACA-Py images.
- Standard image variant:
  - Does NOT include `libindy`
  - Default user is `aries`
  - Uses the container's system Python environment rather than `pyenv`
- Indy image variant:
  - Built from a multi-stage build step (`indy-base` in the Dockerfile) which includes Indy dependencies; this could be replaced with an explicit `indy-python` image from the Indy SDK repo
  - Includes `libindy` but does NOT include the Indy CLI
  - Default user is `indy`
  - Uses the container's system Python environment rather than `pyenv`
- `bcgovimages/aries-cloudagent`:
  - Based on `von-image`
  - Default user is `indy`
  - Includes `libindy` and the Indy CLI
  - Uses `pyenv`
### Github Actions

- Tests (`.github/workflows/tests.yml`) - A reusable workflow that runs tests for the Standard ACA-Py variant for a given python version.
- PR Tests (`.github/workflows/pr-tests.yml`) - Run on pull requests; runs tests for the Standard ACA-Py variant for a "default" python version. Check this workflow for the current default python version in use.
- Nightly Tests (`.github/workflows/nightly-tests.yml`) - Run nightly; runs tests for the Standard ACA-Py variant for all currently supported python versions. Check this workflow for the set of currently supported versions in use.
- Publish (`.github/workflows/publish.yml`) - Run on new release published or when manually triggered; builds and pushes the Standard ACA-Py variant to the Github Container Registry.
- BDD Integration Tests (`.github/workflows/BDDTests.yml`) - Run on pull requests (to the openwallet-foundation fork only); runs BDD integration tests.
- Format (`.github/workflows/format.yml`) - Run on pull requests; checks formatting of files modified by the PR.
- CodeQL (`.github/workflows/codeql.yml`) - Run on pull requests; performs CodeQL analysis.
- Python Publish (`.github/workflows/pythonpublish.yml`) - Run on release created; publishes the ACA-Py python package to PyPI.
- PIP Audit (`.github/workflows/pipaudit.yml`) - Run when manually triggered; performs a pip audit.

## Databases

Your wallet stores secret keys, connections and other information. You have different choices for storing this information. The wallet supports two different databases for storing data: SQLite and PostgreSQL.
"},{"location":"deploying/Databases/#sqlite","title":"SQLite","text":"If the wallet is configured the default way in eg. demo-args.yaml, without explicit wallet-storage, a sqlite database file is used.
# demo-args.yaml\nwallet-type: indy\nwallet-name: wallet\nwallet-key: wallet-password\n
For this configuration, a folder called wallet will be created which contains a file called sqlite.db
.
The wallet can be configured to use PostgreSQL as storage.
# demo-args.yaml\nwallet-type: indy\nwallet-name: wallet\nwallet-key: wallet-password\n\nwallet-storage-type: postgres_storage\nwallet-storage-config: \"{\\\"url\\\":\\\"db:5432\\\",\\\"wallet_scheme\\\":\\\"DatabasePerWallet\\\"}\"\nwallet-storage-creds: \"{\\\"account\\\":\\\"postgres\\\",\\\"password\\\":\\\"mysecretpassword\\\",\\\"admin_account\\\":\\\"postgres\\\",\\\"admin_password\\\":\\\"mysecretpassword\\\"}\"\n
In this case the hostname for the database is db
on port 5432.
A docker-compose file could look like this:
# docker-compose.yml\nversion: '3'\nservices:\n # acapy ...\n # database\n db:\n image: postgres:10\n environment:\n POSTGRES_PASSWORD: mysecretpassword\n POSTGRES_USER: postgres\n POSTGRES_DB: postgres\n ports:\n - \"5432:5432\"\n
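Because `wallet-storage-config` and `wallet-storage-creds` are JSON strings embedded in YAML, the escaping is easy to get wrong. A small sketch (not part of ACA-Py itself) that generates the two values with Python's `json` module; the host and credentials are the placeholders from the example above:

```python
import json

# Build the JSON values programmatically so the quoting is always correct
storage_config = json.dumps({"url": "db:5432", "wallet_scheme": "DatabasePerWallet"})
storage_creds = json.dumps({
    "account": "postgres",
    "password": "mysecretpassword",
    "admin_account": "postgres",
    "admin_password": "mysecretpassword",
})

# Paste the printed values into wallet-storage-config / wallet-storage-creds
print(storage_config)
print(storage_creds)
```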
"},{"location":"deploying/IndySDKtoAskarMigration/","title":"Migrating from Indy SDK to Askar","text":"The document summarizes why the Indy SDK is being deprecated, it's replacement (Aries Askar and the \"shared components\"), how to use Aries Askar in a new ACA-Py deployment, and the migration process for an ACA-Py instance that is already deployed using the Indy SDK.
"},{"location":"deploying/IndySDKtoAskarMigration/#the-time-has-come-archiving-indy-sdk","title":"The Time Has Come! Archiving Indy SDK","text":"Yes, it\u2019s time. Indy SDK needs to be archived! In this article we\u2019ll explain why this change is needed, why Aries Askar is a faster, better replacement, and how to transition your Indy SDK-based ACA-Py deployment to Askar as soon as possible.
"},{"location":"deploying/IndySDKtoAskarMigration/#history-of-indy-sdk","title":"History of Indy SDK","text":"Indy SDK has been the basis of Hyperledger Indy and Hyperledger Aries clients accessing Indy networks for a long time. It has done an excellent job at exactly what you might imagine: being the SDK that enables clients to leverage the capabilities of a Hyperledger Indy ledger.
Its continued use has been all the more remarkable given that the last published release of the Indy SDK was in 2020. This speaks to the quality of the implementation \u2014 it just kept getting used, doing what it was supposed to do, and without major bugs, vulnerabilities or demands for new features.
However, the architecture of Indy SDK has critical bottlenecks. Most notably, as load increases, Indy SDK performance drops. And with Indy-based ecosystems flourishing and loads exponentially increasing, this means the Aries/Indy community needed to make a change.
"},{"location":"deploying/IndySDKtoAskarMigration/#aries-askar-and-the-shared-components","title":"Aries Askar and the Shared Components","text":"The replacement for the Indy SDK is a set of four components, each replacing a part of Indy SDK. (In retrospect, Indy SDK ought to have been split up this way from the start.)
The components are:
In ACA-Py, we are currently using CredX, but will be moving to Hyperledger AnonCreds soon.
If you\u2019re involved in the community, you\u2019ll know we\u2019ve been planning this replacement for almost three years. The first release of the Aries Askar and related components was in 2021. At the end of 2022 there was a concerted effort to eliminate the Indy SDK by creating migration scripts, and removing the Indy SDK from various tools in the community (the Indy CLI, the Indy Test Automation pipeline, and so on). This step is to finish the task.
"},{"location":"deploying/IndySDKtoAskarMigration/#performance","title":"Performance","text":"What\u2019s the performance and stability of the replacement? In short, it\u2019s dramatically better. Overall Aries Askar performance is faster, and as the load increases the performance remains constant. Combined with added flexibility and modularization, the community is very positive about the change.
"},{"location":"deploying/IndySDKtoAskarMigration/#new-aca-py-deployments","title":"New ACA-Py Deployments","text":"If you are new to ACA-Py, the instructions are easy. Use Aries Askar and the shared components from the start. To do that, simply make sure that you are using the --wallet-type askar
configuration parameter. You will automatically be using all of the shared components.
As of release 0.9.0, you will get a deprecation warning when you start ACA-Py with the Indy SDK. Switch to Aries Askar to eliminate that warning.
"},{"location":"deploying/IndySDKtoAskarMigration/#migrating-existing-indy-sdk-aca-py-deployments-to-askar","title":"Migrating Existing Indy SDK ACA-Py Deployments to Askar","text":"If you have an existing deployment, in changing the --wallet-type
configuration setting, your database must be migrated from the Indy SDK format to Aries Askar format. In order to facilitate the migration, an Indy SDK to Askar migration script has been published in the acapy-tools repository. There is lots of information in that repository about the migration tool and how to use it. The following is a summary of the steps you will have to perform. Of course, all deployments are a little (or a lot!) different, and your exact steps will be dependent on where and how you have deployed ACA-Py.
Note that in these steps you will have to take your ACA-Py instance offline, so scheduling the maintenance must be a part of your migration plan. You will also want to script the entire process so that downtime and risk of manual mistakes are minimized.
We hope that you have one or two test environments (e.g., Dev and Test) to run through these steps before upgrading your production deployment. As well, it is good if you can make a copy of your production database and test the migration on the real (copy) database before the actual upgrade.
- Run the `askar-upgrade` script. For example:

```bash
askar-upgrade \
  --strategy dbpw \
  --uri postgres://<username>:<password>@<hostname>:<port>/<dbname> \
  --wallet-name <wallet name> \
  --wallet-key <wallet key>
```

- Change the ACA-Py `--wallet-type` configuration setting to `askar`.
- If anything goes wrong, restore the database backup and revert the `--wallet-type` change to roll back to the pre-migration state.

It is very important that the Askar Upgrade script has direct access to the database. In our very first upgrade attempt, we ran the Askar Upgrade script from a container running outside of our container orchestration platform (OpenShift) using port forwarding. The script ran EXTREMELY slowly, taking literally hours to run before we finally stopped it. Once we ran the script inside the OpenShift environment, the script ran (for the same database) in about 7 minutes. The entire app downtime was less than 20 minutes.
"},{"location":"deploying/Poetry/","title":"Poetry Cheat Sheet for Developers","text":""},{"location":"deploying/Poetry/#introduction-to-poetry","title":"Introduction to Poetry","text":"Poetry is a dependency management and packaging tool for Python that aims to simplify and enhance the development process. It offers features for managing dependencies, virtual environments, and building and publishing Python packages.
"},{"location":"deploying/Poetry/#virtual-environments-with-poetry","title":"Virtual Environments with Poetry","text":"Poetry manages virtual environments for your projects to ensure clean and isolated development environments.
"},{"location":"deploying/Poetry/#creating-a-virtual-environment","title":"Creating a Virtual Environment","text":"poetry install\n
"},{"location":"deploying/Poetry/#activating-the-virtual-environment","title":"Activating the Virtual Environment","text":"poetry shell\n
Alternatively you can source the environment settings in the current shell
source $(poetry env info --path)/bin/activate\n
for powershell users this would be
(& ((poetry env info --path) + \"\\Scripts\\activate.ps1\")\n
"},{"location":"deploying/Poetry/#deactivating-the-virtual-environment","title":"Deactivating the Virtual Environment","text":"When using poetry shell
exit\n
When using the activate
script
deactivate\n
"},{"location":"deploying/Poetry/#dependency-management","title":"Dependency Management","text":"Poetry uses the pyproject.toml
file to manage dependencies. Add new dependencies to this file and update existing ones as needed.
poetry add package-name\n
"},{"location":"deploying/Poetry/#adding-a-development-dependency","title":"Adding a Development Dependency","text":"poetry add --dev package-name\n
"},{"location":"deploying/Poetry/#removing-a-dependency","title":"Removing a Dependency","text":"poetry remove package-name\n
"},{"location":"deploying/Poetry/#updating-dependencies","title":"Updating Dependencies","text":"poetry update\n
"},{"location":"deploying/Poetry/#running-tasks-with-poetry","title":"Running Tasks with Poetry","text":"Poetry provides a way to run scripts and commands without activating the virtual environment explicitly.
"},{"location":"deploying/Poetry/#running-a-command","title":"Running a Command","text":"poetry run command-name\n
"},{"location":"deploying/Poetry/#running-a-script","title":"Running a Script","text":"poetry run python script.py\n
"},{"location":"deploying/Poetry/#building-and-publishing-with-poetry","title":"Building and Publishing with Poetry","text":"Poetry streamlines the process of building and publishing Python packages.
"},{"location":"deploying/Poetry/#building-the-package","title":"Building the Package","text":"poetry build\n
"},{"location":"deploying/Poetry/#publishing-the-package","title":"Publishing the Package","text":"poetry publish\n
"},{"location":"deploying/Poetry/#using-extras","title":"Using Extras","text":"Extras allow you to specify additional dependencies based on project requirements.
"},{"location":"deploying/Poetry/#installing-with-extras","title":"Installing with Extras","text":"poetry install -E extras-name\n
for example
poetry install -E \"askar bbs indy\"\n
"},{"location":"deploying/Poetry/#managing-development-dependencies","title":"Managing Development Dependencies","text":"Development dependencies are useful for tasks like testing, linting, and documentation generation.
"},{"location":"deploying/Poetry/#installing-development-dependencies","title":"Installing Development Dependencies","text":"poetry install --dev\n
"},{"location":"deploying/Poetry/#additional-resources","title":"Additional Resources","text":"redis_queue
","text":"It provides a mechanism to persists both inbound and outbound messages using redis, deliver messages and webhooks, and dispatch events.
More details can be found here.
"},{"location":"deploying/RedisPlugins/#redis-queue-configuration-yaml","title":"Redis Queue configurationyaml
","text":"redis_queue:\n connection: \n connection_url: \"redis://default:test1234@172.28.0.103:6379\"\n\n ### For Inbound ###\n inbound:\n acapy_inbound_topic: \"acapy_inbound\"\n acapy_direct_resp_topic: \"acapy_inbound_direct_resp\"\n\n ### For Outbound ###\n outbound:\n acapy_outbound_topic: \"acapy_outbound\"\n mediator_mode: false\n\n ### For Event ###\n event:\n event_topic_maps:\n ^acapy::webhook::(.*)$: acapy-webhook-$wallet_id\n ^acapy::record::([^:]*)::([^:]*)$: acapy-record-with-state-$wallet_id\n ^acapy::record::([^:])?: acapy-record-$wallet_id\n acapy::basicmessage::received: acapy-basicmessage-received\n acapy::problem_report: acapy-problem_report\n acapy::ping::received: acapy-ping-received\n acapy::ping::response_received: acapy-ping-response_received\n acapy::actionmenu::received: acapy-actionmenu-received\n acapy::actionmenu::get-active-menu: acapy-actionmenu-get-active-menu\n acapy::actionmenu::perform-menu-action: acapy-actionmenu-perform-menu-action\n acapy::keylist::updated: acapy-keylist-updated\n acapy::revocation-notification::received: acapy-revocation-notification-received\n acapy::revocation-notification-v2::received: acapy-revocation-notification-v2-received\n acapy::forward::received: acapy-forward-received\n event_webhook_topic_maps:\n acapy::basicmessage::received: basicmessages\n acapy::problem_report: problem_report\n acapy::ping::received: ping\n acapy::ping::response_received: ping\n acapy::actionmenu::received: actionmenu\n acapy::actionmenu::get-active-menu: get-active-menu\n acapy::actionmenu::perform-menu-action: perform-menu-action\n acapy::keylist::updated: keylist\n deliver_webhook: true\n
redis_queue.connection.connection_url
: This is required and is expected in redis://{username}:{password}@{host}:{port}
format.redis_queue.inbound.acapy_inbound_topic
: This is the topic prefix for the inbound message queues. Recipient key of the message are also included in the complete topic name. The final topic will be in the following format acapy_inbound_{recip_key}
redis_queue.inbound.acapy_direct_resp_topic
: Queue topic name for direct responses to inbound message.redis_queue.outbound.acapy_outbound_topic
: Queue topic name for the outbound messages. Used by Deliverer service to deliver the payloads to specified endpoint.redis_queue.outbound.mediator_mode
: Set to true, if using Redis as a http bridge when setting up a mediator agent. By default, it is set to false.event.event_topic_maps
: Event topic mapevent.event_webhook_topic_maps
: Event to webhook topic mapevent.deliver_webhook
: When set to true, this will deliver webhooks to endpoints specified in admin.webhook_urls
. By default, set to true.Running the plugin with docker is simple. An example docker-compose.yml file is available which launches both ACA-Py with redis and an accompanying Redis cluster.
docker-compose up --build -d\n
More details can be found here.
"},{"location":"deploying/RedisPlugins/#without-docker","title":"Without Docker","text":"Installation
pip install git+https://github.com/openwallet-foundation/acapy-plugins.git\n
Startup ACA-Py with redis_queue
plugin loaded
docker network create --subnet=172.28.0.0/24 `network_name`\nexport REDIS_PASSWORD=\" ... As specified in redis_cluster.conf ... \"\nexport NETWORK_NAME=\"`network_name`\"\naca-py start \\\n --plugin redis_queue.v1_0.events \\\n --plugin-config plugins-config.yaml \\\n -it redis_queue.v1_0.inbound redis 0 -ot redis_queue.v1_0.outbound\n # ... the remainder of your startup arguments\n
Regardless of the options above, you will need to startup deliverer
and relay
/mediator
service as a bridge to receive inbound messages. Consider the following to build your docker-compose
file which should also start up your redis cluster:
Relay + Deliverer
relay:\n image: redis-relay\n build:\n context: ..\n dockerfile: redis_relay/Dockerfile\n ports:\n - 7001:7001\n - 80:80\n environment:\n - REDIS_SERVER_URL=redis://default:test1234@172.28.0.103:6379\n - TOPIC_PREFIX=acapy\n - STATUS_ENDPOINT_HOST=0.0.0.0\n - STATUS_ENDPOINT_PORT=7001\n - STATUS_ENDPOINT_API_KEY=test_api_key_1\n - INBOUND_TRANSPORT_CONFIG=[[\"http\", \"0.0.0.0\", \"80\"]]\n - TUNNEL_ENDPOINT=http://relay-tunnel:4040\n - WAIT_BEFORE_HOSTS=15\n - WAIT_HOSTS=redis-node-3:6379\n - WAIT_HOSTS_TIMEOUT=120\n - WAIT_SLEEP_INTERVAL=1\n - WAIT_HOST_CONNECT_TIMEOUT=60\n depends_on:\n - redis-cluster\n - relay-tunnel\n networks:\n - acapy_default\ndeliverer:\n image: redis-deliverer\n build:\n context: ..\n dockerfile: redis_deliverer/Dockerfile\n ports:\n - 7002:7002\n environment:\n - REDIS_SERVER_URL=redis://default:test1234@172.28.0.103:6379\n - TOPIC_PREFIX=acapy\n - STATUS_ENDPOINT_HOST=0.0.0.0\n - STATUS_ENDPOINT_PORT=7002\n - STATUS_ENDPOINT_API_KEY=test_api_key_2\n - WAIT_BEFORE_HOSTS=15\n - WAIT_HOSTS=redis-node-3:6379\n - WAIT_HOSTS_TIMEOUT=120\n - WAIT_SLEEP_INTERVAL=1\n - WAIT_HOST_CONNECT_TIMEOUT=60\n depends_on:\n - redis-cluster\n networks:\n - acapy_default\n
Mediator + Deliverer
mediator:\n image: acapy-redis-queue\n build:\n context: ..\n dockerfile: docker/Dockerfile\n ports:\n - 3002:3001\n depends_on:\n - deliverer\n volumes:\n - ./configs:/home/indy/configs:z\n - ./acapy-endpoint.sh:/home/indy/acapy-endpoint.sh:z\n environment:\n - WAIT_BEFORE_HOSTS=15\n - WAIT_HOSTS=redis-node-3:6379\n - WAIT_HOSTS_TIMEOUT=120\n - WAIT_SLEEP_INTERVAL=1\n - WAIT_HOST_CONNECT_TIMEOUT=60\n - TUNNEL_ENDPOINT=http://mediator-tunnel:4040\n networks:\n - acapy_default\n entrypoint: /bin/sh -c '/wait && ./acapy-endpoint.sh poetry run aca-py \"$$@\"' --\n command: start --arg-file ./configs/mediator.yml\n\ndeliverer:\n image: redis-deliverer\n build:\n context: ..\n dockerfile: redis_deliverer/Dockerfile\n depends_on:\n - redis-cluster\n ports:\n - 7002:7002\n environment:\n - REDIS_SERVER_URL=redis://default:test1234@172.28.0.103:6379\n - TOPIC_PREFIX=acapy\n - STATUS_ENDPOINT_HOST=0.0.0.0\n - STATUS_ENDPOINT_PORT=7002\n - STATUS_ENDPOINT_API_KEY=test_api_key_2\n - WAIT_BEFORE_HOSTS=15\n - WAIT_HOSTS=redis-node-3:6379\n - WAIT_HOSTS_TIMEOUT=120\n - WAIT_SLEEP_INTERVAL=1\n - WAIT_HOST_CONNECT_TIMEOUT=60\n networks:\n - acapy_default\n
Both relay and mediator demos are also available.
"},{"location":"deploying/RedisPlugins/#acapy-cache-redis-redis_cache","title":"acapy-cache-redisredis_cache
","text":"ACA-Py uses a modular cache layer to story key-value pairs of data. The purpose of this plugin is to allow ACA-Py to use Redis as the storage medium for it's caching needs.
More details can be found here.
"},{"location":"deploying/RedisPlugins/#redis-cache-plugin-configuration-yaml","title":"Redis Cache Plugin configurationyaml
","text":"redis_cache:\n connection: \"redis://default:test1234@172.28.0.103:6379\"\n max_connection: 50\n credentials:\n username: \"default\"\n password: \"test1234\"\n ssl:\n cacerts: ./ca.crt\n
redis_cache.connection
: This is required and is expected in redis://{username}:{password}@{host}:{port}
format.redis_cache.max_connection
: Maximum number of redis pool connections. Default: 50redis_cache.credentials.username
: Redis instance usernameredis_cache.credentials.password
: Redis instance passwordredis_cache.ssl.cacerts
Running the plugin with docker is simple and straight-forward. There is an example docker-compose.yml file in the root of the project that launches both ACA-Py and an accompanying Redis instance. Running it is as simple as:
docker-compose up --build -d\n
To launch ACA-Py with an accompanying redis cluster of 6 nodes (3 primaries and 3 replicas), please refer to example docker-compose.cluster.yml and run the following:
Note: Cluster requires external docker network with specified subnet
docker network create --subnet=172.28.0.0/24 `network_name`\nexport REDIS_PASSWORD=\" ... As specified in redis_cluster.conf ... \"\nexport NETWORK_NAME=\"`network_name`\"\ndocker-compose -f docker-compose.cluster.yml up --build -d\n
Installation
pip install git+https://github.com/Indicio-tech/aries-acapy-cache-redis.git\n
Startup ACA-Py with redis_cache
plugin loaded
aca-py start \\\n --plugin acapy_cache_redis.v0_1 \\\n --plugin-config plugins-config.yaml \\\n # ... the remainder of your startup arguments\n
or
aca-py start \\\n --plugin acapy_cache_redis.v0_1 \\\n --plugin-config-value \"redis_cache.connection=redis://redis-host:6379/0\" \\\n --plugin-config-value \"redis_cache.max_connections=90\" \\\n --plugin-config-value \"redis_cache.credentials.username=username\" \\\n --plugin-config-value \"redis_cache.credentials.password=password\" \\\n # ... the remainder of your startup arguments\n
"},{"location":"deploying/RedisPlugins/#redis-cluster","title":"Redis Cluster","text":"If you startup a redis cluster and an ACA-Py agent loaded with either redis_queue
or redis_cache
plugin or both, then during the initialization of the plugin, it will bind an instance of redis.asyncio.RedisCluster
(onto the root_profile
). Other plugin will have access to this redis client for it's functioning. This is done for efficiency and to avoid duplication of resources.
Some releases of ACA-Py may be improved by, or even require, an upgrade when moving to a new version. Such changes are documented in the CHANGELOG.md, and those with ACA-Py deployments should take note of those upgrades. This document summarizes the upgrade system in ACA-Py.
"},{"location":"deploying/UpgradingACA-Py/#version-information-and-automatic-upgrades","title":"Version Information and Automatic Upgrades","text":"The file version.py contains the current version of a running instance of ACA-Py. In addition, a record is made in the ACA-Py secure storage (database) about the \"most recently upgraded\" version. When deploying a new version of ACA-Py, the version.py value will be higher than the version in secure storage. When that happens, an upgrade is executed, and on successful completion, the version is updated in secure storage to match what is in version.py.
Upgrades are defined in the Upgrade Definition YML file. For a given version listed in the follow, the corresponding entry is what actions are required when upgrading from a previous version. If a version is not listed in the file, there is no upgrade defined for that version from its immediate predecessor version.
Once an upgrade is identified as needed, the process is:
In some cases, it may be necessary to do an offline upgrade, where ACA-Py is taken off line temporarily, the database upgraded explicitly, and then ACA-Py re-deployed as normal. As yet, we do not have any use cases for this, but those deploying ACA-Py should be aware of this possibility. For example, we may at some point need an upgrade that MUST NOT be executed by more than one ACA-Py instance. In that case, a \"normal\" upgrade could be dangerous for deployments on container orchestration platforms like Kubernetes.
If the Maintainers of ACA-Py recognize a case where ACA-Py must be upgraded while offline, a new Upgrade feature will be added that will prevent the \"auto upgrade\" process from executing. See Issue 2201 and Pull Request 2204 for the status of that feature.
Those deploying ACA-Py upgrades for production installations (forced offline or not) should check in each CHANGELOG.md release entry about what upgrades (if any) will be run when upgrading to that version, and consider how they want those upgrades to run in their ACA-Py installation. In most cases, simply deploying the new version should be OK. If the number of records to be upgraded is high (such as a \"resave connections\" upgrade to a deployment with many, many connections), you may want to do a test upgrade offline first, to see if there is likely to be a service disruption during the upgrade. Plan accordingly!
"},{"location":"deploying/UpgradingACA-Py/#tagged-upgrades","title":"Tagged upgrades","text":"Upgrades are defined in the Upgrade Definition YML file, in addition to specifying upgrade actions by version they can also be specified by named tags. Unlike version based upgrades where all applicable version based actions will be performed based upon sorted order of versions, with named tags only actions corresponding to provided tags will be performed. Note: --force-upgrade
is required when running name tags based upgrade (i.e. providing --named-tag
).
Tags are specified in YML file as below:
fix_issue_rev_reg:\n fix_issue_rev_reg_records: true\n
Example:
./scripts/run_docker upgrade --force-upgrade --named-tag fix_issue_rev_reg\n\n# In case, running multiple tags [say test1 & test2]:\n ./scripts/run_docker upgrade --force-upgrade --named-tag test1 --named-tag test2\n
"},{"location":"deploying/UpgradingACA-Py/#subwallet-upgrades","title":"Subwallet upgrades","text":"With multitenant enabled, there is a subwallet associated with each tenant profile, so there is a need to upgrade those sub wallets in addition to the base wallet associated with root profile.
There are 2 options to perform such upgrades:
--upgrade-all-subwallets
This will apply the upgrade steps to all sub wallets (tenant profiles) and the base wallet (root profiles).
--upgrade-subwallet
This will apply the upgrade steps to specified sub wallets (identified by wallet id) and the base wallet.
Note: multiple specifications allowed
"},{"location":"deploying/UpgradingACA-Py/#exceptions","title":"Exceptions","text":"There are a couple of upgrade exception conditions to consider, as outlined in the following sections.
"},{"location":"deploying/UpgradingACA-Py/#no-version-in-secure-storage","title":"No version in secure storage","text":"Versions prior to ACA-Py 0.8.1 did not automatically populate the secure storage \"version\" record. That only occurred if an upgrade was explicitly executed. As of ACA-Py 0.8.1, the version record is added immediately after the secure storage database is created. If you are upgrading to ACA-Py 0.8.1 or later, and there is no version record in the secure storage, ACA-Py will assume you are running version 0.7.5, and execute the upgrades from version 0.7.5 to the current version. The choice of 0.7.5 as the default is safe because the same upgrades will be run on any version of ACA-Py up to and including 0.7.5, as can be seen in the Upgrade Definition YML file. Thus, even if you are really upgrading from (for example) 0.6.2, the same upgrades are needed as from 0.7.5 to a post-0.8.1 version.
"},{"location":"deploying/UpgradingACA-Py/#forcing-an-upgrade","title":"Forcing an upgrade","text":"If you need to force an upgrade from a given version of ACA-Py, a pair of configuration options can be used together. If you specify \"--from-version <ver>
\" and \"--force-upgrade
\", the --from-version
version will override what is found (or not) in secure storage, and the upgrade will be from that version to the current one. For example, if you have \"0.8.1\" in your \"secure storage\" version, and you know that the upgrade for version 0.8.1 has not been executed, you can use the parameters --from-version v0.7.5 --force-upgrade
to force the upgrade on next starting an ACA-Py instance. However, given the few upgrades defined prior to version 0.8.1, and the \"no version in secure storage\" handling, it is unlikely this capability will ever be needed. We expect to deprecate and remove these options in future (post-0.8.1) ACA-Py versions.
This document is a \"concept of operations\" for an instance of an ACA-Py agent deployed from the primary artifact (a PyPi package) produced by this repo. In such a deployment there are always two components - a configured agent itself, and a controller that injects into that agent the business rules for the particular agent instance (see diagram).
The deployed agent messages with other agents via DIDComm protocols, and as events associated with those messages occur, sends webhook HTTP notifications to the controller. The agent also exposes for the controller's exclusive use an HTTP API covering all of the administrative handlers for those events. The controller receives the notifications from the agent, decides (with business rules - possible by asking a person using a UI) how to respond to the event and calls back to the agent via the HTTP API. Of course, the controller may also initiate events (e.g. messaging another agent) by calling that same API.
The following is an example of the interactions involved in creating a connection using the DIDComm \"Establish Connection\" protocol. The controller requests from the agent (via the administrative API) a connection invitation from the agent, and receives one back. The controller provides it to another agent (perhaps by displaying it in a QR code). Shortly after, the agent receives a DIDComm \"Connection Request\" message. The agent, sends it to the controller. The controller decides to accept the connection and calls the API with instructions to the agent to send a \"Connection Response\" message to the other agent. Since the controller always wants to know with whom a connection has been created, the controller also sends instructions to the agent (via the API, of course) to send a request presentation message to the new connection. And so on... During the interactions, the agent is tracking the state of the connections, and the state of the protocol instances (threads). Likewise, the controller may also be retaining state - after all, it's an application that could do anything.
Most developers will configure a \"black box\" instance of the ACA-Py. They need to know how it works, the DIDComm protocols it supports, the events it will generate and the administrative API it exposes. However, they don't need to drill into and maintain the ACA-Py code. Such developers will build controller applications (basically, traditional web apps) that at their simplest, use an HTTP interface to receive notification and send HTTP requests to the agent. It's the business logic implemented in, or accessed by the controller that gives the deployment its personality and role.
Note: the ACA-Py agent is designed to be stateless, persisting connection and protocol state to storage (such as Postgres database). As such, agents can be deployed to support horizontal scaling as necessary. Controllers can also be implemented to support horizontal scaling.
The sections below detail the internals of the ACA-Py and it's configurable elements, and the conceptual elements of a controller. There is no \"Aries controller\" repo to fork, as it is essentially just a web app. There are demos of using the elements in this repo, and several sample applications that you can use to get started on your on controller.
"},{"location":"deploying/deploymentModel/#aca-py","title":"ACA-Py","text":"ACA-Py implement services to manage the execution of DIDComm messaging protocols for interacting with other DIDComm agents, and exposes an administrative HTTP API that supports a controller to direct how the agent should respond to messaging events. The agent relies on the controller to provide the business rules for handling the messaging events, and to initiate the execution of new DIDComm protocol instances. The internals of an ACA-Py instance is diagramed below.
Instances of the ACA-Py agents are configured with the following sub-components:
A controller provides the personality of ACA-Py agent instance - the business logic (human, machine or rules driven) that drive the behaviour of the agent. The controller\u2019s \u201cBusiness Logic\u201d in a cloud agent could be built into the controller app, could be an integration back to an enterprise system, or even a user interface for an individual. In all cases, the business logic provide responses to agent events or initiates agent actions. A deployed controller talks to a single ACA-Py agent deployment and manages the configuration of that agent. Both can be configured and deployed to support horizontal scaling.
Generically, a controller is a web app invoked by HTTP webhook calls from its corresponding ACA-Py agent and invoking the DIDComm administration capabilities of the ACA-Py agent by calling the REST API exposed by that cloud agent. As well as responding to ACA-Py agent events, the controller initiates DIDComm protocol instances using the same REST API.
The controller and ACA-Py agent deployment MUST secure the HTTP interface between the two components. The interface provides the same HTTP integration between services as modern apps found in any enterprise today, and must be correspondingly secured.
A controller implements the following capabilities.
While there are several examples of controllers, there is no \u201ccookie cutter\u201d repository to fork and customize. A controller is just a web service that receives HTTP requests (webhooks) and sends HTTP messages to the ACA-Py agent it controls via the REST API exposed by that agent.
"},{"location":"deploying/deploymentModel/#deployment","title":"Deployment","text":"The ACA-Py agent CI pipeline configured into the repository generates a PyPi package as an artifact. Implementers will generally have a controller repository, possibly copied from an existing controller instance, that has the code (business logic) for the controller and the configuration (transports, handlers, DIDComm protocols, etc.) for the ACA-Py agent instance. In the most common scenario, the ACA-Py agent and controller instances will be deployed based on the artifacts (e.g. container images) generated from that controller repository. With the simple HTTP-based interface between the controller and ACA-Py agent, both components can be horizontally scaled as needed, with a load balancer between the components. The configuration of the ACA-Py agent to use the Postgres wallet supports enterprise scale agent deployments.
Current examples of deployed instances of ACA-Py agent and controllers include:
This design proposes to extend the ACA-PY to support Hyperledger AnonCreds credentials and presentations in the W3C Verifiable Credentials (VC) and Verifiable Presentations (VP) Format. The aim is to transition from the legacy AnonCreds format specified in Aries-Legacy-Method to the W3C VC format.
"},{"location":"design/AnoncredsW3CCompatibility/#overview","title":"Overview","text":"The pre-requisites for the work are:
As of 2024-01-15, these pre-requisites have been met.
"},{"location":"design/AnoncredsW3CCompatibility/#impacts-on-aca-py","title":"Impacts on ACA-Py","text":""},{"location":"design/AnoncredsW3CCompatibility/#issuer","title":"Issuer","text":"Issuer support needs to be added for using the RFC 0809 VC-DI attachment format when sending Issue Credential v2.0 protocoloffer
and issue
messages and when receiving request
messages.
Related notes:
A mechanism must be defined such that an Issuer controller can use the ACA-Py Admin API to initiate the sending of an AnonCreds credential Offer using the RFC 0809 VC-DI attachment format.
A credential's encoded attributes are not included in the issued AnonCreds W3C VC format credential. To be determined how that impacts the issuing process.
"},{"location":"design/AnoncredsW3CCompatibility/#verifier","title":"Verifier","text":"A verifier wanting a W3C VP Format presentation will send the Present Proof v2.0 request
message with an RFC 0510 DIF Presentation Exchange format attachment.
If needed, the RFC 0510 DIF Presentation Exchange document will be clarified and possibly updated to enable its use for handling AnonCreds W3C VP format presentations.
An AnonCreds W3C VP format presentation does not include the encoded revealed attributes, and the encoded values must be calculated as needed. To be determined where those would be needed.
"},{"location":"design/AnoncredsW3CCompatibility/#holder","title":"Holder","text":"A holder must support RFC 0809 VC-DI attachments when receiving Issue Credential v2.0 offer
and issue
messages, and when sending request
messages.
On receiving an Issue Credential v2.0 offer
message with a RFC 0809 VC-DI, the holder MUST respond using the RFC 0809 VC-DI on the subsequent request
message.
On receiving a credential from an issuer in an RFC 0809 VC-DI attachment, the holder must process and store the credential for subsequent use in presentations.
On receiving an RFC 0510 DIF Presentation Exchange request
message, a holder must include AnonCreds verifiable credentials in the search for credentials satisfying the request, and if found and selected for use, must construct the presentation using the RFC 0510 DIF Presentation Exchange presentation format, with an embedded AnonCreds W3C VP format presentation.
offer
message with an RFC 0809 VC-DI attachment.request
message with an RFC 0510 DIF Presentation Exchange format attachment that can be satisfied with AnonCreds credentials held by the holder.restrictions
and revocation
data elements conveyed?It appears that the issue and presentation sides can be approached independently, assuming that any stored AnonCreds VC can be used in an AnonCreds W3C VP format presentation.
"},{"location":"design/AnoncredsW3CCompatibility/#issue-credential","title":"Issue Credential","text":"request
message with an RFC 0510 DIF Presentation Exchange attachment so that AnonCreds VCs can found and used in the subsequent response.presentation
message with an RFC 0510 DIF Presentation Exchange containing AnonCreds W3C VP(s) derived from AnonCreds source VCs.After thoroughly reviewing upcoming changes from anoncreds-rs PR273, the classes or AnoncredsObject
impacted by changes are as follows:
W3CCredential
create
, load
)process
, to_legacy
, add_non_anoncreds_integrity_proof
, set_id
, set_subject_id
, add_context
, add_type
)schema_id
, cred_def_id
, rev_reg_id
, rev_reg_index
)create_w3c_credential
, process_w3c_credential
, _object_from_json
, _object_get_attribute
, w3c_credential_add_non_anoncreds_integrity_proof
, w3c_credential_set_id
, w3c_credential_set_subject_id
, w3c_credential_add_context
, w3c_credential_add_type
)W3CPresentation
create
, load
)verify
)create_w3c_presentation
, _object_from_json
, verify_w3c_presentation
)They will be added to __init__.py as additional exports of AnoncredsObject.
We also have to consider which classes or anoncreds objects have been modified
The classes modified according to the same PR mentioned above are:
Credential
from_w3c
)to_w3c
)credential_from_w3c
, credential_to_w3c
)PresentCredential
_get_entry
, add_attributes
, add_predicates
)The issuance, presentation and verification of legacy anoncreds are implemented in this ./acapy_agent/anoncreds directory. Therefore, we will also start from there.
Let us navigate these implementation examples through the respective processes of the concerning agents - Issuer and Holder as described in https://github.com/hyperledger/anoncreds-rs/blob/main/README.md. We will proceed through the following processes in comparison with the legacy anoncreds implementations while watching out for signature differences between the two. Looking at the /anoncreds/issuer.py file, from AnonCredsIssuer
class:
Create VC_DI Credential Offer
According to this DI credential offer attachment format - didcomm/w3c-di-vc-offer@v0.1,
could be the parameters for create_offer
method.
Create VC_DI Credential
NOTE: There has been some changes to encoding of attribute values for creating a credential, so we have to be adjust to the new changes.
async def create_credential(\n self,\n credential_offer: dict,\n credential_request: dict,\n credential_values: dict,\n ) -> str:\n...\n...\n try:\n credential = await asyncio.get_event_loop().run_in_executor(\n None,\n lambda: W3CCredential.create(\n cred_def.raw_value,\n cred_def_private.raw_value,\n credential_offer,\n credential_request,\n raw_values,\n None,\n None,\n None,\n None,\n ),\n )\n...\n
Create VC_DI Credential Request
async def create_vc_di_credential_request(\n self, credential_offer: dict, credential_definition: CredDef, holder_did: str\n ) -> Tuple[str, str]:\n...\n...\ntry:\n secret = await self.get_master_secret()\n (\n cred_req,\n cred_req_metadata,\n ) = await asyncio.get_event_loop().run_in_executor(\n None,\n W3CCredentialRequest.create,\n None,\n holder_did,\n credential_definition.to_native(),\n secret,\n AnonCredsHolder.MASTER_SECRET_ID,\n credential_offer,\n )\n...\n
Create VC_DI Credential Presentation
async def create_vc_di_presentation(\n self,\n presentation_request: dict,\n requested_credentials: dict,\n schemas: Dict[str, AnonCredsSchema],\n credential_definitions: Dict[str, CredDef],\n rev_states: dict = None,\n ) -> str:\n...\n...\n try:\n secret = await self.get_master_secret()\n presentation = await asyncio.get_event_loop().run_in_executor(\n None,\n Presentation.create,\n presentation_request,\n present_creds,\n self_attest,\n secret,\n {\n schema_id: schema.to_native()\n for schema_id, schema in schemas.items()\n },\n {\n cred_def_id: cred_def.to_native()\n for cred_def_id, cred_def in credential_definitions.items()\n },\n )\n...\n
"},{"location":"design/AnoncredsW3CCompatibility/#converting-an-already-issued-legacy-anoncreds-to-vc_di-formatvice-versa","title":"Converting an already issued legacy anoncreds to VC_DI format(vice versa)","text":"In this case, we can use to_w3c
method of Credential
class to convert from legacy to w3c and to_legacy
method of W3CCredential
class to convert from w3c to legacy.
We could call to_w3c
method like this:
vc_di_cred = Credential.to_w3c(cred_def)\n
and for to_legacy
:
legacy_cred = W3CCredential.to_legacy()\n
We don't need to input any parameters to it as it in turn calls Credential.from_w3c()
method under the hood.
Keeping in mind that we are trying to create anoncreds(not another type of VC) in w3c format, what if we add a protocol-level vc_di format support by adding a new format VC_DI
in ./protocols/issue_credential/v2_0/messages/cred_format.py
-
# /protocols/issue_credential/v2_0/messages/cred_format.py\n\nclass Format(Enum):\n \u201c\u201d\u201dAttachment Format\u201d\u201d\u201d\n INDY = FormatSpec(...)\n LD_PROOF = FormatSpec(...)\n VC_DI = FormatSpec(\n \u201cvc_di/\u201d,\n CredExRecordVCDI,\n DeferLoad(\n \u201cacapy_agent.protocols.issue_credential.v2_0\u201d\n \u201c.formats.vc_di.handler.AnonCredsW3CFormatHandler\u201d\n ),\n )\n
And create a new CredExRecordVCDI in reference to V20CredExRecordLDProof
# /protocols/issue_credential/v2_0/models/detail/w3c.py\n\nclass CredExRecordW3C(BaseRecord):\n \"\"\"Credential exchange W3C detail record.\"\"\"\n\n class Meta:\n \"\"\"CredExRecordW3C metadata.\"\"\"\n\n schema_class = \"CredExRecordW3CSchema\"\n\n RECORD_ID_NAME = \"cred_ex_w3c_id\"\n RECORD_TYPE = \"w3c_cred_ex_v20\"\n TAG_NAMES = {\"~cred_ex_id\"} if UNENCRYPTED_TAGS else {\"cred_ex_id\"}\n RECORD_TOPIC = \"issue_credential_v2_0_w3c\"\n
Based on the proposed credential attachment format with the new Data Integrity proof in aries-rfcs 809 -
{\n \"@id\": \"284d3996-ba85-45d9-964b-9fd5805517b6\",\n \"@type\": \"https://didcomm.org/issue-credential/2.0/issue-credential\",\n \"comment\": \"<some comment>\",\n \"formats\": [\n {\n \"attach_id\": \"5b38af88-d36f-4f77-bb7a-2f04ab806eb8\",\n \"format\": \"didcomm/w3c-di-vc@v0.1\"\n }\n ],\n \"credentials~attach\": [\n {\n \"@id\": \"5b38af88-d36f-4f77-bb7a-2f04ab806eb8\",\n \"mime-type\": \"application/ld+json\",\n \"data\": {\n \"base64\": \"ewogICAgICAgICAgIkBjb250ZXogWwogICAgICAg...(clipped)...RNVmR0SXFXZhWXgySkJBIgAgfQogICAgICAgIH0=\"\n }\n }\n ]\n}\n
Assuming VCDIDetail
and VCDIOptions
are already in place, VCDIDetailSchema
can be created like so:
# /protocols/issue_credential/v2_0/formats/vc_di/models/cred_detail.py\n\nclass VCDIDetailSchema(BaseModelSchema):\n \"\"\"VC_DI verifiable credential detail schema.\"\"\"\n\n class Meta:\n \"\"\"Accept parameter overload.\"\"\"\n\n unknown = INCLUDE\n model_class = VCDIDetail\n\n credential = fields.Nested(\n CredentialSchema(),\n required=True,\n metadata={\n \"description\": \"Detail of the VC_DI Credential to be issued\",\n \"example\": {\n \"@id\": \"284d3996-ba85-45d9-964b-9fd5805517b6\",\n \"@type\": \"https://didcomm.org/issue-credential/2.0/issue-credential\",\n \"comment\": \"<some comment>\",\n \"formats\": [\n {\n \"attach_id\": \"5b38af88-d36f-4f77-bb7a-2f04ab806eb8\",\n \"format\": \"didcomm/w3c-di-vc@v0.1\"\n }\n ],\n \"credentials~attach\": [\n {\n \"@id\": \"5b38af88-d36f-4f77-bb7a-2f04ab806eb8\",\n \"mime-type\": \"application/ld+json\",\n \"data\": {\n \"base64\": \"ewogICAgICAgICAgIkBjb250ZXogWwogICAgICAg...(clipped)...RNVmR0SXFXZhWXgySkJBIgAgfQogICAgICAgIH0=\"\n }\n }\n ]\n }\n },\n )\n
Then create w3c format handler with mapping like so:
# /protocols/issue_credential/v2_0/formats/w3c/handler.py\n\nmapping = {\n CRED_20_PROPOSAL: VCDIDetailSchema,\n CRED_20_OFFER: VCDIDetailSchema,\n CRED_20_REQUEST: VCDIDetailSchema,\n CRED_20_ISSUE: VerifiableCredentialSchema,\n }\n
Doing so would allow us to be more independent in defining the schema suited for anoncreds in w3c format and once the proposal protocol can handle the w3c format, probably the rest of the flow can be easily implemented by adding vc_di
flag to the corresponding routes.
To make sure that once an endpoint has been called to trigger the Issue Credential
flow in 0809 W3C_DI attachment formats
the subsequent endpoints also follow this format, we can keep track of this ATTACHMENT_FORMAT dictionary with the proposed VC_DI
format.
# Format specifications\nATTACHMENT_FORMAT = {\n CRED_20_PROPOSAL: {\n V20CredFormat.Format.INDY.api: \"hlindy/cred-filter@v2.0\",\n V20CredFormat.Format.LD_PROOF.api: \"aries/ld-proof-vc-detail@v1.0\",\n V20CredFormat.Format.VC_DI.api: \"aries/vc-di-detail@v2.0\",\n },\n CRED_20_OFFER: {\n V20CredFormat.Format.INDY.api: \"hlindy/cred-abstract@v2.0\",\n V20CredFormat.Format.LD_PROOF.api: \"aries/ld-proof-vc-detail@v1.0\",\n V20CredFormat.Format.VC_DI.api: \"aries/vc-di-detail@v2.0\",\n },\n CRED_20_REQUEST: {\n V20CredFormat.Format.INDY.api: \"hlindy/cred-req@v2.0\",\n V20CredFormat.Format.LD_PROOF.api: \"aries/ld-proof-vc-detail@v1.0\",\n V20CredFormat.Format.VC_DI.api: \"aries/vc-di-detail@v2.0\",\n },\n CRED_20_ISSUE: {\n V20CredFormat.Format.INDY.api: \"hlindy/cred@v2.0\",\n V20CredFormat.Format.LD_PROOF.api: \"aries/ld-proof-vc@v1.0\",\n V20CredFormat.Format.VC_DI.api: \"aries/vc-di@v2.0\",\n },\n}\n
And this _formats_filter function takes care of keeping the attachment formats uniform across the iteration of the flow. We can see this function gets called in:
/issue-credential-2.0/send-offer
route (in addition to other offer routes)/issue-credential-2.0/send-request
route/issue-credential-2.0/create
route/issue-credential-2.0/send
routeThe same goes for ATTACHMENT_FORMAT of Present Proof
flow. In this case, DIF Presentation Exchange formats in these test vectors that are influenced by RFC 0510 DIF Presentation Exchange will be implemented. Here, the _formats_attach function is the key for the same purpose above. It gets called in:
/present-proof-2.0/send-proposal
route/present-proof-2.0/create-request
route/present-proof-2.0/send-request
routeThis route indirectly calls _formats_filters
function to create credential proposal, which is in turn used to create a credential offer in the filter format. The request body for this route might look like this:
{\n \"filter\": [\"vc_di\"],\n \"comment: <some_comment>,\n \"auto-issue\": true,\n \"auto-remove\": true,\n \"replacement_id\": <replacement_id>,\n \"credential_preview\": {\n \"@type\": \"issue-credential/2.0/credential-preview\",\n \"attributes\": {\n ...\n ...\n }\n }\n}\n
This route indirectly calls _format_result_with_details
function to generate a cred_ex_record in the specified format, which is then returned. The request body for this route might look like this:
{\n \"filter\": [\"vc_di\"],\n \"comment: <some_comment>,\n \"auto-remove\": true,\n \"credential_preview\": {\n \"@type\": \"issue-credential/2.0/credential-preview\",\n \"attributes\": {\n ...\n ...\n }\n }\n}\n
The request body for this route might look like this:
{\n \"connection_id\": <connection_id>,\n \"filter\": [\"vc_di\"],\n \"comment: <some_comment>,\n \"auto-remove\": true,\n \"replacement_id\": <replacement_id>,\n \"credential_preview\": {\n \"@type\": \"issue-credential/2.0/credential-preview\",\n \"attributes\": {\n ...\n ...\n }\n }\n}\n
The request body for this route might look like this:
{\n \"connection_id\": <connection_id>,\n \"filter\": [\"vc_di\"],\n \"comment: <some_comment>,\n \"auto-issue\": true,\n \"auto-remove\": true,\n \"replacement_id\": <replacement_id>,\n \"holder_did\": <holder_did>,\n \"credential_preview\": {\n \"@type\": \"issue-credential/2.0/credential-preview\",\n \"attributes\": {\n ...\n ...\n }\n }\n}\n
The request body for this route might look like this:
{\n \"connection_id\": <connection_id>,\n \"filter\": [\"vc_di\"],\n \"comment: <some_comment>,\n \"auto-remove\": true,\n \"replacement_id\": <replacement_id>,\n \"holder_did\": <holder_did>,\n \"credential_preview\": {\n \"@type\": \"issue-credential/2.0/credential-preview\",\n \"attributes\": {\n ...\n ...\n }\n }\n}\n
"},{"location":"design/AnoncredsW3CCompatibility/#presentation-admin-routes","title":"Presentation Admin Routes","text":"The request body for this route might look like this:
{\n ...\n ...\n \"connection_id\": <connection_id>,\n \"presentation_proposal\": [\"vc_di\"],\n \"comment: <some_comment>,\n \"auto-present\": true,\n \"auto-remove\": true,\n \"trace\": false\n}\n
The request body for this route might look like this:
{\n ...\n ...\n \"connection_id\": <connection_id>,\n \"presentation_proposal\": [\"vc_di\"],\n \"comment: <some_comment>,\n \"auto-verify\": true,\n \"auto-remove\": true,\n \"trace\": false\n}\n
The request body for this route might look like this:
{\n ...\n ...\n \"connection_id\": <connection_id>,\n \"presentation_proposal\": [\"vc_di\"],\n \"comment: <some_comment>,\n \"auto-verify\": true,\n \"auto-remove\": true,\n \"trace\": false\n}\n
The request body for this route might look like this:
{\n \"presentation_definition\": <presentation_definition_schema>,\n \"auto_remove\": true,\n \"dif\": {\n issuer_id: \"<issuer_id>\",\n record_ids: {\n \"<input descriptor id_1>\": [\"<record id_1>\", \"<record id_2>\"],\n \"<input descriptor id_2>\": [\"<record id>\"],\n }\n },\n \"reveal_doc\": {\n // vc_di dict\n }\n\n}\n
"},{"location":"design/AnoncredsW3CCompatibility/#how-a-w3c-credential-is-stored-in-the-wallet","title":"How a W3C credential is stored in the wallet","text":"Storing a credential in the wallet is somewhat dependent on the kinds of metadata that are relevant. The metadata mapping between the W3C credential and an AnonCreds credential is not fully clear yet.
One of the questions we need to answer is whether the preferred approach is to modify the existing store credential function so that any credential type is a valid input, or whether there should be a special function just for storing W3C credentials.
We will duplicate this store_credential function and modify it:
async def store_w3c_credential(...) {\n ...\n ...\n try:\n cred = W3CCredential.load(credential_data)\n ...\n ...\n}\n
Question: Would it also be possible to generate the credentials on the fly to eliminate the need for storage?
Answer: I don't think it is possible to eliminate the need for storage, and notably the secure storage (encrypted at rest) supported in Askar.
"},{"location":"design/AnoncredsW3CCompatibility/#how-can-we-handle-multiple-signatures-on-a-w3c-vc-format-credential","title":"How can we handle multiple signatures on a W3C VC Format credential?","text":"Only one of the signature types (CL) is allowed in the AnonCreds format, so if a W3C VC is created by to_legacy()
, all signature types that can't be turned into a CL signature will be dropped. This would make the conversion lossy. Similarly, an AnonCreds credential carries only the CL signature, limiting output from to_w3c()
signature types that can be derived from the source CL signature. A possible future enhancement would be to add an extra field to the AnonCreds data structure, in which additional signatures could be stored, even if they are not used. This could eliminate the lossiness, but it adds extra complexity and may not be worth doing.
We will write a test for the Aries Agent Test Framework that issues a W3C VC instead of an AnonCreds credential, and then run that test where one of the agents is ACA-PY and the other is based on AFJ -- and vice versa. Also write a test where a W3C VC is presented after an AnonCreds issuance, and run it with the two roles played by the two different agents. This is a simple approach, but if the tests pass, this should eliminate almost all risk of incompatibility.
"},{"location":"design/AnoncredsW3CCompatibility/#will-we-introduce-new-dependencies-and-what-is-risky-or-easy","title":"Will we introduce new dependencies, and what is risky or easy?","text":"Any significant bugs in the Rust implementation may prevent our wrappers from working, which would also prevent progress (or at least confirmed test results) on the higher-level code.
If AFJ lags behind in delivering equivalent functionality, we may not be able to demonstrate compatibility with the test harness.
"},{"location":"design/AnoncredsW3CCompatibility/#where-should-the-new-issuance-code-go","title":"Where should the new issuance code go?","text":"So the vc directory contains code to verify vc's, is this a logical place to add the code for issuance?
"},{"location":"design/AnoncredsW3CCompatibility/#what-do-we-call-the-new-things-flexcreds-or-just-w3c_xxx","title":"What do we call the new things? Flexcreds? or just W3C_xxx","text":"Are we defining a concept called Flexcreds that is a credential with a proof array that you can generate more specific or limited credentials from? If so should this be included in the naming?
If the wallet receives a \"Flexcred\" credential object with an array of proofs, the wallet may wish to present ONLY the more zero-knowledge AnonCreds proof. How will wallets support that in a way that is developer-friendly to wallet devs?
presentation message of the Present Proof v2.0 protocol.

To isolate an upgrade process and trigger it via the API, the following pattern was designed to handle multitenant scenarios. It includes an is_upgrading record in the wallet (DB) and a middleware to prevent requests during the upgrade process.
"},{"location":"design/UpgradeViaApi/#flow","title":"Flow","text":"The diagram below describes the sequence of events for the anoncreds upgrade process which it was designed, but the architecture can be used for any upgrade process.
sequenceDiagram\n participant A1 as Agent 1\n participant M1 as Middleware\n participant IAS1 as IsAnoncredsSingleton Set\n participant UIPS1 as UpgradeInProgressSingleton Set\n participant W as Wallet (DB)\n participant UIPS2 as UpgradeInProgressSingleton Set\n participant IAS2 as IsAnoncredsSingleton Set\n participant M2 as Middleware\n participant A2 as Agent 2\n\n Note over A1,A2: Start upgrade for non-anoncreds wallet\n A1->>M1: POST /anoncreds/wallet/upgrade\n M1-->>IAS1: check if wallet is in set\n IAS1-->>M1: wallet is not in set\n M1-->>UIPS1: check if wallet is in set\n UIPS1-->>M1: wallet is not in set\n M1->>A1: OK\n A1-->>W: Add is_upgrading = anoncreds_in_progress record\n A1->>A1: Upgrade wallet\n A1-->>UIPS1: Add wallet to set\n\n Note over A1,A2: Attempted Requests During Upgrade\n\n Note over A1: Attempted Request\n A1->>M1: GET /any-endpoint\n M1-->>IAS1: check if wallet is in set\n IAS1-->>M1: wallet is not in set\n M1-->>UIPS1: check if wallet is in set\n UIPS1-->>M1: wallet is in set\n M1->>A1: 503 Service Unavailable\n\n Note over A2: Attempted Request\n A2->>M2: GET /any-endpoint\n M2-->>IAS2: check if wallet is in set\n IAS2-->>M2: wallet is not in set\n M2-->>UIPS2: check if wallet is in set\n UIPS2-->>M2: wallet is not in set\n A2-->>W: Query is_upgrading = anoncreds_in_progress record\n W-->>A2: record = anoncreds_in_progress\n A2->>A2: Loop until upgrade is finished in separate process\n A2-->>UIPS2: Add wallet to set\n M2->>A2: 503 Service Unavailable\n\n Note over A1,A2: Agent Restart During Upgrade\n A1-->>W: Get is_upgrading record for wallet or all subwallets\n W-->>A1: \n A1->>A1: Resume upgrade if in progress\n A1-->>UIPS1: Add wallet to set\n\n Note over A2: Same as Agent 1\n\n Note over A1,A2: Upgrade Completes\n\n Note over A1: Finish Upgrade\n A1-->>W: set is_upgrading = anoncreds_finished\n A1-->>UIPS1: Remove wallet from set\n A1-->>IAS1: Add wallet to set\n A1->>A1: update subwallet or restart\n\n Note over A2: Detect Upgrade Complete\n A2-->>W: Check is_upgrading = anoncreds_finished\n W-->>A2: record = anoncreds_in_progress\n A2->>A2: Wait 1 second\n A2-->>W: Check is_upgrading = anoncreds_finished\n W-->>A2: record = anoncreds_finished\n A2-->>UIPS2: Remove wallet from set\n A2-->>IAS2: Add wallet to set\n A2->>A2: update subwallet or restart\n\n Note over A1,A2: Restarted Agents After Upgrade\n\n A1-->>W: Get is_upgrading record for wallet or all subwallets\n W-->>A1: \n A1->>IAS1: Add wallet to set if record = anoncreds_finished\n\n Note over A2: Same as Agent 1\n\n Note over A1,A2: Attempted Requests After Upgrade\n\n Note over A1: Attempted Request\n A1->>M1: GET /any-endpoint\n M1-->>IAS1: check if wallet is in set\n IAS1-->>M1: wallet is in set\n M1-->>A1: OK\n\n Note over A2: Same as Agent 1
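The middleware check in this flow can be sketched as follows. This is a minimal illustration of the singleton-set logic from the diagram above, not ACA-Py's actual implementation; the set names and the wallet-identification header are assumptions used only for illustration.

```python
from aiohttp import web

# Illustrative stand-ins for the IsAnoncredsSingleton and
# UpgradeInProgressSingleton sets in the diagram.
is_anoncreds_wallets: set = set()
upgrade_in_progress_wallets: set = set()


@web.middleware
async def upgrade_middleware(request: web.Request, handler):
    """Reject requests for wallets that are mid-upgrade."""
    wallet_id = request.headers.get("X-Wallet-Id", "base")  # simplified lookup
    # Wallets that already finished the upgrade pass straight through.
    if wallet_id in is_anoncreds_wallets:
        return await handler(request)
    # Wallets currently upgrading get a 503 until the upgrade completes.
    if wallet_id in upgrade_in_progress_wallets:
        raise web.HTTPServiceUnavailable(reason="Wallet upgrade in progress")
    return await handler(request)
```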
"},{"location":"design/UpgradeViaApi/#example","title":"Example","text":"An example of the implementation can be found via the anoncreds upgrade components.
- `acapy_agent/wallet/routes.py` in the `upgrade_anoncreds` controller
- `wallet/anoncreds_upgrade.py`
- `admin/server.py` in the `upgrade_middleware` function
- `wallet/singletons.py`
- `core/conductor.py` in the `check_for_wallet_upgrades_in_progress` function

ACA-Py provides an OpenAPI-documented REST interface for administering the agent's internal state and initiating communication with connected agents.
To see the specifics of the supported endpoints, as well as the expected request and response formats, it is recommended to run the aca-py
agent with the --admin {HOST} {PORT}
and --admin-insecure-mode
command line parameters. This exposes the OpenAPI UI on the provided port for interaction via a web browser. For production deployments, run the agent with --admin-api-key {KEY}
and add the X-API-Key: {KEY}
header to all requests instead of using the --admin-insecure-mode
parameter.
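For example, a controller script can call the Admin API directly. A minimal sketch, assuming an agent started with `--admin 0.0.0.0 8031` and `--admin-api-key mysecretkey`; `/status` is one of the standard Admin API endpoints:

```python
import requests

ADMIN_URL = "http://localhost:8031"  # assumes --admin 0.0.0.0 8031
API_KEY = "mysecretkey"              # assumes --admin-api-key mysecretkey

# Every request must carry the X-API-Key header when an API key is configured.
response = requests.get(f"{ADMIN_URL}/status", headers={"X-API-Key": API_KEY})
response.raise_for_status()
print(response.json())
```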
To invoke a specific method:
The mechanical steps are easy; however, the fourth step from the list above can be tricky: supplying the right data and, where JSON is involved, getting the syntax correct (braces and quotes can be a pain). When steps don't work, start your debugging by looking at your JSON. You may also choose to use a REST client like Postman or Insomnia, which will provide syntax highlighting and other features to simplify the process.
Because API methods often initiate asynchronous processes, the JSON response provided by an endpoint is not always sufficient to determine the next action. To handle this situation, as well as events triggered by external inputs (such as new connection requests), it is necessary to implement a webhook processor, as detailed in the next section.
The combination of an OpenAPI client and webhook processor is referred to as an ACA-Py Controller and is the recommended method to define custom behaviors for your ACA-Py-based agent application.
"},{"location":"features/AdminAPI/#administration-api-webhooks","title":"Administration API Webhooks","text":"When ACA-Py is started with the --webhook-url {URL}
command line parameter, state-management records are sent to the provided URL via POST requests whenever a record is created or its state
property is updated.
When a webhook is dispatched, the record topic
is appended as a path component to the URL. For example, https://webhook.host.example
becomes https://webhook.host.example/topic/connections
when a connection record is updated. A POST request is made to the resulting URL with the body of the request comprising a serialized JSON object. The full set of properties of the current set of webhook payloads are listed below. Note that empty (null-value) properties are omitted.
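A minimal webhook receiver can be sketched with aiohttp. This assumes the agent was started with `--webhook-url http://localhost:8022`, so records arrive at paths like `/topic/connections`; the handler and port are illustrative:

```python
from aiohttp import web


async def handle_webhook(request: web.Request) -> web.Response:
    """Receive a webhook; the record topic is the last path component."""
    topic = request.match_info["topic"]
    payload = await request.json()
    print(f"topic={topic} state={payload.get('state')}")
    return web.Response()  # any 2xx response acknowledges the webhook


app = web.Application()
app.add_routes([web.post("/topic/{topic}", handle_webhook)])

if __name__ == "__main__":
    web.run_app(app, port=8022)  # assumes --webhook-url http://localhost:8022
```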
ACA-Py's Admin API also supports delivering webhooks over WebSocket. This can be especially useful when working with scripts that interact with the Admin API but don't have a web server listening to receive webhooks in response to its actions. No additional command line parameters are required to enable WebSocket support.
Webhooks received over WebSocket will contain the same data as webhooks posted over http but the structure differs in order to communicate details that would have been received as part of the HTTP request path and headers.
- `topic`: The topic of the webhook, such as `connections` or `basicmessages`
- `payload`: The payload of the webhook; this is the data usually received in the request body when webhooks are delivered over HTTP
- `wallet_id`: If using multitenancy, this is the wallet ID of the subwallet that emitted the webhook. This value will be omitted if not using multitenancy.

To open a WebSocket, connect to the `/ws` endpoint of the Admin API.
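As a sketch, a script can subscribe to these events using the third-party `websockets` package (an assumption; any WebSocket client will do), with the Admin API running in insecure mode on port 8031:

```python
import asyncio
import json

import websockets  # pip install websockets


async def listen():
    # /ws is the Admin API WebSocket endpoint described above
    async with websockets.connect("ws://localhost:8031/ws") as ws:
        async for raw in ws:
            event = json.loads(raw)
            print(event.get("topic"), event.get("wallet_id"), event.get("payload"))


asyncio.run(listen())
```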
"},{"location":"features/AdminAPI/","title":"/connections","text":"

- `connection_id`: the unique connection identifier
- `state`: `init` / `invitation` / `request` / `response` / `active` / `error` / `inactive`
- `my_did`: the DID this agent is using in the connection
- `their_did`: the DID the other agent in the connection is using
- `their_label`: a connection label provided by the other agent
- `their_role`: a role assigned to the other agent in the connection
- `inbound_connection_id`: a connection identifier for the related inbound routing connection
- `initiator`: `self` / `external` / `multiuse`
- `invitation_key`: a verification key used to identify the source connection invitation
- `request_id`: the `@id` property from the connection request message
- `routing_state`: `none` / `request` / `active` / `error`
- `accept`: `manual` / `auto`
- `error_msg`: the most recent error message
- `invitation_mode`: `once` / `multi`
- `alias`: a local alias for the connection record

"},{"location":"features/AdminAPI/","title":"/basicmessages","text":"

- `connection_id`: the identifier of the related pairwise connection
- `message_id`: the `@id` of the incoming agent message
- `content`: the contents of the agent message
- `state`: `received`

"},{"location":"features/AdminAPI/","title":"/forward","text":"Enable using `--monitor-forward`.

- `connection_id`: the identifier of the connection associated with the recipient key
- `recipient_key`: the recipient key of the forward message (`to` field of the forward message)
- `status`: the delivery status of the received forward message. Possible values:
    - `sent_to_session`: message is sent directly to the connection over an active transport session
    - `sent_to_external_queue`: message is sent to an external queue; no information is known on the delivery of the message
    - `queued_for_delivery`: message is queued for delivery using an outbound transport (recipient connection has an endpoint)
    - `waiting_for_pickup`: the connection has no reachable endpoint; need to wait for the recipient to connect with return routing for delivery
    - `undeliverable`: the connection has no reachable endpoint, and the internal queue for messages is not enabled (`--enable-undelivered-queue`)

"},{"location":"features/AdminAPI/","title":"/issue_credential","text":"

- `credential_exchange_id`: the unique identifier of the credential exchange
- `connection_id`: the identifier of the related pairwise connection
- `thread_id`: the thread ID of the previously received credential proposal or offer
- `parent_thread_id`: the parent thread ID of the previously received credential proposal or offer
- `initiator`: issue-credential exchange initiator: `self` / `external`
- `state`: `proposal_sent` / `proposal_received` / `offer_sent` / `offer_received` / `request_sent` / `request_received` / `issued` / `credential_received` / `credential_acked`
- `credential_definition_id`: the ledger identifier of the related credential definition
- `schema_id`: the ledger identifier of the related credential schema
- `credential_proposal_dict`: the credential proposal message
- `credential_offer`: (Indy) credential offer
- `credential_request`: (Indy) credential request
- `credential_request_metadata`: (Indy) credential request metadata
- `credential_id`: the wallet identifier of the stored credential
- `raw_credential`: the credential record as received
- `credential`: the credential record as stored in the wallet
- `auto_offer`: (boolean) whether to automatically offer the credential
- `auto_issue`: (boolean) whether to automatically issue the credential
- `error_msg`: the previous error message

"},{"location":"features/AdminAPI/","title":"/present_proof","text":"

- `presentation_exchange_id`: the unique identifier of the presentation exchange
- `connection_id`: the identifier of the related pairwise connection
- `thread_id`: the thread ID of the previously received presentation proposal or offer
- `initiator`: present-proof exchange initiator: `self` / `external`
- `state`: `proposal_sent` / `proposal_received` / `request_sent` / `request_received` / `presentation_sent` / `presentation_received` / `verified`
- `presentation_proposal_dict`: the presentation proposal message
- `presentation_request`: (Indy) presentation request (also known as proof request)
- `presentation`: (Indy) presentation (also known as proof)
- `verified`: (string) whether the presentation is verified: `true` or `false`
- `auto_present`: (boolean) prover choice to auto-present proof as verifier requests
- `error_msg`: the previous error message

The best way to develop a new admin API or protocol is to follow one of the existing protocols, such as the Credential Exchange or Presentation Exchange.
The routes.py
file contains the API definitions - API endpoints and payload schemas (note that these are not the Aries message schemas).
The payload schemas are defined using marshmallow and will be validated automatically when the API is executed (using middleware). (This raises a status 422
HTTP response with an error message if the schema validation fails.)
API endpoints are defined using aiohttp_apispec tags (e.g. @doc
, @request_schema
, @response_schema
etc.) which define the input and output parameters of the endpoint. API URL paths are defined in the register()
method and added to the Swagger page in the post_process_routes()
method.
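Putting those pieces together, a new endpoint definition in a `routes.py` might look roughly like the following sketch. All names here are hypothetical; the decorators follow the aiohttp_apispec usage described above (the package exports the decorator as `docs`):

```python
from aiohttp import web
from aiohttp_apispec import docs, request_schema, response_schema
from marshmallow import Schema, fields


class MyOperationRequestSchema(Schema):
    """Hypothetical request payload, validated by middleware."""

    my_value = fields.Str(required=True)


class MyOperationResultSchema(Schema):
    """Hypothetical response payload."""

    result = fields.Str()


@docs(tags=["my-protocol"], summary="Run a hypothetical operation")
@request_schema(MyOperationRequestSchema())
@response_schema(MyOperationResultSchema(), 200)
async def my_operation(request: web.BaseRequest):
    body = await request.json()
    return web.json_response({"result": body["my_value"].upper()})


async def register(app: web.Application):
    """Add the route; ACA-Py calls each protocol module's register() on startup."""
    app.add_routes([web.post("/my-protocol/run", my_operation)])
```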
The APIs should return the following HTTP status:
...and should not return:
ACA-Py was originally developed to be used with Hyperledger AnonCreds objects (Schemas, Credential Definitions and Revocation Registries) published on Hyperledger Indy networks. However, with the evolution of \"ledger-agnostic\" AnonCreds, ACA-Py supports publishing AnonCreds objects wherever you want to put them. If you want to add a new \"AnonCreds Method\" to publish AnonCreds objects to a new Verifiable Data Registry (VDR) (perhaps to your favorite blockchain, or using a web-based DID method), you'll find the details of how to do that here. We often use the term \"ledger\" for the location where AnonCreds objects are published, but here we will use \"VDR\", since a VDR does not have to be a ledger.
The information in this document was discussed on an ACA-Py Maintainers call in March 2024. You can watch the call recording by clicking here.
This is an early version of this document and we assume those reading it are quite familiar with using ACA-Py, have a good understanding of ACA-Py internals, and are Python experts. See the Questions or Comments section below for how to get help as you work through this.
"},{"location":"features/AnonCredsMethods/#create-a-plugin","title":"Create a Plugin","text":"We recommend that if you are adding a new AnonCreds method, you do so by creating an ACA-Py plugin. See the documentation on ACA-Py plugins and use the set of plugins available in the aries-acapy-plugins repository to help you get started. When you finish your AnonCreds method, we recommend that you publish the plugin in the aries-acapy-plugins repository. If you think that the AnonCreds method you create should be part of ACA-Py core, get your plugin complete and raise the question of adding it to ACA-Py. The Maintainers will be happy to discuss the merits of the idea. No promises though.
Your AnonCreds plugin will have an initialization routine that will register your AnonCreds implementation, including the identifiers that your method will be using. Those identifier constructs are what trigger the appropriate AnonCreds Registrar and Resolver to be called for any given AnonCreds object identifier. Check out this example of the registration of the \"legacy\" Indy AnonCreds method for more details.
"},{"location":"features/AnonCredsMethods/#the-implementation","title":"The Implementation","text":"The basic work involved in creating an AnonCreds method is the implementation of both a \"registrar\" to write AnonCreds objects to a VDR, and a \"resolver\" to read AnonCreds objects from a VDR. To do that for your new AnonCreds method, you will need to:
- `BaseAnonCredsResolver` - here
- `BaseAnonCredsRegistrar` - here
branch.
The interface for those methods are very clean, and there are currently two implementations of the methods in the ACA-Py codebase -- the \"legacy\" Indy implementation, and the did:indy Indy implementation. There is also a did:web resolver implementation.
Models for the API are defined here
"},{"location":"features/AnonCredsMethods/#events","title":"Events","text":"When you create your AnonCreds method registrar, make sure that your implementations call appropriate finish_*
event (e.g., AnonCredsIssuer.finish_schema
, AnonCredsIssuer.finish_cred_def
, etc.) in AnonCreds Issuer. The calls are necessary to trigger the automation of AnonCreds event creation that is done by ACA-Py, particularly around the handling of Revocation Registries. As you (should) know, when an Issuer uses ACA-Py to create a Credential Definition that supports revocation, ACA-Py automatically creates and publishes two Revocation Registries related to the Credential Definition, publishes the tails file for each, makes one active, and sets the other to be activated as soon as the active one runs out of credentials. Your AnonCreds method implementation doesn't have to do much to make that happen -- ACA-Py does it automatically -- but your implementation must call the finish_*
to make trigger ACA-Py to continue the automation. You can see in Revocation Setup the automation setup.
The ACA-Py maintainers welcome questions from those new to the community that have the skills to implement a new AnonCreds method. Use the #aca-py
channel on the OpenWallet Foundation Discord Server or open an issue in this repo to get help.
Pull Requests to the ACA-Py repository to improve this content are welcome!
"},{"location":"features/AnoncredsProofValidation/","title":"AnonCreds Proof Validation in ACA-Py","text":"ACA-Py performs pre-validation when verifying AnonCreds presentations (proofs). Some scenarios are rejected (such as those indicative of tampering), while some attributes are removed before running the AnonCreds validation (e.g., removing superfluous non-revocation timestamps). Any ACA-Py validations or presentation modifications are indicated by the \"verify_msgs\" attribute in the final presentation exchange object.
The list of possible verification messages can be found here, and consists of:
class PresVerifyMsg(str, Enum):\n \"\"\"Credential verification codes.\"\"\"\n\n RMV_REFERENT_NON_REVOC_INTERVAL = \"RMV_RFNT_NRI\"\n RMV_GLOBAL_NON_REVOC_INTERVAL = \"RMV_GLB_NRI\"\n TSTMP_OUT_NON_REVOC_INTRVAL = \"TS_OUT_NRI\"\n CT_UNREVEALED_ATTRIBUTES = \"UNRVL_ATTR\"\n PRES_VALUE_ERROR = \"VALUE_ERROR\"\n PRES_VERIFY_ERROR = \"VERIFY_ERROR\"\n
If there is additional information, it will be included like this: TS_OUT_NRI::19_uuid
(which means the attribute identified by 19_uuid
contained a timestamp outside of the non-revocation interval (this is just a warning)).
A presentation verification may include multiple messages, for example:
...\n \"verified\": \"true\",\n \"verified_msgs\": [\n \"TS_OUT_NRI::18_uuid\",\n \"TS_OUT_NRI::18_id_GE_uuid\",\n \"TS_OUT_NRI::18_busid_GE_uuid\"\n ],\n ...\n
... or it may include a single message, for example:
...\n \"verified\": \"false\",\n \"verified_msgs\": [\n \"VALUE_ERROR::Encoded representation mismatch for 'Preferred Name'\"\n ],\n ...\n
... or the verified_msgs
may be null or an empty array.
The following modifications/warnings may be made by ACA-Py, which shouldn't affect the verification of the received proof:
The following pre-verification checks are performed, which will cause the proof to fail (before calling anoncreds) and result in the following message:
VALUE_ERROR::<description of the failed validation>\n
These validations are all performed within the Indy verifier class - to see the detailed validation, look for any occurrences of raise ValueError(...)
in the code.
A summary of the possible errors includes:
Typically, when you call the anoncreds verifier_verify_proof()
method, it will return a True
or False
based on whether the presentation cryptographically verifies. However, in the case where anoncreds throws an exception, the exception text will be included in a verification message as follows:
VERIFY_ERROR::<the exception text>\n
"},{"location":"features/DIDMethods/","title":"DID Methods in ACA-Py","text":"Decentralized Identifiers, or DIDs, are URIs that point to documents that describe cryptographic primitives and protocols used in decentralized identity management. DIDs include methods that describe where and how documents can be retrieved. DID methods support specific types of keys and may or may not require the holder to specify the DID itself.
ACA-Py provides a DIDMethods
registry holding all the DID methods supported for storage in a wallet
Askar and InMemory are the only wallets supporting this registry.
"},{"location":"features/DIDMethods/#registering-a-did-method","title":"Registering a DID method","text":"By default, ACA-Py supports did:key
and did:sov
. Plugins can register DID additional methods to make them available to holders. Here's a snippet adding support for did:web
to the registry from a plugin setup
method.
WEB = DIDMethod(\n name=\"web\",\n key_types=[ED25519, BLS12381G2],\n rotation=True,\n holder_defined_did=HolderDefinedDid.REQUIRED # did:web is not derived from key material but from a user-provided repository name\n)\n\nasync def setup(context: InjectionContext):\n methods = context.inject(DIDMethods)\n methods.register(WEB)\n
"},{"location":"features/DIDMethods/#creating-a-did","title":"Creating a DID","text":"POST /wallet/did/create
can be provided with parameters for any registered DID method. Here's a follow-up to the did:web
method example:
{\n \"method\": \"web\",\n \"options\": {\n \"did\": \"did:web:doma.in\",\n \"key_type\": \"ed25519\"\n }\n}\n
"},{"location":"features/DIDMethods/#resolving-dids","title":"Resolving DIDs","text":"For specifics on how DIDs are resolved in ACA-Py, see: DID Resolution.
"},{"location":"features/DIDResolution/","title":"DID Resolution in ACA-Py","text":"Decentralized Identifiers, or DIDs, are URIs that point to documents that describe cryptographic primitives and protocols used in decentralized identity management. DIDs include methods that describe where and how documents can be retrieved. DID resolution is the process of \"resolving\" a DID Document from a DID as dictated by the DID method.
A DID Resolver is a piece of software that implements the methods for resolving a document from a DID.
For example, given the DID did:example:1234abcd
, a DID Resolver that supports did:example
might return:
{\n \"@context\": \"https://www.w3.org/ns/did/v1\",\n \"id\": \"did:example:1234abcd\",\n \"verificationMethod\": [{\n \"id\": \"did:example:1234abcd#keys-1\",\n \"type\": \"Ed25519VerificationKey2018\",\n \"controller\": \"did:example:1234abcd\",\n \"publicKeyBase58\": \"H3C2AVvLMv6gmMNam3uVAjZpfkcJCwDwnZn6z3wXmqPV\"\n }],\n \"service\": [{\n \"id\": \"did:example:1234abcd#did-communication\",\n \"type\": \"did-communication\",\n \"serviceEndpoint\": \"https://agent.example.com/8377464\"\n }]\n}\n
For more details on DIDs and DID Resolution, see the W3C DID Specification.
In practice, DIDs and DID Documents are used for a variety of purposes but especially to help establish connections between Agents and verify credentials.
"},{"location":"features/DIDResolution/#didresolver","title":"DIDResolver
","text":"In ACA-Py, the DIDResolver
provides the interface to resolve DIDs using registered method resolvers. Method resolver registration happens on startup in a did_resolvers
list. This registry enables additional resolvers to be loaded via plugin.
class ExampleMessageHandler:\n async def handle(context: RequestContext, responder: BaseResponder):\n \"\"\"Handle example message.\"\"\"\n resolver = await context.inject(DIDResolver)\n\n doc: dict = await resolver.resolve(\"did:example:123\")\n assert doc[\"id\"] == \"did:example:123\"\n\n verification_method = await resolver.dereference(\"did:example:123#keys-1\")\n\n # ...\n
"},{"location":"features/DIDResolution/#method-resolver-selection","title":"Method Resolver Selection","text":"On DIDResolver.resolve
or DIDResolver.dereference
, the resolver interface will select the most appropriate method resolver to handle the given DID. In this selection process, method resolvers are distinguished from each other by:
supports
method or a supported_did_regex
method. These methods are used to determine whether the given DID can be handled by the method resolver.The selection algorithm roughly follows the following steps:
resolver.supports(did)
returns false
.Extending ACA-Py with additional Method Resolvers should be relatively simple. Supposing that you want to resolve DIDs for the did:cool
method, this should be as simple as installing a method resolver into your python environment and loading the resolver on startup. If no method resolver exists yet for did:cool
, writing your own should require minimal overhead.
Method resolver plugins are composed of two primary pieces: plugin injection and resolution logic. The resolution logic dictates how a DID becomes a DID Document, following the given DID Method Specification. This logic is implemented using the BaseDIDResolver
class as the base. BaseDIDResolver
is an abstract base class that defines the interface that the core DIDResolver
expects for Method resolvers.
The following is an example method resolver implementation. In this example, we have 2 files, one for each piece (injection and resolution). The __init__.py
will be in charge of injecting the plugin, and example_resolver.py
will have the logic implementation to resolve for a fabricated did:example
method.
__init __.py
","text":"```python= from aries_cloudagent.config.injection_context import InjectionContext from ..resolver.did_resolver import DIDResolver
from .example_resolver import ExampleResolver
async def setup(context: InjectionContext): \"\"\"Setup the plugin.\"\"\" registry = context.inject(DIDResolver) resolver = ExampleResolver() await resolver.setup(context) registry.append(resolver)
#### `example_resolver.py`\n\n```python=\nimport re\nfrom typing import Pattern\nfrom aries_cloudagent.resolver.base import BaseDIDResolver, ResolverType\n\nclass ExampleResolver(BaseDIDResolver):\n \"\"\"ExampleResolver class.\"\"\"\n\n def __init__(self):\n super().__init__(ResolverType.NATIVE)\n # Alternatively, ResolverType.NON_NATIVE\n self._supported_did_regex = re.compile(\"^did:example:.*$\")\n\n @property\n def supported_did_regex(self) -> Pattern:\n \"\"\"Return compiled regex matching supported DIDs.\"\"\"\n return self._supported_did_regex\n\n async def setup(self, context):\n \"\"\"Setup the example resolver (none required).\"\"\"\n\n async def _resolve(self, profile: Profile, did: str) -> dict:\n \"\"\"Resolve example DIDs.\"\"\"\n if did != \"did:example:1234abcd\":\n raise DIDNotFound(\n \"We only actually resolve did:example:1234abcd. Sorry!\"\n )\n\n return {\n \"@context\": \"https://www.w3.org/ns/did/v1\",\n \"id\": \"did:example:1234abcd\",\n \"verificationMethod\": [{\n \"id\": \"did:example:1234abcd#keys-1\",\n \"type\": \"Ed25519VerificationKey2018\",\n \"controller\": \"did:example:1234abcd\",\n \"publicKeyBase58\": \"H3C2AVvLMv6gmMNam3uVAjZpfkcJCwDwnZn6z3wXmqPV\"\n }],\n \"service\": [{\n \"id\": \"did:example:1234abcd#did-communication\",\n \"type\": \"did-communication\",\n \"serviceEndpoint\": \"https://agent.example.com/\"\n }]\n }\n
"},{"location":"features/DIDResolution/#errors","title":"Errors","text":"There are 3 different errors associated with resolution in ACA-Py that could be used for development purposes.
In this section, the Github Resolver Plugin found here will be used as an example plugin to work with. This resolver resolves did:github
DIDs.
The resolution algorithm is simple: for the github DID did:github:dbluhm
, the method specific identifier dbluhm
(a GitHub username) is used to lookup an index.jsonld
file in the ghdid
repository in that GitHub users profile. See GitHub DID Method Specification for more details.
To use this plugin, first install it into your project's python environment:
pip install git+https://github.com/dbluhm/acapy-resolver-github\n
Then, invoke ACA-Py as you normally do with the addition of:
$ aca-py start \\\n --plugin acapy_resolver_github \\\n # ... the remainder of your startup arguments\n
Or add the following to your configuration file:
plugin:\n - acapy_resolver_github\n
The following is a fully functional Dockerfile encapsulating this setup:
```dockerfile= FROM ghcr.io/openwallet-foundation/acapy:py3.9-0.12.1 RUN pip3 install git+https://github.com/dbluhm/acapy-resolver-github
CMD [\"aca-py\", \"start\", \"-it\", \"http\", \"0.0.0.0\", \"3000\", \"-ot\", \"http\", \"-e\", \"http://localhost:3000\", \"--admin\", \"0.0.0.0\", \"3001\", \"--admin-insecure-mode\", \"--no-ledger\", \"--plugin\", \"acapy_resolver_github\"]
To use the above dockerfile:\n\n```shell\ndocker build -t resolver-example .\ndocker run --rm -it -p 3000:3000 -p 3001:3001 resolver-example\n
"},{"location":"features/DIDResolution/#directory-of-resolver-plugins","title":"Directory of Resolver Plugins","text":"https://www.w3.org/TR/did-core/ https://w3c-ccg.github.io/did-resolution/
"},{"location":"features/DevReadMe/","title":"Developer's Read Me for ACA-Py","text":"See the README for details about this repository and information about how the Aries Cloud Agent - Python fits into the Aries project and relates to Indy.
"},{"location":"features/DevReadMe/#table-of-contents","title":"Table of Contents","text":"ACA-Py is a configurable, extensible, non-mobile Aries agent that implements an easy way for developers to build decentralized identity services that use verifiable credentials.
The information on this page assumes you are developer with a background in decentralized identity, Aries, DID Methods, and verifiable credentials, especially AnonCreds. If you aren't familiar with those concepts and projects, please use our Getting Started Guide to learn more.
"},{"location":"features/DevReadMe/#developer-demos","title":"Developer Demos","text":"To put ACA-Py through its paces at the command line, checkout our demos page.
"},{"location":"features/DevReadMe/#running","title":"Running","text":""},{"location":"features/DevReadMe/#configuring-aca-py-environment-variables","title":"Configuring ACA-PY: Environment Variables","text":"All CLI parameters in ACA-PY have equivalent environment variables. To convert a CLI argument to an environment variable:
Basic Conversion: Convert the CLI argument to uppercase and prefix it with ACAPY_
. For example, --admin
becomes ACAPY_ADMIN
.
Multiple Parameters: Arguments that take multiple parameters, such as --admin 0.0.0.0 11000
, should be wrapped in an array. For example, ACAPY_ADMIN=\"[0.0.0.0, 11000]\"
-it <module> <host> <port>
, which can be repeated, must be wrapped inside another array and string escaped. For example, instead of: -it http 0.0.0.0 11000 ws 0.0.0.0 8023
use: ACAPY_INBOUND_TRANSPORT=[[\\\"http\\\",\\\"0.0.0.0\\\",\\\"11000\\\"],[\\\"ws\\\",\\\"0.0.0.0\\\",\\\"8023\\\"]]
For a comprehensive list of all arguments, argument groups, CLI args, and their environment variable equivalents, please see the argparse.py file.
"},{"location":"features/DevReadMe/#configuring-aca-py-command-line-parameters","title":"Configuring ACA-PY: Command Line Parameters","text":"ACA-Py agent instances are configured through the use of command line parameters, environment variables and/or YAML files. All of the configurations settings can be managed using any combination of the three methods (command line parameters override environment variables override YAML). Use the --help
option to discover the available command line parameters. There are a lot of them--for good and bad.
To run a docker container based on the code in the current repo, use the following commands from the root folder of the repository to check the version, list the available modes of operation, and see all of the command line parameters:
scripts/run_docker --version\nscripts/run_docker --help\nscripts/run_docker provision --help\nscripts/run_docker start --help\n
"},{"location":"features/DevReadMe/#locally-installed","title":"Locally Installed","text":"If you installed the PyPi package, the executable aca-py
should be available on your PATH.
Use the following commands from the root folder of the repository to check the version, list the available modes of operation, and see all of the command line parameters:
aca-py --version\naca-py --help\naca-py provision --help\naca-py start --help\n
If you get an error about a missing module indy
(e.g. ModuleNotFoundError: No module named 'indy'
) when running aca-py
, you will need to install the Indy libraries from the command line:
pip install python3_indy\n
Once that completes successfully, you should be able to run aca-py --version
and the other examples above.
ACA-Py invocations are separated into two types - initially provisioning an agent (provision
) and starting a new agent process (start
). This separation enables not having to pass in some encryption-related parameters required for provisioning when starting an agent instance. This improves security in production deployments.
When starting an agent instance, at least one inbound and one outbound transport MUST be specified.
For example:
aca-py start --inbound-transport http 0.0.0.0 8000 \\\n --outbound-transport http\n
or
aca-py start --inbound-transport http 0.0.0.0 8000 \\\n --inbound-transport ws 0.0.0.0 8001 \\\n --outbound-transport ws \\\n --outbound-transport http\n
ACA-Py ships with both inbound and outbound transport drivers for http
and ws
(websockets). Additional transport drivers can be added as pluggable implementations. See the existing implementations in the transports module for getting started on adding a new transport.
Most configuration parameters are provided to the agent at startup. Refer to the Running
sections above for details on listing the available command line parameters.
It is possible to provision a secure storage (sometimes called a wallet--but not the same as a mobile wallet app) before running an agent to avoid passing in the secure storage seed on every invocation of an agent (e.g. on every aca-py start ...
).
aca-py provision --wallet-type askar --seed $SEED\n
For additional provision
options, execute aca-py provision --help
.
Additional information about secure storage options and configuration settings can be found here.
"},{"location":"features/DevReadMe/#mediation","title":"Mediation","text":"ACA-Py can also run in mediator mode - ACA-Py can be run as a mediator (it can mediate connections for other agents), or it can connect to an external mediator to mediate its own connections. See the docs on mediation for more info.
"},{"location":"features/DevReadMe/#multi-tenancy","title":"Multi-tenancy","text":"ACA-Py can also be started in multi-tenant mode. This allows the agent to serve multiple tenants, that each have their own wallet. See the docs on multi-tenancy for more info.
"},{"location":"features/DevReadMe/#json-ld-credentials","title":"JSON-LD Credentials","text":"ACA-Py can issue W3C Verifiable Credentials using Linked Data Proofs. See the docs on JSON-LD Credentials for more info.
"},{"location":"features/DevReadMe/#developing","title":"Developing","text":""},{"location":"features/DevReadMe/#prerequisites","title":"Prerequisites","text":"Docker must be installed to run software locally and to run the test suite.
"},{"location":"features/DevReadMe/#running-in-a-dev-container","title":"Running In A Dev Container","text":"The dev container environment is a great way to deploy agents quickly with code changes and an interactive debug session. Detailed information can be found in the Docs On Devcontainers. It is specific for vscode, so if you prefer another code editor or IDE you will need to figure it out on your own, but it is highly recommended to give this a try.
One thing to be aware of is, unlike the demo, none of the steps are automated. You will need to create public dids, connections and all the other steps yourself. Using the demo and studying the flow and then copying them with your dev container debug session is a great way to learn how everything works.
"},{"location":"features/DevReadMe/#running-locally","title":"Running Locally","text":"Another way to develop locally is by using the provided Docker scripts to run the ACA-Py software.
./scripts/run_docker start <args>\n
For example:
./scripts/run_docker start --inbound-transport http 0.0.0.0 10000 --outbound-transport http --debug --log-level DEBUG\n
To enable the Debug Adapter Protocol using the debugpy implementation for Python 3 Python debugger for Visual Studio/VSCode use the --debug
command line parameter.
When debugging an agent running within a docker container, you will need to set the DAP_HOST environment variable (defaults to localhost
) to 0.0.0.0
to allow forwarding from within your docker container.
Note that you may still find references to PTVSD, the deprecated implementation of DAP. PTVSD_HOST and PTVSD_PORT are interchangeable with DAP_HOST and DAP_PORT.
Example:
ENV_VARS=\"DAP_HOST=0.0.0.0\" scripts/run_docker provision --log-level debug --wallet-type askar --wallet-name $(whoami) --wallet-key mysecretkey --endpoint http://localhost:8080 --no-ledger --debug\n
Any ports you will be using from the docker container should be published using the PORTS
environment variable. For example:
PORTS=\"5000:5000 8000:8000 10000:10000\" ./scripts/run_docker start --inbound-transport http 0.0.0.0 10000 --outbound-transport http --debug --log-level DEBUG\n
Refer to the previous section for instructions on how to run ACA-Py.
"},{"location":"features/DevReadMe/#logging","title":"Logging","text":"You can find more details about logging and log levels here.
"},{"location":"features/DevReadMe/#running-tests","title":"Running Tests","text":"To run the ACA-Py test suite, use the following script:
./scripts/run_tests\n
To run the ACA-Py test suite with ptvsd debugger enabled:
./scripts/run_tests --debug\n
To run specific tests pass parameters as defined by pytest:
./scripts/run_tests aries_cloudagent/protocols/connections\n
"},{"location":"features/DevReadMe/#running-aries-agent-test-harness-tests","title":"Running Aries Agent Test Harness Tests","text":"You can run a full suite of integration tests using the Aries Agent Test Harness (AATH).
Check out and run AATH tests as follows (this tests the aca-py main
branch):
git clone https://github.com/hyperledger/aries-agent-test-harness.git\ncd aries-agent-test-harness\n./manage build -a acapy-main\n./manage run -d acapy-main -t @AcceptanceTest -t ~@wip\n
The manage
script is described in detail here, including how to modify the AATH code to run the tests against your aca-py repo/branch.
We use Ruff to enforce a coding style guide.
Please write tests for the work that you submit.
Tests should reside in a directory named tests
alongside the code under test. Generally, there is one test file for each file module under test. Test files must have a name starting with test_
to be automatically picked up the test runner.
There are some good examples of various test scenarios for you to work from including mocking external imports and working with async code so take a look around!
The test suite also displays the current code coverage after each run so you can see how much of your work is covered by tests. Use your best judgement for how much coverage is sufficient.
Please also refer to the contributing guidelines and code of conduct.
"},{"location":"features/DevReadMe/#publishing-releases","title":"Publishing Releases","text":"The publishing document provides information on tagging a release and publishing the release artifacts to PyPi.
"},{"location":"features/DevReadMe/#dynamic-injection-of-services","title":"Dynamic Injection of Services","text":"The Agent employs a dynamic injection system whereby providers of base classes are registered with the RequestContext
instance, currently within conductor.py
. Message handlers and services request an instance of the selected implementation using context.inject(BaseClass)
; for instance the wallet instance may be injected using wallet = context.inject(BaseWallet)
. The inject
method normally throws an exception if no implementation of the base class is provided, but can be called with required=False
for optional dependencies (in which case a value of None
may be returned).
Providers are registered with either context.injector.bind_instance(BaseClass, instance)
for previously-constructed (singleton) object instances, or context.injector.bind_provider(BaseClass, provider)
for dynamic providers. In some cases it may be desirable to write a custom provider which switches implementations based on configuration settings, such as the wallet provider.
The BaseProvider
classes in the config.provider
module include ClassProvider
, which can perform dynamic module inclusion when given the combined module and class name as a string (for instance aries_cloudagent.wallet.indy.IndyWallet
). ClassProvider
accepts additional positional and keyword arguments to be passed into the class constructor. Any of these arguments may be an instance of ClassProvider.Inject(BaseClass)
, allowing dynamic injection of dependencies when the class instance is instantiated.
ACA-Py supports an Endorser Protocol, that allows an un-privileged agent (an \"Author\") to request another agent (the \"Endorser\") to sign their transactions so they can write these transactions to the ledger. This is required on Indy ledgers, where new agents will typically be granted only \"Author\" privileges.
Transaction Endorsement is built into the protocols for Schema, Credential Definition and Revocation, and endorsements can be explicitly requested, or ACA-Py can be configured to automate the endorsement workflow.
"},{"location":"features/Endorser/#setting-up-connections-between-authors-and-endorsers","title":"Setting up Connections between Authors and Endorsers","text":"Since endorsement involves message exchange between two agents, these agents must establish and configure a connection before any endorsements can be provided or requested.
Once the connection is established and active
, the \"role\" (either Author or Endorser) is attached to the connection using the /transactions/{conn_id}/set-endorser-role
endpoint. For Authors, they must additionally configure the DID of the Endorser as this is required when the Author signs the transaction (prior to sending to the Endorser for endorsement) - this is done using the /transactions/{conn_id}/set-endorser-info
endpoint.
Transaction Endorsement is built into the protocols for Schema, Credential Definition and Revocation. When executing one of the endpoints that will trigger a ledger write, an endorsement protocol can be explicitly requested by specifying the connection_id
(of the Endorser connection) and create_transaction_for_endorser
.
(Note that endorsement requests can be automated, see the section on \"Configuring ACA-Py\" below.)
If transaction endorsement is requested, then ACA-Py will create a transaction record (this will be returned by the endpoint, rather than the Schema, Cred Def, etc) and the following endpoints must be invoked:
Protocol Step Author Endorser Request Endorsement/transactions/create-request
Endorse Transaction /transactions/{tran_id}/endorse
Write Transaction /transactions/{tran_id}/write
Additional endpoints allow the Endorser to reject the endorsement request, or for the Author to re-submit or cancel a request.
Web hooks will be triggered to notify each ACA-Py agent of any transaction request, endorsements, etc to allow the controller to react to the event, or the process can be automated via command-line parameters (see below).
"},{"location":"features/Endorser/#configuring-aca-py-for-auto-or-manual-endorsement","title":"Configuring ACA-Py for Auto or Manual Endorsement","text":"The following start-up parameters are supported by ACA-Py:
Endorsement:\n --endorser-protocol-role <endorser-role>\n Specify the role ('author' or 'endorser') which this agent will participate. Authors will request transaction endorsement from an Endorser. Endorsers will endorse transactions from\n Authors, and may write their own transactions to the ledger. If no role (or 'none') is specified then the endorsement protocol will not be used and this agent will write transactions to\n the ledger directly. [env var: ACAPY_ENDORSER_ROLE]\n --endorser-public-did <endorser-public-did>\n For transaction Authors, specify the public DID of the Endorser agent who will be endorsing transactions. Note this requires that the connection be made using the Endorser's public\n DID. [env var: ACAPY_ENDORSER_PUBLIC_DID]\n --endorser-alias <endorser-alias>\n For transaction Authors, specify the alias of the Endorser connection that will be used to endorse transactions. [env var: ACAPY_ENDORSER_ALIAS]\n --auto-request-endorsement\n For Authors, specify whether to automatically request endorsement for all transactions. (If not specified, the controller must invoke the request endorse operation for each\n transaction.) [env var: ACAPY_AUTO_REQUEST_ENDORSEMENT]\n --auto-endorse-transactions\n For Endorsers, specify whether to automatically endorse any received endorsement requests. (If not specified, the controller must invoke the endorsement operation for each transaction.)\n [env var: ACAPY_AUTO_ENDORSE_TRANSACTIONS]\n --auto-write-transactions\n For Authors, specify whether to automatically write any endorsed transactions. (If not specified, the controller must invoke the write transaction operation for each transaction.) [env\n var: ACAPY_AUTO_WRITE_TRANSACTIONS]\n --auto-create-revocation-transactions\n For Authors, specify whether to automatically create transactions for a cred def's revocation registry. (If not specified, the controller must invoke the endpoints required to create\n the revocation registry and assign to the cred def.) [env var: ACAPY_CREATE_REVOCATION_TRANSACTIONS]\n --auto-promote-author-did\n For Authors, specify whether to automatically promote a DID to the wallet public DID after writing to the ledger. [env var: ACAPY_AUTO_PROMOTE_AUTHOR_DID]\n
"},{"location":"features/Endorser/#how-aca-py-handles-endorsements","title":"How Aca-py Handles Endorsements","text":"Internally, the Endorsement functionality is implemented as a protocol, and is implemented consistently with other protocols:
The Endorser makes use of the Event Bus (links to the PR which links to a hackmd doc) to notify other protocols of any Endorser events of interest. For example, after a Credential Definition endorsement is received, the TransactionManager writes the endorsed transaction to the ledger and uses the Event Bus to notify the Credential Definition manager that it can do any required post-processing (such as writing the cred def record to the wallet, initiating the revocation registry, etc.).
The overall architecture can be illustrated as:
"},{"location":"features/Endorser/#create-credential-definition-and-revocation-registry","title":"Create Credential Definition and Revocation Registry","text":"An example of an Endorser flow is as follows, showing how a credential definition endorsement is received and processed, and optionally kicks off the revocation registry process:
You can see that there is a standard endorser flow happening each time there is a ledger write (illustrated in the \"Endorser\" process).
At the end of each endorse sequence, the TransactionManager sends a notification via the EventBus so that any dependant processing can continue. Each Router is responsible for listening and responding to these notifications if necessary.
For example:
Using the EventBus decouples the event sequence. Any functions triggered by an event notification are typically also available directly via Admin endpoints.
"},{"location":"features/Endorser/#create-did-and-promote-to-public","title":"Create DID and Promote to Public","text":"... and an example of creating a DID and promoting it to public (and creating an ATTRIB for the endpoint:
You can see the same endorsement processes in this sequence.
Once the DID is written, the DID can (optionally) be promoted to the public DID, which will also invoke an ATTRIB transaction to write the endpoint.
"},{"location":"features/JsonLdCredentials/","title":"JSON-LD Credentials in ACA-Py","text":"By design ACA-Py is credential format agnostic. This means you can use it for any credential format, as long as an RFC is defined for the specific credential format. ACA-Py currently supports two types of credentials, AnonCreds and JSON-LD credentials. This document describes how to use the latter by making use of W3C Verifiable Credentials using Linked Data Proofs.
"},{"location":"features/JsonLdCredentials/#table-of-contents","title":"Table of Contents","text":"did:sov
did:key
The rest of this guide assumes some basic understanding of W3C Verifiable Credentials, JSON-LD and Linked Data Proofs. If you're not familiar with some of these concepts, the following resources can help you get started:
BBS+ credentials offer a lot of privacy preserving features over non-ZKP credentials. Therefore we recommend to always use BBS+ credentials over non-ZKP credentials. To get started with BBS+ credentials it is recommended to at least read RFC 0646: W3C Credential Exchange using BBS+ Signatures for a general overview.
Some other resources that can help you get started with BBS+ credentials:
Contrary to Indy credentials, JSON-LD credentials do not need a schema or credential definition to issue credentials. Everything required to issue the credential is embedded into the credential itself using Linked Data Contexts.
"},{"location":"features/JsonLdCredentials/#json-ld-context","title":"JSON-LD Context","text":"It is required that every property key in the document can be mapped to an IRI. This means the property key must either be an IRI by default, or have the shorthand property mapped in the @context
of the document. If you have properties that are not mapped to IRIs, the Issue Credential API will throw the following error:
<x> attributes dropped. Provide definitions in context to correct. [<missing-properties>]
For credentials the https://www.w3.org/2018/credentials/v1
context MUST always be the first context. In addition, when issuing BBS+ credentials the https://w3id.org/security/bbs/v1
URL MUST be present in the context. For convenience this URL will be automatically added to the @context
of the credential if not present.
{\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://other-contexts.com\"\n ]\n}\n
"},{"location":"features/JsonLdCredentials/#writing-json-ld-contexts","title":"Writing JSON-LD Contexts","text":"Writing JSON-LD contexts can be a daunting task and is out of scope of this guide. Generally you should try to make use of already existing vocabularies. Some examples are the vocabularies defined in the W3C Credentials Community Group:
Verifiable credentials are not around that long, so there aren't that many vocabularies ready to use. If you can't use one of the existing vocabularies it is still beneficial to lean on already defined lower level contexts. http://schema.org has a large registry of definitions that can be used to build new contexts. The example vocabularies linked above all make use of types from http://schema.org.
For the remainder of this guide, we will be using the example UniversityDegreeCredential
type and https://www.w3.org/2018/credentials/examples/v1
context from the Verifiable Credential Data Model. You should not use this for production use cases.
Before issuing a credential you must determine a signature suite to use. ACA-Py currently supports three signature suites for issuing credentials:
Ed25519Signature2018
- Very well supported. No zero knowledge proofs or selective disclosure.Ed25519Signature2020
- Updated version of 2018 suite.BbsBlsSignature2020
- Newer, but supports zero knowledge proofs and selective disclosure.Generally you should always use BbsBlsSignature2020
as it allows the holder to derive a new credential during the proving, meaning it doesn't have to disclose all fields and doesn't have to reveal the signature.
Besides the JSON-LD context, we need a DID to use for issuing the credential. ACA-Py currently supports two did methods for issuing credentials:
did:sov
- Can only be used for Ed25519Signature2018
signature suite.did:key
- Can be used for both Ed25519Signature2018
and BbsBlsSignature2020
signature suites.did:sov
","text":"When using did:sov
you need to make sure to use a public did so other agents can resolve the did. It is also important the other agent is using the same indy ledger for resolving the did. You can get the public did using the /wallet/did/public
endpoint. For backwards compatibility the did is returned without did:sov
prefix. When using the did for issuance make sure this prepend this to the did. (so DViYrCMPWfuLiY7LLs8giB
becomes did:sov:DViYrCMPWfuLiY7LLs8giB
)
did:key
","text":"A did:key
did is not anchored to a ledger, but embeds the key directly in the identifier part of the did. See the did:key Method Specification for more information.
You can create a did:key
using the /wallet/did/create
endpoint with the following body. Use ed25519
for Ed25519Signature2018
, bls12381g2
for BbsBlsSignature2020
.
{\n \"method\": \"key\",\n \"options\": {\n \"key_type\": \"bls12381g2\" // or ed25519\n }\n}\n
The above call will return a did that looks something like this: did:key:zUC7FsmhhifDTuYXdwYES2UpCpWwYieJRapC6oEWqyt5KfJ3ztfLzYnbWjuXQ5drYaKaho3FjxrfDB81gtAJKjbM4yAmBuNoj3YKDXqW151KkkYarpEoEVWMMcN5zPfjCrQ8Saj
Issuing JSON-LD credentials is only possible with the issue credential v2 protocol (/issue-credential-2.0
)
The format used for exchanging JSON-LD credentials is defined in RFC 0593: JSON-LD Credential Attachment format. The API in ACA-Py exactly matches the formats as described in this RFC, with the most important (from the ACA-Py API perspective) being aries/ld-proof-vc-detail@v1.0
. Read the RFC to see the exact properties required to construct a valid Linked Data Proof VC Detail.
All endpoints in API use the aries/ld-proof-vc-detail@v1.0
. We'll use the /issue-credential-2.0/send
as an example, but it works the same for the other endpoints. In contrary to issuing indy credentials, JSON-LD credentials do not require a credential preview. All properties should be directly embedded in the credentials.
The detail should be included under the filter.ld_proof
property. To issue a credential call the /issue-credential-2.0/send
endpoint, with the example body below and the connection_id
and issuer
keys replaced. The value of issuer
should be the did that you created in the Did Method paragraph above.
If you don't have auto-respond-credential-offer
and auto-store-credential
enabled in the ACA-Py config, you will need to call /issue-credential-2.0/records/{cred_ex_id}/send-request
and /issue-credential-2.0/records/{cred_ex_id}/store
to finalize the credential issuance.
{\n \"connection_id\": \"ddc23de9-359f-465c-b66e-f7c5a0cc9a57\",\n \"filter\": {\n \"ld_proof\": {\n \"credential\": {\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://www.w3.org/2018/credentials/examples/v1\"\n ],\n \"type\": [\"VerifiableCredential\", \"UniversityDegreeCredential\"],\n \"issuer\": \"did:key:zUC7FsmhhifDTuYXdwYES2UpCpWwYieJRapC6oEWqyt5KfJ3ztfLzYnbWjuXQ5drYaKaho3FjxrfDB81gtAJKjbM4yAmBuNoj3YKDXqW151KkkYarpEoEVWMMcN5zPfjCrQ8Saj\",\n \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n \"credentialSubject\": {\n \"degree\": {\n \"type\": \"BachelorDegree\",\n \"name\": \"Bachelor of Science and Arts\"\n },\n \"college\": \"Faber College\"\n }\n },\n \"options\": {\n \"proofType\": \"BbsBlsSignature2020\"\n }\n }\n }\n}\n
"},{"location":"features/JsonLdCredentials/#retrieving-issued-credentials","title":"Retrieving Issued Credentials","text":"After issuing the credential, the credentials should be stored inside the wallet. Because the structure of JSON-LD credentials is so different from indy credentials a new endpoint is added to retrieve W3C credentials.
Call the /credentials/w3c
endpoint to retrieve all JSON-LD credentials in your wallet. See the detail below for an example response based on the issued credential from the Issuing Credentials paragraph above.
{\n \"results\": [\n {\n \"contexts\": [\n \"https://www.w3.org/2018/credentials/examples/v1\",\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://w3id.org/security/bbs/v1\"\n ],\n \"types\": [\"UniversityDegreeCredential\", \"VerifiableCredential\"],\n \"schema_ids\": [],\n \"issuer_id\": \"did:key:zUC7FsmhhifDTuYXdwYES2UpCpWwYieJRapC6oEWqyt5KfJ3ztfLzYnbWjuXQ5drYaKaho3FjxrfDB81gtAJKjbM4yAmBuNoj3YKDXqW151KkkYarpEoEVWMMcN5zPfjCrQ8Saj\",\n \"subject_ids\": [],\n \"proof_types\": [\"BbsBlsSignature2020\"],\n \"cred_value\": {\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://www.w3.org/2018/credentials/examples/v1\",\n \"https://w3id.org/security/bbs/v1\"\n ],\n \"type\": [\"VerifiableCredential\", \"UniversityDegreeCredential\"],\n \"issuer\": \"did:key:zUC7FsmhhifDTuYXdwYES2UpCpWwYieJRapC6oEWqyt5KfJ3ztfLzYnbWjuXQ5drYaKaho3FjxrfDB81gtAJKjbM4yAmBuNoj3YKDXqW151KkkYarpEoEVWMMcN5zPfjCrQ8Saj\",\n \"issuanceDate\": \"2020-01-01T12:00:00Z\",\n \"credentialSubject\": {\n \"degree\": {\n \"type\": \"BachelorDegree\",\n \"name\": \"Bachelor of Science and Arts\"\n },\n \"college\": \"Faber College\"\n },\n \"proof\": {\n \"type\": \"BbsBlsSignature2020\",\n \"proofPurpose\": \"assertionMethod\",\n \"verificationMethod\": \"did:key:zUC7FsmhhifDTuYXdwYES2UpCpWwYieJRapC6oEWqyt5KfJ3ztfLzYnbWjuXQ5drYaKaho3FjxrfDB81gtAJKjbM4yAmBuNoj3YKDXqW151KkkYarpEoEVWMMcN5zPfjCrQ8Saj#zUC7FsmhhifDTuYXdwYES2UpCpWwYieJRapC6oEWqyt5KfJ3ztfLzYnbWjuXQ5drYaKaho3FjxrfDB81gtAJKjbM4yAmBuNoj3YKDXqW151KkkYarpEoEVWMMcN5zPfjCrQ8Saj\",\n \"created\": \"2021-05-03T12:31:28.561945\",\n \"proofValue\": \"iUFtRGdLLCWxKx8VD3oiFBoRMUFKhSitTzMsfImXm6OF0d8il+Z40aLz8S7m8EcXPQhRjcWWL9jkfcf1SDifD4CvxVg69NvB7hZyIIz9hwAyi3LmTm0ez4NDRCKyieBuzqKbfM2eACWn/ilhOJBm6w==\"\n }\n },\n \"cred_tags\": {},\n \"record_id\": \"541ddbce5760497d98e68917be8c05bd\"\n }\n ]\n}\n
"},{"location":"features/JsonLdCredentials/#present-proof","title":"Present Proof","text":"\u26a0\ufe0f TODO: https://github.com/openwallet-foundation/acapy/pull/1125
"},{"location":"features/JsonLdCredentials/#vc-api","title":"VC-API","text":"In order to support these functions outside of the respective DIDComm protocols, a set of endpoints conforming to the vc-api specification are available. These endpoints should be used by a controller when building an identity platform.
These endpoints include:
- `GET /vc/credentials` -> returns a list of all stored JSON-LD credentials
- `GET /vc/credentials/{id}` -> returns a JSON-LD credential based on its ID
- `POST /vc/credentials/issue` -> signs a credential
- `POST /vc/credentials/verify` -> verifies a credential
- `POST /vc/credentials/store` -> stores an issued credential
- `POST /vc/presentations/prove` -> proves a presentation
- `POST /vc/presentations/verify` -> verifies a presentation

To learn more about using these endpoints, please refer to the available postman collection. A minimal sketch of calling the issue endpoint follows.
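A minimal sketch (Python + requests) of driving the `POST /vc/credentials/issue` endpoint; the admin URL and the issuer DID are assumptions, and the body follows the vc-api request shape:

```python
import requests

ADMIN_URL = "http://localhost:8021"  # assumption: your agent's admin API

credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential"],
    "issuer": "did:key:z6Mk...",  # assumption: a DID your agent controls
    "issuanceDate": "2020-01-01T12:00:00Z",
    "credentialSubject": {"id": "did:example:subject"},
}

# Sign the credential; options may need a proofType/verificationMethod
# depending on your configuration (assumption).
resp = requests.post(
    f"{ADMIN_URL}/vc/credentials/issue",
    json={"credential": credential, "options": {}},
)
resp.raise_for_status()
print(resp.json())  # the signed verifiable credential
```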
"},{"location":"features/JsonLdCredentials/#external-suite-provider","title":"External Suite Provider","text":"It is possible to extend the signature suite support, including outsourcing signing JSON-LD Credentials to some other component (KMS, HSM, etc.), using the ExternalSuiteProvider
interface. This interface can be implemented and registered via plugin. The plugged in provider will be used by ACA-Py's LDP-VC subsystem to create a LinkedDataProof
object, which is responsible for signing normalized credential values.
This interface enables taking advantage of ACA-Py's JSON-LD processing to construct and format the credential while exposing a simple interface to a plugin to make it responsible for signatures. This can also be combined with plugged in DID Methods, VerificationKeyStrategy
, and other pluggable components.
See this example project here for more details on the interface and its usage: https://github.com/dbluhm/acapy-ld-signer
"},{"location":"features/Mediation/","title":"Mediation docs","text":""},{"location":"features/Mediation/#concepts","title":"Concepts","text":"--open-mediation
- Instructs mediators to automatically grant all incoming mediation requests.--mediator-invitation
- Receive invitation, send mediation request and set as default mediator.--mediator-connections-invite
- Connect to mediator through a connection invitation. If not specified, connect using an OOB invitation.--default-mediator-id
- Set pre-existing mediator as default mediator.--clear-default-mediator
- Clear the stored default mediator.The minimum set of arguments required to enable mediation are:
aca-py start ... \\\n --open-mediation\n
To automate the mediation process on startup, additionally specify the following argument on the mediated agent (not the mediator):
aca-py start ... \\\n --mediator-invitation \"<a multi-use invitation url from the mediator>\"\n
If a default mediator has already been established, then the --default-mediator-id
argument can be used instead of the --mediator-invitation
.
See Aries RFC 0211: Coordinate Mediation Protocol.
"},{"location":"features/Mediation/#admin-api","title":"Admin API","text":"GET mediation/requests
conn_id
, state
, mediator_terms
and recipient_terms
.GET mediation/requests/{mediation_id}
DELETE mediation/requests/{mediation_id}
POST mediation/requests/{mediation_id}/grant
granted
message to client.POST mediation/requests/{mediation_id}/deny
denied
message to client.POST mediation/request/{conn_id}
GET mediation/keylists
client
for keys mediated by other agents and server
for keys mediated by this agent.POST mediation/keylists/{mediation_id}/send-keylist-update
POST mediation/keylists/{mediation_id}/send-keylist-query
GET mediation/default-mediator
(PR pending)PUT mediation/{mediation_id}/default-mediator
(PR pending)DELETE mediation/default-mediator
(PR pending)After establishing a connection with a mediator also having mediation granted, you can use that mediator id for future did_comm connections. When creating, receiving or accepting an invitation intended to be Mediated, you provide mediation_id
with the desired mediator id. if using a single mediator for all future connections, You can set a default mediation id. If no mediation_id is provided the default mediation id will be used instead.
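A minimal sketch (Python + requests; the admin URL and `conn_id` are assumptions) of requesting mediation over an established connection and then making the granted mediator the default:

```python
import requests

ADMIN_URL = "http://localhost:8021"  # assumption
conn_id = "..."  # an active connection to the mediator

# Ask the mediator for mediation on this connection
record = requests.post(f"{ADMIN_URL}/mediation/request/{conn_id}", json={}).json()
mediation_id = record["mediation_id"]

# Once the request is granted, set it as the default mediator
# (note: the default-mediator endpoints are marked "PR pending" above)
requests.put(f"{ADMIN_URL}/mediation/{mediation_id}/default-mediator").raise_for_status()
```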
Multiple AnonCreds credentials can be combined into a single presentation proof with an "and" logical operator: for instance, a verifier can ask for the "name" claim from an eID and the "address" claim from a bank statement in a single proof that is either valid or invalid as a whole. With the Present Proof Protocol v2, "and" and "or" logical operators are available across AnonCreds and/or W3C Verifiable Credentials.
With the Present Proof Protocol v2, verifiers can ask for a combination of credential formats in one proof request. For instance, a verifier can ask for a claim from an AnonCreds credential and a verifiable presentation from a W3C Verifiable Credential, opening the door to complex presentation proof requests that would not be possible without support for both AnonCreds and W3C Verifiable Credentials.
Moreover, similar presentation proof requests can be made using the "or" logical operator. For instance, a verifier can ask for either an eID in AnonCreds format or an eID in W3C Verifiable Credential format. This has the potential to ease the interoperability problem of different credential formats and ecosystems from a user's point of view, by shifting the burden of handling different credential formats from identity holders to verifiers. Here again, ACA-Py can serve as the underlying verifier agent for such complex presentation proof requests, since it is capable of verifying both credential formats and proof types.
In the future, it may even be possible to include mDoc as an attachment with an "and" or "or" logical operation, alongside AnonCreds and/or W3C Verifiable Credentials. For this to happen, ACA-Py either needs the capability to validate mDocs internally or to connect to third-party endpoints for validation.
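As a hedged illustration of the combined request shape (the admin URL, `connection_id`, and restriction values are assumptions, not a tested request):

```python
import requests

ADMIN_URL = "http://localhost:8021"  # assumption

proof_request = {
    "connection_id": "...",
    "presentation_request": {
        # AnonCreds attachment
        "indy": {
            "name": "proof-request",
            "version": "1.0",
            "requested_attributes": {
                "name_attr": {"name": "name", "restrictions": [{"schema_name": "eID"}]}
            },
            "requested_predicates": {},
        },
        # W3C/DIF attachment
        "dif": {
            "presentation_definition": {
                "id": "32f54163-7166-48f1-93d8-ff217bdb0653",
                "input_descriptors": [],  # descriptors for the W3C credential
            }
        },
    },
}
requests.post(
    f"{ADMIN_URL}/present-proof-2.0/send-request", json=proof_request
).raise_for_status()
```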
"},{"location":"features/Multiledger/","title":"Multi-ledger in ACA-Py","text":"Ability to use multiple Indy ledgers (both IndySdk and IndyVdr) for resolving a DID
by the ACA-Py agent. For read requests, checking of multiple ledgers in parallel is done dynamically according to logic detailed in Read Requests Ledger Selection. For write requests, dynamic allocation of write_ledger
is supported. Configurable write ledgers can be assigned using is_write
in the configuration or using any of the --genesis-url
, --genesis-file
, and --genesis-transactions
startup (ACA-Py) arguments. If no write ledger is assigned then a ConfigError
is raised.
More background information, including the problem statement and design (algorithm), can be found here.
"},{"location":"features/Multiledger/#table-of-contents","title":"Table of Contents","text":"Multi-ledger is disabled by default. You can enable support for multiple ledgers using the --genesis-transactions-list
startup parameter. This parameter accepts a string which is the path to the YAML
configuration file. For example:
--genesis-transactions-list ./acapy_agent/config/multi_ledger_config.yml
If --genesis-transactions-list
is specified, then --genesis-url, --genesis-file, --genesis-transactions
should not be specified.
- id: localVON\n is_production: false\n genesis_url: \"http://host.docker.internal:9000/genesis\"\n- id: bcovrinTest\n is_production: true\n is_write: true\n genesis_url: \"http://test.bcovrin.vonx.io/genesis\"\n
- id: localVON\n is_production: false\n genesis_url: \"http://host.docker.internal:9000/genesis\"\n- id: bcovrinTest\n is_production: true\n is_write: true\n genesis_url: \"http://test.bcovrin.vonx.io/genesis\"\n endorser_did: \"9QPa6tHvBHttLg6U4xvviv\"\n endorser_alias: \"endorser_test\"\n- id: greenlightDev\n is_production: true\n is_write: true\n genesis_url: \"http://test.bcovrin.vonx.io/genesis\"\n
Note: The `is_write` property means that the ledger can be configured as a write ledger. With reference to the above config example, both the `bcovrinTest` and `greenlightDev` (the latter no longer available; in the above it points to BCovrin Test as well) ledgers are write configurable. By default, on startup `bcovrinTest` will be the write ledger, as it is the topmost write configurable production ledger (more details regarding the selection rule below). Using the `PUT /ledger/{ledger_id}/set-write-ledger` endpoint, either `greenlightDev` or `bcovrinTest` can be set as the write ledger.
Note 2: The greenlightDev
ledger is no longer available, so both ledger entries in the example above and below intentionally point to the same ledger URL.
- id: localVON\n is_production: false\n is_write: true\n genesis_url: \"http://host.docker.internal:9000/genesis\"\n- id: bcovrinTest\n is_production: true\n genesis_url: \"http://test.bcovrin.vonx.io/genesis\"\n- id: greenlightDev\n is_production: true\n genesis_url: \"http://test.bcovrin.vonx.io/genesis\"\n
Note: With regard to the example config above, `localVON` will be the write ledger; since there are no write configurable production ledgers, the topmost write configurable non-production ledger is chosen.
For each ledger, the required properties are as follows:

- `id`*: The id (or name) of the ledger; also used as the pool name if none is provided
- `is_production`*: Whether the ledger is a production ledger. This is used by the pool selector algorithm to know which ledger to use for certain interactions (i.e. prefer production ledgers over non-production ledgers)

For connecting to a ledger, one of the following needs to be specified:

- `genesis_file`: The path to the genesis file to use for connecting to an Indy ledger.
- `genesis_transactions`: String of genesis transactions to use for connecting to an Indy ledger.
- `genesis_url`: The url from which to download the genesis transactions to use for connecting to an Indy ledger.
- `is_write`: Whether this ledger is writable. At least one write ledger must be specified, unless running in read-only mode. Multiple write ledgers can be specified in config.

Optional properties:
- `pool_name`: Name of the indy pool to be opened
- `keepalive`: How many seconds to keep the ledger open
- `socks_proxy`
- `endorser_did`: Endorser public DID registered on the ledger, needed for supporting the Endorser protocol at the multi-ledger level.
- `endorser_alias`: Endorser alias for this ledger, needed for supporting the Endorser protocol at the multi-ledger level.

Note: Both `endorser_did` and `endorser_alias` are part of the endorser info. Whenever a write ledger is selected using `PUT /ledger/{ledger_id}/set-write-ledger`, the endorser info associated with that ledger in the config updates the `endorser.endorser_public_did` and `endorser.endorser_alias` profile settings respectively.
Multi-ledger related actions are grouped under the `ledger` topic in the SwaggerUI.

- `/ledger/config`: Returns the multiple ledger configuration currently in use
- `/ledger/get-write-ledger`: Returns the `ledger_id` of the currently active write ledger
- `/ledger/get-write-ledgers`: Returns the `ledger_id`s of the available write ledgers
- `/ledger/{ledger_id}/set-write-ledger`: Sets the active write ledger by `ledger_id`

A short sketch of using these endpoints follows.
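A small sketch (Python + requests; the admin URL is an assumption) of inspecting and switching the write ledger through these endpoints:

```python
import requests

ADMIN_URL = "http://localhost:8021"  # assumption

print(requests.get(f"{ADMIN_URL}/ledger/config").json())            # multi-ledger config
print(requests.get(f"{ADMIN_URL}/ledger/get-write-ledger").json())  # current write ledger

# Switch the write ledger to the one configured with id "bcovrinTest"
requests.put(f"{ADMIN_URL}/ledger/bcovrinTest/set-write-ledger").raise_for_status()
```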
The following process is executed for these functions in ACA-Py:

- `get_schema`
- `get_credential_definition`
- `get_revoc_reg_def`
- `get_revoc_reg_entry`
- `get_key_for_did`
- `get_all_endpoints_for_did`
- `get_endpoint_for_did`
- `get_nym_role`
- `get_revoc_reg_delta`
If multiple ledgers are configured, the `IndyLedgerRequestsExecutor` service extracts the `DID` from the record identifier and executes the check below; otherwise it returns the `BaseLedger` instance.
- `lookup_did_in_configured_ledgers` function:
  - Check the `cache` for an applicable `ledger_id` corresponding to the `DID`. If found, return the ledger info, else continue.
  - Run `_get_ledger_by_did` tasks for each of the configured ledgers.
  - Build `applicable_prod_ledgers` and `applicable_non_prod_ledgers` dictionaries, each with `self_certified` and `non_self_certified` inner dicts which are sorted by the original order or index.
  - Select a ledger in the order `self_certified` > `production` > `non_production`:
    - `production` ledger where the `DID` is `self_certified`
    - `non_production` ledger where the `DID` is `self_certified`
    - `production` ledger where the `DID` is not `self_certified`
    - `non_production` ledger where the `DID` is not `self_certified`
- `_get_ledger_by_did` function:
  - Submits a `GET_NYM` request to the ledger and checks whether the `DID` is self certified, returning the result to `lookup_did_in_configured_ledgers`
On startup, the first configured applicable ledger is assigned as the write_ledger
(BaseLedger
), the selection is dependent on the order (top-down) and whether it is production
or non_production
. For instance, considering this example configuration, ledger bcovrinTest
will be set as write_ledger
as it is the topmost production
ledger. If no production
ledgers are included in configuration then the topmost non_production
ledger is selected.
When you run in multi-ledger mode, ACA-Py will use the pool-name
(or id
) specified in the ledger configuration file for each ledger.
(When running in single-ledger mode, ACA-Py uses default
as the ledger name.)
If you are running against a ledger in write
mode, and the ledger requires you to accept a Transaction Author Agreement (TAA), ACA-Py stores the TAA acceptance status in the wallet in a non-secrets record, using the ledger's pool_name
as a key.
This means that if you are upgrading from single-ledger to multi-ledger mode, you will need to either:

- set the `id` for your writable ledger to `default` (in your `ledgers.yaml` file)

or:

- re-accept the TAA under the new ledger (pool) name
Once you re-start ACA-Py, you can check the GET /ledger/taa
endpoint to verify your TAA acceptance status.
There should be no impact/change in functionality to any ACA-Py protocols.
`IndySdkLedger` was refactored by replacing the `wallet: IndySdkWallet` instance variable with `profile: Profile`; accordingly, `./acapy_agent/indy/credex/verifier`, `./acapy_agent/indy/models/pres_preview`, `./acapy_agent/indy/sdk/profile.py`, `./acapy_agent/indy/sdk/verifier`, and `./acapy_agent/indy/verifier` were also updated.
Added build_and_return_get_nym_request
and submit_get_nym_request
helper functions to IndySdkLedger
and IndyVdrLedger
.
Best practice/feedback emerging from Askar session deadlock
issue and endorser refactoring
PR was also addressed here by not leaving sessions open unnecessarily and changing context.session
to context.profile.session
, etc.
These changes are made here:
./acapy_agent/ledger/routes.py
./acapy_agent/messaging/credential_definitions/routes.py
./acapy_agent/messaging/schemas/routes.py
./acapy_agent/protocols/actionmenu/v1_0/routes.py
./acapy_agent/protocols/actionmenu/v1_0/util.py
./acapy_agent/protocols/basicmessage/v1_0/routes.py
./acapy_agent/protocols/coordinate_mediation/v1_0/handlers/keylist_handler.py
./acapy_agent/protocols/coordinate_mediation/v1_0/routes.py
./acapy_agent/protocols/endorse_transaction/v1_0/routes.py
./acapy_agent/protocols/introduction/v0_1/handlers/invitation_handler.py
./acapy_agent/protocols/introduction/v0_1/routes.py
./acapy_agent/protocols/issue_credential/v1_0/handlers/credential_issue_handler.py
./acapy_agent/protocols/issue_credential/v1_0/handlers/credential_offer_handler.py
./acapy_agent/protocols/issue_credential/v1_0/handlers/credential_proposal_handler.py
./acapy_agent/protocols/issue_credential/v1_0/handlers/credential_request_handler.py
./acapy_agent/protocols/issue_credential/v1_0/routes.py
./acapy_agent/protocols/issue_credential/v2_0/routes.py
./acapy_agent/protocols/present_proof/v1_0/handlers/presentation_handler.py
./acapy_agent/protocols/present_proof/v1_0/handlers/presentation_proposal_handler.py
./acapy_agent/protocols/present_proof/v1_0/handlers/presentation_request_handler.py
./acapy_agent/protocols/present_proof/v1_0/routes.py
./acapy_agent/protocols/trustping/v1_0/routes.py
./acapy_agent/resolver/routes.py
./acapy_agent/revocation/routes.py
Most deployments of ACA-Py use a single wallet for all operations. This means all connections, credentials, keys, and everything else is stored in the same wallet and shared between all controllers of the agent. Multi-tenancy in ACA-Py allows multiple tenants to use the same ACA-Py instance with a different context. All tenants get their own encrypted wallet that only holds their own data.
This allows ACA-Py to be used for a wider range of use cases. One use case could be a company that creates a wallet for each department. Each department has full control over the actions they perform while having a shared instance for easy maintenance. Another use case could be an Issuer-Hosted Custodial Agent, where it is required to host the agent on behalf of someone else.
"},{"location":"features/Multitenancy/#table-of-contents","title":"Table of Contents","text":"When multi-tenancy is enabled in ACA-Py there is still a single agent running, however, some of the resources are now shared between the tenants of the agent. Each tenant has their own wallet, with their own DIDs, connections, and credentials. Transports and most of the settings are still shared between agents. Each wallet uses the same endpoint, so to the outside world, it is not obvious multiple tenants are using the same agent.
"},{"location":"features/Multitenancy/#base-and-sub-wallets","title":"Base and Sub Wallets","text":"Multi-tenancy in ACA-Py makes a distinction between a base wallet and sub wallets.
The wallets used by the different tenants are called sub wallets. A sub wallet is almost identical to a wallet when multi-tenancy is disabled. This means that you can do everything with it that a single-tenant ACA-Py instance can also do.
The base wallet, however, takes on a different role and has limited functionality. Its main function is to manage the sub wallets, which can be done using the Multi-tenant Admin API. It stores all settings and information about the different sub wallets and will route incoming messages to the corresponding sub wallets. See Message Routing for more details. All other features are disabled for the base wallet. This means it cannot issue credentials, present proof, or do any of the other actions sub wallets can do. This is to keep a clear hierarchical difference between base and sub wallets. For this reason, the base wallet should generally not be provisioned using the `--wallet-seed` argument: not only is it unnecessary for sub wallet management operations, but it would also require this DID to be correctly registered on the ledger for the service to start up correctly.
Multi-tenancy is disabled by default. You can enable support for multiple wallets using the --multitenant
startup parameter. To also be able to manage wallets for the tenants, the multi-tenant admin API can be enabled using the --multitenant-admin
startup parameter. See Multi-tenant Admin API below for more info on the admin API.
The --jwt-secret
startup parameter is required when multi-tenancy is enabled. This is used for JWT creation and verification. See Authentication below for more info.
Example:
# This enables multi-tenancy in ACA-Py\nmultitenant: true\n\n# This enables the admin API for multi-tenancy. More information below\nmultitenant-admin: true\n\n# This sets the secret used for JWT creation/verification for sub wallets\njwt-secret: Something very secret\n
"},{"location":"features/Multitenancy/#single-wallet-vs-multiple-wallets","title":"Single Wallet vs Multiple Wallets","text":"With askar wallets it's possible to have all tenant wallets in a single wallet or each have an individual wallet. The default is to have each tenant in a separate wallet. This is done to keep the wallets separate and to allow for more flexibility in the future. If you want to have all tenants in a single wallet you can set the multitenancy-config
with the value {\"wallet_type\": \"single-wallet-askar\"}
. If you want to explicitly set the wallet type for each tenant you can do so by setting the multitenancy-config
with the value {\"wallet_type\": \"basic\"}
. See .vscode-sample/multitenant-admin.yml for an example.
## Multi-tenant Admin API\n\nThe multi-tenant admin API allows you to manage wallets in ACA-Py. Only the base wallet can manage wallets, so you can't for example create a wallet in the context of a sub wallet (using the `Authorization` header as specified in [Authentication](#authentication)).\n\nMulti-tenancy related actions are grouped under the `/multitenancy` path or the `multitenancy` topic in the SwaggerUI. As mentioned above, the multi-tenant admin API is disabled by default, even when multi-tenancy is enabled. This is to allow for more flexible agent configuration (e.g. horizontal scaling where only a single instance exposes the admin API). To enable the multi-tenant admin API, the `--multitenant-admin` startup parameter can be used.\n\nSee the SwaggerUI for the exact API definition for multi-tenancy.\n\n## Managed vs Unmanaged Mode\n\nMulti-tenancy in ACA-Py is designed with two key management modes in mind.\n\n### Managed Mode\n\nIn **`managed`** mode, ACA-Py will manage the key for the wallet. This is the easiest configuration as it allows ACA-Py to fully control the wallet. When a message is received from another agent it can immediately unlock the wallet and process the message. The wallet key is stored encrypted in the base wallet.\n\n### Unmanaged Mode\n\nIn **`unmanaged`** mode, ACA-Py won't manage the key for the wallet. The key is not stored in the base wallet, which means the key to unlock the wallet needs to be provided whenever the wallet is used. When a message from another agent is received, ACA-Py cannot immediately unlock the wallet and process the message. See [Authentication](#authentication) for more info.\n\nIt is important to note unmanaged mode doesn't provide a lot of security over managed mode. The key is still processed by the agent, and therefore trust is required. It could however provide some benefit in the case a multi-tenant agent is compromised, as the agent doesn't store the key to unlock the wallet.\n\n> :warning: Although support for unmanaged mode is mostly in place, the receiving of messages from other agents in unmanaged mode is not supported yet. This means unmanaged mode cannot be used yet.\n\n### Mode Usage\n\nThe mode used can be specified when creating a wallet using the `key_management_mode` parameter.\n\n```jsonc\n// POST /multitenancy/wallet\n{\n // ... other params ...\n \"key_management_mode\": \"managed\" // or \"unmanaged\"\n}\n
"},{"location":"features/Multitenancy/#message-routing","title":"Message Routing","text":"In multi-tenant mode, when ACA-Py receives a message from another agent, it will need to determine which tenant to route the message to. ACA-Py defines two types of routing methods, mediation and relaying.
See the Mediators and Relays RFC for an in-depth description of the difference between the two concepts.
"},{"location":"features/Multitenancy/#relaying","title":"Relaying","text":"In multi-tenant mode, ACA-Py still exposes a single endpoint for each transport. This means it can't route messages to sub wallets based on the endpoint. To resolve this the base wallet acts as a relay for all sub wallets. As can be seen in the architecture diagram above, all messages go through the base wallet. whenever a sub wallet creates a new key or connection, it will be registered at the base wallet. This allows the base wallet to look at the recipient keys for a message and determine which wallet it needs to route to.
"},{"location":"features/Multitenancy/#mediation","title":"Mediation","text":"ACA-Py allows messages to be routed through a mediator, and multi-tenancy can be used in combination with external mediators. The following scenarios are possible:
--mediator-invitation
to connect to the mediator, request mediation, and set it as the default mediatordefault-mediator-id
if you're already connected to the mediator and mediation is granted (e.g. after restart).The main tradeoff between option 1. and 2. is redundancy and control. Option 1. doesn't require every sub wallet to create a new connection with the mediator and request mediation. When all sub wallets are going to use the same mediator, this can be a huge benefit. Option 2. gives more control over the mediator being used. This could be useful if e.g. all wallets use a different mediator.
A combination of option 1. and 2. is also possible. In this case, two mediators will be used and the sub wallet mediator will forward to the base wallet mediator, which will, in turn, forward to the ACA-Py instance.
+---------------------+ +----------------------+ +--------------------+\n| Sub wallet mediator | ---> | Base wallet mediator | ---> | Multi-tenant agent |\n+---------------------+ +----------------------+ +--------------------+\n
"},{"location":"features/Multitenancy/#webhooks","title":"Webhooks","text":""},{"location":"features/Multitenancy/#webhook-urls","title":"Webhook URLs","text":"ACA-Py makes use of webhook events to call back to the controller. Multiple webhook targets can be specified, however, in multi-tenant mode, it may be desirable to specify different webhook targets per wallet.
When creating a wallet wallet_dispatch_type
be used to specify how webhooks for the wallet should be dispatched. The options are:
default
: Dispatch only to webhooks associated with this wallet.base
: Dispatch only to webhooks associated with the base wallet.both
: Dispatch to both webhook targets.If either default
or both
is specified you can set the webhook URLs specific to this wallet using the wallet.webhook_urls
option.
Example:
// POST /multitenancy/wallet\n{\n // ... other params ...\n \"wallet_dispatch_type\": \"default\",\n \"wallet_webhook_urls\": [\n \"https://webhook-url.com/path\",\n \"https://another-url.com/site\"\n ]\n}\n
"},{"location":"features/Multitenancy/#identifying-the-wallet","title":"Identifying the wallet","text":"When the webhook URLs of the base wallet are used or when multiple wallets specify the same webhook URL it can be hard to identify the wallet an event belongs to. To resolve this each webhook event will include the wallet id the event corresponds to.
For HTTP events the wallet id is included as the x-wallet-id
header. For WebSockets, the wallet id is included in the enclosing JSON object.
HTTP example:
POST <webhook-url>/{topic} [headers=x-wallet-id]\n{\n // event payload\n}\n
WebSocket example:
{\n \"topic\": \"{topic}\",\n \"wallet_id\": \"{wallet_id}\",\n \"payload\": {\n // event payload\n }\n}\n
"},{"location":"features/Multitenancy/#authentication","title":"Authentication","text":"When multi-tenancy is not enabled you can authenticate with the agent using the x-api-key
header. As there is only a single wallet, this provides sufficient authentication and authorization.
For sub wallets, an additional authentication method is introduced using JSON Web Tokens (JWTs). A token
parameter is returned after creating a wallet or calling the get token endpoint. This token must be provided for every admin API call you want to perform for the wallet using the Bearer authorization scheme.
Example
GET /connections [headers=\"Authorization: Bearer {token}]\n
The Authorization
header is in addition to the Admin API key. So if the admin-api-key
is enabled (which should be enabled in production) both the Authorization
and the x-api-key
headers should be provided when making calls to a sub wallet. For calls to a base wallet, only the x-api-key
should be provided.
A token can be obtained in two ways. The first method is the token
parameter from the response of the create wallet (POST /multitenancy/wallet
) endpoint. The second option is using the get wallet token endpoint (POST /multitenancy/wallet/{wallet_id}/token
) endpoint.
This is the method you use to obtain a token when you haven't already registered a tenant. In this process you will first register a tenant then an object containing your tenant token
as well as other useful information like your wallet id
will be returned to you.
Example
new_tenant='{\n \"image_url\": \"https://aries.ca/images/sample.png\",\n \"key_management_mode\": \"managed\",\n \"label\": \"example-label-02\",\n \"wallet_dispatch_type\": \"default\",\n \"wallet_key\": \"example-encryption-key-02\",\n \"wallet_name\": \"example-name-02\",\n \"wallet_type\": \"askar\",\n \"wallet_webhook_urls\": [\n \"https://example.com/webhook\"\n ]\n}'\n
echo $new_tenant | curl -X POST \"${ACAPY_ADMIN_URL}/multitenancy/wallet\" \\\n -H \"Content-Type: application/json\" \\\n -H \"X-Api-Key: $ACAPY_ADMIN_URL_API_KEY\" \\\n -d @-\n
Response
{\n \"settings\": {\n \"wallet.type\": \"askar\",\n \"wallet.name\": \"example-name-02\",\n \"wallet.webhook_urls\": [\n \"https://example.com/webhook\"\n ],\n \"wallet.dispatch_type\": \"default\",\n \"default_label\": \"example-label-02\",\n \"image_url\": \"https://aries.ca/images/sample.png\",\n \"wallet.id\": \"3b64ad0d-f556-4c04-92bc-cd95bfde58cd\"\n },\n \"key_management_mode\": \"managed\",\n \"updated_at\": \"2022-04-01T15:12:35.474975Z\",\n \"wallet_id\": \"3b64ad0d-f556-4c04-92bc-cd95bfde58cd\",\n \"created_at\": \"2022-04-01T15:12:35.474975Z\",\n \"token\": \"eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ3YWxsZXRfaWQiOiIzYjY0YWQwZC1mNTU2LTRjMDQtOTJiYy1jZDk1YmZkZTU4Y2QifQ.A4eWbSR2M1Z6mbjcSLOlciBuUejehLyytCVyeUlxI0E\"\n}\n
"},{"location":"features/Multitenancy/#method-2-get-tenant-token","title":"Method 2: Get tenant token","text":"This method allows you to retrieve a tenant token
for an already registered tenant. To retrieve a token you will need an Admin API key (if your admin is protected with one), wallet_key
and the wallet_id
of the tenant. Note that calling the get tenant token endpoint will invalidate the old token. This is useful if the old token needs to be revoked, but does mean that you can't have multiple authentication tokens for the same wallet. Only the last generated token will always be valid.
Example
curl -X POST \"${ACAPY_ADMIN_URL}/multitenancy/wallet/{wallet_id}/token\" \\\n -H \"Content-Type: application/json\" \\\n -H \"X-Api-Key: $ACAPY_ADMIN_URL_API_KEY\" \\\n -d { \"wallet_key\": \"example-encryption-key-02\" }\n
Response
{\n \"token\": \"eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ3YWxsZXRfaWQiOiIzYjY0YWQwZC1mNTU2LTRjMDQtOTJiYy1jZDk1YmZkZTU4Y2QifQ.A4eWbSR2M1Z6mbjcSLOlciBuUejehLyytCVyeUlxI0E\"\n}\n
In unmanaged mode, the get token endpoint also requires the wallet_key
parameter to be included in the request body. The wallet key will be included in the JWT so the wallet can be unlocked when making requests to the admin API.
{\n \"wallet_id\": \"wallet_id\",\n // \"wallet_key\" in only present in unmanaged mode\n \"wallet_key\": \"wallet_key\"\n}\n
In unmanaged mode, sending the wallet_key
to unlock the wallet in every request is not \u201csecure\u201d but keeps it simple at the moment. Eventually, the authentication method should be pluggable, and unmanaged mode would just mean that the key to unlock the wallet is not managed by ACA-Py.
For deterministic JWT creation and verification between restarts and multiple instances, the same JWT secret would need to be used. Therefore a --jwt-secret
param is added to the ACA-Py agent that will be used for JWT creation and verification.
When using the SwaggerUI you can click the icon next to each of the endpoints or the Authorize
button at the top to set the correct authentication headers. Make sure to also include the Bearer
part in the input field. This won't be automatically added.
After registering a tenant which effectively creates a subwallet, you may need to update the tenant information or delete it. The following describes how to accomplish both goals.
"},{"location":"features/Multitenancy/#update-a-tenant","title":"Update a tenant","text":"The following properties can be updated: image_url
, label
, wallet_dispatch_type
, and wallet_webhook_urls
for tenants of a multitenancy wallet. To update these properties you will PUT
a request json containing the properties you wish to update along with the updated values to the /multitenancy/wallet/${TENANT_WALLET_ID}
admin endpoint. If the Admin API endpoint is protected, you will also include the Admin API Key in the request header.
Example
update_tenant='{\n \"image_url\": \"https://aries.ca/images/sample-updated.png\",\n \"label\": \"example-label-02-updated\",\n \"wallet_webhook_urls\": [\n \"https://example.com/webhook/updated\"\n ]\n}'\n
echo $update_tenant | curl -X PUT \"${ACAPY_ADMIN_URL}/multitenancy/wallet/${TENANT_WALLET_ID}\" \\\n -H \"Content-Type: application/json\" \\\n -H \"x-api-key: $ACAPY_ADMIN_URL_API_KEY\" \\\n -d @-\n
Response
{\n \"settings\": {\n \"wallet.type\": \"askar\",\n \"wallet.name\": \"example-name-02\",\n \"wallet.webhook_urls\": [\n \"https://example.com/webhook/updated\"\n ],\n \"wallet.dispatch_type\": \"default\",\n \"default_label\": \"example-label-02-updated\",\n \"image_url\": \"https://aries.ca/images/sample-updated.png\",\n \"wallet.id\": \"3b64ad0d-f556-4c04-92bc-cd95bfde58cd\"\n },\n \"key_management_mode\": \"managed\",\n \"updated_at\": \"2022-04-01T16:23:58.642004Z\",\n \"wallet_id\": \"3b64ad0d-f556-4c04-92bc-cd95bfde58cd\",\n \"created_at\": \"2022-04-01T15:12:35.474975Z\"\n}\n
An Admin API Key is all that is ALLOWED to be included in a request header during an update. Including the Bearer token header will result in a 404: Unauthorized error
"},{"location":"features/Multitenancy/#remove-a-tenant","title":"Remove a tenant","text":"The following information is required to delete a tenant:
Example
curl -X POST \"${ACAPY_ADMIN_URL}/multitenancy/wallet/{wallet_id}/remove\" \\\n -H \"Content-Type: application/json\" \\\n -H \"x-api-key: $ACAPY_ADMIN_URL_API_KEY\" \\\n -d '{ \"wallet_key\": \"example-encryption-key-02\" }'\n
Response
{}\n
"},{"location":"features/Multitenancy/#per-tenant-settings","title":"Per tenant settings","text":"To allow the configuring of ACA-Py startup parameters/environment variables at a tenant/subwallet level. PR#2233 will provide the ability to update the following subset of settings when creating or updating the subwallet:
Labels Setting ACAPY_LOG_LEVEL log-level log.level ACAPY_INVITE_PUBLIC invite-public debug.invite_public ACAPY_PUBLIC_INVITES public-invites public_invites ACAPY_AUTO_ACCEPT_INVITES auto-accept-invites debug.auto_accept_invites ACAPY_AUTO_ACCEPT_REQUESTS auto-accept-requests debug.auto_accept_requests ACAPY_AUTO_PING_CONNECTION auto-ping-connection auto_ping_connection ACAPY_MONITOR_PING monitor-ping debug.monitor_ping ACAPY_AUTO_RESPOND_MESSAGES auto-respond-messages debug.auto_respond_messages ACAPY_AUTO_RESPOND_CREDENTIAL_OFFER auto-respond-credential-offer debug.auto_respond_credential_offer ACAPY_AUTO_RESPOND_CREDENTIAL_REQUEST auto-respond-credential-request debug.auto_respond_credential_request ACAPY_AUTO_VERIFY_PRESENTATION auto-verify-presentation debug.auto_verify_presentation ACAPY_NOTIFY_REVOCATION notify-revocation revocation.notify ACAPY_AUTO_REQUEST_ENDORSEMENT auto-request-endorsement endorser.auto_request ACAPY_AUTO_WRITE_TRANSACTIONS auto-write-transactions endorser.auto_write ACAPY_CREATE_REVOCATION_TRANSACTIONS auto-create-revocation-transactions endorser.auto_create_rev_reg ACAPY_ENDORSER_ROLE endorser-protocol-role endorser.protocol_rolePOST /multitenancy/wallet
Added extra_settings
dict field to request schema. extra_settings
can be configured in the request body as below:
Example Request
{\n \"wallet_name\": \" ... \",\n \"default_label\": \" ... \",\n \"wallet_type\": \" ... \",\n \"wallet_key\": \" ... \",\n \"key_management_mode\": \"managed\",\n \"wallet_webhook_urls\": [],\n \"wallet_dispatch_type\": \"base\",\n \"extra_settings\": {\n \"ACAPY_LOG_LEVEL\": \"INFO\",\n \"ACAPY_INVITE_PUBLIC\": true,\n \"public-invites\": true\n },\n}\n
echo $new_tenant | curl -X POST \"${ACAPY_ADMIN_URL}/multitenancy/wallet\" \\\n -H \"Content-Type: application/json\" \\\n -H \"X-Api-Key: $ACAPY_ADMIN_URL_API_KEY\" \\\n -d @-\n
PUT /multitenancy/wallet/{wallet_id}
Added extra_settings
dict field to request schema.
Example Request
{\n \"wallet_webhook_urls\": [ ... ],\n \"wallet_dispatch_type\": \"default\",\n \"label\": \" ... \",\n \"image_url\": \" ... \",\n \"extra_settings\": {\n \"ACAPY_LOG_LEVEL\": \"INFO\",\n \"ACAPY_INVITE_PUBLIC\": true,\n \"ACAPY_PUBLIC_INVITES\": false\n },\n }\n
echo $update_tenant | curl -X PUT \"${ACAPY_ADMIN_URL}/multitenancy/wallet/${WALLET_ID}\" \\\n -H \"Content-Type: application/json\" \\\n -H \"x-api-key: $ACAPY_ADMIN_URL_API_KEY\" \\\n -d @-\n
"},{"location":"features/PlugIns/","title":"Deeper Dive: ACA-Py Plug-Ins","text":"ACA-Py plugins enable standardized extensibility without overloading the core ACA-Py code base. Plugins may be features that you create specific to your deployment, or that you deploy from the ACA-Py Plugins \"Store\". Visit the Plugins Store to find all of the open source plugins that have been contributed.
"},{"location":"features/PlugIns/#whats-in-a-plug-in-and-how-does-they-work","title":"What's in a Plug-In and How Does They Work?","text":"Plug-ins are loaded on ACA-Py startup based on the following parameters:
--plugin
- identifies the plug-in library to load--block-plugin
- identifies plug-ins (including built-ins) that are not to be loaded--plugin-config
- identify a configuration parameter for a plug-in--plugin-config-value
- identify a value for a plug-in configurationThe --plug-in
parameter specifies a package that is loaded by ACA-Py at runtime, and extends ACA-Py by adding support for additional protocols and message types, and/or extending the Admin API with additional endpoints.
The original plug-in design (which we will call the \"old\" model) explicitly included message_types.py
routes.py
(to add Admin API's). But functionality was added later (we'll call this the \"new\" model) to allow the plug-in to include a generic setup
package that could perform arbitrary initialization. The \"new\" model also includes support for a definition.py
file that can specify plug-in version information (major/minor plug-in version, as well as the minimum supported version (if another agent is running an older version of the plug-in)).
You can discover which plug-ins are installed in an ACA-Py instance by calling (in the \"server\" section) the GET /plugins
endpoint. (Note that this will return all loaded protocols, including the built-ins. You can call the GET /status/config
to inspect the ACA-Py configuration, which will include the configuration for the external plug-ins.)
If a setup method is provided, it will be called. If not, the message_types.py
and routes.py
will be explicitly loaded.
This would be in the package/module __init__.py
:
async def setup(context: InjectionContext):\n pass\n
TODO I couldn't find an implementation of a custom setup
in any of the existing plug-ins, so I'm not completely sure what are the best practices for this option.
When loading a plug-in, if there is a message_types.py
available, ACA-Py will check the following attributes to initialize the protocol(s):
MESSAGE_TYPES
- identifies message types supported by the protocolCONTROLLERS
- identifies protocol controllersIf routes.py
is available, then ACA-Py will call the following functions to initialize the Admin endpoints:
register()
- registers routes for the new Admin endpointsregister_events()
- registers an events this package will listen for/respond toIf definition.py
is available, ACA-Py will read this package to determine protocol version information. An example follows (this is an example that specifies two protocol versions):
versions = [\n {\n \"major_version\": 1,\n \"minimum_minor_version\": 0,\n \"current_minor_version\": 0,\n \"path\": \"v1_0\",\n },\n {\n \"major_version\": 2,\n \"minimum_minor_version\": 0,\n \"current_minor_version\": 0,\n \"path\": \"v2_0\",\n },\n]\n
The attributes are:
major_version
- specifies the protocol major versioncurrent_minor_version
- specifies the protocol minor versionminimum_minor_version
- specifies the minimum supported version (if a lower version is installed in another agent)path
- specifies the sub-path within the package for this versionThe load sequence for a plug-in (the \"Startup\" class depends on how ACA-Py is running - upgrade
, provision
or start
):
sequenceDiagram\n participant Startup\n Note right of Startup: Configuration is loaded on startup<br/>from ACA-Py config params\n Startup->>+ArgParse: configure\n ArgParse->>settings: [\"external_plugins\"]\n ArgParse->>settings: [\"blocked_plugins\"]\n\n Startup->>+Conductor: setup()\n Note right of Conductor: Each configured plug-in is validated and loaded\n Conductor->>DefaultContext: build_context()\n DefaultContext->>DefaultContext: load_plugins()\n DefaultContext->>+PluginRegistry: register_package() (for built-in protocols)\n PluginRegistry->>PluginRegistry: register_plugin() (for each sub-package)\n DefaultContext->>PluginRegistry: register_plugin() (for non-protocol built-ins)\n loop for each external plug-in\n DefaultContext->>PluginRegistry: register_plugin()\n alt if a setup method is provided\n PluginRegistry->>ExternalPlugIn: has setup\n else if routes and/or message_types are provided\n PluginRegistry->>ExternalPlugIn: has routes\n PluginRegistry->>ExternalPlugIn: has message_types\n end\n opt if definition is provided\n PluginRegistry->>ExternalPlugIn: definition()\n end\n end\n DefaultContext->>PluginRegistry: init_context()\n loop for each external plug-in\n alt if a setup method is provided\n PluginRegistry->>ExternalPlugIn: setup()\n else if a setup method is NOT provided\n PluginRegistry->>PluginRegistry: load_protocols()\n PluginRegistry->>PluginRegistry: load_protocol_version()\n PluginRegistry->>ProtocolRegistry: register_message_types()\n PluginRegistry->>ProtocolRegistry: register_controllers()\n end\n PluginRegistry->>PluginRegistry: register_protocol_events()\n end\n\n Conductor->>Conductor: load_transports()\n\n Note right of Conductor: If the admin server is enabled, plug-in routes are added\n Conductor->>AdminServer: create admin server if enabled\n\n Startup->>Conductor: start()\n Conductor->>Conductor: start_transports()\n Conductor->>AdminServer: start()\n\n Note right of Startup: the following represents an<br/>admin server api request\n Startup->>AdminServer: setup_context() (called on each request)\n AdminServer->>PluginRegistry: register_admin_routes()\n loop for each external plug-in\n PluginRegistry->>ExternalPlugIn: routes.register() (to register endpoints)\n end
"},{"location":"features/PlugIns/#developing-a-new-plug-in","title":"Developing a New Plug-In","text":"When developing a new plug-in:
definition.py
file.message_types.py
file.routes.py
file.setup.py
file to initialize the custom functionality. No guidance is currently available for this option.Most ACA-Py plug-ins provide support for installing the plug-in using poetry. It is recommended to include support in your package for installing using either pip or poetry, to provide maximum support for users of your plug-in.
"},{"location":"features/PlugIns/#plug-in-demo","title":"Plug-In Demo","text":"TBD
"},{"location":"features/PlugIns/#aca-py-plug-ins-repository","title":"ACA-Py Plug-ins Repository","text":"Checkout the \"Plugins\" tab in the ACA-Py Plugins \"Store\" to find a list of plugins that might be useful in your deployment. Instructions are included for how you can contribute your plugin to the list.
"},{"location":"features/PlugIns/#references","title":"References","text":"The following links may be helpful or provide additional context for the current plug-in support. (These are links to issues or pull requests that were raised during plug-in development.)
Configuration params:
Loading plug-ins:
Versioning for plug-ins:
In the past, ACA-Py has used \"unqualified\" DIDs by convention established early on in the Aries ecosystem, before the concept of Peer DIDs, or DIDs that existed only between peers and were not (necessarily) published to a distributed ledger, fully matured. These \"unqualified\" DIDs were effectively Indy Nyms that had not been published to an Indy network. Key material and service endpoints were communicated by embedding the DID Document for the \"DID\" in DID Exchange request and response messages.
For those familiar with the DID Core Specification, it is a stretch to refer to these unqualified DIDs as DIDs. Usage of these DIDs will be phased out, as dictated by Aries RFC 0793: Unqualified DID Transition. These DIDs will be phased out in favor of the did:peer
DID Method. ACA-Py's support for this method and it's use in DID Exchange and DID Rotation is dictated below.
When using DID Exchange as initiated by an Out-of-Band invitation:
POST /out-of-band/create-invitation
accepts two parameters (in addition to others):use_did_method
: a DID Method (options: did:peer:2
did:peer:4
) indicating that a DID of that type is created (if necessary), and used in the invitation. If a DID of the type has to be created, it is flagged as the \"invitation\" DID and used in all future invitations so that connection reuse is the default behaviour.did:peer:4
.use_did
: a complete DID, which will be used for the invitation being established. This supports the edge case of an entity wanting to use a new DID for every invitation. It is the responsibility of the controller to create the DID before passing it in.use_did_method=\"did:peer:4\"
is the default, which is created and (re)used.didexchange/1.1
. Optionally, didexchage/1.0
may also be provided, thus enabling backwards compatibility with agents that do not yet support didexchage/1.0
and use of unqualified DIDs.When receiving an OOB invitation or creating a DID Exchange request to a known Public DID:
POST /didexchange/create-request
and POST /didexchange/{conn_id}/accept-invitation
accepts two parameters (in addition to others):use_did_method
: a DID Method (options: did:peer:2
did:peer:4
) indicating that a DID of that type should be created and used for the connection.did:peer:4
.use_did
: a complete DID, which will be used for the connection being established. This supports the edge case of an entity wanting to use the same DID for more than one connection. It is the responsibility of the controller to create the DID before passing it in.did:peer:4
is created and DID Exchange 1.1 is always used.auto-accept
is used with DID Exchange, then an unqualified DID is created if DID Exchange 1.0 is being used, and a DID Peer 4 is used if DID Exchange 1.1 is used.With these changes, an existing ACA-Py installation using unqualified DIDs can upgrade to use qualified DIDs:
use_did
or use_did_method
parameter on the POST /out-of-band/create-invitation
, POST /didexchange/create-request
. and POST /didexchange/{conn_id}/accept_invitation
endpoints and specifying did:peer:2
or did_peer:4
.auto-accept
the connection.As part of the transition to qualified DIDs, existing connections may be updated to qualified DIDs using the DID Rotate protocol. This is not strictly required; since DIDComm v1 depends on recipient keys for correlating a received message back to a connection, the DID itself is mostly ignored. However, as we transition to DIDComm v2 or if it is desired to update the keys associated with a connection, DID Rotate may be used to update keys and service endpoints.
The steps to do so are:
POST /wallet/did/create
(or through the endpoints provided by a plugged in DID Method, if relevant).did:peer:4
.POST /did-rotate/{conn_id}/rotate
providing the created DID as the to_did
in the body of the Admin API request.did_rotate
webhook will be emitted indicating success.This document describes the implementation of SD-JWTs in ACA-Py according to the Selective Disclosure for JWTs (SD-JWT) Specification, which defines a mechanism for selective disclosure of individual elements of a JSON object used as the payload of a JSON Web Signature structure.
This implementation adds an important privacy-preserving feature to JWTs, since the receiver of an unencrypted JWT can view all claims within. This feature allows the holder to present only a relevant subset of the claims for a given presentation. The issuer includes plaintext claims, called disclosures, outside of the JWT. Each disclosure corresponds to a hidden claim within the JWT. When a holder prepares a presentation, they include along with the JWT only the disclosures corresponding to the claims they wish to reveal. The verifier verifies that the disclosures in fact correspond to claim values within the issuer-signed JWT. The verifier cannot view the claim values not disclosed by the holder.
In addition, this implementation includes an optional mechanism for key binding, which is the concept of binding an SD-JWT to a holder's public key and requiring that the holder prove possession of the corresponding private key when presenting the SD-JWT.
"},{"location":"features/SelectiveDisclosureJWTs/#issuer-instructions","title":"Issuer Instructions","text":"The issuer determines which claims in an SD-JWT can be selectively disclosable. In this implementation, all claims at all levels of the JSON structure are by default selectively disclosable. If the issuer wishes for certain claims to always be visible, they can indicate which claims should not be selectively disclosable, as described below. Essential verification data such as iss
, iat
, exp
, and cnf
are always visible.
The issuer creates a list of JSON paths for the claims that will not be selectively disclosable. Here is an example payload:
{\n \"birthdate\": \"1940-01-01\",\n \"address\": {\n \"street_address\": \"123 Main St\",\n \"locality\": \"Anytown\",\n \"region\": \"Anystate\",\n \"country\": \"US\",\n },\n \"nationalities\": [\"US\", \"DE\", \"SA\"],\n}\n
Attribute to access JSON path \"birthdate\" \"birthdate\" The country attribute within the address dictionary \"address.country\" The second item in the nationalities list \"nationalities[1] All items in the nationalities list \"nationalities[0:2]\" The specification defines options for how the issuer can handle nested structures with respect to selective disclosability. As mentioned, all claims at all levels of the JSON structure are by default selectively disclosable.
"},{"location":"features/SelectiveDisclosureJWTs/#option-1-flat-sd-jwt","title":"Option 1: Flat SD-JWT","text":"The issuer can decide to treat the address
claim in the above example payload as a block that can either be disclosed completely or not at all.
The issuer lists out all the claims inside \"address\" in the non_sd_list
, but not address
itself:
non_sd_list = [\n \"address.street_address\",\n \"address.locality\",\n \"address.region\",\n \"address.country\",\n]\n
"},{"location":"features/SelectiveDisclosureJWTs/#option-2-structured-sd-jwt","title":"Option 2: Structured SD-JWT","text":"The issuer may instead decide to make the address
claim contents selectively disclosable individually.
The issuer lists only \"address\" in the non_sd_list
.
non_sd_list = [\"address\"]\n
"},{"location":"features/SelectiveDisclosureJWTs/#option-3-sd-jwt-with-recursive-disclosures","title":"Option 3: SD-JWT with Recursive Disclosures","text":"The issuer may also decide to make the address
claim contents selectively disclosable recursively, i.e., the address
claim is made selectively disclosable as well as its sub-claims.
The issuer lists neither address
nor the subclaims of address
in the non_sd_list
, leaving all with their default selective disclosability. If all claims can be selectively disclosable, the non_sd_list
need not be defined explicitly.
/wallet/sd-jwt/sign
endpoint","text":"{\n \"did\": \"WpVJtxKVwGQdRpQP8iwJZy\",\n \"headers\": {},\n \"payload\": {\n \"sub\": \"user_42\",\n \"given_name\": \"John\",\n \"family_name\": \"Doe\",\n \"email\": \"johndoe@example.com\",\n \"phone_number\": \"+1-202-555-0101\",\n \"phone_number_verified\": true,\n \"address\": {\n \"street_address\": \"123 Main St\",\n \"locality\": \"Anytown\",\n \"region\": \"Anystate\",\n \"country\": \"US\"\n },\n \"birthdate\": \"1940-01-01\",\n \"updated_at\": 1570000000,\n \"nationalities\": [\"US\", \"DE\", \"SA\"],\n \"iss\": \"https://example.com/issuer\",\n \"iat\": 1683000000,\n \"exp\": 1883000000\n },\n \"non_sd_list\": [\n \"given_name\",\n \"family_name\",\n \"nationalities\"\n ]\n}\n
"},{"location":"features/SelectiveDisclosureJWTs/#output","title":"Output","text":"\"eyJ0eXAiOiAiSldUIiwgImFsZyI6ICJFZERTQSIsICJraWQiOiAiZGlkOnNvdjpXcFZKdHhLVndHUWRScFFQOGl3Slp5I2tleS0xIn0.eyJfc2QiOiBbIkR0a21ha3NkZGtHRjFKeDBDY0kxdmxRTmZMcGFnQWZ1N3p4VnBGRWJXeXciLCAiSlJLb1E0QXVHaU1INWJIanNmNVV4YmJFeDh2YzFHcUtvX0l3TXE3Nl9xbyIsICJNTTh0TlVLNUstR1lWd0swX01kN0k4MzExTTgwVi13Z0hRYWZvRkoxS09JIiwgIlBaM1VDQmdadVRMMDJkV0pxSVY4elUtSWhnalJNX1NTS3dQdTk3MURmLTQiLCAiX294WGNuSW5Yai1SV3BMVHNISU5YaHFrRVAwODkwUFJjNDBISWE1NElJMCIsICJhdnRLVW5Sdnc1clV0TnZfUnAwUll1dUdkR0RzcnJPYWJfVjR1Y05RRWRvIiwgInByRXZJbzBseTVtNTVsRUpTQUdTVzMxWGdVTElOalo5ZkxiRG81U1pCX0UiXSwgImdpdmVuX25hbWUiOiAiSm9obiIsICJmYW1pbHlfbmFtZSI6ICJEb2UiLCAibmF0aW9uYWxpdGllcyI6IFt7Ii4uLiI6ICJPdU1wcEhpYzEySjYzWTBIY2Ffd1BVeDJCTGdUQVdZQjJpdXpMY3lvcU5JIn0sIHsiLi4uIjogIlIxczlaU3NYeVV0T2QyODdEYy1DTVYyMEdvREF3WUVHV3c4ZkVKd1BNMjAifSwgeyIuLi4iOiAid0lJbjdhQlNDVkFZcUF1Rks3Nmpra3FjVGFvb3YzcUhKbzU5WjdKWHpnUSJ9XSwgImlzcyI6ICJodHRwczovL2V4YW1wbGUuY29tL2lzc3VlciIsICJpYXQiOiAxNjgzMDAwMDAwLCAiZXhwIjogMTg4MzAwMDAwMCwgIl9zZF9hbGciOiAic2hhLTI1NiJ9.cIsuGTIPfpRs_Z49nZcn7L6NUgxQumMGQpu8K6rBtv-YRiFyySUgthQI8KZe1xKyn5Wc8zJnRcWbFki2Vzw6Cw~WyJmWURNM1FQcnZicnZ6YlN4elJsUHFnIiwgIlNBIl0~WyI0UGc2SmZ0UnRXdGFPcDNZX2tscmZRIiwgIkRFIl0~WyJBcDh1VHgxbVhlYUgxeTJRRlVjbWV3IiwgIlVTIl0~WyJ4dkRYMDBmalpmZXJpTmlQb2Q1MXFRIiwgInVwZGF0ZWRfYXQiLCAxNTcwMDAwMDAwXQ~WyJYOTlzM19MaXhCY29yX2hudFJFWmNnIiwgInN1YiIsICJ1c2VyXzQyIl0~WyIxODVTak1hM1k3QlFiWUpabVE3U0NRIiwgInBob25lX251bWJlcl92ZXJpZmllZCIsIHRydWVd~WyJRN1FGaUpvZkhLSWZGV0kxZ0Vaal93IiwgInBob25lX251bWJlciIsICIrMS0yMDItNTU1LTAxMDEiXQ~WyJOeWtVcmJYN1BjVE1ubVRkUWVxZXl3IiwgImVtYWlsIiwgImpvaG5kb2VAZXhhbXBsZS5jb20iXQ~WyJlemJwQ2lnVlhrY205RlluVjNQMGJ3IiwgImJpcnRoZGF0ZSIsICIxOTQwLTAxLTAxIl0~WyJvd3ROX3I5Z040MzZKVnJFRWhQU05BIiwgInN0cmVldF9hZGRyZXNzIiwgIjEyMyBNYWluIFN0Il0~WyJLQXktZ0VaWmRiUnNHV1dNVXg5amZnIiwgInJlZ2lvbiIsICJBbnlzdGF0ZSJd~WyJPNnl0anM2SU9HMHpDQktwa0tzU1pBIiwgImxvY2FsaXR5IiwgIkFueXRvd24iXQ~WyI0Nzg5aG5GSjhFNTRsLW91RjRaN1V3IiwgImNvdW50cnkiLCAiVVMiXQ~WyIyaDR3N0FuaDFOOC15ZlpGc2FGVHRBIiwgImFkZHJlc3MiLCB7Il9zZCI6IFsiTXhKRDV5Vm9QQzFIQnhPRmVRa21TQ1E0dVJrYmNrellza1Z5RzVwMXZ5SSIsICJVYkxmVWlpdDJTOFhlX2pYbS15RHBHZXN0ZDNZOGJZczVGaVJpbVBtMHdvIiwgImhsQzJEYVBwT2t0eHZyeUFlN3U2YnBuM09IZ193Qk5heExiS3lPRDVMdkEiLCAia2NkLVJNaC1PaGFZS1FPZ2JaajhmNUppOXNLb2hyYnlhYzNSdXRqcHNNYyJdfV0~\"\n
The sd_jwt_sign()
method:
non_sd_list
compared against the list of JSON paths for all claims to create the list of JSON paths for selectively disclosable claimssd_list
so that the claims deepest in the structure are handled firstsd_list
to find each selectively disclosable claim and wrap it in the SDObj
defined by the sd-jwt Python library and removes/replaces the original entrySDJWTIssuerACAPy.issue()
method:SDJWTIssuerACAPy._create_signed_jws()
, which is redefined in order to use the ACA-Py jwt_sign
method and which creates the JWT/wallet/sd-jwt/verify
endpoint","text":"Using the output from the /wallet/sd-jwt/sign
example above, we have decided to only reveal two of the selectively disclosable claims (user
and updated_at
) and achieved this by only including the disclosures for those claims. We have also included a key binding JWT following the disclosures.
{\n \"sd_jwt\": \"eyJ0eXAiOiAiSldUIiwgImFsZyI6ICJFZERTQSIsICJraWQiOiAiZGlkOnNvdjpXcFZKdHhLVndHUWRScFFQOGl3Slp5I2tleS0xIn0.eyJfc2QiOiBbIkR0a21ha3NkZGtHRjFKeDBDY0kxdmxRTmZMcGFnQWZ1N3p4VnBGRWJXeXciLCAiSlJLb1E0QXVHaU1INWJIanNmNVV4YmJFeDh2YzFHcUtvX0l3TXE3Nl9xbyIsICJNTTh0TlVLNUstR1lWd0swX01kN0k4MzExTTgwVi13Z0hRYWZvRkoxS09JIiwgIlBaM1VDQmdadVRMMDJkV0pxSVY4elUtSWhnalJNX1NTS3dQdTk3MURmLTQiLCAiX294WGNuSW5Yai1SV3BMVHNISU5YaHFrRVAwODkwUFJjNDBISWE1NElJMCIsICJhdnRLVW5Sdnc1clV0TnZfUnAwUll1dUdkR0RzcnJPYWJfVjR1Y05RRWRvIiwgInByRXZJbzBseTVtNTVsRUpTQUdTVzMxWGdVTElOalo5ZkxiRG81U1pCX0UiXSwgImdpdmVuX25hbWUiOiAiSm9obiIsICJmYW1pbHlfbmFtZSI6ICJEb2UiLCAibmF0aW9uYWxpdGllcyI6IFt7Ii4uLiI6ICJPdU1wcEhpYzEySjYzWTBIY2Ffd1BVeDJCTGdUQVdZQjJpdXpMY3lvcU5JIn0sIHsiLi4uIjogIlIxczlaU3NYeVV0T2QyODdEYy1DTVYyMEdvREF3WUVHV3c4ZkVKd1BNMjAifSwgeyIuLi4iOiAid0lJbjdhQlNDVkFZcUF1Rks3Nmpra3FjVGFvb3YzcUhKbzU5WjdKWHpnUSJ9XSwgImlzcyI6ICJodHRwczovL2V4YW1wbGUuY29tL2lzc3VlciIsICJpYXQiOiAxNjgzMDAwMDAwLCAiZXhwIjogMTg4MzAwMDAwMCwgIl9zZF9hbGciOiAic2hhLTI1NiJ9.cIsuGTIPfpRs_Z49nZcn7L6NUgxQumMGQpu8K6rBtv-YRiFyySUgthQI8KZe1xKyn5Wc8zJnRcWbFki2Vzw6Cw~WyJ4dkRYMDBmalpmZXJpTmlQb2Q1MXFRIiwgInVwZGF0ZWRfYXQiLCAxNTcwMDAwMDAwXQ~WyJYOTlzM19MaXhCY29yX2hudFJFWmNnIiwgInN1YiIsICJ1c2VyXzQyIl0~eyJhbGciOiAiRWREU0EiLCAidHlwIjogImtiK2p3dCIsICJraWQiOiAiZGlkOnNvdjpXcFZKdHhLVndHUWRScFFQOGl3Slp5I2tleS0xIn0.eyJub25jZSI6ICIxMjM0NTY3ODkwIiwgImF1ZCI6ICJodHRwczovL2V4YW1wbGUuY29tL3ZlcmlmaWVyIiwgImlhdCI6IDE2ODgxNjA0ODN9.i55VeR7bNt7T8HWJcfj6jSLH3Q7vFk8N0t7Tb5FZHKmiHyLrg0IPAuK5uKr3_4SkjuGt1_iNl8Wr3atWBtXMDA\"\n}\n
"},{"location":"features/SelectiveDisclosureJWTs/#verify-output","title":"Verify Output","text":"Note that attributes in the non_sd_list
(given_name
, family_name
, and nationalities
), as well as essential verification data (iss
, iat
, exp
) are visible directly within the payload. The disclosures include only the values for the user
and updated_at
claims, since those are the only selectively disclosable claims that the holder presented. The corresponding hashes for those disclosures appear in the payload[\"_sd\"]
list.
{\n \"headers\": {\n \"typ\": \"JWT\",\n \"alg\": \"EdDSA\",\n \"kid\": \"did:sov:WpVJtxKVwGQdRpQP8iwJZy#key-1\"\n },\n \"payload\": {\n \"_sd\": [\n \"DtkmaksddkGF1Jx0CcI1vlQNfLpagAfu7zxVpFEbWyw\",\n \"JRKoQ4AuGiMH5bHjsf5UxbbEx8vc1GqKo_IwMq76_qo\",\n \"MM8tNUK5K-GYVwK0_Md7I8311M80V-wgHQafoFJ1KOI\",\n \"PZ3UCBgZuTL02dWJqIV8zU-IhgjRM_SSKwPu971Df-4\",\n \"_oxXcnInXj-RWpLTsHINXhqkEP0890PRc40HIa54II0\",\n \"avtKUnRvw5rUtNv_Rp0RYuuGdGDsrrOab_V4ucNQEdo\",\n \"prEvIo0ly5m55lEJSAGSW31XgULINjZ9fLbDo5SZB_E\"\n ],\n \"given_name\": \"John\",\n \"family_name\": \"Doe\",\n \"nationalities\": [\n {\n \"...\": \"OuMppHic12J63Y0Hca_wPUx2BLgTAWYB2iuzLcyoqNI\"\n },\n {\n \"...\": \"R1s9ZSsXyUtOd287Dc-CMV20GoDAwYEGWw8fEJwPM20\"\n },\n {\n \"...\": \"wIIn7aBSCVAYqAuFK76jkkqcTaoov3qHJo59Z7JXzgQ\"\n }\n ],\n \"iss\": \"https://example.com/issuer\",\n \"iat\": 1683000000,\n \"exp\": 1883000000,\n \"_sd_alg\": \"sha-256\"\n },\n \"valid\": true,\n \"kid\": \"did:sov:WpVJtxKVwGQdRpQP8iwJZy#key-1\",\n \"disclosures\": [\n [\n \"xvDX00fjZferiNiPod51qQ\",\n \"updated_at\",\n 1570000000\n ],\n [\n \"X99s3_LixBcor_hntREZcg\",\n \"sub\",\n \"user_42\"\n ]\n ]\n}\n
The `sd_jwt_verify()` method:

- Uses `SDJWTVerifierACAPy._verify_sd_jwt`, which is redefined in order to use the ACA-Py `jwt_verify` method, and which returns the verified JWT

This document provides a summary of the adherence of ACA-Py to the Aries Interop Profiles, and an overview of the ACA-Py feature set. This document is manually updated and, as such, may not be up to date with the most recent release of ACA-Py or the repository `main`
branch. Reminders (and PRs!) to update this page are welcome! If you have any questions, please contact us on the #aries channel on OpenWallet Foundation Discord or through an issue in this repo.
Last Update: 2024-10-08, Release 1.0.1
The checklist version of this document was created as a joint effort between Northern Block, Animo Solutions and the Ontario government, on behalf of the Ontario government.
"},{"location":"features/SupportedRFCs/#aip-support-and-interoperability","title":"AIP Support and Interoperability","text":"See the Aries Agent Test Harness and the Aries Interoperability Status for daily interoperability test run results between ACA-Py and other decentralized trust Frameworks and Agents.
- AIP 1.0: Fully supported. Deprecation notices published
- AIP 2.0: Fully supported.

A summary of the Aries Interop Profiles and Aries RFCs supported in ACA-Py can be found later in this document.
"},{"location":"features/SupportedRFCs/#platform-support","title":"Platform Support","text":"Platform Supported Notes Server Kubernetes BC Gov has extensive experience running ACA-Py on Red Hat's OpenShift Kubernetes Distribution. Docker Official docker images are published to the GitHub container repository at https://ghcr.io/openwallet-foundation/acapy. Desktop Could be run as a local service on the computer iOS Android Browser"},{"location":"features/SupportedRFCs/#agent-types","title":"Agent Types","text":"Role Supported Notes Issuer Holder Verifier Mediator Service See the aries-mediator-service, a pre-configured, production ready Aries Mediator Service based on a released version of ACA-Py. Mediator Client Indy Transaction Author Indy Transaction Endorser Indy Endorser Service See the aries-endorser-service, a pre-configured, production ready Aries Endorser Service based on a released version of ACA-Py."},{"location":"features/SupportedRFCs/#credential-types","title":"Credential Types","text":"Credential Type Supported Notes Hyperledger AnonCreds Includes full issue VC, present proof, and revoke VC support. W3C Verifiable Credentials Data Model Supports JSON-LD Data Integrity Proof Credentials using theEd25519Signature2018
, BbsBlsSignature2020
and BbsBlsSignatureProof2020
signature suites.Supports the DIF Presentation Exchange data format for presentation requests and presentation submissions.Work currently underway to add support for Hyperledger AnonCreds in W3C VC JSON-LD Format"},{"location":"features/SupportedRFCs/#did-methods","title":"DID Methods","text":"Method Supported Notes \"unqualified\" Deprecated Pre-DID standard identifiers. Used either in a peer-to-peer context, or as an alternate form of a did:sov
DID published on an Indy network. did:sov
did:web
Resolution only did:key
did:peer
Algorithms 2
/3
and 4
Universal Resolver A plug in from SICPA is available that can be added to an ACA-Py installation to support a universal resolver capability, providing support for most DID methods in the W3C DID Method Registry."},{"location":"features/SupportedRFCs/#secure-storage-types","title":"Secure Storage Types","text":"Secure Storage Types Supported Notes Aries Askar Recommended - Aries Askar provides equivalent/evolved secure storage and cryptography support to the \"indy-wallet\" part of the Indy SDK. When using Askar (via the --wallet-type askar
startup parameter), other functionality is handled by CredX (AnonCreds) and Indy VDR (Indy ledger interactions). Aries Askar-AnonCreds Recommended - When using Askar/AnonCreds (via the --wallet-type askar-anoncreds
startup parameter), other functionality is handled by AnonCreds RS (AnonCreds) and Indy VDR (Indy ledger interactions).This wallet-type
will eventually be the same as askar
when we have fully integrated the AnonCreds RS library into ACA-Py. Indy SDK Removed in ACA-Py Release 1.0.0rc5 Existing deployments using the Indy SDK MUST transition to Aries Askar and related components as soon as possible. See the Indy SDK to Askar Migration Guide for guidance.
"},{"location":"features/SupportedRFCs/#miscellaneous-features","title":"Miscellaneous Features","text":"Feature Supported Notes ACA-Py Plugins The ACA-Py Plugins repository contains a growing set of plugins that are maintained and (mostly) tested against new releases of ACA-Py. Multi use invitations Invitations using public did Invitations using peer dids supporting connection reuse Implicit pickup of messages in role of mediator Revocable AnonCreds Credentials Multi-Tenancy Documentation Multi-Tenant Management The Traction open source project from BC Gov is a layer on top of ACA-Py that enables the easy management of ACA-Py tenants, with an Administrative UI (\"The Innkeeper\") and a Tenant UI for using ACA-Py in a web UI (setting up, issuing, holding and verifying credentials) Connection-less (non OOB protocol / AIP 1.0) Only for issue credential and present proof Connection-less (OOB protocol / AIP 2.0) Only for present proof Signed Attachments Used for OOB Multi Indy ledger support (with automatic detection) Support added in the 0.7.3 Release. Persistence of mediated messages Plugins in the ACA-Py Plugins repository are available for persistent queue support using Redis and Kafka. Without persistent queue support, messages are stored in an in-memory queue and so are subject to loss in the case of a sudden termination of an ACA-Py process. The in-memory queue is properly handled in the case of a graceful shutdown of an ACA-Py process (e.g. processing of the queue completes and no new messages are accepted). Storage Import & Export Supported by directly interacting with the Aries Askar (e.g., no Admin API endpoint available for wallet import & export). Aries Askar support includes the ability to import storage exported from the Indy SDK's \"indy-wallet\" component. Documentation for migrating from Indy SDK storage to Askar can be found in the Indy SDK to Askar Migration Guide. SD-JWTs Signing and verifying SD-JWTs is supported"},{"location":"features/SupportedRFCs/#supported-rfcs","title":"Supported RFCs","text":""},{"location":"features/SupportedRFCs/#aip-10","title":"AIP 1.0","text":"All RFCs listed in AIP 1.0 are fully supported in ACA-Py, but deprecation and removal of some of the protocols has begun. The following table provides notes about the implementation of specific RFCs.
- 0025-didcomm-transports: ACA-Py currently supports HTTP and WebSockets for both inbound and outbound messaging. Transports are pluggable and an agent instance can use multiple inbound and outbound transports.
- 0160-connection-protocol: DEPRECATED. The protocol will be removed in the next release. It will continue to be available as an ACA-Py plugin, but those upgrading to that pending release and continuing to use this protocol will need to include the plugin in their deployment configuration. Users SHOULD upgrade to the equivalent AIP 2.0 protocols as soon as possible.
- 0036-issue-credential-v1.0: DEPRECATED. The protocol will be removed in the next release. It will continue to be available as an ACA-Py plugin, but those upgrading to that pending release and continuing to use this protocol will need to include the plugin in their deployment configuration. Users SHOULD upgrade to the equivalent AIP 2.0 protocols as soon as possible.
- 0037-present-proof-v1.0: DEPRECATED. The protocol will be removed in the next release. It will continue to be available as an ACA-Py plugin, but those upgrading to that pending release and continuing to use this protocol will need to include the plugin in their deployment configuration. Users SHOULD upgrade to the equivalent AIP 2.0 protocols as soon as possible.

"},{"location":"features/SupportedRFCs/#aip-20","title":"AIP 2.0","text":"All RFCs listed in AIP 2.0 (including the sub-targets) are fully supported in ACA-Py EXCEPT as noted below; currently, all AIP 2.0 RFCs are Fully Supported.

"},{"location":"features/SupportedRFCs/#other-supported-rfcs","title":"Other Supported RFCs","text":"

- 0031-discover-features: Rarely (never?) used, and in implementing the V2 version of the protocol, the V1 version was found to be incomplete and was updated as part of Release 0.7.3
- 0028-introduce
- 0509-action-menu

"},{"location":"features/UsingOpenAPI/","title":"Aries Cloud Agent-Python (ACA-Py) - OpenAPI Code Generation Considerations","text":"ACA-Py provides an OpenAPI-documented REST interface for administering the agent's internal state and initiating communication with connected agents.
The running agent provides a `Swagger User Interface` that can be browsed and used to test various scenarios manually (see the Admin API Readme for details). However, it is often desirable to produce native language interfaces rather than coding `Controllers` using HTTP primitives. This is possible using several public code generation (codegen) tools. This page provides some suggestions based on experience with these tools when trying to generate `Typescript` wrappers. The information should be useful to those trying to generate other languages. Updates to this page based on experience are encouraged.

ACA-Py uses aiohttp_apispec tags in code to produce the OpenAPI spec file at runtime, dependent on what features have been loaded. How these tags are created is documented in the API Standard Behavior section of the Admin API Readme. The OpenAPI spec is available in raw, unformatted form from a running ACA-Py instance using the route `http://<acapy host and port>/api/docs/swagger.json`, or from the browser `Swagger User Interface` directly.
The ACA-Py Admin API evolves across releases. To track these changes and ensure conformance with the OpenAPI specification, we provide a tool located at `scripts/generate-open-api-spec`. This tool starts ACA-Py, retrieves the `swagger.json` file, and runs codegen tools to generate specifications in both Swagger and OpenAPI formats with `json` language output. The output of this tool enables comparison with the checked-in `open-api/swagger.json` and `open-api/openapi.json`, and also serves as a useful resource for identifying any non-conformance to the OpenAPI specification. At the moment, `validation` is turned off via the `open-api/openAPIJSON.config` file, so warning messages are printed for non-conformance, but the `json` is still output. Most of the warnings reported by `generate-open-api-spec` relate to missing `operationId` fields, which result in manufactured method names being created by codegen tools. At the moment, aiohttp_apispec does not support adding `operationId` annotations via tags.
The `generate-open-api-spec` tool was initially created to help identify issues with method parameters not being sorted, resulting in somewhat random ordering each time a codegen operation was performed. This is relevant for languages that do not support named parameters, such as `Javascript`. It is recommended that `generate-open-api-spec` be run prior to each release, and the resulting `open-api/openapi.json` file checked in, to allow tracking of API changes over time. At the moment, this process is not automated as part of the release pipeline.
There are inevitably differences around `best practice` for method naming based on coding language and organization standards.

Best practice for generating ACA-Py language wrappers is to obtain the raw OpenAPI file from a configured/running ACA-Py instance and then post-process it with a merge utility to match routes and insert desired `operationId` fields. This allows the greatest flexibility in conforming to external naming requirements.
Two major open-source code generation tools are Swagger and OpenAPI Tools. Which of these to use can depend heavily on the language support required and your preference for the style of code generated.

OpenAPI Tools was found to offer some nice features when generating `Typescript`. It creates separate files for each class and allows the use of a `.openapi-generator-ignore` file to override generation if there is a spec file issue that needs to be maintained manually.

If generating code for languages that do not support named parameters, it is recommended to specify `useSingleRequestParameter` or the equivalent in your code generator of choice. The reason is that, as mentioned previously, there have been instances where parameters were not sorted when output into the raw ACA-Py API spec file, and this approach helps remove that risk.

Another suggestion for code generation is to keep `modelPropertyNaming` set to `original` when generating code. Although it is tempting to enable marshalling into standard naming formats such as `camelCase`, the reality is that the models represent what is sent on the wire and documented in the Aries Protocol RFCs. It has proven handy to be able to see code references correspond directly with the protocol RFCs when debugging. The names will also correspond directly with what the `model` shows when looking at the ACA-Py `Swagger UI` in a browser if you need to try something out manually before coding. One final point: on occasion, it has been discovered that the code generation tools don't always get the marshalling correct in all circumstances when changing the model name format.
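As a hedged sketch of the suggestions above (the generator choice, output directory, and admin host/port here are illustrative assumptions, not project conventions), a `Typescript` client could be generated from a running agent's spec with the OpenAPI Tools generator like this:

# Fetch the raw spec from a running ACA-Py instance (host/port are deployment-specific)
curl -o swagger.json "http://localhost:8031/api/docs/swagger.json"

# Generate a Typescript client, keeping wire-format property names and
# using a single request parameter object per operation
npx @openapitools/openapi-generator-cli generate \
  -i swagger.json \
  -g typescript-axios \
  -o ./generated-client \
  --additional-properties=useSingleRequestParameter=true,modelPropertyNaming=original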
This document outlines new functionality within the Aries agent that facilitates the issuance of credentials and presentations in compliance with the W3C standard.
"},{"location":"features/W3cCredentials/#table-of-contents","title":"Table of Contents","text":"did:key
The introduction of VC-DI credentials in ACA-Py facilitates the issuance of credentials and presentations in adherence to the W3C standard.
"},{"location":"features/W3cCredentials/#prerequisites","title":"Prerequisites","text":"Before utilizing this feature, it is essential to have the following:
"},{"location":"features/W3cCredentials/#verifiable-credentials-data-model","title":"Verifiable Credentials Data Model","text":"A basic understanding of the Verifiable Credentials Data Model is required. Resources for reference include:
Familiarity with the Verifiable Presentations Data Model is necessary. Relevant resources can be found at:
Understanding the DIF Presentation Format is recommended. Access resources at:
To prepare for credential issuance, the following steps must be taken:
"},{"location":"features/W3cCredentials/#vc-di-context","title":"VC-DI Context","text":"Ensure that every property key in the document is mappable to an IRI. This requires either the property key to be an IRI by default or to have the shorthand property mapped in the @context
of the document.
{\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://w3id.org/security/data-integrity/v2\",\n {\n \"@vocab\": \"https://www.w3.org/ns/credentials/issuer-dependent#\"\n }\n ]\n}\n
"},{"location":"features/W3cCredentials/#signature-suite","title":"Signature Suite","text":"Select a signature suite for use. VC-DI format currently supports EdDSA signature suites for issuing credentials.
Ed25519Signature2020
Choose a DID method for issuing the credential. VC-DI format currently supports the did:key
method.
did:key
","text":"A did:key
did is not anchored to a ledger, but embeds the key directly in the identifier part of the did. See the did:key Method Specification for more information.
You can create a did:key
using the /wallet/did/create
endpoint with the following body.
{\n \"method\": \"key\",\n \"options\": {\n \"key_type\": \"ed25519\"\n }\n}\n
"},{"location":"features/W3cCredentials/#issue-a-credential","title":"Issue a Credential","text":"The issuance of W3C credentials is facilitated through the /issue-credential-2.0/send
endpoint. This process adheres to the formats described in RFC 0809 VC-DI and utilizes didcomm
for communication between agents.
To issue a W3C credential, follow these steps:
{\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://w3id.org/security/data-integrity/v2\",\n {\n \"@vocab\": \"https://www.w3.org/ns/credentials/issuer-dependent#\"\n }\n ],\n \"type\": [\"VerifiableCredential\"],\n \"issuer\": \"did:key:z6MkqG......\",\n \"issuanceDate\": \"2023-01-01T00:00:00Z\",\n \"credentialSubject\": {\n \"id\": \"did:key:z6Mkh......\",\n \"name\": \"John Doe\"\n },\n \"proof\": {\n \"type\": \"Ed25519Signature2020\",\n \"created\": \"2023-01-01T00:00:00Z\",\n \"proofPurpose\": \"assertionMethod\",\n \"verificationMethod\": \"did:key:z6MkqG......#z6MkqG......\",\n \"proofValue\": \"eyJhbGciOiJFZERTQSJ9...\"\n }\n}\n
The format used to change the credential can be seen in the Demo Instructions.

Use the `/issue-credential-2.0/send` endpoint to issue the credential.

{\n  \"auto_issue\": true,\n  \"auto_remove\": false,\n  \"comment\": \"Issuing a test credential\",\n  \"credential_preview\": {\n    \"@type\": \"https://didcomm.org/issue-credential/2.0/credential-preview\",\n    \"attributes\": [\n      {\"name\": \"name\", \"value\": \"John Doe\"}\n    ]\n  },\n  \"filter\": {\n    \"format\": {\n      \"cred_def_id\": \"FMB5MqzuhR...\"\n    }\n  },\n  \"trace\": false\n}\n
{\n \"state\": \"credential_issued\",\n \"credential_id\": \"12345\",\n \"thread_id\": \"abcde\",\n \"role\": \"issuer\"\n}\n
"},{"location":"features/W3cCredentials/#verify-a-credential","title":"Verify a Credential","text":"To verify a credential, follow these steps:
{\n \"verifiableCredential\": [\n {\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://w3id.org/security/data-integrity/v2\"\n ],\n \"type\": [\"VerifiableCredential\"],\n \"issuer\": \"did:key:z6MkqG......\",\n \"issuanceDate\": \"2023-01-01T00:00:00Z\",\n \"credentialSubject\": {\n \"id\": \"did:key:z6Mkh......\",\n \"name\": \"John Doe\"\n },\n \"proof\": {\n \"type\": \"Ed25519Signature2020\",\n \"created\": \"2023-01-01T00:00:00Z\",\n \"proofPurpose\": \"assertionMethod\",\n \"verificationMethod\": \"did:key:z6MkqG......#z6MkqG......\",\n \"proofValue\": \"eyJhbGciOiJFZERTQSJ9...\"\n }\n }\n ]\n}\n
Use the `/present-proof/send-request` endpoint.

{\n \"presentation\": {\n \"verifiableCredential\": [\n {\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://w3id.org/security/data-integrity/v2\"\n ],\n \"type\": [\"VerifiableCredential\"],\n \"issuer\": \"did:key:z6MkqG......\",\n \"issuanceDate\": \"2023-01-01T00:00:00Z\",\n \"credentialSubject\": {\n \"id\": \"did:key:z6Mkh......\",\n \"name\": \"John Doe\"\n },\n \"proof\": {\n \"type\": \"Ed25519Signature2020\",\n \"created\": \"2023-01-01T00:00:00Z\",\n \"proofPurpose\": \"assertionMethod\",\n \"verificationMethod\": \"did:key:z6MkqG......#z6MkqG......\",\n \"proofValue\": \"eyJhbGciOiJFZERTQSJ9...\"\n }\n }\n ]\n }\n}\n
{\n \"verified\": true,\n \"presentation\": {\n \"type\": \"VerifiablePresentation\",\n \"verifiableCredential\": [\n {\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://w3id.org/security/data-integrity/v2\"\n ],\n \"type\": [\"VerifiableCredential\"],\n \"issuer\": \"did:key:z6MkqG......\",\n \"issuanceDate\": \"2023-01-01T00:00:00Z\",\n \"credentialSubject\": {\n \"id\": \"did:key:z6Mkh......\",\n \"name\": \"John Doe\"\n },\n \"proof\": {\n \"type\": \"Ed25519Signature2020\",\n \"created\": \"2023-01-01T00:00:00Z\",\n \"proofPurpose\": \"assertionMethod\",\n \"verificationMethod\": \"did:key:z6MkqG......#z6MkqG......\",\n \"proofValue\": \"eyJhbGciOiJFZERTQSJ9...\"\n }\n }\n ],\n \"proof\": {\n \"type\": \"Ed25519Signature2020\",\n \"created\": \"2023-01-01T00:00:00Z\",\n \"proofPurpose\": \"authentication\",\n \"verificationMethod\": \"did:key:z6MkqG......#z6MkqG......\",\n \"proofValue\": \"eyJhbGciOiJFZERTQSJ9...\"\n }\n }\n}\n
"},{"location":"features/W3cCredentials/#present-proof","title":"Present Proof","text":""},{"location":"features/W3cCredentials/#requesting-proof","title":"Requesting Proof","text":"To request proof, follow these steps:
{\n \"presentation_definition\": {\n \"id\": \"example-presentation-definition\",\n \"input_descriptors\": [\n {\n \"id\": \"example-input-descriptor\",\n \"schema\": [\n {\n \"uri\": \"https://www.w3.org/2018/credentials/v1\"\n }\n ],\n \"constraints\": {\n \"fields\": [\n {\n \"path\": [\"$.credentialSubject.name\"],\n \"filter\": {\n \"type\": \"string\",\n \"pattern\": \"John Doe\"\n }\n }\n ]\n }\n }\n ]\n }\n}\n
/present-proof-2.0/send-request
endpoint.{\n \"comment\": \"Requesting proof of name\",\n \"presentation_request\": {\n \"presentation_definition\": {\n \"id\": \"example-presentation-definition\",\n \"input_descriptors\": [\n {\n \"id\": \"example-input-descriptor\",\n \"schema\": [\n {\n \"uri\": \"https://www.w3.org/2018/credentials/v1\"\n }\n ],\n \"constraints\": {\n \"fields\": [\n {\n \"path\": [\"$.credentialSubject.name\"],\n \"filter\": {\n \"type\": \"string\",\n \"pattern\": \"John Doe\"\n }\n }\n ]\n }\n }\n ]\n }\n }\n}\n
{\n \"state\": \"presentation_received\",\n \"thread_id\": \"abcde\",\n \"role\": \"verifier\"\n}\n
"},{"location":"features/W3cCredentials/#presenting-proof","title":"Presenting Proof","text":"To present proof, follow these steps:
{\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://w3id.org/security/data-integrity/v2\"\n ],\n \"type\": [\"VerifiablePresentation\"],\n \"verifiableCredential\": [\n {\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://w3id.org/security/data-integrity/v2\"\n ],\n \"type\": [\"VerifiableCredential\"],\n \"issuer\": \"did:key:z6MkqG......\",\n \"issuanceDate\": \"2023-01-01T00:00:00Z\",\n \"credentialSubject\": {\n \"id\": \"did:key:z6Mkh......\",\n \"name\": \"John Doe\"\n },\n \"proof\": {\n \"type\": \"Ed25519Signature2020\",\n \"created\": \"2023-01-01T00:00:00Z\",\n \"proofPurpose\": \"assertionMethod\",\n \"verificationMethod\": \"did:key:z6MkqG......#z6MkqG......\",\n \"proofValue\": \"eyJhbGciOiJFZERTQSJ9...\"\n }\n }\n ]\n}\n
/present-proof-2.0/send-request
endpoint.{\n \"presentation\": {\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://w3id.org/security/data-integrity/v2\"\n ],\n \"type\": [\"VerifiablePresentation\"],\n \"verifiableCredential\": [\n {\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://w3id.org/security/data-integrity/v2\"\n ],\n \"type\": [\"VerifiableCredential\"],\n \"issuer\": \"did:key:z6MkqG......\",\n \"issuanceDate\": \"2023-01-01T00:00:00Z\",\n \"credentialSubject\": {\n \"id\": \"did:key:z6Mkh......\",\n \"name\": \"John Doe\"\n },\n \"proof\": {\n \"type\": \"Ed25519Signature2020\",\n \"created\": \"2023-01-01T00:00:00Z\",\n \"proofPurpose\": \"assertionMethod\",\n \"verificationMethod\": \"did:key:z6MkqG......#z6MkqG......\",\n \"proofValue\": \"eyJhbGciOiJFZERTQSJ9...\"\n }\n }\n ]\n },\n \"comment\": \"Presenting proof of name\"\n}\n
{\n \"state\": \"presentation_sent\",\n \"thread_id\": \"abcde\",\n \"role\": \"prover\"\n}\n
"},{"location":"features/W3cCredentials/#verifying-proof","title":"Verifying Proof","text":"To verify presented proof, follow these steps:
{\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://w3id.org/security/data-integrity/v2\"\n ],\n \"type\": [\"VerifiablePresentation\"],\n \"verifiableCredential\": [\n {\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://w3id.org/security/data-integrity/v2\"\n ],\n \"type\": [\"VerifiableCredential\"],\n \"issuer\": \"did:key:z6MkqG......\",\n \"issuanceDate\": \"2023-01-01T00:00:00Z\",\n \"credentialSubject\": {\n \"id\": \"did:key:z6Mkh......\",\n \"name\": \"John Doe\"\n },\n \"proof\": {\n \"type\": \"Ed25519Signature2020\",\n \"created\": \"2023-01-01T00:00:00Z\",\n \"proofPurpose\": \"assertionMethod\",\n \"verificationMethod\": \"did:key:z6MkqG......#z6MkqG......\",\n \"proofValue\": \"eyJhbGciOiJFZERTQSJ9...\"\n }\n }\n ]\n}\n
/present-proof-2.0/send-request
endpoint.{\n \"presentation\": {\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://w3id.org/security/data-integrity/v2\"\n ],\n \"type\": [\"VerifiablePresentation\"],\n \"verifiableCredential\": [\n {\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://w3id.org/security/data-integrity/v2\"\n ],\n \"type\": [\"VerifiableCredential\"],\n \"issuer\": \"did:key:z6MkqG......\",\n \"issuanceDate\": \"2023-01-01T00:00:00Z\",\n \"credentialSubject\": {\n \"id\": \"did:key:z6Mkh......\",\n \"name\": \"John Doe\"\n },\n \"proof\": {\n \"type\": \"Ed25519Signature2020\",\n \"created\": \"2023-01-01T00:00:00Z\",\n \"proofPurpose\": \"assertionMethod\",\n \"verificationMethod\": \"did:key:z6MkqG......#z6MkqG......\",\n \"proofValue\": \"eyJhbGciOiJFZERTQSJ9...\"\n }\n }\n ]\n }\n}\n
{\n \"verified\": true,\n \"presentation\": {\n \"type\": \"VerifiablePresentation\",\n \"verifiableCredential\": [\n {\n \"@context\": [\n \"https://www.w3.org/2018/credentials/v1\",\n \"https://w3id.org/security/data-integrity/v2\"\n ],\n \"type\": [\"VerifiableCredential\"],\n \"issuer\": \"did:key:z6MkqG......\",\n \"issuanceDate\": \"2023-01-01T00:00:00Z\",\n \"credentialSubject\": {\n \"id\": \"did:key:z6Mkh......\",\n \"name\": \"John Doe\"\n },\n \"proof\": {\n \"type\": \"Ed25519Signature2020\",\n \"created\": \"2023-01-01T00:00:00Z\",\n \"proofPurpose\": \"assertionMethod\",\n \"verificationMethod\": \"did:key:z6MkqG......#z6MkqG......\",\n \"proofValue\": \"eyJhbGciOiJFZERTQSJ9...\"\n }\n }\n ]\n }\n}\n
"},{"location":"features/W3cCredentials/#appendix","title":"Appendix","text":""},{"location":"features/W3cCredentials/#glossary-of-terms","title":"Glossary of Terms","text":"The following guide will get you up and running and developing/debugging ACA-Py as quickly as possible. We provide a devcontainer
and will use VS Code
to illustrate.
By no means is ACA-Py limited to these tools; they are merely examples.
For information on running demos and tests using provided shell scripts, see DevReadMe readme.
"},{"location":"features/devcontainer/#caveats","title":"Caveats","text":"The primary use case for this devcontainer
is for developing, debugging and unit testing (pytest) the aries_cloudagent source code.
There are limitations running this devcontainer, such as all networking is within this container. This container has docker-in-docker which allows running demos, building docker images, running docker compose
all within this container.
The .devcontainer
folder contains the devcontainer.json
file which defines this container. We are using a Dockerfile
and post-install.sh
to build and configure the container run image. The Dockerfile
is simple but in place for simplifying image enhancements (ex. adding poetry
to the image). The post-install.sh
will install some additional development libraries (including for BDD support).
What are Development Containers?
A Development Container (or Dev Container for short) allows you to use a container as a full-featured development environment. It can be used to run an application, to separate tools, libraries, or runtimes needed for working with a codebase, and to aid in continuous integration and testing. Dev containers can be run locally or remotely, in a private or public cloud.
see https://containers.dev.
In this guide, we will use Docker and Visual Studio Code with the Dev Containers Extension installed, please set your machine up with those. As of writing, we used the following:
To open ACA-Py in a devcontainer, we open the root of this repository. We can open in 2 ways:
Dev Containers: Open Folder in Container...
File|Open Folder...
, you should be prompted to Reopen in Container
.NOTE follow any prompts to install Python Extension
or reload window for Pylance
when first building the container.
ADDITIONAL NOTE we advise that after each time you rebuild the container that you also perform: Developer: Reload Window
as some extensions seem to require this in order to work as expected.
When the .devcontainer/devcontainer.json is opened, you will see it building... it is building a Python 3.12 image (bash shell) and loading it with all the ACA-Py requirements. We also load a few Visual Studio settings (for running Pytests and formatting with Ruff).
"},{"location":"features/devcontainer/#poetry","title":"Poetry","text":"The Python libraries / dependencies are installed using poetry
. For the devcontainer, we DO NOT use virtual environments. This means you will not see or need venv prompts in the terminals and you will not need to run tasks through poetry (ie. poetry run ruff check .
). If you need to add new dependencies, you will need to add the dependency via poetry AND you should rebuild your devcontainer.
In VS Code, open a Terminal, you should be able to run the following commands:
python -m aries_cloudagent -v\ncd aries_cloudagent\nruff check .\npoetry --version\n
The first command should show you that acapy_agent
module is loaded (ACA-Py). The others are examples of code quality checks that ACA-Py does on commits (if you have precommit
installed) and Pull Requests.
When running ruff check .
in the terminal, you may see error: Failed to initialize cache at /.ruff_cache: Permission denied (os error 13)
- that's ok. If there are actual ruff errors, you should see something like:
error: Failed to initialize cache at /.ruff_cache: Permission denied (os error 13)\nadmin/base_server.py:7:7: D101 Missing docstring in public class\nFound 1 error.\n
"},{"location":"features/devcontainer/#extensions","title":"extensions","text":"We have added Ruff extensions. Although we have added launch settings for both ruff
, you can also use the extension commands from the command palette.
ruff (format) - acapy_agent
More importantly, these extensions are now added to document save, so files will be formatted and checked. We advise that after each time you rebuild the container that you also perform: Developer: Reload Window
to ensure the extensions are loaded correctly.
Start by running a von-network inside your dev container. Or connect to a hosted ledger. You will need to adjust the ledger configurations if you do this.
git clone https://github.com/bcgov/von-network\ncd von-network\n./manage build\n./manage start\ncd ..\n
If you want to have revocation then start up a tails server in your dev container. Or connect to a hosted tails server. Once again you will need to adjust the configurations.
git clone https://github.com/bcgov/indy-tails-server.git\ncd indy-tails-server/docker\n./manage build\n./manage start\ncd ../..\n
# open a terminal in VS Code...\ncd demo\n./run_demo faber\n# open a second terminal in VS Code...\ncd demo\n./run_demo alice\n# follow the script...\n
"},{"location":"features/devcontainer/#further-reading-and-links","title":"Further Reading and Links","text":"To better illustrate debugging pytests and ACA-Py runtime code, let's add some run/debug configurations to VS Code. If you have your own launch.json
and settings.json
, please cut and paste what you want/need.
cp -R .vscode-sample .vscode\n
This will add a launch.json
, settings.json
and multiple ACA-Py configuration files for developing with different scenarios.
Having multiple agents is to demonstrate launching multiple agents in a debug session. Any of the config files and the launch file can be changed and customized to meet your needs. They are all setup to run on different ports so they don't interfere with each other. Running the debug session from inside the dev container allows you to contact other services such as a local ledger or tails server using localhost, while still being able to access the swagger admin api through your browser.
For all the agents if you want to use another ledger (von-network) other than localhost you will need to change the genesis-url
config. For all the agents if you don't want to support revocation you need to remove or comment out the tails-server-base-url
config. If you want to use a non localhost server then you will need to change the url.
./run_demo faber --endorser-role author
to see all the steps to become and endorser../run_demo faber --endorser-role author
to see all the steps to become and author. You need to uncomment the configurations for automating the connection to endorser.To run your ACA-Py code in debug mode, go to the Run and Debug
view, select the agent(s) you want to start and click Start Debugging (F5)
.
This will start your source code as a running ACA-Py instance, all configuration is in the *.yml
files. This is just a sample of a configuration. Note that we are not using a database and are joining to a local VON Network (by default, it would be http://localhost:9000
). You could change this or another ledger such as http://test.bcovrin.vonx.io
. These are purposefully, very simple configurations.
For example, open aries_cloudagent/admin/server.py
and set a breakpoint in async def status_handler(self, request: web.BaseRequest):
, then call GET /status
in the Admin Console and hit your breakpoint.
Pytest is installed and almost ready; however, we must build the test list. In the Command Palette, Test: Refresh Tests
will scan and find the tests.
See Python Testing for more details, and Test Commands for usage.
WARNING: our pytests include coverage, which will prevent the debugger from working. One way around this would be to have a .vscode/settings.json
that says not to use coverage (see above). This will allow you to set breakpoints in the pytest and code under test and use commands such as Test: Debug Tests in Current File
to start debugging.
WARNING: the project configuration found in pyproject.toml
include performing ruff
checks when we run pytest
. Including ruff
does not play nice with the Testing view. In order to have our pytests discoverable AND available in the Testing view, we create a .pytest.ini
when we build the devcontainer. This file will not be committed to the repo, nor does it impact ./scripts/run_tests
but it will impact if you manually run the pytest commands locally outside of the devcontainer. Just be aware that the file will stay on your file system after you shutdown the devcontainer.
At this point, you now have a development environment where you can add pytests, add ACA-Py code and run and debug it all. Be aware there are limitations with devcontainer
and other docker networks. You may need to adjust other docker-compose files not to start their own networks, and you may need to reference containers using host.docker.internal
. This isn't a panacea but should get you going in the right direction and provide you with some development tools.
This guide is to get you from (pretty much) zero to developing code for issuing (and verifying) credentials with your own ACA-Py agent. On the way, you'll look at Hyperledger Indy and how it works, find out about the architecture and components of an ACA-Py agent and its underlying messaging protocols. Scan the list of topics below and jump in as soon as you hit a topic you don't know.
Note that in the guidance we have here, we include not only the links to look at, but we recommend that you not look at certain material to which you might naturally gravitate. That's because the material is out of date and will take you down some unnecessary rabbit holes. Keep your eyes on the goal - developing with Aries to interact with other agents to (amongst other things) connect, issue, hold, present and verify verifiable credentials.
Want to help with this guide? Please add issues or submit a pull request to improve the document. Point out things that are missing, things to improve and especially things that are wrong.
"},{"location":"gettingStarted/ACA-PyAgentArchitecture/","title":"ACA-Py Internals: Agent and Controller","text":"This section talks in particular about the architecture of ACA-Py. An instance of an ACA-Py agent is actually made up of to two parts - the agent itself and a controller.
The agent handles all of the core Aries/non-Aries functionality such as interacting with other agents, managing secure storage, sending event notifications to, and receiving directions from, the controller. The controller provides the business logic that defines how that particular agent instance behaves--how to respond to events in the agent, and when to trigger the agent to initiate events. The controller might be a web or native user interface for a person or it might be coded business rules driven by an enterprise system.
Between the two is a simple interface. The agent sends event notifications to the controller and the controller sends administrator messages to the agent. The controller registers a webhook with the agent, and the event notifications are HTTP callbacks, and the agent exposes a REST API to the controller for all of the administrative messages it is configured to handle. Each of the DIDComm protocols supported by the agent adds a set of administrative messages for the controller to use in responding to events. The Aries cloud agent includes an OpenAPI (aka Swagger) user interface for a developer to use to explore the API for a specific agent.
As such, the agent is just a configured dependency in an ACA-Py deployment. Thus, the vast majority of ACA-Py developers will focus on building controllers (business logic) and perhaps some custom plugins (protocols, as we'll discuss soon) for the agent. Only a relatively small group of ACA-Py maintainers will focus on adding and maintaining the agent dependency.
Want more details about the agent and controller internals? Take a look at the ACA_Py deployment model document.
Back to the ACA-Py Developer - Getting Started Guide.
"},{"location":"gettingStarted/ACA-PyBasics/","title":"What is ACA-Py?","text":"ACA-Py is a shared, reusable, interoperable tool kit designed for initiatives and solutions focused on creating, transmitting and storing verifiable digital credentials. It is infrastructure for trusted, decentralized, peer-to-peer interactions. It includes a shared secure storage and a key management service for clients, as well as communication protocols for trusted interaction between agents.
An ACA-Py agent (such as the one in this repository):
The some of the concepts and features that make up the ACA-Py project are documented in the aries-rfcs - but don't dive in there yet! We'll get to the features and concepts to be found there with a guided tour of the key RFCs.
Back to the ACA-Py Developer - Getting Started Guide.
"},{"location":"gettingStarted/ACA-PyBigPicture/","title":"ACA-Py Agents in context: The Big Picture","text":"ACA-Py agents can be used in a lot of places. This classic Indy Architecture picture shows five agents - the four around the outside (on a phone, a tablet, a laptop and an enterprise server) are referred to as \"edge agents\", and many cloud agents in the blue circle.
The agents in the picture shares many attributes:
While there can be many other agent setups, the picture above shows the most common ones - mobile wallets for people, edge agents for organizations and cloud agents for routing messages (although cloud agents could be edge agents. Sigh...). A significant emerging use case missing from that picture are agents embedded within/associated with IoT devices. In the common IoT case, IoT device agents are just variants of other edge agents, connected to the rest of the ecosystem through a cloud agent. All the same principles apply.
Misleading in the picture is that (almost) all agents connect directly to the verifiable data repository. In this picture it's the Sovrin ledger, but that could be any ledger (e.g. set of nodes running ledger software) or non-ledger based verifiable data repositories -- such as web servers. That implies most agents embed a verifiable data registry client (usually, a DID Resolver) that makes calls to one or more types of verifiable data registries. Thus, unlike what is implied in the picture, edge agents (commonly) do not call a cloud agent to interact with the verifiable data registry - they do it directly. Super small IoT devices might be an exception to that - lacking compute/storage resources and/or connectivity, they might communicate with a cloud agent that would communicate with the verifiable data registry.
The three most common purposes of cloud agents are verifiable credential issuers, verifiers and \"mediators\" -- agents that route messages to mobile wallets that lack a persistent endpoint. For the latter, rather than messages going directly to mobile wallet (which is often impossible - for example sending to a mobile wallet), messages intended for the agent are routed through a mediator who hold the messages until the agent picks up its messages.
We also recommend not digging into all the layers described here. Just as you don't have to know how TCP/IP works to write a web app, you don't need to know how ledgers or the various protocols work to be able to build your first ACA-Py-based application. Later in this guide we'll covering the starting point you do need to know.
Back to the ACA-Py Developer - Getting Started Guide.
"},{"location":"gettingStarted/ACA-PyDeveloperDemos/","title":"Developer Demos and Samples of ACA-Py Agent","text":"Here are some demos that developers can use to get up to speed on ACA-Py. You don't have to be a developer to use these. If you can use docker and JSON, then that's enough to give these a try.
"},{"location":"gettingStarted/ACA-PyDeveloperDemos/#open-api-demo","title":"Open API demo","text":"This demo uses agents (and an Indy ledger), but doesn't implement a controller at all. Instead it uses the OpenAPI (aka Swagger) user interface to let you be the controller to connect agents, issue a credential and then proof that credential.
Collaborating Agents OpenAPI Demo
"},{"location":"gettingStarted/ACA-PyDeveloperDemos/#python-controller-demo","title":"Python Controller demo","text":"Run this demo to see a couple of simple Python controller implementations for Alice and Faber. Like the previous demo, this shows the agents connecting, Faber issuing a credential to Alice and then requesting a proof based on the credential. Running the demo is simple, but there's a lot for a developer to learn from the code.
Python-based Alice/Faber Demo
"},{"location":"gettingStarted/ACA-PyDeveloperDemos/#mobile-app-and-web-sample-bc-gov-showcase","title":"Mobile App and Web Sample - BC Gov Showcase","text":"Try out the BC Gov Showcase to download a production Wallet for holding Verifiable Credentials, and then use your new wallet to get and present credentials in some sample scenarios. The end-to-end verifiable credential experience in 30 minutes or less.
"},{"location":"gettingStarted/ACA-PyDeveloperDemos/#indicio-developer-demo","title":"Indicio Developer Demo","text":"Minimal Aca-Py demo that can be used by developers to isolate and test features:
Indicio Aca-Py Minimal Example
"},{"location":"gettingStarted/AgentConnections/","title":"Establishing a connection between ACA-Py Agents","text":"Use an ACA-Py issuer/verifier to establish a connection with a compatible mobile wallet. Run the Traction AnonCreds Workshop. Get your own (temporary -- it will be gone in a few weeks!) ACA-Py-based issuer/verifier agent. Connect to the wallet on your mobile phone, issue a credential and then present it back. Lots to learn, without ever leaving your browser!
"},{"location":"gettingStarted/ConnectIndyNetwork/","title":"Connecting to an Indy Network","text":"To be completed.
"},{"location":"gettingStarted/CredentialRevocation/","title":"Credential Revocation in ACA-Py","text":""},{"location":"gettingStarted/CredentialRevocation/#overview","title":"Overview","text":"Revocation is perhaps the most difficult aspect of verifiable credentials to manage. This is true in AnonCreds, particularly in the management of AnonCreds revocation registries (RevRegs). Through experience in deploying use cases with ACA-Py we have found that it is very difficult for the controller (the application code) to manage revocation registries, and as such, we have changed the implementation in ACA-Py to ensure that it is handling almost all the work in revoking credentials. The only thing the controller writer has to do is track the minimum things necessary to the business rules around revocation, such as whose credentials should be revoked, and how close to real-time should revocations be published?
Here is a summary of all of the AnonCreds revocation activities performed by issuers. After this, we'll provide a (much shorter) list of what an ACA-Py issuer controller has to do. For those interested, there is a more complete overview of AnonCreds revocation, including all of the roles, and some details of the cryptography behind the approach:
Since managing RevRegs is really hard for an ACA-Py controller, we have tried to minimize what an ACA-Py Issuer controller has to do, leaving everything else to be handled by ACA-Py. Of the items in the previous list, here is what an ACA-Py issuer controller does:
That is the minimum amount of tracking the controller must do while still being able to execute the business rules around revoking credentials.
From experience, we\u2019ve added to two extra features to deal with unexpected conditions:
The following are the ACA-Py steps and APIs involved in handling credential revocation.
To try these out, use the ACA-Py Alice/Faber demo with tails server support enabled. You will need to have the URL of an running instance of https://github.com/bcgov/indy-tails-server.
Include the command line parameter --tails-server-base-url <indy-tails-server url>
Publish credential definition
Credential definition is created. All required revocation collateral is also created and managed including revocation registry definition, entry, and tails file.
POST /credential-definitions\n{\n \"schema_id\": schema_id,\n \"support_revocation\": true,\n # Only needed if support_revocation is true. Defaults to 100\n \"revocation_registry_size\": size_int,\n \"tag\": cred_def_tag # Optional\n\n}\nResponse:\n{\n \"credential_definition_id\": \"credential_definition_id\"\n}\n
Issue credential
This endpoint manages revocation data. If new revocation registry data is required, it is automatically managed in the background.
POST /issue-credential/send-offer\n{\n \"cred_def_id\": credential_definition_id,\n \"revoc_reg_id\": revocation_registry_id\n \"auto_remove\": False, # We need the credential exchange record when revoking\n ...\n}\nResponse\n{\n \"credential_exchange_id\": credential_exchange_id\n}\n
Revoking credential
POST /revocation/revoke\n{\n \"rev_reg_id\": <revocation_registry_id>\n \"cred_rev_id\": <credential_revocation_id>,\n \"publish\": <true|false>\n}\n
If publish=false, you must use \u200b/issue-credential\u200b/publish-revocations
to publish pending revocations in batches. Revocation are not written to ledger until this is called.
When asking for proof, specify the time span when the credential is NOT revoked
POST /present-proof/send-request\n {\n \"connection_id\": ...,\n \"proof_request\": {\n \"requested_attributes\": [\n {\n \"name\": ...\n \"restrictions\": ...,\n ...\n \"non_revoked\": # Optional, override the global one when specified\n {\n \"from\": <seconds from Unix Epoch> # Optional, default is 0\n \"to\": <seconds from Unix Epoch>\n }\n },\n ...\n ],\n \"requested_predicates\": [\n {\n \"name\": ...\n ...\n \"non_revoked\": # Optional, override the global one when specified\n {\n \"from\": <seconds from Unix Epoch> # Optional, default is 0\n \"to\": <seconds from Unix Epoch>\n }\n },\n ...\n ],\n \"non_revoked\": # Optional, only check revocation if specified\n {\n \"from\": <seconds from Unix Epoch> # Optional, default is 0\n \"to\": <seconds from Unix Epoch>\n }\n }\n }\n
ACA-Py supports Revocation Notification v1.0.
Note: The optional ~please_ack
is not currently supported.
To notify connections to which credentials have been issued, during step 2 above, include the following attributes in the request body:
notify
- A boolean value indicating whether or not a notification should be sent. If the argument --notify-revocation
is used on startup, this value defaults to true
. Otherwise, it will default to false
. This value overrides the --notify-revocation
flag; the value of notify
always takes precedence.connection_id
- Connection ID for the connection of the credential holder. This is required when notify
is true
.thread_id
- Message Thread ID of the credential exchange message that resulted in the credential now being revoked. This is required when notify
is true
comment
- An optional comment presented to the credential holder as part of the revocation notification. This field might contain the reason for revocation or some other human readable information about the revocation.Your request might look something like:
POST /revocation/revoke\n{\n \"rev_reg_id\": <revocation_registry_id>\n \"cred_rev_id\": <credential_revocation_id>,\n \"publish\": <true|false>,\n \"notify\": true,\n \"connection_id\": <connection id>,\n \"thread_id\": <thread id>,\n \"comment\": \"optional comment\"\n}\n
"},{"location":"gettingStarted/CredentialRevocation/#holder-role","title":"Holder Role","text":"On receipt of a revocation notification, an event with topic acapy::revocation-notification::received
and payload containing the thread ID and comment is emitted on the event bus. This can be handled in plugins to further customize notification handling.
If the argument --monitor-revocation-notification
is used on startup, a webhook with the topic revocation-notification
and a payload containing the thread ID and comment is emitted to registered webhook urls.
NOTE: This capability is deprecated and will likely be removed entirely in an upcoming release of ACA-Py.
The process for creating revocation registries is completely automated - when you create a Credential Definition with revocation enabled, a revocation registry is automatically created (in fact 2 registries are created), and when a registry fills up, a new one is automatically created.
However the ACA-Py admin api supports endpoints to explicitly create a new revocation registry, if you desire.
There are several endpoints that must be called, and they must be called in this order:
Create revoc registry POST /revocation/create-registry
you need to provide the credential definition id and the size of the registry
Fix the tails file URI PATCH /revocation/registry/{rev_reg_id}
here you need to provide the full URI that will be written to the ledger, for example:
{\n \"tails_public_uri\": \"http://host.docker.internal:6543/VDKEEMMSRTEqK4m7iiq5ZL:4:VDKEEMMSRTEqK4m7iiq5ZL:3:CL:8:faber.agent.degree_schema:CL_ACCUM:3cb5c439-928c-483c-a9a8-629c307e6b2d\"\n}\n
Post the revoc def to the ledger POST /revocation/registry/{rev_reg_id}/definition
if you are an author (i.e. have a DID with restricted ledger write access) then this transaction may need to go through an endorser
Write the tails file PUT /revocation/registry/{rev_reg_id}/tails-file
the tails server will check that the registry definition is already written to the ledger
Post the initial accumulator value to the ledger POST /revocation/registry/{rev_reg_id}/entry
if you are an author (i.e. have a DID with restricted ledger write access) then this transaction may need to go through an endorser
From time to time an Issuer may want to issue credentials from a new Revocation Registry. That can be done by changing the Credential Definition, but that could impact verifiers. Revocation Registries go through a series of state changes: init
, generated
, posted
, active
, full
, decommissioned
. When issuing revocable credentials, the work is done with the active
registry record. There are always 2 active
registry records: one for tracking revocation until it is full, and the second to act as a \"hot swap\" in case issuance is done when the primary is full and being replaced. This ensures that there is always an active
registry. When rotating, all registry records (except records in init
state) are decommissioned
and a new pair of active
registry records are created.
Issuers can rotate their Credential Definition Revocation Registry records with a simple call: POST /revocation/active-registry/{cred_def_id}/rotate
It is advised that Issuers ensure the active registry is ready by calling GET /revocation/active-registry/{cred_def_id}
after rotation and before issuance (if possible).
ACA-Py Agents can communicate with each other via a message mechanism called DIDComm (DID Communication). DIDComm enables secure, asynchronous, end-to-end encrypted messaging between agents, with messages (usually) routed through some configuration of intermediary agents. ACA-Py agents use the did:peer DID method, which uses DIDs that are not published to a public verifiable data registry, but only shared privately between the communicating parties - usually just two agents.
Given the underlying secure messaging layer (routing and encryption covered later in the \"Deeper Dive\" sections), DIDComm protocols define standard sets of messages to accomplish a task. For example:
Each protocol has a specification that defines the protocol's messages, one or more roles for the different participants, and a state machine that defines the state transitions triggered by the messages. For example, in the connection protocol, the messages are \"invitation\", \"connectionRequest\" and \"connectionResponse\", the roles are \"inviter\" and \"invitee\", and the states are \"invited\", \"requested\" and \"connected\". Each participant in an instance of a protocol tracks the state based on the messages they've seen.
Code for protocols are implemented as externalized modules from the core agent code so that they can be included (or not) in an agent deployment. The protocol code must include the definition of a state object for the protocol, handlers for the protocol messages, and the events and administrative messages that are available to the controller to inject business logic into the running of the protocol. Each administrative message becomes part of the REST API exposed by the agent instance.
Developers building ACA-Py agents for a particular use case will generally focus on building controllers. They must understand the protocols that they are going to need, including the events the controller will receive, and the protocol's administrative messages exposed via the REST API. From time to time, such Aries agent developers might need to implement their own protocols.
Back to the ACA-Py Developer - Getting Started Guide.
"},{"location":"gettingStarted/DIDCommRoutingExample/","title":"DIOComm Routing - an example","text":"In this example, we'll walk through an example of complex DIDComm routing, outlining some of the possibilities that can be implemented. Do realize that the vast majority of the work is already done for you if you are just using ACA-Py. You have to define the setup your agents will use, and ACA-Py will take care of all the messy details described below.
We'll start with the Alice and Bob example from the Cross Domain Messaging Aries RFC.
What are the DIDs involved, what's in their DIDDocs, and what communications are happening between the agents as the connections are made?
"},{"location":"gettingStarted/DIDCommRoutingExample/#the-scenario","title":"The Scenario","text":"Bob and Alice want to establish a connection so that they can communicate. Bob uses an Agency endpoint (https://agents-r-us.ca
), labelled as 9 and will have an agent used for routing, labelled as 3. We'll also focus on Bob's messages from his main iPhone, labelled as 4. We'll ignore Bob's other agents (5 and 6) and we won't worry about Alice's configuration (agents 1, 2 and 8). While the process below is all about Bob, Alice and her agents are doing the same interactions within her domain.
A DID and DIDDoc are generated by each participant in each relationship. For Bob's agents (iPhone and Routing), that includes:

- Bob and Alice
- Bob and his Routing Agent
- Bob and the Agency
- Bob's Routing Agent and the Agency
That's a lot more than just the Bob and Alice relationship we usually think about!
"},{"location":"gettingStarted/DIDCommRoutingExample/#diddoc-data","title":"DIDDoc Data","text":"From a routing perspective the important information in the DIDDoc is the following (as defined in the DIDDoc Conventions Aries RFC):
services
of type did-communication
, including:serviceEndpoint
recipientKeys
array of referenced keys for the ultimate target(s) of the messageroutingKeys
array of referenced keys for the mediatorsLet's look at the did-communication
service data in the DIDDocs generated by Bob's iPhone and Routing agents, listed above:
Bob and Alice:

- The `serviceEndpoint` that Bob tells Alice about is the endpoint for the Agency.
- The `recipientKeys` entry is a key reference for Bob's iPhone specifically for Alice.
- The `routingKeys` entry is a reference to the public key for the Routing Agent.
Bob and his Routing Agent:

- The `serviceEndpoint` is empty because Bob's iPhone has no endpoint. See the note below for more on this.
- The `recipientKeys` entry is a key reference for Bob's iPhone specifically for the Routing Agent.
- The `routingKeys` array is empty.
Bob and Agency:

- The `serviceEndpoint` is the endpoint for Bob's Routing Agent.
- The `recipientKeys` entry is a key reference for Bob's iPhone specifically for the Agency.
- The `routingKeys` array has a single entry: the key reference for the Routing Agent key.
Bob's Routing Agent and Agency:

- The `serviceEndpoint` is the endpoint for Bob's Routing Agent.
- The `recipientKeys` entry is a key reference for Bob's Routing Agent specifically for the Agency.
- The `routingKeys` array is empty.

The null `serviceEndpoint` for Bob's iPhone is worth a comment. Mobile apps work by sending requests to servers, but cannot be accessed directly from a server. A DIDComm mechanism (Transports Return Route) enables a server to send messages to a mobile agent by putting the messages into the response to a request from the mobile agent. While not formalized in an Aries RFC (yet), cloud agents can use mobile platforms' (Apple and Google) notification mechanisms to trigger a user interface event.
Given that background, let's go through the sequence of events and messages that occur in building a DIDDoc for Bob's edge agent to send to Alice's edge agent. We'll start the sequence with all of the Agents in place, as the bootstrapping of the Agency, Routing Agent and Bob's iPhone is trickier than we need to go through here. We'll call that an "exercise left for the reader".
We'll start the process with Alice sending an out of band connection invitation message to Bob, e.g. through a QR code or a link in an email. Here's one possible sequence for creating the DIDDoc. Note that there are other ways this could be done:
- ... the `did-communication` service endpoint is set to the Agency public DID and ...

Note: Instead of using the DID Bob created, the Agency and Routing Agent might use the public key used to encrypt the messages for their internal routing table look-up for where to send a message. In that case, Bob and the Routing Agent share the public key instead of the DID to their respective upstream routers.
With the DIDDoc ready, Bob uses the path provided in the invitation to send a `connection-request` message to Alice with the new DID and DIDDoc. Alice now knows how to get any DIDComm message to Bob in a secure, end-to-end encrypted manner. Subsequently, when Alice sends messages to Bob's agent, she uses the information in the DIDDoc to securely send the message to the Agency endpoint; from there it is sent through to the Routing Agent and on to Bob's iPhone agent for processing. Now Bob has the information he needs to send any DIDComm message to Alice in a secure, end-to-end encrypted manner.

At this time, there are no specific DIDComm protocols for the "set up the routing" messages between the agents in Bob's domain (Agency, Routing and iPhone). Those could be implemented as proprietary protocols by each agent provider (since it's possible one vendor would write the code for each of those agents), but it's likely they will be specified as open standard DIDComm protocols.
Based on the DIDDoc that Bob has sent Alice, for her to send a DIDComm message to Bob, Alice must:
DIDComm peer-to-peer messages are asynchronous messages that one agent sends to another - for example, Faber would send to Alice. In between, there may be other agents and message processing, but at the edges, Faber appears to be messaging directly with Alice using encryption based on the DIDs and DIDDocs that the two shared when establishing a connection. The messages are JSON-LD-friendly messages with a "type" that defines the namespace, protocol, protocol version and type of the message, an "id" that is a GUID for the message, and additional fields as required by the message type.
Link: Message Types
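Putting that structure into code, here is an illustrative sketch of a message as a Python dict; the basicmessage protocol is used only as a familiar example, and the content value is made up.

```python
# Illustrative DIDComm v1 message shape: "@type" carries namespace, protocol,
# version and message type; "@id" is a GUID; remaining fields are type-specific.
import json
from uuid import uuid4

message = {
    "@type": "https://didcomm.org/basicmessage/1.0/message",
    "@id": str(uuid4()),
    "content": "Hello from Faber",  # field specific to this message type
}
print(json.dumps(message, indent=2))
```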
As protocols are executed, the data associated with the protocol is stored in the (currently named) wallet of the agent. The data primarily consists of the state object for that instance of the protocol, and any artifacts of running the protocol. For example, when establishing a connection, the metadata associated with the connection (DIDs, DID Documents and private keys) is stored in the agent's wallet. Likewise, ledger data (DIDs, schemas, credential definitions, etc.) and credentials are cached in the wallet. This is taken care of by the Aries agent and the protocols configured into the agent.
"},{"location":"gettingStarted/DIDcommMsgs/#message-decorators","title":"Message Decorators","text":"In addition to protocol specific data elements in messages, messages can include \"decorators\", standardized message elements that define cross-cutting behavior. The most common example is the \"thread\" decorator, which is used to link the messages in a protocol instance. As messages go back and forth between agents to complete an instance of a protocol (e.g. issuing a credential), the thread decorator data elements let the agents know to which protocol instance the message belongs. Other currently defined examples of decorators include attachments, localization, tracing and timing. Decorators are often processed by the core of the agent, but some are processed by the protocol message handlers. For example, the thread decorator processed to retrieve the protocol state object for that instance (thread) of the protocol before control is passed to the protocol message handler.
"},{"location":"gettingStarted/DecentralizedIdentityDemos/","title":"Decentralized Identity Use Case Demos","text":"The following are some demos that you can go through to see verifiable credentials in action. For each of the demos, we've included some guidance on what you should get out of the demo - and where you should stop exploring the demos. Later on in this guide we have some command line demos built on current generation code for developers wanting to look at what's going on under the hood.
"},{"location":"gettingStarted/DecentralizedIdentityDemos/#bc-gov-showcase","title":"BC Gov Showcase","text":"Try out the BC Gov Showcase to download a production Wallet for holding Verifiable Credentials, and then use your new wallet to get and present credentials in some sample scenarios. The end-to-end verifiable credential experience in 30 minutes or less.
"},{"location":"gettingStarted/DecentralizedIdentityDemos/#traction-anoncreds-workshop","title":"Traction AnonCreds Workshop","text":"Now that you have a wallet, how about being an issuer, and experience what is needed on that side of an exchange? To do that, try the Traction AnonCreds Workshop. Get your own (temporary -- it will be gone in a few weeks!) ACA-Py-based issuer/verifier agent. Connect to the wallet on your mobile phone, issue a credential and then present it back. Lots to learn, without ever leaving your browser!
"},{"location":"gettingStarted/DecentralizedIdentityDemos/#more-demos-please","title":"More demos, please","text":"Interested in seeing your demos/use cases added to this list? Submit an issue or a PR and we'll see about including it in this list.
"},{"location":"gettingStarted/IndyACA-PyDevOptions/","title":"What should I work on? Options for ACA-Py/Indy Developers","text":"Now that you know the basics of the ACA-Py/Indy eco-system, what do you want to work on? There are many projects at different levels of the eco-system you could choose to work on, and many ways to contribute to the community.
This is an important summary for newcomers, as often the temptation is to start at a level far below where you plan to focus your attention. Too often devs coming into the community start at "the blockchain"; at `indy-node` (the Indy public ledger) or the `indy-sdk`. That is far below where the majority of developers will work and is not really that helpful if what you really want to do is build decentralized identity applications.
In the following, we go through the layers from the top of the stack to the bottom. Our expectation is that the majority of developers will work at the application level, and there will be fewer contributing developers each layer down you go. This is not to dissuade anyone from contributing at the lower levels, but rather to say that if you are not going to contribute at the lower levels, you don't need to know everything about them. It's much like web development - you don't need to know TCP/IP to build web apps.
"},{"location":"gettingStarted/IndyACA-PyDevOptions/#building-decentralized-identity-applications","title":"Building Decentralized Identity Applications","text":"If you just want to build enterprise applications on top of the decentralized identity-related Hyperledger projects, you can start with building cloud-based controller apps using any language you want, and deploying your code with an instance of the code in the ACA-Py repository.
If you want to build a mobile agent, there are open source options available, including Bifold Wallet, which is built on Credo-TS. Both are OpenWallet Foundation projects.
As a developer building applications that use/embed ACA-Py agents, you should join the Aries Working Group's weekly calls and watch the aries-rfcs repo to see what protocols are being added and extended. In some cases, you may need to create your own protocols to be added to this repository, and if you are looking for interoperability, you should specify those protocols in an open way, involving the community.
Note that if building apps is what you want to do, you don't need to do a deep dive into the inner workings of ACA-Py, ledgers or mobile wallets. You need to know the concepts, but it's not a requirement that you know the code base intimately.
"},{"location":"gettingStarted/IndyACA-PyDevOptions/#contributing-to-aca-py","title":"Contributing to ACA-Py","text":"Of course as you build applications using ACA-Py, you will no doubt find deficiencies in the code and features you want added. Contributions to this repo will always be welcome.
"},{"location":"gettingStarted/IndyACA-PyDevOptions/#supporting-additional-ledgers","title":"Supporting Additional Ledgers","text":"ACA-Py currently supports a handful of public verifiable data registries and verifiable credentials exchange. A project goals to be \"ledger\"-agnostic, and to support a range of verifiable data registries. We're making it easier and easier to support other verifiable data registries, and would welcome assistance in adding new ones.
"},{"location":"gettingStarted/IndyACA-PyDevOptions/#other-agent-frameworks","title":"Other Agent Frameworks","text":"Although controllers for an ACA-Py instance can be written in any language, there is definitely a place for functionality equivalent (and better) to what is in this repo in other languages. Use the example provided by the ACA-Py demo, evolve that using a different language, and as you discover better ways to do things, discuss and share those improvements in the broader ACA-Py community so that this and other code bases improve.
"},{"location":"gettingStarted/IndyACA-PyDevOptions/#working-at-the-cryptographic-layer","title":"Working at the Cryptographic Layer","text":"Finally, at the deepest level, and core to all of the projects is the cryptography underpinning ACA-Py. If you are a cryptographer, that's where you want to be - and we want you there.
"},{"location":"gettingStarted/IndyBasics/","title":"Indy, Verifiable Credentials and Decentralized Identity Basics","text":"NOTE: If you are developer building apps on top of ACA-Py and Indy, you DO NOT need to know the nuts and bolts of Indy to build applications. You need to know about verifiable credentials and the concepts of self-sovereign identity. But as an app developer, you don't need to do the Indy getting started pieces. ACA-Py takes care of those details for you. The introduction linked here should be sufficient.
If you are new to Indy and verifiable credentials and want to learn the core concepts, this link provides a solid foundation in the goals and purpose of Indy, including verifiable credentials, DIDs, decentralized/self-sovereign identity, the Sovrin Foundation and more. The document is the content of the Indy chapter of the Hyperledger edX Blockchain for Business course (which you could also go through).
Feel free to do the demo that is referenced in the material, but we recommend that you not dig into that codebase. It's pretty old now - years old! We've got much more relevant examples later in this guide.
As well, don't use the guidance in the course to dive into the content about "Getting Started" with Indy. Come back here, as this content is far more relevant to the current state of Indy and ACA-Py.
"},{"location":"gettingStarted/IndyBasics/#tldr","title":"tl;dr","text":"Indy provides an implementation of the basic functions required to implement a network for self-sovereign identity (SSI) - a ledger, client SDKs for interacting with the ledger, DIDs, and capabilities for issuing, holding and proving verifiable credentials.
Back to the ACA-Py Developer - Getting Started Guide.
"},{"location":"gettingStarted/IssuingAnonCredsCredentials/","title":"Issuing AnonCreds Credentials","text":"Become an issuer, and define, publish and issue verifiable credentials to a mobile wallet. Run the Traction AnonCreds Workshop. Get your own (temporary -- it will be gone in a few weeks!) ACA-Py-based issuer/verifier agent. Connect to the wallet on your mobile phone, issue a credential and then present it back. Lots to learn, without ever leaving your browser!
"},{"location":"gettingStarted/PresentingAnonCredsProofs/","title":"Presenting AnonCreds Proofs","text":"Become a verifier, and construct a presentation request, send the request to a mobile wallet, get a presentation derived from AnonCreds verifiable credentials and verify the presentation. Run the Traction AnonCreds Workshop. Get your own (temporary -- it will be gone in a few weeks!) ACA-Py-based issuer/verifier agent. Connect to the wallet on your mobile phone, issue a credential and then present it back. Lots to learn, without ever leaving your browser!
"},{"location":"gettingStarted/RoutingEncryption/","title":"Deeper Dive: DIDComm Message Routing and Encryption","text":"Many Aries edge agents do not directly receive messages from a peer edge agent - they have agents in between that route messages to them. This is done for many reasons, such as:
https://agents-R-Us.ca
) that they are \"hidden in a crowd\".Thus, when a DIDComm message is sent from one edge agent to another, it is routed per the instructions of the receiver and for the needs of the sender. For example, in the following picture, Alice might be told by Bob to send messages to his phone (agent 4) via agents 9 and 3, and Alice might always send out messages via agent 2.
The following looks at how those requirements are met with mediators (for example, agents 9 and 3) and relays (agent 2).
"},{"location":"gettingStarted/RoutingEncryption/#inbound-routing-mediators","title":"Inbound Routing - Mediators","text":"To tell a sender how to get a message to it, an agent puts into the DIDDoc for that sender a service endpoint for the recipient (with an encryption key) and an ordered list (possibly empty) of routing keys (called \"mediators\") to use when sending the message. To send the message, the sender must:
Note that when an agent uses mediators, it is there responsibility to notify any mediators that need to know of the new relationship that has been formed using the connection protocol and the routing needs of that relationship - where to send messages that arrive destined for a given verkey. Mediator agents have what amounts to a routing table to know when they receive a forward message for a given verkey, where it should go.
Link: DIDDoc conventions for inbound routing
"},{"location":"gettingStarted/RoutingEncryption/#relays","title":"Relays","text":"Inbound routing described above covers mediators for the receiver that the sender must know about. In addition, either the sender or the receiver may also have relays they use for outbound messages. Relays are routing agents not known to other parties, but that participate in message routing. For example, an enterprise agent might send all outbound traffic to a single gateway in the organization. When sending to a relay, the sender just wraps the message in another \"forward\" message envelope.
Link: Mediators and Relays
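To make the forwarding idea concrete, here is a conceptual sketch of wrapping a packed message in `forward` envelopes, one per hop; it deliberately ignores the encryption applied at each layer and any ordering conventions, so treat it as an illustration rather than the wire format.

```python
# Conceptual only: each mediator/relay hop adds a "forward" envelope naming
# the next target key; real agents also encrypt each layer for that hop.
def wrap_for_route(packed_msg: dict, recipient_key: str, routing_keys: list) -> dict:
    envelope = {
        "@type": "https://didcomm.org/routing/1.0/forward",
        "to": recipient_key,  # ultimate recipient's verkey
        "msg": packed_msg,
    }
    for key in routing_keys:  # one more layer per mediator
        envelope = {
            "@type": "https://didcomm.org/routing/1.0/forward",
            "to": key,
            "msg": envelope,
        }
    return envelope

wire = wrap_for_route({"ciphertext": "..."}, "recipient-verkey", ["mediator-verkey"])
```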
"},{"location":"gettingStarted/RoutingEncryption/#message-encryption","title":"Message Encryption","text":"The DIDComm encryption handling is handling within the ACA-Py agent, and not really something a developer building applications using an agent needs to worry about. Further, within an ACA-Py agent, the handling of the encryption is left to various cryptographic libraries to handle. To encrypt a message, the agent code calls a pack()
function to handle the encryption, and to decrypt a message, the agent code calls a corresponding unpack()
function. The \"wire messages\" (as originally called) are described in detail here, including variations for sender authenticated and anonymous encrypting. Wire messages were meant to indicate the handling of a message from one agent directly to another, versus the higher level concept of routing a message from an edge agent to a peer edge agent.
Much thought has also gone into repudiable and non-repudiable messaging, as described here.
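The flow looks roughly like the following stand-in sketch; the `pack`/`unpack` bodies here are placeholders (base64 only, no cryptography) purely to show the call pattern, since the real functions delegate to cryptographic libraries.

```python
# Stand-ins for the pack()/unpack() call pattern described above.
# WARNING: base64 is NOT encryption; real implementations use authcrypt/anoncrypt.
import base64
import json

def pack(message: str, to_verkey: str) -> str:
    # real pack() encrypts the message for to_verkey
    return base64.b64encode(json.dumps({"to": to_verkey, "msg": message}).encode()).decode()

def unpack(wire_message: str) -> str:
    # real unpack() decrypts and (for authcrypt) authenticates the sender
    return json.loads(base64.b64decode(wire_message))["msg"]

wire = pack("hello", "recipient-verkey")
assert unpack(wire) == "hello"
```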
"},{"location":"gettingStarted/YourOwnACA-PyAgent/","title":"Creating Your Own Aries Agent","text":"Use the \"next steps\" in the Traction AnonCreds Workshop and create your own controller. The Aries ACA-Py Controllers repository has some samples to get you started.
"},{"location":"testing/AgentTracing/","title":"Using Tracing in ACA-PY","text":"ACA-Py supports message tracing, according to the Tracing RFC.
Tracing can be enabled globally, for all messages/events, or it can be enabled on an exchange-by-exchange basis.
Tracing is configured globally for the agent.
"},{"location":"testing/AgentTracing/#aca-py-configuration","title":"ACA-PY Configuration","text":"The following options can be specified when starting the aca-py agent:
--trace Generate tracing events.\n --trace-target <trace-target>\n Target for trace events (\"log\", \"message\", or http\n endpoint).\n --trace-tag <trace-tag>\n Tag to be included when logging events.\n --trace-label <trace-label>\n Label (agent name) used logging events.\n
The `--trace` option enables tracing globally for the agent; the other options configure the trace destination and content (the default is `log`).
Tracing can be enabled on an exchange-by-exchange basis by including `{ ... "trace": true, ... }` in the JSON payload to the API call (for credential and proof exchanges).
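For instance, a controller could set the flag on a single issuance request like the hedged sketch below; the endpoint path and payload fields vary by protocol version and are placeholders here.

```python
# Hypothetical: trace just one credential exchange by adding "trace": true.
import requests

ADMIN_URL = "http://localhost:8021"  # assumed admin endpoint

payload = {
    "connection_id": "3fa85f64-...",  # placeholder connection id
    # ... protocol-specific credential fields go here ...
    "trace": True,  # serialized to JSON true; traces only this exchange
}
resp = requests.post(f"{ADMIN_URL}/issue-credential-2.0/send", json=payload)
print(resp.status_code)
```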
The `run_demo` script supports the following parameters and environment variables.
Environment variables:
```
TRACE_ENABLED       Flag to enable tracing

TRACE_TARGET_URL    Host:port of endpoint to log trace events (e.g. logstash:9700)

DOCKER_NET          Docker network to join (must be used if ELK stack is running in docker)

TRACE_TAG           Tag to be included in all logged trace events
```
Parameters:
```
--trace-log    Enables tracing to the standard log output
               (sets TRACE_ENABLED, TRACE_TARGET, TRACE_TAG)

--trace-http   Enables tracing to an HTTP endpoint (specified by TRACE_TARGET_URL)
               (sets TRACE_ENABLED, TRACE_TARGET, TRACE_TAG)
```
When running the Faber controller, tracing can be enabled using the `T` menu option:

```
Faber      | Connected
    (1) Issue Credential
    (2) Send Proof Request
    (3) Send Message
    (T) Toggle tracing on credential/proof exchange
    (X) Exit?
[1/2/3/T/X] t

>>> Credential/Proof Exchange Tracing is ON
    (1) Issue Credential
    (2) Send Proof Request
    (3) Send Message
    (T) Toggle tracing on credential/proof exchange
    (X) Exit?

[1/2/3/T/X] t

>>> Credential/Proof Exchange Tracing is OFF
    (1) Issue Credential
    (2) Send Proof Request
    (3) Send Message
    (T) Toggle tracing on credential/proof exchange
    (X) Exit?

[1/2/3/T/X]
```
When `Exchange Tracing` is `ON`, all exchanges will include tracing.

You can use the `ELK` stack in the ELK Stack sub-directory as a target for trace events: just start the ELK stack using the docker-compose file and then, in two separate bash shells, start up the demo as follows:
```bash
DOCKER_NET=elknet TRACE_TARGET_URL=logstash:9700 ./run_demo faber --trace-http
```

```bash
DOCKER_NET=elknet TRACE_TARGET_URL=logstash:9700 ./run_demo alice --trace-http
```
"},{"location":"testing/AgentTracing/#hooking-into-event-messaging","title":"Hooking into event messaging","text":"ACA-PY supports sending events to web hooks, which allows the demo agents to display them in the CLI. To also send them to another end point, use the --webhook-url
option, which requires the WEBHOOK_URL
environment variable. Configure an end point running on the docker host system, port 8888, use the following:
```bash
WEBHOOK_URL=host.docker.internal:8888 ./run_demo faber --webhook-url
```
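A throwaway listener for that port could look like the sketch below; it assumes (as ACA-Py's webhook convention suggests) that events arrive as JSON POSTs on topic-specific paths, and it is for experimentation only.

```python
# Minimal stand-in webhook receiver for port 8888; prints each event.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        # ACA-Py posts events to paths like /topic/<topic>/
        print(self.path, json.loads(body) if body else {})
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8888), WebhookHandler).serve_forever()
```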
"},{"location":"testing/BDDTests/","title":"Integration Tests for ACA-Py using Behave","text":"Integration tests for ACA-Py are implemented using Behave functional tests to drive ACA-Py agents based on the alice/faber demo framework.
If you are new to the ACA-Py integration test suite, this video from ACA-Py Maintainer @ianco describes the Integration Tests in ACA-Py, how to run them and how to add more tests. See also the video at the end of this document about running Aries Agent Test Harness (AATH) tests before you submit your pull requests. Note that the relevant AATH tests are now run as part of the tests run when submitting a code PR for ACA-Py.
"},{"location":"testing/BDDTests/#getting-started","title":"Getting Started","text":"To run the ACA-Py Behave tests, open a bash shell run the following:
git clone https://github.com/bcgov/von-network\ncd von-network\n./manage build\n./manage start\ncd ..\ngit clone https://github.com/bcgov/indy-tails-server.git\ncd indy-tails-server/docker\n./manage build\n./manage start\ncd ../..\ngit clone \"https://github.com/openwallet-foundation/acapy\"\ncd acapy/demo\n./run_bdd -t ~@taa_required\n
Note that an Indy ledger and tails server are both required (these can also be specified using environment variables).
Note also that some tests require a ledger with the Indy "TAA" (Transaction Author Agreement) concept enabled; how to run these tests is described later.

By default the test suite runs using a default (SQLite) wallet; to run the tests using postgres, run the following:
```bash
# run the above commands, up to cd acapy/demo
docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d -p 5432:5432 postgres:10
ACAPY_ARG_FILE=postgres-indy-args.yml ./run_bdd
```
To run the tests against the back-end `askar` libraries (as opposed to indy-sdk), run the following:

```bash
BDD_EXTRA_AGENT_ARGS="{\"wallet-type\":\"askar\"}" ./run_bdd -t ~@taa_required
```
(Note that `wallet-type` is currently the only extra argument supported.)
You can run individual tests by specifying the tag(s):
```bash
./run_bdd -t @T001-AIP10-RFC0037
```
"},{"location":"testing/BDDTests/#running-integration-tests-which-require-taa","title":"Running Integration Tests which require TAA","text":"To run a local von-network with TAA enabled,run the following:
git clone https://github.com/bcgov/von-network\ncd von-network\n./manage build\n./manage start --taa-sample --logs\n
You can then run the TAA-enabled tests as follows:
```bash
./run_bdd -t @taa_required
```
or:
```bash
BDD_EXTRA_AGENT_ARGS="{\"wallet-type\":\"askar\"}" ./run_bdd -t @taa_required
```
The agents run on a pre-defined set of ports; however, occasionally your local system may already be using one of these ports. (For example, MacOS recently decided to use 8021 for the ftp proxy service.)
To override the default port settings:
```bash
AGENT_PORT_OVERRIDE=8030 ./run_bdd -t <some tag>
```
(Note that since the tests run multiple agents, you require up to 60 available ports.)
"},{"location":"testing/BDDTests/#note-on-bbs-signatures","title":"Note on BBS Signatures","text":"ACA-Py does not come installed with the bbs
library by default therefore integration tests involving BBS signatures (tagged with @BBS) will fail unless excluded.
You can exclude BBS tests from running with the tag `~@BBS`:

```bash
run_bdd -t ~@BBS
```
If you want to run all tests including BBS tests you should include the `--all-extras` flag:

```bash
run_bdd --all-extras
```
Note: The `bbs` library may not install on ARM (i.e. aarch64 or arm64) architecture; therefore, YMMV with testing BBS Signatures on ARM-based devices.
ACA-Py Behave tests are based on the interoperability tests that are implemented in the Aries Agent Test Harness (AATH). Both use Behave (Gherkin) to execute tests against a running ACA-Py agent (or in the case of AATH, against any compatible Aries agent); however, the ACA-Py integration tests focus on ACA-Py specific features.
AATH:
As of around the publication of ACA-Py 1.0.0 (Summer 2024), the ACA-Py CI/CD Pipeline for code PRs includes running a useful subset of AATH tests.
ACA-Py integration tests:
ACA-Py integration tests use the same configuration approach as AATH, documented here.
In addition to support for external schemas, credential data etc., the ACA-Py integration tests support configuration of the ACA-Py agents that are used to run the test. For example:

```
Scenario Outline: Present Proof where the prover does not propose a presentation of the proof and is acknowledged
    Given "3" agents
        | name  | role     | capabilities        |
        | Acme  | issuer   | <Acme_capabilities> |
        | Faber | verifier | <Acme_capabilities> |
        | Bob   | prover   | <Bob_capabilities>  |
    And "<issuer>" and "Bob" have an existing connection
    And "Bob" has an issued <Schema_name> credential <Credential_data> from <issuer>
    ...

    Examples:
        | issuer | Acme_capabilities       | Bob_capabilities | Schema_name    | Credential_data          | Proof_request  |
        | Acme   | --public-did            |                  | driverslicense | Data_DL_NormalizedValues | DL_age_over_19 |
        | Faber  | --public-did --mediator | --mediator       | driverslicense | Data_DL_NormalizedValues | DL_age_over_19 |
```
In the above example, the test will run twice using the parameters specified in the "Examples" section. The Acme, Faber and Bob agents will be started for the test and then shut down when the test is completed.

The agent's "capabilities" are specified using the same command-line parameters that are supported for the Alice/Faber demo agents.
"},{"location":"testing/BDDTests/#global-configuration-for-all-aca-py-agents-under-test","title":"Global Configuration for All ACA-Py Agents Under Test","text":"You can specify parameters that are applied to all ACA-Py agents using the ACAPY_ARG_FILE
environment variable, for example:
ACAPY_ARG_FILE=postgres-indy-args.yml ./run_bdd\n
... will apply the parameters in the `postgres-indy-args.yml` file (which just happens to configure a postgres wallet) to all agents under test.
Or the following:
```bash
ACAPY_ARG_FILE=askar-indy-args.yml ./run_bdd
```
... will run all the tests against an askar wallet (the new shared components, which replace indy-sdk).
Any ACA-Py argument can be included in the yml file, and order-of-precedence applies (see https://pypi.org/project/ConfigArgParse/).
"},{"location":"testing/BDDTests/#specifying-environment-parameters-when-running-integration-tests","title":"Specifying Environment Parameters when Running Integration Tests","text":"ACA-Py integration tests support the following environment-driven configuration:
LEDGER_URL
- specify the ledger urlTAILS_NETWORK
- specify the docker network the tails server is running onPUBLIC_TAILS_URL
- specify the public url of the tails serverACAPY_ARG_FILE
- specify global ACA-Py parameters (see above)Behave tests are tagged using the same standard tags as used in AATH.
To run a specific set of ACA-Py integration tests (or exclude specific tests):
```bash
./run_bdd -t tag1 -t ~tag2
```
(All command line parameters are passed to the `behave` command, so all parameters supported by behave can be used.)
This video is a presentation by ACA-Py developer @ianco about using the Aries Agent Test Harness for local pre-release testing of ACA-Py. Have a big change that you want to test with other Aries Frameworks? Follow this guidance to run AATH tests with your under-development branch of ACA-Py.
"},{"location":"testing/IntegrationTests/","title":"Integration Test Plan","text":"Integration testing in ACA-Py consists of 3 different levels or types.
Interoperability is extremely important in the decentralized trust/SSI community. For example, when implementing or changing features that are included in the Aries Interop Profile, the developer should try to add tests to this test suite.
These tests are contained in a separate repo, AATH. They use the gherkin syntax and an HTTP back channel. Changes to the tests need to be added and merged into this repo before they will be reflected in the automatic testing workflows. There has been a lot of work to make developing and debugging tests easier. See [AATH Dev Containers](https://github.com/hyperledger/aries-agent-test-harness/blob/main/AATH_DEV_CONTAINERS.md#dev-containers-in-aath).

The tests will then be run for PRs and scheduled workflows for ACA-Py ↔ ACA-Py agents. These tests are important because having them allows the AATH project to more easily test Credo-TS ↔ ACA-Py scenarios and ensure interoperability with mobile agents interacting with ACA-Py agents.
"},{"location":"testing/IntegrationTests/#aca-py-specific-bdd-tests","title":"ACA-Py specific BDD tests","text":"These tests leverage the demo agent and also use gherkin syntax and a back channel. See README.
These tests are another tool for leveraging the demo agent and the gherkin syntax. They should not be used to test features that involve the interop profile, as they can not be used to test against other frameworks. None of the tests that are covered by the AATH tests will be ran automatically. They are here because some developers may prefer the testing strategy and can be useful for explicit testing steps and protocols not included in the interop profile.
"},{"location":"testing/IntegrationTests/#scenario-testing","title":"Scenario testing","text":"These tests utilize the minimal example agent produced by Indicio. They exist in the scenarios
directory. They are very useful for running specific test plans and checking webhooks.
## Logging

ACA-Py supports multiple configurations of logging.
"},{"location":"testing/Logging/#log-level","title":"Log level","text":"ACA-Py's logging is based on python's logging lib. Log levels DEBUG
, INFO
and WARNING
are available. Other log levels fall back to WARNING
.
ACA-Py supports writing log messages to a file with `wallet_id` as the tenant identifier for each. To enable this, both multitenant mode (`--multitenant`) and the write-to-log-file option (`--log-file`) are required. If both `--multitenant` and `--log-file` are not passed when starting up ACA-Py, then it will use the `default_logging_config.ini` config (backward compatible) and not log at a per-tenant level.
- `--log-level` - The log level to log on std out
- `--log-file` - Enables writing of logs to file. The provided value becomes the path to a file to log to. If no value or an empty string is provided then it will try to get the path from the config file
- `--log-config` - Specifies a custom logging configuration file

Example:
```bash
./bin/aca-py start --log-level debug --log-file acapy.log --log-config acapy_agent.config:default_per_tenant_logging_config.ini

./bin/aca-py start --log-level debug --log-file --multitenant --log-config ./acapy_agent/config/default_per_tenant_logging_config.yml
```
"},{"location":"testing/Logging/#environment-variables","title":"Environment Variables","text":"The log level can be configured using the environment variable ACAPY_LOG_LEVEL
. The log file can be set by ACAPY_LOG_FILE
. The log config can be set by ACAPY_LOG_CONFIG
.
Example:
```bash
ACAPY_LOG_LEVEL=info ACAPY_LOG_FILE=./acapy.log ACAPY_LOG_CONFIG=./acapy_log.ini ./bin/aca-py start
```
"},{"location":"testing/Logging/#aca-py-config-file","title":"ACA-Py Config File","text":"Following parameters can be used in a configuration file like this.
log-level: WARNING\ndebug-connections: false\ndebug-presentations: false\n
Warning: `debug-connections` and `debug-presentations` must not be used in a production environment, as they also log credential claim values. Both parameters are independent of the log level, which means: even if log-level is set to `WARNING`, connections and presentations will be logged as in debug log level.
"},{"location":"testing/Logging/#log-config-file","title":"Log config file","text":"The path to config file is provided via --log-config
.
Find an example in default_logging_config.ini.
You can find more detail description in the logging documentation.
For per tenant logging, find an example in default_per_tenant_logging_config.ini, which sets up `TimedRotatingFileMultiProcessHandler` and `StreamHandler` handlers. The custom `TimedRotatingFileMultiProcessHandler` handler supports the ability to clean up logs by time, maintain backup logs, and apply a custom JSON formatter for logs. Its arguments, such as `file name`, `when`, `interval` and `backupCount`, can be passed as `args=('acapy.log', 'd', 7, 1,)` (also shown below). Note: a `backupCount` of 0 will mean all backup log files will be retained and not deleted at all. More details about these attributes can be found here.
```ini
[loggers]
keys=root

[handlers]
keys=stream_handler, timed_file_handler

[formatters]
keys=formatter

[logger_root]
level=ERROR
handlers=stream_handler, timed_file_handler

[handler_stream_handler]
class=StreamHandler
level=DEBUG
formatter=formatter
args=(sys.stderr,)

[handler_timed_file_handler]
class=logging.handlers.TimedRotatingFileMultiProcessHandler
level=DEBUG
formatter=formatter
args=('acapy.log', 'd', 7, 1,)

[formatter_formatter]
format=%(asctime)s %(wallet_id)s %(levelname)s %(pathname)s:%(lineno)d %(message)s
```
For `DictConfig` (`dict` logging config file), find an example in default_per_tenant_logging_config.yml with the same attributes as the `default_per_tenant_logging_config.ini` file.
```yaml
version: 1
formatters:
  default:
    format: '%(asctime)s %(wallet_id)s %(levelname)s %(pathname)s:%(lineno)d %(message)s'
handlers:
  console:
    class: logging.StreamHandler
    level: DEBUG
    formatter: default
    stream: ext://sys.stderr
  rotating_file:
    class: logging.handlers.TimedRotatingFileMultiProcessHandler
    level: DEBUG
    filename: 'acapy.log'
    when: 'd'
    interval: 7
    backupCount: 1
    formatter: default
root:
  level: INFO
  handlers:
    - console
    - rotating_file
```
"},{"location":"testing/Troubleshooting/","title":"Troubleshooting ACA-Py","text":"This document contains some troubleshooting information that contributors to the community think may be helpful. Most of the content here assumes the reader has gotten started with ACA-Py and has arrived here because of an issue that came up in their use of ACA-Py.
Contributions (via pull request) to this document are welcome. Topics added here will mostly come from reported issues that contributors think would be helpful to the larger community.
"},{"location":"testing/Troubleshooting/#table-of-contents","title":"Table of Contents","text":"The most common issue hit by first time users is getting an error on startup \"unable to connect to ledger\". Here are a list of things to check when you see that error.
"},{"location":"testing/Troubleshooting/#local-ledger-running","title":"Local ledger running?","text":"Unless you specify via startup parameters or environment variables that you are using a public Hyperledger Indy ledger, ACA-Py assumes that you are running a local ledger -- an instance of von-network. If that is the cause -- have you started your local ledger, and did it startup properly. Things to check:
- Is the ledger browser (`http://localhost:9000`) accessible? If so, can you click on and see the Genesis File?
- To use a public test ledger instead, set `LEDGER_URL=http://test.bcovrin.vonx.io`. For example, when running the Alice-Faber demo in the demo folder, you can run the Faber agent using the command: `LEDGER_URL=http://test.bcovrin.vonx.io ./run_demo faber`
Do you have any firewalls in play that might be blocking the ports that are used by the ledger, notably 9701-9708? To access a ledger the ACA-Py instance must be able to get to those ports of the ledger, regardless if the ledger is local or remote.
"},{"location":"testing/Troubleshooting/#damaged-unpublishable-revocation-registry","title":"Damaged, Unpublishable Revocation Registry","text":"We have discovered that in the ACA-Py AnonCreds implementation, it is possible to get into a state where the publishing of updates to a Revocation Registry (RevReg) is impossible. This can happen where ACA-Py starts to publish an update to the RevReg, but the write transaction to the Hyperledger Indy ledger fails for some reason. When a credential revocation is published, ACA-Py (via indy-sdk or askar/credx) updates the revocation state in the wallet as well as on the ledger. The revocation state is dependant on whatever the previous revocation state is/was, so if the ledger and wallet are mis-matched the publish will fail. (PR #1804 (merged) mitigates but probably doesn't completely eliminate this from happening).
For example, in case we've seen, the write RevRegEntry transaction failed at the ledger because there was a problem with accepting the TAA (Transaction Author Agreement). Once the error occurred, the RevReg state held by the ACA-Py agent, and the RevReg state on the ledger were different. Even after the ability to write to the ledger was restored, the RevReg could still not be published because of the differences in the RevReg state. Such a situation can now be corrected, as follows:
To address this issue, some new endpoints were added to ACA-Py in Release 0.7.4, as follows:
/revocation/registry/<id>/issued
- counts of the number of issued/revoked within a registry/revocation/registry/<id>/issued/details
- details of all credentials issued/revoked within a registry/revocation/registry/<id>/issued/indy_recs
- calculated rev_reg_delta from the ledger/revocation/registry/<id>/fix-revocation-entry-state
- publish an update to the RevReg state on the ledger to bring it into alignment with what is in the ACA-Py instance.apply_ledger_update
) to control whether the ledger entry actually gets published so, if you are so inclined, you can call the endpoint to see what the transaction would be, before you actually try to do a ledger update. This will return:rev_reg_delta
- same as the \".../indy_recs\" endpointaccum_calculated
- transaction to write to ledgeraccum_fixed
- If apply_ledger_update
, the transaction actually written to the ledgerNote that there is (currently) a backlog item to prevent the wallet and ledger from getting out of sync (e.g. don't update the ACA-Py RevReg state if the ledger write fails), but even after that change is made, having this ability will be retained for use if needed.
We originally ran into this due to the TAA acceptance getting lost when switching to multi-ledger (as described here. Note that this is one reason how this \"out of sync\" scenario can occur, but there may be others.
We add an integration test that demonstrates/tests this issue here.
To run the scenario either manually or using the integration tests, you can do the following:
./manage start --taa-sample --logs
./manage start --logs
./run_demo faber --revocation --taa-accept
, and then you can run through all the transactions using the Swagger page../run_bdd -t @taa_required
The following covers the Unit Testing framework in ACA-Py, how to run the tests, and how to add unit tests.
This video is a presentation of the material covered in this document.
"},{"location":"testing/UnitTests/#running-unit-tests-in-aca-py","title":"Running unit tests in ACA-Py","text":"./scripts/run_tests
./scripts/run_tests aries_cloudagent/protocols/out_of_band/v1_0/tests
Note: The bbs
library is not installed with ACA-Py by default, therefore unit tests involving BBS Signatures are disabled. To run BBS tests add the --all-extras
flag:
./scripts/run_tests --all-extras\n
Note: The bbs
library may not install on ARM (i.e. aarch64 or arm64) architecture therefore YMMV with testing BBS Signatures on ARM based devices.
Example: acapy_agent/core/tests/test_event_bus.py
@pytest.fixture\ndef event_bus():\n yield EventBus()\n\n\n@pytest.fixture\ndef profile():\n yield async_mock.MagicMock()\n\n\n@pytest.fixture\ndef event():\n event = Event(topic=\"anything\", payload=\"payload\")\n yield event\n\nclass MockProcessor:\n def __init__(self):\n self.profile = None\n self.event = None\n\n async def __call__(self, profile, event):\n self.profile = profile\n self.event = event\n\n\n@pytest.fixture\ndef processor():\n yield MockProcessor()\n
def test_sub_unsub(event_bus: EventBus, processor):\n \"\"\"Test subscribe and unsubscribe.\"\"\"\n event_bus.subscribe(re.compile(\".*\"), processor)\n assert event_bus.topic_patterns_to_subscribers\n assert event_bus.topic_patterns_to_subscribers[re.compile(\".*\")] == [processor]\n event_bus.unsubscribe(re.compile(\".*\"), processor)\n assert not event_bus.topic_patterns_to_subscribers\n
From aries_cloudagent/core/event_bus.py
class EventBus:\n def __init__(self):\n self.topic_patterns_to_subscribers: Dict[Pattern, List[Callable]] = {}\n\ndef subscribe(self, pattern: Pattern, processor: Callable):\n if pattern not in self.topic_patterns_to_subscribers:\n self.topic_patterns_to_subscribers[pattern] = []\n self.topic_patterns_to_subscribers[pattern].append(processor)\n\ndef unsubscribe(self, pattern: Pattern, processor: Callable):\n if pattern in self.topic_patterns_to_subscribers:\n try:\n index = self.topic_patterns_to_subscribers[pattern].index(processor)\n except ValueError:\n return\n del self.topic_patterns_to_subscribers[pattern][index]\n if not self.topic_patterns_to_subscribers[pattern]:\n del self.topic_patterns_to_subscribers[pattern]\n
@pytest.mark.asyncio\nasync def test_sub_notify(event_bus: EventBus, profile, event, processor):\n \"\"\"Test subscriber receives event.\"\"\"\n event_bus.subscribe(re.compile(\".*\"), processor)\n await event_bus.notify(profile, event)\n assert processor.profile == profile\n assert processor.event == event\n
async def notify(self, profile: \"Profile\", event: Event):\n partials = []\n for pattern, subscribers in self.topic_patterns_to_subscribers.items():\n match = pattern.match(event.topic)\n\n if not match:\n continue\n\n for subscriber in subscribers:\n partials.append(\n partial(\n subscriber,\n profile,\n event.with_metadata(EventMetadata(pattern, match)),\n )\n )\n\n for processor in partials:\n try:\n await processor()\n except Exception:\n LOGGER.exception(\"Error occurred while processing event\")\n
"},{"location":"testing/UnitTests/#asynctest","title":"asynctest","text":"From: acapy_agent/protocols/didexchange/v1_0/tests/test.manager.py
class TestDidExchangeManager(AsyncTestCase, TestConfig):\n async def setUp(self):\n self.responder = MockResponder()\n\n self.oob_mock = async_mock.MagicMock(\n clean_finished_oob_record=async_mock.AsyncMock(return_value=None)\n )\n\n self.route_manager = async_mock.MagicMock(RouteManager)\n ...\n self.profile = InMemoryProfile.test_profile(\n {\n \"default_endpoint\": \"http://aries.ca/endpoint\",\n \"default_label\": \"This guy\",\n \"additional_endpoints\": [\"http://aries.ca/another-endpoint\"],\n \"debug.auto_accept_invites\": True,\n \"debug.auto_accept_requests\": True,\n \"multitenant.enabled\": True,\n \"wallet.id\": True,\n },\n bind={\n BaseResponder: self.responder,\n OobMessageProcessor: self.oob_mock,\n RouteManager: self.route_manager,\n ...\n },\n )\n ...\n\n async def test_receive_invitation_no_auto_accept(self):\n async with self.profile.session() as session:\n mediation_record = MediationRecord(\n role=MediationRecord.ROLE_CLIENT,\n state=MediationRecord.STATE_GRANTED,\n connection_id=self.test_mediator_conn_id,\n routing_keys=self.test_mediator_routing_keys,\n endpoint=self.test_mediator_endpoint,\n )\n await mediation_record.save(session)\n with async_mock.patch.object(\n self.multitenant_mgr, \"get_default_mediator\"\n ) as mock_get_default_mediator:\n mock_get_default_mediator.return_value = mediation_record\n invi_rec = await self.oob_manager.create_invitation(\n my_endpoint=\"testendpoint\",\n hs_protos=[HSProto.RFC23],\n )\n\n invitee_record = await self.manager.receive_invitation(\n invi_rec.invitation,\n auto_accept=False,\n )\n assert invitee_record.state == ConnRecord.State.INVITATION.rfc23\n
async def receive_invitation(\n self,\n invitation: OOBInvitationMessage,\n their_public_did: Optional[str] = None,\n auto_accept: Optional[bool] = None,\n alias: Optional[str] = None,\n mediation_id: Optional[str] = None,\n) -> ConnRecord:\n ...\n accept = (\n ConnRecord.ACCEPT_AUTO\n if (\n auto_accept\n or (\n auto_accept is None\n and self.profile.settings.get(\"debug.auto_accept_invites\")\n )\n )\n else ConnRecord.ACCEPT_MANUAL\n )\n service_item = invitation.services[0]\n # Create connection record\n conn_rec = ConnRecord(\n invitation_key=(\n DIDKey.from_did(service_item.recipient_keys[0]).public_key_b58\n if isinstance(service_item, OOBService)\n else None\n ),\n invitation_msg_id=invitation._id,\n their_label=invitation.label,\n their_role=ConnRecord.Role.RESPONDER.rfc23,\n state=ConnRecord.State.INVITATION.rfc23,\n accept=accept,\n alias=alias,\n their_public_did=their_public_did,\n connection_protocol=DIDX_PROTO,\n )\n\n async with self.profile.session() as session:\n await conn_rec.save(\n session,\n reason=\"Created new connection record from invitation\",\n log_params={\n \"invitation\": invitation,\n \"their_role\": ConnRecord.Role.RESPONDER.rfc23,\n },\n )\n\n # Save the invitation for later processing\n ...\n\n return conn_rec\n
"},{"location":"testing/UnitTests/#other-details","title":"Other details","text":" with self.assertRaises(DIDXManagerError) as ctx:\n ...\n assert \" ... error ...\" in str(ctx.exception)\n
function.assert_called_once_with(parameters)
function.assert_called_once()
pytest.mark setup in setup.cfg
can be attributed at function or class level. Example, @pytest.mark.askar
Code coverage