
Test definitions - implementations guidelines and artifacts #94

Closed
rartych opened this issue Nov 17, 2023 · 5 comments · Fixed by #117
Labels
enhancement New feature or request

Comments

@rartych
Collaborator

rartych commented Nov 17, 2023

Problem description
Delivery of API test cases and documentation is one of the criteria in the API Readiness Checklist.
Each subproject needs to develop a Gherkin feature file for its specified API.
If a subproject also intends to add test implementations, a single implementation agreed amongst all provider implementors could be added to the main subproject repo.

Possible evolution
Definition of common guidelines and artifacts that would simplify the development of test definitions, documentation and implementations, based on current experience in the subprojects and within participating organizations.

Additional context
As listed in camaraproject/SimSwap#63, the following projects already provide Gherkin feature files:

@bigludo7
Collaborator

Hello @rartych and team,
How can we move forward on this? I assume this is the only missing point for a lot of APIs to reach completion.

Looking at the details of the test cases already contributed, it seems we are not far from one WG to another; only a few alignments are needed.

We need to agree on:

  • Having one feature file per API or resource? Or do we keep this flexible and leave it up to each project to manage? (I prefer the latter)
  • It is probably good for consistency to use the same pronoun everywhere (first person or impersonal)
  • Scenario identifier. To trigger the discussion I propose '@'<(mandatory)name of the resource><(mandatory)number XX><(optional)short detail> (like @check_simswap_01_Verify_Swap_True_Default_MaxAge)
  • Do we agree to have several When/Then steps in one scenario?
  • Is providing either a literal API request value (with an example) or a 'ready to use' payload (as in home device QoD) fine?
  • ...

I am definitely looking for consumers of these files to give their perspective. The less they have to do to use these files, the better.
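To make the identifier proposal above concrete, here is a minimal sketch of how a tagged scenario could look; the endpoint, step wording and field names are hypothetical, not an agreed convention:

```gherkin
@check_simswap_01_Verify_Swap_True_Default_MaxAge
Scenario: Verify SIM swap with default maxAge
  Given a phone number whose SIM was swapped within the default maxAge period
  When a POST request is sent to "/check" with that phone number
  Then the response status code is 200
  And the "swapped" property in the response body is true
```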

@rartych rartych changed the title Test definitions and implementations guidelines and artifacts Test definitions - implementations guidelines and artifacts Dec 5, 2023
@mdomale

mdomale commented Dec 11, 2023

@bigludo7
Having one feature file per API or resource? Or do we keep this flexible and leave it up to each project to manage? (I prefer the latter)

It is probably good for consistency to use the same pronoun everywhere (first person or impersonal)

  • The third-person pronoun is advisable, although there is no restriction on using the first person (the originator of BDD suggests first-person usage).
    It is important to preserve consistency between the description of the scenario and its steps (do not switch grammatical persons), to follow the criteria already in use when adding scenarios to an existing project, and to favor clarity of what is written.
    (Ref: https://www.testquality.com/blog/tpost/v79acjttj1-cucumber-and-gherkin-language-best-pract)
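For illustration, the same step written in both styles; the step wording itself is hypothetical:

```gherkin
# First person
Given I have a device with an active SIM card
When I request a SIM swap check
Then I receive a 200 response

# Impersonal third person
Given a device with an active SIM card
When a SIM swap check is requested
Then the response status code is 200
```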

Scenario identifier. To trigger the discussion I propose '@'<(mandatory)name of the resource><(mandatory)number XX><(optional)short detail> (like @check_simswap_01_Verify_Swap_True_Default_MaxAge)

Do we agree to have several When/Then steps in one scenario?

  • The official Cucumber documentation at https://cucumber.io/docs/gherkin/reference/ recommends using only one When, because one acceptance criterion should describe only one behavior.
    This is just a recommendation, not a rule, and the Cucumber documentation is mostly focused on acceptance criteria specification, not test automation. So it is definitely possible to do whatever best suits your requirements when it comes to automation testing. I'd recommend using fewer "And" keywords by merging different steps. Below are my recommendations (again, recommendations, not rules :) ):
    Use only one Given, When and Then keyword in one flow of a scenario, and use the "And" keyword if you need to specify extra steps at the appropriate event.
    If you find yourself using too many "And" keywords, try to merge such steps.
    As a side note, when there is only one When per scenario in the test automation, there is a slight chance that:
  • the number of tests in the automation increases;
  • the total execution time of the automation goes up;
  • we perform a lot of redundant actions across multiple scenarios.
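A sketch of the single-When structure with merged steps rather than chains of "And"; the endpoint and field names are hypothetical:

```gherkin
Scenario: Check returns an error for a malformed phone number
  Given the request body has the "phoneNumber" property set to "not-a-number"
  When a POST request is sent to "/check"
  Then the response status code is 400
  And the response body contains an error code and a message
```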

Is providing either a literal API request value (with an example) or a 'ready to use' payload (as in home device QoD) fine?

@bigludo7
Collaborator

Thanks @mdomale for your comments and the URL sources, which are very helpful.

To sum up:

  • The proposal is to manage a single feature file (having multiple features inside a single feature file is not possible)
  • The third-person pronoun is advisable, as using the third person, particularly in English, conveys information in a more official and impartial manner
  • About the scenario identifier, following your recommendation it should be '@'<(mandatory)name of the resource>-<(mandatory)number XX>-<(optional)short detail in lower case and using - separator> (like @check_simswap-01-verify-swap-true-default-maxAge). As we already use '_' in the resource name, also using '-' could be confusing. WDYT?
  • I'm not sure we can enforce any rule about using only one Given, When and Then keyword in one flow of a scenario, but I got your point; we can provide this as a recommendation.
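For comparison, the two identifier styles under discussion applied to the same (hypothetical) scenario:

```gherkin
# Underscore-separated, as originally proposed
@check_simswap_01_Verify_Swap_True_Default_MaxAge
# '_' kept inside the resource name, '-' as the component separator
@check_simswap-01-verify-swap-true-default-maxAge
Scenario: Verify SIM swap with default maxAge
```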

I'm also expecting comments from other companies involved in day-to-day CAMARA API crafting :)

@shilpa-padgaonkar
Collaborator

Thanks @mdomale and @bigludo7 for your inputs and feedback. While we wait a few more days to get more feedback on the inputs, I would also propose to create a short Camara-API-Testing-Guidelines.md doc with the below content:

With this doc in place, we would not need to search through old issues for these discussed points later, and it would also allow new Camara members to get an overview of the topic.

@jlurien
Contributor

jlurien commented Dec 20, 2023

Answering @bigludo7's questions as well. I think we are quite aligned with the previous responses:

  • Having one feature file per API or resource? Or do we keep this flexible and leave it up to each project to manage? (I prefer the latter)

We also prefer one feature file per operationId.

  • It is probably good for consistency to use the same pronoun everywhere (first person or impersonal)

Yes, we should agree on a common wording. The simpler the better.

As an example, to be refined:

Background: (assuming one feature file per operationId)
  Given:
    the environment 
    the path "path"
    the method "method"

Given:
  the (path param|query param|header) "name" is set to "value"
  the request body is set to
    """
    """
When:
  a "operationId" request is sent

Then:
  the status code is "X"
  the response value at "JSON-path" is "value"

  • Scenario identifier. To trigger the discussion I propose '@'<(mandatory)name of the resource><(mandatory)number XX><(optional)short detail> (like @check_simswap_01_Verify_Swap_True_Default_MaxAge)

We think it's good to include the operationId of the endpoint, along with some number to allow unique identification; some description may also help. So a proposal could be:

@<operationId>.<number>[.<optional_short_description>]
(the separator here could be something other than ".", but we should choose one that cannot be part of the components)

  • Do we agree to have several When/Then steps in one scenario?

It is preferable to have simple scenarios with just one When. While testing APIs, the structure is usually:

Given: request setup
When: request is sent
Then: response is validated

If some complex scenario requires that filling a request depends on the response from a previous one, we may allow several When steps.
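Such a chained scenario could be sketched as below, reusing the step style from the example above; the operationIds, "sessionId" parameter and status codes are hypothetical:

```gherkin
Scenario: Query a session previously created
  Given the request body is set to a valid session creation payload
  When a "createSession" request is sent
  Then the status code is "201"
  Given the path param "sessionId" is set to the "sessionId" value of the previous response
  When a "getSession" request is sent
  Then the status code is "200"
```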

  • Is providing either a literal API request value (with an example) or a 'ready to use' payload (as in home device QoD) fine?

For the Given steps, in some cases we may give exact values for some parameters, for example providing wrong values to provoke an error, while in other cases the exact value cannot be known in advance and we have to ask the tester to fill in the value. For example, in location-verification we should ask the tester to provide the known latitude and longitude of a device to test a verificationResult=true scenario.
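One way to express such tester-provided values is a placeholder resolved from local configuration before the run; a sketch, where the step wording, placeholder syntax and property names are assumptions rather than an agreed convention:

```gherkin
Scenario: Location is verified at the device's actual position
  # <latitude> and <longitude> are filled in by the tester with the
  # device's known coordinates before running the test
  Given the request body property "latitude" is set to the configured value "<latitude>"
  And the request body property "longitude" is set to the configured value "<longitude>"
  When a "verifyLocation" request is sent
  Then the status code is "200"
  And the response property "verificationResult" is "TRUE"
```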
