Test results from Serenity/Cucumber can't be uploaded to a Jira/Xray Test Execution issue.
The tests are executed with mvn clean verify.
Uploading the JSON result file fails with "No tests found in execution result".
Uploading the XML result file fails with "description -> Description is required." Even if I add a description to testsuite and testcase, the same error occurs.
How can I import the test results?
Xray version: 3.4.2_j7
Dependencies:
<dependencies>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.12</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-simple</artifactId>
<version>1.6.2</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>net.serenity-bdd</groupId>
<artifactId>serenity-cucumber</artifactId>
<version>1.9.37</version>
</dependency>
<dependency>
<groupId>net.serenity-bdd</groupId>
<artifactId>serenity-junit</artifactId>
<version>2.0.55</version>
</dependency>
<dependency>
<groupId>net.serenity-bdd</groupId>
<artifactId>serenity-core</artifactId>
<version>2.0.55</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.7.0</version>
<configuration>
<source>1.8</source>
<target>1.8</target>
</configuration>
</plugin>
<plugin>
<groupId>net.serenity-bdd.maven.plugins</groupId>
<artifactId>serenity-maven-plugin</artifactId>
<version>2.0.55</version>
<configuration>
<tags></tags>
</configuration>
<executions>
<execution>
<id>serenity-reports</id>
<phase>post-integration-test</phase>
<goals>
<goal>aggregate</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
XML result file:
<?xml version="1.0" encoding="UTF-8"?>
<testsuite errors="0" failures="0" name="simple text search" skipped="0" tests="10" time="77.58" timestamp="2019-07-30 02:16:13">
<testcase name="find a document by using the simple text search"/>
</testsuite>
JSON result file:
{
"name": "find a document by using the simple text search",
"id": "simple-text-search;find-a-document-by-using-the-simple-text-search",
"testSteps": [
{
"number": 1,
"description": "Example #1: {searchTerm\u003dquality assurance, documentTitle\u003dBRPM.LO Assurance Mgmt.}",
"duration": 7118,
"startTime": "2019-07-30T14:16:13.374+02:00[Europe/Berlin]",
"result": "SUCCESS",
"precondition": false,
"level": 0,
"children": [
{
"number": 2,
"description": "Login: user, password",
"duration": 5897,
"startTime": "2019-07-30T14:16:13.395+02:00[Europe/Berlin]",
"result": "SUCCESS",
"precondition": false,
"level": 1
},
{
"number": 3,
"description": "Is dashboard opened",
"duration": 135,
"startTime": "2019-07-30T14:16:19.293+02:00[Europe/Berlin]",
"result": "SUCCESS",
"precondition": false,
"level": 1
},
{
"number": 4,
"description": "Given the user opens the search",
"duration": 208,
"startTime": "2019-07-30T14:16:19.428+02:00[Europe/Berlin]",
"result": "SUCCESS",
"precondition": false,
"level": 1,
"children": [
{
"number": 5,
"description": "Open search",
"duration": 201,
"startTime": "2019-07-30T14:16:19.434+02:00[Europe/Berlin]",
"result": "SUCCESS",
"precondition": false,
"level": 2
}
]
},
{
"number": 6,
"description": "When documents are filtered with simple text search \"quality assurance\"",
"duration": 395,
"startTime": "2019-07-30T14:16:19.636+02:00[Europe/Berlin]",
"result": "SUCCESS",
"precondition": false,
"level": 1,
"children": [
{
"number": 7,
"description": "Filter documents by text: quality assurance",
"duration": 393,
"startTime": "2019-07-30T14:16:19.637+02:00[Europe/Berlin]",
"result": "SUCCESS",
"precondition": false,
"level": 2
}
]
},
{
"number": 8,
"description": "Then the document with title \"BRPM.LO Assurance Mgmt.\" must be found on first place",
"duration": 357,
"startTime": "2019-07-30T14:16:20.032+02:00[Europe/Berlin]",
"result": "SUCCESS",
"precondition": false,
"level": 1,
"children": [
{
"number": 9,
"description": "Is document on first place: BRPM.LO Assurance Mgmt.",
"duration": 356,
"startTime": "2019-07-30T14:16:20.033+02:00[Europe/Berlin]",
"result": "SUCCESS",
"precondition": false,
"level": 2
}
]
},
{
"number": 10,
"description": "Logout",
"duration": 101,
"startTime": "2019-07-30T14:16:20.390+02:00[Europe/Berlin]",
"result": "SUCCESS",
"precondition": false,
"level": 1
}
]
}
],
"userStory": {
"id": "simple-text-search",
"storyName": "simple text search",
"path": "src/test/resources/features/search/SimpleTextSearch.feature",
"narrative": "\tWithin the Search you can use a simple full text search to find documents.",
"type": "feature"
},
"featureTag": {
"name": "search/simple text search",
"type": "feature",
"displayName": "search/simple text search"
},
"title": "find a document by using the simple text search",
"tags": [
{
"name": "search",
"type": "capability",
"displayName": "Search"
},
{
"name": "TPD-3150",
"type": "tag",
"displayName": "TPD-3150"
},
{
"name": "search/simple text search",
"type": "feature",
"displayName": "simple text search"
}
],
"startTime": "2019-07-30T14:16:13.371+02:00[Europe/Berlin]",
"duration": 77582,
"projectKey": "",
"sessionId": "8e2f1927ce30efaa140c889e54081a9b",
"driver": "chrome",
"dataTable": {
"headers": [
"searchTerm",
"documentTitle"
],
"rows": [
{
"values": [
"quality assurance",
"BRPM.LO Assurance Mgmt."
],
"result": "SUCCESS"
}
],
"predefinedRows": true,
"scenarioOutline": "Given the user opens the search\n\rWhen documents are filtered with simple text search \"\u003csearchTerm\u003e\"\n\rThen the document with title \"\u003cdocumentTitle\u003e\" must be found on first place\n\r",
"dataSetDescriptors": [
{
"startRow": 0,
"rowCount": 0,
"name": ""
}
]
},
"manual": false,
"testSource": "Cucumber",
"result": "SUCCESS",
"scenarioOutline": "Given the user opens the search\r\nWhen documents are filtered with simple text search \"\u003csearchTerm\u003e\"\r\nThen the document with title \"\u003cdocumentTitle\u003e\" must be found on first place"
}
I have recently written this Serenity BDD tutorial, which describes two possible flows in case you aim to have full visibility of the Gherkin:
https://confluence.xpand-it.com/display/XRAY/Testing+using+Serenity+BDD+and+Cucumber+in+Java
If you aim to report against existing Gherkin tests in Xray (i.e. "Cucumber Tests"), you first need to export the tests out of Jira so they can be tagged with the corresponding issue keys. If you just submit the Cucumber JSON report, Xray cannot auto-provision the corresponding Scenarios/Tests.
If you instead use JUnit XML reports to submit the results, Xray will auto-provision "generic" (i.e. unstructured) Tests that don't contain the Gherkin sentences. Even though this alternate flow is simpler if you just want visibility of the automation results, it won't provide the Gherkin sentence details, nor does it ensure that the auto-provisioned tests will be reused correctly afterwards (that depends on how Serenity BDD maps the results to the elements in the JUnit XML report).
In sum, to have full visibility, choose one of the supported flows (either Xray or Git as master) and follow the corresponding steps, which will require you to export the features out of Jira so they get properly tagged.
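As an illustration only, here is a minimal sketch of that flow with curl against the Xray Server REST API (export Cucumber Tests, then import the Cucumber JSON results); the host, credentials and report path are placeholders, the issue key reuses the TPD-3150 tag from your report, and it assumes the run is also configured to write a standard Cucumber JSON report:
# Sketch only: jira.example.com, the credentials and target/cucumber.json are placeholders.
# 1. Export the Cucumber Tests out of Jira first, so the scenarios carry their issue keys:
curl -u user:password \
  "https://jira.example.com/rest/raptor/1.0/export/test?keys=TPD-3150&fz=true" \
  -o features.zip
# 2. After running the tests, submit the Cucumber JSON report back to Xray:
curl -u user:password -X POST \
  -H "Content-Type: application/json" \
  --data @target/cucumber.json \
  "https://jira.example.com/rest/raptor/1.0/import/execution/cucumber"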
My DRF JSON API backend responds with a JSON on OPTIONS request.
The OPTIONS response includes field choices declared on Django model.
On my frontend, in my Ember 4.1 app with the default JSONAPIAdapter, I want to use these exact same choices in my form select. Is there a way to access these choices on my Ember model or somewhere else? If so, how do I do it?
Here's an example OPTIONS response:
{
"data": {
"name": "Names List",
"description": "API endpoint for Names",
"renders": [
"application/vnd.api+json",
"text/html"
],
"parses": [
"application/vnd.api+json",
"application/x-www-form-urlencoded",
"multipart/form-data"
],
"allowed_methods": [
"GET",
"POST",
"HEAD",
"OPTIONS"
],
"actions": {
"POST": {
"name": {
"type": "String",
"required": true,
"read_only": false,
"write_only": false,
"label": "Name",
"help_text": "name",
"max_length": 255
},
"name-type": {
"type": "Choice",
"required": true,
"read_only": false,
"write_only": false,
"label": "Name type",
"help_text": "Name type",
"choices": [
{
"value": "S",
"display_name": "Short"
},
{
"value": "L",
"display_name": "Long"
}
]
}
}
}
}
}
I'm not seeing my posixAccounts information from the following link:
https://developers.google.com/admin-sdk/directory/reference/rest/v1/users/get
{
"kind": "admin#directory#user",
"id": "8675309",
"etag": "\"UUID\"",
"primaryEmail": "email#example.com",
"name": {
"givenName": "Email",
"familyName": "Account",
"fullName": "Email Account"
},
"isAdmin": true,
"isDelegatedAdmin": false,
"lastLoginTime": "2021-08-04T21:11:17.000Z",
"creationTime": "2021-06-16T14:32:35.000Z",
"agreedToTerms": true,
"suspended": false,
"archived": false,
"changePasswordAtNextLogin": false,
"ipWhitelisted": false,
"emails": [
{
"address": "email#example.com",
"primary": true
},
{
"address": "email#example.com.test-google-a.com"
}
],
"phones": [
{
"value": "123-456-7890",
"type": "work"
}
],
"nonEditableAliases": [
"email#example.com.test-google-a.com"
],
"customerId": "id12345",
"orgUnitPath": "/path/to/org",
"isMailboxSetup": true,
"isEnrolledIn2Sv": false,
"isEnforcedIn2Sv": false,
"includeInGlobalAddressList": true
}
As you can see from the above output, there's no posixAccounts information. I can see the LDAP information in Apache Directory Studio, so I know it's there, but it doesn't appear in this output. Since the data does exist, I tried to update it using the update method of the API.
https://developers.google.com/admin-sdk/directory/reference/rest/v1/users/update
I used the following payload, as I'm just testing updating the gid information; I used the documentation below to work out the entry details needed, at least as far as I could tell.
{
"posixAccounts": [
{
"gid": "12345",
}
]
}
https://developers.google.com/admin-sdk/directory/reference/rest/v1/users
I'm getting a 200 response, but nothing is actually changing for the user when doing a PUT to update.
I tried a similar update method from another user on here, but to no avail: Google Admin SDK - Create posix attributes on existing user
I was able to get this resolved by supplying additional details in my PUT request:
{
"posixAccounts": [
{
"username": "email(excluding #domain.com)",
"uid": "1234",
"gid": "12345",
"operatingSystemType": "unspecified",
"shell": "/bin/bash",
"gecos": "Firstname Lastname"
"systemId": ""
}
]
}
The above wouldn't be reflected in LDAP until I included "systemId", so that part is required.
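For completeness, a hedged sketch of sending that payload with curl to the Directory API users.update endpoint; the access token, user key and field values below are placeholders:
# Sketch only: ACCESS_TOKEN and the user key are placeholders; the token must be
# authorized for Admin SDK Directory user updates.
curl -X PUT \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"posixAccounts":[{"username":"jdoe","uid":"1234","gid":"12345","operatingSystemType":"unspecified","shell":"/bin/bash","gecos":"Firstname Lastname","systemId":""}]}' \
  "https://admin.googleapis.com/admin/directory/v1/users/jdoe@example.com"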
I have an API gateway with the following schema:
{
"swagger": "2.0",
"info": {
"description": "This is a sample server Petstore server. You can find out more about Swagger at [http://swagger.io](http://swagger.io) or on [irc.freenode.net, #swagger](http://swagger.io/irc/). For this sample, you can use the api key `special-key` to test the authorization filters.",
"version": "1.0.0",
"title": "Swagger Petstore",
"termsOfService": "http://swagger.io/terms/",
"contact": {
"email": "apiteam#swagger.io"
},
"license": {
"name": "Apache 2.0",
"url": "http://www.apache.org/licenses/LICENSE-2.0.html"
}
},
"paths": {
"/pet": {
"post": {
"summary": "Add a new pet to the store",
"description": "",
"operationId": "addPet",
"consumes": [
"application/json",
"application/xml"
],
"produces": [
"application/xml",
"application/json"
],
"parameters": [
{
"in": "body",
"name": "body",
"description": "Pet object that needs to be added to the store",
"required": true,
"schema": {
"$ref": "#/definitions/Pet"
}
}
],
"responses": {
"405": {
"description": "Invalid input"
}
}
}
}
},
"definitions": {
"Pet": {
"required": ["id", "name"],
"type": "object",
"properties": {
"id": {
"type": "integer",
"description": "Id of the pet",
"example": 123
},
"name": {
"type": "string",
"description": "Name of the pet",
"example": "Jammy"
},
"nickname": {
"type": "string",
"description": "Nickname of the pet",
"example": "Jam"
}
}
}
}
}
When I send a request body with fields that are not present in the schema, I don't get a 400 response from API Gateway, even though I have applied the configuration to validate the body, headers, and query string.
Is this an open issue in API Gateway, or am I missing something?
With Swagger v2 and OpenAPI v3 specs, the default behavior is to accept any additional properties that your spec does not define. If you include the required pet id and name plus additional unused properties like foo and bar, your POST should succeed.
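For example (the extra field names here are invented purely for illustration), a body like this would still pass the default validation because the unknown properties are simply ignored:
{
"id": 123,
"name": "Jammy",
"foo": "not declared in the schema",
"bar": "also not declared"
}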
If you want stricter validation that fails when additional properties are sent, set additionalProperties to false in your Pet schema, or do that and also move the spec to version 3.x.x.
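As a sketch, this is the Pet definition from the question with that flag added; only the additionalProperties line is new, and whether API Gateway's request validator enforces it depends on your configuration:
"Pet": {
"required": ["id", "name"],
"type": "object",
"additionalProperties": false,
"properties": {
"id": { "type": "integer", "example": 123 },
"name": { "type": "string", "example": "Jammy" },
"nickname": { "type": "string", "example": "Jam" }
}
}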
Using the JS AWS SDK and passing the following parameters:
{
"StartTime": 1548111915,
"EndTime": 1549321515,
"MetricDataQueries": [
{
"Id": "m1",
"MetricStat": {
"Metric": {
"MetricName": "NetworkOut",
"Namespace": "AWS/EC2",
"Dimensions": [
{
"Name": "InstanceId",
"Value": "i-[redacted]"
}
]
},
"Period": 300,
"Stat": "Average",
"Unit": "Gigabytes"
}
}
]
}
This is the output:
[
{
"Id": "m1",
"Label": "NetworkOut",
"Timestamps": [],
"Values": [],
"StatusCode": "Complete",
"Messages": []
}
]
The query closely matches the sample request found at https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_GetMetricData.html#API_GetMetricData_Examples
I am sure the instance is valid and has definitely had NetworkOut traffic during that date range.
What could account for the lack of elements in the Values array?
A better solution was to omit "Unit" altogether, which allowed AWS to choose the appropriate unit, not only in scale but in category.
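For illustration, here is the same MetricDataQueries entry with "Unit" simply left out, so CloudWatch falls back to the metric's native unit (instance ID redacted as in the question):
{
"Id": "m1",
"MetricStat": {
"Metric": {
"MetricName": "NetworkOut",
"Namespace": "AWS/EC2",
"Dimensions": [
{
"Name": "InstanceId",
"Value": "i-xxx"
}
]
},
"Period": 300,
"Stat": "Average"
}
}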
I tried it and got the same (empty) result as you.
I then changed Gigabytes to Bytes and got a result. So, it could be that you need to reduce your Unit size.
Here's the command I used for the AWS CLI:
aws cloudwatch get-metric-data --start-time 1548111915 --end-time 1549321515 --metric-data-queries '[
{
"Id": "m1",
"MetricStat": {
"Metric": {
"MetricName": "NetworkOut",
"Namespace": "AWS/EC2",
"Dimensions": [
{
"Name": "InstanceId",
"Value": "i-xxx"
}
]
},
"Period": 300,
"Stat": "Average",
"Unit": "Bytes"
}
}
]'
For future inquisitors: there are multiple reasons why the AWS CLI silently returns an empty dataset instead of an error. The input requirements are stricter than most users expect, while the output requirements are much looser. Examples:
wrong unit
incomplete list of dimensions
typos, case sensitivity, etc.
References:
https://aws.amazon.com/premiumsupport/knowledge-center/cloudwatch-getmetricstatistics-data/
https://github.com/grafana/grafana/issues/9852#issuecomment-395023506
I know that we can deploy our applications through Pivotal Cloud Foundry, and we can push buildpacks that provide framework and runtime support for applications. I want to create a Jenkins job that lists all the buildpacks available on my Cloud Foundry installation. How can this be achieved? Thanks.
You can use the CLI to list the buildpacks: cf buildpacks. Alternatively, you can query the cloud controller directly (the api.system domain) with a GET to /v2/buildpacks; however, you need to be an authenticated user to make this request.
Even better, you can run curl directly through the cf client:
# cf curl /v2/buildpacks
{
"total_results": 9,
"total_pages": 1,
"prev_url": null,
"next_url": null,
"resources": [
{
"metadata": {
"guid": "b7890a54-f7c5-4973-a3da-e1a48ba6811d",
"url": "/v2/buildpacks/b7890a54-f7c5-4973-a3da-e1a48ba6811d",
"created_at": "2017-05-24T12:53:27Z",
"updated_at": "2017-05-24T12:53:27Z"
},
"entity": {
"name": "binary_buildpack",
"position": 1,
"enabled": true,
"locked": false,
"filename": "binary_buildpack-cached-v1.0.11.zip"
}
},
...
{
"metadata": {
"guid": "95e3f977-09d1-4b96-96bc-e34125e3b3a2",
"url": "/v2/buildpacks/95e3f977-09d1-4b96-96bc-e34125e3b3a2",
"created_at": "2017-05-24T12:54:03Z",
"updated_at": "2017-05-24T12:54:04Z"
},
"entity": {
"name": "staticfile_buildpack",
"position": 8,
"enabled": true,
"locked": false,
"filename": "staticfile_buildpack-cached-v1.4.5.zip"
}
}
]
}
Docs: https://apidocs.cloudfoundry.org/258/
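To turn this into a Jenkins job, a minimal sketch of a shell build step (a freestyle "Execute shell" step or a pipeline sh step) could look like the following; CF_API, CF_USERNAME and CF_PASSWORD are placeholder variables you would supply through Jenkins credentials:
# Sketch only: the API endpoint and credentials are placeholders injected by Jenkins.
cf api "$CF_API"
cf auth "$CF_USERNAME" "$CF_PASSWORD"
cf buildpacks
# or fetch the raw JSON from the cloud controller:
cf curl /v2/buildpacks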