WSO2 DSS 3.5.1 - Issue in JSON response from a RESTful service

I have defined a RESTful service with nested queries. The output mapping is defined in XML, and I get a proper response as XML. But if I request a JSON response using Accept: application/json, I get:
{
  "Fault": {
    "faultcode": "soapenv:Server",
    "faultstring": "Error while writing to the output stream using JsonWriter",
    "detail": ""
  }
}
I was getting the exception below in 3.5.0, and I found a JIRA saying it is fixed in 3.5.1. So I tried 3.5.1; now I no longer get the exception, but I still get the same fault output.
javax.xml.stream.XMLStreamException: Invalid Staring element
Please note I have also tried the escapeNonPrintableChar="true" option in my queries, but to no avail. The strange thing is that it works for other data sets; just one particular data set produces this output.
I have changed the JSON formatters as below and got it to work, but there is a problem with that.
<messageFormatter contentType="application/json" class="org.apache.axis2.json.JSONMessageFormatter"/>
<!--messageFormatter contentType="application/json" class="org.apache.axis2.json.gson.JsonFormatter" / -->
<messageBuilder contentType="application/json" class="org.apache.axis2.json.JSONOMBuilder"/>
<!--messageBuilder contentType="application/json" class="org.apache.axis2.json.gson.JsonBuilder" /-->
If I use the above formatter, null values are not represented properly. For example, I get
"Person": {
"Name": {
"#nil": "true"
}
but I want it as (like the other JSON formatter used to give)
"Person": {
"Name": null
}
Any help, please. Is there still a bug left in this area?

When you are creating the query, in the output mapping you define the format in which you want to receive the response: you can select XML or JSON. In the case you mention, select the JSON option and then select Generate Response; this creates this JSON structure:
{
  "entries": {
    "entry": [
      {
        "field1": "$column1",
        "field2": "$column2"
      }
    ]
  }
}
Then you can modify the response as needed with your own fields. Here is an example of how I use it in my query:
{
  "Pharmacies": {
    "Pharmacy": [
      {
        "ID": "$Id",
        "Descripcion": "$Desc",
        "Latitude": "$Latitude",
        "Longitude": "$Longitude",
        "Image": "$Image"
      }
    ]
  }
}
The values prefixed with "$" correspond to the column names of the query.
Regards

Related

Power BI filtered export API is not working

I am working on an embedded Power BI report within Salesforce, where I am using a filter while exporting a file via the REST API. The filter JSON looks like the one below; it is passed in the body of the POST request callout.
{
  "format": "PDF",
  "powerBIReportConfiguration": {
    "ReportLevelFilters": [
      {
        "Filter": "User / Id in ('0055700000633IsAAI')"
      }
    ]
  }
}
The endpoint I am calling is:
https://api.powerbi.com/v1.0/myorg/groups/XXXX-XXXX-XXXX-XXXX/reports/XXXX-XXXX-XXXX-XXXX/ExportTo
When the file is downloaded, I get all the data instead of the filtered data. Is there anything I am missing in the configuration?
Take the spaces out of the Table/Column expression, per the examples here; also, some of your JSON property names don't have the correct case. Here's the Fiddler capture of a successful request using the Power BI .NET Client:
{
  "format": "PDF",
  "powerBIReportConfiguration": {
    "reportLevelFilters": [
      {
        "filter": "DimCustomer/CustomerAlternateKey in ('AW00011000')"
      }
    ]
  }
}
So yours would be something like:
{
  "format": "PDF",
  "powerBIReportConfiguration": {
    "reportLevelFilters": [
      {
        "filter": "User/Id in ('0055700000633IsAAI')"
      }
    ]
  }
}
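For what it's worth, here is a minimal sketch of the corrected call using Python requests. GROUP_ID, REPORT_ID, and ACCESS_TOKEN are placeholders for your workspace id, report id, and an AAD bearer token with the appropriate Power BI scopes:
import requests

# Placeholders: substitute your own workspace/report ids and AAD token.
GROUP_ID = "XXXX-XXXX-XXXX-XXXX"
REPORT_ID = "XXXX-XXXX-XXXX-XXXX"
ACCESS_TOKEN = "eyJ..."

body = {
    "format": "PDF",
    "powerBIReportConfiguration": {
        "reportLevelFilters": [
            # lowercase property names; no spaces in the Table/Column expression
            {"filter": "User/Id in ('0055700000633IsAAI')"}
        ]
    },
}

resp = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/groups/{GROUP_ID}/reports/{REPORT_ID}/ExportTo",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=body,
)
# ExportTo is asynchronous; a successful request returns an export id to poll.
print(resp.status_code, resp.json())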
I am having this same issue. I saw a post regarding a Power Automate flow which highlighted that the filters need to be cleared when the report is published. However, even with this done, the reportLevelFilters do not seem to have an effect.
I have also tested the URL query-string parameters, which work fine as per these docs.

AWS SageMaker Monitoring Error: Encoding mismatch

I am trying to run model monitoring on a model in AWS SageMaker. The monitoring jobs are failing due to "Encoding mismatch: Encoding is JSON for endpointInput, but Encoding is base64 for endpointOutput. We currently only support the same type of input and output encoding at the moment."
The encoding is JSON for endpointInput and base64 for endpointOutput, but the expected encoding is JSON for both input and output.
I tried using json_content_types in the DataCaptureConfig, but the endpointOutput is still base64 encoded.
Below is the DataCaptureConfig I used in the deploy:
from sagemaker.model_monitor import DataCaptureConfig

data_capture_config = DataCaptureConfig(
    enable_capture=True,
    sampling_percentage=100,
    json_content_types=['application/json'],  # the SDK expects a list here
    destination_s3_uri=MY_BUCKET)
My capture files from the model look something like this:
{
  "captureData": {
    "endpointInput": {
      "observedContentType": "application/json",
      "mode": "INPUT",
      "data": "{ === json data ===}",
      "encoding": "JSON"
    },
    "endpointOutput": {
      "observedContentType": "*/*",
      "mode": "OUTPUT",
      "data": "{====base 64 encoded output ===}",
      "encoding": "BASE64"
    }
  },
  "eventMetadata": {
    === some metadata ===
  }
}
I have observed that the output content type is not being recognized as application/json.
So I need a workaround/procedure to get the output in JSON-encoded form.
Please help me get JSON encoding for both input and output data.
A similar issue is reported here, but there is no response.
I came across a similar issue earlier while invoking the endpoint using the boto3 sagemaker-runtime client. Try adding the 'Accept' request parameter to the invoke_endpoint call with the value 'application/json'.
Refer to https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_runtime_InvokeEndpoint.html#API_runtime_InvokeEndpoint_RequestSyntax for more help.
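A minimal sketch of that call, assuming a JSON-in/JSON-out endpoint (the endpoint name and payload below are placeholders):
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

# "my-endpoint" and the payload are placeholders for your endpoint and input.
response = runtime.invoke_endpoint(
    EndpointName="my-endpoint",
    ContentType="application/json",   # encoding of the request body
    Accept="application/json",        # ask for JSON back, so capture records JSON
    Body=json.dumps({"instances": [[1.0, 2.0, 3.0]]}),
)
result = json.loads(response["Body"].read())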
While deploying the endpoint, please set the CaptureContentTypeHeader in the DataCaptureConfig and map the output appropriately to either JsonContentTypes or CsvContentTypes.
https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CaptureContentTypeHeader.html
Doing this will set the encoding accordingly. If it is not set, the default is base64 encoding, hence the issue.
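For reference, here is a sketch at the raw boto3 level (the endpoint config name, model name, bucket, and instance type are all placeholders); in the Python SDK, json_content_types maps onto CaptureContentTypeHeader.JsonContentTypes:
import boto3

sm = boto3.client("sagemaker")

# Sketch only: all names below are placeholders.
sm.create_endpoint_config(
    EndpointConfigName="my-endpoint-config",
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": "my-model",
        "InitialInstanceCount": 1,
        "InstanceType": "ml.m5.xlarge",
    }],
    DataCaptureConfig={
        "EnableCapture": True,
        "InitialSamplingPercentage": 100,
        "DestinationS3Uri": "s3://my-bucket/capture/",
        "CaptureOptions": [{"CaptureMode": "Input"}, {"CaptureMode": "Output"}],
        # Without this header, captured payloads default to base64 encoding.
        "CaptureContentTypeHeader": {
            "JsonContentTypes": ["application/json"],
        },
    },
)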

DataMapper mediator: mapping failed when property's parent in input JSON is null

I'm trying to map a result JSON object with nested fields in WSO2, through the DataMapper mediator in WSO2 ESB/Enterprise Integrator. Here is what I'm trying to achieve.
Input JSON file to map:
{
  "name": "John",
  "location": {
    "id": 1,
    "city": "Sydney"
  }
}
Output JSON file to get:
{
  "name": "John",
  "city": "Sydney"
}
It works fine until the input JSON becomes
{
  "name": "John",
  "location": null
}
The result I need is:
{
  "name": "John"
}
but instead I got an exception because location is null.
ERROR {org.wso2.carbon.mediator.datamapper.DataMapperMediator} - DataMapper mediator : mapping failed Error while reading input stream. Script engine unable to execute the script javax.script.ScriptException: TypeError: Cannot get property "city" of null in <eval> at line number 1
My problem is how to handle this properly in the DataMapper mediator, so that a field is not mapped under certain conditions.
If anyone could help me, I would be quite grateful.
Thank you.
It seems I fixed the problem. You can add a condition check in the .dmc file in the Registry project:
if (inputroot.location != null) {
    outputroot[0].city = inputroot.location.city;
}

Google Cloud Vision API only returns "name"

I am trying to use the Google Cloud Vision API.
I am using the REST API from this link:
POST https://vision.googleapis.com/v1/files:asyncBatchAnnotate
My request is
{
  "requests": [
    {
      "inputConfig": {
        "gcsSource": {
          "uri": "gs://redaction-vision/pdf_page1_employment_request.pdf"
        },
        "mimeType": "application/pdf"
      },
      "features": [
        {
          "type": "DOCUMENT_TEXT_DETECTION"
        }
      ],
      "outputConfig": {
        "gcsDestination": {
          "uri": "gs://redaction-vision"
        }
      }
    }
  ]
}
But the response always contains only "name", like below:
{
  "name": "operations/a7e4e40d1e1ac4c5"
}
My "gs" location is valid.
When I write the wrong path in "gcsSource", 404 not found error is coming.
Who knows why my response is weird?
This is expected; it will not send you the output as an HTTP response. To see what the API did, you need to go to your destination bucket and check for a file named "xxxxxxxxoutput-1-to-1.json". Also, you need to specify the name of the object in your gcsDestination section, for example: gs://redaction-vision/test.
Since asyncBatchAnnotate is an asynchronous operation, it won't return the result, it instead returns the name of the operation. You can use that unique name to call GetOperation to check the status of the operation.
Note that there could be more than one output file for your PDF if the PDF has more pages than batchSize, and the output JSON file names change depending on the number of pages, so it isn't safe to always assume "output-1-to-1.json".
Make sure that the URI prefix you put in the output config is unique, because you have to do a wildcard search in GCS on the prefix you provide to get all of the JSON files that were created.
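A minimal polling sketch for the GetOperation check mentioned above, assuming the operation name from the response in the question and an OAuth2 access token (both are placeholders):
import requests

# Placeholders: the operation name comes from the asyncBatchAnnotate response;
# an access token can be obtained e.g. via `gcloud auth print-access-token`.
OPERATION_NAME = "operations/a7e4e40d1e1ac4c5"
ACCESS_TOKEN = "ya29...."

resp = requests.get(
    f"https://vision.googleapis.com/v1/{OPERATION_NAME}",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
op = resp.json()
# Once "done" is true, the JSON result files are in the gcsDestination bucket;
# until then, keep polling.
print(op.get("done", False))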

When predicting, what are the valid values for dataFormat?

Problem
Using the REST API, I have trained and deployed a model that I now want to use for prediction. I've defined the collections for prediction input and output and uploaded a JSON file, formatted accordingly, to Cloud Storage. However, when trying to create a prediction job, I cannot figure out what value to use for the dataFormat field, which is a required parameter. Is there any way to list all valid values?
What I've tried
My requests look like the one below. I've tried JSON, NEWLINE_DELIMITED_JSON (as when importing data into BigQuery), and even the JSON MIME type application/json, in pretty much all the different casings I can think of (upper and lower combined with snake, camel, etc.).
{
  "jobId": "my_predictions_123",
  "predictionInput": {
    "modelName": "projects/myproject/models/mymodel",
    "inputPaths": [
      "gs://model-bucket/data/testset.json"
    ],
    "outputPath": "gs://model-bucket/predictions/0/",
    "region": "us-central1",
    "dataFormat": "JSON"
  },
  "predictionOutput": {
    "outputPath": "gs://my-bucket/predictions/1/"
  }
}
All my attempts have only gotten me this back, though:
{
  "error": {
    "code": 400,
    "message": "Invalid value at 'job.prediction_input.data_format' (TYPE_ENUM), \"JSON\"",
    "status": "INVALID_ARGUMENT",
    "details": [
      {
        "@type": "type.googleapis.com/google.rpc.BadRequest",
        "fieldViolations": [
          {
            "field": "job.prediction_input.data_format",
            "description": "Invalid value at 'job.prediction_input.data_format' (TYPE_ENUM), \"JSON\""
          }
        ]
      }
    ]
  }
}
Per the Cloud ML API reference document https://cloud.google.com/ml/reference/rest/v1beta1/projects.jobs#DataFormat, the dataFormat field in your request should be "TEXT" for all text inputs (including JSON, CSV, etc.).
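A sketch of resubmitting the same job with the corrected value, assuming the v1beta1 REST endpoint from that reference; PROJECT and ACCESS_TOKEN are placeholders:
import requests

# Placeholders: project id and an OAuth2 access token.
PROJECT = "myproject"
ACCESS_TOKEN = "ya29...."

job = {
    "jobId": "my_predictions_123",
    "predictionInput": {
        "modelName": f"projects/{PROJECT}/models/mymodel",
        "inputPaths": ["gs://model-bucket/data/testset.json"],
        "outputPath": "gs://model-bucket/predictions/0/",
        "region": "us-central1",
        "dataFormat": "TEXT",  # newline-delimited JSON input counts as TEXT
    },
}

resp = requests.post(
    f"https://ml.googleapis.com/v1beta1/projects/{PROJECT}/jobs",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=job,
)
print(resp.json())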