Google Cloud Vision API only returns "name" - google-cloud-platform

I am trying to use the Google Cloud Vision API.
I am using the REST API described at this link:
POST https://vision.googleapis.com/v1/files:asyncBatchAnnotate
My request is:
{
  "requests": [
    {
      "inputConfig": {
        "gcsSource": {
          "uri": "gs://redaction-vision/pdf_page1_employment_request.pdf"
        },
        "mimeType": "application/pdf"
      },
      "features": [
        {
          "type": "DOCUMENT_TEXT_DETECTION"
        }
      ],
      "outputConfig": {
        "gcsDestination": {
          "uri": "gs://redaction-vision"
        }
      }
    }
  ]
}
But the response only ever contains "name", like below:
{
  "name": "operations/a7e4e40d1e1ac4c5"
}
My "gs" location is valid.
When I write the wrong path in "gcsSource", 404 not found error is coming.
Who knows why my response is weird?

This is expected; the API will not send you the output as an HTTP response. To see what the API did, you need to go to your destination bucket and check for a file named "xxxxxxxxoutput-1-to-1.json". Also, you need to specify the name of the object in your gcsDestination section, for example: gs://redaction-vision/test.
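In other words, the gcsDestination URI should point at an object prefix rather than just the bucket. A sketch of the corrected outputConfig (the "test" prefix and the batchSize value here are only illustrative):
"outputConfig": {
  "gcsDestination": {
    "uri": "gs://redaction-vision/test"
  },
  "batchSize": 1
}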

Since asyncBatchAnnotate is an asynchronous operation, it won't return the result directly; it instead returns the name of the operation. You can use that unique name to call GetOperation and check the status of the operation.
Note that there could be more than one output file for your PDF if the PDF has more pages than batchSize, and the output JSON file names change depending on the number of pages, so it isn't safe to always append "output-1-to-1.json".
Make sure that the URI prefix you put in the output config is unique, because you have to do a wildcard search in GCS on the prefix you provide to get all of the JSON files that were created.
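For illustration, checking the operation returned in the question is a plain GET on the operation name; the response shape below is only a sketch of what to expect once the operation finishes:
GET https://vision.googleapis.com/v1/operations/a7e4e40d1e1ac4c5
{
  "name": "operations/a7e4e40d1e1ac4c5",
  "metadata": { ... },
  "done": true,
  "response": { ... }
}
Once "done" is true, list everything under your output prefix (e.g. gs://redaction-vision/test*) to collect all of the generated JSON files.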

Related

Power BI filtered export API is not working

I am working on an embedded Power BI report within Salesforce, where I am using a filter when exporting a file via the REST API. The filter JSON looks like below; it is passed in the body of the POST request callout.
{
  "format": "PDF",
  "powerBIReportConfiguration": {
    "ReportLevelFilters": [
      {
        "Filter": "User / Id in ('0055700000633IsAAI')"
      }
    ]
  }
}
The endpoint I am calling is
https://api.powerbi.com/v1.0/myorg/groups/XXXX-XXXX-XXXX-XXXX/reports/XXXX-XXXX-XXXX-XXXX/ExportTo
When the file is downloaded, I am getting all the data instead of the filtered data. Is there anything I am missing in the configuration?
Take the spaces out of the Table/Column expression, per the examples here; also, some of your JSON names don't have the correct case. Here's the Fiddler capture of a successful request using the Power BI .NET Client:
{
  "format": "PDF",
  "powerBIReportConfiguration": {
    "reportLevelFilters": [
      {
        "filter": "DimCustomer/CustomerAlternateKey in ('AW00011000')"
      }
    ]
  }
}
So something like
{
  "format": "PDF",
  "powerBIReportConfiguration": {
    "reportLevelFilters": [
      {
        "filter": "User/Id in ('0055700000633IsAAI')"
      }
    ]
  }
}
I am having this same issue. I saw a post regarding a Power Automate flow which highlighted that the filters need to be cleared when the report is published. However, even with this done, the reportLevelFilters do not seem to have an effect.
I have also tested the URL string params, which work fine as per these docs.
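For reference, the URL-parameter form that does work is along these lines (the report URL placeholders are illustrative, and the spaces may need URL-encoding depending on the client):
https://app.powerbi.com/groups/XXXX-XXXX-XXXX-XXXX/reports/XXXX-XXXX-XXXX-XXXX/ReportSection?filter=User/Id in ('0055700000633IsAAI')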

File in Postman Request Body is not Saved in the Collection

I have a POST request which validates a text/csv file in the request body. The request runs successfully in Postman: it returns HTTP code 200. The Request Body in the Postman Console is populated with the file path and name, i.e. src:"/Users/username/Downloads/demo_file.csv"; however, when the collection is exported, the file value in the request is empty. See below.
Question: Why is it empty? Is this a bug / known issue?
"key": "Content-Type",
"name": "Content-Type",
"value": "text/csv",
"type": "text"
}
],
"body": {
"mode": "file",
"file": {}
As a quick test, I added the file to the same location as the Postman collection and updated the value, i.e. "file": {demo_file.csv}, but the file was not found when the collection was run using newman.
Question: Should the relative path be used?
First of all, due to security reasons, the Postman runner doesn't support file uploads directly. Find further detail here.
Question: Why is it empty? Is this a bug / known issue?
No, that is not a bug or a known issue; it is by design.
Question: Should the relative path be used?
If your file is in the same location as the collection, you just need to give the file name without braces, as follows:
"mode": "file",
"file": "demo_file.csv"

How to upload large amounts of stopwords into AWS Elasticsearch

Is it possible to upload a stopwords.txt onto AWS Elasticsearch and specify it as a path by stop token filter?
If you're using AWS Elasticsearch, the only option to do this is via the Elasticsearch REST APIs.
To import large data sets, you can use the bulk API.
Edit: You can now upload "packages" to the AWS Elasticsearch service, which lets you add custom lists of stopwords, etc. See https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/custom-packages.html
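Once a package has been associated with your domain, it can be referenced by its package ID in stopwords_path. A sketch, assuming a made-up package ID F123456789:
PUT my_index
{
  "settings": {
    "analysis": {
      "filter": {
        "my_stop_filter": {
          "type": "stop",
          "stopwords_path": "analyzers/F123456789"
        }
      }
    }
  }
}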
No, it isn't possible to upload a stopwords.txt file to the hosted AWS Elasticsearch service.
What you will have to do is specify the stopwords in a custom analyzer. More details on how to do that can be found in the official documentation.
The official documentation then says to "close and reopen" the index, but again, AWS Elasticsearch doesn't allow that, so you will then have to reindex.
Example:
1. Create an index with your stopwords listed inline within a custom analyzer, e.g.
PUT /my_new_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "english_analyzer": {
          "type": "english",
          "stopwords": ["a", "the", "they", "and"]
        }
      }
    }
  }
}
2. Reindex
POST _reindex
{
  "source": {
    "index": "my_index"
  },
  "dest": {
    "index": "my_new_index"
  }
}
Yes, it is possible by setting stopwords_path when defining your stop token filter.
stopwords_path => A path (either relative to config location, or
absolute) to a stopwords file configuration. Each stop word should be
in its own "line" (separated by a line break). The file must be UTF-8
encoded.
Here is how I did it.
Copied the stopwords.txt file into the config folder of my Elasticsearch home path.
Created a custom token filter with the path set in stopwords_path:
PUT /testindex
{
  "settings": {
    "analysis": {
      "filter": {
        "teststopper": {
          "type": "stop",
          "stopwords_path": "stopwords.txt"
        }
      }
    }
  }
}
Verified that the filter was working as expected with the _analyze API:
GET testindex/_analyze
{
  "tokenizer": "standard",
  "token_filters": ["teststopper"],
  "text": "this is a text to test the stop filter",
  "explain": true,
  "attributes": ["keyword"]
}
The tokens 'a', 'an', 'the', 'to', and 'is' were filtered out, since I had added them to the config/stopwords.txt file.
For more info:
https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-stop-tokenfilter.html
https://www.elastic.co/guide/en/elasticsearch/reference/2.2/_explain_analyze.html

Google Data Transfer API says completed but nothing has happened?

I'm using the Data Transfer API to programmatically transfer the files owned by user A to user B as part of our exit process.
I look up the email addresses for the two users so that I can retrieve their IDs. I also query the list of data transfer applications to get the application ID for "Drive and Docs".
I pass the built transfer definition to the API and get the following JSON back:
{
  "kind": "admin#datatransfer#DataTransfer",
  "etag": "\"RV_wOygBiIUZUtakV6Iq44-H_Gw/2M4Z2X_c8OpsyQOJxtWDmIHcYzo\"",
  "id": "AKrEtIbF0aAg_4KK7-lHFOpRNPhcgAOWWDEK1HE0zD_EEY-bOPHXuj1rKNrEE-yHPYyjY8vzvZkK",
  "oldOwnerUserId": "101496053770427062754",
  "newOwnerUserId": "118268322014081744703",
  "applicationDataTransfers": [
    {
      "applicationId": "55656082996",
      "applicationTransferStatus": "pending"
    }
  ],
  "overallTransferStatusCode": "inProgress",
  "requestTime": "2017-03-31T10:50:48.560Z"
}
I then query the transfers API to get an update on that transfer and get the following back:
{
  'kind': 'admin#datatransfer#DataTransfer',
  'requestTime': '2017-03-31T10:50:48.560Z',
  'applicationDataTransfers': [
    {
      'applicationTransferStatus': 'completed',
      'applicationId': '55656082996'
    }
  ],
  'newOwnerUserId': '118268322014081744703',
  'oldOwnerUserId': '101496053770427062754',
  'etag': '"RV_wOygBiIUZUtakV6Iq44-H_Gw/ZVnLgj3YLcsURTSzNm8m91tNeC0"',
  'overallTransferStatusCode': 'completed',
  'id': 'AKrEtIbF0aAg_4KK7-lHFOpRNPhcgAOWWDEK1HE0zD_EEY-bOPHXuj1rKNrEE-yHPYyjY8vzvZkK'
}
and, indeed, I get a confirmation email that the files have been transferred.
However, if I look in Google Drive for both users, the files have NOT changed ownership. For user B, a new directory has been created with the email address of user A, but it contains no files and user A still owns all of their files.
What have I done wrong or misunderstood?
Thanks.
I faced the same issue; you need to provide "applicationTransferParams" with a key and value:
"applicationTransferParams": [
{
"key": string,
"value": [
string
]
}
]
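For the Drive and Docs application the key is typically PRIVACY_LEVEL, with PRIVATE and/or SHARED as values, which tells the API to transfer both private and shared files. A sketch of a full insert body using the IDs from the question:
{
  "oldOwnerUserId": "101496053770427062754",
  "newOwnerUserId": "118268322014081744703",
  "applicationDataTransfers": [
    {
      "applicationId": "55656082996",
      "applicationTransferParams": [
        {
          "key": "PRIVACY_LEVEL",
          "value": ["PRIVATE", "SHARED"]
        }
      ]
    }
  ]
}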

When predicting, what are the valid values for dataFormat?

Problem
Using the REST API, I have trained and deployed a model that I now want to use for prediction. I've defined the collections for prediction input and output and uploaded a json file formatted accordingly to the cloud storage. However, when trying to create a prediction job I cannot figure out what value to use for the dataFormat field, which is a required parameter. Is there any way to list all valid values?
What I've tried
My requests look like the one below. I've tried JSON, NEWLINE_DELIMITED_JSON (like when importing data into BigQuery), and even the JSON MIME type application/json, in pretty much every case variation I can think of (upper and lower combined with snake, camel, etc.).
{
  "jobId": "my_predictions_123",
  "predictionInput": {
    "modelName": "projects/myproject/models/mymodel",
    "inputPaths": [
      "gs://model-bucket/data/testset.json"
    ],
    "outputPath": "gs://model-bucket/predictions/0/",
    "region": "us-central1",
    "dataFormat": "JSON"
  },
  "predictionOutput": {
    "outputPath": "gs://my-bucket/predictions/1/"
  }
}
All my attempts have only gotten me this back though:
{
  "error": {
    "code": 400,
    "message": "Invalid value at 'job.prediction_input.data_format' (TYPE_ENUM), \"JSON\"",
    "status": "INVALID_ARGUMENT",
    "details": [
      {
        "@type": "type.googleapis.com/google.rpc.BadRequest",
        "fieldViolations": [
          {
            "field": "job.prediction_input.data_format",
            "description": "Invalid value at 'job.prediction_input.data_format' (TYPE_ENUM), \"JSON\""
          }
        ]
      }
    ]
  }
}
Per the Cloud ML API reference document https://cloud.google.com/ml/reference/rest/v1beta1/projects.jobs#DataFormat, the data format field in your request should be "TEXT" for all text inputs (including JSON, CSV, etc.).
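So the request from the question should be accepted once dataFormat is changed, for example:
{
  "jobId": "my_predictions_123",
  "predictionInput": {
    "modelName": "projects/myproject/models/mymodel",
    "inputPaths": [
      "gs://model-bucket/data/testset.json"
    ],
    "outputPath": "gs://model-bucket/predictions/0/",
    "region": "us-central1",
    "dataFormat": "TEXT"
  }
}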