Google Data Transfer API says completed but nothing has happened? - google-admin-sdk

I'm using the Data Transfer API to programmatically transfer the files owned by user A to user B as part of our exit process.
I look up the email addresses for the two users so that I can retrieve their IDs. I also query the list of data transfer applications to get the application ID for "Drive and Docs".
I pass the built transfer definition to the API and get the following JSON back:
{
  "kind": "admin#datatransfer#DataTransfer",
  "etag": "\"RV_wOygBiIUZUtakV6Iq44-H_Gw/2M4Z2X_c8OpsyQOJxtWDmIHcYzo\"",
  "id": "AKrEtIbF0aAg_4KK7-lHFOpRNPhcgAOWWDEK1HE0zD_EEY-bOPHXuj1rKNrEE-yHPYyjY8vzvZkK",
  "oldOwnerUserId": "101496053770427062754",
  "newOwnerUserId": "118268322014081744703",
  "applicationDataTransfers": [
    {
      "applicationId": "55656082996",
      "applicationTransferStatus": "pending"
    }
  ],
  "overallTransferStatusCode": "inProgress",
  "requestTime": "2017-03-31T10:50:48.560Z"
}
I then query the transfers API to get an update on that transfer and get the following back:
{
  'kind': 'admin#datatransfer#DataTransfer',
  'requestTime': '2017-03-31T10:50:48.560Z',
  'applicationDataTransfers': [
    {
      'applicationTransferStatus': 'completed',
      'applicationId': '55656082996'
    }
  ],
  'newOwnerUserId': '118268322014081744703',
  'oldOwnerUserId': '101496053770427062754',
  'etag': '"RV_wOygBiIUZUtakV6Iq44-H_Gw/ZVnLgj3YLcsURTSzNm8m91tNeC0"',
  'overallTransferStatusCode': 'completed',
  'id': 'AKrEtIbF0aAg_4KK7-lHFOpRNPhcgAOWWDEK1HE0zD_EEY-bOPHXuj1rKNrEE-yHPYyjY8vzvZkK'
}
and, indeed, I get a confirmation email that the files have been transferred.
However, if I look in Google Drive for both users, the files have NOT changed ownership. For user B, a new directory has been created with the email address of user A, but it contains no files and user A still owns all of their files.
What have I done wrong or misunderstood?
Thanks.

I faced the same issue: you need to provide "applicationTransferParams" with a key and value, otherwise the transfer reports completed without actually moving any files. The expected shape is:
"applicationTransferParams": [
  {
    "key": string,
    "value": [
      string
    ]
  }
]
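For the "Drive and Docs" application, the documented key is PRIVACY_LEVEL, and the values PRIVATE and SHARED select which files get moved. A minimal sketch of the insert call with those params, reusing the IDs from the question (the credentials object is assumed):

from googleapiclient.discovery import build

# Build the Admin SDK Data Transfer service with domain-admin credentials.
service = build("admin", "datatransfer_v1", credentials=credentials)

body = {
    "oldOwnerUserId": "101496053770427062754",
    "newOwnerUserId": "118268322014081744703",
    "applicationDataTransfers": [
        {
            "applicationId": "55656082996",  # "Drive and Docs"
            # Without these params the transfer completes as a no-op.
            "applicationTransferParams": [
                {"key": "PRIVACY_LEVEL", "value": ["PRIVATE", "SHARED"]}
            ],
        }
    ],
}

transfer = service.transfers().insert(body=body).execute()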

Related

How to get a list of meaningful names when fetching minted NFTs from the Alchemy API?

I am new to blockchain. I am using Alchemy, and my NFTs and NFT metadata are on Pinata. When I fetch my minted NFTs from the Alchemy API, the response is a list of contract addresses and token IDs. Is there any way to get a list of meaningful names for my minted NFTs instead of IDs (without using loops)? Or is there a way to store a meaningful name upon minting? Any help will be appreciated.
The response upon calling the API:
{"balance": "1", "contract": {"address": "0xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"}, "id": {"tokenId": "0x0000000000000000000000000000000000000000000000000000000000000000"}}
Is there any way to get a list of meaningful names for my minted NFTs instead of IDs (without using loops)?
Yes! As of July 12th, 2022 (time of writing), the getNFTs endpoint includes a withMetadata query param that defaults to true (see docs).
That means that the response should include the info you might want, including:
title: name of the NFT asset
description: brief human-readable description
media.gateway: public gateway uri for the raw asset
etc.
See full documentation here: https://docs.alchemy.com/alchemy/enhanced-apis/nft-api/getnfts
An example response might look like this:
{
  "ownedNfts": [
    {
      "contract": {
        "address": "0x0beed7099af7514ccedf642cfea435731176fb02"
      },
      "id": {
        "tokenId": "28",
        "tokenMetadata": {
          "tokenType": "ERC721"
        }
      },
      "title": "DuskBreaker #28",
      "description": "Breakers have the honor of serving humanity through their work on The Dusk. They are part of a select squad of 10,000 recruits who spend their days exploring a mysterious alien spaceship filled with friends, foes, and otherworldly technology.",
      "tokenUri": {
        "raw": "https://duskbreakers.gg/api/breakers/28",
        "gateway": "https://duskbreakers.gg/api/breakers/28"
      },
      "media": [
        {
          "raw": "https://duskbreakers.gg/breaker_images/28.png",
          "gateway": "https://duskbreakers.gg/breaker_images/28.png"
        }
      ],
      "metadata": {
        ...
      }
    },
    ...
  ]
  ...
}
You should then be able to do something like this to get your names:
const names = ownedNfts.map((nft) => nft.title);
Use the getNFTMetadata method to get information on each NFT.
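If you are calling the REST endpoint directly, the same lookup in Python might look like this (a sketch against the 2022-era NFT API v2 path; the API key and owner address are placeholders):

import requests

API_KEY = "your-alchemy-api-key"   # placeholder
OWNER = "0xYourWalletAddress"      # placeholder

url = f"https://eth-mainnet.g.alchemy.com/nft/v2/{API_KEY}/getNFTs"
resp = requests.get(url, params={"owner": OWNER, "withMetadata": "true"})
resp.raise_for_status()

# With metadata enabled, each entry carries title/description/media.
names = [nft.get("title") for nft in resp.json().get("ownedNfts", [])]
print(names)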

GCP recommendation data format for catalog

I am currently working with Recommendations AI. Since I am new to GCP recommendations, I have been struggling with the data format for the catalog. I read the documentation, and it says each product item's JSON should be on a single line.
I understand this totally, but it would be really great to see what the JSON format looks like in practice, because the one in the documentation is very ambiguous to me. I am trying to use the console to import the data.
I tried to import data that looked like the example below, but I got an "invalid JSON format" error 100 times, for lots of reasons such as unexpected tokens, something expected to be there, and so on.
[
  {
    "id": "1",
    "title": "Toy Story (1995)",
    "categories": [
      "Animation",
      "Children's",
      "Comedy"
    ]
  },
  {
    "id": "2",
    "title": "Jumanji (1995)",
    "categories": [
      "Adventure",
      "Children's",
      "Fantasy"
    ]
  },
  ...
]
Maybe it was because each item was not on a single line, but I am also wondering whether the above is enough for importing. I am not sure whether the data should be wrapped in another property, like:
{
  "inputConfig": {
    "productInlineSource": {
      "products": [
        {
          "id": "1",
          "title": "Toy Story (1995)",
          "categories": [
            "Animation",
            "Children's",
            "Comedy"
          ]
        },
        {
          "id": "2",
          "title": "Jumanji (1995)",
          "categories": [
            "Adventure",
            "Children's",
            "Fantasy"
          ]
        }
      ]
    }
  }
}
I can see the above in the documentation, but it says it is for importing inline, i.e. using a POST request. It does not mention anything about importing with the console. I guess the format is also used for the console, but I am not 100% sure; that is why I am asking.
Can anyone show me the entire data format for importing data using the console?
Problem Solved
For those who might have the same question: the exact data format you should import using the GCP console looks like
{"id":"1","title":"Toy Story (1995)","categories":["Animation","Children's","Comedy"]}
{"id":"2","title":"Jumanji (1995)","categories":["Adventure","Children's","Fantasy"]}
No square bracket wrapping all the items.
No comma between items.
Only each item on a single line.
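If your catalog already exists as a regular JSON array like the one in the question, converting it is mechanical; a minimal sketch in Python, with the file names assumed:

import json

# Rewrite a JSON array of catalog items as newline-delimited JSON:
# one item per line, no wrapping brackets, no separating commas.
with open("catalog_array.json") as src, open("catalog.json", "w") as dst:
    for item in json.load(src):
        dst.write(json.dumps(item) + "\n")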
Posting this Community Wiki for better visibility.
The OP edited the question and added a solution:
The exact data format you should import using the GCP console looks like
{"id":"1","title":"Toy Story (1995)","categories":["Animation","Children's","Comedy"]}
{"id":"2","title":"Jumanji (1995)","categories":["Adventure","Children's","Fantasy"]}
No square bracket wrapping all the items.
No comma between items.
Only each item on a single line.
However, I'd like to elaborate a bit.
There are a few ways of importing catalog information:
Importing catalog data from Merchant Center
Importing catalog data from BigQuery
Importing catalog data from Cloud Storage
I guess this is what the OP used, as I was able to import a catalog using the UI and GCS with the JSON file below.
{
  "inputConfig": {
    "catalogInlineSource": {
      "catalogItems": [
        {"id":"111","title":"Toy Story (1995)","categories":["Animation","Children's","Comedy"]}
        {"id":"222","title":"Jumanji (1995)","categories":["Adventure","Children's","Fantasy"]}
        {"id":"333","title":"Test Movie (2020)","categories":["Adventure","Children's","Fantasy"]}
      ]
    }
  }
}
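To stage that file for the Cloud Storage import path, uploading it with the storage client is enough (a sketch; the bucket name is a placeholder):

from google.cloud import storage

# Upload the catalog file so the console's GCS import can pick it up.
bucket = storage.Client().bucket("your-catalog-bucket")  # placeholder
bucket.blob("data.json").upload_from_filename("data.json")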
Importing catalog data inline
At the bottom of the Importing catalog information documentation you can find this information:
The line breaks are for readability; you should provide an entire catalog item on a single line. Each catalog item should be on its own line.
It means you should use something similar to NDJSON, a convenient format for storing or streaming structured data that can be processed one record at a time.
If you would like to try the inline method, you should use this format; each item should be a single line, though it is shown here with line breaks for readability.
data.json file
{
  "inputConfig": {
    "catalogInlineSource": {
      "catalogItems": [
        {
          "id": "1212",
          "category_hierarchies": [ { "categories": [ "Animation", "Children's" ] } ],
          "title": "Toy Story (1995)"
        },
        {
          "id": "5858",
          "category_hierarchies": [ { "categories": [ "Adventure", "Fantasy" ] } ],
          "title": "Jumanji (1995)"
        },
        {
          "id": "321123",
          "category_hierarchies": [ { "categories": [ "Comedy", "Adventure" ] } ],
          "title": "The Lord of the Rings: The Fellowship of the Ring (2001)"
        }
      ]
    }
  }
}
Command
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
  -H "Content-Type: application/json; charset=utf-8" \
  --data @./data.json \
  "https://recommendationengine.googleapis.com/v1beta1/projects/[your-project]/locations/global/catalogs/default_catalog/catalogItems:import"
{
  "name": "import-catalog-default_catalog-1179023525XX37366024",
  "done": true
}
Please keep in mind that the above method requires service account authentication, otherwise you will get a PERMISSION_DENIED error:
"message" : "Your application has authenticated using end user credentials from the Google Cloud SDK or Google Cloud Shell which are not supported by the translate.googleapis.com. We recommend that most server applications use service accounts instead. For more information about service accounts and how to use them in your application, see https://cloud.google.com/docs/authentication/.",
"status" : "PERMISSION_DENIED"

AWS Kinesis: user address in event encoded / encrypted

In my React Native mobile app, I use AWS Amplify to send info about user actions (screen views, button taps, swipes, etc.) by means of Analytics.record(...) to AWS Pinpoint, which in turn feeds them into an AWS Kinesis Data Stream. I have created an AWS Lambda Python 3 function that listens to events in this data stream.
Setup has been a breeze, thanks to outstanding documentation, and everything works fine, except for one thing:
When a user logs in, I update the Pinpoint endpoint with the user ID, email address, and some more attributes using Analytics.updateEndpoint(...). In the Lambda function, I base64-decode the event payload as shown in this sample code, and a sample event payload looks roughly like this:
{
  "event_type": "_session.start",
  "event_timestamp": 1572345161558,
  "application": {
    "app_id": "<some app ID>",
    "cognito_identity_pool_id": "us-east-1:<some pool ID>",
    "sdk": {},
    "version_name": "<the app version I put in using updateEndpoint(...)>"
    ... <snipped for brevity> ...
  },
  "attributes": {},
  "endpoint": {
    "ChannelType": "APNS",
    "Address": "=ABAQRuUDJD ... <some longish binary value> j0eL+69lsY=",
    "EndpointStatus": "ACTIVE",
    "Location": {
      "Country": "US"
    },
    "Demographic": {
      "Make": "iPhone",
      "Model": "iPhone X",
      "ModelVersion": "13.1.3",
      ...
      "Platform": "ios"
    },
    "User": {
      "UserId": "us-east-1:<Cognito ID of the user that logged in>",
      "UserAttributes": {}
    },
    ... <snipped for brevity> ...
  },
  "awsAccountId": "<my account ID>"
}
The user email address in the "Address" field above is not contained in the Kinesis Data Stream event as plain text, but is encoded (or encrypted?) somehow.
My question: can anybody tell me how it is encoded/encrypted? And, ideally, how do I get the plain text address?
I tried to base64-decode it or decrypt it using my default AWS KMS key (and a combination thereof), but no luck.
Alternatively, I could use the (plain text) user ID to look up the email address in the AWS Cognito user pool used to manage authentication and authorization, but getting it from the event directly would obviously be a lot simpler...
I have searched the web up and down and asked in the AWS-Amplify channel on Gitter, but that Address encoding/encryption just does not seem to be documented anywhere...
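For reference, the decoding step itself is unremarkable; a minimal sketch of the Python 3 handler (just the standard Kinesis event shape, nothing Pinpoint-specific assumed):

import base64
import json

def lambda_handler(event, context):
    """Decode each Kinesis record and inspect the Pinpoint event."""
    for record in event["Records"]:
        payload = base64.b64decode(record["kinesis"]["data"])
        pinpoint_event = json.loads(payload)
        # The Address value arrives already transformed; printing it
        # shows the same opaque string as in the payload above.
        print(pinpoint_event.get("endpoint", {}).get("Address"))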

Google Cloud Vision API only returns "name"

I am trying to use the Google Cloud Vision API.
I am using the REST API from this link:
POST https://vision.googleapis.com/v1/files:asyncBatchAnnotate
My request is:
{
  "requests": [
    {
      "inputConfig": {
        "gcsSource": {
          "uri": "gs://redaction-vision/pdf_page1_employment_request.pdf"
        },
        "mimeType": "application/pdf"
      },
      "features": [
        {
          "type": "DOCUMENT_TEXT_DETECTION"
        }
      ],
      "outputConfig": {
        "gcsDestination": {
          "uri": "gs://redaction-vision"
        }
      }
    }
  ]
}
But the response always contains only "name", like below:
{
  "name": "operations/a7e4e40d1e1ac4c5"
}
My "gs" location is valid.
When I write the wrong path in "gcsSource", 404 not found error is coming.
Who knows why my response is weird?
This is expected; it will not send you the output as an HTTP response. To see what the API did, you need to go to your destination bucket and check for a file named like "xxxxxxxxoutput-1-to-1.json". Also, you need to specify the name of the object in your gcsDestination section, for example: gs://redaction-vision/test.
Since asyncBatchAnnotate is an asynchronous operation, it won't return the result; instead it returns the name of the operation. You can use that unique name to call GetOperation to check the status of the operation.
Note that there could be more than one output file for your PDF if it has more pages than batchSize, and the output JSON file names change depending on the number of pages, so it isn't safe to always append "output-1-to-1.json".
Make sure that the uri prefix you put in the output config is unique, because you have to do a wildcard search in GCS on the prefix you provide to get all of the JSON files that were created.
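Putting those points together, a sketch using the google-cloud-vision Python client: it blocks on the operation and then lists whatever output shards were written under the prefix, instead of guessing file names (the bucket is the one from the question; the output/ prefix is an assumption):

from google.cloud import storage, vision

client = vision.ImageAnnotatorClient()

request = vision.AsyncAnnotateFileRequest(
    input_config=vision.InputConfig(
        gcs_source=vision.GcsSource(
            uri="gs://redaction-vision/pdf_page1_employment_request.pdf"
        ),
        mime_type="application/pdf",
    ),
    features=[vision.Feature(type_=vision.Feature.Type.DOCUMENT_TEXT_DETECTION)],
    output_config=vision.OutputConfig(
        gcs_destination=vision.GcsDestination(uri="gs://redaction-vision/output/"),
        batch_size=20,  # pages per output JSON file
    ),
)

operation = client.async_batch_annotate_files(requests=[request])
operation.result(timeout=300)  # blocks until the long-running operation finishes

# One output file per batch of pages; find them all via the prefix.
bucket = storage.Client().bucket("redaction-vision")
for blob in bucket.list_blobs(prefix="output/"):
    print(blob.name)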

Populating search results with meta data in Amazon CloudSearch

Unfortunately, Amazon CloudSearch does not support nested JSON, meaning that the below document structure is not valid.
[{
  "type": "add",
  "id": 1,
  "fields": {
    "company_name": "My Company",
    "services": [
      {
        "id": 123,
        "name": "Construction",
        "logo": "logo1.png"
      },
      {
        "id": 456,
        "name": "Programming",
        "logo": "logo2.png"
      }
    ]
  }
}]
Basically I cannot nest an array of objects under the services key. In this particular scenario, only the nested name field has to be searchable, so what I could do is the following:
[{
  "type": "add",
  "id": 1,
  "fields": {
    "company_name": "My Company",
    "services": [ "Construction", "Programming" ]
  }
}]
The above JSON is valid, and I can still search for the service names. However, now I have lost some meta data about my services that I need when displaying the search results. Is there any way in which I can add the meta data to the document in Amazon CloudSearch and have it returned with my search results, such that I can use it when displaying the results?
Or do I have to fetch this additional meta data from my database afterwards to populate the search results with the additional data required to display the results? This does not seem feasible because it complicates my code much more than if I could fetch this data straight from CloudSearch. This would also impact the performance of the search, even though I could use caching - but I kind of want to avoid that if possible, because I don't need it for anything else right now.
So my questions are:
Can I somehow add the meta data for services to the CloudSearch documents and have it returned with my search results?
If not, should I then extract this data from my data store upon receiving the search results from CloudSearch?
Do you have any other solutions or ideas? Are there any best practices with this?
Thank you in advance!
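For what it's worth, one common workaround for the first question (a sketch of a pattern, not a CloudSearch feature; the services_meta field name is made up) is to keep a parallel return-enabled literal-array field holding each service's metadata as a JSON-encoded string, then decode those strings client-side when rendering results:

[{
  "type": "add",
  "id": 1,
  "fields": {
    "company_name": "My Company",
    "services": [ "Construction", "Programming" ],
    "services_meta": [
      "{\"id\":123,\"name\":\"Construction\",\"logo\":\"logo1.png\"}",
      "{\"id\":456,\"name\":\"Programming\",\"logo\":\"logo2.png\"}"
    ]
  }
}]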