How to get a metric sample from monitoring APIs - google-cloud-platform

I took a very careful look at the Monitoring API. As far as I have read, it is possible to use gcloud to create Monitoring policies and to edit those policies (using the Alerting API).
Nevertheless, on the one hand it seems gcloud is only able to create and edit policies, not to read the results produced by such policies. From this page I read these options:
Creating new policies
Deleting existing policies
Retrieving specific policies
Retrieving all policies
Modifying existing policies
On the other hand, I read under result of a failed request:
"Summary of the result of a failed request to write data to a time series."
That rings a bell: it suggests I can get a list of results, such as all failed write requests during some period. But how?
My straight question is: can I somehow either listen to alert events or get a list of alert results through Monitoring API v3?
I also see tag_firestore_instance, which is somehow related to Firestore, but how do I use it and which information can I search for? I can't find anywhere how to use it. Maybe as a plain GET (e.g. Postman/curl) or from the gcloud shell.
PS: This question was originally posted in a Google Group, but I was encouraged to ask it here.
*** Edited after Alex's suggestion
I have an Angular page listening to a document in my Firestore database:
export class AppComponent {
    public transfers: Observable<any[]>;
    transferCollectionRef: AngularFirestoreCollection<any>;

    constructor(public auth: AngularFireAuth, public db: AngularFirestore) {
        this.listenSingleTransferWithToken();
    }

    async listenSingleTransferWithToken() {
        await this.auth.signInWithCustomToken("eyJ ... CVg");
        this.transferCollectionRef = this.db.collection<any>('transfer', ref => ref.where("id", "==", "1"));
        this.transfers = this.transferCollectionRef.snapshotChanges().map(actions => {
            return actions.map(action => {
                const data = action.payload.doc.data();
                const id = action.payload.doc.id;
                return { id, ...data };
            });
        });
    }
}
So I understand there should be at least one read count returned from:
name: projects/firetestjimis
filter: metric.type = "firestore.googleapis.com/document/read_count"
interval.endTime: 2020-05-07T15:09:17Z

It was a little difficult to follow what you were saying, but here's what I've figured out.
This is a list of available Firestore metrics: https://cloud.google.com/monitoring/api/metrics_gcp#gcp-firestore
You can then pass these metric types to this API:
https://cloud.google.com/monitoring/api/ref_v3/rest/v3/projects.timeSeries/list
On that page, I used the "Try This API" tool on the right side and filled in the following:
name = projects/MY-PROJECT-ID
filter = metric.type = "firestore.googleapis.com/api/request_count"
interval.endTime = 2020-05-05T15:01:23.045123456Z
In Chrome's inspector, I can see that this is the GET request that the tool made:
https://content-monitoring.googleapis.com/v3/projects/MY-PROJECT-ID/timeSeries?filter=metric.type%20%3D%20%22firestore.googleapis.com%2Fapi%2Frequest_count%22&interval.endTime=2020-05-05T15%3A01%3A23.045123456Z&key=API-KEY-GOES-HERE
EDIT:
The above returned 200, but with an empty JSON payload.
We also needed to add the following entry to get data to populate:
interval.startTime = 2020-05-04T15:01:23.045123456Z
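If you want to fetch the same data from code rather than the "Try This API" tool, here is a minimal sketch using the google-cloud-monitoring Python client. It assumes Application Default Credentials are configured; MY-PROJECT-ID and the 24-hour window are placeholders.

import time
from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
project_name = "projects/MY-PROJECT-ID"  # placeholder project id

# Query the last 24 hours of Firestore document read counts
now = time.time()
interval = monitoring_v3.TimeInterval(
    {
        "end_time": {"seconds": int(now)},
        "start_time": {"seconds": int(now - 24 * 3600)},
    }
)

results = client.list_time_series(
    request={
        "name": project_name,
        "filter": 'metric.type = "firestore.googleapis.com/document/read_count"',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)

# Print each data point in the returned time series
for series in results:
    for point in series.points:
        print(series.metric.type, point.interval.end_time, point.value.int64_value)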
Also try going to console.cloud.google.com/monitoring/metrics-explorer, type "firestore" in the "Find resource type and metric" box, and see whether Google's own dashboards have data populating. (This confirms that there is actually data there for you to fetch.)

Related

Sagemaker Data Capture does not write files

I want to enable data capture for a specific endpoint (so far, only via the console). The endpoint works fine and also logs & returns the desired results. However, no files are written to the specified S3 location.
Endpoint Configuration
The endpoint is based on a training job with a scikit-learn classifier. It has only one variant, which is an ml.m4.xlarge instance type. Data Capture is enabled with a sampling percentage of 100%. As the data capture storage location I tried s3://<bucket-name> as well as s3://<bucket-name>/<some-other-path>. For the "Capture content type" I tried leaving everything blank, setting text/csv under "CSV/Text", and application/json under "JSON".
Endpoint Invocation
The endpoint is invoked in a Lambda function with a client. Here's the call:
sagemaker_body_source = {
    "segments": segments,
    "language": language
}
payload = json.dumps(sagemaker_body_source).encode()
response = self.client.invoke_endpoint(EndpointName=endpoint_name,
                                       Body=payload,
                                       ContentType='application/json',
                                       Accept='application/json')
result = json.loads(response['Body'].read().decode())
return result["predictions"]
Internally, the endpoint uses a Flask API with an /invocation path that returns the result.
Logs
The endpoint itself works fine and the Flask API is logging input and output:
INFO:api:body: {'segments': [<strings...>], 'language': 'de'}
INFO:api:output: {'predictions': [{'text': 'some text', 'label': 'some_label'}, ....]}
Data capture can be enabled by using the SDK as shown below:
from sagemaker.model_monitor import DataCaptureConfig

data_capture_config = DataCaptureConfig(
    enable_capture=True, sampling_percentage=100, destination_s3_uri=s3_capture_upload_path
)

# Pass the data capture config when deploying the endpoint
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m4.xlarge",
    endpoint_name=endpoint_name,
    data_capture_config=data_capture_config,
)
Make sure to reference your data capture config in your endpoint creation step. I've always seen this method work. Can you try this and let me know? Reference notebook
NOTE - I work for AWS SageMaker, but my opinions are my own.
So the issue seemed to be related to the IAM role. The default role (ModelEndpoint-Role) does not have permission to write files to S3. It worked via the SDK since that uses a different role in SageMaker Studio. I did not receive any error message about this.
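If you run into the same problem, a rough sketch of granting the endpoint's execution role write access to the capture bucket with boto3 might look like this. The role name, policy name, and bucket name below are placeholders; adjust them to your setup.

import json
import boto3

iam = boto3.client("iam")

# Placeholders: your endpoint's execution role and your capture bucket
role_name = "ModelEndpoint-Role"
capture_bucket = "my-capture-bucket"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject"],
            "Resource": [f"arn:aws:s3:::{capture_bucket}/*"],
        }
    ],
}

# Attach an inline policy so the endpoint can write capture files to S3
iam.put_role_policy(
    RoleName=role_name,
    PolicyName="AllowDataCaptureS3Write",
    PolicyDocument=json.dumps(policy),
)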

Google Admin API: Directory - Unable to clear Recovery Email or Recovery Phone Number

I'm working on a project to automate the steps in the Google article Maintain data security after an employee leaves, and I have run into trouble automating the step Revoke Recovery Password Access. I'm following the documentation in the Google API Explorer for updating users and am performing an API request against the endpoint PUT https://admin.googleapis.com/admin/directory/v1/users/{userKey} with the following JSON as the body:
{
    "recoveryEmail": null,
    "recoveryPhone": null
}
I receive a 200 response code with no error, but the email and phone number remain.
If I try some dummy data (such as below) then the information updates fine:
{
    "recoveryEmail": "joe@bloggs.com",
    "recoveryPhone": "+1234567890"
}
I also tried:
{
    "recoveryEmail": "",
    "recoveryPhone": ""
}
Ideally I would like to clear the data rather than overwriting it with dummy values.
If you have a Google Workspace account, I think the problem may be related to propagation. I have tested it myself before, using the following, and it worked:
{
    "recoveryEmail": "",
    "recoveryPhone": ""
}
This Help Center article mentions that changes to users can take some time to appear: https://support.google.com/a/answer/7514107
I have just tested it again and it worked. I would suggest checking the user information in an incognito or private browser window, or waiting up to 24-48 hours for propagation.
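For reference, here is a minimal sketch of the same update using the Admin SDK Directory API Python client, assuming a service account with domain-wide delegation. The key file name, the admin address to impersonate, and the userKey are placeholders.

from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/admin.directory.user"]

# Placeholders: your service account key file and a super-admin to impersonate
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
).with_subject("admin@example.com")

directory = build("admin", "directory_v1", credentials=creds)

# Send empty strings to clear the recovery options for the user
directory.users().update(
    userKey="leaver@example.com",
    body={"recoveryEmail": "", "recoveryPhone": ""},
).execute()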

AWS Personalize: Dumping User-item interaction Dataset Created By PutEvent

Following the AWS Personalize documentation, I successfully imported my datasets (User, Item, Interaction) from S3, created an Event Tracker, trained the model, and deployed the campaign. The solution works without any issue and I get recommendations.
I rely on PutEvents to add new user-item interaction events. I also dump those interaction events to my S3 bucket using Lambda + Firehose. But I am wondering: does AWS Personalize internally create or augment the original user-item interaction dataset? How can I access and download the revised version of that dataset? I cannot see any new dataset in "Dataset groups > Datasets" other than my original 3 datasets...
I would prefer to dump it regularly from AWS Personalize to my S3 storage rather than use my own Lambda + Firehose solution.
This is the output of my PutEvents call. I see 200, but I'm not sure whether it actually worked. Should I see any new dataset in "Dataset groups > Datasets" created by PutEvents?
{
    "ResponseMetadata": {
        "RequestId": "a6c96496-cbd6-4ad8-9183-371d1794cbd8",
        "HTTPStatusCode": 200,
        "HTTPHeaders": {
            "content-type": "application/json",
            "date": "Mon, 04 Jan 2021 18:04:28 GMT",
            "x-amzn-requestid": "a6c96496-cbd6-4ad8-9183-371d1794cbd8",
            "content-length": "0",
            "connection": "keep-alive"
        },
        "RetryAttempts": 0
    }
}
Update: Now it's possible
AWS documentation:
https://docs.aws.amazon.com/personalize/latest/dg/export-data.html
You can use this AWS CLI command to export only the interactions that were added by PutEvents/PutUsers/PutItems API calls:
aws personalize create-dataset-export-job \
    --job-name <job name> \
    --dataset-arn <dataset ARN> \
    --job-output "{\"s3DataDestination\":{\"kmsKeyArn\":\"<kms key ARN>\",\"path\":\"s3://bucket-name/folder-name/\"}}" \
    --role-arn <role ARN> \
    --ingestion-mode PUT
In that case, --ingestion-mode PUT ensures that, as the documentation puts it:
"Specify PUT to export only data that you imported incrementally using the console or the PutEvents, PutUsers, or PutItems operations."
So I believe it covers your use case.
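If you prefer the SDK over the CLI, the equivalent call with boto3 might look roughly like this; the job name, all ARNs, and the S3 path are placeholders you need to fill in.

import boto3

personalize = boto3.client("personalize")

# Placeholders: your Interactions dataset ARN, an IAM role that can write to the bucket,
# the destination S3 path, and (optionally) a KMS key for encryption
personalize.create_dataset_export_job(
    jobName="interactions-export",
    datasetArn="<dataset ARN>",
    roleArn="<role ARN>",
    ingestionMode="PUT",  # export only incrementally imported data (PutEvents/PutUsers/PutItems)
    jobOutput={
        "s3DataDestination": {
            "path": "s3://bucket-name/folder-name/",
            "kmsKeyArn": "<kms key ARN>",
        }
    },
)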
No, it's not possible
It's simply impossible right now to export this data.
There is no API to retrieve a dump of your Interactions dataset in Personalize.
I believe the Lambda + Firehose workaround for this is the correct approach.
But how do you test whether PutEvents works?
To make sure that interactions are added through PutEvents, you can make use of the Filters feature:
https://docs.aws.amazon.com/personalize/latest/dg/filter-expressions.html
Pretty much create a new Filter with an expression similar to:
EXCLUDE ItemID WHERE Interactions.EVENT_TYPE IN ("your_event_type_name")
This will exclude from recommendations any item that the user previously interacted with.
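Creating the filter can also be done from code. A rough boto3 sketch could look like this; the filter name, dataset group ARN, and event type are placeholders.

import boto3

personalize = boto3.client("personalize")

# Placeholders: your dataset group ARN and the event type used in your interactions
personalize.create_filter(
    name="exclude-interacted-items",
    datasetGroupArn="<dataset group ARN>",
    filterExpression='EXCLUDE ItemID WHERE Interactions.EVENT_TYPE IN ("your_event_type_name")',
)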
Then you can test whether events added through the PutEvents API are recognized correctly:
1) Create a Filter expression as described above.
2) Create any campaign for simple recommendations (User-Personalization recipe).
3) Connect the filter to the campaign.
4) Get recommendations for any user and save them somewhere.
5) Call the PutEvents API with one of the recommended items returned in step 4 and the user id from step 4.
6) Get recommendations again for the same user as in step 4.
If the item that you added with the PutEvents call is no longer recommended, then you have proof that events added through PutEvents are correctly added to the Interactions dataset.
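For step 5, the PutEvents call with boto3 might look roughly like this. The tracking id, user id, session id, item id, and event type are placeholders and must match your event tracker and solution configuration.

import datetime
import boto3

personalize_events = boto3.client("personalize-events")

# Placeholders: your event tracker's tracking id and real user/session/item ids
personalize_events.put_events(
    trackingId="your-tracking-id",
    userId="user-from-step-4",
    sessionId="session-1",
    eventList=[
        {
            "eventType": "your_event_type_name",  # must match the eventType in the Solution and Filter
            "itemId": "recommended-item-from-step-4",
            "sentAt": datetime.datetime.now(),
        }
    ],
)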
What if the PutEvents call doesn't affect recommendations in that case?
Then you are simply providing incorrect values in the API call. Personalize might return a 200 response even if the event provided was invalid.
To fix that, try:
Make sure the date is in the correct format. Personalize might ignore events with very old timestamps if there are many newer events (this can be configured in the Solution config).
Check that you are not passing any strange values like "null" or "undefined" for sessionId, userId, or trackingId in the PutEvents params. That might cause Personalize to ignore the event (https://github.com/aws/aws-sdk-js/issues/3371).
Make sure you are passing the correct eventType value (it should match the eventType in the Solution and the Filter).
If it still doesn't work, raise a support ticket with AWS and include example PutEvents API call params.
Are there any simpler solutions?
Maybe, but in our project we use this approach, and it also tests whether the filtering feature is working correctly. You will probably make use of filtering anyway in the future, so I believe it's a good enough method.

Get all items in DynamoDB with API Gateway's Mapping Template

Is there a simple way to retrieve all items from a DynamoDB table using a mapping template in an API Gateway endpoint? I usually use a Lambda to process the data before returning it, but this is such a simple task that a Lambda seems like overkill.
I have a table that contains data with the following format:
roleAttributeName  roleHierarchyLevel  roleIsActive  roleName
"admin"            99                  true          "Admin"
"director"         90                  true          "Director"
"areaManager"      80                  false         "Area Manager"
I'm happy just getting the data; the representation doesn't matter, as I can transform it further down in my code.
I've been looking around, but all the tutorials explain how to get specific bits of data through queries and params like roles/{roleAttributeName}, whereas I just want to hit roles/ and get all items.
All you need to do is:
Create a resource (without curly braces, since we don't need a particular item).
Create a GET method.
Use Scan instead of Query as the Action when configuring the integration request.
Now try the Test feature; you should get the response.
To try it out in Postman, deploy the API first and then use the provided invoke URL in Postman, followed by your resource name.
API Gateway allows you to proxy DynamoDB as a service. Here is an interesting tutorial on how to do it (you can ignore the part related to the index to make it work).
To retrieve all the items from a table, you can use Scan as the action in API Gateway. Keep in mind that DynamoDB limits result sizes to 1 MB for both Scan and Query actions.
You can also limit your own query, before it is limited automatically, by using the Limit parameter.
AWS DynamoDB Scan Reference
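To illustrate what the Scan action behind the integration does, and how the 1 MB limit and the Limit parameter surface, here is a rough boto3 sketch; the table name "roles" is an assumption based on the question.

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("roles")  # assumed table name

items = []
response = table.scan(Limit=100)  # Limit caps items per page; each page is also capped at 1 MB
items.extend(response["Items"])

# Keep scanning until DynamoDB stops returning a pagination key
while "LastEvaluatedKey" in response:
    response = table.scan(Limit=100, ExclusiveStartKey=response["LastEvaluatedKey"])
    items.extend(response["Items"])

print(items)

Note that a mapping-template integration would need to handle LastEvaluatedKey itself if the table grows past one page.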

Can you stop Alexa Skill session programmatically in lambda function?

Is it possible to stop a session from inside AWS Lambda code if you run the Lambda separately from the skill?
I am trying to run an AWS Lambda function from SNS to stop the skill session.
From what I've read, my interpretation is that you're interested in ending the session on your Amazon Echo by sending an SNS message to an unrelated Lambda function. If that is correct, this is how I would proceed.
I have not tried this, but having extensive experience with Amazon Alexa Skills in Node.js, I would say this might be accomplished programmatically as follows:
1) Enable a DynamoDB table within your Alexa app (see the alexa.dynamoDBTableName line below).
var Alexa = require('alexa-sdk');

exports.handler = function(event, context, callback) {
    var alexa = Alexa.handler(event, context);
    alexa.appId = 'YOUR_ALEXA_SKILL_ID';
    alexa.dynamoDBTableName = 'NAME_OF_YOUR_DYNAMODB_TABLE'; // You don't need any other code; the SDK handles the saving and loading.
    alexa.registerHandlers(newSessionHandlers, newGameYesNoHandlers);
    alexa.execute();
};
Alexa will then create a single data table row for each unique user. Note you'll need to set up an IAM permission to allow Lambda to access this data table.
2) When your session starts, have it update a value in its own data table row so that this.attributes['EXECUTING'] = true;
3) During each consecutive intent call within the session, check the value of this.attributes['EXECUTING']. If the value is true, great; if it is false, end the session by emitting this.emit(':tell', "Goodbye!");
4) Now, the data table rows are indexed by userId. This is the tricky part that I have not personally tried. I suggest creating an intent where the user asks for their userId, then emitting a card with the id value on it. This could then be copied into your own SMS Lambda function or some other API. Alternatively, Amazon has released documentation on how to link Alexa users with users in your own system:
https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/linking-an-alexa-user-with-a-user-in-your-system
It really depends on what you're hoping the final product will look like.
5) Finally, use your external API or alternate Lambda function along with the user's userId to access the same DynamoDB table and alter the value of ['EXECUTING'] to false. The next intent that runs will then check the database and cause the skill to exit.
Voila!
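As a rough sketch of step 5, an external Lambda (here in Python with boto3, though any language works) could flip the flag like this. This assumes the v1 alexa-sdk table layout, i.e. a partition key named userId with session attributes stored in a map attribute (often mapAttr); verify the actual item shape in your table before relying on it.

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("NAME_OF_YOUR_DYNAMODB_TABLE")

def lambda_handler(event, context):
    # The Alexa userId you collected via the card / account linking step (hypothetical event field)
    user_id = event["alexaUserId"]

    # Flip the EXECUTING flag inside the stored session attributes;
    # the skill checks this value on the next intent and exits if it is false.
    table.update_item(
        Key={"userId": user_id},
        UpdateExpression="SET mapAttr.EXECUTING = :off",
        ExpressionAttributeValues={":off": False},
    )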
For more on specific code, account linking, and SMS, please ask additional questions. Thanks!