I configure my DataStore with a syncExpression like this:
syncExpression(MyTable, () => {
  return data => data.tenant_id('eq', "123");
})
I receive all updates in DataStore.observe(), but when I update tenant_id from 123 to 345, the DataStore subscription does not fire for that record.
How do I get notified of this change in DataStore? I want the subscription to fire in this case so I can complete the relevant task.
Related
In my DynamoDB table I have about 200k datapoints, and there will be more in the future. When I log out, my local storage gets cleared. When I log in, DataStore starts to sync with the cloud. The problem is that syncing over 200k datapoints takes really long. The datapoints are sensor data that is displayed on a chart.
My idea is to fetch only the data I need directly from the database, without bloating up my entire local storage.
Is there a way to fetch the data I need without saving it into the offline storage? I was also considering an AWS time-series database for my chart data instead.
A syncExpression configuration is required to sync only the specific data you need.
DOC: https://docs.amplify.aws/lib/datastore/sync/q/platform/js/
import { DataStore, syncExpression } from 'aws-amplify';
import { Post, Comment } from './models';

DataStore.configure({
  syncExpressions: [
    syncExpression(Post, () => {
      return post => post.rating.gt(5);
    }),
    syncExpression(Comment, () => {
      return comment => comment.status.eq('active');
    }),
  ]
});
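If the value you sync by changes at runtime (such as the tenant in the question above), the sync expression can read it dynamically. A minimal sketch, assuming a hypothetical getTenantId() helper; per the Amplify docs, sync expressions are re-evaluated whenever DataStore starts, so after the tenant changes you can refresh the locally synced subset with DataStore.clear() followed by DataStore.start():

import { DataStore, syncExpression } from 'aws-amplify';
import { MyTable } from './models';

DataStore.configure({
  syncExpressions: [
    syncExpression(MyTable, async () => {
      // getTenantId() is a hypothetical helper returning the current tenant
      const tenantId = await getTenantId();
      return item => item.tenant_id.eq(tenantId);
    }),
  ],
});

// After switching tenants, force the expression to be re-evaluated:
async function switchTenant() {
  await DataStore.clear();
  await DataStore.start();
}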
I have a lambda that exports all of our log groups to S3, and I am currently using cloudwatchlogs.describeLogGroups to list all of our log groups:
const logGroupsResponse = await cloudwatchlogs.describeLogGroups({ limit: 50 }).promise()
The issue is that we have 69 log groups. Is there any way to list the IDs and names of absolutely all log groups in an AWS account? I see it's possible to have up to 1,000 log groups.
Why does cloudwatchlogs.describeLogGroups allow a limit of only 50, which seems very small?
Assuming that you are using AWS JS SDK v2, the describeLogGroups API provides a nextToken in its response and also accepts a nextToken in the request. This token is used to retrieve more than 50 log groups by sending multiple requests. We can use the following pattern to accomplish this:
const cloudwatchlogs = new AWS.CloudWatchLogs({ region: 'us-east-1' });
let nextToken = null;
do {
  const logGroupsResponse = await cloudwatchlogs.describeLogGroups({ limit: 50, nextToken: nextToken }).promise();
  // Do something with the retrieved log groups
  console.log(logGroupsResponse.logGroups.map(group => group.arn));
  // Get the next token. If there are no more log groups, the token will be undefined
  nextToken = logGroupsResponse.nextToken;
} while (nextToken);
We are querying the AWS API in a loop until there are no more log groups left.
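If you are on (or migrating to) AWS JS SDK v3, the client package ships a generated paginator that handles the token bookkeeping for you. A rough sketch, assuming @aws-sdk/client-cloudwatch-logs is installed:

const { CloudWatchLogsClient, paginateDescribeLogGroups } = require('@aws-sdk/client-cloudwatch-logs');

const client = new CloudWatchLogsClient({ region: 'us-east-1' });

async function listAllLogGroups() {
  const groups = [];
  // The paginator issues DescribeLogGroups calls and follows nextToken internally
  for await (const page of paginateDescribeLogGroups({ client }, { limit: 50 })) {
    groups.push(...(page.logGroups || []));
  }
  return groups.map(group => ({ name: group.logGroupName, arn: group.arn }));
}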
I'm building a simple multiplayer game using AppSync (to avoid having to manage WebSockets from scratch) and Amplify (specifically on Android, using Kotlin).
The problem is that players connected to a game should listen for updates about the other players (consider that I have a DynamoDB table whose primary key is a UUID, along with the corresponding game-id and position of each player).
So what I've done is that, after a player joins his game session, he registers himself to onUpdate of Player... however, he now receives update notifications for all players. Even though I can easily filter these incoming events client side, I think it's a terrible idea, since AppSync has to notify all registered users of all updates.
Instead, I would like to specify a filter on the subscription, like when you perform a query, something like:
Amplify.API.subscribe(
    ModelSubscription.onUpdate(
        Player::class.java,
        Player.GAMEROOM.eq(currentPlayer.gameroom.id) // only players in the same gameroom
    ),
    { Log.i("ApiQuickStart", "Subscription established") },
    { onCreated ->
        ...
    },
    { onFailure -> Log.e("ApiQuickStart", "Subscription failed", onFailure) },
    { Log.i("ApiQuickStart", "Subscription completed") }
)
Are there ways to achieve this?
Since this is the only type of subscription I will make on Player updates, maybe something could be done with a custom resolver?
You need to add one more param to your subscription and to its subscribed mutations. I guess your GraphQL schema looks like this:
type Subscription {
  onUpdate(gameRoomId: ID!, playerId: ID): UpdateUserResult
    @aws_subscribe(mutations: ["leaveCurrentGameRoom", <other_interested_mutation>])
}

type Mutation {
  leaveCurrentGameRoom: UpdateUserResult
}

type UpdateUserResult {
  gameRoomId: ID!
  playerId: ID!
  # other fields
}
Notice that Subscription.onUpdate has an optional param, playerId. The subscription can be used as follows:
onUpdate("room-1") // AppSync gives updates of all players in "room-1"
onUpdate("room-1", "player-A") // AppSync gives updates of "player-A" in "room-1"
In order for AppSync to filter out irrelevant updates, the subscribed mutations, such as leaveCurrentGameRoom, have to return a type that includes the two fields gameRoomId and playerId. Please note that you may also need to update the resolvers of those mutations to resolve data for these fields.
Here is another example of subscription arguments:
https://docs.aws.amazon.com/appsync/latest/devguide/aws-appsync-real-time-data.html#using-subscription-arguments
I'm trying to use RDSDataService to query an Aurora Serverless database. When I try to query, my lambda just times out (I've set the timeout to 5 minutes just to make sure that isn't the problem). I have 1 record in my database, and when I try to query it, I get no results, and neither the error nor the data callback is invoked. I've verified executeSql is called by removing the dbClusterOrInstanceArn from my params, which makes it throw the exception for the missing parameter.
I have also run SHOW FULL PROCESSLIST in the query editor to see if the queries were still running, and they are not. I've given the lambda both the AmazonRDSFullAccess and AmazonRDSDataFullAccess policies without any luck either. As you can see from the code below, I've already tried what was recommended in issue #2376.
Not that this should matter, but this lambda is triggered by a Kinesis event trigger.
const AWS = require('aws-sdk');

exports.handler = (event, context, callback) => {
  const RDS = new AWS.RDSDataService({ apiVersion: '2018-08-01', region: 'us-east-1' });
  for (const record of event.Records) {
    const payload = JSON.parse(Buffer.from(record.kinesis.data, 'base64').toString('utf-8'));
    const data = compileItem(payload);
    const params = {
      awsSecretStoreArn: 'arn:aws:secretsmanager:us-east-1:149070771508:secret:xxxxxxxxx',
      dbClusterOrInstanceArn: 'arn:aws:rds:us-east-1:149070771508:cluster:xxxxxxxxx',
      sqlStatements: `select * from MY_DATABASE.MY_TABLE`
      // database: 'MY_DATABASE'
    }
    console.log('calling executeSql');
    RDS.executeSql(params, (error, data) => {
      if (error) {
        console.log('error', error)
        callback(error, null);
      } else {
        console.log('data', data);
        callback(null, { success: true })
      }
    });
  }
}
EDIT: We've run the command through the AWS CLI and it returns results.
EDIT 2: I'm able to connect to the database using the mysql2 package through the URI, so it's definitely an issue with either the aws-sdk or how I'm using it.
Node.js execution is not waiting for the result, which is why the process exits before completing the request.
Use a MySQL library such as serverless-mysql: https://www.npmjs.com/package/serverless-mysql
OR
set context.callbackWaitsForEmptyEventLoop = false
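For reference, the usual way to keep a Node.js Lambda alive until the SDK call finishes is to use an async handler and await the request's promise instead of mixing callbacks with a loop. A minimal sketch of the handler above rewritten that way (same placeholder ARNs as in the question):

const AWS = require('aws-sdk');

exports.handler = async (event) => {
  const RDS = new AWS.RDSDataService({ apiVersion: '2018-08-01', region: 'us-east-1' });
  for (const record of event.Records) {
    const payload = JSON.parse(Buffer.from(record.kinesis.data, 'base64').toString('utf-8'));
    console.log('payload', payload);
    const params = {
      awsSecretStoreArn: 'arn:aws:secretsmanager:us-east-1:149070771508:secret:xxxxxxxxx',
      dbClusterOrInstanceArn: 'arn:aws:rds:us-east-1:149070771508:cluster:xxxxxxxxx',
      sqlStatements: 'select * from MY_DATABASE.MY_TABLE'
    };
    // Awaiting the promise keeps the handler alive until the call completes
    const data = await RDS.executeSql(params).promise();
    console.log('data', data);
  }
  return { success: true };
};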
The problem was that the RDS cluster had been created in a VPC that the Lambdas were not in.
I want to retrieve a table (with all rows) by name. I want to make an HTTP request with something like this in the body: {"table": "user"}.
I tried this code without success:
'use strict';
const { Datastore } = require('@google-cloud/datastore');

// Instantiates a client
const datastore = new Datastore();

exports.getUsers = (req, res) => {
  // Get list
  const query = datastore.createQuery('users');
  datastore.runQuery(query)
    .then(results => {
      const customers = results[0];
      console.log('Users:');
      customers.forEach(customer => {
        const cusKey = customer[datastore.KEY];
        console.log(cusKey.id);
        console.log(customer);
      });
      res.status(200).json(customers);
    })
    .catch(err => {
      console.error('ERROR:', err);
      res.status(500).send(err.message);
    });
};
Google Datastore is a NoSQL database that works with entities rather than tables. What you want is to load all the "records", which are key identifiers in Datastore, along with all their "properties", which are the "columns" you see in the Console. But you want to load them based on the "Kind" name, which is the "table" you are referring to.
Here is a solution for retrieving all the key identifiers and their properties from Datastore, using an HTTP-triggered Cloud Function running in the Node.js 8 environment:
Create a Google Cloud Function and set its trigger to HTTP.
Choose Node.js 8 as the runtime.
In index.js, replace all the code with this GitHub code.
In package.json, add:
{
  "name": "sample-http",
  "version": "0.0.1",
  "dependencies": {
    "@google-cloud/datastore": "^3.1.2"
  }
}
Under Function to execute, add loadDataFromDatastore, since this is the name of the function we want to execute.
NOTE: This will log all the loaded records into the Stackdriver logs of the Cloud Function. The response for each record is JSON, so you will have to convert the response to a JSON object to get the data you want. Get the idea and modify the code accordingly.
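To return the records over HTTP and pick the Kind from the request body, as the original question asks, something along these lines should work. A minimal sketch, assuming the same loadDataFromDatastore entry point and a body like {"table": "users"}:

'use strict';
const { Datastore } = require('@google-cloud/datastore');
const datastore = new Datastore();

exports.loadDataFromDatastore = async (req, res) => {
  // The Kind ("table") name comes from the request body, e.g. {"table": "users"}
  const kind = req.body.table;
  const query = datastore.createQuery(kind);
  const [entities] = await datastore.runQuery(query);
  // Attach each entity's key id/name so callers can identify the records
  const records = entities.map(entity => ({
    key: entity[datastore.KEY].id || entity[datastore.KEY].name,
    data: entity,
  }));
  res.status(200).json(records);
};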