My putItem is working. Now I want to update an existing item only with newer information, or else add it as a new item:
ConditionExpression:
Create new item if partition:sort doesn't exist
Update if attribute generated < :newTimestamp
So I added one line to the code:
putItemRequest.SetConditionExpression(" attribute_not_exists(" + partitionName + ") OR (attribute_exists(" + partitionName + ") AND (" + timestampName + " < :" + timestamp + "))");
This should create a new item, but it seems to be trying to evaluate the attribute 'generated', which does not exist for a new item.
The error on the putItem return:
Invalid ConditionExpression: An expression attribute value used in expression is not defined; attribute value: :1461782160
From the debugger, the conditionExpression looks like:
m_conditionExpression = " attribute_not_exists(airport) OR (attribute_exists(airport) AND (generated < :1461782160))"
I am trying to avoid:
looking up the partition:sort
if it does not exist, putItem
else check the generated attribute
then if generated < newTimestamp update the item
Is there a way to construct the conditionExpression to meet my expectation?
Edit: Same problem when using updateItem
Code:
UpdateItemRequest updateItemRequest;
updateItemRequest.WithTableName(dynamoDbTableName);
AttributeValue hashPartition;
hashPartition.SetS(partition);
updateItemRequest.AddKey(partitionName, hashPartition);
AttributeValue hashSort;
hashSort.SetS(sort);
updateItemRequest.AddKey(sortName, hashSort);
AttributeValue hashAttribute;
hashAttribute.SetS(attribute);
Aws::Map<Aws::String, AttributeValue> attributeMap;
attributeMap[":a"] = hashAttribute;
updateItemRequest.SetUpdateExpression("SET " + timestampName + " = :" + timestamp + ", " + attributeName + " = :a");
updateItemRequest.SetExpressionAttributeValues(attributeMap);
// Allow only older items to be updated
updateItemRequest.SetConditionExpression("(" + timestampName + " < :" + timestamp + ")");
auto updateItemOutcome = dynamoDbClient.UpdateItem(updateItemRequest);
Error:
Invalid UpdateExpression: An expression attribute value used in expression is not defined; attribute value: :1461781980
That attribute value is the timestamp. It's not defined because this item doesn't exist and should be created.
Here is my current work around:
ClientConfiguration config;
config.region = Aws::Region::US_WEST_2;
Aws::DynamoDB::DynamoDBClient dynamoDbClient(config);
Aws::Map<Aws::String, AttributeValue> aMap;
PutItemRequest putItemRequest;
putItemRequest.WithTableName(dynamoDbTableName);
AttributeValue hashPartition;
hashPartition.SetS(partition);
putItemRequest.AddItem(partitionName, hashPartition);
aMap[":p"] = hashPartition;
AttributeValue hashSort;
hashSort.SetS(sort);
putItemRequest.AddItem(sortName, hashSort);
aMap[":s"] = hashSort;
AttributeValue hashTimestamp;
hashTimestamp.SetN(timestamp);
putItemRequest.AddItem(timestampName, hashTimestamp);
AttributeValue hashAttribute;
hashAttribute.SetS(attribute);
putItemRequest.AddItem(attributeName, hashAttribute);
// Do not update existing items
putItemRequest.SetConditionExpression("NOT((" + partitionName + " = :p) AND (" + sortName + " = :s))");
putItemRequest.SetExpressionAttributeValues(aMap);
auto putItemOutcome = dynamoDbClient.PutItem(putItemRequest);
if(putItemOutcome.IsSuccess())
{
poco_information(logger, "writeDb PutItem Success: " + partition + ":" + sort);
status = SWIMPROCESSOR_OK;
}
else
{
if(putItemOutcome.GetError().GetErrorType() == DynamoDBErrors::CONDITIONAL_CHECK_FAILED) {
// item exists, try to update
Aws::Map<Aws::String, AttributeValue> uMap;
uMap[":t"] = hashTimestamp;
uMap[":a"] = hashAttribute;
UpdateItemRequest updateItemRequest;
updateItemRequest.WithTableName(dynamoDbTableName);
updateItemRequest.AddKey(partitionName, hashPartition);
updateItemRequest.AddKey(sortName, hashSort);
updateItemRequest.SetUpdateExpression("SET " + timestampName + " = :t, " + attributeName + " = :a");
updateItemRequest.SetExpressionAttributeValues(uMap);
// Allow only older items to be updated
updateItemRequest.SetConditionExpression(timestampName + " < :t");
auto updateItemOutcome = dynamoDbClient.UpdateItem(updateItemRequest);
if(updateItemOutcome.IsSuccess())
{
poco_information(logger, "writeDb UpdateItem Success: " + partition + ":" + sort);
status = SWIMPROCESSOR_OK;
}
else
{
if(updateItemOutcome.GetError().GetErrorType() == DynamoDBErrors::CONDITIONAL_CHECK_FAILED) {
poco_information(logger, "writeDb UpdateItem new timestamp is older than current timestamp");
status = SWIMPROCESSOR_OK;
} else {
std::string msg(updateItemOutcome.GetError().GetMessage());
poco_error(logger, "writeDb UpdateItem Failure: " + msg);
status = SWIMPROCESSOR_DBWRITEERROR;
}
}
} else {
std::string msg(putItemOutcome.GetError().GetMessage());
poco_error(logger, "writeDb PutItem Failure: " + msg);
status = SWIMPROCESSOR_DBWRITEERROR;
}
}
The service's error message says that :1461782160 is used in the expression but never defined in the attribute map. The text after the colon is a placeholder name, not a value, so the timestamp must not be spliced into the expression string. The UpdateExpression should be "SET " + timestampName + " = :timestamp, " + attributeName + " = :a"
and your map should be defined as follows:
AttributeValue hashAttributeA;
hashAttributeA.SetS(attribute);
AttributeValue hashAttributeTimestamp;
hashAttributeTimestamp.SetN(timestamp);
Aws::Map<Aws::String, AttributeValue> attributeMap;
attributeMap[":a"] = hashAttributeA;
attributeMap[":timestamp"] = hashAttributeTimestamp;
I'm trying to make my own HTTP request to query cloudwatch metrics from AWS API. The reason is because I need to define my query in JSON format (similar to how aws cloudwatch get-metric-data --cli-input-json <json_file_name> works). And according to this link, this should be possible. However, I'm having some trouble properly making and signing my HTTP request. With the following code, I'm getting {"__type":"com.amazon.coral.service#UnknownOperationException"} error. And there is little information in the response to help me troubleshoot. Am I signing my request wrong? Or am I missing some parameters/headers?
String query = "{\n" +
" \"StartTime\": 1628589600,\n" +
" \"EndTime\": 1628590200,\n" +
" \"MetricDataQueries\": [\n" +
" {\n" +
" \"Id\": \"mymetric1\",\n" +
" \"Label\": \"counter\",\n" +
" \"MetricStat\": {\n" +
" \"Metric\": {\n" +
" \"Namespace\": \"AWS/EC2\",\n" +
" \"MetricName\": \"CPUUtilization\",\n" +
" \"Dimensions\": [\n" +
" {\n" +
" \"Name\": \"InstanceId\",\n" +
" \"Value\": \"my_instance_id\"\n" +
" }\n" +
" ]\n" +
" },\n" +
" \"Period\": 60,\n" +
" \"Stat\": \"Average\",\n" +
" \"Unit\": \"Percent\"\n" +
" }\n" +
" }\n" +
" ]\n" +
"}";
ProfileCredentialsProvider credProvider = ProfileCredentialsProvider.builder()
.profileName("my_profile").build();
Aws4PresignerParams params = Aws4PresignerParams.builder()
.doubleUrlEncode(true)
.awsCredentials(credProvider.resolveCredentials())
.signingName("monitoring")
.signingRegion(Region.US_EAST_1)
.timeOffset(0)
.build();
SdkHttpFullRequest requestToSign = SdkHttpFullRequest.builder()
.method(SdkHttpMethod.POST)
.protocol("https")
.host("monitoring.us-east-1.amazonaws.com")
.appendRawQueryParameter("Action", "GetMetricData")
.appendRawQueryParameter("Version", "2010-08-01")
.appendRawQueryParameter("StartTime", "1628589600")
.appendRawQueryParameter("EndTime", "1628590200")
.appendRawQueryParameter("MetricDataQueries.member.N", "1")
.contentStreamProvider(() -> new ByteArrayInputStream(query.getBytes(StandardCharsets.UTF_8)))
.build();
SdkHttpFullRequest signedRequest = AwsS3V4Signer.create().presign(requestToSign, params);
URL url = signedRequest.getUri().toURL();
OkHttpClient client = new OkHttpClient();
RequestBody requestBody = RequestBody.create(query.getBytes(StandardCharsets.UTF_8), MediaType.parse("application/x-amz-json-1.0"));
Request r = new Request.Builder()
.url(url)
.post(requestBody)
.build();
try (Response response = client.newCall(r).execute()) {
System.out.println(response.body().string());
}
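Two details in this snippet are worth checking: AwsS3V4Signer is the S3-specific signer variant, and presign produces query-string authentication, which is unusual for a POST carrying a body. A more conventional route is header-based SigV4 with the generic Aws4Signer, copying the signed headers onto the OkHttp request. A minimal sketch under that assumption, reusing credProvider, requestToSign, query, and client from above (this addresses the signing mechanics; the UnknownOperationException may also stem from mixing query-protocol Action parameters with a JSON body, which is worth testing separately):
// Sign with the generic SigV4 signer (header-based auth), not the S3 presigner.
Aws4SignerParams signerParams = Aws4SignerParams.builder()
        .awsCredentials(credProvider.resolveCredentials())
        .signingName("monitoring")
        .signingRegion(Region.US_EAST_1)
        .build();
SdkHttpFullRequest signed = Aws4Signer.create().sign(requestToSign, signerParams);

// Copy every signed header (Authorization, X-Amz-Date, ...) onto the OkHttp call
// and send exactly the same body bytes that were hashed during signing.
Request.Builder builder = new Request.Builder()
        .url(signed.getUri().toURL())
        .post(RequestBody.create(query.getBytes(StandardCharsets.UTF_8),
                MediaType.parse("application/x-amz-json-1.0")));
signed.headers().forEach((name, values) ->
        values.forEach(value -> builder.addHeader(name, value)));
try (Response response = client.newCall(builder.build()).execute()) {
    System.out.println(response.body().string());
}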
Is there a way to export multiple SQL tables as CSV by issuing specific queries from Cloud SQL?
Below is the code I currently have. When I call exportTables for multiple tables back to back, I see a 409 error, probably because the Cloud SQL instance is busy with an export and does not allow a subsequent export request.
How can I get this to work? What would be the ideal solution here?
private void exportTables(String table_name, String query)
throws IOException, InterruptedException {
HttpClient httpclient = new HttpClient();
PostMethod httppost =
new PostMethod(
"https://www.googleapis.com/sql/v1beta4/projects/"
+ "abc"
+ "/instances/"
+ "zxy"
+ "/export");
String destination_bucket =
String.join(
"/",
"gs://" + "test",
table_name,
DateTimeUtil.getCurrentDate() + ".csv");
GoogleCredentials credentials =
GoogleCredentials.getApplicationDefault().createScoped(SQLAdminScopes.all());
AccessToken access_token = credentials.refreshAccessToken();
httppost.addRequestHeader("Content-Type", "application/json");
httppost.addRequestHeader("Authorization", "Bearer " + access_token.getTokenValue());
String request =
"{"
+ " \"exportContext\": {"
+ " \"fileType\": \"CSV\","
+ " \"uri\":\""
+ destination_bucket
+ "\","
+ " \"databases\": [\""
+ "xyz"
+ "\"],"
+ " \"csvExportOptions\": {"
+ " \"selectQuery\": \""
+ query
+ "\""
+ " }\n"
+ " }"
+ "}";
httppost.setRequestEntity(new StringRequestEntity(request, "application/json", "UTF-8"));
httpclient.executeMethod(httppost);
if (httppost.getStatusCode() > 200) {
String response = new String(httppost.getResponseBody(), StandardCharsets.UTF_8);
if (httppost.getStatusCode() != 409) {
throw new RuntimeException(
"Exception occurred while exporting the table: " + table_name + " Error " + response);
} else {
throw new IOException("SQL instance seems to be busy at the moment. Please retry");
}
}
httppost.releaseConnection();
logger.info("Finished exporting table {} to {}", table_name, destination_bucket);
}
I don't have a suggestion for fixing the issue on Cloud SQL directly, but I can offer a way to execute the exports in sequence thanks to a new tool: Workflows.
Define the data format that you want, in JSON, to describe ONE export.
Then provide an array of those configurations to your workflow.
In this workflow:
Loop over the configuration array
Perform an API call to Cloud SQL to start the export for each configuration
Read the API call's answer to get the jobId
Sleep a while
Check whether the export is over (with the jobId)
If not, sleep and check again
If yes, loop (and thus start the next export)
It's serverless, and the free tier makes this use case free. A Java sketch of the same polling idea follows.
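The export call's JSON response from the v1beta4 Admin API contains the operation name, and the API's operations endpoint reports that operation's status, so the wait can also be coded directly in Java. A minimal sketch, assuming a hypothetical getJson(url, token) helper that performs an authenticated GET and returns the parsed org.json body:
// Hypothetical helper: blocks until the given Cloud SQL operation is DONE,
// so the next export request will not collide with a running one (HTTP 409).
private void waitForOperation(String project, String operationName, String token)
        throws IOException, InterruptedException {
    String url = "https://www.googleapis.com/sql/v1beta4/projects/"
            + project + "/operations/" + operationName;
    while (true) {
        JSONObject op = getJson(url, token);   // assumed helper, not shown
        if ("DONE".equals(op.optString("status"))) {
            return;                            // export finished
        }
        Thread.sleep(5000);                    // back off before polling again
    }
}
Calling this after each exportTables invocation, with the operation name taken from the export response, runs the exports strictly one after another.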
The backend of my application makes a request to:
https://graph.facebook.com/v2.8/me?access_token=<firebase-access-token>&fields=id,name,first_name,birthday,email,picture.type(large){url}&format=json&method=get&pretty=0&suppress_http_code=1
I get a successful (200) response with the JSON data I expect and picture field as such:
"picture": {
"data": {
"url": "https://platform-lookaside.fbsbx.com/platform/profilepic/?asid=<asid>&height=200&width=200&ext=<ext>&hash=<hash>"
}
}
(where in place of <asid> and <ext>, there are numbers and <hash> is some alphanumeric string).
However, when I make a GET request to the platform-lookaside URL above, I get a 404 error.
It has happened every time since my very first graph.facebook request for the same user. That very first request returned a platform-lookaside URL which pointed to a proper image (not sure if this is simply a coincidence).
Is there something I'm doing wrong or is this likely a bug with the Facebook API?
FB currently seems to have issues with some CDNs, so your issue might be only temporary. You should also see missing/broken images in some places on facebook.com. Worst time to debug your issue :)
Try this code; it worked for me:
GraphRequest request = GraphRequest.newMeRequest(
AccessToken.getCurrentAccessToken(), new GraphRequest.GraphJSONObjectCallback() {
@Override
public void onCompleted(JSONObject object, GraphResponse response) {
// Insert your code here
try {
String name = object.getString("name");
String email = object.getString("email");
String last_name = object.getString("last_name");
String first_name = object.getString("first_name");
String middle_name = object.getString("middle_name");
String link = object.getString("link");
String picture = object.getJSONObject("picture").getJSONObject("data").getString("url");
Log.e("Email = ", " " + email);
Log.e("facebookLink = ", " " + link);
Log.e("name = ", " " + name);
Log.e("last_name = ", " " + last_name);
Log.e("first_name = ", " " + first_name);
Log.e("middle_name = ", " " + middle_name);
Log.e("pictureLink = ", " " + picture);
} catch (JSONException e) {
e.printStackTrace();
Log.e("Sttaaaaaaaaaaaaaaaaa", e.getMessage());
}
}
});
Bundle parameters = new Bundle();
parameters.putString("fields", "id,name,email,link,last_name,first_name,middle_name,picture");
request.setParameters(parameters);
request.executeAsync();
I am rewriting an existing application and am required to reuse its existing queries using JPA 2. How do I write a named query that replaces a dynamic query which adds sections to the WHERE clause based on a value selection? In other words, if a value is selected from a drop-down box, I add an AND/OR section to the WHERE clause; if not, that WHERE clause section is omitted. For example:
int paramIdx = 0;
String query = "SELECT * FROM ANIMAL_TABLE WHERE...something";
String whereClause = "";
if (canine != null && canine.trim().length() > 0) {
whereClause += " AND canine_type = :" + (paramIdx + 1);
viewObject.setWhereClauseParam(paramIdx, canine);
paramIdx++;
}
if (cat != null && cat.trim().length() > 0) {
whereClause += " AND cat_type = :" + (paramIdx + 1);
viewObject.setWhereClauseParam(paramIdx, cat);
}
I'm wondering if there is a way to accomplish this using JPA. Any ideas or suggestions would be helpful.
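A named query is static, so it cannot add or drop predicates at runtime; the usual JPA 2 replacement for this string-building pattern is the Criteria API. A minimal sketch, assuming a hypothetical Animal entity with canineType and catType attributes and an available EntityManager:
import javax.persistence.EntityManager;
import javax.persistence.criteria.*;
import java.util.ArrayList;
import java.util.List;

// Build the WHERE clause dynamically; each predicate is added only when
// its filter value is present, mirroring the legacy string concatenation.
CriteriaBuilder cb = entityManager.getCriteriaBuilder();
CriteriaQuery<Animal> cq = cb.createQuery(Animal.class);
Root<Animal> animal = cq.from(Animal.class);

List<Predicate> predicates = new ArrayList<>();
if (canine != null && !canine.trim().isEmpty()) {
    predicates.add(cb.equal(animal.get("canineType"), canine));
}
if (cat != null && !cat.trim().isEmpty()) {
    predicates.add(cb.equal(animal.get("catType"), cat));
}
cq.select(animal).where(predicates.toArray(new Predicate[0]));

List<Animal> results = entityManager.createQuery(cq).getResultList();
An omitted filter simply contributes no predicate, which is the same effect as leaving out a section of the hand-built WHERE clause.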
In a SharePoint Community site, the Category tile view is showing the wrong reply count. A category's total replies are 5, but the Category tile view shows 3. We can replicate this behaviour in both our test and production environments. We also waited a day in case the issue was related to the crawl, but it still persists.
Please give some suggestions on this.
Thanks,
Sheetal
I had the exact same problem and there was no fix available. In my opinion these counters are updated through event receivers, so this has nothing to do with search crawls.
I solved this by setting the correct fields through code.
First, count the topics and replies with this SSOM code:
/// <summary>
/// Dictionary: categId, nr disc
/// </summary>
public Dictionary<int, int> CategoryTopicCount
{
get
{
var categoriesDiscussionsCount = new Dictionary<int, int>();
foreach (int categId in Categories)
{
var spquery = new SPQuery();
spquery.Query = ""
+ " <Where>"
+ " <And> "
+ " <IsNull>"
+ " <FieldRef Name='ParentFolderId' />"
+ " </IsNull>"
+ " <Eq>"
+ " <FieldRef Name='CategoriesLookup' LookupId='TRUE' />"
+ " <Value Type='Lookup'>" + categId + "</Value>"
+ " </Eq>"
+ " </And> "
+ " </Where>";
spquery.ViewAttributes = "Scope='RecursiveAll'";
categoriesDiscussionsCount.Add(categId, _discussionList.GetItems(spquery).Count);
}
return categoriesDiscussionsCount;
}
}
/// <summary>
/// Dictionary: categId, nr replies
/// </summary>
public Dictionary<int, int> CategoryReplyCount
{
get
{
var categoriesDiscussionsCount = new Dictionary<int, int>();
foreach (int categId in Categories)
{
//get topics of this category
var spquery = new SPQuery();
spquery.Query = ""
+ " <Where>"
+ " <And> "
+ " <IsNull>"
+ " <FieldRef Name='ParentFolderId' />"
+ " </IsNull>"
+ " <Eq>"
+ " <FieldRef Name='CategoriesLookup' LookupId='TRUE' />"
+ " <Value Type='Lookup'>" + categId + "</Value>"
+ " </Eq>"
+ " </And> "
+ " </Where>";
spquery.ViewAttributes = "Scope='RecursiveAll'";
SPListItemCollection topicsOfThisCategory = _discussionList.GetItems(spquery);
//get nr of replies for each topic of this category
var totalRepliesCategory = 0;
foreach (SPListItem topic in topicsOfThisCategory)
{
var spqueryreplies = new SPQuery();
spqueryreplies.Query = ""
+ " <Where>"
+ " <And> "
+ " <IsNotNull>"
+ " <FieldRef Name='ParentFolderId' />"
+ " </IsNotNull>"
+ " <Eq>"
+ " <FieldRef Name='ParentFolderId' />"
+ " <Value Type='Number'>" + topic.ID + "</Value>"
+ " </Eq>"
+ " </And> "
+ " </Where>";
spqueryreplies.ViewAttributes = "Scope='RecursiveAll'";
totalRepliesCategory += _discussionList.GetItems(spqueryreplies).Count;
}
categoriesDiscussionsCount.Add(categId, totalRepliesCategory);
}
return categoriesDiscussionsCount;
}
}
Then update the counters with this SSOM code:
/// <summary>
/// Update number of topics and replies for each category
/// </summary>
public void UpdateCategoriesCounts()
{
Dictionary<int, int> categoryTopicCount = this.CategoryTopicCount;
Dictionary<int, int> categoryReplyCount = this.CategoryReplyCount;
SPListItemCollection categories = _categoryList.Items;
foreach (SPListItem category in categories)
{
Console.WriteLine("Handling " + category.DisplayName);
int topicCount = category["TopicCount"] == null ? 0 : Convert.ToInt32(category["TopicCount"]);
int replyCount = category["ReplyCount"] == null ? 0 : Convert.ToInt32(category["ReplyCount"]);
Console.WriteLine("Topics: " + categoryTopicCount[category.ID] + " || Replies: " + categoryReplyCount[category.ID]);
_web.AllowUnsafeUpdates = true;
if (categoryTopicCount[category.ID] != topicCount)
category["TopicCount"] = categoryTopicCount[category.ID];
if (categoryReplyCount[category.ID] != replyCount)
category["ReplyCount"] = categoryReplyCount[category.ID];
category.SystemUpdate(false);
_web.AllowUnsafeUpdates = false;
Console.WriteLine("Saved...");
}
Console.WriteLine("Finished");
}
Hope this helps you!
PS: the same problem might occur with the 'my membership' counters. Here too we can adjust the values through code. Check this: SharePoint 'my membership' webpart counters in community site