The GCP Python client method for creating a Monitoring notification channel is not idempotent. Duplicate channels are being created under the same project. Is there a way to avoid duplication?
create_notification_channel
According to the documentation, it's not possible to handle duplicates directly, but you can work around the issue with the following steps:
List the existing notification channels in GCP via list_notification_channels.
Loop over the notification channels you want to exist (e.g., the ones configured in your code): if a channel already exists in GCP (checked against the list retrieved previously), update it via the update_notification_channel function; otherwise, create it with the create_notification_channel function.
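A minimal sketch of that pattern with the Python client, assuming the google-cloud-monitoring library and matching channels by display_name (pick whichever key fits your configuration):

from google.cloud import monitoring_v3

def upsert_notification_channel(project_id, desired):
    client = monitoring_v3.NotificationChannelServiceClient()
    project_name = f"projects/{project_id}"

    # Look for an existing channel with the same display name.
    existing = None
    for channel in client.list_notification_channels(name=project_name):
        if channel.display_name == desired.display_name:
            existing = channel
            break

    if existing:
        # Reuse the existing resource name so the call updates in place.
        desired.name = existing.name
        return client.update_notification_channel(notification_channel=desired)
    return client.create_notification_channel(
        name=project_name, notification_channel=desired
    )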
Can anyone explain why an AWS Glue workflow would have empty default run properties and no graph when accessed from an SDK? When I view the same workflow in the AWS console, I can see the UI representation of the graph and the run properties.
Yet when I access the same workflow via the SDKs (I've tried Java and boto3), the Workflow object shows empty default run properties and no graph. The accessor methods for these attributes return empty objects or null. For example,
with the Java SDK:
myWorkflow.getGraph() returns null
I know the workflow has several nodes, and I have run and modified the workflow many times via the console.
I've tried to research whether this is a permissions issue, but I can't find anything to back that up, and I don't get an error. Any insights would be appreciated.
There is an "IncludeGraph" parameter in the GetWorkflow request, which defaults to false. To get the graph returned with your workflow, you must set this parameter to true.
in Java:
......yourWorkflowRequest.withIncludeGraph(true)
in boto3:
.get_workflow(Name='the_workflow', IncludeGraph=True)
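For completeness, a full boto3 call might look like the following (the workflow name is a placeholder):

import boto3

glue = boto3.client("glue")

# IncludeGraph defaults to False, so the graph is omitted unless requested.
response = glue.get_workflow(Name="the_workflow", IncludeGraph=True)
graph = response["Workflow"]["Graph"]  # contains the workflow's Nodes and Edges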
I have created a DynamoDB table on my dev machine, and I'm trying to insert a couple of rows from my .NET Core application using the CreateBatchWrite<T> method of the DynamoDBContext object. I'm able to query the table from the DynamoDB JavaScript shell window at the localhost:8000/shell URL, and it returns a row count of 0. But when I call the CreateBatchWrite<T> method, I get the error "Entity doesn't exist in AsyncLocal".
Explanation
When using X-Ray, this happens when there is an attempt to create a subsegment without a parent segment. Depending on your setup, running a query may attempt to create a subsegment, which fails because there is no parent segment.
This is common when running a Lambda function locally, as the Mock Lambda Test Tool does not create a segment for you the way the actual Lambda environment on AWS does. It can happen in other scenarios too.
More details here: https://github.com/aws/aws-xray-sdk-dotnet/issues/125
Solution
The easiest way to solve this is to disable X-Ray locally (as you probably don't want to generate traces locally anyway):
In appsettings.Development.json add this:
"XRay": {
"DisableXRayTracing": "true",
"UseRuntimeErrors": "false",
"CollectSqlQueries": "false"
}
The important bit is DisableXRayTracing set to "true".
Make sure your appsettings.Development.json is set to Copy Always in the Properties window. You can do this by including the following in your .csproj:
<ItemGroup>
  <None Update="appsettings.Development.json">
    <CopyToOutputDirectory>Always</CopyToOutputDirectory>
  </None>
  <None Update="appsettings.json">
    <CopyToOutputDirectory>Always</CopyToOutputDirectory>
  </None>
</ItemGroup>
If you really want to trace things locally, make sure you create a parent segment only when running locally (on AWS this would cause problems, as you would have two parent segments: one created manually by you, another created by AWS).
Add this line before any DynamoDB API methods are executed:
AWSXRayRecorder.Instance.ContextMissingStrategy = ContextMissingStrategy.LOG_ERROR; // log instead of throwing when no parent segment exists
You can find more info in this GitHub discussion: https://github.com/aws/aws-xray-sdk-dotnet/issues/69#issuecomment-482688754
Also, you will need to import these two packages:
using Amazon.XRay.Recorder.Core;
using Amazon.XRay.Recorder.Core.Strategies;
If you are tracing requests made with the AWS SDK, the X-Ray SDK attempts to generate a subsegment automatically to represent those requests, such as CreateBatchWrite. However, a subsegment can only be created as the child of an existing segment, so if you have not created a segment beforehand, that "Entity doesn't exist" error will occur.
See these docs for how to create custom segments. Alternatively, if you are developing a web app, the X-Ray SDK can automatically create segments for requests made to your service by adding the configuration described in these docs.
Simple question, but I suspect it doesn't have a simple or easy answer. Still, worth asking.
We're implementing push notifications using AWS: our web server running on EC2 sends messages to an SQS queue, which is processed by Lambda, which finally publishes to SNS for delivery to the iOS/Android apps.
The question I have is this: is there a way to query SNS endpoints based on the custom user data that you can provide at creation? The only way I see to do this so far is to list all the endpoints in a given platform application and then search that list for the user data I'm looking for... however, a more direct approach would be far better.
Why I want to do this is simple: if I could attach a user identifier to these device endpoints, and query based on that, I could avoid having to save the ARN to our DynamoDB database at all. It would save a lot of implementation time and complexity.
Let me know what you guys think, even if what you think is that this idea is impractical and stupid, or if searching through all of them is the best way to go about this!
Cheers!
There's no way to apply a "where" clause in ListTopics. I see two possibilities:
Create a new SNS topic per user that has some identifiable ID in it. For example, the ARN would be something like "arn:aws:sns:us-east-1:123456789:known-prefix-user-id" (see the sketch after this list). The obvious downside is that you have the potential for a boatload of SNS topics.
Use a service designed for this type of usage, like PubNub. Disclaimer: I don't work for PubNub or own stock, but I have successfully used it in multiple projects. You'll be able to target one or many users this way.
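If you go with the topic-per-user option, note that create_topic is idempotent for a given name and returns the existing topic's ARN if the topic already exists, so it doubles as a lookup. A minimal boto3 sketch (the known-prefix naming follows the suggestion above):

import boto3

sns = boto3.client("sns")

def topic_arn_for_user(user_id):
    # create_topic returns the existing TopicArn if the name is taken,
    # so this both creates and looks up the per-user topic.
    response = sns.create_topic(Name=f"known-prefix-{user_id}")
    return response["TopicArn"]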
According to the AWS documentation, if you try to create a new platform endpoint with the same user data, you should get a response with an exception that includes the ARN of the existing platform endpoint.
It's definitely not ideal, but it's a roundabout way of querying the user data endpoint attributes via an exception.
// Query CustomUserData by exception
CreatePlatformEndpointRequest cpeReq = new CreatePlatformEndpointRequest()
        .withPlatformApplicationArn(applicationArn)
        .withToken("dummyToken")
        .withCustomUserData("username");
CreatePlatformEndpointResult cpeRes = client.createPlatformEndpoint(cpeReq);
You should get an exception containing the ARN if an endpoint with the same CustomUserData exists.
Then you just use that ARN and away you go.
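For reference, a rough boto3 version of the same probe might look like this; parsing the ARN out of the error message is brittle, so treat it as a sketch rather than a guaranteed contract:

import re
import boto3

sns = boto3.client("sns")

def find_endpoint_arn_by_user_data(application_arn, user_data):
    try:
        # Probe with a dummy token; if a conflicting endpoint already exists,
        # SNS raises InvalidParameterException with the ARN in the message.
        response = sns.create_platform_endpoint(
            PlatformApplicationArn=application_arn,
            Token="dummyToken",
            CustomUserData=user_data,
        )
        return response["EndpointArn"]
    except sns.exceptions.InvalidParameterException as e:
        match = re.search(r"arn:aws:sns:\S+", e.response["Error"]["Message"])
        return match.group(0) if match else None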
I've created an Azure event hub, and now I'm trying to add a "Listen" and a "Send" shared access policy. When I attempt to save them, I get the following error:
SubCode=40000. PartitionCount cannot be changed for EventHub.
I'm not changing the partition count, so I have no idea why I'm getting this error. Any suggestions on how to get around this problem?
I tried a similar thing with a template to update an existing event hub with additional consumer groups. My original template did not specify a "partitionCount" as part of the properties, but once I added it, the deployment worked.
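For illustration, a trimmed ARM template resource with the property included might look like this; the parameter names are placeholders, and partitionCount must match the value the event hub already has:

{
  "type": "Microsoft.EventHub/namespaces/eventhubs",
  "apiVersion": "2017-04-01",
  "name": "[concat(parameters('namespaceName'), '/', parameters('eventHubName'))]",
  "properties": {
    "messageRetentionInDays": 1,
    "partitionCount": 2
  }
}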
I have a system that receives a lot of messages. Each message has a unique ID, but it can also receive updates during its lifetime. As the time between a message being sent and being handled can be very long (weeks), messages are stored in S3. For each message, only the last version is needed. My problem is that occasionally two messages with the same ID arrive together, but as two versions (older and newer).
Is there a way to issue a conditional PutObject request to S3, where I can declare "put this object unless I already have a newer version in S3"?
I need an atomic operation here.
That's not the use case for S3, which is eventually consistent. Some ideas:
You could try to partition your messages - all messages that start with A-L go to one box, M-Z go to another box. Then each box locally checks that there are no duplicates.
Your best bet is probably some kind of database. Depending on your use case, you could use a regular SQL database, or maybe a simple RAM-only store like Redis. Write to multiple Redis DBs at once to avoid a single point of failure.
There is SWF, which can give each item its own processing queue, but that would probably mean more HTTP requests than just checking in S3.
David's idea about turning on versioning is interesting. You could have a daemon that periodically trims off the old versions. When reading, you would have to do a "read repair", searching the versions for the newest object (a sketch follows).
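A rough boto3 sketch of that read path, assuming versioning is enabled and the application stores its own version number in object metadata (the msg-version metadata key is hypothetical):

import boto3

s3 = boto3.client("s3")

def read_latest(bucket, key):
    # Walk all stored versions of the key and pick the one with the
    # highest application-level version number.
    versions = s3.list_object_versions(Bucket=bucket, Prefix=key).get("Versions", [])
    best_id, best_version = None, -1
    for v in versions:
        if v["Key"] != key:  # Prefix matching can also return longer keys
            continue
        head = s3.head_object(Bucket=bucket, Key=key, VersionId=v["VersionId"])
        app_version = int(head["Metadata"].get("msg-version", -1))
        if app_version > best_version:
            best_id, best_version = v["VersionId"], app_version
    if best_id is None:
        return None
    return s3.get_object(Bucket=bucket, Key=key, VersionId=best_id)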
Couldn't this be solved by using tags, and using a Condition on those when using PutObject? See "Example 3: Allow a user to add object tags that include a specific tag key and value" here: https://docs.aws.amazon.com/AmazonS3/latest/dev/object-tagging.html#tagging-and-policies
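To make the idea concrete (note the policy condition governs which tags a writer may set; it does not compare against what's already stored), tags are attached at write time like so; the version tag key is a hypothetical choice for this example:

import boto3

s3 = boto3.client("s3")

# Attach a version tag at write time; a bucket/IAM policy using the
# s3:RequestObjectTag/version condition key could then restrict which
# tag values a writer is allowed to supply.
s3.put_object(
    Bucket="my-bucket",
    Key="message-1234",
    Body=b"message payload",
    Tagging="version=7",
)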