Trying to create a shared access policy for Azure event hub results in "Partition cannot be changed for EventHub." - azure-eventhub

I've created an Azure event hub and now I'm trying to add a "Listen" and a "Send" shared access policy. When I attempt to save them I get the following error:
SubCode=40000. PartitionCount cannot be changed for EventHub.
I'm not changing the "Partition Count", so I have no idea why I'm getting this error. Any suggestions on how to get around this problem?

I hit a similar thing with an ARM template that updated an existing Event Hub with additional consumer groups. My original template did not specify "partitionCount" as part of the properties, but when I added it, the deployment worked.
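The same "echo the existing partition count on update" idea can be sketched with the azure-mgmt-eventhub Python SDK; the subscription ID, resource names, and policy names below are hypothetical, and field names may vary slightly between SDK versions:

from azure.identity import DefaultAzureCredential
from azure.mgmt.eventhub import EventHubManagementClient

client = EventHubManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Read the hub so the update can echo its current partition count; omitting
# or changing partition_count on an update triggers the error above.
hub = client.event_hubs.get("my-rg", "my-namespace", "my-hub")
client.event_hubs.create_or_update(
    "my-rg", "my-namespace", "my-hub",
    {
        "partition_count": hub.partition_count,  # echo, don't change
        "message_retention_in_days": hub.message_retention_in_days,
    },
)

# Add the "Listen" and "Send" shared access policies.
for rule_name, rights in [("listen-policy", ["Listen"]), ("send-policy", ["Send"])]:
    client.event_hubs.create_or_update_authorization_rule(
        "my-rg", "my-namespace", "my-hub", rule_name, {"rights": rights}
    )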

Related

gcp create_notification_channel creates a duplicate channel

The GCP Python client method create_notification_channel for creating a Monitoring notification channel is not idempotent: duplicate channels under the same project keep getting created.
Is there a way to avoid the duplication?
According to the documentation it's not possible to handle duplicates directly, but you can solve the issue with the following steps (a sketch follows below):
List the existing notification channels in GCP via list_notification_channels.
Loop over the notification channels you want (e.g. the ones configured in your code); if a channel already exists in GCP (checked against the list retrieved previously), update it via the update_notification_channel function, otherwise create it with the create_notification_channel function.
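A minimal sketch of that list-then-upsert flow with the google-cloud-monitoring client; matching channels on display_name is an assumption here, so substitute whatever key makes a channel unique in your configuration:

from google.cloud import monitoring_v3

def upsert_channel(project_id: str, desired: monitoring_v3.NotificationChannel):
    client = monitoring_v3.NotificationChannelServiceClient()
    parent = f"projects/{project_id}"

    # 1. List the channels that already exist in GCP.
    existing = {c.display_name: c for c in client.list_notification_channels(name=parent)}

    # 2. Update the channel if it exists, otherwise create it.
    current = existing.get(desired.display_name)
    if current is not None:
        desired.name = current.name  # reuse the existing resource name
        return client.update_notification_channel(notification_channel=desired)
    return client.create_notification_channel(name=parent, notification_channel=desired)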

How to enable datasharing in Redshift cluster?

I am trying to create a datashare in Redshift by following this documentation. When I type this command:
CREATE DATASHARE datashare_name
I get this message:
ERROR: CREATE DATASHARE is not enabled.
I also tried to create it from the console, but hit the same issue.
So how do I enable data sharing in Redshift?
From the documentation:
Data sharing via datashare is only available for ra3 instance types.
The document lists ra3.16xlarge, ra3.4xlarge, and ra3.xlplus instance types for producer and consumer clusters.
So, in your place, I would first go back and check my instance type. If still not sure, open a simple support ticket and ask whether anything has changed recently that the documentation doesn't yet reflect.
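If you want to verify the node type programmatically rather than in the console, a small boto3 sketch (the cluster identifier is a placeholder):

import boto3

redshift = boto3.client("redshift")
cluster = redshift.describe_clusters(ClusterIdentifier="my-cluster")["Clusters"][0]
# CREATE DATASHARE only works on RA3 node types (ra3.xlplus, ra3.4xlarge, ra3.16xlarge).
print(cluster["NodeType"])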

AWS DynamoDB resource not found exception

I have a problem connecting to DynamoDB. I get this exception:
com.amazonaws.services.dynamodb.model.ResourceNotFoundException: Requested resource not found (Service: AmazonDynamoDB; Status Code: 400; Error Code: ResourceNotFoundException; Request ID: ..
But the table exists and the region is correct.
From the docs, it's either that you don't have a table with that name or that it is in CREATING status.
I would double-check that the table does in fact exist, in the correct region, and that you're using an access key that can reach it.
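A quick boto3 sketch for that check (the table name and region are placeholders):

import boto3

client = boto3.client("dynamodb", region_name="us-east-1")
# Tables visible to this access key in this region.
print(client.list_tables()["TableNames"])
# Raises ResourceNotFoundException if the name/region is wrong;
# otherwise shows the status, e.g. ACTIVE or CREATING.
print(client.describe_table(TableName="my-table")["Table"]["TableStatus"])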
My problem was a silly one, but maybe someone has the same: I had recently changed the default AWS credentials (~/.aws/credentials) to test in another account and forgot to roll the values back to the regular account.
I spent 1 day researching the problem in my project and now I should repay a debt to humanity and reduce the entropy of the universe a little.
Usually, this message says that your client can't reach a table in your DB.
You should check the following things (a sketch covering these checks appears after the list):
1. Your database is running
2. Your accessKey and secretKey are valid for the database
3. Your DB endpoint is valid and contains correct protocol ("http://" or "https://"), and correct hostname, and correct port
4. Your table was created in the database.
5. Your table was created in the database in the same region that you set as a parameter in your client configuration. This one is semi-optional, because some database environments (e.g. Testcontainers Dynalite) don't validate the region, so any non-empty region value will be accepted.
In my case the problem was that I couldn't save and load data from a table in tests where DynamoDB was substituted by Testcontainers and Dynalite. I found out that in our project the tables are created by a Spring component marked with the @Component annotation. And in tests we use a global setting to lazy-load components, so our component was never loaded because nothing called it explicitly in the test. ¯\_(ツ)_/¯
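For the local/test-container case in particular, here is a boto3 sketch that pins down items 2-5 of the checklist explicitly; the endpoint URL and dummy credentials are assumptions for a Dynalite/DynamoDB Local style setup:

import boto3

client = boto3.client(
    "dynamodb",
    endpoint_url="http://localhost:8000",  # item 3: protocol, hostname, and port
    region_name="us-east-1",               # item 5: any non-empty region for Dynalite
    aws_access_key_id="dummy",             # item 2: whatever keys the local DB accepts
    aws_secret_access_key="dummy",
)
# Item 4: an empty list means the table was never created in this database.
print(client.list_tables()["TableNames"])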
If the DynamoDB table is in a different region, make sure to set it before initializing the DynamoDB client:
AWS.config.update({region: "your-dynamoDB-region" });
This works for me:)
Always ensure that you do one of the following:
The right default region is set up in the AWS CLI configuration files on all the servers and development machines that you are working on.
The better choice is to specify these constants explicitly in a separate class/config in your project, always import it, and use it in the boto3 calls. This gives you flexibility if you need to add or change regions based on enterprise requirements.
If your resources are like mine and all over the place, you can define the region_name when you're creating the resource.
I do this for all my instantiations as it forces me to think about what I'm putting/calling where.
import boto3
boto3.resource("dynamodb", region_name="us-east-2")
I was getting this issue in my .NET Core application.
The following fixed it for me, in the Startup class's ConfigureServices method:
// Requires the AWSSDK.Extensions.NETCore.Setup package
// (AWSOptions is in Amazon.Extensions.NETCore.Setup, RegionEndpoint in Amazon).
services.AddDefaultAWSOptions(new AWSOptions
{
    Region = RegionEndpoint.GetBySystemName("eu-west-2")
});
I got this error from Lambda: lifecycleIteration=0 lambda handler returned an error: ResourceNotFoundException: Requested resource not found
It took me a week to fix the issue.
The root cause, and the steps to track it down and fix it, are described in the GitHub issue thread below:
https://github.com/soto-project/soto/issues/595

Trying to set up AWS IoT button for the first time: Please correct validation errors with your trigger

Has anyone successfully set up their AWS IoT button?
When stepping through with default values I keep getting this message: Please correct validation errors with your trigger. But there are no validation errors on any of the setup pages, or the page with the error message.
I hate asking a broad question like this but it appears no one has ever had this error before.
This has been driving me nuts for a week!
I got it to work by choosing Custom IoT Rule instead of IoT Button as the IoT type. The default rule name is iotbutton_xxxxxxxxxxxxxxxx and the default SQL statement is SELECT * FROM 'iotbutton/xxxxxxxxxxxxxxxx' (xxx... = the button's serial number).
Make sure you copy the policy from the sample code into the execution role - I know that has tripped up a lot of people.
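If the console wizard keeps fighting you, the equivalent rule can also be created from code; a hedged boto3 sketch, where the serial-number placeholder and the Lambda function ARN are hypothetical:

import boto3

iot = boto3.client("iot")
iot.create_topic_rule(
    ruleName="iotbutton_xxxxxxxxxxxxxxxx",  # xxx... = the button's serial number
    topicRulePayload={
        "sql": "SELECT * FROM 'iotbutton/xxxxxxxxxxxxxxxx'",
        "actions": [{"lambda": {
            # Hypothetical function ARN - point this at your own handler.
            "functionArn": "arn:aws:lambda:us-east-1:123456789012:function:my-button-handler"
        }}],
        "ruleDisabled": False,
    },
)
# Remember to grant AWS IoT permission to invoke the function
# (lambda add-permission with principal iot.amazonaws.com).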
I was getting the same error. The cause turned out to be that I had multiple certificates associated with the button. This happened because I started over on the wizard several times, generating and loading a new cert & key each time. While this doesn't seem to be a problem on the device itself, the result was that on AWS I had multiple certs associated with the device.
Within the AWS IoT Resources view I eventually managed to delete all the resources. It took some fiddling to get the certs detached so they could be deleted. Once I had deleted all the resources, I returned to the wizard, created yet another cert & key pair, pushed the Lambda code, and everything worked.

AWS Lambda: error creating the event source mapping: Configuration is ambiguously defined

There was an error creating the event source mapping: Configuration is ambiguously defined. Cannot have overlapping suffixes in two rules if the prefixes are overlapping for the same event type.
I created an event from the GUI console 6-7 days ago and it was working fine. The next day the event just went missing; I can't see it anymore in the Lambda console GUI, yet every S3 object still seems to trigger the Lambda function without a problem. If I can't see it, that's not good, so I deleted the Lambda function, waited 5-10 seconds, and created another new function. Now I receive the error above when I try to create the event sources like this:
When I click "Submit", the event sources tab says "You do not have any event sources for this function" and Lambda does not get triggered; the entire application flow is now broken :(
The problem is almost the same as in this AWS forum thread: https://forums.aws.amazon.com/thread.jspa?messageID=670712
I tried to reply to that thread, but I keep getting this funny error: "Your message quota has been reached. Please try again later." I wasn't even posting anything, so how can I have used up my quota? Has anyone else encountered this issue?
What I suspect is that your S3 bucket may still be "linked" to the Lambda function.
Check your S3 bucket for events and remove them there, then try creating the Lambda events again,
i.e. S3 bucket -> Properties -> Events
After 6 years, it's nice to see some people still benefiting from this answer.
Here is a shameless plug for a YouTube video I uploaded on 2022-12-13:
https://www.youtube.com/watch?v=rjpOU7jbgEs
The issue must be that the S3 bucket is already linked with the suffix/prefix you are trying to use. Remove the link in S3 and try again.
When you set up a Lambda function with a trigger related to S3, the notification gets recorded in the Properties section of that S3 bucket.
The error occurs when the earlier Lambda function has been deleted and you try to set up the same kind of trigger again. The thing to note is that the S3 notification is not deleted along with the Lambda function.
Go to S3 bucket > Properties > Event notifications, delete the old setting, and then set up the trigger on the new Lambda function.
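The same cleanup can be done programmatically; a boto3 sketch with a placeholder bucket name (note that putting an empty configuration removes every notification rule on the bucket, not just the stale one):

import boto3

s3 = boto3.client("s3")

# Stale Lambda triggers from deleted functions show up here.
config = s3.get_bucket_notification_configuration(Bucket="my-bucket")
print(config.get("LambdaFunctionConfigurations", []))

# An empty configuration clears ALL notification rules on the bucket.
s3.put_bucket_notification_configuration(
    Bucket="my-bucket",
    NotificationConfiguration={},
)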
Here is a link to a youtube video profiling this issue and demonstrating the solution:
https://www.youtube.com/watch?v=1Tfmc9nEtbU
Just as Ridwaan Manuel said, you must remove the events by going to S3 bucket -> Properties -> Events, as the video shows.
Steps to reproduce this issue:
1. Create a bucket and create a folder called “example/”
2. Create a Lambda function
3. Add an S3 trigger to the Lambda using the bucket from (1) with default settings
4. Save the trigger
5. Click Save and notice the error
6. Refresh the page and notice that the trigger disappeared
7. Add the same bucket again and notice the ambiguous reference error