Unable to validate or upload CloudFormation templates in AWS

In CloudFormation, when I try to validate my template by clicking the "checkbox" button above the designer, or when I try to actually click the "create stack" button, both result in the same error:
"Cannot upload this template to the S3 bucket because of an error."
This happens even when the template is the default empty template (which I assume is valid). So I don't think this is the error I should be seeing when the syntax is wrong.
{
"Parameters": {}
}
Any idea why this might be happening? When I go into the S3 service in the console, it seems I can access that okay, so I don't think it's a permissions issue.
Googling for an answer only provides one thread on the AWS forums, but they're getting an additional permissions-related error that I'm not seeing.
I'm completely new at AWS, so please feel free to point out the obvious.

Your particular error message might be caused by an expired AWS Console session. You should try refreshing your browser and see if the error message changes.
As for the default template, it is not valid because a template must contain at least one resource. For example, when I validate the default template (using the checkbox in the designer), I get the following error message:
Template is not valid: Template format error: At least one Resources
member must be defined.
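For example, a minimal template that satisfies this only needs a single resource; something like the following sketch (using an S3 bucket purely as a placeholder resource) should validate:
{
    "Resources": {
        "PlaceholderBucket": {
            "Type": "AWS::S3::Bucket"
        }
    }
}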

Turns out I had read-only access to S3, which was causing the error to pop up. Once the admin changed the permissions to allow writing to S3, I was able to do what I needed to do with CloudFormation.
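For anyone else hitting this: the console uploads templates to an auto-created staging bucket (named something like cf-templates-<random>-<region>), so the signed-in user needs write access to it. Here is a rough sketch of an IAM statement granting that - the cf-templates-* name pattern is an assumption, so scope it to whatever bucket the console actually uses in your account:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:::cf-templates-*/*"
        }
    ]
}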

Related

AWS Lambda error There was an error loading Log Streams

When I go to the Logs page the below error shows.
There was an error loading Log Streams. Please try again by refreshing this page.
The problem is that there is another function, identical except for its code, that is creating log files with no problem.
Any suggestions?
I solved it.
I added the CloudWatchLogsFullAccess policy; it took some time (under an hour) to take effect, and then it was working.
I'm not sure why I needed to do this for the second function but not the first, but it's working now.
Below is the link that helped me.
https://blogs.perficient.com/2018/02/12/error-loading-log-streams/
Make sure your Lambda has already logged at least once!
It appears to me that this error occurs if that is not the case. I've tested fresh Lambdas both with and without log statements to confirm: without any log statements, a corresponding Log Group for the Lambda does not exist yet; after the first log statement is made, it appears in a seemingly newly created corresponding Log Group.
Although this may seem obvious/intuitive after the fact, this is how I ran into the scenario: before any logging had occurred on my new Lambda, I tried to hook it up to CloudWatch Events. After that attempt I tried to see whether the Lambda was being invoked by those events via the 'Monitoring' tab -> 'View logs in CloudWatch' button, and that is where I encountered this error. The Lambda had not been invoked (the CloudWatch Events hookup had failed), so no logging had occurred, and thus there was no corresponding Log Group yet to examine when trying to hyperlink into it from the Lambda configuration.
(Fwiw, I imagine maybe a corresponding Log Group could be manually made before the first logging, but I have not tested that.)
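If you do want to try pre-creating it, Lambda log groups follow the /aws/lambda/<function-name> naming convention, so something along these lines in a CloudFormation template should create one up front (the function name here is just a placeholder, and I haven't verified whether pre-creating changes the console behaviour):
{
    "Resources": {
        "MyFunctionLogGroup": {
            "Type": "AWS::Logs::LogGroup",
            "Properties": {
                "LogGroupName": "/aws/lambda/my-function",
                "RetentionInDays": 14
            }
        }
    }
}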
Ensure your Lambda's Execution Role has a Policy that allows writing to CloudWatch Logs from your Lambda.
IAM console -> 'Roles' -> < your Lambda's role > -> 'Permissions' tab -> 'Permissions policies' accordion
Ensure there is a Policy listed that has parameters set like this:
'Service': "CloudWatch Logs"
'Access level': includes at least "Write"
'Resource': your Lambda is not excluded (i.e., it's not set to another specific Lambda, another directory of Lambdas, or another resource type)
'Request condition': does not preclude the context of your given Lambda execution
An example of an AWS managed policy that meets these requirements out of the box (being AWS-managed) is "AWSLambdaBasicExecutionRole". It has these parameters (a sketch of the underlying policy document follows this list):
'Service': "CloudWatch Logs"
'Access level': "Limited: Write"
'Resource': "All resources"
'Request condition': "None"
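For reference, the policy document behind AWSLambdaBasicExecutionRole is roughly the following (a sketch of the AWS managed policy as I last saw it; check the IAM console for the current version):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "*"
        }
    ]
}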
If your Role does not have such a policy already, either add a new one or edit an existing one to meet the requirements listed here - then this error should be resolved.
For example, in my case before I fixed things, my Lambda's Role had a policy based off the AWS-managed "AWSLambdaBasicExecutionRole", but somehow its Resource was limited to a different Lambda - which was my problem: insufficient permission for my intended Lambda under that policy. I fixed this by adding the original AWS-managed "AWSLambdaBasicExecutionRole" policy to my intended Lambda's role. (I also deleted the earlier policy since nothing else used it, but that probably wasn't strictly necessary - just nice to tidy up.)
I resolved it by attaching the CloudWatchFullAccess policy to the execution role of my Lambda function.

how to get shared access signature of Azure container by C++

I want to use the C++ Azure API to generate a Shared Access Signature for a container on Azure and get the access string. But I cannot find any good example; almost all examples are in C#. I only found this: https://learn.microsoft.com/en-us/azure/storage/files/storage-c-plus-plus-how-to-use-files
Here is what I did,
// Retrieve a reference to a previously created container.
azure::storage::cloud_blob_container container = blob_client.get_container_reference(s2ws(eventID));
// Create the container if it doesn't already exist.
container.create_if_not_exists();
// Get the current permissions for the event.
auto blobPermissions = container.download_permissions();
// Create and assign a policy
utility::string_t policy_name = s2ws("Signature" + eventID);
azure::storage::blob_shared_access_policy policy = azure::storage::blob_shared_access_policy();
// set expire date
policy.set_expiry(utility::datetime::utc_now() + utility::datetime::from_days(10));
//give read and write permissions
policy.set_permissions(azure::storage::blob_shared_access_policy::permissions::read);
azure::storage::shared_access_policies<azure::storage::blob_shared_access_policy> policies;
//add the new shared policy
policies.insert(std::make_pair(policy_name, policy));
blobPermissions.set_policies(policies);
blobPermissions.set_public_access(azure::storage::blob_container_public_access_type::off);
container.upload_permissions(blobPermissions);
auto token = container.get_shared_access_signature(policy, policy_name);
After running this, I can see the policy is successfully set on the container, but the token returned by the last line is not right. Also, there is always an exception when exiting this function; the breakpoint lands in _Deallocate().
Could someone tell me what's wrong with my code, or point me to some examples? Thank you very much.
Edited
The token I got looks like,
"sv=2016-05-31&si=Signature11111122222222&sig=JDW33j1Gzv00REFfr8Xjz5kavH18wme8E7vZ%2FFqUj3Y%3D&spr=https%2Chttp&se=2027-09-09T05%3A54%3A29Z&sp=r&sr=c"
With this token, I couldn't access my blobs. The correct token created by "Microsoft Azure Storage Explorer" using this policy looks like:
?sv=2016-05-31&si=Signature11111122222222&sr=c&sig=9tS91DUK7nkIlIFZDmdAdlNEfN2HYYbvhc10iimP1sk%3D
About the exception: I put all of this code in a function. Without the last line, everything is okay, but with the last line added, an exception is thrown while exiting the function saying a breakpoint was triggered. It stops at the last line of _Deallocate() in "C:\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.10.25017\include\xmemory0",
::operator delete(_Ptr);
I have no idea why this exception is being thrown or how to debug it, because it seems it cannot be caught by my code.
Edited
After changing the last line to,
auto token = container.get_shared_access_signature(azure::storage::blob_shared_access_policy(), policy_name);
The returned token is right; I can access my blobs by using it. But the annoying exception is still there :-(
Edited
Just found that the exception only happens when building in Debug. In Release, everything is OK, so maybe it's related to the build environment.
When creating a Shared Access Signature (SAS), there are a few things you set: SAS start/expiry, permissions, IP ACLs, protocol restrictions, etc. What you can do is create an access policy on the blob container with these things, create an ad-hoc SAS (i.e. without an access policy) with these things, or combine the two to create a SAS token.
One key thing to keep in mind is that if something is defined in an access policy, you can't redefine it when creating a SAS. So, for example, if you create an access policy with just Read permission and nothing else, then you can't provide any permissions when creating a SAS token that uses this access policy. You can certainly define the things which are not in the access policy (for example, you can define a SAS expiry if it is not defined in the access policy).
If you look at your code (before edit), what you're doing is creating an access policy with some permissions and then creating a SAS token using the same permissions and access policy. That's why it did not work. However when you created a SAS token from Microsoft's Storage Explorer, you will notice that it only included the access policy (si=Signature11111122222222) and none of the other parameters and that's why it worked.
In your code after the edit, you did not include any permissions but only used the access policy (in a way, you did what Storage Explorer is doing), and that's why things worked after the edit.
I hope this explains the mystery behind not working/working SAS tokens.

Trying to set up AWS IoT button for the first time: Please correct validation errors with your trigger

Has anyone successfully set up their AWS IoT button?
When stepping through with default values I keep getting this message: Please correct validation errors with your trigger. But there are no validation errors on any of the setup pages, or the page with the error message.
I hate asking a broad question like this but it appears no one has ever had this error before.
This has been driving me nuts for a week!
I got it to work by using Custom IoT Rule instead of IoT Button on the IoT Type. The default rule name is iotbutton_xxxxxxxxxxxxxxxx and the default SQL statement is SELECT * FROM 'iotbutton/xxxxxxxxxxxxxxxx' (xxx... = serial number).
Make sure you copy the policy from the sample code into the execution role - I know that has tripped up a lot of people.
I was getting the same error. The cause turned out to be that I had multiple certificates associated with the button. This was caused by me starting over again in the wizard and generating and loading the cert & key a second time. While this doesn't seem to be a problem on the device itself, the result was that on AWS I had multiple certs associated with the device.
Within the AWS IoT Resources view I eventually managed to delete all resources. It took some fiddling to get the certs detached so they could be deleted. Once I deleted all resources, I returned to the wizard, created yet another cert & key pair, pushed the Lambda code, and everything works.

AWS Lambda: error creating the event source mapping: Configuration is ambiguously defined

There was an error creating the event source mapping: Configuration is ambiguously defined. Cannot have overlapping suffixes in two rules if the prefixes are overlapping for the same event type.
I created an event from the GUI console 6-7 days ago and it was working fine. The next day the event just went missing; I can't see it anymore in the Lambda console GUI, but every S3 object still seems to trigger the Lambda function without a problem. If I can't see it, that is not good, so I deleted the Lambda function and waited 5-10 seconds before creating another new function. Now I receive the same error as above when I try to create the event sources like this:
When I click "Submit", the Event Sources tab says "You do not have any event sources for this function" and Lambda does not get triggered; it means the entire application flow is now broken :(
The problem is almost the same as this one: https://forums.aws.amazon.com/thread.jspa?messageID=670712 but somehow I can't reply to that thread, so I created a new thread here instead. Has anyone encountered this issue?
In fact, I tried to respond to the existing AWS forum thread: https://forums.aws.amazon.com/thread.jspa?messageID=670712&#670712
but I keep getting this funny error: "Your message quota has been reached. Please try again later." I wasn't even posting anything; how can I have used up my quota?
What I suspect is your S3 bucket may still be "linked" to the lambda function.
Maybe check your S3 bucket for events and remove them there, then try creating the lambda events again?
i.e. S3 bucket-> properties-> Events
After 6 years, it's nice to see people still benefiting from this answer.
Here is a shameless plug for a YouTube video I uploaded on 2022-12-13.
https://www.youtube.com/watch?v=rjpOU7jbgEs
The issue must be that the s3 bucket is already linked with the suffix/prefix you are trying to link. Remove the link in S3 and try again.
When you set up a Lambda function with a trigger related to S3, the notification gets added to the Properties section of that S3 bucket.
The mentioned error occurs when the earlier Lambda function has been deleted and you're trying to set up the same kind of trigger again. The thing to note is that the S3 notification is not deleted when you delete the Lambda function.
Go to S3 bucket > Properties > Event notifications
and delete the old setting, then set up the new trigger in the new Lambda function.
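If it helps to see what is left behind, the stale trigger lives in the bucket's notification configuration as a Lambda notification with a prefix/suffix filter, roughly like this (the function ARN and prefix are just placeholders):
{
    "LambdaFunctionConfigurations": [
        {
            "Id": "OldLambdaTrigger",
            "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:my-old-function",
            "Events": ["s3:ObjectCreated:*"],
            "Filter": {
                "Key": {
                    "FilterRules": [
                        { "Name": "prefix", "Value": "example/" }
                    ]
                }
            }
        }
    ]
}
As long as an entry like this with an overlapping prefix/suffix for the same event type is still present, creating the new mapping fails with the "ambiguously defined" error.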
Here is a link to a youtube video profiling this issue and demonstrating the solution:
https://www.youtube.com/watch?v=1Tfmc9nEtbU
Just as Ridwaan Manuel said, you must remove the events by going to S3 bucket -> Properties -> Events, as the video shows.
Steps to reproduce this issue:
Create a bucket and create a folder called “example/”
Create Lambda Function
Add S3 trigger to the lambda using the bucket from (1) with default settings
Save the trigger
Click Save and notice error
Refresh the page and notice that the triggers disappeared
Add the same bucket again and notice the ambiguous reference error

faster search of file in s3 bucket in aws console

I am searching for a specific file in an S3 bucket that has a lot of files. In my application I get a 403 Access Denied error, and with s3cmd I get a 403 (Forbidden) error if I try to get a file from the bucket. My problem is that I am not sure whether permissions are the problem (because I can get other files) or the file isn't present in the bucket. I have started searching in the Amazon console interface, but I have been scrolling for hours and have not arrived at "4...." (I am still at "39...") and the file I am looking for is in a folder "C03215".
So, is there a faster way to verify that the file exists in the bucket? Or is there a way to auto-scroll while doing something else (because if I do not scroll, nothing new loads)?
P.S.: I have no permission to list with s3cmd
Regarding accelerating the scrolling in the console
Like you, I have many thousands of objects that take an eternity to scroll through in the console.
I recently discovered though how to jump straight to a specific path/folder in the console that is going to save my mouse finger and my sanity!
This will only work for folders though not the actual leaf objects themselves.
In the URL bar of your browser when viewing a bucket you will see something like:
console.aws.amazon.com/s3/home?region=eu-west-1#&bucket=your-bucket-name&prefix=
If you append your object's path after the prefix and hit Enter, you would assume it should jump to that object, but it does nothing (in Chrome at least).
However, if you append your object's path after the prefix, hit Enter, and then hit refresh (F5), the console will reload at your specified location.
e.g.
console.aws.amazon.com/s3/home?region=eu-west-1#&bucket=your-bucket-name&prefix=development/2015-04/TestEvent/93edfcbg-5e27-42d3-a2f9-3d86a63d27f9/
There was much joy in our office when this was figured out!
The only "faster way" is to have the s3:ListBucket permission on the bucket, because, as you have noticed, S3's response to a GET request is intentionally ambiguous if you don't.
If the object you request does not exist, the error Amazon S3 returns depends on whether you also have the s3:ListBucket permission.
If you have the s3:ListBucket permission on the bucket, Amazon S3 will return an HTTP status code 404 ("no such key") error.
If you don’t have the s3:ListBucket permission, Amazon S3 will return an HTTP status code 403 ("access denied") error.
http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGET.html
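If you can get the bucket owner to grant it, a minimal sketch of the kind of policy that turns those 403s into meaningful answers would look something like this (example-bucket is a placeholder for your bucket name):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::example-bucket"
        },
        {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-bucket/*"
        }
    ]
}
With s3:ListBucket you can also list objects by prefix (for example your "C03215" folder) instead of scrolling.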
Also, there's not a way to accelerate scrolling in the console.